
The Apache Hadoop Ecosystem

Apache Hadoop, commonly referred to simply as Hadoop, is an increasingly popular open-source framework for distributed computing. It has had a major impact on the business intelligence, data analytics, and data warehousing space, spawning a new practice referred to as Big Data. Hadoop’s core architecture consists of a storage part known as the Hadoop Distributed File System (HDFS) and a processing part called MapReduce. It provides a reliable, scalable, and cost-effective means for storing and processing large data sets, and it does so like no other software framework before it.

It is cost-effective and scalable because it is designed to run on commodity hardware servers that can be scaled from one to hundreds, or even thousands, of machines, avoiding the cost of expensive supercomputers (which eventually hit their limits anyway). With Hadoop, you can add commodity servers as needed, without much difficulty and at minimal cost.

It is reliable because every module in Hadoop is designed with the fundamental assumption that hardware failures will occur and that these failures should be handled automatically in software by the Hadoop framework.

Beyond the core components, the Hadoop ecosystem has grown to include a number of additional packages that run on top of or alongside the core Hadoop components, including, but not limited to, Apache Hive, Apache Pig, Apache HBase, Apache Phoenix, Apache Spark, Apache ZooKeeper, Impala, Apache Flume, Apache Sqoop, Apache Oozie, Apache Storm, Apache Mahout, Ambari, Apache Drill, Tez, and others. This post serves as a quick reference for the components of the ecosystem, so you can quickly identify what each component is and understand what it does.

Hadoop Component | Component Category | Purpose / Usage
Hadoop | The ecosystem | The core Apache Hadoop framework is composed of the following modules:

  • Hadoop Common
  • Hadoop Distributed File System (HDFS)
  • Hadoop YARN
  • Hadoop MapReduce
Hadoop Common | Software libraries shared across the ecosystem | Hadoop Common contains the libraries and utilities needed by the other Hadoop modules.
Hadoop Distributed File System (HDFS) | Distributed Storage | HDFS is a distributed file system that is the foundational storage component of Hadoop; it sits on top of the native file systems of the commodity servers that Hadoop runs on. It stores data on these commodity servers and provides high aggregate bandwidth and throughput across the cluster.
Hadoop YARN | Resource Management & Scheduling | YARN (which stands for Yet Another Resource Negotiator) is a resource-management platform that manages computing resources in Hadoop clusters and uses them to schedule users’ applications.
Hadoop MapReduce | Distributed Processing | MapReduce is a programming and processing paradigm that pairs with HDFS for large-scale data processing.

It is a distributed computational model composed of a Map() procedure and a Reduce() procedure, and it pushes computation down to each server in the Hadoop cluster.

The Map() procedure performs functions such as filtering and sorting of the data, while the Reduce() procedure performs summary / aggregate operations on it.
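
To make this concrete, here is a minimal word-count sketch using the standard Hadoop MapReduce Java API. The class name and the input/output paths are just placeholders.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map(): emit (word, 1) for every word in the input split
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
          }
        }
      }

      // Reduce(): sum the counts emitted for each word
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }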

Hive | MapReduce Abstraction / Analysis / Querying | Apache Hive is a data warehouse infrastructure that provides an abstraction layer on top of MapReduce. It provides a SQL-like language called HiveQL and transparently converts queries into MapReduce, Apache Tez, or Apache Spark jobs.

It can handle analysis of large datasets and provides functionality for indexing, summarization, querying, and analysis of the data stored in HDFS or other compatible file systems.
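
For illustration, here is a minimal sketch that runs a HiveQL query from Java through the HiveServer2 JDBC driver; the host, credentials, and the web_logs table are assumptions, not part of the original post.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQlExample {
      public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://hive-host:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {
          // HiveQL looks like SQL; Hive translates it into MapReduce, Tez, or Spark jobs behind the scenes
          ResultSet rs = stmt.executeQuery(
              "SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page ORDER BY hits DESC LIMIT 10");
          while (rs.next()) {
            System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
          }
        }
      }
    }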

Pig | MapReduce Abstraction / Analysis / Querying | Pig is a functional programming interface that lets you use a higher-level scripting language (called Pig Latin) to create MapReduce code for Hadoop. Pig is similar to PL/SQL and can be extended using UDFs written in Java, Python, and other languages.

It was originally developed to give analysts an ad hoc way of creating and executing MapReduce jobs on very large data sets.
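
As a rough sketch, the example below embeds a small Pig Latin script in Java via the PigServer API; the input path, field names, and output path are assumptions for illustration only.

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class PigExample {
      public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);   // translate the script into MapReduce jobs
        // Pig Latin: load tab-separated logs, group by page, count hits per page
        pig.registerQuery("logs = LOAD '/data/web_logs' USING PigStorage('\\t') AS (user:chararray, page:chararray);");
        pig.registerQuery("by_page = GROUP logs BY page;");
        pig.registerQuery("hits = FOREACH by_page GENERATE group AS page, COUNT(logs) AS n;");
        pig.store("hits", "/data/page_hits");                // write the result back to HDFS
      }
    }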

Ambari | Monitoring & Management of Clusters | Ambari is a web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters. It includes support for many of the key components of the Hadoop ecosystem, such as HDFS, MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop.

Ambari also provides a user-friendly dashboard for viewing cluster health and MapReduce, Pig, and Hive applications, along with features to diagnose the performance of the various components.

HBase | Storage / database | HBase is a non-relational (NoSQL), distributed, fast, and scalable database that runs on top of HDFS. It is modeled after Google’s Bigtable, providing Bigtable-like capabilities to Hadoop, and is written in Java.

It provides fault-tolerant storage and retrieval of huge quantities of sparse data, such as the top 10 items among 10 billion records, or the 0.1% of records that are non-zero.

HBase features include compression and in-memory operation. HBase tables can serve as the input and output of MapReduce jobs run in Hadoop, and are accessed through APIs.

HBase can be integrated with BI and analytics applications through drivers and through Apache Phoenix’s SQL layer. However, HBase is not an RDBMS replacement.
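
As a small illustration of API access, the sketch below uses the HBase Java client to write and read one cell; the table name, column family, and qualifier are assumptions, and the table is assumed to already exist.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBasePutGet {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("clickstream"))) {
          // Write one sparse cell keyed by row
          Put put = new Put(Bytes.toBytes("user42"));
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("last_page"), Bytes.toBytes("/home"));
          table.put(put);
          // Read it back
          Result result = table.get(new Get(Bytes.toBytes("user42")));
          System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("d"), Bytes.toBytes("last_page"))));
        }
      }
    }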

Hue | Web GUI | Hue is an open-source web interface for end users that supports Apache Hadoop and its ecosystem.

Hue provides a single interface to the most common Apache Hadoop components, with an emphasis on user experience. Its main goal is to let users make the most of Hadoop without worrying about the underlying complexity or having to use a command line.

Sqoop | Data Integration | Sqoop, whose name combines SQL and Hadoop, is an application with a command-line interface that pulls data from relational data sources into Hadoop and pushes data from Hadoop back out to relational data sources.

It supports compression, incremental loads of a single table, and loads driven by a free-form SQL query. You can also save jobs, which can be run repeatedly to perform the incremental loads. Imports can also be used to populate tables in Hive or HBase.

Exports can be used to put data from Hadoop into a relational database.

Several software vendors build Sqoop-based functionality into their database and BI/analytics products.

Flume | Data Integration | Apache Flume is a distributed service for efficiently collecting, aggregating, and moving large amounts of log data (such as web logs or sensor data) into and out of Hadoop (HDFS).

Its features include fault tolerance and a simple, extensible data model that supports streaming data flows and allows for online analysis.

Impala | Analysis / Querying | Cloudera Impala is a massively parallel processing, low-latency SQL query engine that runs on Hadoop and communicates directly with HDFS, bypassing MapReduce.

It lets you run SQL queries on data stored in HDFS and HBase in lower-data-volume scenarios, and it returns results much more quickly than Pig or Hive.

Impala is designed and integrated with Hadoop to use the same file and data formats, metadata, security, and resource-management frameworks used by MapReduce, Apache Hive, Apache Pig, and other Hadoop software, which allows large-scale data processing and interactive queries to be done on the same system.

Impala is great for data analysts and scientists who perform analytics on data stored in Hadoop via SQL or other business intelligence tools.

Avro | Data Integration | Avro is a data interchange protocol/framework that provides data serialization and deserialization in a compact binary format.

Its primary use is in Apache Hadoop, where it can provide both a serialization format for persistent data and a wire format for communication between Hadoop nodes, and from client programs to the Hadoop services.
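
The sketch below shows the round trip with Avro’s generic Java API: parse a schema, write a record to a compact binary container file, and read it back. The schema, field names, and file name are assumptions for illustration.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroRoundTrip {
      public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
          + "{\"name\":\"id\",\"type\":\"long\"},{\"name\":\"source\",\"type\":\"string\"}]}");

        GenericRecord event = new GenericData.Record(schema);
        event.put("id", 1L);
        event.put("source", "sensor-7");

        File file = new File("events.avro");
        try (DataFileWriter<GenericRecord> writer = new DataFileWriter<>(new GenericDatumWriter<>(schema))) {
          writer.create(schema, file);       // compact binary container with the schema embedded
          writer.append(event);
        }
        try (DataFileReader<GenericRecord> reader = new DataFileReader<>(file, new GenericDatumReader<>())) {
          for (GenericRecord r : reader) System.out.println(r);
        }
      }
    }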

Storm | Data Integration | Apache Storm is a distributed computation framework, written predominantly in the Clojure programming language, that moves streaming data into and out of Hadoop.

It allows for the definition of information sources and manipulations to enable batch, distributed processing of streaming data.

Storm’s architecture acts as a data transformation pipeline. At a very high level, the general architecture is similar to a MapReduce job, with the main difference being that data is processed in real time as opposed to in individual batches.
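
As a rough sketch of that pipeline idea, the example below wires a trivial spout and bolt into a topology with Storm’s Java API. All names are illustrative, and package names and signatures vary between older (backtype.storm) and newer (org.apache.storm) releases.

    import java.util.Map;
    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.spout.SpoutOutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.topology.base.BaseRichSpout;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;
    import org.apache.storm.utils.Utils;

    public class UppercaseTopology {

      // A spout: the information source that feeds tuples into the pipeline (hypothetical example data)
      public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
          this.collector = collector;
        }
        public void nextTuple() {
          Utils.sleep(1000);                       // avoid a tight loop in this toy example
          collector.emit(new Values("hello storm"));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("sentence"));
        }
      }

      // A bolt: one transformation step in the pipeline
      public static class UppercaseBolt extends BaseBasicBolt {
        public void execute(Tuple input, BasicOutputCollector collector) {
          collector.emit(new Values(input.getString(0).toUpperCase()));
        }
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
          declarer.declare(new Fields("sentence"));
        }
      }

      public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout());
        builder.setBolt("upper", new UppercaseBolt()).shuffleGrouping("sentences");
        new LocalCluster().submitTopology("demo", new Config(), builder.createTopology());
      }
    }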

Oozie | Workflow Builder | Apache Oozie is a server-based workflow scheduling system, built using Java, for managing Hadoop jobs. It chains together MapReduce jobs and data import/export scripts.

Workflows in Oozie are defined as a collection of control flow and action nodes. Action nodes are the mechanism by which a workflow triggers the execution of a computation/processing task. Oozie provides support for different types of actions including Hadoop MapReduce, HDFS operations, Pig, SSH, and email; and it can be extended to support additional types of actions.
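
As a small illustration, the sketch below submits an already-deployed workflow from Java with the Oozie client API. The server URL, HDFS application path, and property values are assumptions; the workflow.xml is assumed to exist at that path.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class SubmitWorkflow {
      public static void main(String[] args) throws Exception {
        OozieClient client = new OozieClient("http://oozie-host:11000/oozie");
        Properties conf = client.createConfiguration();
        // The workflow definition (control-flow and action nodes) is assumed to be deployed here
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode/user/etl/workflows/daily-load");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "resourcemanager:8032");
        String jobId = client.run(conf);          // submit and start the workflow
        WorkflowJob job = client.getJobInfo(jobId);
        System.out.println(jobId + " -> " + job.getStatus());
      }
    }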

Mahout | Machine Learning | Apache Mahout is a set of libraries for distributed, scalable machine learning, data mining, and mathematical algorithms that run primarily on Hadoop, focused on collaborative filtering, clustering, and classification.

Mahout’s core algorithms for clustering, classification, and batch-based collaborative filtering are implemented on top of Apache Hadoop using the map/reduce paradigm, but Mahout is not restricted to Hadoop-based implementations.

ZooKeeper | Coordination | ZooKeeper is a high-performance, high-availability coordination service for distributed applications. It provides a distributed configuration service, synchronization service, and naming registry for large distributed systems.

ZooKeeper is used by open source enterprise search systems like Solr.
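
As a small illustration of the configuration/naming use case, the sketch below uses the standard ZooKeeper Java client to create a znode and read it back. The connect string, znode path, and value are assumptions for illustration.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkConfigExample {
      public static void main(String[] args) throws Exception {
        // 5-second session timeout, no-op watcher
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 5000, event -> {});
        String path = "/batch-size";
        if (zk.exists(path, false) == null) {
          // Store a small piece of shared configuration under a znode
          zk.create(path, "500".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        byte[] value = zk.getData(path, false, null);
        System.out.println(path + " = " + new String(value));
        zk.close();
      }
    }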

Spark | Data Integration, Processing, Machine Learning | Apache Spark sits directly on top of HDFS, bypassing MapReduce, and is a fast, general compute engine for Hadoop. It is said that Spark could eventually replace MapReduce because it provides solutions for everything MapReduce does, plus a lot more functionality.

It uses a different paradigm from MapReduce (analogous to row-by-row versus set-based processing in SQL) and makes heavier use of in-memory processing, which typically makes it faster than MapReduce. In contrast to MapReduce’s two-stage, disk-based paradigm, Spark’s multi-stage in-memory primitives provide performance up to 100 times faster for certain applications.

Spark is very versatile and provides a simple and expressive programming model that supports a wide range of applications, including ETL, machine learning, stream processing, and graph computation.
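
To make the programming-model contrast with MapReduce concrete, here is the same word count expressed with Spark’s Java API (Spark 2.x-style); the HDFS paths are placeholders.

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SparkWordCount {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("word count");   // master is supplied by spark-submit / YARN
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
          JavaRDD<String> lines = sc.textFile("hdfs:///data/web_logs");
          JavaPairRDD<String, Integer> counts = lines
              .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())  // split lines into words
              .mapToPair(word -> new Tuple2<>(word, 1))                       // (word, 1) pairs
              .reduceByKey(Integer::sum);                                     // sum counts per word
          counts.saveAsTextFile("hdfs:///data/word_counts");
        }
      }
    }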

Spark requires a cluster manager and supports standalone mode (a native Spark cluster), Hadoop YARN, and Apache Mesos; it also requires a distributed storage system such as the Hadoop Distributed File System (HDFS), Cassandra, Amazon S3, or even custom systems, though it does support a pseudo-distributed local mode (for development and testing).

Spark is one of the most active projects in the Apache Software Foundation.

Phoenix | Data Manipulation | Apache Phoenix is a massively parallel, relational database layer on top of NoSQL stores such as Apache HBase.

Phoenix provides a JDBC driver that hides the complexities of the NoSQL store, enabling users to use familiar SQL to create, delete, and alter SQL tables, views, indexes, and sequences; insert, update, and delete rows singly and in bulk; and query data.

Phoenix compiles queries and other statements into native NoSQL store APIs rather than using MapReduce, enabling the building of low-latency applications on top of NoSQL stores.
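
As a minimal illustration of that SQL layer, the sketch below uses the Phoenix JDBC driver from Java. The ZooKeeper quorum host and the metrics table are assumptions for illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PhoenixExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
          stmt.execute("CREATE TABLE IF NOT EXISTS metrics (host VARCHAR PRIMARY KEY, cpu DOUBLE)");
          stmt.executeUpdate("UPSERT INTO metrics VALUES ('node1', 0.73)");  // Phoenix uses UPSERT rather than INSERT
          conn.commit();                                                     // Phoenix connections are not auto-commit by default
          ResultSet rs = stmt.executeQuery("SELECT host, cpu FROM metrics WHERE cpu > 0.5");
          while (rs.next()) System.out.println(rs.getString(1) + " " + rs.getDouble(2));
        }
      }
    }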

Cassandra | Storage / database | Apache Cassandra is an open-source, scalable, multi-master, high-performance, distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.

Cassandra supports clusters spanning multiple datacenters, with asynchronous masterless replication allowing for low latency operations.
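
For illustration, the sketch below talks to Cassandra with CQL through the DataStax Java driver (3.x-style API). The contact point, keyspace, and readings table are assumptions, and the table is assumed to already exist with sensor_id as its partition key.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class CassandraExample {
      public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("cassandra-host").build();
             Session session = cluster.connect("sensors")) {
          // Write and read with CQL, Cassandra's SQL-like query language
          session.execute("INSERT INTO readings (sensor_id, ts, value) VALUES ('s-7', toTimestamp(now()), 21.4)");
          ResultSet rs = session.execute("SELECT sensor_id, value FROM readings WHERE sensor_id = 's-7'");
          for (Row row : rs) System.out.println(row.getString("sensor_id") + " " + row.getDouble("value"));
        }
      }
    }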

Solr | Search | Solr (pronounced “solar”) is an open-source enterprise search platform, written in Java, that runs as a standalone full-text search server.

Its features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL capabilities, and rich-document (e.g., Word, PDF) handling. Solr is designed for scalability and fault tolerance, with distributed search and index replication. Solr is a very popular enterprise search engine.

Solr has APIs and a plugin architecture that makes it customizable using various programming languages.

You have the flexibility to route the data being brought in by Sqoop and Flume directly into Solr for indexing on the fly, but you can also tell Solr to index the data in batches.
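
As a small illustration of indexing and querying through Solr’s Java client (SolrJ), here is a hedged sketch; the Solr URL, collection name, and fields are assumptions for illustration.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrInputDocument;

    public class SolrExample {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient solr = new HttpSolrClient.Builder("http://solr-host:8983/solr/articles").build()) {
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", "doc-1");
          doc.addField("title", "Hadoop ecosystem overview");
          solr.add(doc);
          solr.commit();                                     // make the document searchable
          QueryResponse response = solr.query(new SolrQuery("title:hadoop"));
          for (SolrDocument d : response.getResults()) System.out.println(d.getFieldValue("id"));
        }
      }
    }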

MongoDB | Storage / database | MongoDB (from “humongous”) is an open-source, cross-platform, document-oriented NoSQL database. It uses a JSON-like structure called BSON, with dynamic schemas, which makes the integration of data in certain types of applications easier and faster.

MongoDB is one of the most popular database management systems, and is said to be the most popular among document stores.
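
For illustration, the sketch below inserts and retrieves one BSON document with the MongoDB Java driver; the connection string, database, and collection names are placeholders.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;
    import static com.mongodb.client.model.Filters.eq;

    public class MongoExample {
      public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://mongo-host:27017")) {
          MongoCollection<Document> users = client.getDatabase("app").getCollection("users");
          // Dynamic schema: just insert a document, no table definition required
          users.insertOne(new Document("name", "alice").append("visits", 42));
          Document found = users.find(eq("name", "alice")).first();
          System.out.println(found == null ? "not found" : found.toJson());
        }
      }
    }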

Kafka | Data Integration | Apache Kafka is an open-source message broker project written in Scala. It provides a unified, high-throughput, low-latency platform for handling real-time data feeds. The design is heavily influenced by transaction logs.

Apache Kafka was originally developed by LinkedIn, and was subsequently open sourced in early 2011.
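
A minimal sketch of publishing a message to a topic with Kafka’s Java producer API follows; the broker address and topic name are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KafkaProduceExample {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-host:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
          producer.send(new ProducerRecord<>("page-views", "user42", "/home"));  // (topic, key, value)
          producer.flush();
        }
      }
    }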

Accumulo | Storage / database | Apache Accumulo is a sorted, distributed key/value data store and, like HBase, is based on Google’s BigTable technology. It is written in Java and built on top of Apache Hadoop, Apache ZooKeeper, and Apache Thrift. As of 2015, Accumulo is said to be the third most popular NoSQL wide column store, behind Apache Cassandra and HBase.
Chukwa | Data Integration | Chukwa is a data collection system for managing large distributed systems.
Tez | Processing | Tez is a flexible data-flow programming framework, built on Hadoop YARN, that can process data in both batch and interactive modes. It is being adopted by Hive, Pig, and other frameworks in the Hadoop ecosystem, as well as by other commercial software (e.g., ETL tools), to replace Hadoop MapReduce as the underlying execution engine.
Drill | Processing | Apache Drill is an open-source software framework that supports data-intensive distributed applications for interactive analysis of large-scale datasets, even across multiple data stores, at very high speed.

It is the open-source version of Google’s Dremel system (which Google offers as a service called BigQuery), and it supports a variety of NoSQL databases and file systems, including HBase, MongoDB, HDFS, Amazon S3, Azure Blob Storage, local files, and more.

Apache Sentry | Security | Apache Sentry is a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster.

 

You can get more information about Apache Hadoop here: http://hadoop.apache.org/
