Showing posts with label MapReduce. Show all posts

Friday, March 16, 2012

Hanborq Improved Hadoop MapReduce


Major Features:
1. Worker Pool
It does not spawn a new JVM process for each job/task; instead, it starts the slot/worker processes at the initialization phase and keeps them running constantly.
2. Sort Avoidance.
Many aggregation jobs do not need sorting.
---------------------
A Hanborq-optimized Hadoop distribution, with especially high MapReduce performance. It is the core part of HDH (Hanborq Distribution with Hadoop for Big Data Engineering).
Here is our presentation: Hanborq Optimizations on Hadoop MapReduce

HDH (Hanborq Distribution with Hadoop)

Hanborq, a start-up team focused on Cloud & BigData products and businesses, delivers a series of software products for Big Data Engineering, including an optimized Hadoop distribution.
HDH delivers a series of improvements to Hadoop Core, along with Hadoop-based tools and applications for putting Hadoop to work solving Big Data problems in production. HDH is ideal for enterprises seeking an integrated, fast, simple, and robust Hadoop distribution. In particular, if you find your MapReduce jobs slow and low-performing, HDH may be your choice.

Hanborq optimized Hadoop

It is an open-source distribution that makes Hadoop Fast, Simple, and Robust.
- Fast: High performance, fast MapReduce job execution, low latency.
- Simple: Easy to use and develop BigData applications on Hadoop.
- Robust: Make Hadoop more stable.

MapReduce Benchmarks

The Testbed: 5 node cluster (4 slaves), 8 map slots and 2 reduce slots per node.
1. MapReduce Runtime Environment Improvement
In order to reduce job latency, HDH implements a Distributed Worker Pool, like Google's Tenzing. The HDH MapReduce framework does not spawn new JVM processes for each job/task, but instead keeps the slot processes running constantly. There are many other improvements in this area as well.
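The worker-pool idea can be sketched in a few lines. This is a hypothetical illustration, not HDH's actual implementation: a fixed set of long-lived workers is started once and fed tasks from a queue, so no per-task startup cost (the JVM spawn in Hadoop's case) is paid.

```python
import queue
import threading

# Hypothetical sketch of a worker pool: start a fixed set of long-lived
# workers at initialization, then feed them tasks, instead of paying
# process-startup cost (a new JVM, in Hadoop's case) for every task.
class WorkerPool:
    def __init__(self, slots):
        self.tasks = queue.Queue()
        self.workers = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(slots)]
        for w in self.workers:
            w.start()  # workers live for the lifetime of the pool

    def _run(self):
        while True:
            task = self.tasks.get()
            if task is None:       # sentinel: shut this worker down
                break
            task()
            self.tasks.task_done()

    def submit(self, task):
        self.tasks.put(task)

    def join(self):
        self.tasks.join()          # wait until all submitted tasks finish

    def shutdown(self):
        for _ in self.workers:
            self.tasks.put(None)
```

Repeated jobs reuse the same workers, which is exactly where the latency savings in the benchmark below come from.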
bin/hadoop jar hadoop-examples-0.20.2-hdh3u3.jar sleep -m 32 -r 4 -mt 1 -rt 1
bin/hadoop jar hadoop-examples-0.20.2-hdh3u3.jar sleep -m 96 -r 4 -mt 1 -rt 1
HDH MapReduce Runtime Job/Task Latency
2. MapReduce Processing Engine Improvement
Many improvements are applied to the Hadoop MapReduce processing engine, such as shuffle and sort avoidance.
HDH MapReduce Processing Engine Benchmark
Please refer to the page MapReduce Benchmarks for detail.

Features

MapReduce

- Fast job launching: for example, job launch time drops from 20 seconds to 1 second.
- Low latency: not only job setup and cleanup, but also data shuffle, etc.
- High performance shuffle: low overhead in CPU, network, memory, disk, etc.
- Sort avoidance: some jobs do not need sorting; forcing a sort adds unnecessary system overhead and long latency.
... and more to come ...

How to build?

$ cd cloudera/maven-packaging  
$ mvn -Dnot.cdh.release.build=true -Dmaven.test.skip=true -DskipTests=true clean package  
Then use this package: build/hadoop-{main-version}-hdh{hdh-version}, for example: build/hadoop-0.20.2-hdh3u2

Compatibility

The APIs, configuration, and scripts are all backward-compatible with Apache Hadoop and Cloudera Hadoop (CDH). Users and developers do not need to learn anything new, apart from the new features.

Innovations and Inspirations

The open source community and our real enterprise businesses are the strong source of our continuous innovation. Google, the great father of MapReduce, GFS, etc., regularly publishes papers and experience reports that bring us inspiration, such as:
MapReduce: Simplified Data Processing on Large Clusters
MapReduce: A Flexible Data Processing Tool
Tenzing: A SQL Implementation On The MapReduce Framework
Dremel: Interactive Analysis of Web-Scale Datasets
... and more and more ...

Open Source License

All Hanborq-offered code is licensed under the Apache License, Version 2.0. Other code follows its original license announcement.

Wednesday, January 6, 2010

Jeff Dean and Sanjay Ghemawat's good advices on MapReduce


I'd like to put a copy here, since this paper[1] matches my opinions so closely on the MapReduce model and the practices of large-dataset management/processing implementations.

In the paper, Jeffrey Dean and Sanjay Ghemawat reply to Stonebraker and DeWitt's misconceptions about MapReduce. In fact, these misconceptions are obvious and easy for us to understand.

It is also a good guide for improving the implementation of Hadoop and other members of the family. I suggest reading it carefully.

Dean and other scientists from Google always bring us clear and reasonable explanations of their technologies and practices. But sometimes, people from other organizations bring us puzzles.

Beyond the five "witchcrafts" that Google exposed in the following papers:
GFS: http://labs.google.com/papers/gfs.html
MapReduce: http://labs.google.com/papers/mapreduce.html
Bigtable: http://labs.google.com/papers/bigtable.html
Chubby: http://labs.google.com/papers/chubby.html
Cluster Management: Google Cluster and WorkQueue (slide 6 of a Google presentation)

the following papers/articles/keynotes are also well worth careful reading:
Jeff Dean Keynote at LADIS09 (Designs, Lessons and Advice from Building Large Distributed Systems): http://www.cs.cornell.edu/projects/ladis2009/talks/dean-keynote-ladis2009.pdf
Jeff Dean Keynotes on WSDM09(Challenges in Building Large-Scale Information Retrieval Systems): http://research.google.com/people/jeff/WSDM09-keynote.pdf
Jeff Dean Stanford-295-talk (Software Engineering Advice from Building Large-Scale Distributed Systems): http://research.google.com/people/jeff/stanford-295-talk.pdf
Jeff Dean "Handling Large Datasets at Google": http://hepix.caspur.it/storage/hep_pdf/2008/Spring/handling-large-datasets-20080507.pdf
Jeff Dean "A Behind the Scenes Tour": http://www.slideshare.net/rawwell/googleabehindthescenestourjeffdean

And the following so-called GFS-II articles:
Sean Quinlan: GFS: Evolution on Fast-forward (http://queue.acm.org/detail.cfm?id=1594206)

Friday, October 2, 2009

The Integration of Analytic DBMS and Hadoop

Recently, two well-known vendors of analytic DBMSs, Vertica and Aster Data, announced their integration with Hadoop. The analytic DBMS and Hadoop each address distinct but complementary problems in managing large data.

Vertica:

Currently it is a light integration.
  • ETL, ELT, data cleansing, data mining, etc.
  • Moving data between Hadoop and Vertica.
  • InputFormat (InputSplit , VerticaRecord, push down relational map operations by parameterizing the database query).
  • OutputFormat (to existing or create a new table).
  • Easy for Hadoop developers to push down Map operations to Vertica databases in parallel by specifying parameterized queries which result in pre-aggregated data for each mapper.
  • Support Hadoop streaming interface.
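The push-down bullet is the most interesting part, and a tiny sketch makes it concrete. This is my own illustration, not Vertica's connector API: each mapper issues a parameterized query so the database returns pre-aggregated data for just that mapper's split. Here `sqlite3` stands in for the Vertica connection, and the `clicks` table and `shard` column are illustrative names.

```python
import sqlite3

# Hypothetical sketch of the push-down idea: the relational "map" work
# (filter + group-by) runs inside the database, parameterized per mapper,
# so each mapper receives pre-aggregated rows instead of raw data.
def preaggregated_for_mapper(conn, shard_id):
    cur = conn.execute(
        "SELECT url, COUNT(*) FROM clicks WHERE shard = ? GROUP BY url",
        (shard_id,))
    return dict(cur.fetchall())
```

Each mapper binds its own `shard_id`, so the aggregation is parallelized across mappers while the heavy lifting stays in the database.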

Typical usages:

(1) Raw Data -> Hadoop (ETL) -> Vertica (for fast ad-hoc query, near realtime)
(2) Vertica -> Hadoop (ETL) -> Vertica (for fast ad-hoc query, near realtime)
(3) Vertica -> Hadoop (sophisticated query for analysis or mining)

We can expect to see tighter integration and higher performance.

References
[1] The Scoop on Hadoop and Vertica: http://databasecolumn.vertica.com/2009/09/the_scoop_on_hadoop_and_vertic.html
[2] Using Vertica as a Structured Data Repository for Apache Hadoop: http://www.vertica.com/MapReduce
[3] Cloudera DBInputFormat interface: http://www.cloudera.com/blog/2009/03/06/database-access-with-hadoop/
[4] Managing Big Data with Hadoop and Vertica: http://www.vertica.com/resourcelogin?type=pdf&item=ManagingBigDatawithHadoopandVertica.pdf

AsterData:

Aster Data already provides in-database MapReduce.

The new Aster-Hadoop Data Connector utilizes Aster's patent-pending SQL-MapReduce capabilities for two-way, high-speed data transfer between Apache Hadoop and Aster Data's massively parallel data warehouse.
  • ETL processing or data mining, and then pull that data into Aster for interactive queries or ad-hoc analytics on massive data scales.
  • The Connector utilizes key new SQL-MapReduce functions to provide ultra-fast, two-way data loading between HDFS (Hadoop Distributed File System) and Aster Data’s MPP Database.
  • Parallel loader.
  • LoadFromHadoop: Parallel data loading from HDFS to Aster nCluster.
  • LoadToHadoop: Parallel data loading from Aster nCluster to HDFS.
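The parallel-loader idea behind LoadFromHadoop/LoadToHadoop can be sketched generically. This is an illustration of the pattern, not Aster's implementation: each worker moves one partition (e.g. one HDFS block range) concurrently, and `load_partition` is a hypothetical stand-in for the real transfer routine.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of a parallel loader: partitions are transferred
# concurrently by a pool of workers, the way LoadFromHadoop/LoadToHadoop
# move data between HDFS and the MPP database in parallel.
def parallel_load(partitions, load_partition, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves partition order in the returned results
        return list(pool.map(load_partition, partitions))
```

The throughput win comes from overlapping many partition transfers; the database side only has to make each partition load transactional.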

Key advantages of Aster’s Hadoop Connector include:
  • High-performance: Fast, parallel data transfer between Hadoop and Aster nCluster.
  • Ease-of-use: Analysts can now seamlessly invoke a SQL command for ultra-simple import of Hadoop-MapReduce jobs, for deeper data analysis. Aster intelligently and automatically parallelizes the load.
  • Data Consistency: Aster Data's data integrity and transactional consistency capabilities treat the data load as a 'transaction', ensuring that the data load or export is always consistent and can be carried out while other queries are running in parallel in Aster.
  • Extensibility: Customers can easily further extend the Connector using SQL-MapReduce, to provide further customization for their specific environment.

The typical usages are similar to Vertica.

References
[1] Aster Data Announces Seamless Connectivity With Hadoop: http://www.nearshorejournal.com/2009/10/aster-data-announces-seamless-connectivity-with-hadoop/ and http://www.asterdata.com/news/091001-Aster-Hadoop-connector.php
[2] DBMS2 - MapReduce tidbits http://www.dbms2.com/2009/10/01/mapreduce-tidbits/#more-983
[3] Aster Data Blog: Aster Data Seamlessly Connects to Hadoop: http://www.asterdata.com/blog/index.php/2009/10/05/aster-data-seamlessly-connects-to-hadoop/

Another integration of an analytic DBMS and Hadoop is the HadoopDB project: http://db.cs.yale.edu/hadoopdb/hadoopdb.html

Monday, August 17, 2009

Hybrid store of row and column! Hybrid query of lookup and MapReduce?

- Hybrid store of row and column!

In our practice, we became aware that a hybrid of row-oriented and column-oriented storage is a realistic choice. I got this inspiration from Bigtable's column-family concept.

Now Vertica 3.5 moves from a pure columnar store to a hybrid. It is called "Column Grouping", the major part of Vertica's enhancement for storing and processing columnar data, called FlexStore. I think FlexStore means "Flexible Store": users can define their column groups flexibly.

Hybrid is the trend. I like Bigtable's model abstraction; it is simple and flexible.
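A toy sketch shows what column grouping buys you. This is my own illustration of the concept, not Vertica's or Bigtable's implementation; the group and column names are invented. Columns that are read together are stored together, column-family style, so a scan of one group never touches the others.

```python
# Hypothetical sketch of a hybrid (column-group) layout: each group holds a
# subset of columns and stores its part of every row, so scans that need only
# one group read only that group's storage.
class ColumnGroupStore:
    def __init__(self, groups):
        # groups: {"group_name": ["column", ...]}
        self.col_to_group = {c: g for g, cols in groups.items() for c in cols}
        self.data = {g: [] for g in groups}   # one storage area per group

    def insert(self, row):
        by_group = {}
        for col, value in row.items():
            by_group.setdefault(self.col_to_group[col], {})[col] = value
        for g, part in by_group.items():
            self.data[g].append(part)         # the row is split across groups

    def scan(self, group):
        # reading one group leaves the other groups' storage untouched
        return self.data[group]
```

With a single column per group this degenerates to a pure column store; with one group holding all columns it is a row store, which is why the hybrid subsumes both.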

- Hybrid query of lookup and MapReduce?

Low-latency lookup and high-latency ad-hoc MapReduce query seem contradictory, and I don't know whether it makes sense to support both in one data system. But sometimes, both seem needed.

Hive is one of the best practices for providing an easy-to-use MapReduce expression tool, or data warehouse. It has no real-time lookup. In fact, it is not easy to melt MapReduce into SQL, as I realized after reading about the DAG abstraction in Hive's paper at VLDB09.

As expected, Dr. Stonebraker's Vertica 3.5 also integrates Hadoop MapReduce now. And HadoopDB is Hive + Hadoop + PostgreSQL. Vertica does not integrate MapReduce into SQL for now, which differs from Greenplum, Aster Data, and HadoopDB.

References:
http://www.vertica.com/company/news/vertica-analytic-database-broadens-reach-with-flexstore
http://www.dbms2.com/2009/08/04/flexstore-and-the-rest-of-vertica-35/
http://www.dbms2.com/2009/08/04/verticas-version-of-mapreduce-integration/
http://db.cs.yale.edu/hadoopdb/hadoopdb.html
http://db.csail.mit.edu/pubs/benchmarks-sigmod09.pdf
http://www.slideshare.net/namit_jain/hive-demo-paper-at-vldb-2009