Friday, March 16, 2012

Hanborq Improved Hadoop MapReduce


Major Features:
1. Worker Pool
Instead of spawning a new JVM process for each job/task, the slot/worker processes are started during the initialization phase and kept running constantly.
2. Sort Avoidance
Many aggregation jobs do not need sorting.
---------------------
A Hanborq-optimized Hadoop distribution, with a particular focus on high-performance MapReduce. It is the core part of HDH (Hanborq Distribution with Hadoop for Big Data Engineering).
Here is our presentation: Hanborq Optimizations on Hadoop MapReduce

HDH (Hanborq Distribution with Hadoop)

Hanborq, a start-up team focused on Cloud & Big Data products and businesses, delivers a series of software products for Big Data Engineering, including an optimized Hadoop distribution.
HDH delivers a series of improvements to Hadoop Core, plus Hadoop-based tools and applications, for putting Hadoop to work solving Big Data problems in production. HDH is ideal for enterprises seeking an integrated, fast, simple, and robust Hadoop distribution. In particular, if you think your MapReduce jobs are slow and underperforming, HDH may be your choice.

Hanborq optimized Hadoop

It is an open source distribution that aims to make Hadoop Fast, Simple, and Robust.
- Fast: high performance, fast MapReduce job execution, low latency.
- Simple: easy to use and to develop Big Data applications on Hadoop.
- Robust: make Hadoop more stable.

MapReduce Benchmarks

The testbed: a 5-node cluster (4 slaves), with 8 map slots and 2 reduce slots per node.
1. MapReduce Runtime Environment Improvement
In order to reduce job latency, HDH implements a distributed worker pool, similar to Google Tenzing. The HDH MapReduce framework does not spawn a new JVM process for each job/task; instead it keeps the slot processes running constantly. There are many other improvements in this area as well. A sketch of the reuse idea follows the benchmark results below.
bin/hadoop jar hadoop-examples-0.20.2-hdh3u3.jar sleep -m 32 -r 4 -mt 1 -rt 1
bin/hadoop jar hadoop-examples-0.20.2-hdh3u3.jar sleep -m 96 -r 4 -mt 1 -rt 1
[Figure: HDH MapReduce runtime job/task latency]
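To make the worker-pool idea concrete, here is a minimal, hypothetical Java sketch (not HDH code): the workers are started once and then reused for every task, instead of paying a start-up cost per task. In HDH the slots are separate JVM processes; the sketch uses plain threads, and all class and method names are illustrative.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative only: a pool of long-lived workers created once (analogous to
// starting slot/worker processes at initialization) and reused for every task.
public class WorkerPoolSketch {

    private final ExecutorService slots;   // one long-lived worker per slot

    public WorkerPoolSketch(int slotCount) {
        this.slots = Executors.newFixedThreadPool(slotCount);
    }

    // Submitting a task reuses an already-running worker instead of
    // spawning a new JVM for it.
    public void submitTask(Runnable task) {
        slots.submit(task);
    }

    public void shutdown() throws InterruptedException {
        slots.shutdown();
        slots.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws InterruptedException {
        WorkerPoolSketch pool = new WorkerPoolSketch(8);   // e.g. 8 map slots
        for (int i = 0; i < 32; i++) {
            final int taskId = i;
            pool.submitTask(() -> System.out.println("running task " + taskId));
        }
        pool.shutdown();
    }
}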
2. MapReduce Processing Engine Improvement
Many improvements have been applied to the Hadoop MapReduce processing engine, such as shuffle and sort avoidance.
[Figure: HDH MapReduce processing engine benchmark]
Please refer to the MapReduce Benchmarks page for details.

Features

MapReduce

- Fast job launching: for example, job launch time drops from about 20 seconds to 1 second.
- Low latency: not only in job setup and cleanup, but also in data shuffle, etc.
- High-performance shuffle: low CPU, network, memory, and disk overhead.
- Sort avoidance: some jobs do not need sorting, which otherwise adds a lot of unnecessary system overhead and latency (see the sketch after this list).
... and more, continuously ...
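To illustrate why many aggregation jobs do not need sorted input, here is a small mapper written against the plain Apache Hadoop API (not HDH-specific; class and variable names are illustrative). It hash-aggregates per-key counts in memory and emits the partial sums in cleanup(); the downstream reducer only needs values grouped by key and never relies on keys arriving in sorted order, which is exactly the kind of job that benefits from sort avoidance.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative word-count-style mapper doing in-memory hash aggregation.
public class HashAggregationMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {

    private final Map<String, Long> counts = new HashMap<String, Long>();

    @Override
    protected void map(LongWritable offset, Text line, Context context) {
        for (String word : line.toString().split("\\s+")) {
            if (word.isEmpty()) continue;
            Long current = counts.get(word);
            counts.put(word, current == null ? 1L : current + 1L);
        }
    }

    @Override
    protected void cleanup(Context context)
            throws IOException, InterruptedException {
        // Emit the hash-aggregated partial counts once per map task.
        for (Map.Entry<String, Long> e : counts.entrySet()) {
            context.write(new Text(e.getKey()), new LongWritable(e.getValue()));
        }
    }
}

A reducer for such a job simply sums the values it receives per key, so it only requires grouping, not ordering.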

How to build?

$ cd cloudera/maven-packaging  
$ mvn -Dnot.cdh.release.build=true -Dmaven.test.skip=true -DskipTests=true clean package  
Then use the generated package: build/hadoop-{main-version}-hdh{hdh-version}, for example build/hadoop-0.20.2-hdh3u2

Compatibility

The APIs, configuration, and scripts are all backward-compatible with Apache Hadoop and Cloudera's Hadoop (CDH). Users and developers do not need to learn anything new, apart from the new features.

Innovations and Inspirations

The open source community and our real enterprise businesses are strong sources of our continuous innovation. Google, the great father of MapReduce, GFS, etc., keeps publishing papers and sharing experience that bring us inspiration, such as:
MapReduce: Simplified Data Processing on Large Clusters
MapReduce: A Flexible Data Processing Tool
Tenzing: A SQL Implementation On The MapReduce Framework
Dremel: Interactive Analysis of Web-Scale Datasets
... and more and more ...

Open Source License

All Hanborq-contributed code is licensed under the Apache License, Version 2.0. Other code follows its original license announcement.

Hadoop is a tool that still needs work.

Cloudera wants to tackle Hadoop the way Red Hat tackled Linux: offer support, services, and additional tools around it. I think it may be too early, and that may be why Bisciglia (WibiData, ex-Googler) and Srivas (MapR, ex-Googler) left Cloudera.

Maybe Googlers see the gaps more clearly.


Saturday, October 29, 2011

Snappy and the compressed format


Google snappy: A fast compressor/decompressor

1. Optimized for:
- 64-bit platforms
- x86, little-endian

2. Block compression
The Snappy compressor works in 32KB blocks and does not do matching across blocks, so it will never produce a bitstream with offsets larger than about 32768. However, the decompressor should not rely on this, as it may change in the future.

3. Snappy is an LZ77-type compressor with a fixed, byte-oriented encoding.
[Figure: the format of Snappy-compressed data]
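To make that byte-oriented encoding concrete, here is a small illustrative Java sketch (not part of the Snappy library, and not a real decompressor): it only walks the raw Snappy format, i.e. the varint-encoded uncompressed length followed by literal/copy elements whose type is stored in the low two bits of each tag byte. The class name and the toy input in main() are made up; the byte layout follows the published Snappy format description.

public class SnappyFormatWalker {

    // Walks a raw (non-framed) Snappy byte stream and prints what each
    // element is, without performing the actual decompression copies.
    public static void walk(byte[] data) {
        int pos = 0;

        // 1) Preamble: uncompressed length as a little-endian base-128 varint.
        long uncompressedLen = 0;
        int shift = 0;
        while (true) {
            int b = data[pos++] & 0xFF;
            uncompressedLen |= (long) (b & 0x7F) << shift;
            if ((b & 0x80) == 0) break;
            shift += 7;
        }
        System.out.println("uncompressed length = " + uncompressedLen);

        // 2) Elements: the low two bits of each tag byte give the element type.
        while (pos < data.length) {
            int tag = data[pos++] & 0xFF;
            switch (tag & 0x03) {
                case 0: {   // 00 = literal
                    int len = (tag >>> 2) + 1;
                    if (len > 60) {              // 61..64 => 1..4 extra length bytes
                        int extraBytes = len - 60;
                        len = 0;
                        for (int i = 0; i < extraBytes; i++) {
                            len |= (data[pos++] & 0xFF) << (8 * i);
                        }
                        len += 1;
                    }
                    System.out.println("literal, " + len + " bytes");
                    pos += len;                  // skip the literal bytes themselves
                    break;
                }
                case 1: {   // 01 = copy, 1-byte offset (lengths 4..11, 11-bit offset)
                    int len = ((tag >>> 2) & 0x07) + 4;
                    int offset = ((tag >>> 5) << 8) | (data[pos++] & 0xFF);
                    System.out.println("copy, len=" + len + ", offset=" + offset);
                    break;
                }
                case 2: {   // 10 = copy, 2-byte little-endian offset (lengths 1..64)
                    int len = (tag >>> 2) + 1;
                    int offset = (data[pos] & 0xFF) | ((data[pos + 1] & 0xFF) << 8);
                    pos += 2;
                    System.out.println("copy, len=" + len + ", offset=" + offset);
                    break;
                }
                default: {  // 11 = copy, 4-byte little-endian offset (rare)
                    int len = (tag >>> 2) + 1;
                    pos += 4;                    // skip the offset bytes in this sketch
                    System.out.println("copy, len=" + len);
                    break;
                }
            }
        }
    }

    public static void main(String[] args) {
        // "abc": preamble varint 3, then a single 3-byte literal (tag 0x08).
        byte[] compressed = { 0x03, 0x08, 'a', 'b', 'c' };
        walk(compressed);
    }
}

Because the compressor never matches across its 32KB blocks, the copy offsets reported by such a walk stay below about 32768, which is the property mentioned in point 2.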



Sunday, June 12, 2011

Cassandra Compression and the Performance Evaluation

Even though we have put Cassandra aside in all our products, we would like to share our work here.



Why did we put Cassandra aside in our products? Because:
(1) There are serious flaws in Cassandra's implementation, especially in its local storage engine layer, i.e. the SSTables and indexing.
(2) Combining Bigtable and Dynamo is a serious mistake. Dynamo's hash-ring architecture is an obsolete technology for scaling, and its consistency and replication policies are also unusable for big data storage.

Saturday, July 10, 2010

My comments to "Cassandra at Twitter Today"

Someone said the Twitter blog post "Cassandra at Twitter Today" is a big blow to the reputation of Cassandra.


Here are my comments:

1. Cassandra is very young! In particular, the design and implementation of its local storage and local indexing are immature and not good.

2. Poor read performance is also due to the poor local storage implementation.

3. The local storage, indexing, and persistence structures are not stable. They need to be re-designed and re-implemented. If Twitter moves data to the current Cassandra, they will have to do another migration later for the new local storage, indexing, and persistence structure.

4. There are many good techniques in Cassandra and other open source projects (such as Hadoop, HBase, ...), but they are not ready for production. Understand the details of these techniques and implement them in your own projects/products.

Monday, April 19, 2010

Cassandra Insert Throughput

** 0.5.1

Test Cluster:
DELL 2950 1*CPU Intel Xeon 5310 (4 cores)
5 nodes
1 node: 2GB heap for Cassandra JVM
4 nodes: 4GB heap for Cassandra JVM

Commit log and data stored on the same disks.
25 client threads run on 5 nodes.

Data Model:
Keyspace Name = “Test”
Column Family Name = “ABC”
CompareWith for Column = LongType
Column Name = Timestamp (LongType), Value = 400 bytes binary
Billions of keys, thousands of columns.

Partitioner = dht.RandomPartitioner
MemtableSizeInMB = 64MB
ReplicationFactor = 3

Use Thrift Client Interface
Client.insert(..)
Consistency Level (write) = 1
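For reference, the measured insert path looks roughly like the Java snippet below, written against the Cassandra 0.5/0.6 Thrift interface on port 9160. The host name, row key, and payload are made up, and the generated package/class names and exact method signatures should be double-checked against the specific Cassandra version, since they changed between releases.

import org.apache.cassandra.thrift.Cassandra;        // generated package name may differ in 0.5.x
import org.apache.cassandra.thrift.ColumnPath;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

// Sketch of one insert matching the benchmark's data model: keyspace "Test",
// column family "ABC", LongType column names (timestamps), ~400-byte binary
// values, write consistency level ONE.
public class CassandraInsertSketch {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("cassandra-node-1", 9160);   // hypothetical host
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));

        long timestamp = System.currentTimeMillis();
        byte[] columnName = java.nio.ByteBuffer.allocate(8).putLong(timestamp).array();
        byte[] value = new byte[400];                 // 400 bytes of binary payload

        ColumnPath path = new ColumnPath("ABC");      // column family
        path.setColumn(columnName);                   // column name = timestamp (LongType)

        // insert(keyspace, key, column_path, value, timestamp, consistency_level)
        client.insert("Test", "row-key-0001", path, value, timestamp, ConsistencyLevel.ONE);

        transport.close();
    }
}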

Total inserted 1,076,333,461 columns.
Disk use: 302GB + 283GB + 335GB + 186GB + 276GB = 1,382GB (roughly 400B * ~1 billion columns = 400GB, * 3 replicas = 1,200GB expected).

While inserting: 1000 SSTables on each node; the latency of a query is about 1~3 seconds.
After being quiet for a long time: 10 SSTables (very big files; for example, one 144GB SSTable data file).
The latency of a query is then in milliseconds.

Result: 18,000 columns/second


** 0.6.0
Only 4 nodes.

JVM GC for a big heap.
Memory and GC always turn out to be the bottleneck and the big issue of Java-based infrastructure software!
https://issues.apache.org/jira/browse/CASSANDRA-896 (LinkedBlockingQueue issue, fixed in jdk-6u19)

It seems 0.5.1 performed better.
0.6.0 eats more memory.