Sunday, September 27, 2009

Tips on the Hadoop Runtime Environment


To set up a cluster for running Hadoop/HBase/Hive, etc., you should consider not only the configuration and tuning of these open-source programs themselves, but also the system hardware and software, plus some useful utilities, in order to improve performance and ease system maintenance.

1. Cluster Facilities and Hardware [1]

(1) Data center:

Usually, we run Hadoop/HBase/Hive in a single data center.

(2) Servers:

Clusters are often either capacity bound or CPU bound.

The 1U or 2U configuration is usually used.

The storage capacity of each node should not be too dense (<= 4 TB per node is recommended).

Commodity server: 2x4 core CPU, 16 GB RAM, 4x1TB SATA, 2x1 GE NIC

Use ECC RAM and cheap hard drives: 7200 RPM SATA.

Start with standard 64-bit box for masters and workers.

(3) Network:

Gigabit Ethernet, 2 level tree, 5:1 oversubscription to core

May want redundancy at top of rack and core

Usually, for a small cluster, all nodes are under a single GE switch.

(4) RAID configuration: RAID0

If each machine has two or more disks, RAID0 can provide better disk throughput than other RAID levels (though JBOD may be worth comparing). RAID's redundancy is unnecessary here: HDFS's multiple data replicas already tolerate failure and guarantee data safety.
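As an alternative to RAID0, HDFS itself can stripe block writes round-robin across independent disks (JBOD) when each disk's mount point is listed in dfs.data.dir. A minimal sketch of hdfs-site.xml, assuming two data disks mounted at the hypothetical paths /data1 and /data2:

```xml
<!-- hdfs-site.xml: the DataNode round-robins block storage
     across every directory listed here, one per physical disk -->
<property>
  <name>dfs.data.dir</name>
  <value>/data1/hdfs/data,/data2/hdfs/data</value>
</property>
```

With this layout, losing one disk costs only the blocks on that disk, which HDFS re-replicates from other nodes; with RAID0 a single disk failure takes down the whole array.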

2. System Software

(1) Linux:

RHEL 5+ or CentOS 5+ is recommended. We currently use CentOS 5.3 x86_64.

(2) Local File System:

Ext3 is ok.

We usually configure a separate disk partition for Hadoop's local data: create a dedicated local file system and mount it for Hadoop's use.

Mount with noatime and nodiratime for a performance improvement. By default, Linux updates the atime of files and directories on every access, which is unnecessary in most cases. [2]

-- Edit /etc/fstab:

e.g. /dev/VolGroup00/LogVol02 /data ext3 defaults,noatime,nodiratime 1 2

-- remount:

mount -o remount /data

(3) Swappiness configuration: [3]

Since version 2.6, the Linux kernel memory-management subsystem has exposed a tunable variable called "swappiness". A high swappiness value makes the kernel more willing to page out application memory in favour of another application or even the file-system cache. The default value is 60 (see mm/vmscan.c).

If the machine starts swapping, you will see weird behavior and very slow GC runs, and HBase RegionServers will likely be killed off as ZooKeeper times out and assumes the RegionServer is dead. We suggest setting vm.swappiness to 0 or another low number (e.g. 10) and observing the state of swap.

-- Edit /etc/sysctl.conf : vm.swappiness=0

-- To check the current value on a running system: sysctl vm.swappiness
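Editing /etc/sysctl.conf alone only takes effect at the next boot. A short sketch of applying the change immediately as well (both commands are standard sysctl usage and require root):

```shell
# set swappiness at runtime, then re-read /etc/sysctl.conf
# so the change is consistent with what the next reboot will load
sysctl -w vm.swappiness=0
sysctl -p
```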

(4) Linux default file handle limit: [4]

Currently HBase is a file-handle glutton. To raise the users' file-handle limit, edit /etc/security/limits.conf on all nodes:

* - nofile 32768
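Changes to limits.conf only take effect on a fresh login session. A quick check, assuming you log in again as the user that runs the Hadoop daemons:

```shell
# print the per-process open-file limit for the current session;
# once limits.conf takes effect, this should report 32768
ulimit -n
```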

(5) Java: JRE/JDK1.6 latest and GC options [5]

For machines with 4-16 cores, Hadoop/HBase and other Java applications should use the GC option "-XX:+UseConcMarkSweepGC".

For machines with 2 cores, use "-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode" instead.
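A sketch of how these flags might be wired in, using the standard HBASE_OPTS hook in conf/hbase-env.sh (hadoop-env.sh has an analogous HADOOP_OPTS variable):

```shell
# conf/hbase-env.sh: enable the CMS collector on a 4-16 core machine
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"

# on a 2-core machine, use incremental CMS instead:
# export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
```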

(6) Apache Ant 1.7.1 [6]

(7) Useful Linux utilities:

top, sar, iostat, iftop, vmstat, nfsstat, strace, dmesg, and friends

iostat in particular is very useful for disk I/O analysis.

(8) Useful java utilities:

jps, jstack, jconsole

(9) Compression native library: Gzip and LZO [7]
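A sketch of registering the codecs in core-site.xml; the LzoCodec class name below is the one used by the hadoop-lzo library, which must be installed (including its native library) on every node:

```xml
<!-- core-site.xml: make the Gzip and LZO codecs available to
     MapReduce and HBase (LzoCodec assumes hadoop-lzo is installed) -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec</value>
</property>
```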

(10) Ganglia:

To integrate metrics of Hadoop, HBase, Hive, applications, and Linux system.
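A minimal hadoop-metrics.properties sketch for pushing DFS and JVM metrics to Ganglia's gmond; the host name and port here are assumptions for illustration (8649 is gmond's usual default):

```properties
# conf/hadoop-metrics.properties: send dfs and jvm metrics to gmond
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
dfs.period=10
dfs.servers=gmond-host.example.com:8649

jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
jvm.period=10
jvm.servers=gmond-host.example.com:8649
```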


[1] Hadoop and Cloudera: Managing Petabytes with Open Source, Jeff Hammerbacher, Aug. 21.

[2] Set noatime of local file system:

[3] Linux Swappiness:

[4] HBase FAQ:

[5] Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning,

[6] Apache Ant:



  1. The one surprise to me is recommending RAID over JBOD. I'd heard the opposite, e.g.

  2. Thanks Dave. It seems JBOD gives better performance; we should run a test. The post from Runping Qi at Yahoo is interesting.
