Thursday, January 2, 2014

Getting started with Big Data - Part 3 - Installation and configuration of Apache Bigtop

In earlier blog entries we looked at how to install VirtualBox and then install Ubuntu on top of VirtualBox. In this final part of the series, we will look at how to install Bigtop on top of the Ubuntu Guest OS. From the Apache Bigtop site:

The primary goal of Bigtop is to build a community around the packaging and interoperability testing of Hadoop-related projects. This includes testing at various levels (packaging, platform, runtime, upgrade, etc...) developed by a community with a focus on the system as a whole, rather than individual projects.

Open source software/frameworks work well individually, but it takes some effort/time to integrate them; the main challenge is the interoperability issues between the different frameworks. This is where companies like Cloudera, Hortonworks, MapR and others come into play. They take the different frameworks from the Apache Software Foundation and make sure they play nicely with each other. Not only do they address the interoperability issues, but they also make performance/usability enhancements.

Apache Bigtop takes this effort from an individual company level to a community level. Bigtop can be compared to Fedora, while Cloudera (CDH) / Hortonworks (HDP) / MapR (M3/M5/M7) can be compared to RHEL. Red Hat provides commercial support for RHEL, while Cloudera / Hortonworks / MapR provide commercial support for their own distributions of Hadoop. Also, just as Fedora carries more leading-edge software and a wider variety of it, so does Bigtop. A lot of Apache frameworks (like Mahout / Hama) are included in Bigtop, but not in the commercial distributions like CDH/HDP/M*.

For those who want to dive deep into Big Data, Bigtop makes sense as it includes a lot of additional Big Data frameworks. Also, there are not many restrictions on its usage. More on What is Bigtop, and Why Should You Care?

Here is the official documentation on installing Bigtop. But the documentation is a bit outdated and has some steps missing, so here are the steps in detail.

- Install Java as mentioned here. Make sure that Oracle JDK 6 is installed and not JDK 7, because Bigtop has been tested with JDK 6.
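Once Java is installed, it is worth verifying which JVM is active before proceeding (a quick sanity check; the exact version string will vary):
# Verify that Oracle JDK 6 is the active JVM
java -version
# Expect output along the lines of: java version "1.6.0_xx"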

- Get the key for Bigtop and add it to the list of trusted keys.
wget -O- http://archive.apache.org/dist/bigtop/bigtop-0.7.0/repos/GPG-KEY-bigtop | sudo apt-key add -
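To confirm the key was imported, the trusted keyring can be listed (an optional check):
# List the trusted APT keys and look for the Bigtop entry
apt-key list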

- Add the Bigtop repository to the Guest OS.
sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/bigtop/bigtop-0.7.0/repos/`lsb_release --codename --short`/bigtop.list
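It may help to inspect the downloaded repository definition before updating (an optional check):
# Inspect the repository definition that was just downloaded
cat /etc/apt/sources.list.d/bigtop.list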

- Resynchronize the package index files from their sources
sudo apt-get update 

- Run the below command to get the list of packages in the Bigtop repository. Note that the exact file name under `/var/lib/apt/lists/` depends on the release, codename and architecture.
grep Package /var/lib/apt/lists/bigtop.s3.amazonaws.com_releases_0.7.0_ubuntu_precise_x86%5f64_dists_bigtop_contrib_binary-i386_Packages

- Install bigtop-utils. After the installation of the different frameworks, at any point in time, check the log files in the `/var/log/` folder for any exceptions or errors.
sudo apt-get install bigtop-utils
- During the bigtop-utils installation, enter the password for the MySQL db and select the appropriate Postfix configuration.
Screen 1

Screen 2

- Install hadoop and hue packages and the other packages of interest. Hive also gets installed automatically, because of the way the dependencies have been specified.
sudo apt-get install hadoop\* hue
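Once the installation completes, the installed packages can be listed to see what was pulled in (an optional check):
# List the Hadoop-related packages that were installed
dpkg -l | grep -E 'hadoop|hue|hive'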

- Modify the `/etc/hue/conf/hue.ini` file using sudo.
# Use WebHdfs/HttpFs as the communication mechanism.
# This should be the web service root URL, such as
# http://namenode:50070/webhdfs/v1
webhdfs_url=http://localhost:50070/webhdfs/v1

# Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
hadoop_mapred_home=/usr/lib/hadoop-mapreduce 
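To confirm that the two settings are active (and not commented out), grep the file (an optional check):
# Confirm the settings took effect in hue.ini
grep -E 'webhdfs_url|hadoop_mapred_home' /etc/hue/conf/hue.ini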

- Add the below properties to `/etc/hadoop/conf/hdfs-site.xml` using sudo.
<property>
   <name>dfs.webhdfs.enabled</name>
   <value>true</value>
</property>

<property>
   <name>hadoop.proxyuser.hue.hosts</name>
   <value>*</value>
</property>

<property>
   <name>hadoop.proxyuser.hue.groups</name>
   <value>*</value>
</property>

- Get the IP address of the Guest OS using the ifconfig command and modify `/etc/hosts` using sudo, so that the hostname maps to that IP address as below.
bigdatavm@bigdatavm:~$ cat /etc/hosts
127.0.0.1    localhost
#127.0.1.1    bigdatavm
10.0.2.15    bigdatavm

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
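
The mapping can be verified by pinging the hostname (bigdatavm here, as in the listing above; substitute your own hostname):
# Confirm the hostname now resolves to the VM's IP address
ping -c 1 bigdatavm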
- In the `/etc/default/bigtop-utils` file, set
export JAVA_HOME=/usr/lib/jvm/java-6-oracle/
- Format the NameNode
sudo /etc/init.d/hadoop-hdfs-namenode init

- For pseudo-distributed mode, start the HDFS services as below.
for i in hadoop-hdfs-namenode hadoop-hdfs-datanode ; do sudo service $i start ; done

- From the HDFS web console (http://localhost:50070/dfshealth.jsp) make sure that the number of live nodes is 1.

Screen 3
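
The same check can also be done from the command line (a quick sanity check, run as the hdfs user):
# Report the cluster status; the number of live/available datanodes
# appears in the report summary
sudo -u hdfs hdfs dfsadmin -report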

- Make sure to create the initial directory structure in HDFS before starting any of the other daemons.
sudo /usr/lib/hadoop/libexec/init-hdfs.sh
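The created directory tree can then be verified with a listing (an optional check):
# Verify that the top-level directories (e.g. /tmp, /user, /var) were created
sudo -u hdfs hadoop fs -ls /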

- Restart the OS or start the individual services, so that the changes to the above configuration files come into effect. A sketch of starting the services manually is shown below.
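If you prefer not to reboot, the YARN services can be started the same way as the HDFS services were earlier (the service names below follow the Bigtop package naming and are an assumption; adjust them to the packages actually installed):
# Start the YARN daemons in pseudo-distributed mode
# (service names assumed from the Bigtop package naming)
for i in hadoop-yarn-resourcemanager hadoop-yarn-nodemanager ; do sudo service $i start ; done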

- Ubuntu by default runs at run level 2. Note that the Resource Manager (S20*) and the Node Manager (S20*) have the same startup priority, so their relative start order is not guaranteed; this can be inspected from a terminal as shown below.

Screen 4
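
To inspect the run level and the startup ordering yourself (the number after the 'S' prefix is the start priority):
# Confirm the current run level (Ubuntu defaults to 2)
runlevel
# List the Hadoop init script links for run level 2
ls /etc/rc2.d/ | grep hadoop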
 
- If the NM starts before the RM, the NM crashes with the below exception because of a bug (see /var/log/hadoop-yarn/yarn-yarn-nodemanager-bigdatavm.log), and the NM has to be started again.
Caused by: java.net.ConnectException: Call From bigdatavm/10.0.2.15 to 0.0.0.0:8031 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:780)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:727)
        at org.apache.hadoop.ipc.Client.call(Client.java:1244)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        ... 8 more

Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:526)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:490)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:511)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:606)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:255)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1293)
        at org.apache.hadoop.ipc.Client.call(Client.java:1211)
        ... 9 more 
- Start the node manager manually.
sudo service hadoop-yarn-nodemanager start

- Check the log files (in /var/log) for any errors and also check the following consoles:
Hue UI - http://localhost:8888
RM UI - http://localhost:8088
HDFS UI - http://localhost:50070 
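The three consoles can also be probed from a shell (a rough sketch, assuming curl is installed; a 200 or a redirect code means the service is responding):
# Probe each web console and print the HTTP status code
for url in http://localhost:8888 http://localhost:8088 http://localhost:50070 ; do
    echo -n "$url -> " ; curl -s -o /dev/null -w "%{http_code}\n" "$url"
done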

- Run a MR job which comes with the default installation.
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000 terasort-input

Screen 5
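
The generated data can then be listed and, optionally, sorted with the companion terasort example from the same jar (terasort-output below is an arbitrary output path):
# Inspect the generated input and sort it with the terasort example
sudo -u hdfs hadoop fs -ls terasort-input
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort terasort-input terasort-output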

Hue provides a Web Console for the different Big Data components. In an upcoming blog, we will look into how to create Oozie workflows using Hue and various other components.

Screen 6


References
1) http://blog.cloudera.com/blog/2013/06/apache-bigtop-the-fedora-of-hadoop-built-on-hadoop2/

5 comments:

  1. Excellent tutorial! Will refer to it when people want to use Hue with Bigtop!
    (I noticed that the 'Modify `/etc/hue/conf/hue.ini` file using sudo.' step should not be needed, as the default is localhost even if the ini file does not show it.)

    Reply:
    1. Thanks for the feedback. I don't remember, but I think webhdfs_url was commented out in the hue.ini file.

  2. Have you gotten Hue to work properly outside of the Cloudera environment by chance?

    Reply:
    1. I figured it out for some functions, but Impala, Solr, and RDBMS are not running so far.

  3. Is it possible to run MapReduce jobs in Oozie with hadoop-2.2.0 without Bigtop? I tried it, but it does not work for me.