Apache Hive Installation

Hive is a data warehouse infrastructure built on top of Hadoop that provides tools for easy data summarization, ad-hoc querying and analysis of large datasets stored in Hadoop files. It provides a mechanism to put structure on this data, as well as a simple query language called HiveQL, which is based on SQL and enables users familiar with SQL to query the data. At the same time, the language allows traditional map/reduce programmers to plug in their custom mappers and reducers for more sophisticated analysis that may not be supported by the built-in capabilities of the language.

Installation of Hive is pretty straightforward and easy. Without further chit-chat, let's get down to business!

Prerequisites

Sun Java 6

Hadoop itself runs on Sun Java 5.0.x, but the Hive wiki lists Sun Java 6.0 as a prerequisite, so we will stick to Sun Java 6.0.
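If you are not sure which Java you have installed, a quick check (it should report a 1.6.x version for Java 6):

java -version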

Hadoop (0.17.x – 0.19.x)

You must have Hadoop already up and running (support for 0.20.x is still in progress, so 0.17.x to 0.19.x is preferable)! If you don't have Hadoop installed yet, try and deploy it by going through the following tutorials:

Single Node Cluster Hadoop Installation: http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster)
Multi Node Cluster Hadoop Installation: http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)

I would have written a guide for Hadoop installation myself, but I really find Michael's tutorials very easy for anyone to follow and get going with Hadoop! So if you haven't installed Hadoop yet, that's the place to learn and do it!

Note: For the purposes of this tutorial, we will be referring to a single-node Hadoop installation.
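Before moving on, it is worth confirming that your Hadoop daemons are actually up. A quick sanity check (assuming HADOOP_HOME points to your Hadoop installation, as set up later in this article; jps ships with the Sun JDK):

jps
$HADOOP_HOME/bin/hadoop fs -ls /

On a single-node setup, jps should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, and the fs -ls command should return without errors.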

SVN

SVN, aka Subversion, is an open source version control system. Most of the Apache projects are hosted in SVN, so it's a good idea to have it on your system if it isn't already.

For the current tutorial, you will need it to grab the code from the Hive SVN repository.

Download it from: http://subversion.tigris.org/
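If you are on Ubuntu (as in the Hadoop tutorials above), installing it from the package manager is usually the quickest route; the package name below is an assumption and may differ on other distributions:

sudo apt-get install subversion
svn --version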

Ant

Ant, or Apache Ant, is a Java-based build tool. In the present context, you will need it to build the checked-out Hive code.

Download it from: http://ant.apache.org/
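Again, on Ubuntu you can likely install it via the package manager (package name assumed), and verify the install afterwards:

sudo apt-get install ant
ant -version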

Downloading and Building Hive

Hive is available via SVN at: http://svn.apache.org/repos/asf/hadoop/hive/trunk

We will first check out Hive's code:

svn co http://svn.apache.org/repos/asf/hadoop/hive/trunk hive

This will put the contents of Hive's trunk (Hive's development repository) in your local 'hive' directory.

Now, we will build the downloaded code:

cd hive

ant -Dhadoop.version="<your-hadoop-version>" package

For example

ant -Dhadoop.version="0.19.2" package

Your built code is now in the build/dist directory:

cd build/dist
ls

On running 'ls', you will see the following contents:

README.txt
bin/ (all the shell scripts)
lib/ (required jar files)
conf/ (configuration files)
examples/ (sample input and query files)

The build/dist directory is your Hive installation, and going forward we will call it the Hive Home.

Let us set an environment variable for our Hive Home too:

export HIVE_HOME=<some path>/build/dist

For example

export HIVE_HOME=/data/build/dist

Hadoop Side Changes

Hive uses Hadoop, which means:

1. you must have hadoop in your path, OR
2. export HADOOP_HOME=<hadoop-install-dir>
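For example (assuming Hadoop is installed under /usr/local/hadoop, as in Michael's tutorials; adjust the path to your own installation):

export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin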

In addition, you must create /tmp and /user/hive/warehouse (aka hive.metastore.warehouse.dir) in HDFS and set them to chmod g+w before a table can be created in Hive.

Commands to perform these changes:

$HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
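You can verify the directories and their permissions afterwards (the exact listing format varies a bit between Hadoop versions):

$HADOOP_HOME/bin/hadoop fs -ls /
$HADOOP_HOME/bin/hadoop fs -ls /user/hive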

Running Hive

Now, you are all set to run Hive for yourself! Invoke the command line interface (cli) from the shell:

$HIVE_HOME/bin/hive
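As a quick smoke test, you can try a few HiveQL statements at the hive> prompt. The table name below is just illustrative, and the sample file kv1.txt should be under examples/files in the build, but double-check the path on your machine:

CREATE TABLE pokes (foo INT, bar STRING);
SHOW TABLES;
LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
SELECT count(1) FROM pokes;
DROP TABLE pokes;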

Note: You can also read about Hive Installation at Hive’s wiki

6 thoughts on "Apache Hive Installation"

  1. "1. you must have hadoop in your path"

    Does "your path" refer to the environment PATH, i.e.
    # echo $PATH ?

  2. Hey Daves, sorry, I was engaged with other stuff! I am sure you must have figured your way out by now, but still answering the question: yes, it means the same (however, I haven't tried that). I prefer keeping the Hadoop path in a variable and using it along the way (as in the article).

  3. When running ~/hive/bin/hive, I receive the following message:

    "Unable to determine Hadoop version"
    Hadoop version:

    and then the cursor appears to be the same as root rather than hive>, and it does not execute any Hive command at the CLI. What did I do wrong in configuring or setting up the Hive environment? I did, however, get Hadoop working on all nodes, including the namenode, secondary namenode, jobtracker, datanode and tasktracker.

    Please help, I am frustrated.

    Thanks,

    JB

  4. Hi Jony,

    I went through the hive script and found this message in the following code:

    cli () {
      CLASS=org.apache.hadoop.hive.cli.CliDriver

      # cli specific code
      if [ ! -f ${HIVE_LIB}/hive-cli-*.jar ]; then
        echo "Missing Hive CLI Jar"
        exit 3;
      fi

      if $cygwin; then
        HIVE_LIB=`cygpath -w "$HIVE_LIB"`
      fi

      version=$($HADOOP version | awk '{if (NR == 1) {print $2;}}');

      # Save the regex to a var to workaround quoting incompatabilities
      # between Bash 3.1 and 3.2
      version_re="^([[:digit:]]+)\.([[:digit:]]+)(\.([[:digit:]]+))?.*$"

      if [[ "$version" =~ $version_re ]]; then
        major_ver=${BASH_REMATCH[1]}
        minor_ver=${BASH_REMATCH[2]}
        patch_ver=${BASH_REMATCH[4]}
      else
        echo "Unable to determine Hadoop version information."
        echo "'hadoop version' returned:"
        echo `$HADOOP version`
        exit 6
      fi

      ...
      execHiveCmd $CLASS "$@"
    }

    Thus, it is trying to determine the Hadoop version by invoking Hadoop itself. Can you please verify that you have the Hadoop environment variable set and available to the script?
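    For instance (assuming you set HADOOP_HOME as in the article), something like this from the same shell should print a Hadoop version string:

    echo $HADOOP_HOME
    $HADOOP_HOME/bin/hadoop version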

  5. I am having the same problem, and hadoop is definitely in the path, as it logs the output of the version command (I am using the 0.20.2 download, and the Hive 0.6 download, not source from trunk).

    Unable to determine Hadoop version information.
    'hadoop version' returned:
    classpath=/opt/hbase-install/hbase/hbase-0.20.6.jar:/opt/hbase-install/hbase/hbase-0.20.6-test.jar:/deanconfig/hbase_config/:/opt/hbase-install/hbase/lib/zookeeper-3.2.2.jar Hadoop 0.20.2 Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707 Compiled by chrisdo on Fri Feb 19 08:07:34 UTC 2010

  6. Interesting! I will try the same versions of Hadoop and Hive and see if I can reproduce the issue you are facing.
