How to Set Up Hadoop 1.2.1 on CentOS/RHEL 6/5
Apache Hadoop is an open-source software framework for storage and large-scale processing of
datasets on clusters of commodity hardware. Hadoop is an Apache top-level project being built and
used by a global community of contributors and users. Hadoop brings the ability to cheaply process
large amounts of data, regardless of its structure.
Server configuration
--------------------
Pre-installation steps:
Install and set up VMware Server or VirtualBox, then create the servers listed below.
OS: CentOS 6.5
Server specifications
This will be a four-node cluster.
Assign more memory to the first node, since it hosts the most resource-hungry roles.
server1.expanor.local 2 CPU 4GB Ram 50GB disk space
server2.expanor.local 1 CPU 2GB Ram 50GB disk space
server3.expanor.local 1 CPU 2GB Ram 50GB disk space
server4.expanor.local 1 CPU 2GB Ram 50GB disk space
VM creation
Create each VM with the following parameters:
- Bridged network
- Enough disk space (more than 40GB)
- 2 GB of RAM
- DVD drive pointed at the CentOS ISO image
Network Configuration
---------------------
Make the following network configuration changes so that all cluster nodes can communicate.
# cat /etc/resolv.conf
search expanor.local
nameserver 192.168.10.110
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=sam.expanor.local
GATEWAY=192.168.10.1
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.200
NETMASK=255.255.255.0
# cat /etc/selinux/config
SELINUX=disabled
# cat /etc/yum/pluginconf.d/fastestmirror.conf
enabled=0
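Setting SELINUX=disabled in /etc/selinux/config only takes effect after a reboot. If you want SELinux out of the way immediately, the standard setenforce utility can switch the running system to permissive mode (an optional extra step, not in the original notes):
# setenforce 0
# getenforce   (should now report Permissive)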
Disable the firewall and restart the network service to make the changes effective:
# chkconfig iptables off
# /etc/init.d/network restart   (or: service network restart)
Set Up Cluster Hosts
If you don't have DNS set up, add the following entries to /etc/hosts:
# cat /etc/hosts
192.168.10.201 hadoop1.expanor.local hadoop1
192.168.10.202 hadoop2.expanor.local hadoop2
192.168.10.203 hadoop3.expanor.local hadoop3
192.168.10.204 hadoop4.expanor.local hadoop4
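Later, once the clones exist, the same /etc/hosts file can be pushed to the other nodes instead of editing it four times; a minimal sketch, assuming root SSH access to each node and the hostnames above:
# for n in hadoop2 hadoop3 hadoop4; do scp /etc/hosts root@$n:/etc/hosts; done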
Set Up SSH
Set up an SSH key for passwordless authentication.
# yum -y install perl openssh-clients
# ssh-keygen   (press Enter at each prompt to accept the defaults and an empty passphrase)
# cd ~/.ssh
# cp id_rsa.pub authorized_keys
Set StrictHostKeyChecking to no in the SSH client configuration so that SSH does not prompt for
host-key confirmation when connecting to a host for the first time.
# vi /etc/ssh/ssh_config
StrictHostKeyChecking no
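To confirm that key-based login now works without any prompt (a quick sanity check; uptime is just an arbitrary command):
# ssh localhost uptime
Because the clones below are made from this base image, every node will carry the same key pair and authorized_keys file, so passwordless SSH will work between all nodes without further copying.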
Shutdown and Clone
Now, shut down the system:
# init 0
Let's create the server nodes that will be members of the cluster.
In VirtualBox, clone the base server using the ‘Linked Clone’ option and name the nodes hadoop1,
hadoop2, hadoop3 and hadoop4.
For the first node (hadoop1), increase the memory allocation (4GB as specified above, or more if
your host allows). Most of the roles will be installed on this node, so it is important that it has
sufficient memory available.
Clone customization
On every node, perform the following operations:
Modify the hostname of the server by changing the following line in the file:
/etc/sysconfig/network
HOSTNAME=hadoop[n].expanor.local
where [n] = 1..4 (up to the number of nodes)
Modify the fixed IP address of the server by changing the following line in the file:
/etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.10.20[n]
where [n] = 1..4 (up to the number of nodes)
Restart the networking service and reboot the server so that the above changes take effect:
# /etc/init.d/network restart
# init 6
Now, we have four running virtual machines with CentOS correctly configured.
===============================================================
1. Install Java
Verify that JAVA is install on your system using java -version on the command prompt or just type
java and see if you get something on return on command not found.
Steps to Install JAVA 7 on your Lunux system. ( Redhat and CentOS)
Download the Java package from Oracle and extract it under /opt:
# cd /var/tmp
# wget -O jdk-7u55-linux-i586.tar.gz 'http://download.oracle.com/otn-pub/java/jdk/7u55-b13/jdk-7u55-linux-i586.tar.gz?AuthParam=1398049773_df113de6ac9a884bbf0b37f61c742aeb'
# tar xzf jdk-7u55-linux-i586.tar.gz -C /opt
# cd /opt/jdk1.7.0_55/
# alternatives --install /usr/bin/java java /opt/jdk1.7.0_55/bin/java 2
# alternatives --config java
Select the option number that corresponds to /opt/jdk1.7.0_55/bin/java.
Verify the Java version:
# java -version
Set up the JAVA_HOME variable:
# export JAVA_HOME=/opt/jdk1.7.0_55
Set up the JRE_HOME variable:
# export JRE_HOME=/opt/jdk1.7.0_55/jre
Set up the PATH variable:
# export PATH=$PATH:/opt/jdk1.7.0_55/bin:/opt/jdk1.7.0_55/jre/bin
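These exports only last for the current shell session. To make them permanent for every user, one option (an illustrative sketch; the file name java.sh is arbitrary) is to drop them into /etc/profile.d:
# cat > /etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/opt/jdk1.7.0_55
export JRE_HOME=/opt/jdk1.7.0_55/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
EOF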
2. Create User Account
Create a dedicated user account for the Hadoop installation:
# useradd hadoop
# passwd hadoop
3. Configure key-based authentication (passwordless login).
# su - hadoop
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
$ exit
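Before continuing, you can confirm that key-based login works for the hadoop user (a quick check, not in the original notes; hostname is just an arbitrary command):
# su - hadoop -c 'ssh localhost hostname'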
Edit ~hadoop/.bash_profile to load the Java environment upon login. Note that export must be
lowercase and there must be no space around = (the session log below shows the "Export: command
not found" error this typo causes); the paths should also match the JDK installed above:
# vi .bash_profile
export JAVA_HOME=/opt/jdk1.7.0_55
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export PATH
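Reload the profile and verify that the variables took effect:
# su - hadoop
$ echo $JAVA_HOME
$ java -version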
4. Download and extract Hadoop.
# mkdir /opt/hadoop; cd /opt/hadoop/
# wget http://apache.mesi.com.ar/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
# tar -xzf hadoop-1.2.1.tar.gz
# mv hadoop-1.2.1 hadoop
# chown -R hadoop /opt/hadoop
# cd /opt/hadoop/hadoop/
5: Configure Hadoop
a. Edit core-site.xml
# vi conf/core-site.xml
# Add the following inside the <configuration> tag
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000/</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
b. Edit hdfs-site.xml
# vi conf/hdfs-site.xml
# Add the following inside the <configuration> tag
<property>
  <name>dfs.data.dir</name>
  <value>/opt/hadoop/hadoop/dfs/name/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/opt/hadoop/hadoop/dfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
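The dfs.name.dir directory is created by the format step below, and dfs.data.dir by the DataNode on first start, but it does no harm to pre-create them with the right ownership (an optional step, not in the original walkthrough):
# mkdir -p /opt/hadoop/hadoop/dfs/name/data
# chown -R hadoop /opt/hadoop/hadoop/dfs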
c. Edit mapred-site.xml
# vi conf/mapred-site.xml
# Add the following inside the <configuration> tag
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
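A malformed XML edit is a common cause of startup failures. If the libxml2 tools are installed (they usually are on CentOS), xmllint can check that each file is well-formed; it prints nothing when a file is fine:
# xmllint --noout conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml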
d. Edit hadoop-env.sh
# vim conf/hadoop-env.sh
export JAVA_HOME=/opt/jdk1.7.0_55
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Next, format the NameNode:
# su - hadoop
$ cd /opt/hadoop/hadoop
$ bin/hadoop namenode -format
6: Start Hadoop Services
$ bin/start-all.sh
7: Test and Access Hadoop Services
Use the jps command to check whether all services started correctly.
$ jps
or
$ $JAVA_HOME/bin/jps
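Note that jps ships with the JDK, not the JRE, so it will be missing on a JRE-only install (exactly what happens in the session log below). In that case either install the full JDK or fall back to ps (a generic check, not from the original article):
$ ps -ef | grep -i '[j]ava'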
Web access URLs for the services (replace the hostname with your own server's FQDN):
http://sama.expanor.local:50030/ for the JobTracker
http://sama.expanor.local:50070/ for the NameNode
http://sama.expanor.local:50060/ for the TaskTracker
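From a shell you can confirm the web UIs are answering before opening a browser (a quick check with curl, which is in the CentOS base install; head -1 shows just the HTTP status line):
$ curl -sI http://sama.expanor.local:50070/ | head -1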
8: Stop Hadoop Services
$ bin/stop-all.sh
Sources:
http://en.wikipedia.org/wiki/Apache_Hadoop
http://tecadmin.net/steps-to-install-hadoop-on-centosrhel-6/
https://blog.cloudera.com/blog/2014/01/how-to-create-a-simple-hadoop-cluster-with-virtualbox/
http://solutionsatexperts.com/hadoop-installation-steps-on-centos-6/
http://hortonworks.com/blog/set-up-apache-hadoop-in-minutes-with-rpms/
Not used:
http://icanhadoop.blogspot.com/2012/09/configuring-hadoop-is-very-if-you-just.html
http://gbif.blogspot.com/2011/01/setting-up-hadoop-cluster-part-1-manual.html
https://blog.codecentric.de/en/2012/12/tutorial-installing-a-apache-hadoop-single-node-cluster-with-hortonworks-data-platform/
================================
Detailed session log
================================
[root@sama ~]# ls -l jre-7u55-linux-i586.rpm\?AuthParam\=1400640097_944834eac90eb39afbab6dec970e6473\&GroupName\=JSC\&FilePath\=%2FESD6%2FJSCDL%2Fjdk%2F7u55-b13%2Fjre-7u55-linux-i586.rpm\&File\=jre-7u55-linux-i586.rpm\&BHost\=javadl.sun.com
-rw-r--r--. 1 root root 33040762 Apr 22 11:44 jre-7u55-linux-i586.rpm?AuthParam=1400640097_944834eac90eb39afbab6dec970e6473&GroupName=JSC&FilePath=%2FESD6%2FJSCDL%2Fjdk%2F7u55-b13%2Fjre-7u55-linux-i586.rpm&File=jre-7u55-linux-i586.rpm&BHost=javadl.sun.com
[root@sama ~]# mv jre-7u55-linux-i586.rpm\?AuthParam\=1400640097_944834eac90eb39afbab6dec970e6473\&GroupName\=JSC\&FilePath\=%2FESD6%2FJSCDL%2Fjdk%2F7u55-b13%2Fjre-7u55-linux-i586.rpm\&File\=jre-7u55-linux-i586.rpm\&BHost\=javadl.sun.com jre-7u55-linux-i586.rpm
[root@sama ~]# file jre-7u55-linux-i586.rpm
jre-7u55-linux-i586.rpm: RPM v3.0 bin i386/x86_64
[root@sama ~]# rpm -ql jre-7u55-linux-i586.rpm | more
package jre-7u55-linux-i586.rpm is not installed
[root@sama ~]# rpm -ivh jre-7u55-linux-i586.rpm
Preparing... ########################################### [100%]
1:jre ########################################### [100%]
Unpacking JAR files...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
jfxrt.jar...
plugin.jar...
javaws.jar...
deploy.jar...
[root@sama ~]# which java
/usr/bin/java
[root@sama ~]# rpm -qf /usr/bin/java
file /usr/bin/java is not owned by any package
[root@sama ~]# which /usr/bin/java
/usr/bin/java
[root@sama ~]# java -version
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.7) (rhel-1.39.1.9.7.el6-i386)
OpenJDK Client VM (build 19.0-b09, mixed mode)
[root@sama ~]# echo $JAVA_HOME
[root@sama ~]# cd /usr/local/
[root@sama local]# ls
bin etc games include lib libexec sbin share src
[root@sama local]# cd bin/
[root@sama bin]# ls
noip2
[root@sama bin]# cd ..
[root@sama local]# pwd
/usr/local
[root@sama local]# rpm -qa | grpe -i java
-bash: grpe: command not found
^C[root@sama local]# rpm -qa | grep -i java
tzdata-java-2011g-1.el6.noarch
java-1.6.0-openjdk-1.6.0.0-1.39.1.9.7.el6.i686
[root@sama local]# rpm -qf java-1.6.0-openjdk-1.6.0.0-1.39.1.9.7.el6.i686 | more
error: file /usr/local/java-1.6.0-openjdk-1.6.0.0-1.39.1.9.7.el6.i686: No such file or directory
[root@sama local]# cd /usr/local/
[root@sama local]# la
-bash: la: command not found
[root@sama local]# ls
bin etc games include lib libexec sbin share src
[root@sama local]# cd bin
[root@sama bin]# ls
noip2
[root@sama bin]# cd ../etc
[root@sama etc]# ls
NO-IPxCB4Bc
[root@sama etc]# cd ..
[root@sama local]# pwd
/usr/local
[root@sama local]# cd /usr/
[root@sama usr]# cd java
[root@sama java]# ls
default jre1.7.0_55 latest
[root@sama java]# pwd
/usr/java
[root@sama java]# pwd
/usr/java
[root@sama java]# ls -ltr
total 4
drwxr-xr-x. 6 root root 4096 May 20 22:52 jre1.7.0_55
lrwxrwxrwx. 1 root root 21 May 20 22:52 latest -> /usr/java/jre1.7.0_55
lrwxrwxrwx. 1 root root 16 May 20 22:52 default -> /usr/java/latest
[root@sama java]# java
Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
where options include:
-d32 use a 32-bit data model if available
-d64 use a 64-bit data model if available
-client to select the "client" VM
-server to select the "server" VM
-hotspot is a synonym for the "client" VM [deprecated]
The default VM is client.
-cp <class search path of directories and zip/jar files>
-classpath <class search path of directories and zip/jar files>
A : separated list of directories, JAR archives,
and ZIP archives to search for class files.
-D<name>=<value>
set a system property
-verbose[:class|gc|jni]
enable verbose output
-version print product version and exit
-version:<value>
require the specified version to run
-showversion print product version and continue
-jre-restrict-search | -jre-no-restrict-search
include/exclude user private JREs in the version search
-? -help print this help message
-X print help on non-standard options
-ea[:<packagename>...|:<classname>]
-enableassertions[:<packagename>...|:<classname>]
enable assertions with specified granularity
-da[:<packagename>...|:<classname>]
-disableassertions[:<packagename>...|:<classname>]
disable assertions with specified granularity
-esa | -enablesystemassertions
enable system assertions
-dsa | -disablesystemassertions
disable system assertions
-agentlib:<libname>[=<options>]
load native agent library <libname>, e.g. -agentlib:hprof
see also, -agentlib:jdwp=help and -agentlib:hprof=help
-agentpath:<pathname>[=<options>]
load native agent library by full pathname
-javaagent:<jarpath>[=<options>]
load Java programming language agent, see java.lang.instrument
-splash:<imagepath>
show splash screen with specified image
See http://java.sun.com/javase/reference for more details.
[root@sama java]# export JAVA_HOME=/usr/java
[root@sama java]# pwd
/usr/java
[root@sama java]# ls
default jre1.7.0_55 latest
[root@sama java]# cd latest
[root@sama latest]# ls
bin man THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT plugin THIRDPARTYLICENSEREADME.txt
lib README Welcome.html
LICENSE release
[root@sama latest]# cd bin
[root@sama bin]# ls
ControlPanel java_vm jcontrol orbd policytool rmiregistry tnameserv
java javaws keytool pack200 rmid servertool unpack200
[root@sama bin]# pwd
/usr/java/latest/bin
[root@sama bin]# export PATH=$PATH:/usr/java/latest/bin
[root@sama bin]# useradd hadoop
[root@sama bin]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
[root@sama bin]# su - hadoop
[hadoop@sama ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
d4:0d:37:5d:55:73:47:5d:92:5e:64:38:01:34:01:16 hadoop@sama.expanor.local
The key's randomart image is:
+--[ RSA 2048]----+
| E+B+oO/|
| o + o=o*|
| . . .. o |
| . . |
| S |
| |
| |
| |
| |
+-----------------+
[hadoop@sama ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys^C
[hadoop@sama ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@sama ~]$ chmod 0600 ~/.ssh/authorized_keys
[hadoop@sama ~]$ exit
logout
[root@sama bin]# ssh hadoop@localhost
####################################################################
####################################################################
####################################################################
##### #####
##### #####
##### WARNING** This computer system is Expanor, LLC #####
##### property, and is to be used only by authorized #####
##### users. Misuse of this computer system is a #####
##### violation of Federal law. All users of this #####
##### system, whether authorized or not, are subject #####
##### to monitoring by system personnel and by law #####
##### enforcement officials. Anyone using this system #####
##### expressly consents to such monitoring. Evidence #####
##### of criminal activity or other misconduct may be #####
##### provided to law enforcement officials. #####
##### Electronic messages(e-mail) on this system are #####
##### Expanor, LLC property. The Expanor may access #####
##### these messages whenever such access serves a #####
##### legitimate purpose. #####
##### #####
####################################################################
####################################################################
####################################################################
hadoop@localhost's password:
[root@sama bin]# su - hadoop
[hadoop@sama ~]$ pwd
/home/hadoop
[hadoop@sama ~]$ ls
[hadoop@sama ~]$ cd .ssh
[hadoop@sama .ssh]$ ls -ltr
total 12
-rw-------. 1 hadoop hadoop 1675 May 20 23:02 id_rsa
-rw-r--r--. 1 hadoop hadoop 407 May 20 23:02 id_rsa.pub
-rw-------. 1 hadoop hadoop 407 May 20 23:02 authorized_keys
[hadoop@sama .ssh]$ vi vi .bash_profile ^C
[hadoop@sama .ssh]$ cd ..
[hadoop@sama ~]$ vi .bash_profile
[hadoop@sama ~]$ cd mkdir /opt/hadoop; cd /opt/hadoop/^C
[hadoop@sama ~]$ mkdir /opt/hadoop; cd /opt/hadoop/
mkdir: cannot create directory `/opt/hadoop': Permission denied
-bash: cd: /opt/hadoop/: No such file or directory
[hadoop@sama ~]$ logout
[root@sama bin]# mkdir /opt/hadoop; cd /opt/hadoop/
[root@sama hadoop]# wget http://apache.mesi.com.ar/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
--2014-05-20 23:05:56-- http://apache.mesi.com.ar/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
Resolving apache.mesi.com.ar... 64.95.245.79
Connecting to apache.mesi.com.ar|64.95.245.79|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 63851630 (61M) [application/x-gzip]
Saving to: “hadoop-1.2.1.tar.gz”
100%[======================================>] 63,851,630 2.16M/s in 29s
2014-05-20 23:06:25 (2.09 MB/s) - “hadoop-1.2.1.tar.gz” saved [63851630/63851630]
[root@sama hadoop]# cat ~hadoop/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
Export JAVA_HOME=/usr/java/latest/
export PATH=$PATH:/usr/java/latest/bin
[root@sama hadoop]# tar -xzf hadoop-1.2.1.tar.gz
[root@sama hadoop]# mv hadoop-1.2.1 hadoop
[root@sama hadoop]# chown -R hadoop /opt/hadoop
[root@sama hadoop]# cd /opt/hadoop/hadoop/
[root@sama hadoop]# vi conf/core-site.xml
[root@sama hadoop]# cp -i conf/core-site.xml conf/core-site.xml.bk
[root@sama hadoop]# vi conf/core-site.xml
[root@sama hadoop]# cp -p conf/hdfs-site.xml conf/hdfs-site.xml.bk
[root@sama hadoop]# vi conf/hdfs-site.xml
[root@sama hadoop]# cp -p conf/mapred-site.xml conf/mapred-site.xml.bk
[root@sama hadoop]# vi conf/mapred-site.xml
[root@sama hadoop]# cp -p conf/hadoop-env.sh conf/hadoop-env.sh.bk
[root@sama hadoop]# vi conf/hadoop-env.sh
[root@sama hadoop]# su - hadoop
-bash: Export: command not found
[hadoop@sama ~]$ vi .bash_profile
[hadoop@sama ~]$ logout
[root@sama hadoop]# su - hadoop
[hadoop@sama ~]$ cd /opt/hadoop/hadoop
[hadoop@sama hadoop]$ bin/hadoop namenode -format
14/05/20 23:14:27 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = sama.expanor.local/192.168.10.110
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_55
************************************************************/
14/05/20 23:14:28 INFO util.GSet: Computing capacity for map BlocksMap
14/05/20 23:14:28 INFO util.GSet: VM type = 32-bit
14/05/20 23:14:28 INFO util.GSet: 2.0% max memory = 1013645312
14/05/20 23:14:28 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/05/20 23:14:28 INFO util.GSet: recommended=4194304, actual=4194304
14/05/20 23:14:28 INFO namenode.FSNamesystem: fsOwner=hadoop
14/05/20 23:14:29 INFO namenode.FSNamesystem: supergroup=supergroup
14/05/20 23:14:29 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/05/20 23:14:29 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/05/20 23:14:29 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/05/20 23:14:29 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/05/20 23:14:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/05/20 23:14:29 INFO common.Storage: Image file /opt/hadoop/hadoop/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/05/20 23:14:29 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/hadoop/hadoop/dfs/name/current/edits
14/05/20 23:14:29 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/hadoop/hadoop/dfs/name/current/edits
14/05/20 23:14:30 INFO common.Storage: Storage directory /opt/hadoop/hadoop/dfs/name has been successfully formatted.
14/05/20 23:14:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sama.expanor.local/192.168.10.110
************************************************************/
[hadoop@sama hadoop]$ pwd
/opt/hadoop/hadoop
[hadoop@sama hadoop]$ bin/start-all.sh
starting namenode, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-namenode-sama.expanor.local.out
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is c4:dd:1b:00:b0:91:28:b4:83:14:0d:55:be:8f:4f:0a.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost:
localhost: ####################################################################
localhost: ####################################################################
localhost: ####################################################################
localhost: ##### #####
localhost: ##### #####
localhost: ##### WARNING** This computer system is Expanor, LLC #####
localhost: ##### property, and is to be used only by authorized #####
localhost: ##### users. Misuse of this computer system is a #####
localhost: ##### violation of Federal law. All users of this #####
localhost: ##### system, whether authorized or not, are subject #####
localhost: ##### to monitoring by system personnel and by law #####
localhost: ##### enforcement officials. Anyone using this system #####
localhost: ##### expressly consents to such monitoring. Evidence #####
localhost: ##### of criminal activity or other misconduct may be #####
localhost: ##### provided to law enforcement officials. #####
localhost: ##### Electronic messages(e-mail) on this system are #####
localhost: ##### Expanor, LLC property. The Expanor may access #####
localhost: ##### these messages whenever such access serves a #####
localhost: ##### legitimate purpose. #####
localhost: ##### #####
localhost: ####################################################################
localhost: ####################################################################
localhost: ####################################################################
localhost: starting datanode, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-datanode-sama.expanor.local.out
localhost:
localhost: ####################################################################
localhost: ####################################################################
localhost: ####################################################################
localhost: ##### #####
localhost: ##### #####
localhost: ##### WARNING** This computer system is Expanor, LLC #####
localhost: ##### property, and is to be used only by authorized #####
localhost: ##### users. Misuse of this computer system is a #####
localhost: ##### violation of Federal law. All users of this #####
localhost: ##### system, whether authorized or not, are subject #####
localhost: ##### to monitoring by system personnel and by law #####
localhost: ##### enforcement officials. Anyone using this system #####
localhost: ##### expressly consents to such monitoring. Evidence #####
localhost: ##### of criminal activity or other misconduct may be #####
localhost: ##### provided to law enforcement officials. #####
localhost: ##### Electronic messages(e-mail) on this system are #####
localhost: ##### Expanor, LLC property. The Expanor may access #####
localhost: ##### these messages whenever such access serves a #####
localhost: ##### legitimate purpose. #####
localhost: ##### #####
localhost: ####################################################################
localhost: ####################################################################
localhost: ####################################################################
localhost: starting secondarynamenode, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-sama.expanor.local.out
starting jobtracker, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-sama.expanor.local.out
localhost:
localhost: ####################################################################
localhost: ####################################################################
localhost: ####################################################################
localhost: ##### #####
localhost: ##### #####
localhost: ##### WARNING** This computer system is Expanor, LLC #####
localhost: ##### property, and is to be used only by authorized #####
localhost: ##### users. Misuse of this computer system is a #####
localhost: ##### violation of Federal law. All users of this #####
localhost: ##### system, whether authorized or not, are subject #####
localhost: ##### to monitoring by system personnel and by law #####
localhost: ##### enforcement officials. Anyone using this system #####
localhost: ##### expressly consents to such monitoring. Evidence #####
localhost: ##### of criminal activity or other misconduct may be #####
localhost: ##### provided to law enforcement officials. #####
localhost: ##### Electronic messages(e-mail) on this system are #####
localhost: ##### Expanor, LLC property. The Expanor may access #####
localhost: ##### these messages whenever such access serves a #####
localhost: ##### legitimate purpose. #####
localhost: ##### #####
localhost: ####################################################################
localhost: ####################################################################
localhost: ####################################################################
localhost: starting tasktracker, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-sama.expanor.local.out
[hadoop@sama hadoop]$ jps
-bash: jps: command not found
[hadoop@sama hadoop]$ cat /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-sama.expanor.local.out
ulimit -a for user hadoop
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16061
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[hadoop@sama hadoop]$ $JAVA_HOME/bin/jps
-bash: /usr/java/latest//bin/jps: No such file or directory
[hadoop@sama hadoop]$
http://sama.expanor.local:50030/
http://sama.expanor.local:50030/jobtracker.jsp
http://sama.expanor.local:50070/dfshealth.jsp
http://sama.expanor.local:50060/tasktracker.jsp