Configuring Hadoop Environment Variables and Adding an HBase User

On Unix systems, a common way to manage the Hadoop environment variables is to keep them all in a single file, named for example "hadoopenv", which every account then sources. The contents of the "hadoopenv" file are shown below; note that this file does not live in the HBase user's home directory:

# HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export YARN_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_INSTALL/lib/native"
export JAVA_LIBRARY_PATH=$HADOOP_INSTALL/lib/native
export LD_LIBRARY_PATH=$HADOOP_INSTALL/lib/native:$LD_LIBRARY_PATH
export YARN_EXAMPLES=$HADOOP_INSTALL/share/hadoop/mapreduce
export HADOOP_MAPRED_STOP_TIMEOUT=30
export YARN_STOP_TIMEOUT=30
# HADOOP VARIABLES END

# PIG VARIABLES
export PIG_HOME=/usr/local/pig
export PATH=$PATH:$PIG_HOME/bin
export PIG_CLASSPATH=$PIG_HOME/conf:$HADOOP_INSTALL/etc/hadoop
# PIG VARIABLES END

# HBASE VARIABLES
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin

# HIVE VARIABLES
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

Next, create a new user "hbase" and add it to the "hadoop" group, then edit that user's .bashrc so it sources the "hadoopenv" file:

sudo adduser --ingroup hadoop hbase
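Editing .bashrc by hand works, but the change amounts to a single source line. A minimal sketch, assuming "hadoopenv" is kept at /home/kamal/hadoopenv as in the wrapper script at the end of this section:

# Append a source line to the hbase user's .bashrc (run from a sudoer account)
echo '. /home/kamal/hadoopenv' | sudo tee -a /home/hbase/.bashrc

# Verify that the variables resolve in a fresh login shell
sudo su - hbase -c 'echo $HBASE_HOME'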

Download and unpack the HBase binary tarball:

cd Downloads
wget http://www-us.apache.org/dist/hbase/stable/hbase-1.2.6-bin.tar.gz
tar xvf hbase-1.2.6-bin.tar.gz
sudo mv hbase-1.2.6 /usr/local/hbase
cd /usr/local
sudo chown -R hbase:hadoop hbase
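As a quick sanity check that the unpacked release is usable, the bundled hbase script can report its version. The absolute path is used here because the PATH entries from "hadoopenv" only apply once that file is sourced:

/usr/local/hbase/bin/hbase version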

Configure passwordless SSH login:

sudo su - hbase
ssh-keygen -t rsa -P ""
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
ssh localhost
exit

Next, configure HBase's environment file. First, log in as the hbase user and edit hbase-env.sh:

sudo su - hbase
vi $HBASE_HOME/conf/hbase-env.sh

In hbase-env.sh, update the following parameters:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop
export HBASE_MANAGES_ZK=false
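Because HBASE_MANAGES_ZK=false tells HBase not to manage its own ZooKeeper, the quorum configured in hbase-site.xml below must already be running before HBase starts. A minimal liveness check, assuming ZooKeeper's four-letter-word commands are enabled on the two ports used in this setup:

# A healthy ZooKeeper server answers "imok"
echo ruok | nc localhost 2181
echo ruok | nc localhost 2182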

Then configure hbase-site.xml:

<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/user/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost:2181,localhost:2182</value>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>hbase.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.coprocessor.master.classes</name>
    <value>org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
  <property>
    <name>hbase.coprocessor.regionserver.classes</name>
    <value>org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
</configuration>
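The hbase.rootdir above points into HDFS, so HDFS must be up and the hbase user must be able to write to that path. A sketch of the directory preparation, assuming it is run by the HDFS superuser (typically the account that started the NameNode):

hdfs dfs -mkdir -p /user/hbase
hdfs dfs -chown hbase:hadoop /user/hbase
hdfs dfs -ls /user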

Start HBase:

sudo su - hbase
start-hbase.sh
hbase-daemon.sh start thrift
exit
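To confirm that the daemons actually came up, jps (shipped with the JDK) can be run as the hbase user; in this setup its output should include, roughly, HMaster, HRegionServer, and ThriftServer, each prefixed by a process ID:

sudo su - hbase -c jps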

Verify the installation:

hbase shell
hbase(main):004:0> create 'test2', 'cf1'

If a permission error appears, the user "kamal" needs to be granted privileges. First, log in as the hbase user and issue the grant:

sudo su - hbase
hbase shell
...
hbase(main):001:0> create 'test2','cf'
hbase(main):002:0> grant 'kamal','RWC'
hbase(main):003:0> quit
exit
logout
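To double-check what was granted, the shell's user_permission command lists the ACL entries for a table:

hbase(main):001:0> user_permission 'test2'

The output should show user kamal holding READ, WRITE, and CREATE on test2.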

Now verify that the kamal user can create a table:

kamal@kamal-Lenovo-G505:/usr/local/hbase/conf$ hbase shell
...
hbase(main):001:0> create 'test3','cf'
=> Hbase::Table - test3

Finally, starting and stopping everything can be wrapped in a small script, run_hbase.sh, which sources the shared environment file and runs the HBase daemons as the hbase user:

#!/bin/bash
. /home/kamal/hadoopenv
case $1 in
start)
    su -p - hbase -c "$HBASE_HOME/bin/start-hbase.sh"
    su -p - hbase -c "$HBASE_HOME/bin/hbase-daemon.sh start thrift"
    ;;
stop)
    su -p - hbase -c "$HBASE_HOME/bin/stop-hbase.sh"
    su -p - hbase -c "$HBASE_HOME/bin/hbase-daemon.sh stop thrift"
    ;;
esac

The script is then invoked with sudo:

sudo ./run_hbase.sh start
sudo ./run_hbase.sh stop
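One assumption in the invocation above: the script must be made executable before it can be run directly:

chmod +x run_hbase.sh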