hue-3.9-cdh-5.7.0 Installation

Building Hue

Install the dependency packages

[root@hadoop etc]# yum -y install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel

Install Hue

software> wget http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.7.0.tar.gz

software> tar zxfv hue-3.9.0-cdh5.7.0.tar.gz

software> cd hue-3.9.0-cdh5.7.0

hue-3.9.0-cdh5.7.0> make apps

Go into the apps directory to confirm that Hue compiled, then configure the environment variables:

HUE_HOME=/home/hadoop/app/hue-3.9.0-cdh5.7.0
PATH=$HUE_HOME/build/env/bin:$PATH
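To make these stick across logins, they can be appended to the hadoop user's `~/.bash_profile`; a minimal sketch, assuming the install path shown above (`build/env/bin` is where the `hue` and `supervisor` commands end up, as the startup section later shows):

```shell
# Assumes the install path from above; append these lines to ~/.bash_profile
# and re-source it with: source ~/.bash_profile
export HUE_HOME=/home/hadoop/app/hue-3.9.0-cdh5.7.0
export PATH=$HUE_HOME/build/env/bin:$PATH   # build/env/bin holds the hue and supervisor commands
```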

Edit the configuration files

General configuration

Go into the desktop/conf directory, edit the hue.ini configuration file, and change the db permissions:

[hadoop@hadoop hue-3.9.0-cdh5.7.0]$ cd desktop/conf/
[hadoop@hadoop conf]$ ll
total 56
-rw-r--r--. 1 hadoop hadoop 49070 Mar 24 2016 hue.ini
-rw-r--r--. 1 hadoop hadoop 1843 Mar 24 2016 log4j.properties
-rw-r--r--. 1 hadoop hadoop 1721 Mar 24 2016 log.conf
[hadoop@hadoop conf]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.7.0/desktop/conf
[hadoop@hadoop conf]$ vim hue.ini


----------------------------------- begin --------------------------------------
# Set this to a random string, the longer the better.
# This is used for secure hashing in the session store.
secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

# Webserver listens on this address and port
http_host=hadoop001 (note: set this to your own hostname)
http_port=8888

# Time zone name
time_zone=Asia/Shanghai

Configure the HDFS superuser:
# This should be the hadoop cluster admin
## default_hdfs_superuser=hadoop (the admin user of your cluster)
----------------------------------- end ------------------------------------



Change the permissions on desktop.db:
[hadoop@hadoop hue-3.9.0-cdh5.7.0]$ cd desktop/
[hadoop@hadoop desktop]$ ll
total 252
drwxr-xr-x. 2 hadoop hadoop 81 Apr 14 18:20 conf
drwxr-xr-x. 5 hadoop hadoop 183 Apr 14 18:09 core
-rw-r--r--. 1 hadoop hadoop 253952 Apr 14 18:14 desktop.db
drwxr-xr-x. 15 hadoop hadoop 210 Mar 24 2016 libs
drwxrwxr-x. 2 hadoop hadoop 78 Apr 14 18:13 logs
-rw-r--r--. 1 hadoop hadoop 3467 Mar 24 2016 Makefile
[hadoop@hadoop desktop]$ chmod o+w desktop.db
[hadoop@hadoop desktop]$ ll
total 252
drwxr-xr-x. 2 hadoop hadoop 81 Apr 14 18:20 conf
drwxr-xr-x. 5 hadoop hadoop 183 Apr 14 18:09 core
-rw-r--rw-. 1 hadoop hadoop 253952 Apr 14 18:14 desktop.db
drwxr-xr-x. 15 hadoop hadoop 210 Mar 24 2016 libs
drwxrwxr-x. 2 hadoop hadoop 78 Apr 14 18:13 logs
-rw-r--r--. 1 hadoop hadoop 3467 Mar 24 2016 Makefile

Hadoop integration settings

1. hdfs-site.xml

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
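With dfs.webhdfs.enabled set to true, the NameNode answers WebHDFS REST calls on its HTTP port (50070 by default in Hadoop 2.x). A quick smoke test, assuming the NameNode host is hadoop001 as elsewhere in this guide:

```shell
# Assumption: NameNode web UI on hadoop001:50070, matching the hue.ini settings later on
NN_HOST=hadoop001
WEBHDFS_URL="http://${NN_HOST}:50070/webhdfs/v1"
echo "$WEBHDFS_URL"
# Once HDFS is up, list the root directory over REST:
#   curl -s "${WEBHDFS_URL}/?op=LISTSTATUS&user.name=hadoop"
```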

2. core-site.xml

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>

3. yarn-site.xml

<!-- Enable aggregation of application logs to HDFS -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>

<!-- How long aggregated logs are retained on HDFS, in seconds (3 days) -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>259200</value>
</property>
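The retention value is simply 3 days expressed in seconds; computing it avoids a hard-coded magic number:

```shell
# 3 days in seconds -- the value used for yarn.log-aggregation.retain-seconds
RETAIN_SECONDS=$((3 * 24 * 3600))
echo "$RETAIN_SECONDS"   # 259200
# With aggregation enabled, a finished application's logs can be fetched with:
#   yarn logs -applicationId <application_id>
```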

4. httpfs-site.xml

<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>

<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>
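HttpFs is an alternative gateway with the same REST surface as WebHDFS, and is the mechanism required for NameNode HA (as the hue.ini comments later note). It listens on port 14000 by default; if you use it, point hue.ini's webhdfs_url there instead. A sketch, assuming HttpFs runs on hadoop001:

```shell
# Assumption: HttpFs service running on hadoop001 (default port 14000)
HTTPFS_URL="http://hadoop001:14000/webhdfs/v1"
echo "$HTTPFS_URL"
# Same REST calls as WebHDFS, e.g.:
#   curl -s "${HTTPFS_URL}/?op=LISTSTATUS&user.name=hadoop"
```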

Hive configuration

(HiveServer2, with MySQL as a standalone metastore database)

hive-site.xml

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://192.168.137.130:9083</value>
  <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>

<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>192.168.137.130</value>
  <description>Bind host on which to run the HiveServer2 Thrift service.</description>
</property>
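HiveServer2 speaks Thrift on port 10000 by default (the same port configured for Hue's beeswax section below). Once it is running, the endpoint can be verified independently of Hue with beeline; a sketch using the host from the hive-site.xml above:

```shell
# Host and port taken from the hive-site.xml above / hue.ini below
HS2_HOST=192.168.137.130
HS2_PORT=10000
JDBC_URL="jdbc:hive2://${HS2_HOST}:${HS2_PORT}"
echo "$JDBC_URL"
# Once hiveserver2 is running, verify the Thrift endpoint:
#   beeline -u "$JDBC_URL" -n hadoop -e 'show databases;'
```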

Integrating Hive, Hadoop, and MySQL in hue.ini

$HUE_HOME/desktop/conf/hue.ini (edit hue.ini)

[hadoop]

# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs



Configure HDFS:
[[[default]]]
# Enter the filesystem uri

fs_defaultfs=hdfs://hadoop001:8020 (use your own host/IP)

# NameNode logical name.
## logical_name=

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.

webhdfs_url=http://hadoop001:50070/webhdfs/v1 (use your own host/IP)

# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false

# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True

# Directory of the Hadoop configuration
## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'

# Configuration for YARN (MR2)
# ------------------------------------------------------------------------
[[yarn_clusters]]



Configure YARN:
[[[default]]]
# Enter the host on which you are running the ResourceManager

resourcemanager_host=hadoop001 (use your own host/IP)

# The port where the ResourceManager IPC listens on

resourcemanager_port=8032

# See yarn.resourcemanager.address in yarn-site.xml
# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API

resourcemanager_api_url=http://hadoop001:8088

# URL of the ProxyServer API

proxy_api_url=http://hadoop001:8088

# URL of the HistoryServer API
# See mapreduce.jobhistory.webapp.address in mapred-site.xml

history_server_api_url=http://hadoop001:19888

# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True

[beeswax]

Configure Hive:
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).

hive_server_host=hadoop001

# Port where HiveServer2 Thrift server runs on.

hive_server_port=10000

Configure MySQL:
# mysql, oracle, or postgresql configuration.
[[[mysql]]] (note: remove the leading ## from this section header, or the settings below are ignored)
# Name to show in the UI.

nice_name="My SQL DB"

# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.

name=mysqldb

# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle

engine=mysql

# IP or hostname of the database to connect to.

host=hadoop001

# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521

port=3306

# Username to authenticate with when connecting to the database.

user=root

# Password matching the username to authenticate with when
# connecting to the database.

password=123456
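Before starting Hue, it is worth confirming that the MySQL settings above actually work; a sketch using the mysql command-line client with the host, port, and user from the configuration above:

```shell
# Values taken from the [[[mysql]]] settings above
MYSQL_HOST=hadoop001
MYSQL_PORT=3306
echo "mysql://${MYSQL_HOST}:${MYSQL_PORT}"
# Interactive check (enter the password from hue.ini when prompted):
#   mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u root -p -e 'SELECT VERSION();'
```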

Start Hue

Run the following commands to initialize the Hue database:

cd $HUE_HOME/build/env/

bin/hue syncdb

bin/hue migrate

Startup order

1. Start the Hive metastore
nohup hive --service metastore &

2. Start HiveServer2
nohup hive --service hiveserver2 &

3. Start Hue
nohup supervisor &

To run in the current terminal window instead, just omit nohup and &.

4. (The hue.ini configuration file lives under the desktop directory; the supervisor startup command lives under build/env/bin.)

Start it:
[hadoop@hadoop bin]$ pwd
/home/hadoop/app/hue-3.9.0-cdh5.7.0/build/env/bin
[hadoop@hadoop bin]$ ./supervisor
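Once supervisor is up, the web UI should answer on the http_host/http_port set in hue.ini earlier; a quick check, assuming those values:

```shell
# http_host / http_port as set in hue.ini earlier in this guide
HUE_URL="http://hadoop001:8888"
echo "$HUE_URL"
# Confirm the web UI responds (expect an HTTP 200 or a redirect to the login page):
#   curl -sI "$HUE_URL" | head -n 1
```

The first account created through that login page becomes the Hue admin user.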

Browse the file directories on HDFS:

Run a Hive query

Title: hue-3.9-cdh-5.7.0 Installation

Author: skygzx

Published: 2019-04-18 12:30

Last updated: 2019-04-19 16:30

Original link: http://yoursite.com/2019/04/18/hue-3.9-cdh-5.7.0安装/

License: CC BY-NC-ND 4.0 International. Please retain the original link and author when reposting.
