# hadoop-docker

**Repository Path**: shampoole/hadoop-docker

## Basic Information

- **Project Name**: hadoop-docker
- **Description**: hadoop3 dockerfile
- **Primary Language**: Docker
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 1
- **Created**: 2022-09-19
- **Last Updated**: 2022-09-26

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

## Build and Deploy

```shell
# Download the release tarballs into the repository root; they are used when building the image
https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-3.3.3/hadoop-3.3.3.tar.gz
https://dlcdn.apache.org/hive/hive-3.1.3/apache-hive-3.1.3-bin.tar.gz

## Build the image
docker build -f Dockerfile-Ubuntu --rm -t local/hadoop:3 .

## Install the docker compose plugin
yum install -y docker-compose-plugin

## Deploy
docker compose up -d
# The containers need some time after startup; once ready, visit the
# ResourceManager on port 8088 to check MapReduce jobs and node status

# View logs
docker logs hadoop1

## Enter the container as the hadoop user
docker exec -it -u hadoop hadoop1 bash

# Check HDFS DataNode status
$HADOOP_HOME/bin/hdfs dfsadmin -report

# Smoke test: run the grep example shipped with the Hadoop distribution
$HADOOP_HOME/bin/hdfs dfs -mkdir -p /user/hadoop
$HADOOP_HOME/bin/hdfs dfs -mkdir input
$HADOOP_HOME/bin/hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml input
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.3.jar grep input output 'dfs[a-z.]+'

## Stop the services
docker compose down

## Clean up disk space
docker rmi local/hadoop:3
docker system prune -f
```

## Port Reference

- NameNode: http://127.0.0.1:9870
- ResourceManager: http://127.0.0.1:8088
- MapReduce JobHistory Server: http://127.0.0.1:19888
- NodeManager: http://127.0.0.1:8042
- DataNode: http://127.0.0.1:9864
- HiveServer2 Web UI: http://127.0.0.1:10002
- HiveServer2 HTTP transport: 10001
- SSH: 10022 → 22
- 9000:9000
- 8031:8031

Connect to HiveServer2 with beeline:

```shell
beeline -u jdbc:hive2://127.0.0.1:10000/default -n hadoop -p hadoop
```
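The `docker compose up -d` step assumes a `docker-compose.yml` in the repository root, which is not shown in this README. A minimal hypothetical sketch of what such a file might look like, assuming a single container named `hadoop1`, the `local/hadoop:3` image built above, and the port mappings listed in the Port Reference (the repository's actual compose file may differ):

```yaml
# Hypothetical compose file illustrating the mappings mentioned in this README;
# the repository's actual docker-compose.yml may differ.
services:
  hadoop1:
    image: local/hadoop:3
    container_name: hadoop1
    hostname: hadoop1
    ports:
      - "9870:9870"    # NameNode web UI
      - "8088:8088"    # ResourceManager web UI
      - "19888:19888"  # MapReduce JobHistory Server
      - "8042:8042"    # NodeManager web UI
      - "9864:9864"    # DataNode web UI
      - "10000:10000"  # HiveServer2 thrift (beeline)
      - "10022:22"     # SSH
      - "9000:9000"
      - "8031:8031"
```

A multi-node cluster would add further services for the worker nodes on a shared network, each with its own hostname, so Hadoop's configuration files can address them by name.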