Hadoop: setting up a Hadoop environment on Windows with a VM

This article walks through downloading and installing the JDK and Hadoop, then covers the key steps for configuring a Hadoop cluster on Linux: setting environment variables, enabling passwordless SSH login, and adjusting the core configuration files.


Download the JDK: http://www.oracle.com/technetwork/java/javase/downloads/jdk10-downloads-4416644.html

Hadoop: http://hadoop.apache.org/

To transfer files to the Linux VM, install lrzsz: yum install lrzsz

Upload a file: rz -y    Download a file: sz filename

If the package is an .rpm, install it with: rpm -ivh ....rpm

For a .tar.gz archive, extract it with: tar -zxf xxxx.tar.gz
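The extraction step can be tried end-to-end with a throwaway archive (the paths under /tmp are only for this demo; in the real setup you would extract the Hadoop or JDK tarball):

```shell
# Build a dummy .tar.gz so the extract command is runnable as-is.
mkdir -p /tmp/demo/src
echo hello > /tmp/demo/src/file.txt
tar -C /tmp/demo -czf /tmp/demo/pkg.tar.gz src

# Same flags as above: -z gunzip, -x extract, -f archive file.
mkdir -p /tmp/demo/out
tar -C /tmp/demo/out -zxf /tmp/demo/pkg.tar.gz
cat /tmp/demo/out/src/file.txt
```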

JDK environment variables: vi /etc/profile, add JAVA_HOME (and put $JAVA_HOME/bin on PATH), then run source /etc/profile to make the changes take effect.
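A minimal sketch of the lines to append, assuming a hypothetical install path /usr/local/jdk-10; here they are written to a temp file instead of /etc/profile so the snippet is safe to run:

```shell
# Hypothetical JDK location -- adjust to wherever your JDK was installed.
cat > /tmp/java_profile.sh <<'EOF'
export JAVA_HOME=/usr/local/jdk-10
export PATH=$JAVA_HOME/bin:$PATH
EOF

# Equivalent of `source /etc/profile` for the snippet above.
. /tmp/java_profile.sh
echo "JAVA_HOME=$JAVA_HOME"
```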

Configure SSH for passwordless login:

ssh-keygen -t rsa generates a key pair in ~/.ssh: a public key id_rsa.pub and a private key id_rsa.

cat id_rsa.pub >> authorized_keys

chmod 644 authorized_keys

(Run the last two commands inside ~/.ssh.)
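The three steps above can be run non-interactively as one sketch; the -N/-f flags and the scratch directory are only so the demo does not touch a real ~/.ssh:

```shell
# Scratch directory standing in for ~/.ssh (demo only).
KEYDIR=$(mktemp -d)

# Generate the RSA key pair without a passphrase prompt.
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q

# Authorize the public key and set the same permissions as above.
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 644 "$KEYDIR/authorized_keys"
ls "$KEYDIR"
```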

Configure Hadoop environment variables:

vi /etc/profile

HADOOP_HOME=....

PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

source /etc/profile
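Spelled out with the conventional variable name HADOOP_HOME (the install prefix below is the one used by the hdfs-site.xml values in this guide; written to a temp file so it is safe to run):

```shell
# HADOOP_HOME matches the install prefix used elsewhere in this guide.
cat > /tmp/hadoop_profile.sh <<'EOF'
export HADOOP_HOME=/usr/mj/hadoop-2.9.1
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
EOF

. /tmp/hadoop_profile.sh
echo "HADOOP_HOME=$HADOOP_HOME"
```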

Files that need to be edited during configuration:

core-site.xml (fs.defaultFS is the current name of the old fs.default.name; the temp-directory property is hadoop.tmp.dir):

<configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://10.172.37.52:9000</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/usr/mj/tmp</value>
   </property>
   <property>
     <name>fs.trash.interval</name>
     <value>4320</value>
   </property>
</configuration>

hdfs-site.xml (the NameNode and DataNode directories must be distinct paths):

<configuration>
   <property>
      <name>dfs.namenode.name.dir</name>
      <value>/usr/mj/hadoop-2.9.1/current/name</value>
   </property>
   <property>
      <name>dfs.datanode.data.dir</name>
      <value>/usr/mj/hadoop-2.9.1/current/data</value>
   </property>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
   </property>
   <property>
      <name>dfs.permissions.superusergroup</name>
      <value>staff</value>
   </property>
   <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
   </property>
</configuration>

yarn-site.xml:

<configuration>
 <property>
   <name>yarn.resourcemanager.hostname</name>
   <value>10.172.37.52</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
 </property>
 <property>
   <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
 <property>
   <name>yarn.resourcemanager.address</name>
   <value>10.172.37.52:18040</value>
 </property>
 <property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>10.172.37.52:18030</value>
 </property>
 <property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>10.172.37.52:18025</value>
 </property>
 <property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>10.172.37.52:18141</value>
 </property>
 <property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>10.172.37.52:18088</value>
 </property>
 <property>
   <name>yarn.log-aggregation-enable</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.log-aggregation.retain-seconds</name>
   <value>86400</value>
 </property>
 <property>
   <name>yarn.log-aggregation.retain-check-interval-seconds</name>
   <value>86400</value>
 </property>
 <property>
   <name>yarn.nodemanager.remote-app-log-dir</name>
   <value>/tmp/log</value>
 </property>
 <property>
   <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
   <value>logs</value>
 </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>10.172.37.52:50038</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>10.172.37.52:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>10.172.37.52:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/jobhistory/done</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/jobhistory/done_intermediate</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>

slaves: list the worker machines, one hostname or IP address per line.
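For the single-node setup in this guide, the slaves file would contain just the one address (the demo writes to /tmp rather than the real Hadoop conf directory):

```shell
# One worker per line; this guide's single node doubles as the worker.
cat > /tmp/slaves <<'EOF'
10.172.37.52
EOF
wc -l < /tmp/slaves
```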

vi hadoop-env.sh and set JAVA_HOME explicitly (the daemons do not always pick it up from the shell environment).

Format the NameNode (first start only): hdfs namenode -format

Start Hadoop:

start-all.sh
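To verify the daemons came up, check the jps output for the expected process names. A sketch (jps ships with the JDK; the list of daemons assumes the single-node layout configured above):

```shell
# Daemons a single-node start-all.sh should launch.
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"

# jps lists running JVMs; empty if the JDK tools are not on PATH.
running=$(command -v jps >/dev/null 2>&1 && jps | awk '{print $2}' | tr '\n' ' ')

for d in $expected; do
  case " $running " in
    *" $d "*) echo "$d: up" ;;
    *)        echo "$d: DOWN" ;;
  esac
done
```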
