Remote debugging Hadoop 2.6.0 with Eclipse / IntelliJ IDEA
Many Hadoop beginners are probably in the same situation as me: without enough machine resources, the only option is to set up a pseudo-distributed Hadoop installation on Linux inside a virtual machine, and then write and test code on the Win7 host with Eclipse or IntelliJ IDEA. The question then is: how do you submit map/reduce jobs from Eclipse or IntelliJ IDEA on Win7 to the remote Hadoop instance, and debug them with breakpoints?
1. Preparation
1.1 On Win7, pick a directory and unpack hadoop-2.6.0 into it. In this article it is D:\yangjm\Code\study\hadoop\hadoop-2.6.0 (referred to as $HADOOP_HOME below).
1.2 Add a few environment variables on Win7:
HADOOP_HOME=D:\yangjm\Code\study\hadoop\hadoop-2.6.0
HADOOP_BIN_PATH=%HADOOP_HOME%\bin
HADOOP_PREFIX=D:\yangjm\Code\study\hadoop\hadoop-2.6.0
In addition, append ;%HADOOP_HOME%\bin to the end of the PATH variable.
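If you prefer to do this from a command prompt instead of the System Properties dialog, something like the following should work (a sketch using setx; new variables only become visible in consoles opened afterwards, and PATH itself is still best edited through the Environment Variables dialog, since setx can truncate long values):

setx HADOOP_HOME "D:\yangjm\Code\study\hadoop\hadoop-2.6.0"
setx HADOOP_PREFIX "D:\yangjm\Code\study\hadoop\hadoop-2.6.0"
setx HADOOP_BIN_PATH "D:\yangjm\Code\study\hadoop\hadoop-2.6.0\bin"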
2. Remote debugging with Eclipse
2.1 Download the hadoop-eclipse-plugin
hadoop-eclipse-plugin is a Hadoop plugin specifically for Eclipse; it lets you browse HDFS directories and file contents directly from the IDE. Its source code is hosted on GitHub at https://github.com/winghc/hadoop2x-eclipse-plugin
If you are interested you can download the source and build it yourself (plenty of articles cover that), but if you only want to use it, https://github.com/winghc/hadoop2x-eclipse-plugin/tree/master/release already provides various pre-built versions. Simply copy the downloaded hadoop-eclipse-plugin-2.6.0.jar into the eclipse/plugins directory and restart Eclipse.
2.2 Download the 64-bit Windows helper binaries for Hadoop 2.6 (hadoop.dll, winutils.exe)
In the hadoop 2.6.0 source tree, under hadoop-common-project\hadoop-common\src\main\winutils, there is a VS.NET project; building it produces a set of files. Of these outputs,
hadoop.dll and winutils.exe are the two that matter. Copy winutils.exe to $HADOOP_HOME\bin and copy hadoop.dll to %windir%\system32 (this mainly prevents the plugin from throwing various obscure errors, such as null reference exceptions).
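As a quick sanity check (assuming the two files ended up in the right places), open a new command prompt and run winutils.exe with no arguments; it should print its usage/help text rather than fail with a missing-DLL error:

%HADOOP_HOME%\bin\winutils.exe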
Note: if you don't want to build these yourself, you can download a pre-built archive such as hadoop2.6(x64)V0.2.rar
2.3 Configure the hadoop-eclipse-plugin
Start Eclipse, then Windows -> Show View -> Other
Window -> Preferences -> Hadoop Map/Reduce: point this at the Hadoop root directory on Win7 (i.e. $HADOOP_HOME)
Then, in the Map/Reduce Locations panel, click the small elephant icon
to add a new Location.
This dialog is very important; a few of its fields deserve explanation:
Location name: just a label, pick anything you like.
Map/Reduce(V2) Master Host: the IP address of the Hadoop master running in the virtual machine; the port below it corresponds to the port specified by the dfs.datanode.ipc.address property in hdfs-site.xml.
DFS Master Port: this port corresponds to the port specified by fs.defaultFS in core-site.xml (see the config snippets after this list for where both ports are defined).
Finally, user name must match the user that runs Hadoop in the virtual machine. I installed and run Hadoop 2.6.0 as the user hadoop, so I enter hadoop here; if you installed it as root, change it to root accordingly.
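For reference, the two ports come from the configuration files on the cluster side. A minimal sketch of the relevant snippets (50020 and 9000 are only example values; use whatever your own hdfs-site.xml and core-site.xml actually contain):

<!-- hdfs-site.xml: port for the Map/Reduce(V2) Master field -->
<property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
</property>

<!-- core-site.xml: host and port for the DFS Master field -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://172.28.20.xxx:9000</value>
</property>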
Once these fields are filled in, click Finish and Eclipse knows how to connect to Hadoop. If everything goes well, you will be able to see the HDFS directories and files in the Project Explorer panel.
You can right-click a file and try deleting it. The first attempt usually fails with a message that roughly says you have insufficient permissions, because the Win7 login user is not the user that runs Hadoop in the virtual machine. There are several ways to fix this; for example, you could create a hadoop administrator account on Win7, log into Win7 as hadoop and then work in Eclipse, but that is tedious. The simplest way:
Add the following to hdfs-site.xml:
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
Then, in the virtual machine, run hadoop dfsadmin -safemode leave
To be safe, also run hadoop fs -chmod 777 /
In short, this turns Hadoop's security checks off completely (fine while learning, but don't do this in production). Finally, restart Hadoop, go back to Eclipse, and repeat the file deletion from before; it should now succeed.
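Put together, the steps on the virtual machine look roughly like this (a sketch; here $HADOOP_HOME means the Hadoop install directory on the Linux VM, not the Win7 one, and the start/stop scripts are assumed to be in its sbin directory; the older hadoop dfsadmin form still works on 2.6.0):

hadoop dfsadmin -safemode leave
hadoop fs -chmod 777 /
# restart HDFS so the dfs.permissions change takes effect
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh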
2.4 Create the WordCount sample project
Create a new project and choose Map/Reduce Project.
Just click Next through the remaining screens, then add a WordCount.java with the following code:
package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Then add a log4j.properties with the following content (so the various outputs are easy to inspect once the job runs):
log4j.rootLogger=INFO, stdout

#log4j.logger.org.springframework=INFO
#log4j.logger.org.apache.activemq=INFO
#log4j.logger.org.apache.activemq.spring=WARN
#log4j.logger.org.apache.activemq.store.journal=INFO
#log4j.logger.org.activeio.journal=INFO

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n
The final directory structure looks like this:
Now you can hit Run. Of course it won't succeed yet, because WordCount hasn't been given any input arguments; see the screenshot below:
2.5 Set the run parameters
WordCount reads an input file, counts its words, and writes the result to another directory, so it needs two arguments. As in the screenshot above, enter in Program arguments:
hdfs://172.28.20.xxx:9000/jimmy/input/README.txt
hdfs://172.28.20.xxx:9000/jimmy/output/
Adapt these to your own setup (mainly, replace the IP with your virtual machine's IP). Note that if input/README.txt does not exist, upload it manually first (see the commands below), and /output/ must not already exist, otherwise the job will find the target directory present at the end and fail. Once this is done, you can set a breakpoint in a suitable place and finally debug:
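For reference, creating the input path and uploading a file can be done on the virtual machine roughly like this (a sketch; the /jimmy paths mirror the arguments above, and README.txt stands for any local text file on the VM):

hadoop fs -mkdir -p /jimmy/input
hadoop fs -put README.txt /jimmy/input/
# remove any leftover output directory from a previous run (the command simply errors if it is not there)
hadoop fs -rm -r /jimmy/output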
3. Remote debugging Hadoop with IntelliJ IDEA
3.1 Create a Maven WordCount project
The pom file is as follows:
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>yjmyzz</groupId> <artifactId>mapreduce-helloworld</artifactId> <version>1.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-common</artifactId> <version>2.6.0</version> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-mapreduce-client-jobclient</artifactId> <version>2.6.0</version> </dependency> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> </dependency> </dependencies> <build> <finalName>${project.artifactId}</finalName> </build> </project>
The project structure looks like this:
Right-click the project -> Open Module Settings (or press F12) to open the module settings.
Add a dependent Library reference.
Then import all the corresponding jars under $HADOOP_HOME.
The imported library can be given a name, e.g. hadoop2.6.
3.2 Set the run parameters
Pay attention to two things:
1. Program arguments: same approach as in Eclipse, specify the input file and the output directory.
2. Working Directory: the working directory, set it to the $HADOOP_HOME directory.
Then you can debug.
The only annoyance with IntelliJ is that, since there is no Eclipse-style Hadoop plugin, every time WordCount finishes you have to delete the output directory by hand from the command line before you can run or debug it again. To get around this, the WordCount code can be improved to delete the output directory before the job runs; see the code below:
package yjmyzz;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    /**
     * Delete the specified directory (recursively).
     *
     * @param conf
     * @param dirPath
     * @throws IOException
     */
    private static void deleteDir(Configuration conf, String dirPath) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path targetPath = new Path(dirPath);
        if (fs.exists(targetPath)) {
            boolean delResult = fs.delete(targetPath, true);
            if (delResult) {
                System.out.println(targetPath + " has been deleted successfully.");
            } else {
                System.out.println(targetPath + " deletion failed.");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }

        // Delete the output directory first
        deleteDir(conf, otherArgs[otherArgs.length - 1]);

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        for (int i = 0; i < otherArgs.length - 1; ++i) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
But that alone is not enough. When running inside the IDE, the IDE needs to know which HDFS instance to connect to (just as in database development you have to specify a DataSource in the XML config). Copy core-site.xml from $HADOOP_HOME\etc\hadoop into the resources directory, like this:
Its content is as follows:
<?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://172.28.20.***:9000</value> </property> </configuration>
Replace the IP above with the IP of your own virtual machine.
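If you would rather not copy config files around, an alternative (not part of the original setup, just a sketch) is to point the client at the cluster programmatically before the Job is created:

import org.apache.hadoop.conf.Configuration;

// inside main(), before Job.getInstance(conf, "word count"):
Configuration conf = new Configuration();
// "xxx" is a placeholder -- use your own VM's IP and the port from core-site.xml
conf.set("fs.defaultFS", "hdfs://172.28.20.xxx:9000");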