There are also a few other WARNs:
15/05/19 11:19:19 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/05/19 11:19:33 INFO AppClient$ClientActor: Connecting to master spark://172.18.219.136:7077...
15/05/19 11:19:34 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:19:49 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:19:53 INFO AppClient$ClientActor: Connecting to master spark://172.18.219.136:7077...
15/05/19 11:20:04 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/05/19 11:20:13 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/05/19 11:20:13 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
15/05/19 11:20:13 INFO TaskSchedulerImpl: Cancelling stage 1
15/05/19 11:20:13 INFO DAGScheduler: Failed to run collect at WordCount.scala:31
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
See also: http://taoistwar.gitbooks.io/spark-operationand-maintenance-management/content/spark_relate_software/hadoop_2x_install.html
In conf/spark-env.sh, set export SPARK_MASTER_IP= to the master node's hostname or IP address.
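For example, a minimal sketch of the relevant line, using the master IP that appears in the log above (adjust to your own cluster), then restart the cluster with sbin/stop-all.sh and sbin/start-all.sh so the change takes effect:

export SPARK_MASTER_IP=172.18.219.136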
If it is a hostname, check whether /etc/hosts resolves that hostname.
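For example, assuming the master's hostname is master01 (a hypothetical name), /etc/hosts on every node should contain an entry like the one below, and the hostname must match the one used in the spark:// master URL:

172.18.219.136   master01

You can verify resolution from a worker with: ping master01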