DataStax spark-cassandra-connector in local mode gives "Spark cluster looks down"


I am new to Spark and Cassandra. I am trying a simple Java program that adds new rows to a Cassandra table using the spark-cassandra-connector provided by DataStax.

I am running DSE on my laptop. Using Java, I am trying to save data to the Cassandra DB through Spark. This is the code:

    Map<String, String> extra = new HashMap<String, String>();
    extra.put("city", "bangalore");
    extra.put("dept", "software");
    List<User> products = Arrays.asList(new User(1, "vamsi", extra));
    JavaRDD<User> productsRDD = sc.parallelize(products);
    javaFunctions(productsRDD, User.class).saveToCassandra("test", "users");
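For context, `sc` is a `JavaSparkContext`. A minimal sketch of how it is created, assuming Cassandra is listening on 127.0.0.1 (the master URL is the one that appears in the log below; the app name is arbitrary):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    SparkConf conf = new SparkConf()
            .setAppName("CassandraSaveTest")
            .setMaster("spark://127.0.0.1:7077")
            // tell the connector where Cassandra is listening
            .set("spark.cassandra.connection.host", "127.0.0.1");
    JavaSparkContext sc = new JavaSparkContext(conf);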

When I execute this code I get the following error:

    16/03/26 20:57:31 INFO client.AppClient$ClientActor: Connecting to master spark://127.0.0.1:7077...
    16/03/26 20:57:44 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
    16/03/26 20:57:51 INFO client.AppClient$ClientActor: Connecting to master spark://127.0.0.1:7077...
    16/03/26 20:57:59 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
    16/03/26 20:58:11 ERROR client.AppClient$ClientActor: All masters are unresponsive! Giving up.
    16/03/26 20:58:11 ERROR cluster.SparkDeploySchedulerBackend: Spark cluster looks dead, giving up.
    16/03/26 20:58:11 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
    16/03/26 20:58:11 INFO scheduler.DAGScheduler: Failed to run runJob at RDDFunctions.scala:48
    Exception in thread "main" org.apache.spark.SparkException: Job aborted: Spark cluster looks down
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

It looks like you need to fix your Spark configuration. See this:

http://www.datastax.com/dev/blog/common-spark-troubleshooting
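The usual causes of "Initial job has not accepted any resources" are a master URL that does not point at a live Spark master, or a job requesting more memory/cores than any registered worker offers. Since you are on DSE, get the master address from `dsetool sparkmaster` instead of assuming 127.0.0.1, and check on the master web UI (port 7080 by default in DSE) that a worker is registered. A minimal sketch of the kind of configuration to try, assuming a single local DSE node (the memory/core values are illustrative caps, not required settings):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    SparkConf conf = new SparkConf()
            .setAppName("CassandraSaveTest")
            // use the exact URL reported by `dsetool sparkmaster`; it may be
            // the node's real IP rather than 127.0.0.1
            .setMaster("spark://127.0.0.1:7077")
            .set("spark.cassandra.connection.host", "127.0.0.1")
            // ask for less than the worker advertises so the job is accepted
            .set("spark.executor.memory", "512m")
            .set("spark.cores.max", "2");
    JavaSparkContext sc = new JavaSparkContext(conf);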

