I am running an application on a Spark cluster in YARN client mode with 4 nodes. Other than the master node, there are three worker nodes available, but Spark executes the application on only two of them. The workers are selected at random; there is no specific pair that is chosen each time the application runs.
For the worker that is not used, the following lines are printed in the logs:
```
INFO Client:54
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 192.168.0.67
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1550748030360
     final status: UNDEFINED
     tracking URL: http://aiserver:8088/proxy/application_1550744631375_0004/
     user: root
```
Here is my spark-submit command:

```
spark-submit --master yarn --class com.i2c.chprofiling.App App.jar --num-executors 4 --executor-cores 3 --conf "spark.locality.wait.node=0"
```
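One thing I am not sure about: I have read that spark-submit treats everything that comes after the application JAR as arguments to the application's main class rather than as spark-submit options, so perhaps my `--num-executors` and `--executor-cores` flags are not being picked up at all. A reordered version of the same command (all options before the JAR) would look like this:

```shell
# Same values as above, but with every spark-submit option placed
# before the application JAR; anything after App.jar is passed to
# com.i2c.chprofiling.App as a plain application argument.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.i2c.chprofiling.App \
  --num-executors 4 \
  --executor-cores 3 \
  --conf "spark.locality.wait.node=0" \
  App.jar
```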
Why doesn't my Spark application in YARN client mode run on all available worker machines?