Spark Kill Running Application

0 votes
I cannot allocate resources to any Spark application because one of the Spark applications is occupying all the cores.

Help needed.

Thanks in advance
Apr 25, 2018 in Apache Spark by Ashish
• 2,650 points
1,786 views

1 answer to this question.

0 votes
You can copy the application ID from the Spark scheduler UI.

Then connect to the server that is running the application you want to kill and use the command:

yarn application -kill <application_id>
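
For example, assuming the cluster runs on YARN and you have shell access to a node with the YARN client configured, you could first list the running applications to find the ID of the one occupying the cores, then kill it (the application ID below is only a made-up placeholder):

yarn application -list -appStates RUNNING
yarn application -kill application_1524666101234_0007

Once the application is killed, YARN releases its containers and the cores become available to the other applications again.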
answered Apr 25, 2018 by kurt_cobain
• 9,350 points
