Unresolved dependency issue on sbt package command

0 votes

Hi, after a long break I have started practicing Spark and Scala again.
I wrote a word count program that runs successfully in Eclipse, but when I build it with sbt, it fails with the exception below:

sbt.ResolveException: unresolved dependency: org.scala-lang#scala-library;2.11.8: not found
[error] unresolved dependency: org.apache.spark#spark-core;2.1.1: not found
[error] unresolved dependency: org.scala-lang#scala-compiler;2.11.8: not found

My Spark installation reports version 2.1.1, built with Scala 2.11.8.
I set the Scala version to 2.11.8 in my build, but it still fails. How can I solve this issue?

Jan 3, 2019 in Apache Spark by slayer
• 29,370 points

edited Jan 3, 2019 by Omkar

1 answer to this question.

0 votes

Check whether the system has internet access. sbt downloads dependencies from remote repositories (Maven Central by default), so builds sometimes fail simply because there is no connectivity.
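One quick way to verify connectivity is to request one of the missing artifacts directly. This is only a sketch: the URL assumes Maven Central's standard repository layout and the Scala 2.11 build of spark-core 2.1.1, so adjust it if your versions differ.

```shell
# Check that sbt's default resolver (Maven Central) is reachable by
# fetching the POM of the Scala 2.11 build of spark-core 2.1.1.
# -f: fail on HTTP errors, -s: silent, -I: headers only (no download).
curl -fsI https://repo1.maven.org/maven2/org/apache/spark/spark-core_2.11/2.1.1/spark-core_2.11-2.1.1.pom > /dev/null \
  && echo "Maven Central is reachable" \
  || echo "Cannot reach Maven Central: check your network or proxy settings"
```

If this fails while your browser works, you are likely behind a proxy and need to pass `-Dhttp.proxyHost`/`-Dhttps.proxyHost` options to sbt.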

If you have an internet connection and the build still fails, try changing the Scala version in your build.sbt, for example:

build.sbt

name := "WordcountFirstapp" 
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.1"
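Alternatively, since your Spark 2.1.1 reports Scala 2.11.8, you can keep the matching Scala version instead of downgrading. The `%%` operator appends the Scala binary suffix to the artifact name, so `scalaVersion` and the published Spark artifacts must agree. A sketch, reusing the same project name:

```scala
// build.sbt -- keeps Scala 2.11.x to match a Spark 2.1.1 built for Scala 2.11
name := "WordcountFirstapp"
version := "1.0"
scalaVersion := "2.11.8"
// %% appends the Scala binary version, resolving e.g. spark-core_2.11
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.1"
```

If artifacts were resolved once but the cache is corrupted, deleting the relevant directories under `~/.ivy2/cache` before re-running `sbt package` can also help.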


answered Jan 3, 2019 by Omkar
• 69,220 points
