Well, it depends on how the file is split into blocks in HDFS. With Spark's default settings, one partition is created for every HDFS block of the file, but you can also explicitly specify the number of partitions when you create the RDD.
Here is an example:
// Default behavior: one partition per HDFS block of the file
val rdd1 = sc.textFile("/home/hdadmin/wc-data.txt")
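To set the partition count explicitly, you can pass a second argument to textFile, which acts as a minimum number of partitions. A minimal sketch, reusing the same file path as above:
// Ask Spark for at least 6 partitions instead of one per HDFS block
val rdd2 = sc.textFile("/home/hdadmin/wc-data.txt", 6)
// Check how many partitions were actually created
println(rdd2.partitions.length)
Note that the second argument is a lower bound (minPartitions), so Spark may create more partitions than requested, but not fewer.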