Query regarding temporary table's metadata in Hive


Hi,

Suppose I have created a temporary table in Hive and used it for further processing. When I create a temporary table, where does its schema (metadata) get stored: in an external RDBMS table or in HDFS?

If in HDFS, where exactly does it reside?

May 22, 2019 in Big Data Hadoop by Raj

1 answer to this question.


Registered temp views are not cached in memory. The createOrReplaceTempView method (which replaced the deprecated registerTempTable) just creates or replaces a view of the given DataFrame with a given query plan. Only when creating a permanent view does Spark convert the query plan to a canonicalized SQL string and store it as view text in the metastore.
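For contrast, a permanent table or view does go through the metastore. A minimal sketch, assuming an active SparkSession `spark` with Hive support and an existing DataFrame `df` (the table and view names here are illustrative):

```scala
// Persist the data as a managed table: its metadata is recorded in the metastore.
df.write.mode("overwrite").saveAsTable("demo_table")

// A permanent view: its canonicalized SQL text is stored in the metastore,
// unlike a temp view, which lives only in the session's catalog.
spark.sql("CREATE OR REPLACE VIEW demo_view AS SELECT * FROM demo_table")
```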

You'll need to cache your DataFrame explicitly, e.g.:

df.createOrReplaceTempView("my_table") // df.registerTempTable("my_table") for Spark < 2.0

spark.catalog.cacheTable("my_table")

Let's illustrate this with an example.

Using cacheTable:

scala> val df = Seq(("1",2),("b",3)).toDF

// df: org.apache.spark.sql.DataFrame = [_1: string, _2: int]

scala> sc.getPersistentRDDs

// res0: scala.collection.Map[Int,org.apache.spark.rdd.RDD[_]] = Map()


scala> df.createOrReplaceTempView("my_table")


scala> sc.getPersistentRDDs

// res2: scala.collection.Map[Int,org.apache.spark.rdd.RDD[_]] = Map()

scala> spark.catalog.cacheTable("my_table")

scala> sc.getPersistentRDDs

// res4: scala.collection.Map[Int,org.apache.spark.rdd.RDD[_]] = Map(2 -> In-memory table my_table MapPartitionsRDD[2] at cacheTable at <console>:26)

Now the same example, this time caching the DataFrame first with df.cache and then registering the view:

scala> sc.getPersistentRDDs

// res2: scala.collection.Map[Int,org.apache.spark.rdd.RDD[_]] = Map()


scala> val df = Seq(("1",2),("b",3)).toDF

// df: org.apache.spark.sql.DataFrame = [_1: string, _2: int]


scala> df.createOrReplaceTempView("my_table")


scala> sc.getPersistentRDDs

// res4: scala.collection.Map[Int,org.apache.spark.rdd.RDD[_]] = Map()


scala> df.cache.createOrReplaceTempView("my_table")


scala> sc.getPersistentRDDs

// res6: scala.collection.Map[Int,org.apache.spark.rdd.RDD[_]] =

// Map(2 -> ConvertToUnsafe

// +- LocalTableScan [_1#0,_2#1], [[1,2],[b,3]]

// MapPartitionsRDD[2] at cache at <console>:28)
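When you are done, the cached table can be released through the same Catalog API; a quick sketch of the cleanup calls, assuming the same spark-shell session as above:

```scala
// Check the cache status of the view, then release the in-memory table.
spark.catalog.isCached("my_table")      // true while the table is cached
spark.catalog.uncacheTable("my_table")
sc.getPersistentRDDs                    // empty again once released
```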
answered May 22, 2019 by Tina
