SQL exception when exporting

Hello All,

I have an issue with a Hadoop job that runs every day based on a query. The query I am using currently fetches 3 years of data, but we need 6 years of data.

The query is:

SELECT account.account, 
       denorm.account_id, 
       denorm.incident_number, 
       denorm.incident_id, 
       denorm.casenumber, 
       denorm.incident_type, 
       denorm.incident_status, 
       denorm.comm_pref_code, 
       denorm.complexity, 
       denorm.current_severity, 
       denorm.initial_severity, 
       denorm.max_severity, 
       denorm.bug_cnt, 
       denorm.outage, 
       denorm.initial_portfolio_name, 
       denorm.entry_channel, 
       denorm.creation_date, 
       denorm.closed_date, 
       denorm.current_serial_number, 
       denorm.router_node_name, 
       denorm.summary, 
       denorm.customer_ticket_number, 
       denorm.incident_contact_email, 
       denorm.problem_code, 
       denorm.resolution_code, 
       denorm.sr_create_pfg, 
       denorm.install_at_site_id, 
       denorm.solution_release, 
       denorm.nlp_status, 
       denorm.b2b_flag, 
       denorm.install_at_site_key, 
       denorm.portfolio_number, 
       denorm.portfolio_desc, 
       denorm.contact_party_name, 
       denorm.contact_details, 
       denorm.org_party_name, 
       denorm.cco_id, 
       denorm.contract_number, 
       denorm.contract_service_line, 
       denorm.contract_line_status, 
       denorm.coverage_template_desc, 
       denorm.contract_start_date, 
       denorm.contract_end_date, 
       denorm.contract_expire_date, 
       denorm.tech_name, 
       denorm.hw_part_number, 
       denorm.hw_family, 
       denorm.hw_platform, 
       denorm.hw_business_unit, 
       denorm.sw_part_number, 
       denorm.sw_version, 
       denorm.sw_part_type, 
       denorm.sw_business_unit, 
       denorm.sw_family, 
       denorm.producttable_item_name, 
       denorm.producttable_item_description, 
       denorm.producttable_business_unit, 
       denorm.producttable_family, 
       denorm.bl_last_update_date, 
       denorm.sub_tech_name, 
       denorm.change_done_by_cco_id 
FROM   csp_tsbi.csp_tss_incidents_curated_input account
       INNER JOIN service_request_transformed_tsbi.sr_denorm_incidents denorm
               ON account.contract = denorm.contract_number
WHERE  COALESCE(TO_DATE(closed_date), TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())))
       BETWEEN DATE_SUB(TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())), 1095)
           AND TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP()))

Now I am trying for 6 years, so I changed 1095 to 2190 in the query, but when I run the job it fails while exporting the data to the SQL Server.
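For reference, the only change needed for the 6-year window is the day count passed to DATE_SUB in the WHERE clause above; a minimal sketch of the changed clause:

WHERE COALESCE(TO_DATE(closed_date), TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())))
      BETWEEN DATE_SUB(TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP())), 2190)  -- 2190 days ~ 6 years; was 1095 (3 years)
          AND TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP()))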

Please see the error below. I would really appreciate it if anyone could guide me on this. Thanks in advance.

LogType:stderr
Log Upload Time:Tue Sep 03 05:15:51 -0400 2019
LogLength:0
Log Contents:
End of LogType:stderr
 
LogType:stdout
Log Upload Time:Tue Sep 03 05:15:51 -0400 2019
LogLength:0
Log Contents:
End of LogType:stdout
 
LogType:syslog
Log Upload Time:Tue Sep 03 05:15:51 -0400 2019
LogLength:10480
Log Contents:
2019-09-03 02:05:22,625 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-09-03 02:05:22,678 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2019-09-03 02:05:22,679 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2019-09-03 02:05:22,680 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2019-09-03 02:05:22,680 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1563651888010_2140784, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@30bce90b)
2019-09-03 02:05:22,764 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2019-09-03 02:05:23,119 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /hdfs/app/local.hdprd-c01-r06-07.cisco.com.logs/usercache/phodisvc/appcache/application_1563651888010_2140784
2019-09-03 02:05:23,245 INFO [main] com.pepperdata.supervisor.agent.resource.O: Set a new configuration for the first time.
2019-09-03 02:05:23,330 INFO [main] com.pepperdata.common.reflect.d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem.getGlobalStorageStatistics
2019-09-03 02:05:23,330 INFO [main] com.pepperdata.common.reflect.d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem$Statistics.getBytesReadLocalHost
2019-09-03 02:05:23,344 INFO [main] com.pepperdata.supervisor.agent.resource.u: Scheduling statistics report every 2000 millisecs
2019-09-03 02:05:23,491 INFO [Pepperdata Statistics Reporter] com.pepperdata.supervisor.protocol.handler.http.Handler: Shuffle URL path prefix: /mapOutput
2019-09-03 02:05:23,491 INFO [Pepperdata Statistics Reporter] com.pepperdata.supervisor.protocol.handler.http.Handler: Initialized shuffle handler, starting uncontrolled.
2019-09-03 02:05:23,519 INFO [main] org.apache.hadoop.mapred.Task: mapOutputFile class: org.apache.hadoop.mapred.MapRFsOutputFile
2019-09-03 02:05:23,519 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2019-09-03 02:05:23,544 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2019-09-03 02:05:23,670 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/sr_passim_data_platform_mood_db_p01/000000_0:0+30277399
2019-09-03 02:05:23,674 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
2019-09-03 02:05:23,674 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
2019-09-03 02:05:23,675 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
2019-09-03 02:05:26,449 WARN [Thread-12] org.apache.sqoop.mapreduce.SQLServerExportDBExecThread: Error executing statement: java.sql.BatchUpdateException: String or binary data would be truncated.
2019-09-03 02:05:26,450 WARN [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Trying to recover from DB write failure:
java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
2019-09-03 02:05:26,451 WARN [Thread-12] org.apache.sqoop.mapreduce.db.SQLServerConnectionFailureHandler: Cannot handle error with SQL State: 22001
2019-09-03 02:05:26,451 ERROR [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Failed to write records.
java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
Caused by: java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        ... 1 more
2019-09-03 02:05:26,452 ERROR [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Got exception in update thread: java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
Caused by: java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        ... 1 more
 
2019-09-03 02:05:26,460 ERROR [main] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Asynchronous writer thread encountered the following exception: java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Exception raised during data export
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Exception:
java.io.IOException: java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.put(SQLServerAsyncDBExecThread.java:175)
        at org.apache.sqoop.mapreduce.SQLServerResilientExportOutputFormat$SQLServerExportRecordWriter.write(SQLServerResilientExportOutputFormat.java:159)
        at org.apache.sqoop.mapreduce.SQLServerResilientExportOutputFormat$SQLServerExportRecordWriter.write(SQLServerResilientExportOutputFormat.java:104)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:667)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
        at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:84)
        at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:346)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1633)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
Caused by: java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        ... 1 more
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input: 6862654212019-03-01 19:26:50JP Morgan Chase S1333Spark room  / issue   not accepate  push from cucmClosedCONFIG_ASSISTANCE\NPHONEmichael.j.chiappalone@jpmorgan.com2019-03-01 20:26:05SW_CONFIGTACMichael Chiappalone1-+19179397379- Ext: JP MORGAN CHASE BANKTelepresenceCTSSOL2006241223103554YYY403475303JP MORGAN CHASE BANK770658JEFFERSONVILLE\N47130-3451USNAMJPMC11228303652019-03-02 03:01:27Webex Room Kit (On-Prem/not cloud registered)
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input file: maprfs:///app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/sr_passim_data_platform_mood_db_p01/000000_0
2019-09-03 02:05:26,461 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: At position 23180227
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Currently processing split:
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Paths:/app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/sr_passim_data_platform_mood_db_p01/000000_0:0+30277399
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: This issue might not necessarily be caused by current input
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: due to the batching nature of export.
2019-09-03 02:05:26,462 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-03 02:05:26,462 INFO [Thread-13] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
End of LogType:syslog
asked Sep 4, 2019 in Big Data Hadoop by Hemanth

Hello @Hemanth, the error

String or binary data would be truncated.

means that the target column is smaller than the data being inserted into it. For example, suppose your table has a NAME column declared to hold 10 characters (VARCHAR(10)). If you try to insert a string of more than 10 characters, the statement fails with the above error.
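A minimal sketch that reproduces this on SQL Server, using a hypothetical temp table:

-- VARCHAR(10) holds at most 10 characters
CREATE TABLE #demo (name VARCHAR(10));
-- a 30-character value fails with error 8152, SQLSTATE 22001:
-- "String or binary data would be truncated."
INSERT INTO #demo (name) VALUES ('a string longer than ten chars');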

Check your table structure. See if the attribute lengths are big enough to hold the data that is being inserted. 
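One way to check the declared lengths on the SQL Server side is to query INFORMATION_SCHEMA.COLUMNS (here 'your_export_table' is a placeholder for your actual target table name):

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'your_export_table'
ORDER BY ORDINAL_POSITION;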

answered Sep 4, 2019 by Ramya
@Ramya, thanks a lot. I will check and update you.

@Ramya, I have checked the column sizes; on the Hive side, all the columns are just string. Can you give me any guidance on this?

All column details:

1 account string

2 account_id bigint

3 incident_number string

4 incident_id bigint

5 casenumber decimal(38,0)

6 incident_type string

7 incident_status string

8 comm_pref_code string

9 complexity string

10 current_severity string

11 initial_severity string

12 max_severity string

13 bug_cnt bigint

14 outage string

15 initial_portfolio_name string

16 entry_channel string

17 creation_date timestamp

18 closed_date timestamp

19 current_serial_number string

20 router_node_name string

21 summary string

22 customer_ticket_number string

23 incident_contact_email string

24 problem_code string

25 resolution_code string

26 sr_create_pfg string

27 install_at_site_id bigint

28 solution_release string

29 nlp_status string

30 b2b_flag string

31 install_at_site_key bigint

32 portfolio_number string

33 portfolio_desc string

34 contact_party_name string

35 contact_details string

36 org_party_name string

37 cco_id string

38 contract_number string

39 contract_service_line string

40 contract_line_status string

41 coverage_template_desc string

42 contract_start_date timestamp

43 contract_end_date timestamp

44 contract_expire_date timestamp

45 tech_name string

46 hw_part_number string

47 hw_family string

48 hw_platform string

49 hw_business_unit string

50 sw_part_number string

51 sw_version string

52 sw_part_type string

53 sw_business_unit string

54 sw_family string

55 producttable_item_name string

56 producttable_item_description string

57 producttable_business_unit string

58 producttable_family string

59 bl_last_update_date timestamp

60 sub_tech_name string

And here are the details of the particular row that failed in the exception:

 
csp_tss_incidents.account csp_tss_incidents.account_id csp_tss_incidents.incident_number csp_tss_incidents.incident_id csp_tss_incidents.casenumber csp_tss_incidents.incident_type csp_tss_incidents.incident_status csp_tss_incidents.comm_pref_code csp_tss_incidents.complexity csp_tss_incidents.current_severity csp_tss_incidents.initial_severity csp_tss_incidents.max_severity csp_tss_incidents.bug_cnt csp_tss_incidents.outage csp_tss_incidents.initial_portfolio_name csp_tss_incidents.entry_channel csp_tss_incidents.creation_date csp_tss_incidents.closed_date csp_tss_incidents.current_serial_number csp_tss_incidents.router_node_name csp_tss_incidents.summary csp_tss_incidents.customer_ticket_number csp_tss_incidents.incident_contact_email csp_tss_incidents.problem_code csp_tss_incidents.resolution_code csp_tss_incidents.sr_create_pfg csp_tss_incidents.install_at_site_id csp_tss_incidents.solution_release csp_tss_incidents.nlp_status csp_tss_incidents.b2b_flag csp_tss_incidents.install_at_site_key csp_tss_incidents.portfolio_number csp_tss_incidents.portfolio_desc csp_tss_incidents.contact_party_name csp_tss_incidents.contact_details csp_tss_incidents.org_party_name csp_tss_incidents.cco_id csp_tss_incidents.contract_number csp_tss_incidents.contract_service_line csp_tss_incidents.contract_line_status csp_tss_incidents.coverage_template_desc csp_tss_incidents.contract_start_date csp_tss_incidents.contract_end_date csp_tss_incidents.contract_expire_date csp_tss_incidents.tech_name csp_tss_incidents.hw_part_number csp_tss_incidents.hw_family csp_tss_incidents.hw_platform csp_tss_incidents.hw_business_unit csp_tss_incidents.sw_part_number csp_tss_incidents.sw_version csp_tss_incidents.sw_part_type csp_tss_incidents.sw_business_unit csp_tss_incidents.sw_family csp_tss_incidents.producttable_item_name csp_tss_incidents.producttable_item_description csp_tss_incidents.producttable_business_unit csp_tss_incidents.producttable_family csp_tss_incidents.bl_last_update_date csp_tss_incidents.sub_tech_name
1 JPMC NULL 686265421 NULL 1122830365 TAC Closed PHONE 2 Level -Advanced 3 3 3 NULL No JP Morgan Chase S1 PHONE 2019-03-01 19:26:50.0 2019-03-01 20:26:05.0 Spark room  / issue   not accepate  push from cucm michael.j.chiappalone@jpmorgan.com CONFIG_ASSISTANCE SW_CONFIG NULL 403475303 N/A Closed Y 72747016 10750 JP Morgan Chase S1 Michael Chiappalone 1-+19179397379- Ext:  JP MORGAN CHASE BANK MICIAELCHIAPPALONE5482 200624122 ECMU ACTIVE SWSS UPGRADES 2016-03-08 00:09:07.0 2021-10-09 00:00:00.0 NULL Telepresence CS-R55-UNIT-K9+ CTSSOL VERTSOL CVEBU cmterm-s53200ce9_6_2-5672d8aee2f.k3.cop.sgn CE9.6.2 TelePresence Software ITD Not Available CUWL-PMP+K9 Migration from CUWL Pro  PMP or Acano to PMP Plus UCIBU UWLU 2019-03-02 03:01:27.0 Webex Room Kit (On-Prem/not cloud registered
@Ramya, any guess on this?
Hi @Hemanth. While creating the table, you must have used VARCHAR for the string attributes, right? Did you specify a large enough size? Also, could you share the query used to create the table? That will make the problem easier to analyze.
@Ramya, I couldn't find the CREATE query because the table was created a long time ago. But is there any query that can alter the length of a table column, even if it is a string?
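On SQL Server, a string column can be widened in place with ALTER TABLE ... ALTER COLUMN; a minimal sketch, assuming a hypothetical target table name:

-- widen a column that is too narrow for the incoming data;
-- NVARCHAR(MAX) removes the length cap entirely
ALTER TABLE your_export_table
ALTER COLUMN summary NVARCHAR(4000);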

@Ramya,

Hello Ramya, I have run the job again and got the same error, but this time on a different input row.

May I know what is meant by "At position 23181010" and by "the batching nature of export"?

Please check the error below, and if you have any idea about it, please let me know.

2019-09-13 00:23:28,227 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-09-13 00:23:28,290 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2019-09-13 00:23:28,290 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2019-09-13 00:23:28,292 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2019-09-13 00:23:28,292 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1567580309142_394265, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@30bce90b)
2019-09-13 00:23:28,398 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2019-09-13 00:23:28,797 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /hdfs/app/local.hdprd-c01-r08-09.cisco.com.logs/usercache/phodisvc/appcache/application_1567580309142_394265
2019-09-13 00:23:28,917 INFO [main] com.pepperdata.supervisor.agent.resource.O: Set a new configuration for the first time.
2019-09-13 00:23:29,011 INFO [main] com.pepperdata.common.reflect.d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem.getGlobalStorageStatistics
2019-09-13 00:23:29,012 INFO [main] com.pepperdata.common.reflect.d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem$Statistics.getBytesReadLocalHost
2019-09-13 00:23:29,036 INFO [main] com.pepperdata.supervisor.agent.resource.u: Scheduling statistics report every 2000 millisecs
2019-09-13 00:23:29,185 INFO [Pepperdata Statistics Reporter] com.pepperdata.supervisor.protocol.handler.http.Handler: Shuffle URL path prefix: /mapOutput
2019-09-13 00:23:29,185 INFO [Pepperdata Statistics Reporter] com.pepperdata.supervisor.protocol.handler.http.Handler: Initialized shuffle handler, starting uncontrolled.
2019-09-13 00:23:29,209 INFO [main] org.apache.hadoop.mapred.Task: mapOutputFile class: org.apache.hadoop.mapred.MapRFsOutputFile
2019-09-13 00:23:29,211 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2019-09-13 00:23:29,248 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2019-09-13 00:23:29,383 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/sr_passim_data_platform_mood_db_p01/000000_0:0+30595592
2019-09-13 00:23:29,388 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
2019-09-13 00:23:29,388 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
2019-09-13 00:23:29,388 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
2019-09-13 00:23:33,751 WARN [Thread-12] org.apache.sqoop.mapreduce.SQLServerExportDBExecThread: Error executing statement: java.sql.BatchUpdateException: String or binary data would be truncated.
2019-09-13 00:23:33,751 WARN [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Trying to recover from DB write failure:
java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
2019-09-13 00:23:33,753 WARN [Thread-12] org.apache.sqoop.mapreduce.db.SQLServerConnectionFailureHandler: Cannot handle error with SQL State: 22001
2019-09-13 00:23:33,753 ERROR [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Failed to write records.
java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
Caused by: java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        ... 1 more
2019-09-13 00:23:33,754 ERROR [Thread-12] org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread: Got exception in update thread: java.io.IOException: Registered handler cannot recover error with SQL State: 22001, error code: 8152
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:293)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.run(SQLServerAsyncDBExecThread.java:240)
Caused by: java.sql.BatchUpdateException: String or binary data would be truncated.
        at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:1178)
        at org.apache.sqoop.mapreduce.SQLServerExportDBExecThread.executeStatement(SQLServerExportDBExecThread.java:96)
        at org.apache.sqoop.mapreduce.SQLServerAsyncDBExecThread.write(SQLServerAsyncDBExecThread.java:272)
        ... 1 more
2019-09-13 00:23:33,763 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input: 6395228522019-03-02 12:40:20JP Morgan Chase S1333SVO 03/02/2019Closed\NWEBnalin.sharma@jpmorgan.com2019-03-16 18:14:46NOT AVAILABLESVONalin Sharma91-9916109909- Ext: C3 TO BE ASSIGNED CPR COMPANYNot AvailableNot Available9313171512296661229666NN\N\N\N\N\N\NJPMC11228337212019-03-17 02:52:02Not Available
2019-09-13 00:23:33,763 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: On input file: maprfs:///app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/sr_passim_data_platform_mood_db_p01/000000_0
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: At position 23181010
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Currently processing split:
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: Paths:/app/SmartAnalytics/Apps/CSP/hivewarehouse/csp_tsbi.db/sr_passim_data_platform_mood_db_p01/000000_0:0+30595592
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: This issue might not necessarily be caused by current input
2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper: due to the batching nature of export.

2019-09-13 00:23:33,764 ERROR [main] org.apache.sqoop.mapreduce.TextExportMapper:
2019-09-13 00:23:33,764 INFO [Thread-13] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
End of LogType:syslog
