harsh2
Partner - Contributor III

Databricks as target

Getting the error below:

Handling End of table 'test'.'Rooms' loading failed by subtask 1 thread 1
Failed to copy data of file C:\Program Files\Attunity\Replicate\data\tasks\DATABRICKS\cloud\6\LOAD00000001.csv to database
RetCode: SQL_ERROR SqlState: 42000 NativeError: 80 Message: [Simba][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query. Error message from server: org.apache.hive.service.cli.HiveSQLException: Error running query: Failure to initialize configuration for storage account databricks0.dfs.core.windows.net: Invalid configuration value detected for fs.azure.account.keyInvalid configuration value detected for fs.azure.account.key
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:56)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:697)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.EmptyHandle$.runWith(UCSHandle.scala:124)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:574)
at org.apache.spark.sql.hive.t
Failed (retcode -1) to execute statement: COPY INTO `test`.`rooms` FROM(SELECT cast(_c0 as INT) as `room_id`, cast(_c1 as INT) as `hotel_id`, cast(_c2 as INT) as `room_number`, _c3 as `room_type`, cast(_c4 as INT) as `max_occupancy`, cast(_c5 as DECIMAL(8,2)) as `price_per_night` from 'abfss://test@databricks0.dfs.core.windows.net//DATABRICK_STG/4a77f60e-5074-fd40-8c76-49da2764d806/6') FILEFORMAT = CSV FILES = ('LOAD00000001.csv.gz') FORMAT_OPTIONS('nullValue' = 'attrep_null', 'multiLine'='true') COPY_OPTIONS('force' = 'true')


I also tried to execute the above query manually, but got the following error:

KeyProviderException: Failure to initialize configuration for storage account databricks0.dfs.core.windows.net: Invalid configuration value detected for fs.azure.account.key

Can anyone provide the correct Python code to mount the data drive?

Thanks,

Harsh


2 Replies
OritA
Support

Hi,

First, we recommend checking that you are working with the ODBC driver version that matches the Replicate version you are running. Second, check that the key that was provided is the correct key. Please see the following link for further information on the two points mentioned above. If you still face problems, please open a case and provide the task diagnostic package with the logs containing the error.

https://help.qlik.com/en-US/replicate/November2023/Content/Replicate/Main/Databricks%20Lakehouse%20(...
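
For a quick way to verify the key from a Databricks notebook, here is a minimal Python sketch. It assumes the storage account and container names from the error above, and that you paste in the access key being tested:

    # Set the account key for this notebook session (placeholder key).
    spark.conf.set(
        "fs.azure.account.key.databricks0.dfs.core.windows.net",
        "<storage-account-access-key>"
    )

    # If the key is valid, this lists the container contents instead of
    # raising the KeyProviderException shown above.
    display(dbutils.fs.ls("abfss://test@databricks0.dfs.core.windows.net/"))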

Thanks & regards,

Orit


shashi_holla
Support

@harsh2 

As @OritA mentioned, please check the user guide, especially the following statement, which should help resolve this issue (a Python sketch follows the quoted excerpt below):

  • To be able to access the storage directories from the Databricks cluster, users need to add a configuration (in Spark Config) for that Storage Account and its key.

    Example:  

    fs.azure.account.key.<storage-account-name>.dfs.core.windows.net <storage-account-access-key>

    For details, refer to the Databricks online help at: https://docs.databricks.com/clusters/configure.html#spark-configuration
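
Since the original question asked for Python, here is a minimal notebook sketch of the two common ways to supply the key. The storage account, container, mount point, and secret scope names are all placeholders you would substitute with your own:

    # Option 1: set the account key for the current Spark session.
    # This is the session-level equivalent of adding the following line
    # to the cluster's Spark Config:
    #   fs.azure.account.key.<storage-account-name>.dfs.core.windows.net <storage-account-access-key>
    spark.conf.set(
        "fs.azure.account.key.<storage-account-name>.dfs.core.windows.net",
        dbutils.secrets.get(scope="<scope-name>", key="<key-name>")
    )

    # Option 2: mount the container, as asked in the original question.
    dbutils.fs.mount(
        source="abfss://<container>@<storage-account-name>.dfs.core.windows.net/",
        mount_point="/mnt/<mount-name>",
        extra_configs={
            "fs.azure.account.key.<storage-account-name>.dfs.core.windows.net":
                dbutils.secrets.get(scope="<scope-name>", key="<key-name>")
        }
    )

Note that for the Replicate task itself, the key most likely needs to go in the cluster's Spark Config as the excerpt above describes, since the COPY INTO statement is executed on the cluster rather than in your notebook session; a notebook-level spark.conf.set only helps when testing the query manually.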