raghujakkani
Contributor III

Apply Latency increasing at Snowflake on Azure

Hello,

Task configured with Full Load + Apply Changes.

Source : Oracle database

Target : Snowflake on Azure. 

Full Load completed fine and Apply Changes ran without issue for a few days, but later the Apply Latency kept building up.

Here are my Task settings. 

[Screenshot: task settings]

[Screenshot: task settings]

Target endpoint Advanced settings:

[Screenshot: target endpoint Advanced settings]

3 Replies
Heinvandenheuvel
Specialist II

Run for 30 minutes or an hour with TARGET_APPLY logging set to TRACE. Roll the log before and after. Next, carefully study the individual applies to see which is slow and try to explain/understand it. You can also approach this from the Snowflake angle, looking for long-running queries issued by the Replicate connection.
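
For the Snowflake angle, the QUERY_HISTORY table function is one starting point. A minimal sketch, assuming the endpoint connects as a dedicated user; REPLICATE_USER is a hypothetical name, substitute whatever user your Snowflake target endpoint is configured with, and run it with a role that can see that user's queries:

-- Slowest queries from the last hour issued by the Replicate connection.
-- REPLICATE_USER is a hypothetical name; substitute your endpoint's user.
SELECT query_id,
       LEFT(query_text, 200)      AS query_text,
       warehouse_name,
       total_elapsed_time / 1000  AS elapsed_seconds,
       start_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
       END_TIME_RANGE_START => DATEADD('hour', -1, CURRENT_TIMESTAMP()),
       RESULT_LIMIT         => 1000))
WHERE user_name = 'REPLICATE_USER'
ORDER BY total_elapsed_time DESC
LIMIT 20;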

For the parameters shown - please enable 'apply concurrently'. That way, if there is one slower apply, at least the other applies in the batch do not have to wait but can proceed at the same time. Of course this is no guarantee, because the applies are done alphabetically by table name and the slowest table might be the very last to be started. I also think 100 MB for the files is a bit small; that's like 10 pictures' worth these days, right? :-) Go to 500 and see if it helps some.

As TARGET_APPLY at TRACE gives a bunch of information, you may want a structured tool to analyze the output. For that I created a PERL script; execute it with '-h' or read the code for help/options. It has helped many Replicate users. For mere money I'll help you use it, but the code is attached for your testing.

Hein.

Dana_Baldwin
Support

@raghujakkani Many customers have seen improved performance with the file size set to the maximum of 2000 MB. For a more nuanced approach, you can adjust based on the Snowflake warehouse size (XS/S/M/L) as a starting point: set the Max file size on the Advanced tab of the connection to 500/1000/1500/2000 MB respectively.

Also - you can increase logging for the Performance component to TRACE just to confirm where the latency is: source, target, or the Replicate server.

Thanks,

Dana

william75
Contributor

We use a bootleg merge-on-read approach at my current company. The data is loaded to a raw table with Snowpipe Streaming. A complex transformation is applied daily to materialize a table. Then, in a view, we union the materialized table with the transformation results for the raw data we haven't processed yet. The view can serve low-latency data without much of a performance hit this way.
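
A minimal sketch of that view with hypothetical names (RAW_EVENTS as the Snowpipe Streaming landing table, EVENTS_MATERIALIZED as the daily build, and a watermark kept in MATERIALIZATION_CONTROL); this illustrates the pattern, not our actual code:

-- Merge-on-read serving view; all table/column names are assumptions.
CREATE OR REPLACE VIEW EVENTS_SERVING AS
-- Already-transformed history, rebuilt by the daily job.
SELECT event_id, customer_id, amount, loaded_at
FROM EVENTS_MATERIALIZED
UNION ALL
-- The same transformation applied on the fly to raw rows the daily
-- job has not materialized yet, which keeps the view low latency.
SELECT payload:event_id::NUMBER       AS event_id,
       payload:customer_id::NUMBER    AS customer_id,
       payload:amount::NUMBER(10, 2)  AS amount,
       loaded_at
FROM RAW_EVENTS
WHERE loaded_at > (SELECT last_materialized_at
                   FROM MATERIALIZATION_CONTROL);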