dan_stroebel
Contributor II

Task stops when redo log is missing

I am replicating data from an Oracle DB into SQL Server 2019.

After a couple of days, the replication task stops with error:

Stream component 'st_0_TU' terminated
Stream component failed at subtask 0, component st_0_TU
Error executing source loop

Archived Redo log with the sequence 3799 does not exist, thread 1

 

The redo logs are deleted after 72 hours by our administrators, and this is sadly non-negotiable.

Is there a way to keep Replicate running without the redo logs?
E.g., is there some option to disregard them?

 

Thanks in advance,

Dan

 

3 Replies
Megha_More
Partner - Creator

Replicate needs the redo logs to capture changes; they are accessed through either LogMiner or Replicate's log reader.

If Replicate cannot find the logs, or they have been deleted, you will get the "log does not exist" error.

john_wang
Support

Hello @dan_stroebel , 

Thanks for posting in the Qlik Community!

Qlik Replicate is a log-based product: it parses the database transaction log (redo logs in Oracle, the TLOG in SQL Server, the binlog in MySQL, journals in DB2 for i, the oplog in MongoDB, etc.) to capture changes and apply them (DML and/or DDL) to the target. There is no way to capture changes without the redo logs, so I agree with @Megha_More.

However, there are several options for meeting Replicate's requirements before the Oracle redo logs are cleaned up:

1. Back up the Oracle redo log files to another folder on the Oracle server, and configure Replicate's "Look for missing archived redo logs in folder" option; or

2. Transfer the Oracle redo log files (e.g. via FTP) to the Replicate server and configure Replicate to read the redo files locally.

    (You will need a third-party tool or an OS script to copy the redo log files from the Oracle server to the Replicate server in a timely fashion.)

In both scenarios you also need to take care of cleanup, either with scripts or by enabling the "Delete processed archived redo log files" option in the Oracle source endpoint.
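For the OS-script part of option 2, here is a minimal sketch of a copy job you could schedule (cron, Task Scheduler, etc.). It assumes the Oracle archive destination is reachable as a mounted path from where the script runs; the function name, paths, and the `*.arc` file pattern are illustrative, not anything Replicate prescribes:

```python
import shutil
from pathlib import Path


def sync_redo_logs(source_dir: str, dest_dir: str, pattern: str = "*.arc") -> list[str]:
    """Copy archived redo log files from source_dir to dest_dir,
    skipping files already copied on a previous run.

    Returns the names of the files copied this run.
    """
    src = Path(source_dir)
    dst = Path(dest_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for log_file in sorted(src.glob(pattern)):
        target = dst / log_file.name
        # Same name and size -> assume it was already copied; archived
        # redo logs are never rewritten once closed.
        if target.exists() and target.stat().st_size == log_file.stat().st_size:
            continue
        shutil.copy2(log_file, target)  # preserves timestamps
        copied.append(log_file.name)
    return copied


if __name__ == "__main__":
    # Hypothetical paths: an NFS mount of the Oracle archive destination,
    # and the folder Replicate is configured to search for missing logs.
    sync_redo_logs("/mnt/oracle_archive", "/replicate/redo_backup")
```

Run it on an interval well inside the 72-hour retention window, and point the "Look for missing archived redo logs in folder" setting (option 1) or the local-read configuration (option 2) at the destination folder.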

By the way, a 72-hour cleanup interval is generally good enough. You can also set up latency notifications so that the DBA receives an email and can get involved as soon as latency (or another factor) crosses a high threshold (e.g. 48 hours), keeping the replication task running smoothly.

Ultimately, a full load is needed if the redo log files cannot be restored and made available to Replicate.

Hope this helps.

Regards,

John.

 

KellyHobson
Former Employee

Hey @dan_stroebel ,

I agree with @john_wang: a full reload is the best way to ensure you capture all data in the tables if you are past the retention window.

It is possible to 'skip' a redo log and move to a more recent one by querying for an SCN after the problematic redo log and then doing an advanced run -> start from that SCN. *Note*: this will lead to lost data, but it can be a temporary workaround until you can schedule a reload.
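To find a usable SCN, one approach is to ask Oracle's `V$ARCHIVED_LOG` view for the first SCN of the next available archived log after the missing sequence. A small sketch that builds that query (the helper function is hypothetical; you would run the resulting SQL yourself in SQL*Plus or via a database driver, with a user that can read `V$ARCHIVED_LOG`):

```python
def scn_after_missing_sequence_sql(thread: int, missing_seq: int) -> str:
    """Build SQL returning the lowest FIRST_CHANGE# (SCN) among archived
    redo logs after the missing sequence on the given thread.

    Any changes between the missing log and this SCN are lost when you
    restart the task from it.
    """
    # Plain int interpolation is fine for this admin one-off; use bind
    # variables if the inputs are not trusted.
    return (
        "SELECT MIN(FIRST_CHANGE#) AS restart_scn "
        "FROM V$ARCHIVED_LOG "
        f"WHERE THREAD# = {thread} AND SEQUENCE# > {missing_seq}"
    )
```

For the error in this thread (thread 1, missing sequence 3799), `scn_after_missing_sequence_sql(1, 3799)` gives the query to run; the returned SCN is what you would enter in the advanced run dialog.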

Best,

Kelly