jm32
Contributor III

Amazon RDS for SQL Server as an Endpoint - LSN errors

Does anybody else use Amazon RDS for SQL Server as an endpoint? We have a requirement where replicating from the transaction log is preferred over MS-CDC tables as the source, for performance reasons (DDL changes also get deployed to the target more reliably with the transaction log as the source than with MS-CDC). The tasks work fine, but periodically (at least once a week) they fail with an error like the example below, and we then have to reload the target. Are there any best practices or tips for using RDS for SQL Server as an endpoint?

Stream component 'st_0_xxx' terminated
Stream component failed at subtask 0, component st_0_xxx
Error executing source loop.
2023/11/30 05:54:46 INFO  LSN '000a235b:000db582:0025' could not be found, neither in the transaction log nor in the backup log.
13 Replies
Alan_Wang
Support

Hi @jm32 

The LSN that could not be found in the transaction logs or backup logs is listed in the error message. To prevent a reload, you can restore the backup log that contains that LSN and resume the task.

'000a235b:000db582:0025'

To prevent the missing-LSN error, we recommend keeping the log backups for a longer retention period.
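For anyone checking whether a given backup covers the failed LSN: a SQL Server LSN is three colon-separated hexadecimal fields (VLF sequence number : log block offset : slot), and LSNs order field by field. A small Python sketch to parse and compare them; the first LSN is the one from the error above, the second is a hypothetical last LSN from a backup:

```python
def parse_lsn(lsn: str) -> tuple[int, int, int]:
    """Parse a SQL Server LSN string like '000a235b:000db582:0025'
    into (VLF sequence, log block offset, slot) as ints, so that
    normal tuple ordering gives correct LSN ordering."""
    vlf, block, slot = lsn.split(":")
    return (int(vlf, 16), int(block, 16), int(slot, 16))

failed = parse_lsn("000a235b:000db582:0025")        # LSN from the error message
backup_last = parse_lsn("000a235c:00000001:0001")   # hypothetical last LSN in a backup

# The backup can cover the failed LSN only if its last LSN is at or past it.
print(failed <= backup_last)  # True for this pair
```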

 

If the issue is solved please mark the answer with Accept as Solution.
jm32
Contributor III
Author

Does RDS for SQL Server connect to the log backups, or only to the active transaction log?

Dana_Baldwin
Support

Hi @jm32 

You are correct, Replicate cannot access the backup TLOG. From our list of limitations in the User Guide:

jm32
Contributor III
Author

Thank you for your response! I ran the query below, and the results show all the log files. The first one is the TLog that we are using for Replicate. So unless I'm misunderstanding, it doesn't look like a storage issue where the online logs are moved to a backup location...

SELECT D.name AS [database_name]
    , F.name AS [file_name]
    , F.type_desc AS [file_type]
    -- size and max_size are reported in 8 KB pages; * 0.0078125 converts pages to MB
    , CONVERT(decimal(10,2), F.size * 0.0078125) AS [size_on_disk_mb]
    , CONVERT(decimal(10,2), F.max_size * 0.0078125) AS [max_size_mb]
FROM sys.master_files AS F
INNER JOIN sys.databases AS D
    ON F.database_id = D.database_id;

[Screenshot attachment jm32_0-1701383168234.png: query results listing the database log files]

 


Dana_Baldwin
Support

Hi @jm32 

Try this query using your LSN to see if it's still in the online log:

select top 10 * from fn_dblog('xxxxxxx:xxxxxxxx:0001', null)

Note: Please replace xxxxxxx:xxxxxxxx with a valid LSN.

Borrowed from this knowledge article:

Qlik Replicate: SQL Server Active/Backup presence ... - Qlik Community - 1902692

sys.fn_dblog reads from the active portion of the transaction log.
sys.fn_dump_dblog reads from the transaction log backups, so you can query that function to confirm that the LSN is in a backup.

Dana_Baldwin
Support

Hi @jm32 

I'm having difficulty identifying the exact query to run against fn_dump_dblog, specifically where to pass the LSN. These links might help:

Using fn_dblog, fn_dump_dblog, and restoring with STOPBEFOREMARK to an LSN - Paul S. Randal (sqlskil...

Read SQL Server Transaction Log Backups to Find When Transactions Occurred (mssqltips.com)
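Following up on Paul Randal's post above: a sketch of checking a log backup for an LSN with fn_dump_dblog, under the assumption (from that article) that the function takes the backup device as its fifth argument followed by 63 DEFAULT placeholders for the unused backup-file parameters. The .trn path is a placeholder; filtering on the [Current LSN] output column sidesteps the question of the LSN parameter format:

```sql
-- Placeholder path: replace N'D:\backups\mydb_log.trn' with your log backup.
-- fn_dump_dblog requires all 68 arguments; everything after the file is DEFAULT.
SELECT [Current LSN], Operation, [Transaction ID]
FROM fn_dump_dblog(
    NULL, NULL, N'DISK', 1, N'D:\backups\mydb_log.trn',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
WHERE [Current LSN] = '000a235b:000db582:0025';
```

A row coming back means that backup contains the LSN from the error.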

SushilKumar
Support

Hello team,

If our response has been helpful, please consider clicking "Accept as Solution". This will assist other users in finding the answer easily.

Regards,

Sushil Kumar

jm32
Contributor III
Author

Is there any advice on how to configure the task and the source to avoid these errors?

Dana_Baldwin
Support

Hi @jm32 

To my knowledge, the only way to ensure that an LSN is not moved out of the online log before the task reads it is to increase the maximum allocated storage size for the TLOG. I am not sure how full the log has to be before data is moved to the backup log, so you need enough headroom to leave a margin. Unfortunately I don't know how to estimate how much to increase it by, as that depends on the volume of activity on the source: you'll have to add space, monitor for this condition, and add more space if it is not enough.
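To monitor for this condition, one option (a sketch, assuming SQL Server 2012 or later, which RDS for SQL Server runs) is to poll the log-space DMV in the source database and alert before the log fills:

```sql
-- Run in the source database; shows how full the active transaction log is.
SELECT
    total_log_size_in_bytes / 1048576.0  AS total_log_mb,
    used_log_space_in_bytes / 1048576.0  AS used_log_mb,
    used_log_space_in_percent            AS used_pct
FROM sys.dm_db_log_space_usage;
```

Alerting when used_pct trends high gives a signal to add storage before log records are aged out from under the task.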

Thanks,

Dana