It depends on how "eventual" it can be. You could leave rdbRemote as best-effort and, when the day rolls to history (assuming it does), sync up the historical partition.
Simon wrote some code for recovering data between servers if one of the TPs crashes:
http://code.kx.com/wsvn/code/contrib/simon/tickrecover/recover.q. You could do something similar, though I imagine your case is much easier given the data is a replica - you would just have to find the gaps, copy the missing segment(s), then re-sort the table(s).
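A rough sketch of the gap-finding idea, run from rdbLocal. All names here are illustrative assumptions: a trade table with a time column, and rh as an open handle to rdbRemote. It buckets rows per minute on each side and copies across any minutes where the remote is short:

```q
/ illustrative sketch - trade table, time column and handle rh are assumptions
lc:select lcount:count i by m:1 xbar time.minute from trade;            / local counts per minute
rc:rh"select rcount:count i by m:1 xbar time.minute from trade";        / remote counts per minute
gaps:exec m from lc lj rc where lcount>0^rcount;                        / minutes where remote is short
/ copy the missing minutes across, then re-sort on the remote side
rh(insert;`trade;select from trade where(1 xbar time.minute)in gaps);
rh"`time xasc`trade";
```

You would need to be a little more careful than this in practice (e.g. partially-copied minutes would be duplicated by a blind insert), but it shows the shape of it.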
I would maybe be tempted to change the model though. Instead of pushing async to rdbRemote, have it pull periodically and synchronously. It could pull either from rdbLocal or from a separate process: you could build a local process which subscribes to the TP and keeps the TP messages in memory exactly as received. rdbRemote can then pull them synchronously on a slower timer and execute each one; once rdbRemote has read them they can be dropped from memory. The reason for keeping a message list rather than converting to tables is to preserve the sequencing, and so that control messages (e.g. .u.end) can be processed as well.
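Something like the below is what I have in mind - a minimal sketch, with ports, names (.buf.msgs, pull) and timer interval all invented for illustration:

```q
/ --- buffer process: subscribes to the TP, keeps raw messages ---
.buf.msgs:();                               / messages exactly as received, in order
upd:{[t;x].buf.msgs,:enlist(`upd;t;x)};     / store the call itself, not parsed tables
.u.end:{[d].buf.msgs,:enlist(`.u.end;d)};   / keep control messages in the sequence
h:hopen`::5010;                             / connect to the TP (port is an example)
h".u.sub[`;`]";                             / subscribe to all tables, all syms
pull:{[]r:.buf.msgs;.buf.msgs:();r};        / hand over pending messages, then drop them

/ --- rdbRemote: pull on a slower timer (5s here as an example) ---
/ assumes rdbRemote defines upd in the usual way (e.g. upd:insert)
bh:hopen`::5011;                            / handle to the buffer process
.z.ts:{value each bh"pull[]"};              / replay each message in order
\t 5000
```

Because each message is replayed with value in the order it arrived, the remote ends up applying exactly the same sequence of upd and .u.end calls as the local rdb did.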
(Whatever you choose, you'll likely also have to change the start-up procedure for rdbRemote, as I imagine it won't have access to the TP log for replay.)
Thanks
Jonny
AquaQ Analytics