How can I handle reorgs with streams?

I need to react quickly to new events in my backend. Ideally I would consider only unconfirmed events, but block reorganisations break the flow of my application, so I have to handle those as well.

So far the only solution seems to be to use an RPC node, scan blocks one by one and revert old changes/apply new changes when a reorg is detected.

Does the stream API offer some way to handle this problem (aside from providing “confirmed”/“unconfirmed” flags)?

for now it only provides those confirmed and unconfirmed flags

what other info could be provided in case of a reorg that would help you?

I think good info (in case a reorg is detected) would be hashes of dropped blocks.

Do you maybe know if the events provided by the stream API arrive in the “correct” order? I.e. if event A is emitted in block 1 and event B is emitted in block 2, is it possible to get B before A?

would it help you if we make a separate stream with reorg notifications?

we are already providing the hash of the confirmed block when a transaction is confirmed, maybe you can use that information

It would help a lot, I would be able to fully rely on streams.

confirmed blocks will come in order

Do you mean the block hash that is sent together with the confirmed event?
If so, I’m not sure how I can use that info for this particular problem (any ideas are welcome).

I’m thinking that when you have this info:

  "confirmed": true,
  "block": {
    "hash": "0x9e77b2e848c5bfa67cdd46a4fsd12df0daa2e8fde18f35db58c0406fe43e766f",
    "timestamp": "1627400000",
    "number": "12990639",
  }

then you can keep track of the unconfirmed block numbers and block hashes that you already processed, and when a confirmed block arrives you can compare its hash with the hash you stored for that block number and know whether you have to change something

or do you want to do it even faster than this?
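
To make that concrete, here is a minimal TypeScript sketch of the bookkeeping described above; the in-memory map, the payload interfaces, and the function names are assumptions for illustration, not part of the stream API:

    // Simplified payload: only the fields discussed above are modelled here.
    interface StreamBlock {
      hash: string;
      number: string;
      timestamp: string;
    }

    interface StreamPayload {
      confirmed: boolean;
      block: StreamBlock;
      // ...other event fields omitted
    }

    // block number -> hash of the unconfirmed block whose events we already processed
    const processedUnconfirmed = new Map<string, string>();

    function handlePayload(payload: StreamPayload): void {
      const { number, hash } = payload.block;

      if (!payload.confirmed) {
        // Remember under which hash this block number was processed.
        processedUnconfirmed.set(number, hash);
        // ...apply the unconfirmed events to application state here
        return;
      }

      // Confirmed payload: compare its hash with what we processed earlier.
      const previousHash = processedUnconfirmed.get(number);
      if (previousHash !== undefined && previousHash !== hash) {
        // Same block number, different hash: the block we processed was reorged out,
        // so revert the changes made from the old hash and re-apply from this one.
      }
      processedUnconfirmed.delete(number);
    }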

If I got it right, I can (in essence) detect a reorg by spotting two blocks with the same block number but different hashes.

I believe your idea is to store confirmed blocks together with their hashes. As an example, I would be able to search old events by block 12990639 and eliminate those events whose hash differs from 0x9e77b2e848c5bfa67cdd46a4fsd12df0daa2e8fde18f35db58c0406fe43e766f. That would work.

However, I see one problem here. Assume the transaction from block 12990639 gets dropped. It’s gone and I never get the confirmation from Moralis. The next relevant transaction appears in block 12990641. This transaction ends up being confirmed at some point in time.

The event from block 12990639 would still linger in my database. Since there were no other relevant events in that block, and the new event appears in a later block, I have no way of detecting that I should remove the event from block 12990639 (using only Moralis).

for the case that you posted there could be a solution (you could check the next validated block number and remove anything from before it that was not validated), but if the next relevant transaction is in block 12990641 + 1000 I don’t know a solution yet

or, if you didn’t receive a confirmation for that transaction after x minutes, then you should remove it
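
A rough sketch of that timeout idea, assuming unconfirmed events are tracked in an in-memory list with the time they were received; the 15-minute grace period and the helper names are arbitrary examples, not anything the stream API prescribes:

    // One entry per unconfirmed event we applied but have not yet seen confirmed.
    interface PendingEvent {
      id: string;          // however the application identifies the event
      blockNumber: string;
      blockHash: string;
      receivedAt: number;  // milliseconds since epoch
    }

    const pendingEvents: PendingEvent[] = [];
    const CONFIRMATION_TIMEOUT_MS = 15 * 60 * 1000; // example grace period

    // Call this periodically (e.g. from a setInterval or a cron job).
    function pruneStaleEvents(now: number = Date.now()): PendingEvent[] {
      const stale = pendingEvents.filter(
        (e) => now - e.receivedAt > CONFIRMATION_TIMEOUT_MS
      );
      for (const e of stale) {
        const index = pendingEvents.indexOf(e);
        if (index !== -1) pendingEvents.splice(index, 1);
        // ...also revert whatever the stale event changed in application state
      }
      return stale;
    }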

would sending a webhook request for every confirmed block work for this case? even if there is no info associated with that block number for that particular stream, just so you know which block numbers were confirmed

That would work well!
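
For completeness, a sketch of how such per-confirmed-block notifications could be consumed, assuming a hypothetical payload containing just the confirmed block’s number and hash, and an application-side map of still-unconfirmed events grouped by block number; since confirmed blocks come in order, anything recorded below the newly confirmed height that never got confirmed can be treated as dropped:

    // Hypothetical shape of a per-confirmed-block notification.
    interface ConfirmedBlockNotification {
      number: string; // block number that just became confirmed
      hash: string;
    }

    // block number -> ids of unconfirmed events we applied from that block
    const pendingByBlock = new Map<bigint, string[]>();

    function onConfirmedBlock(note: ConfirmedBlockNotification): void {
      const confirmedNumber = BigInt(note.number);

      for (const [blockNumber, eventIds] of pendingByBlock) {
        // Confirmed blocks arrive in order, so by the time this notification comes in,
        // confirmed events from earlier blocks should already have been delivered.
        // Anything still unconfirmed below the confirmed height was reorged out
        // (or its transaction was dropped) and can be reverted.
        if (blockNumber < confirmedNumber) {
          for (const id of eventIds) {
            // ...revert / delete the event with this id from application state
          }
          pendingByBlock.delete(blockNumber);
        }
      }
    }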