We have a dapp that needs to push the global NFT totalSupply state to each active user. The actual total supply isn't much data (max 100 data points), but it all runs through a single Transfer event, and we expect lumpy traffic: maybe 1 event per hour at quiet times, but 2000 events in 10 minutes during a launch.
All the afterSave functions for a given token-creation Transfer will touch the same data object. I assume they're processed sequentially (otherwise some baaad data will result), and therefore if 200 new events are added to the table in 1s, those afterSaves might take a little time to process (each is one await to load 2 objects and one await to save 3 objects), but that's unlikely to be terrible performance-wise.
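To illustrate what I mean by baaad data, here's a toy simulation (plain Node, nothing Parse-specific — just a stand-in for the load/save round-trips each afterSave does) showing why sequential processing matters for a shared counter:

```javascript
// Toy shared store, standing in for the single data object all the
// afterSave handlers update. Not Parse code.
const store = { totalSupply: 0 };

// One afterSave-like handler: read the current value, yield to the
// event loop (simulating the async DB round-trip), then write it back + 1.
async function afterSaveLike() {
  const current = store.totalSupply; // read
  await Promise.resolve();           // simulated await between read and write
  store.totalSupply = current + 1;   // write
}

// Handlers run strictly one after another: every increment lands.
async function runSequential(n) {
  store.totalSupply = 0;
  for (let i = 0; i < n; i++) await afterSaveLike();
  return store.totalSupply;
}

// Handlers run concurrently: every read sees 0, so increments are lost.
async function runConcurrent(n) {
  store.totalSupply = 0;
  await Promise.all(Array.from({ length: n }, () => afterSaveLike()));
  return store.totalSupply;
}
```

With 200 events, `runSequential(200)` ends at 200, while `runConcurrent(200)` collapses to 1 because every handler reads the counter before any of them has written it back — which is the data corruption I'm assuming sequential processing protects us from.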
If those 200 afterSaves happen over the course of 3-4 seconds, what happens for users who have live queries running (and how much will that impact performance)? Does every single one of the 200 db object changes come over the wire separately (possibly creating race conditions), or can some batching take place, where the live query knows a change has occurred and returns just the recently updated data?
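In case it helps frame the question: if the server sends every change separately, this is the kind of client-side coalescing I'd expect to need — a trailing debounce so a burst of 'update' events triggers one refresh with only the latest object (the names here are illustrative, not the actual SDK API):

```javascript
// Coalesce a burst of update events into a single flush with the
// newest object. onFlush is whatever refreshes the UI.
function makeCoalescer(onFlush, delayMs = 50) {
  let latest;
  let timer = null;
  return function push(object) {
    latest = object;               // keep only the newest snapshot
    if (timer === null) {
      timer = setTimeout(() => {   // trailing-edge flush
        timer = null;
        onFlush(latest);
      }, delayMs);
    }
  };
}

// Hypothetical wiring against a live-query subscription:
//   const push = makeCoalescer((obj) => renderSupply(obj));
//   subscription.on('update', push);
```

So 200 updates arriving within the debounce window would produce one render with the final totalSupply, rather than 200 renders racing each other.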
Is there any caching available to help out here? All the clients would be running the same live query (we can separate the lighter client-specific queries out), so it seems like it could be possible.
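For instance, within a single client app it seems like the subscription itself could be a shared singleton rather than per-component — a sketch of what I mean, where `subscribe` stands in for the real query-subscription call:

```javascript
// Cache subscriptions by query key so every component reuses the
// same one instead of opening a new live query per mount.
const subscriptionCache = new Map();

function sharedSubscription(key, subscribe) {
  if (!subscriptionCache.has(key)) {
    subscriptionCache.set(key, subscribe()); // first caller creates it
  }
  return subscriptionCache.get(key);         // everyone else reuses it
}
```

The open question is whether anything similar exists server-side, so that N clients running the identical query don't each cost a full query evaluation per change.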