Moralis server delays by hours when writing transactions to the database

I want to listen for new incoming BSC transactions,
and I have followed the Moralis instructions:

  // Initialize the SDK with the server's credentials
  await Moralis.start({
    serverUrl: 'xxxxxxx',
    appId: 'xxxxxx',
  });

  // Query the synced BSC transactions table, filtered by recipient address
  const query = new Moralis.Query("BscTransactions");
  query.equalTo("to_address", '0xxxxxxxxxxxxxxxxx');

  // Subscribe to live query events and log each newly created row
  const subscription = await query.subscribe();
  subscription.on("create", function (data) {
    console.log(data.attributes);
  });

But the results are not what I expected. For example:

{
  block_timestamp: 2021-12-08T07:33:06.000Z,
  hash: '0xbe3750f9279f4f8d77140d10997b4e0d96294ec2b55fadfb285622fd09962718',
  nonce: 3279,
  block_hash: '0x8499b081bb78b4ee68a512f11d7f4d16d04e8607148ca44e807e5e2998e06927',
  block_number: 13292197,
  transaction_index: 421,
  from_address: '0x8b4894e04251d0bcb434fd078760b763f25bac57',
  to_address: '0x10ed43c718714eb63d5aa57b78b54704e256024e',
  value: '0',
  gas_price: 5000000000,
  gas: 208408,
  input: '0x38ed1739',
  confirmed: false,
  createdAt: 2021-12-08T07:51:50.618Z,
  updatedAt: 2021-12-08T07:51:50.618Z
}

block_timestamp: 2021-12-08 07:33:06
but
createdAt: 2021-12-08 07:51:50

=> a delay of nearly 19 minutes between the on-chain timestamp and when the row was created

:smirk: :confounded: :persevere:

There can be a delay if your server is overloaded and cannot keep up with all the transactions. How big is the delay now?

It's still lagging badly, a 21-minute delay at this time! :worried:

What do you have there in particular? A watch on a specific address on BSC?
I see that you have fewer than 5k transactions in that table, so the load shouldn't be the problem.

I'm not doing anything special, just filtering by to_address, but the server seems overloaded (CPU at 77%).

It looks like there is some CPU usage there. Can you paste the address that you watch, so we can see how many transactions it has? Also, because the server is syncing historical data, it could take some time until it is up to date.

I have recreated a new server with the historical-data sync unchecked, but the result has not changed!

Now it looks like it is close to a 10-minute delay. Maybe the server cannot handle the load; how many transactions does that particular address have per minute?

I’m having a similar issue here. Blocks are around 40-45 minutes behind.

Admittedly, I'm drinking from a firehose (watching all Swap() events on a particular chain). Each table quickly fills with tens of thousands of entries.

Is there a recommended approach for this? Perhaps purging tables at “n-hourly” intervals, or running Swap() watchers on separate dApps?

A few minutes ago, I stopped all but one watcher and purged the tables, but CPU is still firing @ 99%.

Is that an upgraded server?
Purging old data from the tables could help.

Define “upgraded”. :thinking:

It’s on a “paid” account, yes.

An upgraded server is a production server available at an extra cost; you can view the details here.

If you’re on a regular server, it may be having trouble coping with that amount of data/syncing.

Gotcha. Yeah, I saw that, but didn’t see any tiers. The “contact us for pricing” always scares me. 🫠

Anyway, about an hour later, resetting all tables has brought syncing back to within about 6-7 mins. I think I’ll go with this solution in the short term.

Now I need some assistance with queries for purging tables at regular intervals (I’ve revived another thread on “Deletes”).

I am not currently interacting with Mongo (and would prefer to avoid having to load libraries for it). Also, I'd like to do it in a "dynamic" fashion rather than relying on "pre-deployed" cloud functions, unless said cloud functions accept relevant params. :thinking:
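Something like this is what I have in mind: one pre-deployed cloud function that accepts params, so the client can drive it dynamically. This is only a sketch; the purgeTable name, its params, the "Swaps" table, and the batch size are all mine, and I'm assuming the Parse-style destroyAll / useMasterKey options behave the same in Moralis cloud code:

    // Cloud code, deployed once to the Moralis server.
    // "purgeTable" and its params are illustrative, not a built-in API.
    Moralis.Cloud.define("purgeTable", async (request) => {
      const { table, olderThanMinutes } = request.params;
      const cutoff = new Date(Date.now() - olderThanMinutes * 60 * 1000);

      const query = new Moralis.Query(table);
      query.lessThan("createdAt", cutoff);
      query.limit(1000); // delete in batches to avoid CPU spikes

      const rows = await query.find({ useMasterKey: true });
      await Moralis.Object.destroyAll(rows, { useMasterKey: true });
      return rows.length; // rows purged in this run
    });

    // Client side: call it dynamically with whatever table/cutoff is needed.
    const purged = await Moralis.Cloud.run("purgeTable", {
      table: "Swaps",
      olderThanMinutes: 60,
    });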

You could use a cloud function that clears your tables, with jobs scheduled to run it at regular intervals.
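As a minimal sketch of that combination, the purge logic can be wrapped in a job so the server's Jobs scheduler can trigger it, e.g. hourly (the purgeSwapsJob name, the "Swaps" table, and the one-hour cutoff are illustrative):

    // Cloud code: a job that the server's scheduler can run at intervals.
    Moralis.Cloud.job("purgeSwapsJob", async (request) => {
      const cutoff = new Date(Date.now() - 60 * 60 * 1000); // keep only the last hour
      const query = new Moralis.Query("Swaps");
      query.lessThan("createdAt", cutoff);
      query.limit(1000); // delete in batches; re-run until the backlog is gone
      const rows = await query.find({ useMasterKey: true });
      await Moralis.Object.destroyAll(rows, { useMasterKey: true });
    });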

Do you dynamically create the syncs from code or do you do it manually from the server interface?
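If it's from code: legacy Moralis servers exposed address-watch cloud functions for creating syncs dynamically. The function names below are from memory and may differ between server versions, so treat them as assumptions and check the cloud functions available on your server first:

    // Assumed legacy cloud functions for dynamic syncs; verify the names
    // on your server version before relying on them.
    await Moralis.Cloud.run("watchBscAddress", {
      address: "0x10ed43c718714eb63d5aa57b78b54704e256024e",
    });

    // ...and later, to stop syncing that address:
    await Moralis.Cloud.run("unwatchBscAddress", {
      address: "0x10ed43c718714eb63d5aa57b78b54704e256024e",
    });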

I’m doing 99.999% of everything in the application dynamically. But I also try to make it as “deterministic” as possible. I prefer to develop that way.

Do you still need more help with this or is your current solution fine for now?