Server error from the speedy nodes

Hey guys,

I built an indexer of ERC721 and ERC1155 transfers that saves them to my own database. The indexer uses the standard Web3 API so it stays compatible with different Web3 providers.
It's running without issue on Ethereum mainnet and Ropsten with another provider.
I recently had to index transactions from BSC, so I started using Moralis as a provider via the Speedy Nodes.

The indexer has been getting a lot of errors since.

Here are the three most common errors I'm getting:

  • From the block listener:
Error bad response (status=401, headers={"date":"Tue, 22 Feb 2022 13:19:22 GMT","content-type":"text/plain","content-length":"12","connection":"close","cf-cache-status":"DYNAMIC","expect-ct":"max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"","server":"cloudflare","cf-ray":"6e188176d8571875-EWR"}, body="Unauthorized", requestBody="{\"method\":\"eth_blockNumber\",\"params\":[],\"id\":2249,\"jsonrpc\":\"2.0\"}", requestMethod="POST", url="https://speedy-nodes-nyc.moralis.io/XXXX/bsc/mainnet/archive", code=SERVER_ERROR, version=web/5.5.1) 
Error timeout (requestBody="{\"method\":\"eth_blockNumber\",\"params\":[],\"id\":5269,\"jsonrpc\":\"2.0\"}", requestMethod="POST", timeout=120000, url="https://speedy-nodes-nyc.moralis.io/XXXX/bsc/mainnet/archive", code=TIMEOUT, version=web/5.5.1) 
  • From functions that fetch logs:
Error bad response (status=503, headers={"date":"Tue, 22 Feb 2022 07:05:40 GMT","content-type":"text/plain","content-length":"95","connection":"close","x-envoy-upstream-service-time":"0","cf-cache-status":"DYNAMIC","expect-ct":"max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"","server":"cloudflare","cf-ray":"6e165e09db168c30-EWR"}, body="upstream connect error or disconnect/reset before headers. reset reason: connection termination", requestBody="{\"method\":\"eth_getLogs\",\"params\":[{\"fromBlock\":\"0xec158b\",\"toBlock\":\"0xec158b\",\"address\":\"0x3537bd14254a04b0f940c976b4dd481ff91251b5\",\"topics\":[[\"0xc3d58168c5ae7397731d063d5bbf3d657854427343f4c083240f7aacaa2d0f62\",\"0x4a39dc06d4c0dbc64b70af90fd698a233a518aa5d07e595d983b8c0526c8f7fb\"]]}],\"id\":4766,\"jsonrpc\":\"2.0\"}", requestMethod="POST", url="https://speedy-nodes-nyc.moralis.io/XXXX/bsc/mainnet/archive", code=SERVER_ERROR, version=web/5.5.1) 

I checked the quota of my account and it seems ok.
The 4 instances of the indexer I'm currently running use the same Moralis account and make around 400,000 requests a day. That's an average of about 277 requests per minute, well below the 1,500 per minute of the free account.
Anyway, the error codes don't seem to be the ones used for rate/usage limits.


Here is what the indexer requests from the Web3 provider (a rough sketch in code follows the list):

  1. The indexer is calling eth_blockNumber every 2 seconds
  2. On new block:
    a. it calls eth_getLogs twice: once to get logs from an ERC721 contract and once to get logs from an ERC1155 contract. The calls use a specific topic filter plus the fromBlock and toBlock filters.
    b. it also calls eth_getBlock once to get more info about the block
    c. once the logs are fetched, decoded, and saved to the database, two calls (one per contract) are made to verify the correctness of the saved information. This uses the Multicall2 smart contract from MakerDAO to batch all the requests into a single request.
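
To make that concrete, here is a rough sketch of the loop (not the exact code) using ethers.js v5; the contract addresses and topic hashes are placeholders, and the Multicall2 verification step is omitted:

```ts
// Rough sketch of the polling loop described above (ethers.js v5).
// The addresses and topic hashes are placeholders, not the real values.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider(
  "https://speedy-nodes-nyc.moralis.io/XXXX/bsc/mainnet/archive"
);

const ERC721_ADDRESS = "0x…";             // placeholder
const ERC1155_ADDRESS = "0x…";            // placeholder
const TRANSFER_TOPICS = [["0x…", "0x…"]]; // placeholder topic filter

let lastSynced = 0;

async function tick(): Promise<void> {
  // 1. eth_blockNumber every 2 seconds
  const latest = await provider.getBlockNumber();
  if (latest <= lastSynced) return;

  // 2a. two eth_getLogs calls, one per contract, filtered by topic and block range
  const [erc721Logs, erc1155Logs] = await Promise.all([
    provider.getLogs({
      address: ERC721_ADDRESS,
      topics: TRANSFER_TOPICS,
      fromBlock: lastSynced + 1,
      toBlock: latest,
    }),
    provider.getLogs({
      address: ERC1155_ADDRESS,
      topics: TRANSFER_TOPICS,
      fromBlock: lastSynced + 1,
      toBlock: latest,
    }),
  ]);

  // 2b. eth_getBlockByNumber for extra block information
  const block = await provider.getBlock(latest);

  // 2c. decode, save to the database, then verify via the MakerDAO Multicall2
  //     contract (batching the per-token checks into a single request); omitted here.

  lastSynced = latest;
}

setInterval(() => {
  tick().catch(console.error);
}, 2_000);
```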

I would like to know what the potential issues are and how to fix them.

Could it be some limitation from the archive speedy nodes?

Thank you!

from those errors it doesn't look like it is a rate limit problem.

also, there are different compute units per request: https://docs.moralis.io/misc/rate-limit#speedy-node-requests

if you try again after you get that error, does it work fine the next time, or do you get that error every time for some specific parameters?

It works fine the next time. The indexer restarts on error and tries to sync from the last synced block, so it executes the same request; maybe the toBlock parameter for eth_getLogs is different if a new block has been mined since.
I have never seen the indexer restarting indefinitely. But I once noticed a transfer that was not synced; I'm not sure exactly how or why, but re-running the indexer from the start fixed that specific transfer.
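
Roughly like this (a simplified sketch of the restart behaviour; syncFrom and the checkpoint are hypothetical stand-ins for the real code):

```ts
// Simplified sketch of the restart-on-error behaviour (hypothetical names).
let lastSyncedBlock = 0; // in the real indexer this checkpoint lives in the database

async function syncFrom(fromBlock: number): Promise<void> {
  // stand-in for the actual work: fetch logs, decode, save, advance the checkpoint
  lastSyncedBlock = fromBlock;
}

async function runForever(): Promise<void> {
  for (;;) {
    try {
      // after a crash this re-issues the same request from the last synced block
      await syncFrom(lastSyncedBlock + 1);
    } catch (err) {
      console.error("indexer error, restarting from last synced block", err);
      await new Promise((resolve) => setTimeout(resolve, 5_000)); // brief back-off
    }
  }
}
```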

Also, the indexer syncs one block behind the block returned by eth_blockNumber, to avoid uncles and also possible issues with Web3 providers where some nodes are not yet up to date with each other.

How does Moralis manage nodes that are not yet up to date with the latest block?
Would increasing the block "confirmation" gap between the value returned by eth_blockNumber and the block the indexer actually syncs help?
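
To illustrate what I mean by the confirmation gap (a sketch; CONFIRMATIONS is a hypothetical setting, currently effectively 1):

```ts
// Sketch of the confirmation lag; CONFIRMATIONS is a hypothetical setting.
import { ethers } from "ethers";

const CONFIRMATIONS = 1; // currently one block behind; would 2, 3, ... be safer?

async function getSyncTarget(
  provider: ethers.providers.JsonRpcProvider
): Promise<number> {
  const latest = await provider.getBlockNumber(); // eth_blockNumber
  return latest - CONFIRMATIONS;                  // only sync up to this block
}
```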

are you using the HTTP RPC or the WebSocket version?

HTTP RPC.
From experience, the WebSocket endpoints are even more unstable than the HTTP RPC.

for HTTP RPC you could get a random node behind a load balancer; for WebSocket it will be the same node

Most of the errors I get are from the block listener, which queries eth_blockNumber without any parameters:

Error bad response (status=400, headers={"date":"Tue, 22 Feb 2022 15:50:02 GMT","content-type":"application/json","transfer-encoding":"chunked","connection":"close","access-control-allow-origin":"*","x-envoy-upstream-service-time":"28077","cf-cache-status":"DYNAMIC","expect-ct":"max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"","server":"cloudflare","cf-ray":"6e195d7a5bd40ca5-EWR"}, body="{\"error\":{\"message\":\"Request failed with status code 503\"}}", requestBody="{\"method\":\"eth_blockNumber\",\"params\":[],\"id\":428,\"jsonrpc\":\"2.0\"}", requestMethod="POST", url="https://speedy-nodes-nyc.moralis.io/XXXX/bsc/mainnet/archive", code=SERVER_ERROR, version=web/5.5.1)

It looks like the Moralis server returns a 503 error, but Cloudflare returns it to me as a 400.
What does the 503 mean internally for the Speedy Nodes?

is there a way to easily replicate that error?


I ended up using the FallbackProvider (https://docs.ethers.io/v5/api/providers/other/#FallbackProvider) from Ethers.js with two providers configured with the same Moralis endpoint and a quorum of 1. No issues since. The fallback provider outputs an error roughly every 30 minutes when switching to the second provider, but the indexer works.
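
In case it helps someone, the setup looks roughly like this (a sketch; the priority/weight/stallTimeout values are my own choices, not anything Moralis recommends):

```ts
// Roughly the FallbackProvider setup I ended up with (ethers.js v5 sketch).
// priority/weight/stallTimeout are arbitrary choices, not Moralis recommendations.
import { ethers } from "ethers";

const url = "https://speedy-nodes-nyc.moralis.io/XXXX/bsc/mainnet/archive";

// Two providers pointing at the same Moralis endpoint, quorum of 1: the first
// successful answer wins, and failures on one provider fall back to the other.
const provider = new ethers.providers.FallbackProvider(
  [
    {
      provider: new ethers.providers.StaticJsonRpcProvider(url),
      priority: 1,
      weight: 1,
      stallTimeout: 2000,
    },
    {
      provider: new ethers.providers.StaticJsonRpcProvider(url),
      priority: 2,
      weight: 1,
      stallTimeout: 2000,
    },
  ],
  1 // quorum
);
```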

I still wonder whether the errors I was getting are known to Moralis.