What does your current delete code look like?
Have you used bulkDeleteMany in a cloud function?
Well, I'm trying to do something like this:
let swapsToDelete = [{ filter: { block_timestamp: { lessThan: relevantDate.toUTCString() } } }];
return Moralis.bulkDeleteMany(swapTable, swapsToDelete);
I receive the following error: You cannot use [object Object] as a query parameter.
It's not clear from the docs how you filter on greater than and less than. Or, I'm simply missing it.
Any update here? It would really be helpful to get some "painfully detailed" examples of advanced filters for bulk deletes. I would imagine that gt/lt would be fairly commonly expected comparisons.
What is the value of swapTable? I'm not sure if you can add additional constraints in this way, but see if you can get past the error.
Looks like you can do it a different way using destroyAll, tested with about 5 rows. I'm not sure how well it will do with larger amounts, e.g. in the 100s/1000s.
Cloud function (or run it with the SDK in your app):
Moralis.Cloud.define("deleteMany", async (request) => {
  const query = new Moralis.Query("Ages");
  query.greaterThan("age", 3);
  // return the promise so the cloud function waits for the delete to finish
  return query.find().then(Moralis.Object.destroyAll);
});
Hrm, I didn't realize it was ok to use that for massive deletes (hundreds of thousands and millions of records).
Moralis.Cloud.define('purgeSwapTable', async (request) => {
  let relevantDate = new Date();
  relevantDate.setHours(relevantDate.getHours() - 2);
  const swapTable = request.params.swapTable;
  const logger = Moralis.Cloud.getLogger();
  const query = new Moralis.Query(swapTable);
  query.lessThan('block_timestamp', relevantDate);
  // return the promise chain so the function waits for the delete and surfaces errors
  return query
    .find()
    .then(Moralis.Object.destroyAll)
    .catch((error) => {
      logger.error('ERROR FINDING ITEMS: ' + error.code + ' : ' + error.message);
    });
});
This doesn't work. I'm dynamically passing the swapTable and I've verified that it passes the correct class names. I've also tried multiple Date formats.
Maybe an issue with the query constraint for block_timestamp. Does the query actually work in general (not trying to delete anything, just checking whether that lessThan constraint returns any results)?
> hundreds of thousands and millions of records

Yes, possibly won't work then. destroyAll can empty a class completely with no other constraints set, so you can test it on one of these classes to see if it can empty it.
> This doesn't work. I'm dynamically passing the swapTable and I've verified that it passes the correct class names
I'm not sure what you mean; you can only be using one class name at a time for each query. What is an example of the value of swapTable?
I have a similar local query (using my local Moralis instance) for finding items via block_timestamp, and it works. But nothing in the cloud code seems to work or return "querywise".
> But nothing in the cloud code seems to work or return "querywise"
Are you getting a server error log from your catch? That's fine, you just need to check your server dashboard to see if anything was deleted.
Tables are currently named like so: UNISWAPvTWOETH, UNISWAPvTWOMATIC, …
It's definitely finding them. I can output the count when I use .limit(). I can also loop through the array of results. But no "action" on the results has any . . . well . . . result. All table sizes remain the same, or continue to grow as inserts happen.
I've tried .then() off the promise using Moralis.Object with no errors. I've tried const result = await ... and used res.destroyAll with no errors.
I'm finding anything beyond a basic "listing" cloud function frustratingly unintuitive. It would be nice to get a full spec on the Moralis.Cloud object. Other than size, it's not clear to me why calling destroyAll doesn't work, or why it's not a "proper" function call, or . . . so many other things.
I'll see if I can just batch a background local async() process since it appears local queries work.
What is an example of an event sync you're using to get this sort of large amount of data into a class? I would like to test on my end.
Moralis uses Parse Server, so you can look up documentation/info on Parse Server for the things Moralis's docs don't seem to cover.
Literally using coreservices_addEventSync to watch all of BSC/ETH/AVAX/MATIC (in 4 separate tables) for Swap events. I was using Speedy Nodes for this before Moralis shut them down. Now I'm trying to use sync events with cloud functions to emulate that and simply pull results every couple of minutes. However, it's necessary to purge the data regularly or the tables become extremely delayed.
> Literally using coreservices_addEventSync to watch all of BSC/ETH/AVAX/MATIC (in 4 separate tables) for Swap events
An exact example(s)/code with your parameters would help.
A detail I missed was setting the limit, which also applies to destroyAll:
query.limit(10000000);
Before, it was just using the default of 100.
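Putting the pieces of this thread together, here's a sketch of the purge with the limit applied before find(). The SDK object is passed in as a parameter purely so the logic can be exercised outside a Moralis server; in real cloud code you'd use the global Moralis and register the body via Moralis.Cloud.define. The 10,000 batch size is an assumption, not a documented value.

```javascript
// Sketch only: the body you would register via Moralis.Cloud.define('purgeSwapTable', ...).
// `sdk` stands in for the global Moralis object so the logic is testable in isolation.
async function purgeSwapTable(request, sdk) {
  const relevantDate = new Date();
  relevantDate.setHours(relevantDate.getHours() - 2);

  const query = new sdk.Query(request.params.swapTable);
  query.lessThan('block_timestamp', relevantDate);
  query.limit(10000); // the default limit is 100, so destroyAll would only ever see 100 rows

  const results = await query.find();
  return sdk.Object.destroyAll(results);
}
```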
This seems to work ok on my end with a lot of results, but it may take some time before it's finished. It may make sense to have an upgraded server to handle all of these tasks.
Yeah, I set limits to something manageable, but still discernible (e.g. 5000), and nothing changed. When I went to anything above 15000 (triggering over HTTP) I simply received 500 errors.
Yeah, maybe once we hit production. But for now, I have no interest in paying $300/mth to be able to delete large numbers of entries.
> Yeah, I set limits to something manageable, but still discernible (e.g. 5000), and nothing changed
10,000 deletions at a time works fine on my end - it took about 45 seconds for the dashboard to reflect this. 20,000 deletions took 75 seconds. This would just be your server maxing out its resources. You can run some tests to find out the limit you can use for your server with its current load.
A smaller limit of a few thousand would be fine, which you can run repeatedly, unless you're adding tens of thousands of objects every few seconds.
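The "smaller limit, run repeatedly" idea can be sketched as a loop that keeps deleting fixed-size batches until the query comes back empty. As before, the SDK is injected so this runs outside Moralis; purgeInBatches and the 3,000 default batch size are my names and assumptions, not Moralis API.

```javascript
// Repeatedly delete small batches until no rows match, instead of one huge delete.
// `sdk` stands in for the Moralis global; batchSize is an assumed, tunable value.
async function purgeInBatches(sdk, tableName, cutoff, batchSize = 3000) {
  let total = 0;
  for (;;) {
    const query = new sdk.Query(tableName);
    query.lessThan('block_timestamp', cutoff);
    query.limit(batchSize); // keep each round small enough for the server
    const results = await query.find();
    if (results.length === 0) break; // nothing older than the cutoff is left
    await sdk.Object.destroyAll(results);
    total += results.length;
  }
  return total;
}
```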
Otherwise look at doing it directly through MongoDB. You just extend out your API use to include your own server/backend.
Are you running this as a cloud function or on a local Moralis instance? Mind showing the code?
What's the recommended way here? Mongoose? Extend the base Moralis object? I tried Mongoose but got no results. Do I also need to WL IP addresses? Also, what is the default db name for connections? Is it just the dApp name? Any pointers would be helpful.
Never mind, I found the parse table. Without searching the forums (or guessing based on the Python example) this isn't immediately clear. Also had to WL my IP.
It would really be appreciated if whoever is responsible for the docs would provide more detail and more explicit examples. I realize this isn't practical for all cases, but when you have examples connecting to admin with no (obvious) examples of where to find the names of other tables, it makes for much :dizzy:
In summary:
- Mongo client to connect directly
- parse table
- MongoClient.db(...).collection(...).deleteMany() [I prefer to break these up into separate objects but just used a single line to demo the "relationships"]

Anyway, thanks for the input.
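For the direct-MongoDB route, here is a sketch of what the summary describes, with the delete filter separated from connection handling. The database/collection names, the block_timestamp filter field, and the URI are assumptions based on this thread, not documented Moralis values.

```javascript
// Delete all rows older than `cutoff` from one collection; the filter logic is
// separated from connection handling so it can be tested with a mock collection.
async function purgeSwaps(collection, cutoff) {
  return collection.deleteMany({ block_timestamp: { $lt: cutoff } });
}

// Wiring sketch (assumes `npm install mongodb` and a whitelisted IP, per the thread):
async function connectAndPurge(uri, dbName, tableName, cutoff) {
  const { MongoClient } = require('mongodb');
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // e.g. dbName = 'parse', tableName = 'UNISWAPvTWOETH' (assumed values)
    return await purgeSwaps(client.db(dbName).collection(tableName), cutoff);
  } finally {
    await client.close();
  }
}
```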
NOTE: This entire listing is, of course, a workaround due to the bulkDeleteMany cloud functions simply not working well for me.
NOTE: .deleteMany() may time out if the result set is too large, in which case you may .find().limit() and loop through.
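The timeout workaround in that note (.find().limit(), then loop) might look like the sketch below: it fetches matching _ids in fixed-size chunks and deletes exactly those, so no single deleteMany call has to cover the whole result set. The chunk size and filter field are assumptions.

```javascript
// Chunked variant of the Mongo purge: fetch up to `chunkSize` matching _ids,
// delete exactly those documents, and repeat until the query returns nothing.
async function purgeInChunks(collection, cutoff, chunkSize = 5000) {
  let total = 0;
  for (;;) {
    const chunk = await collection
      .find({ block_timestamp: { $lt: cutoff } }, { projection: { _id: 1 } })
      .limit(chunkSize)
      .toArray();
    if (chunk.length === 0) break; // nothing older than the cutoff remains
    const { deletedCount } = await collection.deleteMany({
      _id: { $in: chunk.map((doc) => doc._id) },
    });
    total += deletedCount;
  }
  return total;
}
```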
@cryptokid @alex, what is the recommended opposite of coreservices_addEventSync?
If we keep the tables pruned on a regular basis (every 30 mins or so), we have zero issues. However, in situations in which the sync gets behind (e.g. we stop testing our apps for the day), the syncs get waaaaaaay behind again (as expected). Clearing the tables in those situations still works, it just takes a while to "catch up". Can we simply "destroy" our sync subscription and recreate it?
There is a way to do it, I don't remember the exact name now. It could be something that starts with remove_