Spotify metadata

I notice that the metadata JSON in the Spotify tutorial is all created manually, track by track, parameter by parameter.

I happen to use a program called beaTunes, from the Fraunhofer Lab in Germany, where MP3s were invented, I think. It calculates and embeds all kinds of info, as we see here.

Anyhow, from a user's POV it would be great if recordings could be searched by more parameters: BPM, mood, [AI-estimated] genre, transitions, "acousticness," "danceability," and so forth. The kinds of emphases we see in the Spotify API.

Might it be possible for people building your clone to employ existing and richer metadata that’s already baked into audio files?


Yes, you would just need to include those parameters in the metadata (e.g. in the token URI) and then implement a search feature to filter the results.
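A search feature over such metadata could be as simple as filtering the fetched token list client-side. Here's a minimal sketch; the field names (`bpm`, `genre`) are illustrative, not a fixed standard:

```javascript
// Minimal client-side filter over an array of token metadata objects.
// Assumes each track's metadata carries Spotify-style fields such as
// bpm and genre (illustrative names, not a standard schema).
function searchTracks(tracks, { minBpm = 0, maxBpm = Infinity, genre } = {}) {
  return tracks.filter((t) =>
    t.bpm >= minBpm &&
    t.bpm <= maxBpm &&
    (genre === undefined || t.genre === genre)
  );
}

const tracks = [
  { name: 'A', bpm: 124, genre: 'House' },
  { name: 'B', bpm: 90, genre: 'Hip-Hop' },
  { name: 'C', bpm: 128, genre: 'House' },
];

console.log(searchTracks(tracks, { minBpm: 120, genre: 'House' }).map((t) => t.name));
// → [ 'A', 'C' ]
```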


But the ingestion of the metadata, field by field, track by track, remains manual?

I’m not sure what you mean by manual ingestion. Do you mean the process of putting this extra data into the NFT’s metadata?

Sorry, I misspoke. If I remember correctly from the YouTube tutorial, the metadata for a file (author, year, etc.) was typed out manually into a JSON file.
My question is: my files already have many fields filled in automatically. I'd like Moralis, if possible, to pull that same metadata into a JSON automatically, too.

I’d like Moralis

I’m not sure what you mean by this, are you looking for this to be done for you?

You would have to automate this process in some way. If this data is stored in the media file itself, it could be read, e.g. with JavaScript (there seem to be a few libraries for this), and then put into a JSON format.
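As a sketch of that pipeline: a function that maps tag data into NFT-style metadata JSON. In practice the `tags` object would come from an ID3-reading library (for example, the `music-metadata` npm package exposes parsed tags via `parseFile`); here a hand-rolled sample object stands in, and the attribute names are illustrative:

```javascript
// Sketch: map embedded tag data (e.g. from an ID3 reader) into the
// kind of metadata JSON used for a token URI. Field names here are
// illustrative, not a fixed standard.
function tagsToTokenMetadata(tags) {
  return {
    name: tags.title,
    attributes: [
      { trait_type: 'Artist', value: tags.artist },
      { trait_type: 'Year', value: tags.year },
      { trait_type: 'BPM', value: tags.bpm },
      { trait_type: 'Genre', value: tags.genre },
    ],
  };
}

// In practice `tags` would come from something like:
//   const mm = require('music-metadata');
//   const { common } = await mm.parseFile('track.mp3');
// Here we use a sample object instead:
const sample = { title: 'Demo Track', artist: 'Someone', year: 2021, bpm: 124, genre: 'House' };
console.log(JSON.stringify(tagsToTokenMetadata(sample), null, 2));
```

Running this over a folder of files and writing one JSON per track would replace the manual, field-by-field typing from the tutorial.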

No, I just mean: are there, for example, tweaks for handling such matters within known, open-source Moralis projects? Familiar automation tools, commonly used snippets, etc.
I'm not looking to hire anybody, thanks: just to get some pointers before I embark on a substantial investment of my own time, so I remain very grateful for your comments.