I notice that the metadata JSON in the Spotify tutorial is all created manually, track by track, parameter by parameter.
I happen to use a program called beaTunes, which I believe comes out of the Fraunhofer Lab in Germany, where MP3s were invented. It calculates and embeds all kinds of info, as we see here.
Anyhow, from a user's point of view, it would be great if recordings could be searched on more parameters: BPM, mood, [AI-estimated] genre, transitions, "acousticness," "danceability," and so forth: the kinds of attributes the Spotify API emphasizes.
Might it be possible for people building your clone to tap the richer metadata that's already baked into their audio files?
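To make the idea concrete: tools like beaTunes write their analysis results (BPM, genre, key, mood) into standard ID3v2 text frames inside the MP3 itself, so a clone could read them instead of hand-typing the JSON. Here is a minimal, stdlib-only sketch of pulling text frames out of an ID3v2.3 tag; it handles only the simple cases (no unsynchronisation, Latin-1 or UTF-16 text), and in real code you would reach for a library such as mutagen instead. The function name is my own invention, not anything from the tutorial.

```python
import struct

def read_id3_text_frames(data: bytes) -> dict:
    """Parse text frames (TBPM, TCON, TKEY, ...) from a raw ID3v2.3 tag blob.

    Minimal sketch: assumes an unsynchronisation-free ID3v2.3 tag with
    Latin-1 or UTF-16 text frames. Real code should use a library like
    mutagen rather than hand-parsing.
    """
    if data[:3] != b"ID3":
        return {}
    # The tag size is a 28-bit "synchsafe" integer: 7 payload bits per byte.
    size = 0
    for b in data[6:10]:
        size = (size << 7) | (b & 0x7F)
    pos, end, frames = 10, 10 + size, {}
    while pos + 10 <= end:
        frame_id = data[pos:pos + 4]
        if not frame_id.strip(b"\x00"):
            break  # reached zero padding at the end of the tag
        # In ID3v2.3, frame sizes are plain big-endian 32-bit integers.
        frame_size = struct.unpack(">I", data[pos + 4:pos + 8])[0]
        body = data[pos + 10:pos + 10 + frame_size]
        if frame_id.startswith(b"T") and body:
            # First body byte is the text encoding: 0 = Latin-1, 1 = UTF-16.
            encoding = "utf-16" if body[0] == 1 else "latin-1"
            text = body[1:].decode(encoding, errors="replace").rstrip("\x00")
            frames[frame_id.decode("ascii")] = text
        pos += 10 + frame_size
    return frames
```

Feeding it the first few kilobytes of a beaTunes-analyzed MP3 would surface frames like `TBPM` (tempo) and `TCON` (genre), which could then populate the tutorial's search index directly.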