You could try re-compressing the MP3 file at lower and lower bitrates and checking the amplitude of the differences. Since MP3 is a lossy codec there will always be a slight difference, but you should see a sudden jump in the difference once you drop below the "true" encoding bitrate.
You could probably write a script for it using ffmpeg and some other tools to generate a bitrate-difference chart.
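A minimal sketch of that script in Python, assuming ffmpeg is on your PATH (the file names are placeholders, and a robust version would also compensate for the small delay/padding MP3 encoders introduce before differencing):

```python
import subprocess
import numpy as np

def decode_pcm(path):
    """Decode audio to mono 16-bit PCM at 44.1 kHz via ffmpeg."""
    raw = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path,
         "-f", "s16le", "-ac", "1", "-ar", "44100", "-"],
        capture_output=True, check=True).stdout
    return np.frombuffer(raw, dtype=np.int16).astype(np.float64)

original = decode_pcm("input.mp3")            # placeholder file name
for kbps in (320, 256, 192, 160, 128, 96, 64):
    # Re-encode at the candidate bitrate, then decode it back to PCM.
    subprocess.run(
        ["ffmpeg", "-v", "error", "-y", "-i", "input.mp3",
         "-b:a", f"{kbps}k", "tmp.mp3"], check=True)
    recoded = decode_pcm("tmp.mp3")
    # Crude comparison: RMS of the sample-wise difference. Note this
    # ignores encoder delay alignment, which a real script should handle.
    n = min(len(original), len(recoded))
    rms = np.sqrt(np.mean((original[:n] - recoded[:n]) ** 2))
    print(f"{kbps:4d} kbps: RMS difference = {rms:.1f}")
```

Plotting those RMS values against bitrate should show the knee at the file's true encoding bitrate.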
Objectivity aside, IMO the easiest way to tell by listening is to pay attention to high-frequency sounds, especially hi-hat cymbals. Unless lossy encoders have gotten remarkably better since I last tried, there’s always a marked loss of shimmer / reverb on those.
PEAQ (Perceptual Evaluation of Audio Quality) is an algorithm and scoring system that takes psychoacoustic modeling into account. When I looked into this more than ten years ago, I managed to find a command-line utility called PQevalAudio or something similar that I could just use to assign a score to a file.
It's a popular tool for verifying the quality of music downloaded via P2P platforms. Re-encoded YouTube rips are common there, especially for bootleg tracks, and this helps weed them out.
I haven't tried it, but you may want to look into PAM, which is relatively new, doesn't require a reference (you don't need the original uncompressed audio), and is open source.
However, all approaches are quite far from perfect. Human evaluation is still the gold standard.
Detecting re-encoding or double encoding has also seen some research, though mostly for audio-forensics purposes.
Conceptually, it should be possible to encode a sizable corpus of music with different codecs and bitrates/settings, then train an ML model to identify the "true" bitrate of new, unseen audio clips. A rough sketch of that idea follows below.
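Here is what the feature/classifier side could look like, assuming you have already built such a labeled corpus (the band count, feature choice, and RandomForestClassifier are illustrative assumptions, not a published recipe):

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def spectral_features(samples, sr=44100):
    """Log band energies of the power spectrum; lossy codecs leave
    characteristic gaps in the top bands."""
    freqs, psd = welch(samples, fs=sr, nperseg=4096)
    bands = np.array_split(psd, 32)        # 32 coarse frequency bands
    energy = np.array([b.mean() for b in bands])
    return np.log10(energy + 1e-12)        # log scale, avoid log(0)

# Loading the corpus is out of scope for this sketch:
# X = np.stack([spectral_features(clip) for clip in clips])
# y = np.array(labels)                     # true first-encode bitrates
# model = RandomForestClassifier().fit(X, y)
# print(model.predict(spectral_features(new_clip)[None, :]))
```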
Spectral analyzer. Not sure if you need a CLI or batch function, but the frequency content will be cut off regardless of the purported bitrate, even if the file was "upscaled", since those frequencies were chopped off in the earlier encode. Re-encode a 320 kbps file to 128 kbps and you can see the diminished frequency range on the 128 kbps version.
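A quick way to estimate that cutoff programmatically, as a rough Python sketch (the -60 dB threshold and file name are assumptions; decode the MP3 to WAV first, e.g. with ffmpeg):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

sr, samples = wavfile.read("decoded.wav")  # hypothetical decoded file
if samples.ndim > 1:
    samples = samples.mean(axis=1)         # mix down to mono
freqs, psd = welch(samples, fs=sr, nperseg=8192)

# Highest frequency whose energy is within 60 dB of the peak. A 128 kbps
# MP3 typically rolls off around 16 kHz; a true 320 kbps encode keeps
# content up close to 20 kHz.
db = 10 * np.log10(psd / psd.max() + 1e-12)
cutoff = freqs[db > -60][-1]
print(f"Estimated cutoff: {cutoff / 1000:.1f} kHz")
```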
I was sad to find that this project is not open source, but their research papers give some interesting clues about detecting bad-quality files.
For my part, I used it through a [Bash script](https://gist.github.com/madeindjs/d5e3949313b141f2e5eea62b98...) to detect bad files in my library. The tool produces a lot of false positives, though: it triggered on some hi-res audio files I bought on Qobuz.
You could use image-processing/DSP methods on a sample of spectrogram images taken from the file.
Visually, it’s obvious when audio has been compressed: you get “glitchy” or “smeary” repeated artefacts.
I’d also look for cuts on the high end (above ~15 kHz) that clip more than normal compared to uncompressed audio; see the sketch below.
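For example, a rough per-frame version of that high-end check (the 15 kHz split point and the file name are assumptions):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, samples = wavfile.read("decoded.wav")  # hypothetical decoded file
if samples.ndim > 1:
    samples = samples.mean(axis=1)         # mix down to mono
freqs, times, sxx = spectrogram(samples, fs=sr, nperseg=2048)

# Fraction of each frame's energy above 15 kHz. A fraction that stays
# near zero across the whole track suggests a low-bitrate encode
# somewhere in the chain.
high = sxx[freqs >= 15000].sum(axis=0)
total = sxx.sum(axis=0) + 1e-12
ratio = high / total
print(f"Mean high-band energy fraction: {ratio.mean():.4%}")
```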
Years ago, I did a study: I wrote a program to compare the original to the encoded version of a file, using high-resolution DVD-A rips to try to avoid artifacts introduced by downsampling the master to CD resolution.
The source code that I used for the above article is at: https://github.com/GWBasic/MeasureDegredation
How to objectively compare the sound quality of two files?
https://superuser.com/questions/693238/how-to-objectively-co...
SNR is a classic, and simple enough to give you an intuitive sense for the underlying signal processing.
Compare to known good.
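A minimal sketch of that SNR-against-known-good comparison, assuming you already have sample-aligned WAV decodes of the reference and the suspect file (the file names are placeholders):

```python
import numpy as np
from scipy.io import wavfile

_, reference = wavfile.read("reference.wav")  # known-good decode
_, suspect = wavfile.read("suspect.wav")      # file under test
n = min(len(reference), len(suspect))
ref = reference[:n].astype(np.float64)
noise = ref - suspect[:n].astype(np.float64)  # treat the difference as noise

# Classic SNR: signal power over noise power, in dB.
snr_db = 10 * np.log10(np.sum(ref ** 2) / (np.sum(noise ** 2) + 1e-12))
print(f"SNR: {snr_db:.1f} dB")  # higher means closer to the reference
```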