Asking which file sounds "better" is meaningless; some distortions sound good. The proper way to do this is an ABX test, where you compare both to a known original, and ask which sounds more like the original.
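If you want to see how simple the protocol really is, here's a minimal Python sketch of an ABX session (the play/ask hooks and file labels are placeholders, not any particular tool's API -- you'd wire in a real audio player):

    import random

    def abx_session(original, encoded, trials=16, play=print, ask=input):
        """Naive ABX loop: A and B are known, X is secretly one of them.

        `play` and `ask` are placeholder hooks; swap in a real player and
        UI. Returns the number of correct identifications.
        """
        correct = 0
        for _ in range(trials):
            x_is_a = random.choice([True, False])
            x = original if x_is_a else encoded
            play(f"A: {original}")  # known reference
            play(f"B: {encoded}")   # known encode
            play(f"X: {x}")         # unknown, to be identified
            answer = ask("Is X the same as A or B? ").strip().upper()
            if (answer == "A") == x_is_a:
                correct += 1
        return correct

The key point is that X is randomized each trial, so "sounds better" never enters into it -- only "sounds like the original".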
Agreed. This is why a lot of audiophiles will swear by vinyl. It's not that they reproduce the sound more accurately, it's that they introduce an artifact that they lovingly refer to as "warmth".
The entire exercise is predicated on the notion that if a person consciously listens to an audio recording for a while, they will be able to provide a completely accurate assessment of the quality of the recording. To some degree that is true, but I dispute the idea that it is completely true.
Yes, there are crazy audiophiles who will tell you that special crystal bullshit gives them a better sound; nevertheless, there is an element of truth in a degree of audio fidelity beyond what is immediately and consciously apparent upon a single casual listen. Sometimes it can take listening to a given recording a few times before you start to appreciate all of the nuances. Maybe you don't notice the little things like the squeak of the bass drum pedal the first few times you hear it, or the richness of the intonation of the clarinet. Or maybe you don't immediately notice that the cymbal crashes are all fuzzed out to hell and back by compression artifacts because you weren't paying close enough attention the first time you listened, but on the 10th listen it starts to become a real annoyance and you start to think seriously about acquiring a better recording or encoding.
Here's a perfect example. Pink has a very popular song called Raise Your Glass; you can listen to it on youtube here: http://www.youtube.com/watch?v=XjVNlG5cZyQ. In that song there are many prominent and annoying high-pitched beeps (right in the middle of the chorus, even), but I find that most people do not even know they are there; they do not consciously hear them because their brain just filters them out. But once you pick up on them, they become obvious and you can't help but hear them. The same is true of things like encoding distortion. If you set the bar at exactly the boundary where people can consciously perceive a difference in quality on a single listen, you will have set the bar too low. People will have a slightly lower appreciation for the recordings due to factors they may not even be consciously aware of.
I'm with you on this. But I think that in pop music the vocals have such high prominence that you kind of miss the song if you try to be aware of the "whole song". Especially if English is not your native language. So the brain keeps filtering while you sing along.
In the case of Raise Your Glass, are you referring to the synth sound that "bleeds" through from the background during the chorus?
> Asking which file sounds "better" is meaningless
Indeed, the phrasing of the question poses a problem. The real question should rather be something like "which is the most faithful?" And without the source to compare to, that reduces the comparison to whatever the tester has experienced previously.
Yet I answered both confidently and correctly, without very much special audio equipment (Dell XPS 8300 stock sound card, and a cheap Sennheiser HD212 Pro).
What made me confident is that I know what to look for†. If I did not, I would have had to make a "what sounds best" judgement.
(spoiler alert if you want to take the test yourself)
† MP3 has a hard time encoding some corner-case specific stuff. Here the rattling at 0:06 produces an artifact that is a dead giveaway. Cymbals at 0:11 are more subtle but noticeable.
The sound was generally just richer. But yeah, the background noise was incredibly noticeable. I use SE530s with an Audioengine D1 at work, since using an integrated sound card always bleeds motherboard noise into the audio :/
I read a study a few years back, though, about how MP3 encoding artifacts and background noise are so ingrained in culture today that people actually prefer the MP3 encoding artifacts over not having them, which seems to be supported by this poll. The incorrect choice has a non-trivial lead.
I can't disclose full details, but one of the engineers who made a particular widely used audio decoder said that they left in a bug whose symptom was a slight high-frequency noise.
It was widely reported to sound "better" than other "by the book" implementations.
http://wiki.hydrogenaudio.org/index.php?title=ABX - sadly, of all the links on that page only the Gnome ABX one is still live. But you can find/ask for more recent material in their forums: many of the people who work on lossy audio codecs hang out there.
This is meaningless.
Statistically meaningful tests are performed using ABX testing methods (http://en.wikipedia.org/wiki/ABX_test) with high-quality equipment. And, yes, many people have trained ears (and brains?) that can easily distinguish the artifacts made by MP3 compression even at 320kbps.
See this (http://listening-tests.hydrogenaudio.org/sebastian/mp3-128-1...) as an example of an ABX test.
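For what it's worth, the "statistically meaningful" part boils down to a one-sided binomial test: how likely is a given score if the listener were just guessing? A stdlib-only sketch (the 14-of-16 score is a made-up example):

    from math import comb

    def abx_p_value(correct, trials):
        """Probability of scoring at least `correct` out of `trials`
        by pure guessing (p = 0.5 per trial)."""
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    # e.g. 14 correct out of 16 trials:
    print(abx_p_value(14, 16))  # ~0.002 -- very unlikely to be guessing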
As for me, with my current medium-quality headphones, I can't distinguish between the original and a 128kbps MP3.
To my ears neither sounded like a good quality recording and I think that's what made it so hard to tell the difference. Even with cheap speakers, it's easy to spot compression artifacts in poorly encoded audio but in these samples the drum tracks on both sound "mashed". That might be an instrumental trait but it sounds more like a production or postprocessing error to me.
Again, not an ABX test. No wonder the first file did best - it sounded most like the first file. You can't do listening tests without a labelled original to compare to.
I didn't think I was an audiophile, until I noticed a crispness missing from some of my jazz. I encode at 320kbps out of paranoia now, but what's an extra few megabytes per album?
FYI if anyone is using Google Music, turn on HTML5 audio (via labs). I swear there is a perceptible improvement.
It's fairly easy to tell the difference between 192kbps CBR LAME and an original CD. 192kbps VBR is a bit hit or miss. High-end gear can absolutely make the imperfections more noticeable.
Personally I just got fed up with the goalposts being moved around (the LAME codec constantly being updated, the mp3 scene switching from 3.97 -V2 to 3.98 -V0, etc.) and now I just keep my whole music collection in lossless.
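For anyone curious, those scene presets are just LAME's VBR quality switches. A rough sketch of batch-encoding with them from Python, assuming the lame binary is on your PATH and input.wav is a placeholder file:

    import subprocess

    # -V0 is ~245kbps VBR, -V2 is ~190kbps VBR; lower -V numbers mean higher quality.
    for preset in ("-V0", "-V2"):
        subprocess.run(
            ["lame", preset, "input.wav", f"output{preset}.mp3"],
            check=True,
        )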
Basic DSP knowledge tells us that a digital audio copy is exactly the same quality as the original.
Also, testing the difference between 128kbps and 320kbps MP3s over and over again is like drinking warm beer all the time to make sure it's really worse than the cold one.
I suppose if you use really crappy media and an even worse CD recorder, you will get significant signal degradation because of missing bits, so error correction kicks in.
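The "exactly the same" part is easy to verify mechanically, since a clean digital copy is bit-identical. A stdlib sketch (the file names are placeholders):

    import hashlib

    def file_sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # A lossless rip of a copy should hash identically to a rip of the original.
    print(file_sha256("rip_original.wav") == file_sha256("rip_copy.wav"))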
Yup, tested this before. Also with quite good hardware (sound card, quality circumaural headphones): between 128kbps and 192kbps you're already not sure whether there is a difference or not, and anything above 192 is really wasted effort.
128kbps sounds so poor that I get a headache from listening to it. There is a tinniness and a high-pitched squeal that is like a drill through my head. Maybe it's like using a 14" monitor: if you're used to hearing low bitrates, it may be hard to notice better quality when you hear it occasionally.
Bad choice for the sample; I still picked #1, however. Choose a song with bass and mids, and the difference between 128kbps and 320kbps will become painfully obvious.
192kbps VBR with a good encoder. There have been countless tests, and this is indistinguishable to the vast majority of listeners (teens might have a slight edge, as the ability to hear high frequencies diminishes with age).
I don't have time to dig up citations, but check out the Hydrogen Audio forums and other sane, non-cranky-audiophile, evidence-based sources.
I would say that 192kbps (2-pass VBR) is the minimum bitrate at which one should encode music. I can pick out anything lower than this without trying. I do struggle to hear a difference above 192kbps, except occasionally at the high end.
In reality, disk space is so cheap that I encode in FLAC. I'd rather keep myself future-proof. Let's just say that my entire CD collection is smaller than two Blu-ray rips at 1080p.
A 10-second sample misses the point of HQ vs. LQ: only a very few sections of most albums will sound better at HQ. The vast, vast majority of songs will sound the same. The point isn't that HQ is a little better most of the time; it's that some of the time it is much better.
LTJ Bukem - Watercolours comes to mind as a great example of the phenomenon.
Very interesting. This then set me off reading about ABX testing for longer than I wish to admit. For those looking to try some ABX testing between 320kbps and 128kbps mp3 files, this is the best in-browser one I've found so far: http://mp3ornot.com/
If you have quality headphones, I don't think it's very hard. On earbuds or laptop speakers, almost impossible. Put it on a real stereo and you will also be able to tell.
Perhaps the fact that music is now being produced and optimized to sound good on tinny speakers is altering what we perceive to be a "good sound."
Laptop speakers: Nope.
High Quality Headphones: Maybe.
Loud sound system in a crowded club: Definitely.
Also, just because one track sounds acceptable at 192 doesn't mean another will. The only way to make sure tracks have the same 'opportunity' to sound good is to throw away less information.
I picked the higher one correctly, though the difference is barely audible on my laptop (MacBook Pro, aka a worthless DAC, but quite OK Sennheiser headphones).
Personally I like lossless for practical reasons (e.g. conversions without quality loss).
> Personally I like lossless for practical reasons (e.g. conversions without quality loss).
That was the clincher for me. At first my music collection was a mish-mash of different bitrates. I converted the whole lot to the LAME V2 preset (~192kbps VBR) a la the mp3 scene standard, and no sooner had I done that than the standard changed to V0 (~245kbps).
Screw it, I figured. Storage is cheap. Might as well go lossless.
At work, with a crappy Microsoft LifeChat LX-3000, there's a barely noticeable difference. I can't tell which is which anyway. I need to try at home with a proper setup.
I would argue this song is terrible for this test. I usually can instantly tell the difference between 320 and 128 kbps, but I wasn't able to with this test.
Obviously for the author's signal source, the original CD sound tracks (sampled at 44.1 kHz * ) would limit the importance of a change from 128 to 320 kbps.
The depth of the sample doesn't get around the Nyquist-Shannon sampling limitation. This means resampling a 44.1 kHz * recording at a higher bit rate is pointless.
To be specific, a 44.1 kHz sampling rate provides a bandwidth of 22.05 kHz, period, full stop.
A quote: "The full range of human hearing is between 20 Hz and 20 kHz.[3] The minimum sampling rate that satisfies the sampling theorem for this full bandwidth is 40 kHz. The 44.1 kHz sampling rate used for Compact Disc was chosen for this and other technical reasons."
- The sampling rate, measured in kHz, for CD tracks is 44.1 kHz (not kbps), which determines the frequency range. Nyquist-Shannon applies to this.
- The bit-depth, number of bits/sample, determines the SNR of the signal, but has no effect on the frequency range. A typical audio CD has a bit depth of 16.
- The bit-rate, measured in kbps, is (sampling_rate * bit_depth * channel_count). So for an audio CD, this would be 44100 * 16 * 2 = 1411200 bps = 1411.2 kbps.
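Putting that arithmetic in Python, along with the fraction of the CD bitrate that the MP3 bitrates discussed in this thread retain:

    sampling_rate = 44100  # samples per second
    bit_depth = 16         # bits per sample
    channels = 2           # stereo

    cd_bps = sampling_rate * bit_depth * channels
    print(cd_bps)          # 1411200 bps, i.e. 1411.2 kbps

    # How much of that an MP3 keeps, by bitrate:
    for kbps in (128, 192, 320):
        print(f"{kbps} kbps is about 1/{cd_bps / 1000 / kbps:.1f} of the CD bitrate")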
> The sampling rate, measured in kHz, for CD tracks is 44.1 kHz (not kbps) ...
Yes, corrected, thank you. The original point stands -- the higher resampling rates are increasingly pointless.
> The bit-depth, number of bits/sample, determines the SNR of the signal ...
Yes, and it is thought to be the source of the much-remarked, subtle difference between vinyl and CD sound. But no one knows for sure.
> The bit-rate, measured in kbps, is (sampling_rate * bit_depth * channel_count). So for an audio CD, this would be 44100 * 16 * 2 = 1411200 bps = 1411.2 kbps.
True. Unfortunately, these numbers may motivate people to argue for higher and higher MP3 sampling rates, ignoring the fact that the original temporal sampling rate severely limits the usefulness of these efforts.
>The original point stands -- the higher resampling rates are increasingly pointless.
If we were talking about sampling rates it would. But we're not, we're talking about bitrates.
Any MP3 of a CD, whether 128kbps, 192kbps, 320kbps or something else, will still have a sample rate of 44.1kHz. Resampling at 128kHz would indeed be stupid and pointless, and would also result in a much larger file than the original CD. That is not what mp3 does; it takes your 16bits@44.1kHz CD bitstream, and gives you a compressed bitstream that will, when decompressed, give you another 16bits@44.1kHz bitstream which sounds similar. The bitrate measures the size of this compressed bitstream, and therefore gives an indication of how much information (in the technical sense) you must have discarded to form it. But it's entirely unrelated to the sampling rate - you can't even say bitrate = sampling rate * bit depth when we're talking about a compressed bitstream.
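This is easy to verify yourself: encode the same source at several bitrates and ask ffprobe for the stream's sample rate; it stays 44100 regardless. A sketch assuming ffmpeg/ffprobe are installed and source.wav is a placeholder CD rip:

    import subprocess

    SRC = "source.wav"  # placeholder: a 16-bit/44.1kHz CD rip

    for kbps in (128, 192, 320):
        out = f"test_{kbps}.mp3"
        subprocess.run(["ffmpeg", "-y", "-i", SRC, "-b:a", f"{kbps}k", out], check=True)
        rate = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "a:0",
             "-show_entries", "stream=sample_rate",
             "-of", "default=noprint_wrappers=1:nokey=1", out],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(out, "->", rate, "Hz")  # 44100 every time, regardless of bitrate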
> The original point stands -- the higher resampling rates are increasingly pointless.
But the mp3 tracks are not resampled. They use the same 44.1kHz sampling rate; the bitrate only describes how heavily the stream is compressed. So, for instance, a 320kbps mp3 file still has the original sampling rate of 44.1kHz.
You're right. That's what I ended up focusing on after a few listens, and I chose the correct answer, which could probably be attributed to being an ex-pro drummer. Most people were probably focusing on the vocals, not at all attuned to what quality percussion sounds like, which is why more people chose the wrong answer. One might even go so far as to say that more people think vocals sound better at 128 than at 320.