Still, stuff the science. We wanted to see whether the choice of lossless or lossy audio format made a difference when tracks were listened to by reasonably ordinary subjects, including members of the TR team. We won't pretend that we took the most scientifically rigorous approach or brought out an armoury of test equipment to check and compare waveforms. Instead, we ripped four tracks from CD to FLAC using dBpoweramp CD Ripper, then used the freeware WavePad editor to create thirty-second excerpts from those files for testing purposes. We then used the dBpoweramp converter to make two MP3 encodes of each excerpt, one at a constant bit rate (CBR) of 192kbps and one at 320kbps, using the LAME encoder, widely regarded as the best for high-bitrate MP3. 192kbps is generally considered the minimum bit rate for decent-quality MP3 audio, while 320kbps is the top-end standard for most MP3 players and the one adopted by online music stores such as 7digital and Play.com. We wanted to see whether our guinea pigs could spot the difference between these files and the original FLACs.
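If you want to reproduce the excerpt-and-encode step yourself without dBpoweramp's GUI, the same workflow can be sketched from the command line. This is a minimal illustration, not what we ran on the day: the file names are hypothetical, and it assumes the stock `ffmpeg` and `lame` tools are installed.

```python
import os
import shutil
import subprocess

def excerpt_cmd(flac_in, wav_out, start=0, seconds=30):
    """Build an ffmpeg command that trims a thirty-second excerpt to WAV."""
    return ["ffmpeg", "-i", flac_in, "-ss", str(start), "-t", str(seconds), wav_out]

def lame_cbr_cmd(wav_in, mp3_out, kbps):
    """Build a LAME command for a constant-bit-rate MP3 encode."""
    return ["lame", "--cbr", "-b", str(kbps), wav_in, mp3_out]

# Hypothetical file names, one track as an example.
commands = [
    excerpt_cmd("there_there.flac", "there_there.wav"),
    lame_cbr_cmd("there_there.wav", "there_there_192.mp3", 192),
    lame_cbr_cmd("there_there.wav", "there_there_320.mp3", 320),
]

# Only run the commands if the tools and the source file actually exist.
if shutil.which("ffmpeg") and shutil.which("lame") and os.path.exists("there_there.flac"):
    for cmd in commands:
        subprocess.run(cmd, check=True)
```

LAME's `--cbr -b N` flags force a constant bit rate, matching the CBR encodes described above; left to its defaults, LAME would pick variable-bit-rate settings instead.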
Our test tracks went onto an Asus notebook. The kind chaps at HiFi Headphones had provided us with an iBasso D3 Python USB DAC and headphone amplifier - similar to the iBasso D2 we reviewed earlier in the year, but with enhanced sound quality and a little more oomph in the output stages - and we used this to provide the audio output. Into the D3 we plugged a pair of Beyerdynamic DT770 Pro headphones, ensuring that our test subjects would get excellent (though not ridiculously high-end) audio without being bothered by background noise - not that there was much in the TR office where we ran the tests.
Each test subject heard each track in two versions. Massive Attack's Small Time Shot Away and Radiohead's There There were heard in both 192kbps MP3 and FLAC formats, while Maxwell's Ascension and Yumeji's Theme from the In the Mood for Love soundtrack were heard in 320kbps MP3 and FLAC. In each case, the subject simply had to say which version sounded best. We gave them two listens to each version, with the option to listen again if they wanted. We then jotted down their verdicts, along with any comments on why they had judged as they did.
No test subject could see the screen during testing, so all tests were conducted blind. We took every possible step to ensure that the subjects never knew which version of a track they were listening to at any time.
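The bookkeeping for a blind comparison like this can be sketched in a few lines. This is a hypothetical illustration of the general technique rather than the exact procedure we followed; the responses below are made up.

```python
import random

def run_trial(track, version_a, version_b, rng):
    """Present the two versions of a track in a random order, so the
    listener cannot infer the format from the playback sequence."""
    order = [version_a, version_b]
    rng.shuffle(order)
    return order

def tally(preferences):
    """Count how often each format was judged to sound best."""
    counts = {}
    for fmt in preferences:
        counts[fmt] = counts.get(fmt, 0) + 1
    return counts

rng = random.Random()
playback_order = run_trial("There There", "FLAC", "MP3 192", rng)

# Hypothetical verdicts from four subjects on one track.
responses = ["FLAC", "MP3 192", "FLAC", "FLAC"]
print(tally(responses))  # {'FLAC': 3, 'MP3 192': 1}
```

Randomising the playback order per trial matters as much as hiding the screen: if the lossless version always played first, listeners could learn the pattern rather than hear the difference.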