Computer Audio Misconceptions

UPDATED: 11.5.17


Introduction:

If you’ve been involved in computer audio for any amount of time, you’ve likely heard all sorts of conflicting opinions. Over the years, I’ve heard audiophiles make statements like these:

  • “Computers use error correction, so upgrading the power supply makes no difference.”
  • “If the USB input on the DAC is asynchronous, all the jitter and bit read errors are removed.”
  • “If the output from the computer is reclocked, all the jitter and bit read errors are removed.”
  • “If the player software buffers to RAM, everything output from the computer is bit perfect.”
  • “Batteries have pure DC power, which makes them optimal to power any component.”
  • “Super Cap power supplies are off the AC power grid and have the lowest possible noise.”
  • “The more you upsample, the higher the resolution and the better your music will sound.”
  • “The faster your CPU, the higher the resolution and the better your music will sound.”

Though there is some basis in fact for all of the above statements, they are all somewhat incomplete and/or conditional. I wrote this blog to clarify these common misconceptions. Please note that in order to make the information in this blog more accessible for the layman, I’ve simplified and generalized somewhat.


What does "bit perfect" mean?

Before going any further, I need to define the term "bit perfect" as it is used in this blog so as not to confuse my readers. "Bit perfect" is a technical term describing any form of digital communication that involves a series of checks and error correction (e.g., checksums), ensuring the data that arrives at the receiver is identical to the data that was transmitted from the source. This is what allows you to download a file from a server halfway around the world and know that it will arrive at your computer identical in every way to the original.
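
For the technically inclined, here is a minimal Python sketch of that idea (the packet size, error rate, and function names are purely illustrative, not any specific protocol): every packet travels with a checksum, and the receiver keeps requesting it until the checksum verifies, so the file always arrives bit perfect no matter how noisy the link.

    import random
    import zlib

    # A toy "noisy link" that occasionally flips one bit in a packet.
    def noisy_channel(packet: bytes, error_rate: float = 0.05) -> bytes:
        data = bytearray(packet)
        if data and random.random() < error_rate:
            data[random.randrange(len(data))] ^= 0x01
        return bytes(data)

    # Bit perfect transfer: each packet is re-sent until its checksum verifies.
    def transfer_file(source: bytes, packet_size: int = 64) -> bytes:
        received = bytearray()
        for i in range(0, len(source), packet_size):
            packet = source[i:i + packet_size]
            checksum = zlib.crc32(packet)            # transmitter sends packet + checksum
            while True:                              # keep requesting until the check passes
                candidate = noisy_channel(packet)    # packet crosses the noisy link
                if zlib.crc32(candidate) == checksum:
                    received.extend(candidate)       # verified: identical to the original
                    break
        return bytes(received)

    if __name__ == "__main__":
        original = bytes(random.getrandbits(8) for _ in range(4096))
        assert transfer_file(original) == original   # always arrives bit perfect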

Of course, unlike most digital data transfer, music is played in real time, so even if you are using digital communication devices (e.g., streamers, modems, and routers) that can potentially correct corrupted data, there is often no time to do this, and therefore the corrupted data is passed on to the next component.

When the term "bit perfect" is used in regards to player software, it can be somewhat misleading, since it implies that what is output from the computer has not been altered in any way from the original music data file. This is not the case. All bit perfect means in regards to player software is that the player software doesn't intentionally alter the music data files before decoding and/or streaming them.

If bit perfect player software did in fact assure that the music data leaving the computer had no bit errors, then all so-called bit perfect players would sound identical, and this is certainly not the case. It would be more accurate to say that a specific music player can be operated in "bit perfect mode," in which no algorithms are purposely used to alter the original data file.

This is a perfect example of why I sincerely recommend you view the claims of companies that sell music player software (and anything else in the audiophile industry) as "marketing language" as opposed to quantifiable facts.


How can a low-noise power supply improve computer performance?

All computer communication works on a system of checks and error correction (checksums). If a packet of data doesn't pass the check, a new packet of data is sent to replace the original. The lower the power supply noise, the less data corruption occurs, the less corrupted data there is to correct, and the more system resources remain free.

When you free up system resources with a cleaner power supply, a computer will perform as if it has a faster processor, a faster storage drive, and more RAM. When a low-noise power supply is used with a computer-based music server or streamer, the result is more liquid and articulate sound, combined with greater depth, detail, and dynamics.

Also note that switch-mode power supplies radiate significant amounts of noise, corrupting the signal in any cable or component in physical proximity and polluting the AC ground for any component plugged into the same AC circuit.


Aren’t RAM-buffered music players bit perfect?

Yes, the data buffered in the RAM is bit perfect, but RAM is not the final link in the audio chain. Data corruption can still occur between the RAM and the output buffer, and between the output buffer and the digital-to-analog converter (DAC). Unlike most computer communication, the music data that leaves a computer through USB, Firewire, and optical ports is most often unidirectional (out only), not extensively buffered, and not error corrected at the DAC.

In addition, the system resources required to error correct corrupted data being buffered in the RAM significantly slow computer performance, resulting in a more awkward and less liquid presentation. So wouldn't it make more sense to minimize the error correction required during RAM buffering by using a low-noise power supply?


Doesn’t reclocking the USB data from the computer eliminate jitter and bit read errors?

Reclocking and buffering music data from USB can remove jitter and will improve performance by removing noise, cleaning up the square wave of the digital signal, and buffering enough clock cycles to allow all of the corrected packets of data to be read within the appropriate clock cycle. This process minimizes bit read errors in subsequent stages, but it doesn't correct existing corrupted data. Unless the reclocking and buffering device has a bidirectional protocol that incorporates error correction, any corrupted bits will continue through to the next stage.
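
As a rough illustration, here is a simplified Python sketch of what such a device does (the class name and FIFO depth are hypothetical, not any particular product): irregular arrival timing is absorbed by the buffer and samples are clocked out at fixed intervals, but the sample values themselves are passed along unchecked.

    from collections import deque

    class Reclocker:
        """Toy model of a reclocking buffer: it absorbs arrival-time jitter,
        but never inspects or corrects the sample values passing through it."""

        def __init__(self, depth: int = 1024):
            self.fifo = deque(maxlen=depth)

        def write(self, sample: int) -> None:
            # Input side: samples arrive whenever the (jittery) USB source delivers
            # them. Their irregular arrival times are simply absorbed by the FIFO.
            self.fifo.append(sample)

        def read(self):
            # Output side: called by a fixed, clean local clock (e.g. every 1/44100 s).
            # The value is passed on as-is; a corrupted bit stays corrupted.
            return self.fifo.popleft() if self.fifo else None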


Aren’t streaming devices bit perfect?

Streaming devices receive data from a computer on a local area network, either through Ethernet cable or wireless transfer. This means they use the same checking and error-correcting protocols as other Ethernet and Wi-Fi devices, which ensures that uncorrupted bit perfect data is received. Of course, in normal computer communication, error correction is completed before the data packet is released. But in music transfer the data stream is clocked to the speed of the music, which means that if a corrected packet of data is not received in time, the device will often pass on corrupted data.
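
The simplified Python sketch below illustrates that trade-off (the function name and packet timing are purely illustrative, not any real streaming protocol): a corrected packet is only used if it arrives before the playout deadline; otherwise the corrupted packet is handed on so the music keeps playing.

    import time

    def stream_to_dac(packets, packet_duration_s: float = 0.010):
        """Toy real-time playout loop. Each element of `packets` is a tuple
        (payload, corrected_payload_or_None, correction_delay_s): a corrected
        copy is used only if it arrives before the packet's playout deadline."""
        start = time.monotonic()
        for index, (payload, corrected, correction_delay_s) in enumerate(packets):
            deadline = start + index * packet_duration_s
            if corrected is not None and start + correction_delay_s <= deadline:
                payload = corrected      # the corrected packet arrived in time
            # If it did not, the music clock wins: the corrupted packet is passed
            # on rather than pausing playback to wait for a retransmission.
            time.sleep(max(0.0, deadline - time.monotonic()))
            yield payload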

Of course, the problem with most audiophile music streamers and Network Audio Adapters (NAA) is the cheap wall-wart-type switch-mode power supplies (SMPS) they come with. These dirty SMPS cause significantly more corrupted data in the output stage of the streamer, even if the input to the streamer was error-corrected. This is why replacing a wall-wart SMPS with a battery or linear power supply will significantly improve the performance of audiophile music streamers.


Don’t asynchronous USB inputs remove all jitter, resulting in bit perfect sound?

Asynchronous communication is defined as transmission of data without the use of an external clock signal. This allows data to be transmitted intermittently rather than in a steady stream. This also allows for variable bit rates and eliminates the need for the transmitter and receiver to have their clock generators synchronized.

Asynchronous digital communication is nothing new. It has been used for decades in obsolete protocols, such as RS-232C. Adopting an asynchronous protocol for audiophile USB inputs is far from what would be considered cutting edge technology.

As stated earlier, most computer communication is bidirectional and works with a system of checks and error correction. When the source sends a packet of data, the destination checks the packet and requests that corrupted packets be resent. Since the asynchronous USB protocols used for most audio data are unidirectional, when an error occurs, no error correction is possible.

The combination of asynchronous clocking and data buffering can remove jitter caused by packets of data arriving at irregular intervals, but it can't correct corrupted data. Though asynchronous USB results in more liquid, more resolving, and more musical sound, if it isn't bidirectional, it has no error correction and cannot assure uncorrupted bit perfect data.


Don’t batteries have the purest DC power?

Though better than the inexpensive switch-mode power supplies that come with many audio, video, and computer products, battery performance can’t compare to the performance of an ultra-low noise linear power supply.

Batteries use a chemical reaction to generate DC power, and each chemical reaction from each type of battery has its own audible noise signature. This is why a specific type of battery, such as LiFePO4, sounds better than another type, such as SLA. The noise level of a battery also changes significantly during different phases of the discharge and recharge cycle, making batteries an inconsistent-sounding power source as well. And then there's the additional expense of replacing batteries every few years.

Batteries also have a much slower dynamic response than linear power supplies, so they don't respond as quickly to changes in current requirements. This slower dynamic response makes music sound slower, less dynamic, and less articulate when compared to a linear power supply.

If battery power has the lowest noise, then why do the military, aerospace, and telecommunications industries only use batteries for portable devices and uninterruptible power supplies (UPS)?


What about "Super Cap" power supplies?

Super Caps were engineered to provide a very high capacitance in a small package. That is what makes them "super." They were designed to keep CMOS memory in computers alive during power brownouts. They were never intended for use as a permanent battery bank in audio equipment.

As Super Caps discharge, their output voltage changes significantly. This means that when the charging controller switches between the discharged and charged banks, the fully charged bank of Super Caps will have a significantly higher voltage than the discharged bank. This results in a "saw-tooth" pattern with sharp peaks and valleys.

In contrast, a linear power supply has constant voltage with relatively subtle ripples (noise). It requires far less additional filtering to remove the subtle ripple in a linear power supply than to remove the deep saw-tooth pattern coming from a dual-bank Super Cap power supply.

Also, Super Caps have poor durability. Their expected life is roughly three years. Now you know why the best Super Cap power supplies are never warrantied for more than three years. In contrast, a properly engineered linear power supply will last for decades.

And in the end, what you are really listening to is the final regulator that fixes the output voltage and polishes any remaining ripple from the DC power. Most Super Cap power supplies use inexpensive low-noise IC regulators that have only modest levels of performance. Mojo Audio uses Belleson ultralow-noise, ultrahigh-dynamic regulator modules, the finest regulators in the audiophile industry. Not only is their noise remarkably low, their dynamic response is <10µs from zero to full current output, assuring incredibly clean and stable DC power regardless of ever-changing current requirements.

But what sets our Illuminati series of power supplies apart from nearly every ultra-low noise power supply in the audiophile industry is our input choke filtering. By adding a choke between the rectifier and the first capacitor of a power supply, the crest factor, heat, and parts wear are reduced by literally 50%. The choke also acts as a reservoir for power and pre-regulates the DC, doubling the efficiency and effectiveness of each consecutive stage of filtering. Choke-input power supplies have been the gold standard for roughly 90 years. Their only disadvantages are higher cost, larger size, and additional weight. No Super Cap power supply uses choke input.

Super Cap audiophile power supplies are just a gimmick. If Super Caps were actually a good way to provide low-noise DC power, then why are they not used this way by the military, aerospace, and telecommunications industries?


Doesn't upsampling increase resolution?

There is no way to increase the resolution of a music file through upsampling. The purpose of upsampling is twofold: to provide multiple copies of each data point in order to statistically improve performance with fewer bit errors, and to push quantization noise octaves above the audible music range, allowing for more sophisticated digital filters at the output stage of a DAC.
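
For a rough demonstration of the first sentence, the short Python sketch below (using a generic FFT-based resampler from NumPy/SciPy, not any particular DAC's filter) upsamples a 1kHz tone by 8x and then recovers the original samples exactly. Nothing new was added; the same information was merely re-expressed on a denser time grid.

    import numpy as np
    from scipy.signal import resample

    fs = 44_100                                   # original sample rate
    t = np.arange(fs) / fs                        # one second of samples
    original = np.sin(2 * np.pi * 1_000 * t)      # a 1 kHz tone

    # 8x upsample (44.1 kHz -> 352.8 kHz) with a generic FFT-based resampler.
    upsampled = resample(original, 8 * len(original))

    # Every 8th output sample is (numerically) the original sample: the new samples
    # in between were interpolated from data that was already there, so nothing
    # below the original Nyquist frequency was added -- no new resolution.
    assert np.allclose(upsampled[::8], original, atol=1e-6)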

The statistical concept of upsampling improves the performance of Delta Sigma DACs that employ error-correction algorithms by giving them more identical data points to read. This allows more bit errors to be removed because there are more data points to average. The problem is that error-correction algorithms interpolate rather than decode the music, resulting in a very clean-sounding, but more smoothed-over, less articulate, and less harmonically coherent sound.

Wouldn't it make more sense to prevent errors with lower noise power supplies as opposed to attempting to correct them using interpolation algorithms?

Upsampling to move quantization noise octaves above the audible frequencies is a solid concept. Of course, this is most effective in single-bit Delta Sigma DACs that decode native DSD, as opposed to R-2R ladder DACs that decode native PCM. Whereas PCM has quantization noise around the sampling frequency, which is at least a full octave above the audible frequencies, DSD64 SACD or PCM converted to DSD in Delta Sigma DACs has most of its quantization noise around 25kHz, right above the audible range.

This is why converting DSD64 SACD files to Double-Rate DSD (aka DSD128) or Quad-Rate DSD (aka DSD256) makes a huge difference in single-bit Delta Sigma DAC performance. For more information on this topic, please refer to our blog on DSD vs PCM: Myth vs Truth.


Don't faster, multi-core, more advanced CPUs improve performance?

The faster the CPU, the more power it consumes, the more noise in the system, and the more potential bit errors. Having any more CPU speed than is required to run the processes a music server is running degrades performance.

Most audiophile player software uses only one core in a CPU. Having multiple cores in the CPU that are unused also causes additional noise and degrades performance. So a simple high-efficiency dual-core CPU is often your best alternative.

And having an "i7" or some other more advanced type of CPU is of no advantage to the audiophile. Note that the "i" in i7 refers to an "instruction set" that is not used by any audiophile player software. The only advantage of these more advanced CPUs in a music server would be having more cache. Using a dual-core high-efficiency CPU with 3MB of cache is more than enough for most audiophile player software.

The only exception to this would be some advanced player software that takes advantage of higher-speed CPUs and multiple cores, such as HQ Player. Software such as HQ Player can be put into extremely high upsampling modes for both PCM and DSD. Though there are some DACs that are optimized for this extremely dense data stream, most are not. And when audiophiles state that "it sounds better with more upsampling," what they are really saying is that their high-performance CPU music server sounds better with additional upsampling. The fact is that they are adding additional noise with their super-fast CPU and then removing some of that noise using error-correction algorithms. Does that make any sense?

The higher the efficiency of the CPU, the lower the noise, and the more liquid, harmonic, and musical the server sounds. Also, a simple motherboard that consumes minimal power, such as one with only VGA video, low-res HDMI video, and/or no 7.1 channel audio, will have lower noise and better performance. This is why Mojo Audio uses a 6W stripped-down industrial motherboard in our award-winning Deja Vu music server. Error prevention is always better than error correction.


What is the best way to set up a computer-based music server?

The most significant way to circumvent the shortcomings of audiophile optimized computer-based servers, streamers, and NAAs is to minimize power supply noise. When power supply noise is minimized, the result is cleaner and more defined “square waves” in the digital signal, which translates to fewer bit read errors, less error correction, and less jitter.

Also, the more of these upgrades you make, the higher performance your music server will have:

  • Use ultra-high efficiency motherboards and CPUs with a minimum of features.
  • Isolate all boards, modules, storage drives, and PCIe cards with independent power supplies.
  • Upgrade RAM with a minimum of 8GB of high-performance low-latency RAM (CAS 9 or better).
  • Use a low-capacity (64GB) SLC mSATA SSD card or 2.5" SLC SSD for your operating system and player software.
  • Use a large capacity internal MLC SSD or external NAS/RAID array with MLC SSDs for your music library.
  • Independently power any external drives, switches, routers, or converters with an ultralow-noise linear power supply.
  • Use high-performance shielded data cables both inside and outside the chassis (USB, Ethernet, SATA, etc).
  • Isolate input music data, output music data, and software commands on independent data buses.
  • Turn off unused wireless control interfaces such as infrared, Bluetooth, and WiFi.
  • Interface WiFi through the Ethernet port using an external wireless router and turn off internal wireless hardware.
  • Only use a monitor, keyboard, and mouse for setup - control the server “headless” with a mobile device.
  • Use anti-resonant products under your computer, power supply, converters, and storage drives.
  • Add anti-resonant sheeting inside your computer, converter, and storage drive chassis.
  • Add EMI/RFI shielding materials on all ICs, around all cables, and inside all major chassis panels.
  • Optimize your Windows or OS X operating system to improve audio performance.

For music libraries smaller than 4TB we recommend an internal SSD for everything. For larger music libraries we recommend an external NAS/RAID array with all SSD drives powered by an ultralow-noise power supply.

Our Illuminati v2 power supply is optimized for powering a NAS/RAID array with up to four HDDs or any number of SSDs.

If you can't afford SSDs for your music library, the correct type of HDD is one that is optimized for 24/7 audio/video performance. These AV-optimized drives are used in professional surveillance and audio/video recording studio systems because, unlike normal HDDs, they don't stop every 10 minutes or so to self-calibrate. When the self-calibration mode in a normal HDD takes place, data drop-outs and audible time shifts can occur.


Use dedicated data paths for each type of data:

The three categories of data are:

  • Operating system and software commands.
  • Music data coming in from internet streaming and/or library drives.
  • Music data going out to your DAC.

When data is going both in and out of the same data bus at the same time, the data controller has to act like a traffic cop, constantly stopping and starting data going in each direction. This makes music sound awkward and less fluid.

By spreading your data flow over three dedicated data controllers, you will improve performance more than you would with a faster processor, more RAM, and faster drives. For example, if your DAC's data input is USB, then use Firewire or Ethernet coming in from your music library drive and/or internet streaming service.

And of course, use a dedicated drive for your operating system and player software. Ideally you would want to use a high-performance SLC mSATA card for your operating system and player software. Not only are these SLC mSATA SSD cards faster than a normal 2.5" SSD, they use a data bus separate from the SATA drives in your system.


Would you like to prove or disprove this for yourself?

A simple way to prove or disprove the bit perfect status of a computer-based audio system (note the term "system" and not "software") is to compare a $10 digital cable with an expensive audiophile cable. If you can hear any difference between these two cables, you can be certain that the system does not employ error correction in its final stage(s) and is not delivering uncorrupted bit perfect data to your DAC. If the digital input device on a DAC error-corrected corrupted data, a $10 digital cable would sound nearly identical to the best audiophile digital cable in the world.

Another way to prove or disprove how much power supply noise can affect computer audio performance is to use an AC power conditioner. If you can hear any improvement in system performance using an AC power conditioner, then you can be certain power supply noise is corrupting your data. Of course, upgrading to a high-performance linear power supply will make a more significant improvement in your system's performance.

And a third way to prove or disprove the effect a power supply can have on computer audio is to replace the wall-wart SMPS on your server, streamer, or NAA with a LiFePO4 battery or linear power supply. If you can hear any improvement in system performance when you upgrade the SMPS powering your component to a battery or linear power supply, you can be certain that power supply noise was corrupting your data.

For even higher performance, you could audition one of Mojo Audio's ultralow-noise, ultra-high dynamic linear power supplies, Mac Mini upgrades, or music servers. Of course, one advantage of going with Mojo Audio's products is that with our 45-day no-risk audition, you have nothing to lose but your misconceptions.

If you like what you read in this blog and are interested in getting more free tips and tricks, check out the rest of the blogs on our website. Also, sign up for our e-newsletter to get more useful info as well as discount coupons, special offers, and first looks at new products. Plus, don’t forget to “like us” on Facebook.

Enjoy!

Benjamin Zwickel
Owner, Mojo Audio