Home Digital Audio Master Clock - The Audible Difference

My studio setup started off quite modestly, with an all-analog audio path right up until the final master destination: digital audio tape (DAT). Eventually, each step of the way was replaced with a digital version, starting with the addition of the Echo Layla and then the eventual replacement of all the analog mixers with digital ones, a progression that took over 20 years.

One aspect of digital audio that I knew very little about was audio synchronization and its effects on audio quality. Up until now, each piece of digital audio equipment was running on its own clock, independently of everything else. For example, the Yamaha 02R is connected via ADAT to the RME Hammerfall card in my recording PC, and the Behringer ADA-8000 is connected to the 02R via ADAT. But that's it. Each device was running on its own internal clock, at 48kHz during recording and then switched to 44.1kHz after mixing and conversion. Unaware of the downside of not syncing every piece to a central clock, I went on my merry way producing albums and singles with my configuration.

The Problem

But something bothered me about my recordings. I couldn't quite put my finger on it, but I felt that my finished recordings weren't as clear as I thought they could be, even having some sort of graininess to them. Another criticism I had was that the bass notes were never as deep as I would have liked. Compared to commercial recordings from big-name artists and studios, they held up well enough, but never fully on par to my ears. My studio could never compare too favourably to the experience and equipment of a million-dollar studio, but I thought I could somehow come closer than I was. With more experience creating recordings that would be listened to by hundreds, if not thousands, of people, I would certainly improve the quality of my output.

I had come across a recording I had released that was done using my old Tascam M-224 mixer feeding into my Echo Layla. The roundness of the bass notes surprised me in recent listenings, making me conclude that they actually sounded better--somehow--than my later recordings made with a fully digital audio chain. But how could that be? Surely, all the "better" equipment should have yielded better quality in my later recordings.

To a certain extent, it did improve over the years. With thoughtful use of plugins, learning more mixing and mastering techniques, and improving my listening environment with acoustic treatment and higher-quality monitoring, there were definite improvements to the sound quality of my recordings (here's an article about my steps to improving my monitoring environment, which is a large contributor to how well my mixes translated). With all those efforts, however, my finished recordings still had something missing. I noticed it on my first full album, Filled With Joy, where the busier passages seemed to lose their clarity even though I had taken care to carve out a unique frequency range for each instrument in the mix. To me, there *should* have been much more separation between the recorded instruments. There was some, but there could have been more. The audio somehow had distortion in it.

Was it just how the instruments blended together at mixdown that was causing the problems? Did Cubase not have enough bit-wise resolution to handle the blending of all the frequencies into something clear and pleasing? To me, the individual tracks certainly seemed to sound fine, so the problem must have been something later in the mix process. But there really wasn't much I could do there, since the mixing was happening entirely inside computer memory with my software suite (Cubase and Wavelab). So what if the problem was that the sound had not been captured in the best possible way in the first place?
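
Out of curiosity, here's a rough back-of-the-envelope sketch of why the mix engine's internal resolution probably wasn't the culprit. It sums an arbitrary pile of made-up tracks the way a 32-bit float mix bus would (the kind of engine modern DAWs generally use internally) and compares the result against a 64-bit reference; the track count and signal content are assumptions purely for illustration, not anything from my actual sessions.

```python
import numpy as np

# Assumed for illustration: 48 tracks, one second of audio at 48kHz.
rng = np.random.default_rng(1)
n_tracks, n_samples = 48, 48_000
tracks = rng.uniform(-0.5, 0.5, (n_tracks, n_samples))

# Sum the tracks as a 32-bit float mix bus would, then again in 64-bit
# as a near-exact reference.
mix_float32 = tracks.astype(np.float32).sum(axis=0, dtype=np.float32)
mix_float64 = tracks.astype(np.float64).sum(axis=0)

# How far below the mix peak does the summing error sit?
error = mix_float32.astype(np.float64) - mix_float64
peak = np.max(np.abs(mix_float64))
error_db_below_peak = 20 * np.log10(peak / np.max(np.abs(error)))
print(f"32-bit float summing error sits roughly {error_db_below_peak:.0f} dB below the mix peak")
```

The error lands well over 100 dB below the mix peak, far beyond anything audible, which fits with the suspicion that the real problem was upstream of the mixdown.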

If all the tracks that I recorded had the same problematic, mediocre sound quality, then I would listen to all of them with a "leveling" mentality that caused me to think they were all of equally "good" sound quality. I had no "cleanly recorded" tracks in my songs that I could compare them to. Even the drum loops that I used (Stylus RMX), which were supposedly recorded in professional settings, had likely been "grungified" in a way suitable to the style in which they were being marketed. However, I did notice that some of those drum loop tracks somehow had more depth to them than my recorded tracks.

I tried to fix what I could by making sure the arrangements weren't busier than they needed to be or than the song called for. I made sure there was good gain on the original tracks, using compression and limiting judiciously to ensure every track had good space within the mix. And listening to the finished album on a home stereo system (a Yamaha CD player, Yamaha receiver and Mission speakers), the album has some great punchiness and that commercial "bite" that is pleasing in that radio kind of way.

But I also felt that it could have had more roundness in its tone, the kind that I find in a Diana Krall album or a Sting album. I know those are jazz or borderline jazz recordings, but a pop-styled contemporary album shouldn't be excluded from this sonic territory. Jacksoul and late 70's Supertramp also come to mind for the type of sonic character that I'm looking for.

Enter Master Clock

When two new digital mixers were added to the studio, I knew that syncing their audio clocks would be useful in preventing random pops and clicks, something I had experienced sporadically in the studio, especially when one piece of equipment was running at 44.1kHz and another piece it was connected to was running at 48kHz. In those instances, the error manifests itself pretty obviously as a loud pop every few seconds that sounds like static. I'm not sure why it happens, but perhaps it's a correction artifact, something that happens when the clocks momentarily fall into sync before drifting back out again. In any case, it is not a sound that you would want on a recorded track, especially if the performance on the track is a keeper. Sure, there are ways to clean up a track if a pop or crackle gets recorded onto it (e.g. iZotope RX), but that would be a hassle, especially with multiple tracks being used for a composite, and sometimes the cleanup software can't fully clean up the track. Ultimately, the goal is to always have clean tracks and not have to think about cleaning them up every time.
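
As a rough illustration of the milder version of this problem, here's a quick back-of-the-envelope calculation. Even when two unsynced devices are both set to 48kHz, their internal crystals are only accurate to some tolerance, so the receiving device slowly gains or loses samples until it has to drop or repeat one, which comes out as a click. The ±10 ppm figure below is just an assumed, plausible tolerance, not a spec from any of my gear.

```python
NOMINAL_RATE_HZ = 48_000
DRIFT_PPM = 10  # assumed per-device clock tolerance; real specs vary

def seconds_between_slips(nominal_rate_hz: float, drift_ppm: float) -> float:
    """Time until the accumulated drift between two free-running clocks
    equals one whole sample, forcing a drop or repeat."""
    # Worst case: one clock runs fast by drift_ppm and the other slow by drift_ppm.
    rate_difference_hz = nominal_rate_hz * (2 * drift_ppm / 1_000_000)
    return 1.0 / rate_difference_hz

interval = seconds_between_slips(NOMINAL_RATE_HZ, DRIFT_PPM)
print(f"At {NOMINAL_RATE_HZ} Hz with a +/-{DRIFT_PPM} ppm mismatch, "
      f"a sample must be dropped or repeated about every {interval:.1f} s.")
```

Whether each slip is audible depends on how the hardware handles its buffers, but the drift itself never stops, and a gross mismatch like 44.1kHz against 48kHz is far worse again.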

This got me thinking about whether I should have a way to keep all my audio devices in sync all the time, and I decided to make it happen. That meant getting a master clock source for the studio, so it was off to eBay to find something. The prices weren't as ridiculous as I had initially thought, but there was definitely a range of products, and many served multiple purposes, built not just for digital audio clock syncing but also for video sync. Some minor research accompanied my shopping; "minor" because I inherently knew that the master clock device's purpose was simply to ensure that all digital audio devices were cleanly synchronized with each other. What I didn't know was what that really meant.

During my research, I found articles from other users who had introduced a master clock to their studios and noticed the effect it had on their sound quality. This was a big surprise to me, as I did not expect there to be a tangible impact on how the audio sounded. I thought clock sync would only ensure that artifacts such as pops and clicks would not appear in the audio stream. But here were people talking about clock sync changing the depth of their recordings, changing their clarity, and how the highs were less shrill and the lows had real bottom to them.

Most articles I've read (and there haven't been that many) simply say that things sound better with a master clock. I haven't found any article that satisfies me as to why this is really so, but maybe I should just take it at face value. So when my master clock device arrived (a Rosendahl Nanosyncs HD), I hooked it up and hoped for the best.
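
The explanation that comes up most often is sampling jitter: if a converter's sample instants wobble in time, the captured values of the waveform pick up an error that behaves like added noise and distortion, and a stable shared clock is supposed to reduce that wobble. I can't claim this is exactly what was going on in my room, but here's a small numerical sketch of the idea; the tone frequency and jitter amounts are assumptions chosen purely for illustration.

```python
import numpy as np

SAMPLE_RATE = 48_000
TONE_HZ = 10_000         # high frequencies suffer most from timing error
N_SAMPLES = SAMPLE_RATE  # one second of audio

def sampled_tone(jitter_rms_seconds: float) -> np.ndarray:
    """Sample a pure sine at instants perturbed by Gaussian timing jitter."""
    rng = np.random.default_rng(0)
    ideal_times = np.arange(N_SAMPLES) / SAMPLE_RATE
    jitter = rng.normal(0.0, jitter_rms_seconds, N_SAMPLES)
    return np.sin(2 * np.pi * TONE_HZ * (ideal_times + jitter))

def signal_to_error_db(clean: np.ndarray, jittered: np.ndarray) -> float:
    """Ratio of the tone's power to the power of the jitter-induced error."""
    error = jittered - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(error ** 2))

clean = sampled_tone(0.0)
for jitter_ns in (1, 10, 100):  # assumed RMS jitter levels, in nanoseconds
    noisy = sampled_tone(jitter_ns * 1e-9)
    print(f"{jitter_ns:>4} ns RMS jitter -> signal-to-error ratio of about "
          f"{signal_to_error_db(clean, noisy):.0f} dB")
```

Even a few nanoseconds of timing wobble pushes the error floor up toward what a good converter could otherwise resolve, which at least makes the "it just sounds clearer" reports a little less mysterious.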

Hooking Up

I bought a used Nanosyncs HD from eBay for a great price, along with a package of extra 75-ohm BNC cables to get it wired into all the devices to be synced. With 8 word clock outputs, it had enough for my studio, which has the following syncable devices:

  • Yamaha 02R
  • Yamaha 02R
  • Yamaha 01V96
  • Behringer ADA-8000
  • RME Hammerfall 9652

I finished connecting everything and one of the first synthesizers I tried was my little Korg MicroKORG.

The Audition

Cool as it looked, I had always thought this little keyboard was really lacking in the roundness and depth department, which is why I hadn't really used it for much of anything, including recording. But when I heard it through the newly clock-synced audio path, I was blown away. Was this even the same keyboard? One of the stock sounds was a falling bass patch, and I had never noticed just how far those falling notes fell until now.

After this, I tried a few more of my synths (the Kronos, the Kurzweil K2500XS, and the Fantom X7), all of which sounded very full. Then I had to try the Vertigo sample loaded into my Akai S3000. This is the scariest sound I have, and I recall the night I first tried it nearly gave me a coronary. I loaded it up, and the sounds in the package had way more depth and clearer imaging than I remembered, making it still the scariest sound I know.

After a few more tryouts on various synthesizers and a little bit of audio from microphones (my AudioTechnica AT4050 through the MindPrint), I was convinced that this was a great decision and that I'd finally have the signal path I seemed to have been missing until now. Let's see how this goes.

Here's a good article on why a master clock is needed in a multi-device, all-digital signal path environment.