Digital Cables and Analogue Cables--What's the Difference?
Many of the best video cables on the market today were designed primarily for use in the digital domain; but can a digital cable really perform well as an analogue cable? Can an analogue cable be used as a digital cable? What's the difference, anyway, between digital and analogue cables? They both move electrons, don't they? All of these are interesting questions, and bear some discussion. To begin that discussion, let's first talk about what digital and analogue signals are, and how they perform in cables.
Digital and Analogue Signals:
The first generation of video and audio cables was designed with analogue signals in mind. An analogue signal represents the information it is intended to convey by presenting a continuous waveform analogous to the information itself. If the information is a 1000 Hertz sine-wave tone, for example, the analogue signal is a voltage varying from positive to negative and back again, 1000 times per second, in a sine-wave-shaped pattern. If we could use that electrical signal to drive a speaker cone to physically move in and out in that same pattern, we'd hear the tone come out of the speaker.
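To put the example in concrete terms, here is a minimal sketch in Python of the 1000 Hertz waveform just described (the one-volt amplitude and the eight sample points are arbitrary illustrations, not part of any audio standard):

    import math

    frequency_hz = 1000.0
    amplitude_volts = 1.0  # illustrative amplitude, not from any standard

    def analogue_voltage(t_seconds):
        """Instantaneous voltage of the 1000 Hz sine-wave tone at time t."""
        return amplitude_volts * math.sin(2 * math.pi * frequency_hz * t_seconds)

    # Sample one full cycle (one millisecond) at eight evenly spaced points:
    for i in range(8):
        t = i / (8 * frequency_hz)
        print(f"t = {t * 1000:.3f} ms   v = {analogue_voltage(t):+.3f} V")

The voltage at every instant tracks the tone itself; that continuous correspondence is what makes the signal "analogue."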
A digital signal, unlike an analogue signal, bears no resemblance, even superficially, to the information it seeks to convey. Instead, it consists of a series of "1" and "0" bits, encoded according to some particular standard and delivered as a series of rapid transitions in voltage. Ideally, these transitions are instantaneous, creating what we call a "square wave." This is true even though, when the signal is decoded, the result may be the very same 1000 Hertz tone--all continuous slopes, with no sharp transitions--that the analogue signal described above represents.
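As an illustration only--assuming a simple two-level encoding, rather than the scheme of any particular standard--here is how a short bitstream maps onto an ideal square wave:

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    high_volts, low_volts = 1.0, 0.0  # illustrative levels, not a real standard

    def ideal_square_wave(bits, samples_per_bit=4):
        """Idealized voltages for a bitstream: one level per bit,
        with instantaneous transitions between bits."""
        samples = []
        for bit in bits:
            samples.extend([high_volts if bit else low_volts] * samples_per_bit)
        return samples

    print(ideal_square_wave(bits))

Note that nothing in the voltage pattern looks like the tone it may ultimately encode; the resemblance exists only after decoding.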
When Signals Go Bad:
One of the interesting distinctions between digital and analogue signals is that they degrade in rather different ways. Both are electrical signals, carried by a stream of electrons in a wire, and so both are subject to alteration by the electrical characteristics of the cable and by the intrusion of outside electrical noise. But while the alteration of an analogue waveform is progressive and continuous--the more noise is introduced, the more noise will come out of our speaker along with the tone--the digital signal suffers alteration quite differently.
First, a digital signal, because of its sharp transitions, is highly subject to degradation of its waveform. Those sharp transitions are equivalent to a long--indeed, an infinite--series of odd harmonics of the fundamental frequency, and the higher the frequencies a signal contains, the more transmission-line effects--the characteristic impedance of the cable, and the signal reflections ("return loss") that result from impedance mismatch--come into play. This means that while the signal may originate as a square wave, it never quite arrives as one. Depending on the characteristic impedance of the cable, the capacitance of the cable, and the impedance match between the source and load devices, the corners of the square wave will round off to a greater or lesser degree, and the "flat" portions of the wave will become uneven as well. This makes it harder for the receiving circuit to accurately identify the transitions and clock the incoming signal. The more degradation in the signal, the harder it is for the receiving device to accurately read the content of the bitstream.
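The link between sharp transitions and high-frequency harmonics is standard Fourier-series arithmetic, and a small sketch can make it visible. The sum below is plain mathematics, not a model of any particular cable; cutting the series off after a few odd harmonics--which is effectively what a cable does when it attenuates the upper frequencies--produces a wave that rises far more slowly at each transition:

    import math

    def bandlimited_square(t, fundamental_hz, highest_harmonic):
        """Partial Fourier sum of a square wave, odd harmonics only."""
        total = 0.0
        for n in range(1, highest_harmonic + 1, 2):
            total += math.sin(2 * math.pi * n * fundamental_hz * t) / n
        return 4 / math.pi * total

    # Just after a transition of a 1 kHz square wave, the sum with many
    # harmonics is much closer to the ideal +1 level than the sum with few:
    t = 0.05e-3  # 0.05 ms into the cycle
    print("3 harmonics: ", bandlimited_square(t, 1000, 3))   # ~0.74
    print("99 harmonics:", bandlimited_square(t, 1000, 99))  # ~1.0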
But a digital signal, because of the way its information is stored, can be quite robust. While the signal will always degrade to some degree in the cable, if the receiving circuit can accurately reconstitute the original bitstream, reception of the signal will be, in the final analysis, perfect. No matter how much jitter, how much rounding of the shoulders of the square wave, or how much noise the signal has picked up, if the bitstream is accurately reconstituted at the receiving end, the result is as though there'd been no degradation of the signal at all.
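A toy illustration of that robustness--made-up voltage levels and noise, not a model of any real receiver: as long as the noise leaves every sample on the correct side of the decision threshold, the recovered bitstream is bit-for-bit identical to what was sent:

    import random

    sent_bits = [1, 0, 1, 1, 0, 0, 1, 0]
    threshold = 0.5  # volts; midway between the 0 V and 1 V levels

    def receive(bits, noise_volts):
        """Add bounded random noise to each bit's voltage, then re-slice
        against the threshold to reconstitute the bitstream."""
        recovered = []
        for bit in bits:
            v = float(bit) + random.uniform(-noise_volts, noise_volts)
            recovered.append(1 if v > threshold else 0)
        return recovered

    # Noise of +/- 0.4 V visibly mangles the waveform, yet every bit is
    # still recovered exactly:
    print(receive(sent_bits, 0.4) == sent_bits)  # True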
One note: it's often assumed that SPDIF digital audio and HDMI digital video employ error correction. Because these formats are meant to deliver content in real time, however, they can't: there's no time to re-send data packets that aren't correctly received. So when digital data are lost, the loss is final. Depending on what is lost, the result may be an interpolation (where the receiver "guesses" the content of the missing data), an error (where data are misread and the erroneous content is rendered in place of the correct content), or a total failure (where the signal may disappear entirely for a period of time).
The result is that digital signals can be quite robust; they can exhibit no functional degradation at all up to a point. But the difference between perfect rendering of a digital signal and total loss of signal can be surprisingly small; one can reach a threshold where the digital signal begins to fall apart, and not long after that threshold, find that there is no signal at all. The signal which gets through flawlessly over several hundred feet may be unable to get through at all, even in a damaged condition, when the cable run is lengthened by another fifty feet.
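Extending the same toy model--again with illustrative numbers, not measurements of any real cable--a sweep of the noise amplitude shows that cliff: a perfect link right up to the threshold, then rapidly mounting errors:

    import random

    def bit_error_rate(noise_volts, trials=10_000):
        """Fraction of bits misread when bounded noise is added to a
        1 V / 0 V signal sliced against a 0.5 V threshold."""
        errors = 0
        for _ in range(trials):
            bit = random.randint(0, 1)
            v = float(bit) + random.uniform(-noise_volts, noise_volts)
            if (1 if v > 0.5 else 0) != bit:
                errors += 1
        return errors / trials

    for noise in (0.30, 0.45, 0.55, 0.70, 1.00):
        print(f"noise +/- {noise:.2f} V -> bit error rate {bit_error_rate(noise):.3f}")

Below half the signal swing, the error rate is exactly zero; just past it, errors appear and climb quickly.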
How soon this threshold is reached depends a great deal upon the signal, and upon the tolerances of the cable in which it is run. The higher the bitrate, the more difficult it is to maintain reliable digital communication. The problem is that as the bitrate increases, the frequencies a cable must carry increase, and as frequency increases, the wavelength correspondingly decreases. The shorter the wavelength, the more likely it is that a cable run of any given length--particularly one approaching a substantial fraction of the wavelength (1/4 wavelength is often cited as a benchmark)--will itself play a significant role in signal degradation. As this happens, the characteristic impedance of the cable becomes increasingly important, and the degradation of the digital waveform will depend directly upon the impedance match between the source, the cable, and the load.
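A worked example of that wavelength arithmetic, using assumed round numbers--an 82 percent velocity of propagation, typical of foamed-polyethylene coax, and a 750 MHz signal component:

    SPEED_OF_LIGHT_M_S = 299_792_458
    velocity_of_propagation = 0.82  # fraction of c; assumed, varies by cable
    frequency_hz = 750e6            # illustrative high-bitrate component

    wavelength_m = SPEED_OF_LIGHT_M_S * velocity_of_propagation / frequency_hz
    print(f"wavelength in cable: {wavelength_m:.3f} m")
    print(f"quarter wavelength:  {wavelength_m / 4:.3f} m "
          f"(~{wavelength_m / 4 * 39.37:.1f} inches)")

At that frequency a quarter wavelength is only about three inches, so virtually any practical cable run is electrically "long," and impedance control matters over the entire run.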
The job of the cable in a digital signal circuit is clear enough: maintain the specified impedance, as tightly as possible. Video cable is designed to a 75 ohm characteristic impedance, and analogue video cables built to that spec have been in production for many decades; but tolerances, in a world of analogue composite video running at a bandwidth of a few megahertz, were not as tight as modern digital video signals require. The advent of high-bitrate digital video required cables with exceptionally tight impedance tolerance--that is, as little deviation from 75 ohms as possible. Solving this problem required cable manufacturers to address every aspect of coaxial cable design, from drawing wire to a more consistent diameter, to making the microscopic bubble size in polyethylene foam more uniform, to figuring out how to structure that foam so that the dielectric does not distort excessively when the cable is flexed. The result is the modern "precision video cable," with a specified impedance tolerance of +/- 1.5 ohms, and in-practice tolerance tighter still.
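The practical meaning of that tolerance can be put in numbers. The fraction of signal voltage reflected at an impedance step is given by the standard transmission-line formula for the reflection coefficient; applying it to the worst case of the +/- 1.5 ohm specification:

    import math

    def reflection(z_actual, z_nominal=75.0):
        """Reflection coefficient and return loss at an impedance step."""
        gamma = abs(z_actual - z_nominal) / (z_actual + z_nominal)
        return gamma, -20 * math.log10(gamma)

    gamma, rl_db = reflection(76.5)  # worst case of the +/- 1.5 ohm tolerance
    print(f"reflected fraction: {gamma:.4f}   (return loss {rl_db:.1f} dB)")

Even at the tolerance limit, only about one percent of the signal voltage is reflected--a return loss of roughly 40 dB.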
The importance of impedance tolerance in digital cable design can be seen in the contrast between the consumer digital video standard, HDMI, and the professional standard, SDI. HDMI uses nineteen conductors to convey a digital video signal; SDI uses one signal conductor and a grounded shield. The bandwidth of HDMI, and its ability to run dependably over great lengths, ought to be much greater with all those added conductors--right? Wrong. SDI was designed to run in coaxial cable, with that +/- 1.5 ohm impedance tolerance, and can be run hundreds of feet down a single coax without any information loss at all. HDMI, meanwhile, was designed without any evident consideration of transmission-line problems; the HDMI cable is a rat's nest of shielded twisted pairs and miscellaneous conductors, and twisted-pair cable inherently has poor impedance tolerance. The HDMI spec calls for 100 ohm pairs, +/- 10 ohms, and even that 10% tolerance is hard to achieve in twisted pairs. Consequently, the degradation of signal in an HDMI cable from return loss is enormous. Meanwhile, the need to pack nineteen conductors into a small-profile cable forces the use of small-gauge wire. (Compare the 18 AWG center conductor of an RG-6 precision video coax with the 24 AWG or even 28 AWG conductors of a typical HDMI cable.) This small-gauge wire, having high resistance, causes increased attenuation; more importantly, the smallness of the conductors makes controlling impedance even more difficult, as close tolerances are more easily maintained in larger materials. The result: a standard which works well at a few feet and rapidly degrades over distance. While HDMI cables are getting better, the inherent limitations of the twisted-pair design will always limit HDMI cable performance.
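Running the same reflection arithmetic on HDMI's 100 ohm, +/- 10 ohm pairs shows the scale of the difference:

    import math

    # Worst case of the HDMI pair tolerance: 110 ohms against 100 nominal.
    gamma = abs(110.0 - 100.0) / (110.0 + 100.0)
    print(f"reflected fraction: {gamma:.4f}   "
          f"(return loss {-20 * math.log10(gamma):.1f} dB)")

That is nearly five percent of the signal voltage reflected at the worst-case limit--roughly five times the reflection of the precision coax--before the resistive losses of the small-gauge conductors are even considered.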
Turning back to the world of coaxial cable, we can now answer a few questions. Can analogue cables be used in digital applications? Yes, up to a point; but the looser tolerances of older analogue cable designs will limit their run lengths, at least in high-bandwidth applications like SDI video. Can digital cables be used in analogue applications? Yes, absolutely; the same tight tolerances which make digital cables appropriate for digital applications make them superb for analogue applications. One may not "need" the improvement, but it will never hurt, and can help. SDI coax, like Belden 1694A, costs very little more than (and in many cases less than) traditional analogue coax designs, and will outperform them on every measure of analogue cable performance. The advances in dielectric foaming result in a cable which is not only higher-performing, but also more flexible and easier to work with, than its older counterparts. Contrast, for example, 1694A with the onetime broadcast industry standard, Belden 8281: 8281 is heavier, thicker, and dramatically less flexible. And if heavier and thicker sounds better--bigger cable equals lower loss, right?--think again. Because of efficient dielectric foaming, 1694A is able to use a larger center conductor in a smaller cable, for lower loss.
Electrons don't know whether they're "digital" or "analogue." But that does not mean that digital and analogue signals behave alike, or that digital and analogue cables are interchangeable. As we look forward to an increasingly digital future, the advantage of using digital-ready cable even in analogue applications is apparent.