
WHEAT:NEWS RADIO  |  November 2018  |  Vol 9, No 11

The Making of a New Studio. It Begins Here.

In this video series, our Jay Tyler walks you through phase 1 of the new WheatNet-IP-based system for Bonneville Broadcasting's Salt Lake City stations. The entire system end to end is set up and running in the Wheatstone factory, ready for final testing. Here, Jay shows us how it all comes together, including sleek new surface wedges that share the same engine with other consoles. 

You’ll also see some comparative boot times for the BLADEs and Cisco switches. The numbers will surprise you. 

In addition to designing and building all of our gear right here in New Bern, NC, USA, we regularly set up large systems like this and test how they’re going to be used in their destination studios to ensure everything will work flawlessly. This also gives our customers a chance to visit the factory and get their hands on the gear before it's put into operation in their own studios. 

Grand Central Station

We don’t claim to be at the center of everyone’s universe.

That spot rightly belongs to the automation system, without which most radio stations would cease to function today. We do have connections, though, and lots of them. The WheatNet-IP audio network connects to all the major automation systems and to many other third-party products and systems, extending functions throughout the facility and beyond. 

You’ve probably heard us talk about ACI, or Automation Control Interface, our protocol that makes it possible for automation systems and other products to talk back and forth with any hardware or software element in the WheatNet-IP network. ACI operates over the network via TCP/IP, and because of it we can now do incredible things with automation.
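To give a sense of what that looks like in practice, here is a minimal sketch of an automation or utility app talking to a networked device over TCP. The IP address, port, and command string below are placeholders for illustration only; the actual ACI command set and connection details come from Wheatstone's ACI documentation.

```python
# Hypothetical sketch: an automation app sending a control command to a
# WheatNet-IP device over TCP. The address, port, and command text below are
# placeholders; the real ACI syntax is defined in Wheatstone's documentation.
import socket

DEVICE_IP = "192.168.87.101"   # hypothetical BLADE address on the AoIP network
ACI_PORT = 50000               # placeholder port for this example

with socket.create_connection((DEVICE_IP, ACI_PORT), timeout=2.0) as conn:
    # Example intent: take a crosspoint so a centralized voicetrack source
    # feeds a particular air chain.
    conn.sendall(b"<hypothetical ACI route command>\r\n")
    reply = conn.recv(1024)
    print("device replied:", reply.decode(errors="replace"))
```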

Consider voicetracking. “One of the key things we’re seeing people move away from is voicetracking three days or more in advance. They want it closer to real-time so their coverage can be more topical,” says William “Dub” Irvin, VP Radio Automation for WideOrbit, a Wheatstone ACI partner. What’s changed is that voicetracked files are no longer stored and forwarded. Broadcasters are tapping directly into centralized WideOrbit audio libraries, and they are no longer shuttling files around, in part because of WheatNet-IP audio networking.

This is an especially crucial development as more and more groups go to a regional studio operation that feeds many stations at once. “They can generate content that is created in as close to real time as possible. If they have music stations that have similar formats in many markets, for example, this allows them to have one DJ manage all of them and record voicetracks for a group of stations at once. And, if they need to replace one of those voicetracks for just one of those markets, say, there’s a car accident in Poughkeepsie, they can replace that and replace it quickly,” explains Irvin. 

With automation centralized and content immediately accessible through AoIP and IP in general, announcers can be anywhere even as transmitter and studio locations remain fixed. The benefits are hard to measure in some cases. “For example, the automation failover can now live at the transmitter site so that if the studio goes dark, the failover system kicks in and the automation now runs from the transmitter site feeding one of our BLADEs – which is feeding into our AoIP network and getting audio into the transmitter,” says Robert Ferguson, who worked for several automation vendors before coming to Wheatstone as a systems engineer supporting and installing studio systems. 

All of this is working up to some exciting changes in radio. There are real cost savings associated with being able to host automation and libraries in a central data center, either your own or through service providers such as Amazon or Google. You get all the benefits of the centralization model, but without the burden of having to purchase and maintain hardware and square footage for that hardware in your facility – plus, you don’t have the cooling and power consumption costs associated with all that equipment in one room. 

As it is, integrating WheatNet-IP audio networking into the station automation system comes with substantial cost savings. Just replacing audio cards with WheatNet-IP audio drivers can save $2,500 to $3,000 per computer, not to mention the associated costs of connectors and other wiring infrastructure. 

More than 50 brand partners integrate seamlessly into the WheatNet-IP audio network through ACI. We’ll be talking to a few of them in the months ahead. 

IP QA

Your IP Question Answered

Q: I see that your system doesn’t provide for specifying low latency/high latency streams. Why is that? 

A: All AoIP systems have to deal with something called packet overhead. Because we all use standard protocols, a whole bunch of data must accompany each packet of audio to adhere to those standards: the addressing, protocol, and timing information that network switches depend on to route the packets to the right places. That overhead actually amounts to more than ten times the data in a single audio sample. Since a standard IP packet can hold up to 1500 bytes, every AoIP system bundles a number of audio samples together in each packet in order to stream audio efficiently, minimizing the percentage of data spent on overhead. The more audio samples you can cram into each packet, the fewer packets you have to send and the lower the overhead becomes. In some systems, large packets are used because of limited bandwidth and processor resources, and the result is streams with about 100 msec of latency. Why does latency go up with these larger packets? It's simple: the audio data is created at its normal sample rate, so it takes more time to assemble a larger packet. You have to wait for enough audio samples to fill the packet, and waiting means latency goes up.
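To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch. The header sizes are typical Ethernet/IP/UDP/RTP figures and the 24-bit stereo frame size is an assumption for illustration, not a vendor specification.

```python
# Rough illustration: samples arrive at a fixed rate, so a bigger packet takes
# longer to fill (more latency) but spends a smaller share of its bytes on
# headers (less overhead). Figures are illustrative, not a vendor spec.
SAMPLE_RATE = 48_000                 # audio frames per second
HEADER_BYTES = 14 + 20 + 8 + 12      # Ethernet + IPv4 + UDP + RTP (typical)
BYTES_PER_FRAME = 6                  # 24-bit stereo: 3 bytes x 2 channels

for frames_per_packet in (12, 48, 240):
    fill_time_ms = 1000 * frames_per_packet / SAMPLE_RATE
    payload = frames_per_packet * BYTES_PER_FRAME
    overhead_pct = 100 * HEADER_BYTES / (HEADER_BYTES + payload)
    print(f"{frames_per_packet:3d} frames/packet -> fill time {fill_time_ms:5.2f} ms, "
          f"overhead {overhead_pct:4.1f}% of bytes on the wire")
```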

Our WheatNet-IP audio network does it differently. We have enough bandwidth with gigabit links and enough processing power that we can make every packet a small, low-latency packet. You're not forced to make a choice, nor do you need to spend the effort specifying packet depth per stream.

A Bad Trip With Diversity Delay

By Scott Johnson

This morning began like any other. I stepped out the front door, got into my car, and set my radio to the local station that airs a particular newscast I like to listen to while I drive to work. 

As I left my neighborhood and headed for the highway, I began to notice something strange. I was having trouble following what the newscaster was saying. It was as if parts of it were missing, but my brain was still a bit foggy. Then I heard a phrase actually repeated. Then there was a skip, then another repeated phrase. Some audio was being repeated and other audio was being dropped. The mystery was solved when I looked down at the display: the radio’s “HD” indicator was coming on and going out at random intervals.

When a station broadcasts the same programming on analog FM as well as its HD1 feed, it’s very easy for the two signals to fall out of alignment due to latency (inherent delay) in the two processing chains. In this case, the analog and HD signals had a time difference of nearly two seconds between them. As my radio blended back and forth between analog and HD, I was hearing two different versions of the audio, one delayed more than the other. The result, with repeated and missing words, was absolutely unlistenable. I changed the station. Your listeners might tune out, too, when faced with the crazy-sounding jitter that bad diversity delay presents at the outer fringes of your listening area. 
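To see why the blend sounds so wrong, here is a toy sketch: the same program heard over two paths, one lagging the other, with the receiver flipping between them. The word list and the two-word lag are stand-ins for real audio and a roughly two-second offset.

```python
# Toy model of bad diversity delay: one program, two paths, the HD path lagging.
# Blending back and forth repeats some words and drops others entirely.
program = ["traffic", "is", "heavy", "on", "the", "interstate", "this", "morning"]
lag = 2   # HD path lags the analog path by two words (stand-in for ~2 s of audio)

heard = []
for t in range(len(program)):
    on_hd = (t % 4 >= 2)              # the radio blends back and forth at the fringe
    idx = t - lag if on_hd else t     # the HD path is behind the analog path
    if 0 <= idx < len(program):
        heard.append(program[idx])

print(" ".join(heard))  # "traffic is traffic is ..." -- repeats, with "heavy on" gone
```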


That’s why Wheatstone’s newest processor, the AirAura X4, includes built-in systems for detecting and correcting errors in diversity delay, locking your analog and HD signals together precisely, and readjusting the delay as necessary to maintain that relationship. Alternatively, if you have a Belar or DaySequerra modulation monitor that supports the feature, you can feed its delay correction factor to several of Wheatstone’s on-air audio processors, which will then apply the correction automatically.

You can learn more about Wheatstone’s diversity delay correction technology as well as our partnerships with DaySequerra and Belar on the Wheatstone web site, and in these three informative videos: 

Why AES67 is the Important Audio Standard

Standardization is the reason you can fly across the country, hop in any rental car, and drive off without having to read through the operator’s manual. It’s why you can plug your phone into any USB port to charge, and why you can access your email, your texts and your social media from anywhere, regardless of wireless carrier or make of phone.

It’s why broadcasting exists today and why the future of our industry continues to shape-shift into smart TVs and radios, OTT, and VR and AR. Every new advancement starts with a standard, and for audio, the future begins with the AES67 standard.

AES67 was ratified in 2013 as an interoperable standard for transporting audio between different IP audio devices. It is a layer 3 protocol suite now supported by all the major IP audio network manufacturers, including Wheatstone. Additionally, AES67 is specified as the audio streaming format by SMPTE ST 2110, which is another important suite of standards that will provide the structure for an all-IP studio where separate video, audio and data streams are created, carried, and synchronized seamlessly for a multitude of delivery methods and purposes.

Our VP of Engineering and Technology, Andy Calvanese, goes into a few of the issues AES67 solves. You’ll also find a link at the end of this article that will take you to useful tips on commissioning AES67 in your plant.

More AES67

It’s About Timing

One of the key differences between various IP audio network systems is timing, and it’s an important difference. The AES67 standard was created to keep disparate audio devices synchronized so audio would play back without clicks and pops due to sample rate and timing mismatches. For example, Wheatstone’s I/O BLADEs (the I/O access units) in the WheatNet-IP audio network all synchronize their internal clocking to a special signal that the system distributes to every BLADE. We call that signal the “metronome” and we send it around the network many times per second to keep all of the individual clocks in BLADEs in sync with each other. We developed this method of synchronization back in 2004 when we designed WheatNet-IP. Other vendors did something else. We each did our own thing because back in the day, there was nothing suitable for the purpose; we had to invent something to make it work.

Without AES67, WheatNet-IP BLADEs can’t synchronize audio with another system’s node (and vice versa) unless you pull an AES3 signal out of one system and patch it into the other so that the second is slaved to the first. Now, with AES67, we have a method for sharing synchronization among devices directly. We can use the standard timing protocol that has emerged in recent years, IEEE 1588, also known as the Precision Time Protocol, or PTP. In fact, the standard specifies a particular version of this protocol known as PTPv2.
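Here is a quick sketch of why a shared reference matters. The 20 ppm figure is just an illustrative clock tolerance, not a measured spec for any particular device.

```python
# Two devices nominally at 48 kHz but a few ppm apart slowly drift, so a
# receiver with no common reference must eventually drop or repeat a sample.
SENDER_HZ = 48_000.0
RECEIVER_HZ = 48_000.0 * (1 + 20e-6)      # receiver runs 20 ppm fast (illustrative)

drift_per_sec = RECEIVER_HZ - SENDER_HZ   # extra samples the receiver wants per second
secs_per_slip = 1 / drift_per_sec         # seconds until a one-sample slip (a click)
print(f"drift: {drift_per_sec:.2f} samples/s -> a glitch roughly every "
      f"{secs_per_slip:.1f} s without a shared clock such as PTP")
```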


...and Packet Structure

Since all the common AoIP systems use RTP to transmit audio packets on the network, it’s important that the transmitting device and the receiving device use the same scheme for filling and decoding the payload; otherwise it’s indecipherable. There are many different ways this can be done. What AES67 does is specify a single common packet structure that must be supported; this is the 1 msec packet timing you see in all of the specifications, and it is what makes interoperability possible. It equates to 48 samples, left-right interleaved, in a stereo stream. AES67 goes on to specify some additional packet structures that may be used, because there is no one right answer for the ideal structure: larger packets are more efficient for the network infrastructure because fewer packets are needed to carry a stream, but they induce greater latency because it takes longer to fill a big packet with 48 kHz audio samples than a small one. In fact, this is why the WheatNet-IP audio network uses gigabit Ethernet connections and ¼ msec packet timing as our default, keeping latency to a minimum. BLADEs have no problem with this because they receive and decode packets directly in hardware without going through a CPU. We use much larger 5 msec packets for streams that come from or go to PCs, which have much higher built-in latency and might get congested trying to process lots and lots of little packets. One size does not fit all applications, but by publishing a standard somewhat in the middle, AES67 provides a common ground for interoperability.
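Here is how those three packet timings compare at a 48 kHz sample rate; the arithmetic is generic and not tied to any particular implementation detail.

```python
# Comparing the packet timings discussed above at a 48 kHz sample rate:
# shorter packets mean lower fill latency but many more packets per second.
SAMPLE_RATE = 48_000

for label, packet_ms in (("WheatNet-IP default", 0.25),
                         ("AES67 common structure", 1.0),
                         ("PC-bound streams", 5.0)):
    frames = int(SAMPLE_RATE * packet_ms / 1000)
    packets_per_sec = 1000 / packet_ms
    print(f"{label:22s}: {packet_ms:4} ms packets, {frames:3d} frames each, "
          f"{packets_per_sec:4.0f} packets/s per stream")
```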

Plus, Stream Configuration

AoIP systems that are meant to be complete audio routing systems frequently need to send the same stream to a number of different destinations simultaneously. Think of all the places your on-air program feed goes. In AoIP systems, and as specified in AES67, streams intended for more than one device take advantage of a standard IP mechanism, multicasting, to maximize network efficiency while avoiding congestion. Since multicast streams could otherwise use any of a wide range of multicast addresses, the standard assures that there is a common address range, which facilitates interoperability.

Likewise, it is desirable to forward the stream payload data (in this case, the audio samples) directly to the AoIP stream playback application. Historically, of course, different vendors have chosen their own ports; AES67 specifies a standard port, 5004, that all devices should be capable of using.
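As a concrete sketch of those two details (a multicast group plus the standard port), here is a minimal Python receiver. The group address is purely illustrative; a real receiver would take the address and stream parameters from the sender's advertised session description.

```python
# Minimal sketch of an AES67-style receiver: join a multicast group and read
# RTP packets on port 5004. The group address below is purely illustrative.
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical multicast address for this example
PORT = 5004           # standard RTP port specified by AES67

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the local network stack (and, via IGMP, the switch) we want this group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, addr = sock.recvfrom(2048)         # one RTP packet: 12-byte header + payload
seq = struct.unpack("!H", packet[2:4])[0]  # RTP sequence number
print(f"got {len(packet)} bytes from {addr}, RTP sequence {seq}")
```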

Since AoIP systems were already using standard IP protocols (RTP, IGMP, etc.), by pinning down these specific configuration details for the streams, AES67 makes it possible for devices that adhere to them to stream audio to one another.

AES67 will no doubt be a part of every major studio that includes audio. For five key findings on implementing AES67, see “Commissioning AES67 in Your Plant.”

 

Download This FREE AES67 White Paper

Wheatstone's VP of Technology has put together a fact-packed white paper on all of the issues you might encounter when implementing AES67 in a real-world facility. It contains a lot of useful information that will help with commissioning AES67; the article above and the link at its end are consolidated into this white paper. Download it for free here.

Our Annual Thanksgiving Video

This video has become a perennial favorite: Scott Johnson's takeoff on the famous WKRP video. It's particularly poignant this year, as Jack Cosgrove (who reprises Mr. Carlson's role) has recently retired (see last month's story).

Making Sense of the Virtual Studio
SMART STRATEGIES AND VIRTUAL TOOLS FOR ADAPTING TO CHANGE

Curious about how the modern studio has evolved in an IP world? Virtualization of the studio is WAY more than tossing a control surface on a touch screen. With today's tools, you can virtualize almost ANYTHING you want to do with your audio network. This free e-book illustrates what real-world engineers and radio studios are doing. Pretty amazing stuff.

Advancing AoIP for Broadcast
TAKING ADVANTAGE OF EMERGING STANDARDS SUCH AS AES67 VIA AUDIO OVER IP TO GET THE MOST OUT OF YOUR BROADCAST FACILITY

Putting together a new studio? Updating an existing one? We've put together this e-book with fresh info and some of the articles we've authored for our website, white papers, and newsletters, diving into some of the cool stuff you can do with a modern AoIP network like Wheatstone's WheatNet-IP.

 

Stay up to date on the world of broadcast radio / television.
Click here to subscribe to our monthly newsletter.

Got feedback or questions? Click my name below to send us an e-mail. You can also use the links at the top or bottom of the page to follow us on popular social networking sites, and the tabs will take you to our most-visited pages.

-- Scott Johnson, Editor
