Text2Speech Blog

NeoSpeech: Text-to-Speech Solutions.

FCC extends deadline for broadcasters to convey visual emergency information in an audio format

FCC extends text-to-speech deadline

Last month, the Federal Communications Commission (FCC) granted a petition to extend the deadline requiring broadcasters to provide non-textual emergency information (such as radar maps) in an audible format. The looming deadline was pushed back 18 months. Broadcasters will now be required to comply by May 2017.

The rule is one of many put in place by the FCC as a result of the 21st Century Communications and Video Accessibility Act (CVAA). Congress passed the CVAA in 2010 to ensure that new forms of communication and video are accessible to people with disabilities.

Here’s everything you need to know about the FCC’s rule, and why its deadline was pushed back.

What is the rule?

Per the FCC’s Memorandum Opinion and Order:

“On April 8, 2013, the Commission adopted a rule requiring that emergency information provided visually during non-newscast programming be made accessible to individuals who are blind or visually impaired through the use of a secondary audio stream to provide such information aurally. In particular, the rule provides that the video programming provider or video programming distributor that creates the visual emergency information content and adds it to the programming stream is responsible for providing an aural representation of the information on a secondary audio stream, accompanied by an aural tone. Visual emergency information content can be either textual, e.g., an onscreen crawl, or non-textual, e.g., maps or other graphic displays. In the Emergency Information/Video Description Order, the Commission found that if visual but non-textual emergency information is shown during non-newscast programming, the aural description of this information must accurately and effectively convey the critical details regarding the emergency and how to respond to the emergency.”


Simply put, the FCC is requiring that all emergency information that is displayed visually on a screen (such as text, images, or maps) must also be conveyed audibly through a secondary audio stream so people who are blind or visually impaired can still get the emergency information.

It also requires that the video programming provider or distributor that creates the visual emergency content be the one to provide the audio version of that content.

Finally, the audio stream must “accurately and effectively” convey the critical details of any non-textual emergency information being shown. For example, if a TV station broadcasts an emergency alert and shows a map of where a tornado might be, the audio stream would have to effectively explain the possible locations of the tornado.

The FCC wants non-textual information to be converted into speech

Why was the deadline pushed back?

The National Association of Broadcasters (NAB), American Council of the Blind (ACB), and the American Foundation for the Blind (AFB) together petitioned the FCC to grant a limited extension of the FCC’s compliance deadline.

They argued that no automated solution yet exists for aurally conveying non-textual information, leaving broadcasters without a viable way to comply with the FCC’s rule. Agreeing that no such solution is currently available, the FCC granted the extension.

This isn’t the first time the FCC has extended the deadline for this rule, either. The original deadline was set for May 26, 2015. Just a couple of months before that date, the NAB asked to extend it because no solutions existed.

Now, the deadline is being pushed back again, and for the same reason. While devices such as Enco’s Audio Insertion Manager (AIM-100) are able to convert textual information into an audible format via text-to-speech technology, there are still no solutions out there that can automatically convert non-textual information into speech for broadcasting emergency alerts.

Why is an automated solution so important?

Enco’s AIM-100 and devices like it have been adopted by almost all broadcasters across the US. They are valuable precisely because they are automated.

Automation is important because an emergency alert can happen at any moment, without warning, and broadcasters are required to transmit the alert immediately. When the alert arrives in the form of text, a text-to-speech engine can instantly convert that text into speech for the secondary audio stream. No time is wasted and no voice actor is needed; the device does it all on its own in real time.

It’s trickier for non-textual information such as a map. A text-to-speech engine needs textual input to be able to do its job. Without text, it can’t do anything. Today, there are no specialized products for broadcasters that are able to turn non-textual information into text.
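To make the gap concrete, here is a minimal sketch of the two paths described above. The function names are illustrative stand-ins, not real product APIs: the text-to-speech step exists commercially, while the image-description step is the piece with no broadcast-grade automated solution today.

```python
# Hypothetical sketch of an automated emergency-audio pipeline.
# All function names are illustrative, not real vendor APIs.

def describe_image(image_path):
    # Stand-in for the missing piece: automatically turning a radar
    # map or other graphic into descriptive text.
    raise NotImplementedError("no broadcast-grade solution exists today")

def text_to_speech(text):
    # Stand-in for a TTS engine, which does exist commercially.
    return f"<audio rendering of: {text}>"

def emergency_audio_stream(alert):
    # Textual alerts (e.g., an on-screen crawl) convert directly.
    if alert["type"] == "text":
        return text_to_speech(alert["content"])
    # Non-textual alerts (e.g., a tornado map) first need a textual
    # description -- the step that cannot yet be automated.
    return text_to_speech(describe_image(alert["content"]))

print(emergency_audio_stream(
    {"type": "text", "content": "Tornado warning until 9 PM"}))
```

The textual path runs end to end; the non-textual path fails at `describe_image`, which is exactly why broadcasters have no viable way to comply for maps and graphics.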

There are ways broadcasters can work around this issue. The National Federation of the Blind (NFB) argued against the deadline extension, saying that aurally describing images doesn’t have to be done automatically: broadcasters could have actual people describe the information themselves whenever an alert comes in.

However, the ACB argued that non-automated solutions are not viable. One reason is that stations in smaller markets may not have adequate staffing to always have somebody ready to describe non-textual information. Taking even a few minutes to create the audio stream manually, they argued, could mean the difference between life and death for viewers in an emergency.

What lies ahead

It is very likely that an automated solution able to convert non-textual information into speech will be built for broadcasters in the near future. The ACB noted that Facebook has already developed technology that can generate descriptions of images in real time. A vendor would only need to pair that kind of technology with a high-quality text-to-speech engine, and they would have the first automated solution for broadcasters to convert non-textual information into speech. Broadcasters desperately need this technology to be in compliance with the FCC’s rule.

If you’re a broadcaster, keep checking in with your current text-to-speech and automated solution vendors to see if and when they’ll have this product for you.

If you’re a vendor looking to build an automated solution, get in touch with us: we can provide high-quality text-to-speech engines with the specific SDKs and/or APIs you need to integrate them into your product.

What do you think?

Are you aware of the FCC’s ruling? What do you think of the deadline extension? Let us know in the comments!

Learn More about NeoSpeech’s Text-to-Speech

To learn more about the different areas in which Text-to-Speech technology can be used, visit our Text-to-Speech Areas of Application page. And to learn more about the products we offer, visit our Text-to-Speech Products page.

If you’re interested in adding Text-to-Speech software to your application or would like to learn more about TTS, please fill out our Sales Inquiry form and one of our friendly team members will be happy to help.

Related Articles

Introducing the FCC’s Accessibility Regulations

The Basics of Title 2 of the CVAA: Video Programming

What Does Your Business Need to Do? How to Get Up to Date with the CVAA
