Deep learning-powered speech-to-text engine powers Automated AV Captioning and Transcription for ENCO’s enCaption5
While closed captioning began as a way for the deaf and hard of hearing to understand the audio portion of TV shows, its use has since become pervasive.
Whether it’s broadcast television or video streamed over the top, viewers are familiar with closed captioning on their screens. For over-the-air TV broadcasts, the FCC mandates on-screen captions to make the audio portion of programming accessible to the deaf and hard of hearing.
By offering live local radio captioning, WAMU is broadening its market reach, making its programming accessible to the deaf and hard of hearing, including the community associated with Gallaudet University, a world-renowned Washington, DC-based university serving deaf and hard of hearing students.
Video podcasting on social media allows everyday people to become broadcasters with their own signature channels.
The company’s focus, however, is on the live aspect of closed captioning: ENCO provides one of the few solutions that converts speech to text within a couple of seconds, so captions can be displayed on screen during a live event or within a web stream. ENCO is now partnering with another Medialooks customer, RUSHWORKS, to deliver a tight integration with its video production solutions by leveraging the built-in data transfer approach available in the Medialooks SDK, eliminating the need for additional hardware.
Debuting at the 2019 NAB Show, tailored enCaption4 configurations enable radio broadcasters to serve hearing-impaired audiences via web browser or mobile app