Following my post last week on Newshooter.com, “Robbing our subjects or helping our audience,” on voice-overs vs. subtitles, I was happy to see BBC R&D’s work at IBC 2013 on the future of subtitles.
The BBC reports that over 7 million people regularly use subtitles to view television and video. More interesting is that most of them do so for reasons other than hearing impairment or disability. Looking to create a richer and improved experience for these viewers, the BBC has been researching how we perceive and understand content with subtitles.
I got a chance to talk to Senior Engineer Matthew Brooks about what the research could mean for the future of subtitles. Brooks’ research shows our conventional methods may not, in fact, be the best way forward in making sure our audiences understand content.
From the BBC:
We have been examining ways in which we might use language models for individual programme topics to improve the performance of speech to text engines and to detect errors in existing subtitles. We have had some early success modelling weather forecast subtitles which suggests there may be some value in this approach, but it will require a great deal more work.
We have carried out a ground-breaking study into the relative impact of subtitle delay and subtitle accuracy. This work required the development of new test methodologies based on industry standards for measuring audio quality. A user study was carried out in December 2012 with a broad sample of people who regularly use subtitles when watching television. The results are being presented at IBC2013 in September.
Most recently we have been exploring ways to take the live broadcast subtitles and carry out automatic post-processing to remove the original delay and improve the formatting. Early results are promising and we are in the process of talking to the iPlayer team about the potential for this work. We are also looking at how live subtitles could be realigned and reformatted for delayed streaming.
More on the BBC’s R&D Lab here.
Video shot and edited by Jonah Kessel, Scott Karlins and Li-Lian Ahlskog.