Wade Davis gives a talk at the TED conference, captioned in Farsi using dotSUB technology. TED Media
By Andy Carvin
For the last 25 years, the Technology, Entertainment and Design Conference - better known as TED - has served as an annual pilgrimage for some of the world's leading thinkers to present their best ideas. For those of us who weren't able to attend in person, TED's online videos have served as a gateway to these discussions. One problem with these videos, though, is that these "TEDTalks" were usually available only in English. Today's rollout of TED's Open Translation Project intends to change that.
At first, the TED team hired professional translators to caption some of their videos, but after around six months realized it wasn't scalable. So they decided to flip the model on its head and focus on volunteer online translators. Working with the multilingual subtitling service dotSUB, they implemented a system for teams of volunteers to coordinate the translation of individual videos. "The system we've established requires two pairs of eyes on each translation," explained TED Media executive producer June Cohen. "Every translation must be reviewed by a second fluent speaker. Sometimes the people know each other; sometimes they don't. But we encourage them to collaborate and confer on any changes, in order to get to the best possible translation. This proofreading stage is essential, because we, of course, can't possibly review every translation."
Once a volunteer team completes a translation, the new captioning is linked to the video stream. For example, the video featuring National Geographic Explorer-in-Residence Wade Davis includes a drop-down menu allowing you to toggle the captioning among 23 different languages. If you switch the captioning to Farsi, let's say, the video immediately displays captions in that language, along with the description of the video. You can also open an interactive transcript module that displays the full text of the speech in the language you've selected; clicking any sentence in the transcript causes the video stream to jump to the appropriate point in the presentation. Meanwhile, if you're bilingual, you can set the interactive transcript to a language that's different from the video captioning, giving the page a horizontal Rosetta Stone-like effect.
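The mechanics behind that interactive transcript can be sketched simply: each translated sentence carries the same start timestamp as the original-language cue, so switching languages never breaks the link between text and timeline. The cue format and function names below are hypothetical illustrations of the idea, not dotSUB's actual API.

```python
# A minimal sketch of a timestamp-aligned multilingual transcript.
# Each cue is (start_time_in_seconds, sentence); all languages share
# the same timestamps, so translations stay aligned to the video.
cues = {
    "en": [
        (0.0, "Thank you."),
        (2.5, "I want to talk about culture."),
        (7.0, "Languages are vanishing."),
    ],
    "fa": [
        (0.0, "Merci (in Farsi script)."),
        (2.5, "I want to talk about culture (in Farsi)."),
        (7.0, "Languages are vanishing (in Farsi)."),
    ],
}

def caption_at(lang, t):
    """Return the caption that should be on screen at playback time t."""
    current = ""
    for start, text in cues[lang]:
        if start <= t:
            current = text
        else:
            break
    return current

def seek_time(lang, sentence_index):
    """Clicking the nth transcript sentence seeks the video to its start time."""
    return cues[lang][sentence_index][0]
```

Because only the text differs between languages, toggling the captions or the transcript is just a dictionary-key change; the seek behavior is identical in every language.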
The decision by TED to invite volunteers to coordinate the translations will now make it possible for more videos to be made available in a much wider range of languages than if TED staff had done the work themselves; so far, volunteers have completed 300 translations in 40 languages. "We're a pretty global staff; we speak around 12 languages among us," Cohen continued. "But direct review of translations isn't remotely scalable. We needed a system where the translators themselves would be checking each other's work, and constantly evolving. And it's working! In fact, we've even had translators approach us because they want to improve the professional translations on the site."
Though the dotSUB technology embraced by TED to create the project has been around for a while now, TED's use of it is perhaps the most high-profile example of crowdsourced video translation to date. Given that automatic translation tools still have a long way to go before becoming fully reliable, human translation remains the best option when accuracy is paramount. But human translation, in the context of video captioning at least, has never been particularly scalable. The combination of dotSUB's innovative technology and TED's large online audience may indeed change that.