Tribal Mix is a tool that will allow newsrooms around the world to keep up with the growing volume of video content and find the clips worth integrating into news stories.
In order to keep up with the endless stream of video content (35 hours of video are uploaded every minute!), we need a tool that mimics the ways users consume video today: multiple sources and channels, endless streams of content, collaborative viewing, short snippets.
The Use Case: imagine you’re running the Al Jazeera news desk responsible for covering Egypt’s revolution. You have a couple of crews on the ground at strategic locations such as Tahrir Square, but in reality nobody knows what is about to happen or how long it will take. Spontaneously, civilians on the ground start capturing little video vignettes with their mobile cameras and uploading them. Back at the news desk, you launch the Tribal Mix dashboard, configure a collection of tags that seem popular in the context of this story, and let it run. The dashboard becomes available as part of your main web story about the event as a “latest videos” link. When users everywhere launch the dashboard, they automatically participate in the curation process by scanning the feed and clicking on the videos that appear most interesting. In fact, only a couple of hours into this process, a thumbnail containing a sequence with a burning car starts to “grow” as a result of its visual impact. Now you have a story to report… with video.
Breaking news today requires monitoring an overwhelming number of “social media” channels, yet journalists can’t afford to spend their time doing so. Traditional tools such as Google and other custom search engines have the disadvantage that they only surface good content once it has become really popular, which means that someone else has already run the snippet.
On the other hand, the public has proven very effective at spotting good content amid an ocean of submissions. The crowdsourced approach makes sense very early in the media gathering and filtering process, as an initial step that dramatically reduces the volume of content professional news people have to interact with. Let the audience be your collaborator.
Some professional video tools are available at enterprise prices, but they are limited to paid user seats. By combining a set of existing open technologies and frameworks, not only can I deliver the solution presented, but I can do so in a way that the resulting software product has little development cost and very low operational costs, scaling up only when the infrastructure is under intense use.
Earlier in the process I provided a quick snapshot of what the tool could look like, along with a concise set of design directives to make sure the tool remains true to the idea of openness and transparency and is built on the fundamental premise that the audience is integrated into the news gathering process. But to implement this project a lot more is needed:
The Stream: videos live everywhere, and the ability to use them where they are is important. The tool consolidates sources (YouTube, Vimeo, Twitter) into a single stream of all possible candidates.
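The consolidation step could be sketched as follows. This is illustrative only, not the actual implementation: the adapter names, item fields, and sample data are assumptions. Each source adapter would normalize its items into a common shape, and the feeds would then be merged into one reverse-chronological stream of candidate clips.

```javascript
// Reduce each provider's payload to the fields the dashboard needs.
// (Field names here are hypothetical; each real API returns its own shape.)
function normalize(source, items) {
  return items.map((item) => ({
    source,                                          // "youtube" | "vimeo" | "twitter"
    id: item.id,
    title: item.title,
    url: item.url,
    publishedAt: new Date(item.publishedAt).getTime(),
  }));
}

// Merge any number of normalized feeds, newest candidates first,
// regardless of which service they came from.
function mergeStreams(...feeds) {
  return feeds.flat().sort((a, b) => b.publishedAt - a.publishedAt);
}

const stream = mergeStreams(
  normalize("youtube", [
    { id: "yt1", title: "Tahrir Square crowd", url: "https://example.com/yt1", publishedAt: "2011-02-01T10:00:00Z" },
  ]),
  normalize("vimeo", [
    { id: "vm1", title: "Street march", url: "https://example.com/vm1", publishedAt: "2011-02-01T11:30:00Z" },
  ])
);
// stream[0] is the Vimeo clip, the most recent of the two.
```

The key design point is that everything downstream (the engine, the dashboard) only ever sees this one normalized stream, so adding a new source means writing one more adapter.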
The Engine: using a series of video techniques such as time-lapsing, a series of vignettes would be rendered for each video submitted. This is a CPU-intensive task that can scale very nicely on a cluster of servers (Amazon EC2, for example). On the other hand, many of the tools needed to accomplish this already exist and carry very open software licenses, FFMPEG being one example. At this point, I’m assuming that all videos will be pre-processed and that the version used in the Dashboard is an animation rendered by the browser.
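One piece of the engine can be sketched without any video tooling at all: deciding where in a clip to grab frames for the time-lapse vignette. The function below is a hypothetical sketch (not the shipped engine) that picks evenly spaced timestamps; each one could then be fed to FFMPEG’s `-ss` seek option to extract a single frame.

```javascript
// Choose the timestamps (in seconds) at which frames would be grabbed to
// build an N-frame time-lapse vignette of a clip of the given duration.
function vignetteTimestamps(durationSeconds, frameCount) {
  const step = durationSeconds / frameCount;
  const stamps = [];
  for (let i = 0; i < frameCount; i++) {
    // Sample the middle of each segment, so the first and last samples avoid
    // the (often black) opening and closing frames of the clip.
    stamps.push(Math.round((i + 0.5) * step * 10) / 10);
  }
  return stamps;
}

vignetteTimestamps(60, 6); // samples at 5, 15, 25, 35, 45, 55 seconds
```

Because each frame grab is independent, the work partitions naturally across the EC2 cluster mentioned above: one video (or even one frame) per worker.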
The Dashboard: the main user interface, which allows a viewer to inspect all the pre-processed snippets and implicitly mark those that seem to have better content quality. It is meant to be used by a large number of viewers, and for that group to influence each other’s viewing targets by visually giving higher priority to the snippets that get more “air time” from the audience. The only currency viewers can use to favour specific content is their own viewing time, so the system is hard to game. Built on HTML5 + CSS3 + jQuery (compatibility) + Masonry (layout) + Popcorn (video integration and measurement), this solution is at the forefront of web standards and should work beautifully across all modern browsers.
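The “air time as currency” mechanic could look something like the sketch below. The function name, the logarithmic curve, and the cap are all assumptions on my part, not the shipped code: the point is that size grows with total audience viewing time, early attention matters most, and no clip can dominate the grid.

```javascript
// Map the total seconds of audience "air time" a snippet has received to a
// display scale factor for its thumbnail in the Masonry grid.
function thumbnailScale(totalViewSeconds, { base = 1, max = 3 } = {}) {
  // Logarithmic growth: going from 0 to 10 viewing seconds matters as much
  // as going from 10 to 100, and the cap keeps the grid hard to game.
  const scale = base + Math.log10(1 + totalViewSeconds);
  return Math.min(scale, max);
}

thumbnailScale(0);  // 1 — an unwatched clip stays at base size
thumbnailScale(9);  // 2 — nine audience-seconds double it
thumbnailScale(99); // 3 — capped; further viewing no longer grows the box
```

On the measurement side, Popcorn’s media event hooks could report each viewer’s accumulated play time back to the server, which would sum them per snippet and push the new scale out to every open dashboard.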
Play with the prototype yourself: notice how video boxes “grow” as you let the videos run longer. Imagine that effect multiplied by a group of hundreds of viewers using the tool simultaneously.
Technical Challenges: during the prototype development I was able to identify the following technical issues that will need special attention to create a mature tool:
- Video content found on the web is not necessarily tagged properly to reflect its licensing. The engine will, by design, render a “modified” version of the original content, so a proper copyleft license is required. Finding the right approach to integrate this information into the stream will likely require further thinking. I’m particularly interested in Creative Commons machine-readable licenses as a possible solution, but this may limit the amount of content available to the stream in the first place.
- The rendered vignettes are currently built as animated GIFs. This format has several technical limitations, such as a colour palette restricted to 256 colours and the lack of any inter-frame compression, which results in a very large file for the browser to download. Further to that, playing these back requires a fairly good CPU, which disqualifies mobile browsers. I believe web video technology is evolving fast enough that using an actual video codec will soon be possible.
Thanks:
- To Phillip, Pippin, Alex and the rest of the #MozNewsLab crew for organizing an incredible lab.
- To Amy, James, Raynor, Saleem and a great group of peers for the help and stimulating feedback.