This is part three of four in this week’s series on the state of web video. Read parts one and two. The next installment will be posted tomorrow.

So far, we’ve explored our abilities to load and control video playback and to tweak every pixel and audio sample to any creative whim. There are some recently available and upcoming tools we can use to fling our creations far and wide. But they are still rough, so using them will take careful preparation and, as always, lots of testing.

Webcams, Microphones and Peer-to-Peer Streaming

The MediaStream API (specified as Media Capture and Streams) presents new options for streaming audio and video on the web. Each stream is a collection of synchronized audio and video tracks that can come from a webcam or from someone else’s web browser over a peer-to-peer connection. The stream can also be played back in several ways: in a regular HTML5 audio or video element, through a Web Audio API “node” or over an outgoing peer connection. HTML5 Rocks has an article on real-time communications between browsers (“WebRTC”) with a decent primer on the MediaStream object, but it’s a couple of years old and a lot has changed, so be sure to also check the reference.

For now, MediaStream support is limited to Firefox, Chrome and Opera. The browsers that support MediaStream all support cameras and microphones as sources, and they can both send and receive these streams with WebRTC. The technology is mature enough that Google has started using it for Hangouts and Mozilla is using it for Firefox’s upcoming Loop. There is no shortage of tutorials and examples that use webcams and WebRTC on their own or with WebGL or the Web Audio API. The only specific bug I’ve noticed so far is that in Firefox, a stream’s video dimensions are not available as early as they should be. In the future, we will hopefully see more uses for media streams. There are proposals to allow streaming video from video elements or from a canvas, which would be very powerful but have so far gained limited traction.
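As a minimal sketch of the capture side, here is roughly how a page requests a webcam stream and plays it in a video element. The vendor-prefixed function names and the `srcObject` fallback reflect how inconsistent browser support still is; the `startWebcam` helper itself is just an illustrative name, not part of any API.

```javascript
// Minimal sketch: request a webcam/microphone stream and play it
// in a <video> element. Assumes a browser that implements
// navigator.getUserMedia, possibly behind a vendor prefix.
function getUserMediaShim() {
  return navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia;
}

function startWebcam(videoElement) {
  var getUserMedia = getUserMediaShim();
  if (!getUserMedia) {
    console.error('getUserMedia is not supported in this browser');
    return;
  }

  getUserMedia.call(navigator,
    { video: true, audio: true }, // request both kinds of tracks
    function (stream) {
      // Newer browsers accept the stream directly on srcObject;
      // older ones need an object URL.
      if ('srcObject' in videoElement) {
        videoElement.srcObject = stream;
      } else {
        videoElement.src = window.URL.createObjectURL(stream);
      }
      videoElement.play();
    },
    function (err) {
      console.error('Could not access camera:', err);
    });
}
```

The same stream object could instead be handed to a Web Audio API source node or attached to an outgoing WebRTC peer connection.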

Adaptive Streaming

Similar to (and easily confused with) MediaStreams, Media Source Extensions provide the long-awaited “adaptive streaming” ability. That means that if you’re watching a video on a slow network connection, the browser can automatically and seamlessly switch to a lower bit-rate stream to avoid pausing to buffer. If the connection speeds up again, the video quality comes back up.

This has been one of the few remaining reasons for video providers to hold on to Flash. Now the ability is available without a browser plugin, but it’s not easy. The BBC has an article explaining how this works, and there’s an in-depth tutorial on Microsoft’s developer site. There is a JavaScript library for handling this on the browser side, but the video files need to be encoded to an adaptive-streaming standard such as MPEG-DASH, which can be tricky.
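Under the hood, that browser-side work revolves around the MediaSource and SourceBuffer objects. The sketch below shows the basic flow of feeding encoded segments to a video element; the codec string and the pre-fetched `segments` array are placeholder assumptions, since a real player would choose segments from a DASH manifest based on measured bandwidth.

```javascript
// Sketch of the Media Source Extensions flow: create a MediaSource,
// attach it to a <video> element, then append encoded media segments
// (ArrayBuffers) to a SourceBuffer one at a time. A real adaptive
// player would pick each segment's bit rate from a manifest.
function attachMediaSource(videoElement, mimeCodec, segments) {
  var mediaSource = new MediaSource();
  videoElement.src = window.URL.createObjectURL(mediaSource);

  mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
    var index = 0;

    function appendNext() {
      if (index >= segments.length) {
        mediaSource.endOfStream();
        return;
      }
      // appendBuffer is asynchronous; 'updateend' fires when done.
      sourceBuffer.appendBuffer(segments[index]);
      index += 1;
    }

    sourceBuffer.addEventListener('updateend', appendNext);
    appendNext();
  });
}
```

A library hides all of this, plus the manifest parsing and bandwidth estimation, but the object model above is what every implementation builds on.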

Even though Media Source Extensions are new, support is spreading rapidly: Chrome and Internet Explorer can already use them. YouTube uses them where available (in Firefox, though only for WebM files and not yet for MP4). Netflix uses them as well, and it just announced that it will make adaptive HTML5 video available in the next version of Safari.

Recording

Finally, with all we can do to generate and process media in the browser, it would be nice to be able to save our work. The ability to record video and audio is coming in the MediaRecorder API, but it’s not quite ready. So far, it has only been implemented in Firefox, and while audio works well, the video files it creates are broken and need to be re-encoded to play back properly. There is a hack that allows JavaScript recording of a canvas using WebP images, but it only works in Chrome, and audio and video tracks must be recorded in separate files. Recording is a feature to keep an eye on, but it’s not yet ready for much more than experimentation.
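For anyone who wants to experiment anyway, the basic MediaRecorder flow looks like the sketch below. The API shape (constructor, `dataavailable` and `stop` events) is as specified, but given the state of the implementations, expect the resulting files to need re-encoding; the `recordStream` helper and its duration parameter are illustrative assumptions.

```javascript
// Sketch: record a MediaStream (e.g. from getUserMedia) with the
// MediaRecorder API for a fixed duration, then hand back one Blob.
// At the time of writing this works reliably only for audio in
// Firefox; recorded video may be broken without re-encoding.
function recordStream(stream, durationMs, onDone) {
  var recorder = new MediaRecorder(stream);
  var chunks = [];

  recorder.ondataavailable = function (event) {
    // Encoded data arrives as one or more Blob chunks.
    chunks.push(event.data);
  };

  recorder.onstop = function () {
    // Combine the chunks into a single Blob for playback or download.
    onDone(new Blob(chunks, { type: recorder.mimeType }));
  };

  recorder.start();
  setTimeout(function () {
    recorder.stop();
  }, durationMs);
}
```

The resulting Blob can be turned into an object URL for immediate playback or offered to the user as a download.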

Part four will go live tomorrow. I’ll discuss the different ways web video technologies fail and what we can do about it.

Get more documentary film news and features: Subscribe to POV’s documentary blog, like POV on Facebook or follow us on Twitter @povdocs.

Brian Chirls is the Digital Technology Fellow at POV, developing digital tools for documentary filmmakers, journalists and other nonfiction media-makers. The position is a first for POV and is funded as part of a $250,000 grant from John S. and James L. Knight Foundation. Follow him on Twitter @bchirls and GitHub.