View the demo » | Demo source code »
Full documentation and source code »

One of the best ways to innovate is to borrow conventions from other fields. Often, those working in another industry or medium have solved problems that you may be stuck on or didn’t even know you had. For interactive media, I find that video games are a great place to look for ideas. It’s a field that’s been tackling problems of story structure, user controls and dynamic aesthetics for over four decades.

A great example of such an idea is the way video games handle automatic control of a virtual camera, specifically in two-dimensional side-scrolling games. From classic games like Defender and Super Mario Bros. to modern “2.5D” games like Never Alone, side-scrollers are presented as an extended lateral tracking shot, following a player character while directing the viewer’s attention. Inspired by a thoroughly researched article, The Theory and Practice of Cameras in Side-Scrollers, I built dolly.js, a JavaScript library that emulates some of the rich and subtle behaviors of these virtual cameras, to see how they might apply to the kinds of presentations typical of modern non-fiction.

The developer of “Insanely Twisted Shadow Planet” explains the game’s camera control logic.

How It Works

The virtual camera concept can be applied to any interaction where a user-controlled cursor moves along one or more axes within a space representing data or other media, analogous to a “player” character in a game. Though we could simply lock the virtual camera to the controls so the cursor always sits in the middle of the frame, I was hoping to create a more expressive tool. So I built dolly.js with the following goals in mind:

  • Always keep the “player” in view. The virtual camera should actually follow the cursor, even though it may not be centered in the frame.
  • Move the camera smoothly, avoiding movement that’s abrupt or too fast.
  • Place the “player” in the frame in a way that makes sense in the local context. Just like a movie frame, we need to adjust the zoom or scale of the scene, as well as the horizontal and vertical offsets, so the viewer can see the relevant features around the cursor.
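
To make the first two goals concrete, here is a minimal, stand-alone sketch of the kind of update loop involved. This is plain JavaScript I wrote for illustration, not the actual dolly.js API, and the option names (deadZone, lag, maxSpeed) are assumptions of mine:

    // Sketch only, not the dolly.js API. The camera eases toward a target,
    // ignores small movements inside a "dead zone" and never exceeds a
    // maximum speed, so it follows without ever moving abruptly.
    function followStep(camera, target, opts, dt) {
        var dx = target.x - camera.x;
        var dy = target.y - camera.y;
        var dist = Math.sqrt(dx * dx + dy * dy);

        // Dead zone: stay put until the target drifts far enough away.
        if (dist < opts.deadZone) { return; }

        // Lag: close only a fraction of the remaining gap each frame...
        var step = dist * opts.lag;

        // ...clamped so the motion never becomes too fast.
        step = Math.min(step, opts.maxSpeed * dt);

        camera.x += (dx / dist) * step;
        camera.y += (dy / dist) * step;
    }

    // Called once per animation frame, e.g. via requestAnimationFrame:
    // followStep(camera, cursor, { deadZone: 20, lag: 0.1, maxSpeed: 600 }, dt);

Because the gap closes proportionally, the camera decelerates as it catches up, which is what gives the motion its smooth, eased feel.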

The dolly.js library represents our scene as a collection of simple objects, or “props,” each of which has a position in three-dimensional space. Any object can follow any other object, with parameters that modify how it behaves. For example, an object might not move toward its target until the target moves a certain distance away, and we can introduce a lag or a maximum speed so the follower takes some time to catch up. These parameters give us creative control over the smoothness of the motion and the subtlety of the effect.

An object can also follow multiple targets, moving toward the average of all their positions. This can be used to keep several objects in view at once, and it can create interesting effects, like framing the scene with extra empty space in the direction of movement.

We can also designate certain objects as “attractors,” or points of interest. As our “player” gets close to an attractor, the attractor takes over control of the camera, ignoring the usual targets and moving the frame to a pre-designated position instead.

Since the objects are so simple, the output can be displayed with any method: WebGL, 2D canvas or the DOM. It could even run in non-browser environments, like a Node.js server or a robot.
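
Here is a rough sketch of how the multi-target averaging and attractor takeover could work. Again, this is illustrative JavaScript rather than the actual dolly.js API; the radius and frame properties are names I made up:

    // Sketch only, not the dolly.js API. The camera's goal is the average
    // of its targets' positions; as the player nears an attractor, the
    // attractor's pre-designated frame gradually takes over.
    function cameraGoal(targets, attractor, player) {
        // Default goal: the average position of all followed targets.
        var goal = { x: 0, y: 0 };
        targets.forEach(function (t) {
            goal.x += t.x / targets.length;
            goal.y += t.y / targets.length;
        });

        // Influence grows from 0 to 1 as the player enters the attractor's radius.
        var dx = player.x - attractor.x;
        var dy = player.y - attractor.y;
        var dist = Math.sqrt(dx * dx + dy * dy);
        var weight = Math.max(0, 1 - dist / attractor.radius);

        // Blend toward the attractor's frame; at weight 1 it takes over completely.
        goal.x += (attractor.frame.x - goal.x) * weight;
        goal.y += (attractor.frame.y - goal.y) * weight;
        return goal;
    }

Blending by distance, rather than switching abruptly at a threshold, keeps the handoff between normal following and the attractor's frame smooth.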

Demo

To test out the code, I’ve built an example of a line chart representing U.S. immigration data. The user can move a cursor back and forth along the timeline, showing the year and number of immigrants at the current position. The camera follows smoothly, even when the user moves the cursor abruptly. I’ve created attractors for a number of notable dates, so the frame adjusts to show an entire peak or valley following a given historical event. The library provides events when we approach or leave these points of interest, and I use them to fire callbacks that show a note about each one. The result is a variation on the ubiquitous line chart that I think is a bit more expressive and fun.
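
The event wiring could look something like the sketch below. I’m guessing at the shape here (the names and callbacks are assumptions, not the documented dolly.js events); the point is that firing only on transitions lets each note appear as the cursor approaches and disappear as it leaves:

    // Hypothetical sketch, not the documented dolly.js API. Track whether
    // the player was inside each attractor last frame and fire callbacks
    // only on the enter/leave transitions.
    var wasInside = {};

    function checkAttractors(player, attractors, onEnter, onLeave) {
        attractors.forEach(function (a) {
            var dx = player.x - a.x;
            var dy = player.y - a.y;
            var inside = dx * dx + dy * dy < a.radius * a.radius;
            if (inside && !wasInside[a.id]) { onEnter(a); } // show the note
            if (!inside && wasInside[a.id]) { onLeave(a); } // hide the note
            wasInside[a.id] = inside;
        });
    }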

There is more we could do with this code in the future. The software isn’t aware of which object is the camera and which is our player, so we could invert the relationship and have the cursor follow the camera, or use it to drive other objects like an information box or virtual assistant. I’d like to add the ability to zoom in and out to keep multiple objects in view, even as they move far apart from each other. And while the software already works in three dimensions, it would be more powerful if it could match the rotation of its target objects in addition to their position.

Let us know if you end up using the code or can think of how to improve the experience. Share a link. You can comment below, use the hashtag #povtech or email us at filmmakers@pov.org.


Published by

Brian Chirls is the Digital Technology Fellow at POV, developing digital tools for documentary filmmakers, journalists and other nonfiction media-makers. The position is a first for POV and is funded as part of a $250,000 grant from the John S. and James L. Knight Foundation. Follow him on Twitter @bchirls and on GitHub.