View the demo »
View the source code »

After several weeks of experimenting with virtual reality code, I decided to try placing users inside representations of real cities. Finding a new way to build a compelling setting seemed important: the sense of immersion is supposed to be one of the unique strengths of the medium, and tools such as panoramic videos, which are becoming common, have some inherent drawbacks. With the increasing availability of public data about the real world, it seemed possible to simulate a city by marshaling the resources at hand.

Using Real-World City Data

I happened to be tracking a startup called Vizicities, which is building an open-source software library for visualizing real-world cities. Vizicities uses Three.js and WebGL to render a three-dimensional view of the world based on web data feeds, including map or satellite image tiles and 3D models of real buildings. Thanks to projects like OpenStreetMap, there is enough data to visit almost any city in the world. The creators of Vizicities were inspired by the kind of data visualization seen in SimCity, so the library uses a top-down view with mouse controls for moving and looking around. It would have to be adapted to work in virtual reality.

Mobile Controls and 180-Degree Turns

I dug into the code and moved the camera down into the city rather than high above it. The existing mouse controls, similar to what you'd find in Google Maps, didn't translate well to an immersive simulation, so I pulled them out and replaced them with the smartphone-based controls from one of my previous demos. This turned out to be an effective and fun way to fly around a city.
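Here is a rough sketch of the idea, not the demo's actual code: read a phone's orientation and use it to point and fly a Three.js camera (assuming Three.js is loaded as the global THREE). In the real demo the phone acted as a separate controller; the local deviceorientation event, the SPEED constant and the update() function below are my own illustrative assumptions.

    // Sketch: steer and fly a camera from device orientation data.
    const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 5000);
    camera.position.set(0, 50, 0); // down among the buildings instead of high above them

    let heading = 0;
    let pitch = 0;
    const SPEED = 30; // flying speed in scene units per second (an arbitrary value)

    window.addEventListener('deviceorientation', function (event) {
      heading = (event.alpha || 0) * Math.PI / 180; // compass heading
      pitch = (event.beta || 0) * Math.PI / 180;    // tilt forward/back
    });

    // Call update(secondsSinceLastFrame) from the render loop.
    function update(dt) {
      camera.rotation.set(pitch, heading, 0, 'YXZ'); // apply yaw first, then pitch
      const dir = new THREE.Vector3();
      camera.getWorldDirection(dir);                 // unit vector the camera faces
      camera.position.addScaledVector(dir, SPEED * dt);
    }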

Now that I had a richer environment to move around in, I needed a better way to turn all the way around, in spite of the constraints of a desk chair and the wires coming out of the Oculus Rift. So I added the ability to instantly turn around 180 degrees by shaking the phone.
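A shake gesture can be detected from the browser's devicemotion event. The sketch below is illustrative rather than the demo's exact implementation; the threshold value and the bodyYaw variable (an extra rotation applied to the camera rig underneath head tracking) are assumptions.

    const SHAKE_THRESHOLD = 15; // m/s^2, an assumed value
    let bodyYaw = 0;            // extra yaw applied to the camera rig; head tracking sits on top
    let lastShake = 0;

    window.addEventListener('devicemotion', function (event) {
      const a = event.acceleration;
      if (!a || a.x === null) { return; }
      const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
      const now = Date.now();
      // debounce so one vigorous shake doesn't trigger several half-turns
      if (magnitude > SHAKE_THRESHOLD && now - lastShake > 1000) {
        lastShake = now;
        bodyYaw += Math.PI; // instant about-face
      }
    });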

Getting to an Acceptable Frame Rate

I’ve found that the virtual reality experience is very sensitive to performance — any lag or drop in frame rate can break the sense of presence, or even cause motion sickness. Simulating a full city requires an enormous amount of data, and it all needs to be transferred from the server, processed and converted into the appropriate format, checked for errors and copied into the graphics processor. In a browser, most of this happens on a single CPU thread, so if any one of these steps takes more than a few milliseconds, redrawing is delayed and the image the user sees no longer reflects the position of their head. That’s a big cause of motion sickness, and the effect is worse if you’re flying around.
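One general way to keep that per-frame work under control (a common main-thread pattern, not necessarily how Vizicities does it) is to queue expensive jobs and spend only a small time budget on them each frame, so rendering never has to wait. The 4 ms budget is an assumption, and renderer, scene and camera are taken from the surrounding code.

    const workQueue = [];
    const FRAME_BUDGET_MS = 4; // assumed budget; leaves headroom within a ~13-16 ms frame

    function queueWork(job) {
      workQueue.push(job); // e.g. "parse one building" or "upload one tile's geometry"
    }

    function drainQueue() {
      const start = performance.now();
      while (workQueue.length > 0 && performance.now() - start < FRAME_BUDGET_MS) {
        workQueue.shift()(); // run one small job, then check the clock again
      }
    }

    function animate() {
      requestAnimationFrame(animate);
      drainQueue();                   // bounded amount of background work
      renderer.render(scene, camera); // keep drawing every frame regardless
    }
    animate();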

If I were building an offline experience, I could package all this data in a large download and load it all in advance before the user puts the headset on. But I would have to restrict the experience to one or two cities, and it’s much cooler to be able to seamlessly fly around to any city in the world. I ended up optimizing the Vizicities code to increase my frame rate from about 5 frames per second to 55-60 frames per second, which is close to the minimum you’d want for VR. Drops in the frame rate are rare and only last a fraction of a second.
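Streaming any city on demand usually comes down to the standard "slippy map" tile scheme that OpenStreetMap uses: convert the viewer's latitude and longitude into tile coordinates and fetch the tiles around them as you fly. The math below is the standard Web Mercator formula; the tile URL and the addTileToScene function are placeholders, not a real service or demo code.

    function lonLatToTile(lon, lat, zoom) {
      const n = Math.pow(2, zoom);
      const latRad = lat * Math.PI / 180;
      const x = Math.floor((lon + 180) / 360 * n);
      const y = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
      return { x: x, y: y };
    }

    // Fetch the 3x3 block of tiles around the viewer.
    function loadTilesAround(lon, lat, zoom) {
      const center = lonLatToTile(lon, lat, zoom);
      for (let dx = -1; dx <= 1; dx += 1) {
        for (let dy = -1; dy <= 1; dy += 1) {
          const url = 'https://tiles.example.org/' + zoom + '/' + (center.x + dx) + '/' + (center.y + dy) + '.json';
          fetch(url).then(function (res) { return res.json(); }).then(addTileToScene);
        }
      }
    }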

Adding Depth to Virtual Cities

Even with the frame rate and navigation controls solved, the city models presented another problem: there wasn’t a good sense of depth. The buildings didn’t really look three-dimensional, and it didn’t feel like you were really there. To solve this problem, it helps to understand how 3D vision works. Displays such as the Oculus Rift provide a sense of depth through binocular parallax, the difference between what the left and right eyes see. The closer an object is to the point of view, the greater the difference between the left and right images. The problem here is that the buildings are so big that most of them are too far away to produce any noticeable difference. To make matters worse, the data I have to work with doesn’t include any texture images, so all the building faces are big, white rectangles. That leaves fewer visual cues to reference, with or without the parallax.
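A back-of-the-envelope calculation (my own numbers, not from the demo) shows how quickly that left/right difference collapses with distance, assuming a typical 0.064 m spacing between the eyes.

    const IPD = 0.064; // interpupillary distance in meters

    // Approximate angular difference between the two eyes' views of a point.
    function disparityDegrees(distanceMeters) {
      return 2 * Math.atan((IPD / 2) / distanceMeters) * 180 / Math.PI;
    }

    console.log(disparityDegrees(2));   // ~1.8 degrees: an object a couple of meters away
    console.log(disparityDegrees(50));  // ~0.07 degrees: a building across a wide street
    console.log(disparityDegrees(500)); // ~0.007 degrees: a tower a few blocks away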

One solution is to scale down the virtual model of the buildings, which increases the parallax: the buildings are closer to the point of view, so the difference between the left and right images is much greater. This offers some improvement, but it has the side effect of making the city feel small, like you’re standing in a miniature model where the buildings are at most a few feet high. I decided to scrap it for now, since I wanted to preserve the scale of the city. Moving around also helped, presumably by providing motion parallax over time when depth isn’t available from binocular vision.
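In Three.js terms the miniature experiment amounts to something like the sketch below, where cityGroup stands in for whatever object holds the buildings and the scale factor is invented for illustration.

    const MINIATURE_SCALE = 0.01; // shrinks a 100 m building to roughly 1 m

    cityGroup.scale.set(MINIATURE_SCALE, MINIATURE_SCALE, MINIATURE_SCALE);
    camera.position.multiplyScalar(MINIATURE_SCALE); // keep the viewer at the same relative spot

    // The eye separation stays fixed at human size, so relative to the shrunken
    // buildings it is enormous: parallax becomes obvious, but the city reads as
    // a tabletop model rather than full-size streets.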

I had read an article hypothesizing that differences in the way men and women tend to process visual information about depth may affect how they respond to virtual reality. I learned that the brain also interprets depth from shading, the way light falls on an object. So I tried applying a technique called ambient occlusion, which adds soft shadows where nearby surfaces block out ambient light. This made a big difference in improving the sense of depth and had the nice side effect of looking really cool.
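Ambient occlusion can be baked into the geometry or applied as a post-processing pass. Here is a minimal screen-space version using the SSAOPass that ships with Three.js's examples (modern module paths, and renderer, scene and camera assumed to already exist); this is one way to do it, not necessarily how the demo implemented it.

    import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
    import { SSAOPass } from 'three/examples/jsm/postprocessing/SSAOPass.js';

    // Wrap the normal render in a composer so the SSAO pass can darken creases
    // and the bases of buildings, restoring a shading cue for depth.
    const composer = new EffectComposer(renderer);
    const ssaoPass = new SSAOPass(scene, camera, window.innerWidth, window.innerHeight);
    ssaoPass.kernelRadius = 16;   // how far to search for occluding geometry
    ssaoPass.minDistance = 0.005; // tune these to the scale of the scene
    composer.addPass(ssaoPass);

    function animate() {
      requestAnimationFrame(animate);
      composer.render(); // replaces the plain renderer.render(scene, camera) call
    }
    animate();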

The resulting demo is available and working in virtual reality builds of Firefox and Chrome. You can search for a location almost anywhere in the world.

Try the demo, let us know what you think, or what you think we should do with it next! You can comment below, use the hashtag #povtech or email us at filmmakers@pov.org.

View the demo »
View the source code »

Get more documentary film news and features: Subscribe to POV’s documentary blog, like POV on Facebook or follow us on Twitter @povdocs!

Published by

Brian Chirls is the Digital Technology Fellow at POV, developing digital tools for documentary filmmakers, journalists and other nonfiction media-makers. The position is a first for POV and is funded as part of a $250,000 grant from John S. and James L. Knight Foundation. Follow him on Twitter @bchirls and GitHub.