From my childhood, I have loved cars; perhaps it should not be surprising that my graduate studies would involve cars in some form or another. However, I count myself as extremely fortunate that my advisor (Mike!) let me formulate and investigate a research problem for my PhD thesis that involved an automobile.
In short, with AutoPoint, I designed and developed techniques that used a car’s window as an interactive, transparent display surface. AutoPoint enabled passengers to select and engage with Points-of-Interest (PoI) outside the car. The photo above shows the two distinct versions I developed: one for the lab, and the other a prototype I tested by driving people around town.
Take a look at the photo below; notice how car dashboards have changed over the years as technology has progressed.
With the recent strides made in self-driving technology, cars and car interiors were due for a change. As I saw it, there were two competing visions: a) bringing entertainment (e.g., movies) to a backseat passenger on a larger screen, and b) using the glass surfaces around a passenger as transparent interactive displays to engage with the world going by.
Without going into too much detail, here are a couple of fundamental questions that I wanted to explore:
- How does one solve occlusion issues? This would obviously depend on how close a target in the real world was to the passenger, and how fast the car was moving.
- What if something interesting was nearby (again, a loose term), but the passenger had no way of knowing it?
- How can we allow for open-ended exploration of a space the car is driving through?
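To make the first question concrete, here is a back-of-the-envelope sketch (my numbers and formulation, not from the thesis): for a PoI at lateral distance d from a car moving at speed v, the target's apparent angular velocity peaks at roughly v/d as the car passes abeam, which bounds how long it stays selectable in a given field of view.

```cpp
#include <cassert>
#include <cmath>

// Angular velocity (rad/s) of a roadside target as seen from a car.
// The car moves along the x-axis at speed v (m/s); the target sits at
// lateral distance d (m) and is currently x meters ahead of the car.
// theta = atan2(d, x)  =>  |dtheta/dt| = v * d / (x*x + d*d).
double angularVelocity(double v, double d, double x) {
    return v * d / (x * x + d * d);
}

// Seconds the target spends inside a field of view of half-angle
// halfFov (radians) centered abeam: it is "in view" while
// |x| <= d * tan(halfFov), and the car covers that span at speed v.
double timeInView(double v, double d, double halfFov) {
    return 2.0 * d * std::tan(halfFov) / v;
}
```

At city speeds (~13 m/s, i.e. ~30 mph), a storefront 10 m away sweeps past at up to 1.3 rad/s and stays within a ±45° window for only about 1.5 s, while a hillside 500 m away moves two orders of magnitude slower — near targets demand much faster acquisition techniques than far ones.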
After getting the go-ahead from Mike, I went about developing the system in the lab. A visit to a junkyard and $100 led to a Volkswagen Passat rear door. Many visits to hardware stores (Home Depot mostly) and many hours at the lab led to this: a car door mounted to a platform that could be wheeled around easily.
After removing the rear window from the door, I made a replacement window assembly using an IR touchscreen panel, a sheet of glass, and switchable privacy tint (from a company called SmartTint). This allowed a rear-projection setup that worked great with a projector when the tint was opaque, and that let someone see the world outside – a TV screen showing a 3D world in this case – when the tint was set to transparent.
I plugged the switchable tint into a power strip controlled by a solid state relay (SSR) that I'd hooked up to an Arduino.
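The control logic here is a single digital pin: the Arduino drives the SSR's control input, and the SSR switches mains power to the tint film. Below is a minimal sketch of that logic, written as plain C++ with the pin write stubbed out so it runs anywhere; the pin number and the film's power-on behavior (energized = transparent) are my assumptions — check your film's datasheet, as some behave the opposite way.

```cpp
#include <cassert>

// Stand-in for Arduino's digitalWrite(); records the last level driven
// onto the SSR control pin so the logic can be tested off-device.
constexpr int kSsrPin = 7;   // assumed wiring: SSR control on digital pin 7
int lastPinLevel = 0;        // 0 = LOW, 1 = HIGH

void digitalWrite(int pin, int level) {
    if (pin == kSsrPin) lastPinLevel = level;
}

// Assumed film behavior: energized (pin HIGH -> SSR closed -> mains on)
// means transparent; unpowered means opaque. Rear projection needs opaque.
enum class WindowMode { Project, SeeThrough };

void setWindowMode(WindowMode mode) {
    digitalWrite(kSsrPin, mode == WindowMode::SeeThrough ? 1 : 0);
}
```

On an actual Arduino, setWindowMode collapses to one digitalWrite call; how the toggle command reaches the board (serial from the Unity side, a button, etc.) is up to the rest of the system.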
To simulate the outside world, I built a virtual 3D city in Unity. It was modeled after Chicago, customized to my tastes by adding mountains :). This is what someone sitting alongside the car door would see on the TV.
I designed a bunch of interactions (multitouch, or gesture-based via a Leap Motion sensor) that enabled a user to acquire a point of interest. Here's a short video of the system in action, using an interaction I called World Tilt to peer over something occluding a target.
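One hedged way to reason about the geometry behind peering over an occluder (my formulation, not necessarily how AutoPoint implemented it): given an occluder of height h_o at distance d_o and a target point of height h_t at distance d_t, the viewpoint must rise until the eye-to-target sight line clears the occluder's top edge.

```cpp
#include <cassert>
#include <cmath>

// Minimum eye height (m) from which a target point of height targetH at
// distance targetD is visible over an occluder of height occluderH at
// distance occluderD (0 < occluderD < targetD). Derived by requiring the
// straight sight line from eye to target to pass over the occluder's top:
//   eyeH + (targetH - eyeH) * (occluderD / targetD) >= occluderH.
double minEyeHeight(double occluderH, double occluderD,
                    double targetH, double targetD) {
    return (occluderH * targetD - targetH * occluderD)
           / (targetD - occluderD);
}
```

For example, seeing the base (h_t = 0) of a building 50 m away over a 3 m wall 10 m away requires a viewpoint about 3.75 m up — well above a seated passenger's eye height, which is exactly the kind of jump a virtual tilt of the world can provide. A negative result means the target is already visible from any height.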
Beyond pilot testing the lab setup with colleagues, I had 36 participants test the system and the various interaction techniques in the lab.
After this in-lab phase, I had to come up with a way to build a similar system that could be tested in the real world. After trying roof-mounted projectors, video cameras, etc., I settled on an iPad mounted on the window. Not ideal, but I was able to make the experience of using this system very similar to the in-lab setup.
The prototype above didn't work well with the natural curvature of most car windows: the iPad tended to point too far up. A custom lens could perhaps have taken care of that, but there was a simpler solution.
The iPad app had two modes: a map mode (made using Apple Maps) and a stream mode (basically a video stream of the world going by). Each mode offered interaction techniques that mimicked the ones tested in the lab (World Tilt; one called Time Slices that let people go back to something they might have missed; etc.).
One difference in the testing methodology was letting participants collect snapshots of their choice during the drive, instead of using pre-assigned targets.
Here's a short video of this system in action, where a participant captured something as we drove by:
If you've made it this far, thank you for your time :).
Software | Unity with C#, a microcontroller programmed in C++, and Swift for an iOS app.
Hardware | engineered custom mounts & sensor assemblies to alter a car window.
Evaluation / UX | tested in a lab with 36 participants, and with 14 participants in a car driving around Evanston & Wilmette.