Part I of my Master's thesis (Remote Multitouch: In-Air Pointing Techniques for Large Display Interactions) examined how well in-air gestures work for data manipulation tasks on a large vertical display. We designed and built a docking task in which participants used either dual laser pointers or freehand pointing (along with in-air gestures) to move and rotate a block from a start location to a destination (i.e., dock it).
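For flavor, here's a minimal sketch (not the actual thesis code; all names are illustrative) of how two cursor points can drive the block's translation and rotation: the block follows the motion of the cursors' midpoint and rotates by the change in the angle between the cursors.

using System;
using System.Windows; // WPF's Point struct

class DualCursorManipulator
{
    Point prevMid;
    double prevAngle;
    bool tracking;

    // Called once per frame with the two cursor positions; updates the
    // block's center and rotation (in degrees) in place.
    public void Update(Point c1, Point c2, ref Point blockCenter, ref double blockAngleDeg)
    {
        var mid = new Point((c1.X + c2.X) / 2, (c1.Y + c2.Y) / 2);
        double angle = Math.Atan2(c2.Y - c1.Y, c2.X - c1.X);

        if (tracking)
        {
            // Translate by the midpoint's motion since the last frame.
            blockCenter = new Point(blockCenter.X + (mid.X - prevMid.X),
                                    blockCenter.Y + (mid.Y - prevMid.Y));
            // Rotate by the change in the inter-cursor angle
            // (ignoring wrap-around at +/-180 degrees for brevity).
            blockAngleDeg += (angle - prevAngle) * 180.0 / Math.PI;
        }

        prevMid = mid;
        prevAngle = angle;
        tracking = true;
    }
}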

The positions of the dual "cursors" were determined either by a perspective-based pointing technique or by dual laser pointers (both captured using Vicon cameras).
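Perspective-based pointing casts a ray from the user's eye (approximated by the tracked head position) through the tracked hand, and places the cursor where that ray hits the display. A minimal sketch, assuming both positions are already expressed in a display-aligned coordinate frame with the screen in the z = 0 plane (the coordinate convention is my assumption, not the paper's):

using System;

struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

static class PerspectivePointing
{
    // Intersect the head-to-hand ray with the display plane z = 0.
    // Returns false when the ray points away from the screen.
    public static bool TryGetCursor(Vec3 head, Vec3 hand, out double screenX, out double screenY)
    {
        screenX = screenY = 0;
        double dz = hand.Z - head.Z;
        if (dz >= 0) return false;   // hand is not between the head and the screen

        double t = -head.Z / dz;     // parametric distance along the ray to z = 0
        screenX = head.X + t * (hand.X - head.X);
        screenY = head.Y + t * (hand.Y - head.Y);
        return true;
    }
}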

The docking task was meant to be representative of large display interactions (e.g. during a presentation).

This is a screenshot of the experimental task we asked participants to perform repeatedly, with varying target sizes and orientations.

Here's a video with this system in action:


Software: C# with WPF
Hardware: Vicon motion capture cameras, plus custom-designed and custom-built markers for tracking hand and head position
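Each custom fixture carried a cluster of retro-reflective spheres. One common way to reduce such a cluster to a single tracked point (my assumption about the approach here, not a detail from the paper) is to average the marker positions the motion capture system reports:

using System.Linq;

static class MarkerTracking
{
    // Reduce a cluster of retro-reflective spheres to one tracked point
    // by taking the centroid of the currently visible markers.
    public static (double X, double Y, double Z) Centroid((double X, double Y, double Z)[] markers)
    {
        return (markers.Average(m => m.X),
                markers.Average(m => m.Y),
                markers.Average(m => m.Z));
    }
}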

Excerpt from the paper published in IJHCS'12:

Over the past few years, interactive large displays have gained traction as a vehicle for public and large-scale media—with applications in advertising, information visualization, and public collaboration (Ball & North, 2007; Brignull & Rogers, 2003). For example, CityWall, a large multi-touch display installed at a central location in Helsinki, provided people with an engaging and highly interactive interface in an urban environment (Peltonen et al., 2008). The popularity of large interactive displays in these applications can, in large part, be attributed to their significantly increased screen real estate, which provides more pixels for collaboration, higher densities of information, or better visibility at a distance (Bi & Balakrishnan, 2009). Since large displays provide more physical space in front of the display, they also allow for multi-user applications that are not easily accommodated or communicated via standard desktop monitors (Vogel & Balakrishnan, 2005).
We believe this presents an opportunity to explore interaction techniques that capitalize on the inherent strength of large displays—greater screen real estate—when physical input devices are not readily available. While many innovative techniques have been proposed in the literature to deal with the difficulties in pointing at hard-to-reach parts of a large display, the majority focus on within-arms-reach interactions through touch or multi-touch, with the underlying assumption that the user stands sufficiently close to the screen to touch its surface (Brignull & Rogers, 2003; Myers et al., 2002; Peltonen et al., 2008). Alternatively, they require users to navigate a mouse cursor using some form of traditional pointing device (Baudisch et al., 2007).

Related Publication(s)

Amartya Banerjee, Jesse Burstyn, Audrey Girouard, and Roel Vertegaal. 2012. MultiPoint: Comparing laser and manual pointing as remote input in large display interactions. International Journal of Human-Computer Studies (IJHCS).