How do we rapidly adapt a world built for touch to zero-touch?

In the past, Second Story has used touchless interaction techniques to delight audiences and fully immerse them in the environments we create. In response to the pandemic, we’ve been exploring more practical applications for our zero-touch interaction library. These methods can be put to use almost immediately to retrofit existing touch-based interactions in a manner that is realistically scalable and cost-effective, and that requires minimal fabrication and technical effort. We’ve used rapid prototyping to enable quick discovery and dialogue, a critical part of exploring technical feasibility and designing exceptional experiences for the future.
Hand Gestures

Ultraleap’s Leap Motion Controller is an infrared-camera-enabled computer vision device with the ability to recognize complex hand gestures in amazing detail. As a result, it’s a highly effective controller for immersive entertainment experiences like VR. While it has the potential to tackle more common digital navigation tasks, widespread adoption will hinge on standardizing simple, intuitive gestures like the ones we’ve prototyped here.
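One open question is what counts as a reliable “click” in mid-air. Below is a minimal, device-agnostic sketch of the kind of gesture standardization we mean, assuming the tracking SDK supplies a per-hand pinch-strength value between 0 and 1 each frame (Ultraleap’s API reports such a value); the thresholds and sample stream here are illustrative, not our production values.

```python
def pinch_events(strengths, on=0.8, off=0.4):
    """Convert a stream of 0-1 pinch-strength samples into discrete events.

    `strengths` would come from the hand-tracking SDK each frame. The two
    thresholds form a hysteresis band: a strong pinch is needed to trigger,
    and a clear release is needed before the gesture can fire again, so a
    wavering hand doesn't produce a burst of accidental "clicks".
    """
    pinched = False
    for s in strengths:
        if not pinched and s >= on:
            pinched = True
            yield "pinch"
        elif pinched and s <= off:
            pinched = False
            yield "release"

# A jittery hand hovering near the trigger threshold fires only once:
samples = [0.1, 0.5, 0.82, 0.78, 0.85, 0.6, 0.3, 0.1]
print(list(pinch_events(samples)))  # ['pinch', 'release']
```

The same hysteresis pattern applies to any mid-air gesture that replaces a binary control, which is exactly where standardization pays off.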
IR Touch Frame Overlays

Infrared (IR) touch frames, produced by companies such as NEC and PQ Labs, have been used for years to turn passive digital signage into interactive touchscreens. The typical setup involves a sheet of protective glass coupled with an IR frame overlay, mounted on top of a digital display. We wondered: if we removed the glass and set the touch frame farther away from the digital display, could this be a viable touchless technology? To explore this option, we created a retail prototype using an empty IR touch frame overlay set back 6 inches from the digital display. Though it works from a technical perspective, we discovered that without the haptic feedback of the glass, some gestures, like pinch-to-zoom, will need to be reimagined.
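As one example of that reimagining, a “tap” with no surface to press against can be replaced by a dwell gesture: the finger breaks the IR plane and holds roughly still for a moment. The sketch below is hypothetical; the frame’s driver simply reports touch coordinates, so the sample format and thresholds are assumptions to tune per installation.

```python
import math

DWELL_SECONDS = 0.6   # how long a finger must hover to count as a "press"
DWELL_RADIUS = 25     # pixels of wobble tolerated while dwelling

def dwell_clicks(points):
    """Turn raw (timestamp, x, y) samples from the IR frame into clicks."""
    anchor = None  # (t, x, y) where the current dwell started
    for t, x, y in points:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if math.hypot(x - x0, y - y0) > DWELL_RADIUS:
            anchor = (t, x, y)          # finger moved: restart the dwell
        elif t - t0 >= DWELL_SECONDS:
            yield (x0, y0)              # held still long enough: click
            anchor = None               # require re-entry before next click
```

A two-finger gesture like pinch-to-zoom could be remapped along the same lines, for instance to a dwell-then-drag, since the frame tracks position in its plane just as a touchscreen would.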
Voice Command

When considering deploying voice recognition technology in public spaces, it’s easy to imagine a future where everyone is shouting out commands everywhere they go. Thoughtful use of this modality will be key to helping us avoid that fate. Voice should be used for interactions that already have a precedent for being dialogue-based (such as ordering food) or in situations where being hands-free would genuinely benefit the experience (like back-of-house operations where hands are often otherwise occupied). We built our elevator and quick-service restaurant (QSR) ordering prototypes using Microsoft Azure Cognitive Services.
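For a sense of how little plumbing this takes, here is a minimal sketch of the ordering flow using Azure’s Speech SDK for Python: it captures one utterance from the default microphone and matches it against a tiny menu vocabulary. The subscription key, region, and menu items are placeholders, and a production kiosk would lean on Azure’s intent-recognition services rather than keyword matching.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; a real deployment reads these from configuration.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Illustrative menu vocabulary for a quick-service ordering kiosk.
MENU = {"burger", "fries", "shake", "salad"}

print("Listening for an order...")
result = recognizer.recognize_once()  # single utterance from the default mic

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    words = [w.strip(".,") for w in result.text.lower().split()]
    order = [w for w in words if w in MENU]
    print(f"Heard: {result.text!r} -> order items: {order}")
else:
    print("No speech recognized; prompt the guest to try again.")
```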
Hover Buttons

Capacitive touch sensors can imbue non-interactive materials, like wood and fabric, with “magic” by giving them the unexpected ability to sense touch. While capacitive buttons are typically configured to react only to literal touch, we can reconfigure them to detect a “touch” before any physical contact actually happens.
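The trick is in the thresholds. Many capacitive controllers track a slow-moving baseline per electrode and flag a touch when the reading deviates far enough from it; add a second, smaller threshold and the same electrode reports a hover before contact. Here is a sketch assuming an MPR121 breakout and Adafruit’s CircuitPython library (our prototypes’ exact hardware may differ, and the threshold values are placeholders to tune on site).

```python
import time
import board
import busio
import adafruit_mpr121  # Adafruit's CircuitPython MPR121 capacitive-touch library

i2c = busio.I2C(board.SCL, board.SDA)
cap = adafruit_mpr121.MPR121(i2c)

PIN = 0            # electrode wired to the conductive surface
HOVER_DELTA = 4    # counts of baseline deviation that signal an approaching hand
TOUCH_DELTA = 12   # larger deviation that would register as physical contact

# An approaching hand pulls the filtered reading away from the baseline
# well before contact, so a lower threshold fires the interaction early.
while True:
    delta = cap.baseline_data(PIN) - cap.filtered_data(PIN)
    if delta >= TOUCH_DELTA:
        print("touch")
    elif delta >= HOVER_DELTA:
        print("hover")   # react before the finger lands
    time.sleep(0.05)
```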
Body Tracking

Recognizing body gestures through skeletal data or “blob detection” can enable practical new experiences such as navigating large digital canvases, managing crowd flow, or activating environmental controls (lighting, music, conference room configurations, etc.). We’ve leveraged this capability in the past, using depth-sensing cameras from Orbbec and Microsoft to add immersive interactivity to our activations.
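Blob detection on a depth image is straightforward with OpenCV. The sketch below assumes you already have a single depth frame as a NumPy array of millimeter distances (grabbing frames is specific to each vendor’s SDK and omitted here); the depth range and minimum blob size are illustrative values to tune per installation.

```python
import cv2
import numpy as np

def find_people_blobs(depth_mm: np.ndarray,
                      near_mm: int = 500, far_mm: int = 3000,
                      min_area_px: int = 5000):
    """Find person-sized blobs in a single depth frame."""
    # Keep only pixels within the interaction zone in front of the sensor.
    mask = cv2.inRange(depth_mm, near_mm, far_mm)
    # Close small holes so one person reads as one blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Return a bounding box per sufficiently large blob.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area_px]
```

Each returned bounding box can then drive an environmental trigger, such as raising the lights when a blob enters a defined zone.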