Voice, gesture, and touchless interfaces: Is it time to retire screens?

Image credit: iStock

Touchless interfaces are becoming more accessible as companies develop platforms that deliver a seamless user experience.
    • Author: Quantumrun Foresight
    • January 4, 2024

    Insight summary

    Touchless interfaces that rely on voice, gestures, and sensor recognition are set to revolutionize device interaction, particularly in secure environments like offices. By reducing the need for traditional screens, these intuitive systems promise a more integrated and anticipatory user experience. Key challenges include ensuring reliability, security, and privacy. Broader implications range from potential misuse of biometric data and a growing preference for touchless devices to pressure on display manufacturing sectors and a shift in Internet of Things (IoT) devices toward motion sensors and voiceprints. Together, these trends are shaping the future of computer design, emphasizing biometrics and virtual reality (VR) technology over conventional displays.

    Voice, gesture, and touchless interfaces context

    Touchless interfaces, also known as zero user interfaces (zero UI), enable access to devices through voice, gestures, and sensor recognition. One example is smartphone users' growing reliance on digital assistants to do everything from scheduling reminders and setting alarms to sending messages and emails, simply by saying a command (e.g., "Hey Siri" or "Hey Google"). In the virtual and augmented reality (VR/AR) space, zero UI takes the form of gestures and motion sensors, with head-mounted displays (HMDs) and gloves detecting a person's movements.
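
    To make this concrete, here is a minimal sketch in Python of how a transcribed voice command might be routed to an action. The intent names, keyword lists, and the assumption of a separate transcription step are illustrative only, not any assistant's actual API; production assistants rely on far richer natural language understanding.

        # Illustrative only: route a transcribed utterance to an intent
        # by keyword matching. Intent names and phrases are hypothetical.
        INTENT_KEYWORDS = {
            "add_reminder": ["remind me", "add a reminder"],
            "set_alarm": ["set an alarm", "wake me up"],
            "send_message": ["send a message", "send an email"],
        }

        def route_command(utterance: str) -> str:
            """Return the first intent whose keywords appear in the utterance."""
            text = utterance.lower()
            for intent, phrases in INTENT_KEYWORDS.items():
                if any(phrase in text for phrase in phrases):
                    return intent
            return "unknown"

        print(route_command("Hey Google, remind me to call at five"))  # add_reminder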

    Touchless interactions may eventually provide a superior user experience by reducing the cognitive load of using apps. Furthermore, ambient intelligent interfaces may revolutionize everything from branding and marketing to customer and employee engagement in a hands-free environment. Eventually, computer displays and smartphone screens may become obsolete as smart glasses and contact lenses take over, delivering information and text directly to the wearer's eyes. As digital assistants, chatbots, and smart devices become more interconnected and intuitive, they will function as one cohesive unit that can remotely anticipate the needs of their owners and clients.

    Disruptive impact

    Consultancy firm Ernst & Young points to several features that must be in place for a zero UI to work well. First, companies must ensure that the technology is reliable, meaning that interactions succeed consistently and repeatably. Next, these interfaces should be secured with biometrics and multi-factor authentication. Privacy should also be prioritized, including informing users when their biometrics and other personal data are being used and how often. Finally, customers should be given the choice to learn touchless interfaces alongside traditional ones: instead of removing touchscreen interfaces outright, firms can first give users time to become familiar with voice- and gesture-based technologies.
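
    As a rough illustration of those four principles, the Python sketch below models a single zero-UI request: the biometric check is retried for reliability, each use of biometric data is logged so the user can be informed (privacy), and a traditional touchscreen prompt remains available as a fallback so users keep a choice. Every function here is a hypothetical stub, not a real platform API.

        # Illustrative only: a zero-UI request honoring reliability,
        # security, privacy, and user choice. All functions are stubs.
        import logging

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("zero_ui")

        def verify_biometric(sample: bytes) -> bool:
            # Stand-in for an on-device voiceprint or face match.
            return len(sample) > 0

        def prompt_touchscreen(action: str) -> bool:
            # Traditional interface kept as a familiar fallback.
            log.info("Touchscreen confirmation requested for: %s", action)
            return True

        def handle_request(action: str, sample: bytes, retries: int = 2) -> bool:
            for attempt in range(1, retries + 1):
                # Privacy: record (and surface to the user) each biometric check.
                log.info("Attempt %d: using biometric data for '%s'", attempt, action)
                if verify_biometric(sample):
                    return True  # Security: proceed only after a successful match.
            # Choice: never dead-end; fall back to the touchscreen flow.
            return prompt_touchscreen(action)

        handle_request("unlock meeting room", sample=b"voiceprint")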

    Meanwhile, gesture control is rapidly improving to support a dynamic VR community. Far-reaching applications of this technology may include interfaces that convert sign language into speech, allowing people with hearing or speech impairments to communicate without an interpreter. In the automotive space, computer vision company Clay AIR is working with car and VR/AR vendors on hand-tracking and gesture recognition technology to provide hardware-agnostic solutions. Automaker Renault has teamed up with Clay AIR to boost safety and convenience for drivers by incorporating a new layer of security. In addition, Clay AIR's technology complements voice user interfaces by turning complicated instructions into fast, easy shortcuts, and gesture actions support multi-step interactions through movements that are intuitive and easy to learn. Drivers can use precise in-air movements to control Android Auto navigation through the app.
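
    As a simple sketch of how recognized gestures can become shortcuts, the Python snippet below binds gesture labels to in-car actions. The gesture names, the actions, and the on_gesture callback are invented for illustration and do not reflect Clay AIR's or Android Auto's actual interfaces.

        # Illustrative only: bind recognized gesture labels to shortcuts.
        from typing import Callable, Dict

        def show_next_turn() -> None:
            print("Showing the next navigation turn")

        def answer_call() -> None:
            print("Answering the incoming call")

        def dismiss_notification() -> None:
            print("Dismissing the notification")

        GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
            "swipe_right": show_next_turn,
            "open_palm": answer_call,
            "swipe_down": dismiss_notification,
        }

        def on_gesture(label: str) -> None:
            # Called by a (hypothetical) gesture recognizer with a label.
            action = GESTURE_ACTIONS.get(label)
            if action is not None:
                action()

        on_gesture("swipe_right")  # prints "Showing the next navigation turn"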

    Wider implications of voice, gesture, and touchless interfaces

    Possible implications of voice, gesture, and touchless interfaces may include: 

    • Designs for future computers incorporating more biometrics and VR technology rather than high-definition screens and displays.
    • Potential misuse of facial and voice recognition data for hacking and identity theft.
    • More people preferring to use touchless devices for their convenience and low cost.
    • Manufacturers of displays, such as LCD and LED screens, losing business, leading to unemployment within the sector.
    • Internet of Things (IoT) devices transitioning from fingerprint authentication to motion sensors and voiceprint authentication techniques.

    Questions to comment on

    • Do you prefer to interact with touchless interfaces?
    • What are the other benefits of having zero UI everywhere?

    Insight references

    The following popular and institutional links were referenced for this insight: