Device adoption in US Internet households continues to grow

The potential for vision-based interfaces is broad across the consumer, enterprise, industrial, medical, transportation, military, and other industry verticals. The subset of consumer technology devices that people operate via touchscreens or tactile input reveals the scope of this opportunity. Popular devices that use touch-centric interactions include smartphones, tablets, smart TVs, and laptops.

Currently, US households have an average of 16 connected devices, including 11 from the mature CE category, three from the smart home category, and two from the connected health category.

The smartphone is the most popular of all touch-based devices and the focal point of human-technology interaction. Smartphones are present in 87% of US Internet households.


Previous attempts at vision-based interaction in the smartphone industry have been unsuccessful, hampered by the limited computer vision capabilities of smartphone processors and cameras, early eye tracking on tablets and smartphones, or near-field gesture sensors. Google Glass is the most prominent example of a vision-based product that arrived with great fanfare but ultimately failed. The lack of consumer interest stemmed from several factors, including high cost, poor UI/UX design, poor battery life, and no significant, tangible improvement over established touch interaction. Some smartphones continue to use the front-facing camera for attention tracking, but the industry has generally kept touch as the primary mode of smartphone interaction.

In terms of control and interaction, the dynamics of tablets are almost identical to those of smartphones, but with an even larger screen and touch input area. As with smartphones, a limited amount of touchless interaction is already possible today, mostly through voice assistants. Tablets are in 64 percent of US Internet households.

Smart TV manufacturers face constant pressure to differentiate their products, and control and user interfaces are an ongoing focus and a differentiating experience. Smart TVs are now in 63 percent of Internet households.

While TV controls have evolved somewhat, the dominant method of control is still the device’s remote control or a remote control app on a mobile device. Consumers can use voice input to control the TV via built-in microphones or smart speakers, but this method is merely an addition to what remains a largely manual control paradigm. Even for the leading streaming video products, such as smart TVs and streaming media players, voice control sees limited use: 17 percent of smart TV owners use voice to control video every day.


Laptops generally rely on two main input methods: a keyboard and a touchpad or an attached mouse. Some models have expanded these methods with touchscreens, cameras, fingerprint sensors, and microphones, but these devices are mostly associated with tactile input methods. Laptops are in 77 percent of US Internet households.

38 percent of US Internet households have at least one smart home device, such as a smart thermostat, smart door lock, video doorbell, or smart light bulb. Dedicated apps are the primary control method for smart home device owners, but 28 percent say their primary method of control is voice, either through the device’s native microphones and processing power or by issuing voice commands to a smartphone, computer, tablet, or smart speaker (typically via a voice assistant). Voice control is particularly common with lamps and smart plugs, where users may operate devices while seated or with their hands full.

As mentioned, anything with a camera is a candidate for vision-based interaction (VBI). Some security systems use embedded video to enhance the user experience, with facial recognition that can distinguish a family member from a visitor or intruder. This could be an area of the smart home ecosystem where VBI makes early inroads, such as enabling a recognized head of household to disarm or alert the system through specific eye movements or gaze.


As of 2022, 13 percent of US Internet households own VR headsets, more than double the 2019 rate. VR systems require several forms of input and positioning data to properly place users in a virtual space. Accelerometers, gyroscopes, and magnetometers help measure a person’s rotational motion in space. Some systems also use special markers and depth-sensing cameras to obtain positional data. Accessories such as hand controllers and gloves can provide additional information. Finally, eye tracking serves the purpose of aiming and navigating within the VR headset screen. Eye trackers measure both eye movement and the point in the distance on which the headset wearer’s eye is focused (the gaze point).
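To illustrate how a gaze point can be derived from this sensor data, the sketch below combines a head orientation (the kind produced by fusing accelerometer, gyroscope, and magnetometer readings) with an eye-tracker gaze direction, and intersects the resulting ray with a virtual screen plane. All function names and the flat-screen geometry are illustrative assumptions, not a description of any specific headset's pipeline.

```python
import math

def yaw_rotate(v, yaw_rad):
    """Rotate a vector about the vertical (y) axis by yaw_rad radians --
    a stand-in for the head orientation produced by IMU sensor fusion."""
    x, y, z = v
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x + s * z, y, -s * x + c * z)

def gaze_point_on_screen(eye_origin, gaze_dir, screen_z):
    """Intersect a gaze ray with a virtual screen plane at z = screen_z.

    eye_origin and gaze_dir are (x, y, z) tuples in headset coordinates.
    Returns the (x, y) coordinates where the ray meets the plane, or
    None if the ray points away from the screen.
    """
    ox, oy, oz = eye_origin
    dx, dy, dz = gaze_dir
    if dz <= 0:                      # ray never reaches the screen
        return None
    t = (screen_z - oz) / dz         # ray parameter at the plane
    return (ox + t * dx, oy + t * dy)

# Eyes look straight ahead while the head is turned 10 degrees:
# the gaze point shifts horizontally across the virtual screen.
direction = yaw_rotate((0.0, 0.0, 1.0), math.radians(10))
point = gaze_point_on_screen((0.0, 0.0, 0.0), direction, screen_z=1.0)
```

For a screen one unit away, a 10-degree head turn moves the gaze point by tan(10°) ≈ 0.176 units horizontally, which is the kind of aiming signal a VR interface can use for selection and navigation.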

These devices are only a subset of the consumer technology devices that could take advantage of vision-based interaction to replace or augment their existing touch-based input. Their high adoption today suggests how broadly vision-based interaction capabilities could reach into everyday consumer interactions with mainstream technology devices.

Vision-based interaction can be applied to these common devices and potentially augment and/or replace common interface modalities.

This is an excerpt from the Parks Associates white paper “Vision-Based Technology: Next-Generation Control,” produced in cooperation with Adeia. The white paper explores these and other potential use cases where vision-based interfaces will improve the consumer experience and expand consumers’ ability to interact with and control their devices. Download it now to explore vision-based solutions that can improve and simplify how consumers navigate and select content on these devices.
