Think about the ways we’ve been navigating around computers and information systems for the past four decades: keyboards, mice, touchscreens, even data gloves. What do they have in common? They are all designed for people who can use their hands as part of their information task. But what about the workforce that needs to keep their hands on their tools and equipment, the people working on assembly lines, in warehouses or at customer locations and job sites?
Voice offers hands-on professionals a much-needed interface
Help for hands-on workers may be on the way from a technology that has seen breakout success in the consumer world. Digital assistants like Amazon Echo and Apple’s Siri let people interact with devices, find information online and perform complex tasks, all through the power of voice. Imagine how that could help workers who need to be connected to information but don’t have hands free to operate a keyboard or touchpad.
In the industrial environment, that capability needs to follow workers around the factory or warehouse. Fortunately, voice is being bundled with another technology to assist hands-on workers: smart glasses. Smart glasses offer a viewing experience that doesn’t require people to look away from what they’re doing to look at a display screen or paper document. Instead, these head-mounted displays connect workers to information like checklists, maps, product documentation, data outputs from connected machines and even instructional videos in their field of view. It’s part of a subset of augmented/mixed reality technologies that we call assisted reality.
The utility of AR and smart glasses raises a question of UX design: How should hands-on workers interact with information presented to them on a display device that is not equipped with traditional inputs, in work scenarios that do not allow for them to hold or interact with even a simple mobile phone or touchscreen?
Voice interaction solves this problem. Workers can issue simple voice commands like “mark all steps complete” or “open next task” to invoke the powerful capabilities of the system. Some software allows for voice-to-text transcription, turning spoken words into documents, annotations on a picture or process, or communications with a remote colleague or expert. As artificial intelligence capabilities mature, some software will be able to accommodate context-based queries (“Where does this part go?” or “Am I doing this right?”) that will allow people to learn faster and work faster with less effort, greater confidence and fewer errors.
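At its simplest, the mapping from a recognized phrase to a system action can be a lookup table. The Python sketch below is purely illustrative, with hypothetical command phrases and task fields, and is not any vendor's actual implementation:

```python
# Hypothetical sketch: dispatch recognized voice phrases to checklist actions.
# Phrases, handler names and the task structure are illustrative assumptions.

def normalize(phrase: str) -> str:
    """Lower-case and collapse whitespace so matching is forgiving."""
    return " ".join(phrase.lower().split())

def mark_all_steps_complete(task: dict) -> dict:
    task["steps_done"] = task["steps_total"]
    return task

def open_next_task(task: dict) -> dict:
    task["id"] += 1
    task["steps_done"] = 0
    return task

# Map spoken command phrases to their handlers.
COMMANDS = {
    "mark all steps complete": mark_all_steps_complete,
    "open next task": open_next_task,
}

def dispatch(phrase: str, task: dict) -> dict:
    handler = COMMANDS.get(normalize(phrase))
    if handler is None:
        return task  # unrecognized phrase: leave task state unchanged
    return handler(task)

task = {"id": 7, "steps_total": 5, "steps_done": 3}
task = dispatch("Mark ALL steps complete", task)
```

In practice the speech engine supplies the recognized phrase, and richer systems replace the exact-match table with intent classification, but the shape of the problem is the same.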
What’s muzzling voice in the enterprise?
The technology that enables this isn’t science fiction. Companies like Apple, Amazon and Microsoft have already invested billions in making it real and useful for consumers around the world. So why hasn’t it taken off in the enterprise?
There are two main reasons. First, voice recognition is largely irrelevant to the desk-based knowledge workforce that commands the majority of IT spending and attention, so it hasn't been high on the agendas of CIOs tasked with provisioning business systems. As companies turn their attention to Industry 4.0 and an era of smart, connected machines, investments that empower the frontline workforce in manufacturing, logistics and field service will start to increase as well.
The second big stumbling block is the technical architecture of the systems themselves. Voice recognition is powered by machine learning systems that are constantly updating based on millions of user interactions. These systems, and the accuracy they afford, require the kind of massive processing power and back-end data that resides in the cloud; and for some customers, the idea of any public cloud implementation raises concerns about data security, access control, user privacy and legal risk. Consequently, enterprises that are reluctant, for whatever reason, to migrate business applications to the cloud cut themselves off from these kinds of advanced capabilities.
Speaking up for higher productivity
Businesses that take a go-slow approach on voice-enabled work processes are missing a big opportunity, and may be putting themselves, along with their partners, customers and workers, at risk of being out-produced by competitors that have already learned to accelerate by making voice an available tool for the workforce.
To the vendors supplying hardware and AR devices for the industrial enterprise, such as Google, Vuzix and RealWear, we say: offer more robust support for voice functionality, with better-quality microphones, rugged design and on-board noise reduction that meet the needs of the industrial workplace.
To our partners developing speech engines and system software, we communicate enterprise concerns about data security and privacy, and push for features that let customers configure and control how their data is processed and where it resides in the cloud.
Finally, we say to industrial enterprise customers: approach cloud-based systems with an open mind, especially when they promise to unlock capabilities that can help your business and employees operate more productively, with greater safety and quality control.
We have been using voice input as a keystone of our AR technology approach for years and see it delivering real results in customer scenarios every day. Eventually, when paired with future technologies like computer vision and more immersive augmented reality, it will open a whole new set of possibilities for advanced manufacturing and other industries.
So let’s start the conversation.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.