Can you believe that Siri has been with us for over a decade? And what a difference a decade makes!
What was, for many, a curiosity or source of fun (You: “What does Siri mean?” Siri: “It’s a riddle wrapped in an enigma, tied with a pretty ribbon of obfuscation”) has evolved into a principal tool people use to interact with their technology.
From the phone in your pocket to voice assistant devices like Alexa or Google Home speakers, there’s no denying that talking to technology has become a regular part of daily life.
Market research in the last few years agrees, clearly showing that voice recognition is no longer science fiction but a mature technology in widespread use:
- In 2013, Google’s voice recognition AI could recognize 77% of spoken words. By 2019, its accuracy had risen to 97%.
- According to Google, 72% of people who use voice recognition have embedded it into their daily routines.
- Forecasts predict that by 2024, 4 billion digital voice assistants will be in use in devices globally, roughly one for every two people on the planet.
Back in 2018, we shared a post discussing a research project called ‘LOTTIE’ (LabVantage Open Talk Interactive Experience) – a digital voice assistant for LIMS our team was working on. With the release of LabVantage 8.7, voice commands can go beyond the commercial devices in your home and on the go — they’re now available in your LIMS.
How voice recognition works
Voice recognition begins with computer software designed to hear and interpret human speech. The software listens as you talk, digitizing what you say into a format that can be used for further processing and analysis. The data from your device is then interpreted by a speech recognition server, often making a lightning-fast journey through the internet and back – pausing just a millisecond or less for encryption to protect your privacy.
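To make that round trip a little more concrete, here is a minimal sketch in Python using the open-source SpeechRecognition library (not a LabVantage component): it captures audio from a microphone, sends it to a cloud speech-to-text service, and returns the transcribed text for further processing.

```python
# Minimal sketch of the capture -> digitize -> recognize round trip, using the
# open-source SpeechRecognition library (pip install SpeechRecognition pyaudio).
# Illustrative only; this is not how LabVantage implements its voice support.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:               # capture and digitize the spoken audio
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)

try:
    # The audio is sent to a speech recognition service (Google's Web Speech API here)
    # and comes back as plain text, ready for further processing.
    text = recognizer.recognize_google(audio)
    print(f"You said: {text}")
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as err:
    print(f"Could not reach the recognition service: {err}")
```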
The world’s leading speech recognition systems are driven by ever-improving artificial intelligence (AI) and machine learning (ML) tools, designed to sift through huge amounts of data and detect common patterns, which are used to create language-analyzing algorithms.
Google’s AI, for example, learned how to be more conversational in English by reading thousands of romance novels. And while we haven’t been scanning the steamy Bridgerton novels into LabVantage LIMS, we have given it an education in a wide range of commands to allow precise control of the LIMS.
The more you use speech recognition tools, the better they understand you. This is because they learn common patterns in your speech over time. The more patterns the tools hear, the more accurately they can compare what you’re currently saying to what you’ve said in the past. In this way they can improve comprehension, even to the point of understanding your regional dialect.
A voice crying “eureka!” in the laboratory
As one of the first LIMS to enable voice commands, LabVantage uses a property-driven configuration that allows one or more commands to be executed either by voice or with the click of a button. Natural language processing, combined with an expression-based language, enables the system to execute actions, calculate or query data, enter results, and navigate the LIMS. It's also fully customizable and extensible, allowing you to define your own unique commands or skills.
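To illustrate the general idea of mapping spoken phrases to LIMS actions, here is a purely hypothetical Python sketch. The phrases, action names, and dispatch logic are invented for illustration and do not reflect LabVantage's actual configuration syntax.

```python
# Purely hypothetical illustration -- not LabVantage's actual command configuration.
# The idea: a property-driven mapping ties a spoken phrase to the LIMS action it triggers.
VOICE_COMMANDS = {
    "create a new sample":  {"action": "navigate", "target": "sample/add"},
    "show pending samples": {"action": "query", "filter": "status = 'Pending'"},
    "enter result":         {"action": "enter_result"},
}

def handle_utterance(text):
    """Match recognized speech against the configured commands.
    This uses a simple prefix match; a real system would rely on
    natural language processing and slot filling instead."""
    spoken = text.lower().strip()
    for phrase, command in VOICE_COMMANDS.items():
        if spoken.startswith(phrase):
            print(f"Dispatching '{command['action']}' for utterance: {text!r}")
            return command
    print(f"No matching command for: {text!r}")
    return None

# Example: handle_utterance("Show pending samples for stability testing")
```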
LIMS voice-recognition capabilities offer significant benefits to your lab, including:
· Hands-free commands and dictation
Voice recognition enables users to dictate experimental findings or sample results, automating transcription into your Electronic Laboratory Notebook (ELN). This can significantly increase productivity, because the LIMS can capture speech in real time — much faster than most people can type.
Hands-free interaction is also a critical feature in any situation where touching a LIMS mobile device might compromise the cleanliness of a test or transfer a substance that could damage the device. Voice commands enable researchers wearing protective gloves to navigate the system and continue working without interruption. Finally, a voice-controlled LIMS opens up many new potential uses. Imagine extending LIMS capabilities to areas such as cleanrooms or other environments where hands can't be used.
Researchers with disabilities or conditions that make it difficult to use a mouse or keyboard, such as problems with sight or carpal tunnel syndrome, can benefit significantly from speech-to-text capabilities. Voice recognition makes these workers more comfortable and productive and may even enable you to attract additional team members at a time when skilled workers are in shorter supply.
· Streamlined client interactions
The latest version of LabVantage LIMS includes a redesigned web portal that lets you extend appropriate access rights to your clients outside the laboratory. This new Portal protects your valuable and sensitive data, while eliminating the need for your customers to manually request tests and other services. By supporting voice commands, this next-generation user interface makes Portal features easier than ever to use, especially for users unfamiliar with LIMS who need access to data outside the lab.
· Task automation
The LIMS assistant can be given access to LIMS data, enabling it to take readings, enter results, and analyze data by voice command.
· Improved accuracy, productivity, and efficiency
Using a digital assistant reduces the risk of human error when data is transcribed. And because the system can direct multiple tasks simultaneously, data collection also becomes more efficient.
A voice you can trust
LabVantage researchers have spent years developing voice recognition for our LIMS environment. Working closely with customers who opted to beta test the tool, we've been evaluating the voice capabilities detailed in this article and resolving issues as they arise. Though still in its early days, Voice Command is an exciting new interface capability with the potential to extend and expand the role of LIMS in today's labs.