Duke Incorporates Voice Recognition into its EHR
Duke’s health system includes about 1,000 inpatient and ambulatory care physicians who will be able to use the technology as an alternative to the click-and-type entry method, giving doctors flexibility in how they make the transition from paper to digital records.
“We’re confident that this technology will effectively support our system-wide implementation of an electronic health record by providing more options for providers to efficiently capture critically important information and leverage this data for improved care decisions,” Art Glasgow, chief information officer of Duke University Health System, said in a news release.
M*Modal’s Speech Understanding service recognizes conversational speech so physicians don’t have to use structured language or verbal cues. The entered data, however, is automatically structured and encoded so that it can be shared with other systems.
Using natural language, doctors can talk to their EHR much the way they talk to Siri on their iPhones.
In a recent article, Norman Winarsky and Bill Mark, who helped found the Siri venture, described how far voice recognition technology has come and how far it might go.
“Using speech instead of keyboards to communicate with computers is an old dream, but it took more than thirty years to achieve the robustness and performance needed to make speech systems practical for consumers,” they wrote.
Computers’ recently acquired ability to understand natural language is what now makes voice recognition practical enough for consumers to use in everyday life, and for doctors to use in a clinical setting.