Archive for September, 2007

A game of two halves…

If you haven’t come across the news yet: it’s official. Philips sold its remaining 2.5 percent stake in Nuance Communications Inc. for $83 million (€60 million) last Thursday, on the grounds that the American giant, which sells automated telephone answering systems, was not part of its core holdings. In a previous press release, Philips had indeed announced the reorganization of the company into three business units corresponding to its core holdings – healthcare, consumer lifestyle and lighting – while projecting that it would double share value by 2010 by concentrating on these core markets.

Philips expects a gain of $41 million (€30 million) in the third quarter from the sale of its 4.6 million Nuance shares.

A bit of history

  • Philips had acquired shares in Nuance through the 2005 merger of Nuance and ScanSoft, the maker of Dragon NaturallySpeaking.
  • ScanSoft had in turn acquired the Philips Speech Processing Telephony and Voice Control business units, along with related intellectual property, back in 2002. This allowed ScanSoft to strengthen its leadership in the ASR (automatic speech recognition) and telephony markets. It was, however, clearly stated at the time that “the Philips Speech Processing Dictation business was not part of the transaction and would remain within Philips to develop applications for its medical businesses and other professions in the fields of hardware and software dictation systems.” That strategic direction has proved successful for Philips ever since.
  • For those interested, the full Nuance/ScanSoft history is detailed on Wikipedia.

Speech Recognition & Sound Compression

Sound compression is a question CTOs often raise, hence this dedicated thread. SpeechMagic provides high sound compression (Philips CELP – 19.2 kbit/s) to transfer sound data easily over band-limited channels while guaranteeing high recognition rates. The SpeechMagic engine can process the following sound file formats:

  • Philips CELP 16 kHz / 16 bit – 19 kbit/s (SpeechMagic native format) (8.24 MB/h)
  • Philips CELP 8 kHz / 16 bit – 19 kbit/s (SpeechMagic native format)
  • PCM 16 kHz / 16 bit – 256 kbit/s (PC)
  • PCM 11 kHz / 16 bit – 176 kbit/s (mobile input devices)
  • PCM 8 kHz / 8 bit and 16 bit – 64/128 kbit/s (telephone)
  • CCITT A-law, µ-law 8 kHz / 8 bit – 64 kbit/s (telephone)
  • DSS Standard Play
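As a quick sanity check on the figures above, the per-hour storage volume follows directly from the bitrate. A minimal calculation (the helper name is mine, and the 8.24 MB/h figure only works out if MB here means MiB, i.e. 1024 × 1024 bytes):

```python
# Storage needed for one hour of audio at a given bitrate.
def mb_per_hour(kbit_per_s: float) -> float:
    bits_per_hour = kbit_per_s * 1000 * 3600   # kbit/s -> bits in one hour
    return bits_per_hour / 8 / (1024 * 1024)   # bits -> bytes -> MiB

print(round(mb_per_hour(19.2), 2))  # Philips CELP: 8.24 MB/h
print(round(mb_per_hour(256), 2))   # PCM 16 kHz / 16 bit: 109.86 MB/h
```

The same arithmetic makes the appeal of CELP obvious: uncompressed 16 kHz PCM is roughly 13 times heavier over the wire for the same hour of dictation.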

New interactive speech recognition module for clinical notes

Time to kiss those pens goodbye, as Crescendo is about to announce the launch of a new speech recognition based module to accelerate the processing of clinical notes. The new module allows physicians to fill in clinical notes in real time thanks to customized templates, macros, pull-down selections, check boxes, voice commands, front-end speech recognition and electronic signature, all integrated into one single application. The resulting electronic reports can easily be stored and shared with other healthcare professionals.

Upon selecting a patient, the system loads a clinical notes template with bookmarks (e.g., exam time, medication, length of stay, allergies, diagnostics) to guide the physician during the dictation process. The physician can navigate from one bookmark to another using voice commands (e.g., “go to next bookmark” or “go to medication bookmark”). Text is populated on screen as the physician dictates, thanks to the Crescendo front-end speech recognition system powered by SpeechMagic.
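The bookmark navigation described above can be sketched as a simple command dispatcher. This is purely illustrative: the bookmark list and command phrasing come from the examples in this post, not from Crescendo’s actual API.

```python
# Hypothetical sketch of bookmark navigation by voice command.
BOOKMARKS = ["exam time", "medication", "length of stay",
             "allergies", "diagnostics"]

def navigate(current: int, command: str) -> int:
    """Return the index of the bookmark selected by a voice command."""
    if command == "go to next bookmark":
        return min(current + 1, len(BOOKMARKS) - 1)
    for i, name in enumerate(BOOKMARKS):
        if command == f"go to {name} bookmark":
            return i
    return current  # unrecognized command: stay put

pos = navigate(0, "go to medication bookmark")
print(BOOKMARKS[pos])  # medication
```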

The new module also supports HL7 ADT integration to automate the import of patient demographics directly into the report. Once the report is complete, the physician can simply sign it off within the same application.
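For readers unfamiliar with HL7 ADT, patient demographics travel in the PID segment of a pipe-delimited HL7 v2 message. A minimal sketch of extracting them (the message below is a made-up example; the field positions follow the standard PID segment layout, but real integrations use a proper HL7 library rather than hand-rolled splitting):

```python
# Made-up HL7 v2 ADT^A01 message; segments are separated by carriage returns.
ADT = "\r".join([
    "MSH|^~\\&|ADT1|HOSP|CRESCENDO|HOSP|200709171200||ADT^A01|MSG001|P|2.3",
    "PID|1||123456^^^HOSP||DOE^JANE||19620321|F",
])

def patient_demographics(message: str) -> dict:
    """Pull basic demographics out of the PID segment of an HL7 v2 message."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            family, given = fields[5].split("^")[:2]   # PID-5: patient name
            return {"id": fields[3].split("^")[0],     # PID-3: patient ID
                    "name": f"{given} {family}",
                    "dob": fields[7],                  # PID-7: date of birth
                    "sex": fields[8]}                  # PID-8: sex
    raise ValueError("no PID segment found")

print(patient_demographics(ADT))
```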

Automatic and predefined distribution occurs after the report has been signed off, and is configurable by work type, author, department and facility. “Every minute saved on charting time is an additional minute spent with a patient,” notes Costa Mandilaras, President, Crescendo Systems Corporation. “By combining different technologies and productivity features within one single application, Crescendo finally offers a new, consolidated approach to the management of clinical notes.”

On the integration end, like all Crescendo products, the new Clinical Notes module offers seamless interfaces with major PACS and RIS systems and supports thin-client infrastructures (Citrix Presentation Server™ 3.5 and later versions) and terminals such as Wyse Windows Terminals.

The new module will be available for demonstration at the Crescendo booth during the AHIMA (Oct 8-11, 2007 – booth #855) and RSNA (Nov 25-30, 2007 – booth #6451) tradeshows. Online demos can also be requested at any time from the Crescendo web site.

Speech Recognition: Impact on the Medical Transcription Profession

I would like to respond to the following comment posted by user “syed irfan” this morning.

Thank you for your input, Syed. Yes, the medical transcriptionist profession is changing as speech recognition becomes widely implemented within healthcare facilities. Moving from creating 100% of a report to an editor/correctionist role, the stakes are not the same. Even medical knowledge is no longer as critical for the position.

A large percentage of physicians will indeed no longer depend on a transcriptionist to issue reports and clinical notes. That is particularly true for departments like the ER, where front-end speech recognition allows physicians to issue and correct reports as they dictate, thereby releasing medical documentation before the patient is discharged. It is also true for radiology, where large volumes of standard reports (e.g., normal findings) are typically processed.

That being said, it is important to keep in mind that this is not, and probably never will be, the case for ALL physicians. While some will simply never adapt, others will never reach a good enough recognition rate (God knows why). On the other hand, front-end speech recognition does not necessarily make sense for all departments, since some physicians simply don’t have time to make their own corrections. From a vendor perspective, I can tell you that we see more back-end speech recognition being rolled out than front-end. And where there is back-end speech recognition, there is and always will be a review process involved. Even if the speech recognition engine achieved 100% accuracy, a human being would still be required at some point to validate that number. For many years to come, I don’t see anybody, let alone a healthcare facility, leaving life-sensitive data in the hands of a machine, as powerful as it may be, without a human stamp of approval at some point in the process. That’s why, even though the MT profession might be vanishing, I truly believe the medical editor (ME) role is here to stay. And not just for the next five years…

Related thread: Speech Recognition: The End of Medical Transcription?

Red handed: Radiologist caught playing with speech recognition off hours…

Just to start the week on a fun note…

Experts Discuss Speech Recognition & EMR Synergy

In the latest issue of For the Record magazine, a panel of industry experts including Eric Fishman, MD, founder of, Nick van Terheyden, MD, chief medical officer at Philips Speech Recognition Systems, and Kathy LePar, RN, MBA, a senior manager at Beacon Partners, discusses the role of speech recognition and EMR integration as a response to the ever-growing mountain of healthcare documentation; a very instructive “state of the union” address providing answers to healthcare organizations’ questions on the subject, from challenges and benefits to pitfalls and trends. Here are a few interesting extracts.

SR’s Resurgence

“The medical profession is overwhelmed with data,” says van Terheyden, who estimates the amount of data doubles every 18 months. “A typical patient looks to his healthcare provider to know what is best for him. But the idea that they know the latest and greatest information is impossible.”

The EMR is a critical piece of technology that can corral patient data into one complete record. Implementation of it, however, is often difficult for financial reasons, as well as because of resistance from physicians who fear substantial interruption in their workflow. But according to Eric Fishman, MD, founder of, the EMR acceptance rate “goes up astronomically when physicians know that speech recognition will be part of the implementation.” …

Implementing Speech Recognition (SR) With the EMR

While experts agree that using SR in conjunction with EMRs is important, they have varying opinions on exactly how SR can best serve the physician, patient, and healthcare facility. Nonetheless, experts concur that, from their experience, improving workflow and patient care and reducing costs are among the top benefits.

“There is no one perfect solution,” says Fishman. Finding technology that works best for an organization can include using a combination of dictation, SR, templates, and revisionists (also known as medical transcriptionists).

Additional Applications

Feeding dictated information directly into the EMR is another way some healthcare facilities are using SR technology. According to van Terheyden, voice recognition used solely as a narrative note doesn’t achieve optimal results following EMR integration because it doesn’t always link data points to the record and can’t be queried.

Egerman believes that a balance of textual and objective data can be the most valuable, as there is always anecdotal data from a patient consultation that can’t be captured on a typical “point-and-click” screen that physicians use. He points out how a patient’s social situation is an important aspect that can’t always be captured in the objective data. “For example, a healthy, 96-year-old woman may be at the physician’s office for her yearly check-up,” says Egerman. That noteworthy anecdotal data is important to be aware of, but, nonetheless, may not be included on a physician’s transcript if only EMR objective data is used.

Possible Pitfalls

“Physicians are so comfortable with SR and the dictating process that, often times, the narrative documentation replaces the point-and-click process of entering data elements, which is necessary to derive outcome data,” says LePar. Many programs already have the capacity to extract meaningful data embedded into their systems, and therefore, it is important to use the narrative fields in conjunction with inputting the data points, she says.

Improving Patient Care

Because information is documented immediately, patient information can be sent in real time to doctors, referrals can be made quickly, and the possibility of medical errors is reduced. From a management standpoint, transcription costs are reduced, and billing can be done in a timelier manner because documentation is completed sooner.

SR, when integrated into the EMR, “ties it back to clinically actionable data,” says van Terheyden, and can tie it in with clinical coding. He believes this is the future of SR because it drives actions that physicians are desperate for. It also keeps physicians up-to-date on the most current information, thus improving patient care.

Benefits to Healthcare Organizations

At first, physicians may view integrating SR technology with EMRs as a cost because it appears to take more time. However, reports are actually turned around more quickly and have better data from which to draw patient care information. In addition, because patient information is delivered to the EMR more quickly, physicians can bill almost instantaneously. “The reality is, you have to pay bills, so now you don’t miss any information and get fair compensation for work done by capturing it at point of care,” says van Terheyden.

Evolving Process

The latest versions of SR technology for use in conjunction with EMRs have “made more of an impact than what was available several years ago,” according to LePar. While she says it takes on average five to 10 hours of use to “train” a physician’s voice, the time spent is well worth the effort in the long run.

“I see it being used much more frequently than in the past. This technology is what many of the physicians are requesting,” says LePar.

> Read full article
