Archive for the ‘Management’s perspective’ Category

The Ten Commandments of Professional Speech Recognition

Ten Commandments From Stone to RFP…

I decided to turn my original Ten Commandments of Speech Recognition document into a more comprehensive list of the critical elements to consider before delving into the RFP writing process. Why? Because I believe that only a well-documented Request for Proposal, one that reflects the operational, technological and legal issues at stake, will provide the framework for the expected productivity and workflow improvements – your own organization’s Holy Grail.

> Download Tablets (1.2 Mb)

The Future of Clinical Dictation and Transcription

In the latest issue of For the Record Magazine, Robbi Hess discusses the pros and cons of the once-and-done (OAD) transcription model, another name for front-end speech recognition, whereby physicians see the results of their dictation on screen and make their own corrections. First, the article highlights findings from a Gartner healthcare research report by Barry Hieb, MD, titled “The Evolving Model of Clinical Dictation and Transcription:”

The lack of efficiency and the money that leaks through transcription cracks have always been issues in the healthcare industry. The role of dictation and transcription in clinical documentation is evolving in response to new technologies and new functional requirements…

Traditional dictation and transcription are giving way to ‘editor-based’ approaches, and once-and-done dictation will eventually be adopted in the majority of situations. […] Because speech recognition makes increased productivity and associated cost savings possible, it is now an integral part of most new dictation and transcription contracts.

The OAD dictation model will take longer to unfold but will be driven by the need to provide value back to physicians at the time they are dictating reports.

The importance of the physician’s sign-off
The author then quotes Peter Preziosi, PhD, CAE, CEO of the Medical Transcription Industry Alliance and executive director of the Association for Healthcare Documentation Integrity (AHDI):

The reality today is that you have physicians that don’t even sign off on charts even though they are legally liable for the content. My concern is that when we look at building a national healthcare information infrastructure, it will be that much more critical to ensure the accuracy and completeness of information. The onus of chart accuracy must be on the clinician, not the MT, making the need for clinician sign-off imperative.

Back-end vs Front-end Speech Recognition
Hieb says using speech recognition technology cuts significant costs from the entire transcription cycle:

The editors can crank out 50% to 100% more copy a day and, as a result, the hospital gets charged less money. But the downside is there is still a two-day turnaround time, and the hospital is still paying for both transcription and dictation costs.

The advantage to the editor mode of transcription is that the doctor is not being asked to change the way he or she operates, and the report is turned around faster. Presumably, the editor is happier, is doing more work, and is being more productive. And from a documentation standpoint, you are telling the physician that the report will be back more quickly, but you still have to look at it, revise it, and sign off on it.
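To make Hieb’s figure concrete, here is a rough back-of-the-envelope sketch of how a 50% to 100% editing productivity gain could translate into a lower per-line charge. The line rates and daily volumes are invented for illustration; only the productivity range comes from the quote.

```python
# Back-of-the-envelope comparison: full transcription vs. editing back-end SR drafts.
# All rates and volumes below are illustrative assumptions; only the 50-100%
# productivity gain is taken from Hieb's quote.

LINES_PER_DAY_TYPING = 1000      # assumed daily output when typing from audio
PRODUCTIVITY_GAIN = 0.75         # midpoint of the quoted 50-100% gain when editing drafts
CHARGE_PER_LINE_TYPED = 0.14     # assumed charge per line for full transcription (USD)

lines_per_day_editing = LINES_PER_DAY_TYPING * (1 + PRODUCTIVITY_GAIN)

# If the productivity gain is passed through to the hospital, the per-line
# charge drops roughly in proportion to the extra output.
charge_per_line_edited = CHARGE_PER_LINE_TYPED / (1 + PRODUCTIVITY_GAIN)

print(f"Lines per day, typing:  {LINES_PER_DAY_TYPING}")
print(f"Lines per day, editing: {lines_per_day_editing:.0f}")
print(f"Charge per line: ${CHARGE_PER_LINE_TYPED:.3f} typed vs. ${charge_per_line_edited:.3f} edited")
```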

The OAD model carries with it a good news/bad news scenario: The hospital can save money and enhance performance, but the doctor has to change his or her dictation routine.

Doctors are hesitant now because we will be telling them, ‘You will be dictating at a computer, but you can see what you are dictating’. Hieb acknowledges that some doctors are poor dictators, but with OAD, they can receive direct feedback and make edits as they go, while the patient information is fresh in their minds.

What the future holds…
Hieb believes that one of OAD’s benefits is that when the clinician dictates, edits, and signs off on the record, it’s ready to go into the electronic chart:

The turnaround time has dropped from four days to two days [with back-end speech recognition] to two minutes, and now any doctor can see that report as soon as the physician signs off on it. That turnaround time brings nothing but benefits and better care to the patients.

Although a hospital would have to invest in the software and hardware technologies necessary to implement an OAD system, Hieb says those expenses would pay off in the long run. “There will be set-up and maintenance fees, but they will be nowhere near the costs of the money spent on dictation and transcription.” He agrees, though, that getting doctors to change their behavior will be the largest hurdle to overcome.

With OAD, there is no subsequent time added to that chart, little hassle, and minimal risk of error. The single best defense against malpractice is good documentation, and with once-and-done, you have given the doctor the chance to do the reports and be done with it.

How physicians are accepting speech recognition technology…
Hieb goes on to comment:

Doctors are surprised to see that they can save time and money. If they spend a little bit of time up front ‘training’ the system, they are reaping dividends in time and money saved. […] OAD is being accepted more readily in private practices. That is where the technology is really making inroads because the doctors see they can save time and money, and if they have an electronic copy of the record, the staff isn’t busy chasing down records. In fact, they may be able to reduce the amount of staff they are paying. The OAD system knows to file the record in Susie Smith’s chart, and the general trend in medicine is toward more clinical automation.

OAD can also be effective in the emergency department (ED), where time is of the essence. In the ED, the real benefit is getting that data out there and into the chart instantly. The more quickly and effectively the information is captured in the chart, the more quickly the physicians have access to that data.

Conclusion

For Hieb, the reason to embrace OAD is because the goal of the healthcare system is to help sick people get well and healthy people stay healthy. “We are entering an age when information is a critical component of achieving these two goals, and once and done is a better, more efficient way to capture that information,” he says.

Preziosi says the concept is not feasible in today’s marketplace, even with the enabling technologies that will be seen in the future. “Given the cost restraints, the persistent labor shortages, and the increased demands on the healthcare system, I don’t see OAD as being realistic,” he explains. “I think the clinical documentation sector needs to listen to the concerns of the consumers of our services and adapt our service offerings to meet their ever-evolving demands.”

> Read full article

Speech Recognition Podcast

Here is a rather fascinating interview with Dr. Nick van Terheyden, Chief Medical Officer for Philips Speech Recognition Systems, on the challenges facing healthcare today and the role of speech recognition, EHR and thin-client technologies in the fail-safe delivery of high quality care. Far beyond the technical aspects, Dr. van Terheyden makes us take a realistic look at healthcare today and think about what tomorrow’s hospital should look like. Here’s a sneak peek:

Douglas Brown: What does an industrial-grade system deliver to the software industry that an off-the-shelf product doesn’t?

Nick van Terheyden: You want to optimize the workflow and throughput for an entire organization. An off-the-shelf product that you install on a single desktop isn’t scaled or designed to actually deliver that. It is designed for the individual user. As soon as you start to move across an organization and need to transfer your profiles, you start to run into trouble.

We’ve focused on the professional market right from the very beginning, constantly delivering on those changing market requirements, specifically in healthcare. The breakthrough of Citrix application delivery infrastructure has been one of the events that triggered advances for speech recognition technology. The other one that’s driven a lot of change is the Electronic Health Record – EHR, or EMR as it’s also referred to – which is aimed at improving the availability and accessibility of medical information. That’s really a key component of safer, more value-added care, which is suffering in the US setting. One of the numbers that’s bandied around fairly frequently, from the Institute of Medicine report of some years ago, is the 98,000 patients killed by medical errors in the US every year. If you equate that to the airline industry, that’s approximately one aircraft crashing with all people on board every day.

So anything that we can do to enhance the delivery of quality information to our clinicians is going to be a key factor in that. And enhancing the EHR with the seamless integration of speech recognition – speech being the most natural form of communication – brings some significant benefits. Specifically, we can make that information instantly available to all the members of the team. Medicine used to be an individual specialty, where one physician treated patients. Now it’s a team approach. (…) We’ve got multiple clinicians, and not just physicians, delivering care, and communicating that data to all of the team members as quickly and as accurately as possible is a key factor in delivering high quality care. Many of the errors that occur actually occur in the handoff of that information.

…Users who have access to dictation devices – either handheld devices or even PCs – with thin-client technology are more mobile and therefore can be more efficient in the delivery of that care.

…One of the failings of speech recognition historically has been the desire to take what we do with the mouse and the keyboard and try to automate that using voice. And that’s really not the optimal way to voice enable an application.

> Listen to podcast

Macros, training and implementation

I read the following paper in Physician News from Tracey C. Glenn, CPC, a Senior Consultant at PMSCO Healthcare Consulting, a subsidiary of the Pennsylvania Medical Society, discussing the benefits of speech recognition for physician practices. The article may be two years old, but the author already clarifies a couple of key points and debunks the “initial training” myth, which is still very much alive today.

First, Tracey discusses the benefits of dictation macros:

“Macros can be used for parts of an encounter or as a template for an entire visit. A simple example of a macro as a time-saving tool can be shown in a normal abdominal exam which may read: “Flat without visible scars, hernias, ecchymosis, peristalsis, pulsations or venous distention. Normoactive bowel sounds in all 4 quadrants. No aortic, renal, iliac, or femoral bruits noted. Liver span 8 cm/MCL with smooth edge. Gall bladder and spleen not palpable. No noted tenderness on light or deep palpation in any quadrant. No masses guarding or rebound. No CVAT.”

A macro would allow all of this information to be pre-programmed into the system. During dictation, the only thing that would have to be said by the user is “normal abdomen” and all of the above information would appear in the typed version of the patient encounter. This eliminates the need for repetition of all of the standard verbiage in a normal exam during each patient encounter. Macros are easy to learn and even easier to use.
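To illustrate the idea outside any particular product, here is a minimal sketch of macro expansion: a trigger phrase spoken by the author is replaced by its pre-programmed template text. The trigger name and the simple string substitution are assumptions made for the example, not the behaviour of any specific SR system.

```python
# Minimal sketch of dictation macro expansion: a spoken trigger phrase is
# replaced by its pre-programmed template text. The trigger name and the
# substitution rule are illustrative assumptions, not any vendor's actual API.

MACROS = {
    "normal abdomen": (
        "Flat without visible scars, hernias, ecchymosis, peristalsis, "
        "pulsations or venous distention. Normoactive bowel sounds in all "
        "4 quadrants. No aortic, renal, iliac, or femoral bruits noted. "
        "Liver span 8 cm/MCL with smooth edge. Gall bladder and spleen not "
        "palpable. No noted tenderness on light or deep palpation in any "
        "quadrant. No masses guarding or rebound. No CVAT."
    ),
}

def expand_macros(dictated_text: str) -> str:
    """Replace any recognized trigger phrase with its full template text."""
    result = dictated_text
    for trigger, template in MACROS.items():
        result = result.replace(trigger, template)
    return result

print(expand_macros("Abdominal exam: normal abdomen. Plan: follow up in 6 weeks."))
```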

Tracey then goes on to comment on an old speech recognition myth: initial training time.

Speech recognition software does not require a major retraining of physicians since most are already using or have used some type of dictation… Initial training of the newest versions of available speech recognition software requires only 15 to 20 minutes of the user’s time to start using the tool effectively. Training time has been reduced significantly from previous versions available only a year or two ago.

Last but not least, Tracey suggests a few questions to be considered prior to purchasing a speech recognition system:

– What does the practice want to achieve with the software?
– Is there adequate support among physicians for using the technology?
– Who should we call to help with this?

> Read full article

Digital Dictation or Speech Recognition?

Let’s first take a look at the terminology. As always, Wikipedia clears up any potential confusion with one of those efficient, 3-line definitions: “Digital dictation is different from Speech Recognition where audio is analyzed by a computer using speech algorithms in an attempt to automatically transcribe the document. With digital dictation the process of converting digital audio to text is done via a typist using a digital transcription software application (…)”

But this doesn’t tell us which one should be preferred over the other (Wikipedia is not that powerful…yet). The truth is, both technologies work closely together when implemented in a healthcare environment, mainly because a speech recognition engine is not worth much without the workflow automation features brought in by the digital dictation system (DDS) it is typically integrated with. In a white paper dedicated to speech recognition technology for healthcare, expert Dr. Bob Yacovitch explains how the DDS is the glue that holds everything together:

The first aspect is workflow automation. “A stand-alone speech recognition solution on an individual PC does not bring the expected gains in productivity and efficiency. Speech recognition needs to be approached as part of a whole document creation platform. Real benefits only come by implementing a digital dictation workflow solution with integrated speech recognition, which takes into account the entire document creation process and not simply the transcription of a dictation. The digital dictation workflow system is the central framework that supports everything else, from voice control to workflow management, and it is what the physician will be interacting with on a day-to-day basis. The difference resides in the system’s new ability to produce a ‘recognized text’ together with the voice file. This draft report simply needs to be corrected as opposed to being fully transcribed.”

The DDS thereby seems to be the most important ingredient in the mix; giant steps can already be achieved with it, provided high-level routing management is offered. Speech recognition can turn document creation from “fast” into “light speed,” though it is not necessarily justified for all environments. Factors such as workflow complexity and the number of dictating authors play a key role in the overall ROI (return on investment), hence the need to investigate what can be achieved in terms of workflow management with a single DDS before even considering the speech recognition path.
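To make the “corrected rather than fully transcribed” point concrete, here is a minimal sketch of a dictation job moving through a DDS-style workflow with an optional speech recognition step. The class names, states and routing rule are assumptions made for illustration, not the design of any particular product.

```python
# Minimal sketch of a dictation job in a digital dictation workflow with
# optional integrated speech recognition. States, names and the routing
# rule are illustrative assumptions, not any vendor's actual design.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DictationJob:
    author: str
    work_type: str                      # e.g. "radiology", "discharge summary"
    audio_file: str
    draft_text: Optional[str] = None    # filled in if SR produced a recognized draft
    status: str = "dictated"            # dictated -> recognized -> corrected -> signed_off

def run_speech_recognition(job: DictationJob) -> None:
    """Back-end SR step: produce a recognized draft alongside the voice file."""
    job.draft_text = f"<recognized draft for {job.audio_file}>"
    job.status = "recognized"

def route_to_transcription(job: DictationJob) -> str:
    """The transcriptionist corrects a draft if one exists, otherwise types from audio."""
    if job.draft_text is not None:
        job.status = "corrected"
        return f"{job.author}: corrected SR draft (faster than full transcription)"
    job.status = "corrected"
    return f"{job.author}: transcribed in full from audio"

job = DictationJob(author="Dr. Smith", work_type="discharge summary", audio_file="job_001.dss")
run_speech_recognition(job)
print(route_to_transcription(job))
```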

The other keyword is integration. It is the DDS that integrates with the rest of the organization’s IT infrastructure, not the speech recognition engine, and “optimal accuracy and reliability of medical data can only be achieved in a fully integrated IT environment,” insists Yacovitch.

> Download the Speech Recognition for Healthcare White Paper

CIOs to invest more in technology as a way to improve physician access to information and reduce medical errors

In 2005, the Journal of the Institute of Medicine attributed 98,000 patient deaths every year in the US to medical errors, many of them – poor documentation among them – preventable. The good news is that healthcare Chief Information Officers (CIOs) seem more committed than ever to addressing this critical issue. The latest CIO Survey conducted by Health Data Management and The Quammen Group shows that an overall growth in IT budgets is underway, with respondents clearly focused on improving access to information for clinicians and reducing medical errors. Of those respondents who expect their IT budgets to increase in fiscal year 2007, 51% said the primary factor for those budget increases is to improve clinician access to information. Another 27% cited reducing medical errors/improving quality as the primary factor for spending hikes:

IT Budget Increases

When it comes to investing in emerging technologies, speech recognition comes first:

Emerging technologies most likely to be implemented within the next five years

Full survey results can be found on the Health Data Management web site.

Front-end or Back-end Speech Recognition?

Let’s cover the definitions first. Front-end speech recognition is a particularly attractive feature for physicians who prefer to look after the full report generation process. Text is generated on screen from their dictation in real time, allowing physicians to edit and finalize documents themselves.

When implemented as a back-end layer, the system is fully transparent to physicians, who may not even be aware that speech recognition technology is being used. Completed dictations are automatically processed by the speech recognition server in the background, and the transcriptionist is presented with the recognized text alongside the original audio file. Their new role consists of checking the recognition accuracy rather than transcribing the entire report.

I am always amazed at vendors pushing front-end SR as the one and only magic potion that will make the documentation mountain vanish. Yes, front-end SR is fantastic on weekends, for highly confidential documents, or in environments such as radiology, pathology and the ER, where medical reports are typically short (e.g. “normal findings”). But other physicians might still see their main activities affected by the time required for the editing process. To me, it only makes sense that an SR system should leave all options open by supporting both the front-end and back-end workflows, ideally within the same licence. For instance, a facility can decide that short reports are reviewed by authors in front-end mode, while more complex and detailed work is routed to transcription for correction, either as a standard or on the fly. Conversely, switching from back-end to front-end may compensate for a transcription resource shortage or periodic peaks of activity. Once again, we must remember that it is the technology that is supposed to adapt to the physician’s and organization’s needs, not the other way round.
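As a rough illustration of the “leave all options open” argument, here is a minimal sketch of how a facility might route dictations between the two modes based on work type, expected report length and transcription backlog. The work types, thresholds and backlog rule are invented for the example, not recommendations from any vendor.

```python
# Illustrative routing rule between front-end (author self-edits on screen)
# and back-end (SR draft goes to a transcriptionist for correction) workflows.
# Work types, thresholds and the backlog check are assumptions for this sketch.

SHORT_REPORT_WORK_TYPES = {"radiology", "pathology", "emergency"}  # typically brief reports
MAX_FRONT_END_SECONDS = 90          # assumed cut-off for a "short" dictation

def choose_workflow(work_type: str,
                    expected_dictation_seconds: int,
                    transcription_backlog: int) -> str:
    """Return 'front-end' or 'back-end' for a given dictation."""
    # Short, routine reports: let the author review on screen and sign off at once.
    if work_type in SHORT_REPORT_WORK_TYPES and expected_dictation_seconds <= MAX_FRONT_END_SECONDS:
        return "front-end"
    # Peaks of activity or staff shortage: shift more work to front-end review.
    if transcription_backlog > 500:
        return "front-end"
    # Longer, complex reports: send the SR draft to transcription for correction.
    return "back-end"

print(choose_workflow("radiology", 45, transcription_backlog=120))            # -> front-end
print(choose_workflow("discharge summary", 300, transcription_backlog=120))   # -> back-end
```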

Now what is the future of medical transcription in the context of back-end speech recognition? It indeed looks like the medical transcriptionist role is evolving more towards a “medical editor” role. How does this affect their job and overall career? See this thread for a take on the subject.

 

