
Data Integrity Matters Newsletter: March 2013

Algorithms, Data Integrity and Patient Identity



The following article is an amended version of "Algorithms, Data Integrity and Patient Identity", which appeared in a recent issue of Government Health IT. 

Accurate patient data matching is an integral part of providing safe and effective healthcare and participating in emerging care models such as Accountable Care Organizations (ACOs) and Health Information Exchanges (HIEs). However, despite the importance of clean and accurate data, a number of healthcare organizations continue to struggle with a growing volume of duplicate and overlaid records that threaten to clog their systems and compromise the integrity of patient data upon which care decisions are based.

These errors are serious enough when contained within the four walls of the hospital in which they originate. But now, as organizations move toward care models based upon widespread information sharing, the impact of these errors or mismatched records has grown exponentially, threatening both patient safety and care quality.

According to HIMSS, 8 to 14 percent of medical records contain erroneous information linked to an incorrect patient identity. These errors place patients at great risk and add hundreds of millions of dollars in unnecessary costs to the system.

However, there are steps that healthcare organizations can take to prevent the creation of duplicate and overlaid records and reduce the risks associated with these mistakes. The first is to ensure the use of record-matching algorithms capable of efficiently and accurately managing what is without a doubt the most complex step in a resource-intensive process.

Algorithms Matter

Within a single healthcare organization, patients can have a number of identifiers, including medical record, billing record, service order and requisition numbers. How accurately those identifiers are linked to the appropriate patient record depends largely upon the strength of the record-matching algorithms utilized.

  • Basic algorithms: The simplest technique for matching records, basic algorithms make comparisons based on selected data elements such as name, birth date, gender and Social Security Number. Exact or deterministic matching tools are typically utilized, as are wild-card linking techniques that return every record that matches a limited number of search characters.
  • Intermediate algorithms: Providing a higher level of accuracy than the aforementioned matching techniques, intermediate algorithms incorporate fuzzy logic (nickname tables, rules to address typographical errors within the database, etc.) and arbitrary or subjective scoring systems with exact match and deterministic tools. A field match weight is assigned to specific identification attributes using estimates of the relative weight of each field against the other fields used in the algorithm. Records must then reach a minimum scoring threshold to qualify for consideration (a minimal scoring sketch follows this list).
  • Advanced algorithms: The most sophisticated set of record-matching tools, advanced algorithms rely on mathematical theory (bipartite graph theory, probabilistic theory and mathematical and statistical models) to determine the likelihood of a match. Advanced algorithms also include machine learning and neural networks, which use forms of artificial intelligence that simulate human problem solving. These systems "learn" as more data is processed and automatically redefine field match weights based upon that learning.
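To make the scoring approach behind intermediate algorithms more concrete, the short Python sketch below assigns a weight to each field on which two records agree and flags the pair as a potential duplicate only when the total clears a threshold. The field names, weights and threshold are illustrative assumptions for this newsletter, not values from any particular MPI product; real systems layer fuzzy comparisons and probabilistic weighting on top of this basic structure.

    # Illustrative weighted field-match scoring (assumed field names and weights).
    FIELD_WEIGHTS = {
        "last_name": 4.0,
        "first_name": 3.0,
        "birth_date": 5.0,
        "gender": 1.0,
        "ssn_last4": 6.0,
    }
    MATCH_THRESHOLD = 12.0  # pairs scoring at or above this are queued for review

    def match_score(rec_a: dict, rec_b: dict) -> float:
        """Sum the weights of the fields on which two records agree exactly."""
        score = 0.0
        for field, weight in FIELD_WEIGHTS.items():
            a, b = rec_a.get(field, ""), rec_b.get(field, "")
            if a and b and a.strip().lower() == b.strip().lower():
                score += weight
        return score

    def is_potential_duplicate(rec_a: dict, rec_b: dict) -> bool:
        return match_score(rec_a, rec_b) >= MATCH_THRESHOLD

    # Example: the same patient registered twice, the second time without an SSN.
    rec1 = {"last_name": "Smith", "first_name": "Robert", "birth_date": "1975-03-02",
            "gender": "M", "ssn_last4": "1234"}
    rec2 = {"last_name": "Smith", "first_name": "Robert", "birth_date": "1975-03-02",
            "gender": "M", "ssn_last4": ""}
    print(match_score(rec1, rec2))             # 13.0
    print(is_potential_duplicate(rec1, rec2))  # True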


Preserving Data Integrity

These algorithms provide a sophisticated way to match patient data and ensure the accuracy of patient records. However, not all business or clinical systems utilize advanced algorithms to handle these processes, and over-reliance on those systems can result in errors.

While some systems do offer strong algorithms, the majority utilize intermediate algorithms that rely primarily on fuzzy logic for record matching. This typically results in an unacceptably high number of false positives, which occur when the algorithm incorrectly identifies two records as belonging to the same person. Intermediate algorithms also often deliver a high number of false negatives, which occur when a true duplicate record is missed entirely.
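For organizations that want to quantify these two error types, the hypothetical Python sketch below tallies false positives and false negatives by comparing an algorithm's linking decisions against a small set of record pairs whose true status has already been verified by data integrity staff. The pairs and results shown are invented for illustration only.

    # Hypothetical evaluation of a matching algorithm against verified record pairs.
    # Each tuple is (pair_id, algorithm_says_duplicate, truly_duplicate).
    reviewed_pairs = [
        ("P1", True,  True),   # true positive: real duplicate, correctly flagged
        ("P2", True,  False),  # false positive: two different patients linked
        ("P3", False, True),   # false negative: real duplicate missed
        ("P4", False, False),  # true negative: correctly left separate
    ]

    false_positives = sum(1 for _, flagged, truth in reviewed_pairs if flagged and not truth)
    false_negatives = sum(1 for _, flagged, truth in reviewed_pairs if not flagged and truth)

    print("False positives:", false_positives)  # wrongly linked pairs -> overlay risk
    print("False negatives:", false_negatives)  # missed duplicates -> fragmented records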

The result is a large volume of duplicate or overlaid records within the master patient index (MPI), which can significantly impact care safety and quality. High volumes of duplicate or overlaid records can delay care or cause clinicians to make decisions based on incomplete or inaccurate information. They also drive up clinical costs in the form of redundant diagnostic procedures, extended lengths of stay, a higher incidence of adverse events and delayed diagnosis and treatment, along with unnecessary administrative costs related to record corrections and increased registration and admission times.

For those organizations participating in ACOs or HIEs, these errors and associated costs grow exponentially. That is why it is imperative that participating facilities and the organization itself have in place strong algorithms capable of matching patients to records with pinpoint accuracy, then linking those records with a single unique identifier for use across the initiative.

Information systems that utilize advanced algorithms focused on statistical and/or mathematical matching will deliver lower rates of false positives and false negatives. As a result, fewer duplicates will enter the originating system, which in turn ensures integrity of the data flowing across the ACO or HIE.

Vigilance is Key

Though they can significantly decrease the rate of occurrence, even the most sophisticated algorithms cannot entirely eliminate false positives and false negatives, as auto-linking routines will always create errors. Automation, though highly effective, cannot be solely relied upon to make key record-matching decisions.

Well-established record-matching validity procedures are a requisite element of any patient data integrity process. They ensure that complex record-matching decisions—such as differentiating between individuals with similar names and birth dates who reside in close proximity or two individuals with the same name, birth date and address—are verified to eliminate overlays, privacy violations, and care coordination and safety issues. 

More sophisticated algorithms and comprehensive validity processes and procedures ensure that MPIs remain clean and preserve the integrity of the data that is ultimately shared across ACOs and HIEs. The end result is higher quality, safer care and lower costs.


Minimizing Duplicate Patients in Epic® Identity

Ensuring that the master patient index (MPI) is free of duplicate records is a critical element of a successful health information technology (HIT) strategy. That is because accurate, complete and consistent data is necessary to properly identify patient medical records within the electronic health record (EHR) and link records across disparate databases and healthcare organizations. 

For Epic® users, maintaining a clean database has never been easier. The Epic Identity enterprise master person index (EMPI) provides organizations with the tools they need to eliminate duplicate records and actively prevent users from creating new duplicates.

This is key for organizations with multiple data systems, as consolidating systems within and across healthcare organizations requires each database be free of duplicate records. Dirty or duplicate data within just one database can contaminate all of an organization's downstream systems as well as those of any health information exchange (HIE) initiative in which it may be participating. The result is a serious threat to the safety of care decisions based upon that patient data.

Unaddressed duplicates can also create a number of other problems, including inefficient registration processes, revenue cycle delays and inappropriate release of information. Inaccurate or incomplete patient information can also compromise patient treatment and result in costly duplicate tests. 

One of the easiest ways to mitigate the risk of duplicate data within the MPI is to optimize tools and algorithms within existing EHR systems to more quickly identify and resolve issues before they can escalate.

System Optimization

A number of features exist to help streamline the process of identifying more true duplicates and reducing the number of false positives flagged within the system. With just a few tweaks to the system's settings, the false positive rate can be reduced from as high as 70 percent with out-of-the-box settings to less than 10 percent.

For example, by updating the nickname table to make it more robust and remove inappropriate nicknames, organizations can identify more real matches and have fewer false positives when performing a patient search. Defining standards for patient searches will also ensure that physicians and staff are utilizing optimal search techniques to identify patients within the system, rather than creating new records for existing patients. 
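To illustrate how a nickname table feeds patient search, the generic Python sketch below expands a searched first name into its formal-name equivalents before records are compared. The table entries and function are examples invented for this article and do not represent Epic's shipped nickname table or its configuration screens.

    # Illustrative nickname table: maps common nicknames to formal first names.
    NICKNAME_TABLE = {
        "bob": ["robert"],
        "rob": ["robert"],
        "liz": ["elizabeth"],
        "beth": ["elizabeth"],
        "bill": ["william"],
    }

    def expand_first_name(search_name: str) -> set:
        """Return the set of first names a patient search should consider."""
        name = search_name.strip().lower()
        return {name, *NICKNAME_TABLE.get(name, [])}

    # A registrar searching for "Bob Smith" also retrieves records filed as "Robert".
    print(expand_first_name("Bob"))  # {'bob', 'robert'}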

Standardized naming conventions can also reduce the creation of duplicates and make it easier to locate the patient record at a later date. When staff members consistently collect key identifiers such as first name, last name and date of birth, systems are populated with standard data, reducing the chances that duplicates will be created by registrars who cannot immediately locate an existing patient's record. Requesting photo ID at the time of admission also helps verify that the information is captured correctly.

Custom error queues can also be created to group different types of errors. These queues can be used to flag errors that are unique to the facility so they can be resolved quickly, before data integrity is compromised. For example, an error queue can be built to flag the records of trauma patients that do not contain legal names. Once these issues are identified, the patient record can be corrected.
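The short sketch below shows the error-queue idea in generic Python terms: registration records are routed into named queues based on facility-specific rules, such as a trauma record still carrying a placeholder name. The field names, placeholder convention and rules are hypothetical and do not reflect Epic's actual queue configuration.

    # Generic sketch of routing records into custom error queues.
    # Field names and the placeholder-name rule are hypothetical, not Epic configuration.
    from collections import defaultdict

    def build_error_queues(records):
        queues = defaultdict(list)
        for rec in records:
            # Flag trauma registrations still carrying a placeholder ("Doe") name.
            if rec.get("patient_type") == "trauma" and rec.get("last_name", "").upper().startswith("DOE"):
                queues["trauma_missing_legal_name"].append(rec)
            # Flag records registered without a date of birth.
            if not rec.get("birth_date"):
                queues["missing_birth_date"].append(rec)
        return queues

    records = [
        {"mrn": "100234", "patient_type": "trauma", "last_name": "DOE,ALPHA", "birth_date": ""},
        {"mrn": "100235", "patient_type": "inpatient", "last_name": "Garcia", "birth_date": "1962-11-30"},
    ]
    for queue, items in build_error_queues(records).items():
        print(queue, [r["mrn"] for r in items])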

By reducing the number of false positives detected by record-matching tools and increasing the number of true duplicates identified, organizations can better maintain the integrity of their patient data. Doing so also reduces the workload of staff members charged with reconciling and eliminating duplicate records. Most importantly, it enhances patient safety and care quality while reducing associated costs.


Just Associates News 

 

Jefferson Radiology Deploys Just Associates' Repair™

Jefferson Radiology has deployed Repair™, Just Associates' outsourced master patient index (MPI) management service. Part of the firm's comprehensive outsourced patient identity management suite, Repair leverages the firm's highly trained staff and proprietary IDMaster® duplicate workflow software for cost-effective ongoing management of the duplicate validation and reconciliation process. 

Welcome to the Family!

Please join us in welcoming Tami Montroy to the Just Associates family! Tami joins us as Director of Data Reconciliation Services. She comes to Just Associates from The Children's Hospital of Philadelphia, where she served as the Coding & Clinical Data Access Manager. Tami will be responsible for the day-to-day operations, management, execution and results of the data reconciliation services division.

Beth Just To Speak at 2013 AHIMA Convention and Exhibit, October 26–30, 2013

In recent years, the health information exchange (HIE) has emerged as a trusted way to share patient information across hospitals and health systems and improve the quality of patient care between organizations. However, a number of HIEs have implemented dangerous data integrity practices that may not only hinder data sharing, but threaten patient safety and care. 

During her presentation, Beth will discuss the five most dangerous data integrity practices deployed by HIEs, how organizations can avoid these dangers and steps to overcome challenges to strong data integrity. She will also discuss data integrity best practices that will streamline the identification of duplicate or overlaid records and reduce the creation of new ones. 

Data Integrity Webinar

Just Associates would like to thank those of you who attended our informational webinar, "Minimizing Duplicate Patients in Epic Identity," which took place on October 23rd. During the webinar, presenter Karen Proffitt shared with attendees the importance of clean data within the master patient index (MPI) and how organizations utilizing Epic Identity can optimize system settings to enhance duplicate detection and reduce the rate of false positives. For those of you who were not able to join the webinar, click here for an archive of the event.

Just Associates in the News

In a recent Health Data Management blog, "Record-Matching Integrity: An Algorithm Primer," Beth Just discusses the key role that algorithms play in identifying patients and linking patient information within a single record, and how the various algorithms perform to achieve this goal. She also shares with readers the importance of advanced record-matching algorithms in reducing the number of false positives and false negatives identified within the master patient index (MPI) in her Health Data Management blog, "Record Matching Algorithms: Close Isn't Good Enough."


Around the Industry

 

Healthcare Data Breaches Carry Hefty Price Tag 

A new report from the research firm Ponemon Institute, "Benchmark Study on Patient Privacy and Data Security," reveals that the number of annual data breaches within the healthcare industry is on the rise and could carry a nearly $7 billion price tag. The report examines the fiscal and economic consequences of data breaches in conjunction with emerging security trends, such as those relating to mobile devices. It notes that 94 percent of hospitals have experienced data breaches in the last two years, with medical files and billing and insurance records accounting for the majority of those breaches.

Meaningful Use Stage 2 Final Rules Revised

CMS and ONC have issued revisions to the final rules for Stage 2 of the Meaningful Use incentive program. The revisions to the rules are relatively minor, correcting inaccuracies and updating certification criteria. For example, the new rules correct the regulatory text for the measures associated with the objective for hospitals to provide patients with the ability to view, download and transmit information. They also make the case number threshold exemption for reporting clinical quality measures applicable for eligible hospitals and critical access hospitals beginning in FY2013. 

New Push for Healthcare Data Virtualization

Virtualization may be the future of healthcare, according to some experts. While virtualization is similar to the cloud, its enhanced features may make cloud computing a thing of the past. Unlike the cloud, virtualization allows organizations to run bulky, power-intensive applications without accessing another machine through a browser or application, letting physicians view on a tablet information that would normally require a PC. In essence, virtualization will improve flexibility and patient outreach, increase access to personalized healthcare applications and enhance security.

New Report Calls for Meaningful Use Pre-Payment Audits

A new report from the OIG may complicate payouts under the meaningful use electronic health record (EHR) incentive program. In an effort to mitigate the risk of paying incentives to unqualified providers, OIG suggests that random audits of doctors and hospitals be conducted prior to payout to ensure they have qualified for incentive payments. CMS has responded to this call to action, saying that prepayment reviews would only increase the burden on practitioners and hospitals and delay incentive payments.

Physician Use of EHRs up 24 Percent

A new report from the National Center for Health Statistics (NCHS) indicates that the number of office-based physicians utilizing EHRs is up 24 percent since 2009, with 72 percent of physicians indicating that they currently utilize an EHR in their practice. While these findings are promising, the percentage of physicians utilizing EHRs varies from state to state, ranging from 54 percent in New Jersey to 89 percent in Massachusetts. The percentage of physicians utilizing a system that meets the criteria for a basic system also varies, from 22 percent in the District of Columbia to 71 percent in Wisconsin.

