
Mitigate MPI Data Conversion Errors

 Part 2 of 3

Part One of this series on patient identity management best practices focused on the broad impact of unique patient identifiers. In Part Two, we look at how best to mitigate data conversion errors. To access the complete white paper series by the patient matching experts at Just Associates on which this blog series is based, click here.

The mergers, acquisitions and affiliation arrangements taking place at a rapid pace in healthcare today are creating a serious challenge: the need to combine master patient index (MPI) databases. If the data conversion is not handled with care, it can have serious short- and long-term implications for the healthcare organization and its patients.

Protecting post-conversion data quality requires careful planning and extensive testing and validation. That work also eliminates or mitigates patient safety concerns and operational inefficiencies. Outlined below are best-practice strategies that will go a long way toward minimizing the MPI quality degradation and conversion errors that can ultimately impair clinical data interoperability.

Planning:  Success requires a multi-disciplinary approach that starts with bringing key stakeholders to the planning table from departments including HIM, Revenue Cycle, Clinical Informatics, Risk Management and IT. Doing so ensures development of a comprehensive strategy that supports legal record retention and, more importantly, quality patient care. Other key components of the planning stage are a dedicated project manager and a clearly defined, comprehensive scope of the data to be converted. Equally important is a strategy for managing the clinical data that currently resides in the legacy system.

Analysis:  Pre-conversion analysis is essential to executing an efficient, comprehensive data conversion. The decisions made during this process will have a long-lasting impact on data quality and access, patient safety, end-user satisfaction, regulatory compliance and operational workflows. The analysis provides a complete understanding of what the conversion entails and why each decision matters. It also minimizes legacy system maintenance, which helps prevent the long-term dedication of resources to that process. By extracting the legacy system MPI data into a test environment (or bringing in a third party to manage the process), overall data quality can be assessed, aberrant and shell records identified, and the number of possible duplicate and overlay records determined.
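As a rough illustration of this assessment step, the sketch below profiles an extracted record set for shell records and exact-match duplicate candidates. The field names (mrn, last_name, first_name, dob) are illustrative assumptions; real extracts vary by source system, and production matching logic is far more sophisticated than an exact key comparison.

```python
def profile_mpi(records):
    """Count shell records and group exact demographic matches
    as duplicate candidates for follow-up review."""
    shells = []
    candidates = {}
    for rec in records:
        # A "shell" record lacks the demographics needed for matching.
        if not rec.get("dob") or not rec.get("last_name"):
            shells.append(rec["mrn"])
            continue
        # Exact-match key; real matching uses fuzzy/probabilistic logic.
        key = (rec["last_name"].upper(), rec["first_name"].upper(), rec["dob"])
        candidates.setdefault(key, []).append(rec["mrn"])
    duplicates = {k: v for k, v in candidates.items() if len(v) > 1}
    return shells, duplicates

records = [
    {"mrn": "1001", "last_name": "Smith", "first_name": "Ann", "dob": "1980-02-01"},
    {"mrn": "1002", "last_name": "SMITH", "first_name": "ann", "dob": "1980-02-01"},
    {"mrn": "1003", "last_name": "", "first_name": "", "dob": ""},  # shell record
]
shells, dups = profile_mpi(records)
print(shells)  # ['1003']
print(dups)    # {('SMITH', 'ANN', '1980-02-01'): ['1001', '1002']}
```

Even a simple profile like this makes the scale of the cleanup effort visible before any records are loaded into the target system.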

Pre-Conversion Cleanup:  Duplicates are the unwelcome gift that keeps on giving. The more that exist in the MPI, the faster duplicate creation rates climb. Thus, the recommendation is to clear duplicates from the source system prior to an MPI conversion, which spares the target MPI from receiving more data “pollution.” If time and resources are lacking, the cleanup effort can be handed off to outside experts who can research and merge possible duplicates or overlaps in a much shorter timeframe.
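To illustrate the kind of matching such a cleanup involves, here is a minimal heuristic sketch using Python's standard difflib. The field names and the 0.85 similarity threshold are illustrative assumptions, not the algorithm any particular vendor uses; production matching weighs many more identifiers.

```python
from difflib import SequenceMatcher

def possible_duplicate(a, b, threshold=0.85):
    """Flag a pair as a possible duplicate when dates of birth match
    and the names are highly similar (catching typos like Jon/John).
    A simple heuristic sketch, not a full probabilistic matcher."""
    if a["dob"] != b["dob"]:
        return False
    name_a = f"{a['last_name']} {a['first_name']}".upper()
    name_b = f"{b['last_name']} {b['first_name']}".upper()
    return SequenceMatcher(None, name_a, name_b).ratio() >= threshold

rec_a = {"last_name": "Smith", "first_name": "Jon", "dob": "1975-06-12"}
rec_b = {"last_name": "Smith", "first_name": "John", "dob": "1975-06-12"}
print(possible_duplicate(rec_a, rec_b))  # True
```

Pairs flagged this way still require human review before merging, which is why the article recommends experienced analysts for the task.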

Technical Considerations and Data Survivorship Rules:  Frustration and re-work can be avoided with deliberate, careful planning designed with both expected outcomes and potential errors in mind. No one wants to be celebrating a successful conversion only to discover numerous issues hiding in work queues—issues that will take months and even years to resolve.
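Survivorship rules decide, field by field, which value survives when two records are merged. The sketch below shows one way such rules can be expressed; the record layout and the sample rules (prefer the foreign system's phone number, keep the fuller address) are illustrative assumptions, and real policies should be set with the stakeholders identified during planning.

```python
def merge_records(primary, secondary, rules=None):
    """Merge a duplicate pair field by field using survivorship rules.
    Each rule takes (primary_value, secondary_value) and returns the
    survivor; the default keeps the primary value unless it is empty."""
    rules = rules or {}
    merged = {}
    for field in primary.keys() | secondary.keys():
        rule = rules.get(field, lambda p, s: p if p else s)
        merged[field] = rule(primary.get(field), secondary.get(field))
    return merged

survivorship = {
    # Illustrative rules only; actual policies vary by organization.
    "phone": lambda p, s: s or p,                            # foreign value assumed fresher
    "address": lambda p, s: max(p or "", s or "", key=len),  # keep the fuller value
}

primary = {"mrn": "1001", "phone": "", "address": "12 Oak St"}
secondary = {"mrn": "2001", "phone": "555-0100", "address": "12 Oak Street, Apt 4"}
merged = merge_records(primary, secondary, survivorship)
print(merged["phone"])  # 555-0100
```

Writing the rules down explicitly, before the conversion runs, is exactly what keeps surprises out of the post-conversion work queues.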

Testing:  To bulletproof the conversion plan, define test cases that cover a representative sample of the data. It’s important to include not just the first 100 or 1,000 records, but also the first and oldest records in each batch file and additional random subsets throughout. Ideally, test the foreign system load in a test environment that is already populated with the full production MPI. Identify which errors are returned, and plan resolution procedures and accountability. Estimate the level of manual effort likely required post-conversion to resolve intra-facility duplicate records that were not cleaned up prior to the merger of foreign data source records, as well as inter-facility overlaps that were not auto-combined.
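The sampling strategy above can be sketched as follows. The batch structure, field names, and sample sizes are illustrative assumptions; a fixed seed keeps the random subset reproducible across test runs.

```python
import random

def select_test_cases(batches, head=100, sample_size=50, seed=42):
    """Assemble a representative conversion test set: the leading
    records of each batch file, the first and oldest record in each
    batch, and a reproducible random subset throughout."""
    rng = random.Random(seed)  # fixed seed so test runs are repeatable
    cases, seen = [], set()

    def add(rec):
        # Deduplicate by MRN while preserving selection order.
        if rec["mrn"] not in seen:
            seen.add(rec["mrn"])
            cases.append(rec)

    for batch in batches:
        for rec in batch[:head]:                         # leading records
            add(rec)
        add(batch[0])                                    # first record in the batch file
        add(min(batch, key=lambda r: r["created"]))      # oldest record
        for rec in rng.sample(batch, min(sample_size, len(batch))):
            add(rec)                                     # random subset
    return cases

# Example: one batch of ten records, with the oldest record last.
batch = [{"mrn": str(i), "created": 2020 - i} for i in range(10)]
cases = select_test_cases([batch], head=2, sample_size=3)
```

Running the selected cases against a test environment preloaded with the full production MPI, as recommended above, surfaces the merge errors before go-live rather than after.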

Careful planning on the front end of any data conversion will prevent headaches on the back end.