Electronic Medical Records (EMR) have been receiving a good deal of attention of late, and it is no wonder. Among the many challenges facing healthcare, both in the U.S.A. and globally, the fact that medical records still largely consist of paper files certainly gives us pause. But what, exactly, are the goals of the much-discussed EMR initiatives? And are the approaches being discussed likely to meet those goals? Further, why am I writing about this issue on a blog about taxonomies, content management, and so on? Let us look at this a bit more carefully, as I think the connection to taxonomies and the like will become quite clear.
A quick survey of the news tells us that the efforts around EMR are anything but trivial. Indeed, the EMR efforts in Great Britain have been anything but smooth sailing. Their program, which targeted only 30,000 physicians in 300 state-run hospitals, has come in at six times the original cost estimate and has delivered results that are, according to Public Accounts Committee Chairman Edward Leigh, “…late or, when deployed, do not meet expectations of clinical staff.” Given that the U.S., owing to the private-sector nature of its healthcare, faces a considerably greater diversity of hospital systems and a significantly larger number of hospitals and physicians, it would seem the effort is doomed to failure.
So, just what are the goals of this initiative, and is EMR, as we commonly talk about it, equipped to deliver on them? To listen to President Obama on this issue, the goals seem to be threefold: (1) cutting red tape, (2) preventing medical mistakes, and (3) saving money. Put simply, this seems optimistic given the current approach to EMR. To see why, let us explore what EMR looks like, at least in the current vendor landscape.
EMR is typically viewed as a data-management problem. By that I mean it is treated as an issue to be dealt with by data architects using large, undoubtedly very large, databases. So, to have comprehensive electronic medical records for every man, woman, and child in the U.S.A., there would need to be one very large database, or, more likely, a set of interconnected databases, all sharing a consistent data model that captures every individual's medical data. Key information about medicine and pharmaceuticals would also need to be present. But hang on: we have private medicine in this country, so every individual healthcare network, however large or small, would need to agree to use the same data model. Further, each would have to agree to put its data into an accessible database. Given that large databases run on very large and expensive machines, this is no small expense for each and every healthcare network. And getting different divisions within a single company to agree on a data model is, as many of us know, next to impossible. Doing it across a range of organizations that, in many cases, have already spent millions of dollars implementing their own systems seems optimistic at best.
Well, we may shrug and think that this is fine; after all, these medical networks are large, wealthy organizations that have the resources to do this, and besides, that is what the federal money is all about. But this view is naive. Many medical networks are nothing like the large regional medical centers we tend to picture. Indeed, as of 2005 roughly 60% of medical centers nationwide had fewer than 200 beds, and many, over 25%, had fewer than 50. Add to this the independent doctors' offices throughout the country that are not part of any larger network, and the problem becomes quite clear. Reflecting this reality, a recent headline in Computerworld claims that the current plan will take $100 billion and at least 10 years to reach fruition. Wow.
So, is there an alternative, and what might it have to do with taxonomies and the like? The short answers are “yes” and “a great deal.” Having worked with taxonomies and metadata for years, it has become clear to me that only a small subset of a taxonomy does the lion's share of the work. For example, at one client with a 4,500+ node taxonomy, 12 concept nodes (fewer than 1%) account for roughly 60% of the taxonomy-related traffic on the site. While that example may be more or less pronounced than others, it reflects a trend I see again and again. Further, as I begin the taxonomy development process with a client, it becomes quite clear, quite quickly, which subset of the concept nodes will get the most traffic. In fact, recognizing these trends in advance is vital to building an effective taxonomy and is part of what taxonomy professionals get paid to deliver.
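To make the concentration effect concrete, here is a minimal sketch of the kind of check one can run against site analytics. The node names and hit counts are entirely hypothetical (invented for illustration, not drawn from any client); the point is simply that a handful of head nodes can cover a large share of total traffic:

```python
from collections import Counter

def nodes_for_share(traffic: Counter, target: float) -> int:
    """Return how many of the highest-traffic nodes are needed to cover
    `target` share of total traffic (a Pareto-style concentration check)."""
    total = sum(traffic.values())
    running = 0
    for rank, hit_count in enumerate(sorted(traffic.values(), reverse=True), 1):
        running += hit_count
        if running / total >= target:
            return rank
    return len(traffic)

# Hypothetical analytics data: hits per concept node. A few head nodes
# dominate; the long tail barely registers.
hits = Counter({"mortgages": 3000, "checking": 2500, "credit-cards": 2000,
                "savings": 900, "loans": 600, "insurance": 400,
                "numismatics": 3, "escheatment": 2, "usury-history": 1})

print(nodes_for_share(hits, 0.60))  # prints 3: three nodes carry 60%+
```

Running this kind of analysis early in a taxonomy engagement is one way to identify the "vital few" nodes that deserve the bulk of the modeling effort.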
The relevance to EMR is clear: if a small subset of the taxonomy does most of the work in a commercial application, might it be reasonable to expect that a small subset of the data constituting an EMR can likewise do most of the work of improving the quality of care? If the answer is “yes,” and I fully expect that it is, then it makes sense to focus the initial efforts, if not all the efforts, around EMR on the subset that does most of the work. This, it turns out, is exactly where taxonomies come in.
To build an effective EMR, one that can be delivered in a reasonable time frame and on a reasonable budget, we need to focus our efforts on the key subset of data that will deliver the greatest improvement in the quality of care. Further, though this goes beyond the scope of this discussion, we need to centralize the data in a single, shared environment, so as to avoid forcing doctors' offices nationwide to become IT shops. I would prefer that my doctor remain focused on taking care of my family rather than on taking care of a rack of servers.
It turns out that there is some broad recognition that the current approach to EMR, call it EMR 1.0, cannot deliver the goods. A recent editorial in the D.C. Examiner (3/1/09) suggests that the approach supported by Obama, at least as presented to date, is not up to the task: it cannot be implemented in a reasonable time frame or for a reasonable amount of money. Worse, according to the editorial, even if it were implemented, an EMR 1.0 solution would not actually deliver improved healthcare.
The good news is that there are approaches to EMR that change the game in fundamental ways. EMR 2.0, as proposed by Mark Clare (http://newvaluestreams.com/wordpress/?p=494), addresses this problem with exactly the type of approach I am advocating here. His proposal, while much more elaborate than what I have outlined, insists that the old approach, EMR 1.0, is a non-starter and that we need to abandon the idea of a comprehensive medical record with every possible data point represented (which sounds like something a data architect would dream up). Rather, he insists, we need to find the key subset of the data that delivers the greatest value and leverage it to deliver a quality offering that can scale. What he proposes, however, will require the expertise of people adept at identifying the key value drivers in a taxonomy or metadata model, and at leveraging those skills to deliver quality results that, in this case, might really save lives.
So, instead of viewing EMR as a “healthcare database thing,” we, as experts in the taxonomy and metadata space, would do well to recognize that this issue clearly overlaps our own universe of discourse, and to stand up and offer our help.