State of Change, Chapter 10: Healthcare

The healthcare industry in America is a test in progress.  Like almost every hospital you visit today, it’s coping with a massive construction project.  The stated end goal of this project is to compel Americans to join a larger pool of insured customers, which would theoretically drive down coverage costs.  The model for this project, however, is not some simple chart like a Porter value chain.

It’s an accountability model, based in very large measure on the flow of information within and between healthcare providing organizations, qualified insurers, and government regulators.  The folks at NASA who made that square carbon dioxide filter fit the round hole of Apollo 13’s outer space lifeboat had a much easier job than the poor fellows tasked with interfacing these three fundamentally incompatible components.

A business conference keynote speaker would advise at least one of these three to radically restructure itself, to undergo a realignment, so that its information needs are in alignment with its service needs.  Not going to happen.  Instead, like the dongles that used to hang from printer ports in the 1980s that supposedly prevented software piracy, new methodologies for healthcare information sharing and compliance are being bolted onto existing applications, and not particularly successfully.  Many of the provisions of the new healthcare laws that were supposed to be enacted in 2014 have already been postponed.

The reason is information itself.  It’s hard enough on paper to interface the existing components of the healthcare system so that they share information accountably.  In practice, it may be impossible.  But someone has tried.

Un-franchising the Skill Set

In an earlier article in this series, I introduced you to Steven Phillpott, formerly CIO of Amylin Pharmaceuticals, but since last March — after Amylin’s absorption by Bristol-Myers Squibb — CIO of HGST, the enterprise storage division of Western Digital.  In a January 2013 presentation to the Southern California chapter of not-for-profit healthcare IT organization HIMSS, Phillpott discussed the strategic changes he made to his company’s IT division to prepare it for what he foresaw as the “New World” (his term, including the capitals) of healthcare.

In 2008, prior to any new technology adoption, Phillpott stressed the need for immediate organizational change, foreseeing a time when the current structure of the healthcare industry would fail to interface with the world around it.  As he told attendees:

What we did was, we reorganized IT to say, “Your expertise, your value to the organization, is not associated with a single product.  Your value to the organization is associated with one of three things: data, process, or reporting/analytics.”  Because the concept was, for any type of business process, that data, process, or reporting may not lie in one single system.  It may be in multiple systems.  It may be in multiple systems across multiple companies.  So we really had to frame our IT organization as, “You are not a Siebel expert.  You are an expert in customer relationship management and that process, and it may or may not be involving a specific product.”

This may be a difficult adjustment for any healthcare organization of any size to make.  The IT talent pool for all of healthcare is segmented into skill sets that are aligned with products and brands.  To fill the role of a data analyst (reporting), HR personnel typically look for someone skilled with Microsoft SQL Server Reporting Services (SSRS).  Certification of that skill requires that the applicant be what Microsoft calls an “MCSE Business Intelligence Solutions Expert,” which grounds one in the knowledge of using SQL Server to implement a data warehouse.  The idea here is that if you’re going to work with analytics, you had better be aligned with a brand whose tools store and manage all that data.

By comparison, a database administrator (data) is usually someone with a DBA certification — what Microsoft calls an MCDBA, although Oracle has its counterparts for Oracle Database 12c, and IBM offers separate certifications for DB2 versions 9.x and 10.x.  And a CRM lead for a healthcare organization is often called the “Siebel Manager,” sometimes the leader of a “Siebel platform management team.”

Surely you see the predicament I’m illustrating.  Technology vendors have stacked the skill set decks in their favor, by promoting a scheme where skills have brands.  If an organization wants to change brands, it needs to overhaul its entire IT division.  If money were no object, it could retrain everyone; but money always is an object.  One of the key reasons why Oracle, Microsoft, and IBM all present themselves as all-inclusive cloud providers is that they want their customers to stay the course with their respective brands — the course they were on before someone invented the cloud and messed everything up.

Meanwhile, if you’re seeking a job in healthcare IT, you’ve either chosen your brand or you’ve strategically loaded your hand with a “full house” of the brands preferred by the organizations on your candidate list.  As of today, your brand(s) defines your career.

Until along comes Steve Phillpott, who says, no, not anymore.  In order for the organization to face the cloud, he says, it must re-orient the axis it uses to divide tasks and compartmentalize labor.  Whatever your certification may be, as of right now, your job has become a role.

“This is a hard concept to get used to,” says Phillpott, “but this is part of what needed to happen in order to give the organization the flexibility to respond to change.”

The Innovation Sandbox

With the division of labor in healthcare IT reduced to three “pillars,” the leaders of those divisions can concentrate on whether (not when) applications need to be repurposed as cloud services.  Existing applications that already serve the business well may conceivably be redeployed on cloud infrastructure-as-a-service (IaaS) platforms.  Meanwhile, new classes of apps that enable inter-departmental collaboration on a scale discussed in our previous article, “The Public Sector,” can be constructed by in-house developers using common languages, and deployed on a platform-as-a-service (PaaS).

One of the first and best-known PaaS services is Microsoft Windows Azure, which naturally supports the company’s .NET languages, but now also includes Java.  Amazon’s Elastic Beanstalk supports .NET, Java, PHP, and Python, with granular administrative control over the resources you lease for these purposes, enabling customers to deploy these apps on a true hybrid cloud (incorporating on-premise resources).  And Salesforce’s Heroku platform (the choice of Phillpott’s Amylin) supports a variety of new and experimental languages, including Scala, a JVM language that is specifically geared for massive parallel processing scenarios with cloud-based servers; and Clojure, a functional dialect of Lisp, a language dating to the late 1950s.
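To make the contrast with branded skill sets concrete, here is a minimal sketch of the kind of application such a platform hosts: a self-contained Python WSGI app, the shape of deployable unit a Python PaaS environment such as Elastic Beanstalk’s expects.  The greeting text is illustrative, not drawn from any actual deployment.

```python
# A minimal WSGI application: the portable unit of deployment that
# Python PaaS environments typically expect to find.  No framework
# required -- just the WSGI calling convention.

def application(environ, start_response):
    """Answer every request with a plain-text greeting."""
    body = b"Hello from the cloud\n"   # illustrative payload
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Because the app is nothing more than a callable named `application`, any WSGI-speaking host can run it, whether a local server such as gunicorn or the platform itself; that portability is precisely what a role-based (rather than brand-based) skill set trades on.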

This new apps environment looks nothing like the skill sets that HR personnel are scouting for, or that the accreditation and certification industry revolves around.  These are the languages that more healthcare IT personnel are finding they need to learn, as their organizations shift from branded silos to dynamic roles.  As Phillpott described for attendees of a CommNexus conference in San Diego in the spring of 2013:

Does that mean every application is suitable for the cloud?  No, but our job as IT leaders is to figure out what applications are suitable to go in the cloud, and what applications aren’t mature enough, and start working on the ones that are mature.  When we started our journey [in 2008], we worked on the applications that were mature.  The ones that weren’t mature... were the regulated-type ones — the pharma industry is heavily regulated.  So we knew it would take a while for that part of the ecosystem to catch up, but trust me, we had a hundred other applications to work on before we even got close to tackling those.

It’s a strange, almost retrograde transition from what had been characterized as a modern data center career, back to that of a “programmer” — somebody who writes scripts, debugs procedures, and runs simulations.  IBM uses the term “DevOps” to refer to this new role — originally an administrator who learns scripting skills to run batch operations, but more recently someone tasked with operations and maintenance who suddenly finds herself in a creative position.
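As a hypothetical illustration of the scripting side of that role (the file naming and the seven-day threshold are invented for the example), here is a small Python batch routine of the sort an operations person might write to compress stale log files:

```python
import gzip
import os
import shutil
import time

def archive_old_logs(log_dir: str, max_age_days: float = 7.0) -> list:
    """Gzip every .log file older than max_age_days; return archived names."""
    cutoff = time.time() - max_age_days * 86400
    archived = []
    for name in os.listdir(log_dir):
        if not name.endswith(".log"):
            continue
        path = os.path.join(log_dir, name)
        if os.path.getmtime(path) >= cutoff:
            continue  # still fresh; leave it alone
        # Compress the file alongside the original, then remove the original.
        with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path)
        archived.append(name)
    return sorted(archived)
```

Run nightly from a scheduler, a routine like this is pure operations work; the “creative position” emerges when the same person starts generalizing such scripts into reusable tooling.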

For Amylin, Phillpott planned a four-stage re-evolutionary model for IT personnel: a way to move from the old knowledge base to the new one without being taken out of action.  He calls this the “innovation sandbox.”  “It allows us to evolve the maturity inside the organization,” he tells HIMSS, “at the same time maturity is evolving outside of the organization.”

Figure 10.1.

Steven Phillpott’s “Innovation Sandbox” model, or “Where to Start and How to Evolve” [2008]

On the surface, the Phillpott Sandbox model looks awfully familiar.  There are stages of growth, and there’s a phased broadening of scope.  But if you look carefully, there are critical differences between this model and Michael S. Scott Morton’s Five Levels of IT-Induced Reconfiguration from 1991:

  1. The Phillpott model begins by involving everyone in the team, rather than concluding with it.  Every initiative begins with intensive collaboration, and an admission on everyone’s part that, at step one, they may not know what they’re doing yet.  Sometimes, Phillpott says, an organization has to get “directionally correct,” moving in the direction the compass is pointing in the absence of a map or a guide.

  2. Trial and error is accounted for, rather than treated as grounds for being “selected out” of some Darwinian survival scheme.  It’s through experimentation and the occasional failure, the Phillpott model acknowledges, that teams learn the constraints of the situation they’re trying to solve for.

  3. Implementation takes place after the experimentation is done.  Test environments are safe sandboxes, and errors have minimal or no negative impact on eventual success.  This is one of the first lessons of virtualization in the enterprise.

  4. Expertise may be a singular product that emerges from this process, perhaps embodied in a single leader or specialist.

  5. The strategy creation process starts at the end, literally after the initial implementation.  The theory here is that a workable strategy can only be arrived at by people who know what they’re talking about.  They won’t get to that stage without training themselves to comprehend the constraints of their environment.

No Nukes

In previous articles in this series, I’ve spoken about something I call the “nuclear option” — the temptation for organizations of any type that are moving their IT resources to a cloud service provider, to cannibalize their own IT departments and offload the responsibility for IT to that CSP as well.  For many small businesses, especially, this may not only be tempting but necessary.  And medical clinics — the ones where nurses and clerical staff are often hoodwinked into becoming “IT directors” — count as small businesses.

“For the enterprise, they want to optimize their IT staff.  Small and medium businesses, they want to get out of IT,” says Ric Telford, IBM’s vice president of cloud services.

In the future, how many doctor’s offices and law firms are going to have servers under management?  Probably very few, because they’re going to want to get out of the IT business and go to SaaS or IaaS providers.  But once you get up to some kind of enterprise scale, it’s going to be hybrid [cloud].

In July 2013, IBM closed its acquisition of SoftLayer Technologies, a cloud and hosted service provider for SMBs.  Frankly, as analysts have noted, it’s SoftLayer that has served as the best use-case scenario for hybrid cloud deployments for smaller businesses, including clinics and practitioners’ offices.  For example, SoftLayer’s Flex technology is a kind of cloud-oriented virtualization scheme that lets a customer capture a snapshot of any kind of server it already has in service, move that image to any other server (not just SoftLayer’s cloud), and deploy it there.  The objective is to optimize server usage both locally and remotely, using any combination of cloud and on-premise resources that may best fit the situation.  And situations for smaller businesses may change more radically than for enterprises of scale.  As Telford continues:

SoftLayer, and their technology, opens the aperture to make it easier to move things to the cloud.  They have not only the traditional, virtualized public cloud model like an Amazon has, but they also have what we call bare metal provisioning, where you can run your application, install it on an actual physical server, which just so happens to actually be managed and run out of a SoftLayer/IBM data center.  So the barrier to porting applications and moving them into the cloud, is much easier than having to move into a virtualized environment like Amazon.

The SoftLayer model happens to fit the Phillpott Sandbox model surprisingly nicely.  Conceivably, any organization can enter the experimentation phase of an application strategy before any cloud transition begins, using on-premise equipment — even old equipment that it already has on hand.  Once an app is perfected on that physical pilot system, then it can be migrated in phases to a hybrid environment, where it is more likely to be cost-optimized for usage.

Or, optionally, clinics could decide to extend the boundaries of their firewalls inside the leased portion of the hybrid cloud model.  “So for all intents,” says Telford, “it’s like being inside your own data center.  The reason you’d want to do that is, if you want to start offloading or shrinking the footprint of your data center.”  Such shrinking can take place in experimental phases, following Phillpott’s plan perhaps to the letter.

One use-case scenario Telford offers is a hospital, “where they don’t want to use all that data center space, but they want the security and regulatory compliance for workloads, and they run in a true private environment but in an IBM data center.”

HIPAA-cracy

For the healthcare industry in America in particular, as of September 2013, the nuclear option for IT may have been taken off the table.  The reason is a set of new rules, implemented as modifications to the Health Insurance Portability and Accountability Act (HIPAA) and just recently put in place.  Some of these modifications are in direct response to the changing role of cloud service providers for healthcare.

A Department of Health and Human Services document describes the objective of one of the key rule modifications as follows:

Make business associates of covered entities directly liable for compliance with certain of the HIPAA Privacy and Security Rules’ requirements.

Another rule adjustment prohibits the sale of protected health information without individual authorization, which I’ll touch on in a moment.  What is not obvious from this rule adjustment alone is that the definition of business associates has been modified to specifically incorporate any business entity which, in the course of business, exchanges protected personal health information.

That makes every cloud service provider capable of storing personal data on its premises a “business associate of the covered entity” (covered by the HIPAA rules).  That’s a very different class of relationship than a typical vendor/customer scenario.  What the law is saying now is, if the relationship between two or more business entities is so close that they can legally exchange their own customers’ personal information and comply with privacy directives in so doing, then it’s the next closest thing to a partnership.

The legal ramifications here are immense.  First, this makes a CSP fully liable for the safety of the information.  It sets up a precedent for patients to take legal action against the CSP if that information is mishandled, and especially if the alleged mishandling results in permanent injury or death.

Remember how the judge at the end of “Miracle on 34th Street” declared Kris Kringle officially Santa Claus, because a U.S. Government agency (at that time) directed Santa’s mail to him?  It’s like that, only with the cloud and death.

Given the volatile nature of security over the Web, it may be no surprise that some healthcare service vendors are rethinking or even reversing their cloud strategies.  If CSPs cannot be indemnified against damages due to the handling of their clients’ customers’ personal data, they may not want to take on the full and unsupervised responsibility for cloud customers’ IT services, especially with the DHS, the FBI, DHHS, Treasury, and the IRS all simultaneously looking over their shoulders.

What has yet to be determined, in the midst of this transitional state of affairs, is how accountability will be shared among healthcare resource providers.  While the adjusted HIPAA laws will add new burdens for CSPs, they do not spell out the full future state of accountability.  The secret to solving this problem will be for the various “business associates,” to use the Government’s term, to perceive accountability as a business value, something which can be differentiated both qualitatively and quantitatively — so that healthcare organizations and IT providers collectively produce accountability as a service.  I’d add the usual abbreviation here, but this time I’ll refrain.

Previous: State of Change, Chapter 9: The Public Sector

Next: State of Change, Chapter 11: Education