State of Change, Chapter 8: Financial Services
The standards and practices of the world’s financial institutions are carefully cultivated capsules of logic. Nurtured and maintained by generations, they have evolved into strange, often bewildering forms of code, in some cases beyond the immediate comprehension of anyone alive today. The technology that supports this code consists of upgraded versions of the foundations laid for the first generation of information systems anywhere in the world.
From the point of view of someone working in financial IT, change is the enemy. Altering business models and envisioning new ways of working are the last things on the minds of financial CIOs. Mainframe computers are the hives of financial business logic, and almost no CIO anywhere, at any corporation, has an initiative under way to change this fact. The reasons were made unambiguously clear in a September 2012 Forrester Research survey of 1,243 IT executives and staff, commissioned by BMC Software. Among those respondents, 43 percent indicated they worked in the financial sector.
More than nine out of ten — 92 percent of the total respondents — agreed with the statement that maintaining the working order of their mainframes was essential to their ongoing strategy. When asked why, some 69 percent said it was to keep costs low. Nearly half — about 46 percent — said their application strategy was to extend the useful lifetime of their existing business logic.
For a series that’s essentially about the onset of cloud dynamics in the enterprise, you would think this would be the final word on the subject of financial IT. The “move to the cloud” has been played in the press like a grand exodus, a relocation of the world’s informational headquarters across a vast gulf. The impetus for this move is described using the catchphrase “the consumerization of IT.” The storyline goes something like this: Tablets are more desirable than PCs, and because both employees and executives want to use their tablets in the workplace, IT has had to adapt. That adaptation has meant the adoption of cloud delivery models for applications, which are best suited to mobile devices like tablets and smartphones. And that adaptation has necessitated the adoption of cloud infrastructures.
To say this is an oversimplification is to say “Rosebud was his sled” is an oversimplification of “Citizen Kane.” Point A and Point B both figure in the overall geometry, but the straight line between them is something of a false assumption. There is indeed a transition under way to cloud-based computing in the financial sector, and it is happening whether that sector wants it or not. But consumerization is not the driving force, and cost reduction is neither the incentive nor the interim result.
Partly Cloud
“Cloud should be about improving business value, not just about going on to the next cool thing. It helps shape a strategy,” states Ric Telford, vice president of cloud services for IBM. At the risk of confusing atmospheres, he continues, “There’s a lot of this [feeling of], ‘Hey, the water’s fine, jump right in.’ But the water’s not the same temperature for everybody.”
As essentially this planet’s last great manufacturer of mainframe computers, IBM might have something to lose in leading some great financial exodus away from one of the foundation architectures of the company. Newer competitors who have everything to gain from cloud, and who would love to see finance dive right in, point to IBM as something of a stalwart, a conservative, someone with too much of an interest in the status quo. But the truth is, it isn’t IBM that’s dragging finance kicking and screaming into the future. Telford continues:
We have a one-on-one discussion with clients about what they’re doing today, what workloads they’re running, and what they believe the cloud is. The first question I like to ask when I sit down with clients is, “Tell me, in your words, what is the cloud and why do you think it brings value to you?” It’s amazing the variety of responses I get, but it’s typical in that no two companies are the same. When a company comes to IBM and says, “Help us with the cloud, we understand it’s going to bring value to us but we don’t know how to get started, and we want to make sure it’s not just the latest fad,” we sit down and do a thoughtful analysis of their current environment, and then we can take steps... Understand the environment first, take some initial projects that are of the “no regrets” flavor, get some experience with these delivery models, get your people comfortable with a different way of thinking about IT services, don’t rush it.
As Telford tells me, financial services were actually among the earliest adopters of infrastructure-as-a-service (IaaS), the class of cloud services where raw materials like compute power, storage, and bandwidth are sold in manageable increments on an as-needed basis. (In fact — and this is not meant as a joke — some respondents to BMC’s survey who registered reticence toward adopting cloud services may not have known their institutions were IaaS subscribers.) This makes the financial sector, of all places, one of the initial proving grounds for private clouds.
Among the most poorly explained concepts in all of cloud computing is the distinction between public and private clouds. This is probably due to a well-meaning effort on many journalists’ part to pinpoint a “best-fit” definition that misses every vendor’s mark by an equivalent amount. The truth is dramatically simpler, as is cloud dynamics itself. Cloud technology is, quite simply, the ability to pool various resources; provision portions of that pool on an as-needed basis; and pay only for what is used, when it is used. The whole “public/private” thing is essentially about who owns those resources (a short sketch of these mechanics follows the three cases below):
With platforms such as Pivotal Cloud Foundry, you can build a cloud service provider entirely in your data center, and make yourself the sole customer. Without argument, that’s a private cloud. The reasons private clouds exist are to enable faster provisioning and easier servicing, and to optimize usage. This is what IBM’s Telford tells me that financial organizations have been doing for longer than their own executives may realize.
Resources made available for instantaneous provisioning by providers such as Amazon Web Services and Rackspace are considered public cloud resources. This is what many financial CIOs think of when asked point-blank about “the cloud.” They may be averse to embracing this end of the spectrum for very obvious reasons — some personal, some legal. Banks do not want the core logic of their businesses residing in a place they do not own and fully control.
When you lease resources, including public cloud capacity, from other service providers, extending your systems’ boundaries and even their firewalls into infrastructure owned by someone else, the result is the architecture called hybrid cloud.
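Here is a minimal sketch of those shared mechanics in Python (every name and price in it is invented for illustration). The pool, the on-demand provisioning, and the pay-per-use metering are identical in all three cases above; the only variable is who owns the pool.

```python
# A sketch of cloud mechanics: a pool of resources, provisioned on
# demand, metered, and billed only for time actually used. Whether the
# pool is "private" or "public" is purely a question of who owns it.
# All names and prices here are invented for illustration.
import time


class ResourcePool:
    def __init__(self, owner, total_cores, price_per_core_second):
        self.owner = owner          # you -> private cloud; a vendor -> public
        self.free_cores = total_cores
        self.price = price_per_core_second
        self.leases = {}            # lease id -> (cores, start time)

    def provision(self, lease_id, cores):
        """Carve a slice out of the pool, on demand."""
        if cores > self.free_cores:
            raise RuntimeError("pool exhausted")
        self.free_cores -= cores
        self.leases[lease_id] = (cores, time.monotonic())

    def release(self, lease_id):
        """Return the slice; pay only for what was used, when it was used."""
        cores, start = self.leases.pop(lease_id)
        self.free_cores += cores
        return cores * (time.monotonic() - start) * self.price


# The mechanics are identical; only ownership differs.
private = ResourcePool("our-own-datacenter", total_cores=64,
                       price_per_core_second=0.0)
public = ResourcePool("some-iaas-vendor", total_cores=100_000,
                      price_per_core_second=0.00001)

public.provision("batch-run-1", cores=512)
time.sleep(0.1)                     # the "workload"
print(f"invoice: ${public.release('batch-run-1'):.4f}")
```

Lease from both pools at once and you have, in miniature, the hybrid architecture just described.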
For most businesses, for which the “nuclear option” of outsourcing all IT would be fanciful or ludicrous, any cloud deployment will be a hybrid cloud. Thus the whole “whether to go public, private, or hybrid” question is actually a false one. Certainly, financial organizations will not be asking it. Businesses will always own some portion of their IT resources, even if most or all of those resources exist in virtual space. Since every private cloud platform offers the option to incorporate public cloud resources (that’s the whole idea), and every public cloud infrastructure deployment recognizes the integrity of clients’ private resources, every cloud deployment is a hybrid cloud. The ratio of public-ness to private-ness may be completely immaterial. IBM’s Ric Telford:
Any enterprise of size and scale will be running what we call a hybrid cloud, going forward. They are going to have some that they will continue to manage internally, and they will leverage some of the public cloud. That’s how it’ll be for as far as we can see... Hybrid IT is really what we’re talking about, with most enterprises. They’re going to have some workloads running in the traditional IT model (“if it ain’t broke, don’t fix it”); they’re going to have some running in a true private cloud (dynamically provisioned, with an elastic pool of resources that are pay-as-you-go within the internal data center); and then they’re going to have some sets of IT services that will be from true public cloud providers. “Hybrid IT” describes the integration of all three of those models, and I think any enterprise of scale is going to have a hybrid IT environment.
Given the realization that the financial sector is one of the testbeds for what Telford calls “hybrid IT,” finance may actually be more of a trend-setter for cloud adoption than anyone may have realized. One of the emerging drivers of this trend, perhaps most ironically of all, is the need to preserve the existing core logic — the very need that IT leaders say keeps them from adopting the cloud.
The Crown Jewels
Many of the financial sector’s core applications run in COBOL, using a language and a job control system that fewer and fewer of the world’s IT personnel understand. The cost of maintaining the systems that maintain the COBOL is rising, and some would say this is the premier use case for transitioning away from COBOL as a platform and toward a newer language like Java — something offered through a platform-as-a-service (PaaS). But the cost of that transition may yet exceed the cost of maintenance.
Hence a company called Heirloom Computing created a PaaS offering called Elastic COBOL, which lets COBOL-based core logic be transplanted whole from the physical mainframe environment into a simulated one. Elastic COBOL runs batch operations written in IBM’s original Job Control Language (JCL), which in turn dynamically provision compute instances in a way JCL’s creators never dreamed of. Those instances then translate COBOL code into Java on the fly, producing output that can be wrapped in Web services — in other words, that can be called by Web apps and integrated into mobile apps.
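To picture what “wrapped in Web services” means in practice, here is a deliberately tiny, hypothetical sketch (this is not Heirloom’s tooling or API): a routine standing in for translated COBOL core logic, exposed as an HTTP endpoint that a Web or mobile app could call.

```python
# Hypothetical sketch only; this is not Heirloom's Elastic COBOL API.
# A routine standing in for translated COBOL core logic is wrapped in a
# minimal web service, so web and mobile apps can call it over HTTP.
import json
from decimal import Decimal
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


def monthly_interest(balance, annual_rate):
    """Stand-in for a translated COBOL paragraph: fixed-point math."""
    return (balance * annual_rate / Decimal(12)).quantize(Decimal("0.01"))


class CoreLogicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g., GET /interest?balance=1000.00&rate=0.05
        query = parse_qs(urlparse(self.path).query)
        result = monthly_interest(Decimal(query["balance"][0]),
                                  Decimal(query["rate"][0]))
        body = json.dumps({"monthly_interest": str(result)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CoreLogicHandler).serve_forever()
```

The point of Elastic COBOL’s approach is that the logic behind the endpoint never has to be rewritten; only its surroundings change.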
“What you find with these Global 2000 companies is that, the companies that are always trying something new probably were never working too well in the past,” remarks Heirloom’s chief technology officer Mark Haynie, reflecting the conservative viewpoint of a financial sector that, despite this latest reported revolution, lives on.
I don’t want to say, when you try something new, throw away everything and rewrite what you were doing. [But] if you’re the guy who rewrote your COBOL in C, then rewrote the applications in C++, and then in Java, and now undergoing a PHP rewrite, maybe you’ve always been behind the curve, maybe spending all your time changing mindless syntax. Once we get as [experienced] as we are, we stop thinking about syntax. You don’t think about Python having colons on their “if-then-else” statements, and using indentations. You start thinking in terms of algorithms. And if somebody says, “By the way, I need to plug this into a COBOL application,” then the guy with that expertise, that knowledge base, can say, “Okay, that’s just syntax. Let me just generate some COBOL and some Python.”
Rewritten and re-rewritten apps, Haynie argues, are often the worst-performing kind not because of syntax distinctions, but because their architectures typically were not adjusted to best suit their new environments. As a result, although cloud-based language platforms offer significant new virtues, these apps will specifically fail to take advantage of them. And financial services’ core logic ends up continuing to run in the environment to which it’s best suited anyway.
Haynie’s experience dates back to the 1970s, when he served as Director of Systems Architecture for Amdahl Corp. There, he helped create and refine the concept of the distributed transaction — a way for a computer program to define a task in a discrete, “atomic” fashion that could be undone. It’s the architecture for processing multiple transactions concurrently, in a way that ensures the cause-and-effect durability of the data they change. It’s a part of the underlying IBM mainframe platform supporting the COBOL-based logic, one that often goes completely unappreciated.
Mainframes were originally designed for multiple users, so concurrency was built in. The operating systems of the client/server era were designed for single users with isolated desktops, so the integrity of data from any one user’s point of view was maintained, but distributing that integrity throughout the network was a matter of planned replication. The languages created during the client/server era did not assume, as COBOL did, that distributed transaction logic was in place.
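For a feel of what that built-in transaction logic guarantees, here is a minimal sketch using SQLite (chosen purely for illustration; a mainframe transaction monitor does this at vastly larger scale): a funds transfer that either commits in full or is undone in full.

```python
# A minimal illustration of transaction atomicity: the transfer below
# either commits as a whole or is undone as a whole. SQLite stands in,
# for illustration only, for a mainframe transaction monitor.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('checking', 500), ('savings', 200)")
conn.commit()


def transfer(amount, src, dst):
    """Debit src and credit dst as one atomic, undoable unit of work."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        # Enforce a business rule mid-transaction: no overdrafts.
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()          # both updates become durable together...
    except Exception:
        conn.rollback()        # ...or neither takes effect at all
        raise


transfer(300, "checking", "savings")      # succeeds
try:
    transfer(900, "checking", "savings")  # fails and is fully rolled back
except ValueError:
    pass
print(dict(conn.execute("SELECT id, balance FROM accounts")))
# {'checking': 200, 'savings': 500} -- the failed transfer left no trace
```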
So at the dawn of the cloud era, when the first PaaS languages were deployed, and multiple simultaneous users were once again assumed, concurrency had to be reinvented. Ironically, there are aspects of COBOL that, Haynie demonstrates, are better suited to PaaS than languages like PHP and Python that were originally designed to render Web pages for single clients.
These COBOL applications were infinitely scalable, and you didn’t have to upgrade your hardware if you added more bits or digits to your COBOL applications. So fortunately, for better or for worse, these applications are designed to move forward with the technology, because they were never dependent on it in the first place.
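Haynie’s point about digits is a property of the language, not the machine: a COBOL program declares its numeric capacity in the code (a picture clause such as PIC 9(12)V99), rather than inheriting it from the hardware’s word size. Python’s decimal module, used here purely as an analogy, makes the same point:

```python
# In COBOL, numeric capacity is declared in the program (e.g., a picture
# clause like PIC 9(12)V99), not dictated by the hardware word size.
# Python's decimal module illustrates the same idea: adding more digits
# means editing a declaration, not upgrading a machine.
from decimal import Decimal, getcontext

getcontext().prec = 16                  # "add more digits" = one edit
balance = Decimal("999999999999.99") + Decimal("0.01")
print(balance)                          # 1000000000000.00, no overflow
```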
A financial organization that leverages Elastic COBOL is essentially running a hybrid cloud, assuming that fact even matters. It is hybrid because banks and financial services do not feel they need to reinvent their businesses simply because they’re reforming their technology. Essential business logic includes standards and practices that banks hope will live on long after we’re gone.
Risk Driver
Another unexpected driver of financial IT to cloud services is the law. Up until the start of this decade, more than nine out of ten actuaries who provide service and support for the insurance industry reported that their typical toolset consisted of Excel and a third-party VBA analysis add-in. While that may sound fairly flimsy, for years it was the only tool financial analysts could use to see projections on a grid and a chart simultaneously. (I should know. I literally wrote the “bible” on VBA.)
On July 10, 2012, EIOPA — the European Insurance and Occupational Pensions Authority, which regulates the insurance industry for the European Union — produced its final set of recommendations for how insurers serving customers on the continent (wherever those insurers may be headquartered) should report their financial status to the European Central Bank (ECB). New regulations were already in place for banks, and at this point EIOPA was recommending that similar data proving insurers’ solvency also be submitted to the ECB on a quarterly basis.
For the first time, regulators will require insurers to report a huge amount of data every three months regarding their target operating models (TOM). These are, from a statistical standpoint, the expectations each insurer has with respect to the operating conditions it will face, given multiple sets of economic scenarios, over extended periods of time.
“The regulations are now requiring an insurance company to value all of its assets and all of its liabilities,” explains Patricia Renzi, global practice leader with actuarial firm Milliman. Renzi tells me an insurer may have thousands, perhaps millions, of individual policyholders worldwide, along with several hundred thousand assets, all of whose values must be projected, individually and collectively, over a 30- to 50-year period. As if that’s not enough, those projections must vary across what she describes as perhaps a thousand stress scenarios — various sets of global economic circumstances and risk drivers (for example, should a country’s interest rates fluctuate by 300 basis points in one day). And it must all be done every three months.
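A back-of-the-envelope calculation shows why this qualifies as supercomputing. The counts below are illustrative assumptions drawn from the ranges Renzi cites, not Milliman’s actual figures:

```python
# Rough scale of one quarterly solvency projection. Every count below is
# an illustrative assumption drawn from Renzi's ranges, not a real figure.
policies = 1_000_000        # individual policyholders
scenarios = 1_000           # economic stress scenarios
years = 50                  # projection horizon
steps_per_year = 12         # assume monthly time steps

valuations = policies * scenarios * years * steps_per_year
print(f"{valuations:,} valuations per quarterly run")
# 600,000,000,000 valuations: a supercomputing job, not a spreadsheet.
```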
It’s a four-dimensional supercomputing application, and suddenly it’s an everyday requirement. There’s no way to do this on a spreadsheet. The way the Excel templates previously rendered projections was by generating “best-fit curves,” or polynomial expressions that project asset values, based on past performance, into the indefinite future. Come next year, such reporting methods will not be compliant, and may actually be illegal.
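For contrast, the older spreadsheet method amounts to roughly the following, a sketch using NumPy with invented data: fit one polynomial to past values and extrapolate it forward, one curve for all futures, with no stress scenarios at all.

```python
# A sketch of the old "best-fit curve" approach: fit a polynomial to
# historical asset values and extrapolate it forward. Data is invented.
import numpy as np

t = np.arange(10)                        # ten years of history, 0..9
values = np.array([100.0, 104, 110, 118, 112, 95,   # invented values,
                   101, 109, 114, 120])              # with a mid-series dip

coefficients = np.polyfit(t, values, deg=2)          # quadratic best fit
projection = np.polyval(coefficients, np.arange(10, 40))  # 30 years ahead

print(projection[:5].round(1))
# One curve, one implied scenario, zero risk drivers: exactly the kind
# of projection the new quarterly requirements rule out.
```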
Milliman had been offering a desktop application, called MG-ALFA, to rival the bellwether Excel. It’s not impossible for insurers to utilize the desktop MG-ALFA for these new requirements. The problem is, it’s expensive: an extraordinary amount of computing power would be required for one week out of every 13. For the remaining 12, it may as well sit dormant. Dormancy of any kind costs too much.
MG-ALFA uses Microsoft’s .NET Framework, and Microsoft’s Windows Azure was originally deployed as a PaaS service, running .NET applications over cloud resources. Collaborating with a consultant, Milliman redesigned MG-ALFA so that its .NET-based core ran on the Azure platform, with a lightweight front end like ASP.NET MVC picking up the desktop end of the service.
Milliman did this so that it could effectively become a cloud services provider, offering actuarial services on a utility model. It spins up instances of MG-ALFA in the Azure cloud just for those days when its clients require them, and resells those instances to its clients.
Pat Renzi admitted point-blank that cost savings for Milliman had next to nothing to do with this architectural decision:
It’s more expensive to manage and maintain [MG-ALFA], because we’re now providing 24/7/365 monitoring and support of the system. Whereas before, it was the client’s job to do that...
Obviously, for any company, if you can get the right answer faster, that’s better. To have a 10,000-core, on-premise infrastructure that you’re using five percent of the time, that doesn’t make any sense at all. But the cost to provide that through cloud computing is really minimal when you think about it, because you just pay for it when you need it. It seems like the best solution, in that you can get more accurate information for a lower cost.
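Renzi’s utilization argument is easy to quantify. In the sketch below, every price is invented (these are not Azure’s or Milliman’s rates); the point survives any plausible substitution:

```python
# Illustrative utility-model arithmetic. Prices are invented; they are
# not Azure's or Milliman's actual rates.
cores = 10_000
hours_per_year = 365 * 24

owned_rate = 0.05     # assumed amortized cost per core-hour of owned gear
cloud_rate = 0.12     # assumed on-demand price per core-hour (much higher)
utilization = 0.05    # the cluster is actually busy 5% of the year

owned_cost = cores * hours_per_year * owned_rate
cloud_cost = cores * hours_per_year * utilization * cloud_rate

print(f"owned, mostly dormant: ${owned_cost:,.0f} per year")
print(f"cloud, pay-per-use:    ${cloud_cost:,.0f} per year")
# Even at more than double the hourly rate, paying only for the busy 5%
# costs roughly one-eighth as much as owning idle capacity.
```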
There are some who will argue that the operating structure of banks and financial institutions worldwide is in need of one of those revolutions I wrote about in an earlier article in this series. But it is not the need to revolutionize business strategy that is moving these institutions toward cloud services, or into actually becoming service providers. Though a Porter value chain does exist for the financial sector, nobody’s drawing circles and arrows.
Rather, there are fundamental changes at work in the world in which they do business that compel them — perhaps not even willingly — to rethink their approach to continuing to serve their customers in the manner those customers have come to expect. The technology infrastructure, the legal framework, and the economic environment that have supported the unchanging, stable, reliable nature of financial services are all radically reforming. For the financial sector to stay the same, it must change.