State of Change, Chapter 16: Energy

It has been called “the largest interconnected machine on Earth” in a report prepared a few years ago for the U.S. Dept. of Energy (PDF available here).  As any computer builder will tell you, an interconnected machine is only as strong as its weakest connection.  By that measure, much of America’s power grid is older than almost everyone presently alive.  It is, by all accounts, in a state of slow collapse.

In August 2007, in one of the most dire warnings ever issued by a government agency, the Environmental Protection Agency painted a picture of a power grid that could become unsustainable as soon as 2015.

Between 2000 and 2006, the EPA report said, the power requirements of America’s data centers alone more than doubled, from 28.2 billion kilowatt-hours (kWh) to 61.4 billion kWh, for a compound annual growth rate (CAGR) in energy demand of about 14 percent.  The brunt of that increase was borne by a category of computer for which the government had only recently coined a name: “volume servers.”  Large enterprises accounted for 38 percent of all the electricity used by the nation’s data centers.  The power drawn by new servers was now growing faster than the power consumed by new people, eclipsing the strain placed on the system by population growth.  At that rate, the EPA predicted, energy consumption by data centers alone would double again by 2011.
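That 14 percent figure is just compound-growth arithmetic.  Here’s a quick back-of-the-envelope sketch in Python, using the report’s own endpoints (the variable names are mine), showing how the growth rate and the projected doubling by 2011 fall out of two numbers:

# Compound annual growth rate implied by the EPA's 2000 and 2006 figures,
# then a simple extrapolation to 2011.  Figures from the report; code is illustrative.
start_kwh = 28.2e9   # data center consumption, 2000 (kWh)
end_kwh = 61.4e9     # data center consumption, 2006 (kWh)
years = 6

cagr = (end_kwh / start_kwh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")            # about 14 percent

projected_2011 = end_kwh * (1 + cagr) ** 5    # extrapolate 2006 -> 2011
print(f"Projected 2011 demand: {projected_2011 / 1e9:.0f} billion kWh")  # roughly double the 2006 figure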

And at that rate, there would be nothing in the federal budget to pay for the new power stations required to meet that demand.  Keeping up would cost the government some $21 billion per year, an amount 56 percent greater than the entire annual discretionary budget of the U.S. Dept. of Transportation.

And you thought Windows was looking dated.

Since that time, however, the alarm from Washington has receded to a murmur.  That the world failed to end on cue two years ago, many believe, should be taken as a signal that data center power consumption did not, in fact, double.

What averted the apocalypse may have been, according to one renowned source, an unforeseen catastrophe.  Stanford University researcher Dr. Jonathan Koomey believes a perfect storm arose from the confluence of two factors: increased utilization in data centers as a result of virtualization, which reduced energy demand... and the economic slowdown triggered by the subprime mortgage crisis of 2008.  We can thank Lehman Bros. after all, it turns out, for U.S. data center electricity use growing by only 36 percent between 2005 and 2010, and worldwide use by 56 percent, says Dr. Koomey.  It’s not an official government estimate, however, but rather one put together in the absence of a formal inquiry.

As a country, we have to stop relying upon catastrophes and massive failures to save us from the serious problems that face us.  When today’s power grid can’t handle the two-way traffic caused by wind farms and solar power any more, there might not be an asteroid crashing to Earth to save the day.

Power Flux

The replacement of the nation’s brittle power grid will be the greatest platform migration in the history of humankind.  I say “will be” because it hasn’t really begun yet.  The migration of the United States to new and more sustainable modes of power (thus catching up with much of the rest of the world) remains something best depicted with the aid of special effects.  Even America’s largest wind farms to date are experimental.  Simply modernizing the power delivery system (PDS) in place today will not adequately sustain the power requirements of our grandchildren.

In any system of value delivery — even a commoditized one — when the customer is underserved, the customer eventually engineers a way not to be underserved any more.  (See: “IT, consumerization of.”)  In any technological system, from something as simple as data storage to something as fundamental as energy delivery, cloud dynamics empowers customers by demolishing the boundaries that divide useful and valuable resources into compartmentalized units.

The result is, well, disruption.  In this case, quite literally.

Years ago, the entertainment industry enjoyed unchallenged prominence when it could control the channel through which content was delivered.  It no longer can.  As today’s systems of power and fuel delivery erode, opportunities arise for customers to take charge and wrest control of the delivery channels.  Here too, the customer is gaining the power to drive the value chain, although unlike with media, the transition here is in its early stages.

The early warning signals of the impending effects of cloud dynamics on energy systems are just now beginning to be felt.  Already, estimates of the amount of power Americans are likely to consume over the next five years carry such broad “plus-or-minus” warnings that they’re about as dependable as a roll of the dice.  For instance, in January 2013, the U.S. Energy Information Administration issued its latest five-year projection (PDF available here) of the levelized costs per kilowatt-hour for providing various classes of power, including conventional and advanced coal-based power and thermal and photovoltaic solar power (also known as “solar PV”).  In one of its many warnings about the variables that may affect the accuracy of its forecasts, the EIA notes that its solar PV cost estimates “apply only to utility-scale use of those technologies,” and cannot yet account for “end-use” applications — customers generating their own power.

“End-use applications” occur when customers generate more power than they use, and the excess gets transferred back into the power grid.  A customer can be as small as a single household or as large as Google.  But it’s the existence of end-use applications for as few as one customer that’s the principal reason why the nation’s power grid requires replacement.
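A word about the EIA’s yardstick: “levelized cost” is essentially lifetime cost divided by lifetime output, both discounted to present value.  The sketch below illustrates only that idea.  It is not the EIA’s model, which layers in capacity factors, tax treatment, and regional adjustments, and every figure in it is a placeholder:

# Simplified levelized cost of electricity (LCOE), in dollars per kWh.
# LCOE = (discounted lifetime costs) / (discounted lifetime generation)

def lcoe(capital_cost, annual_om_cost, annual_kwh, lifetime_years, discount_rate):
    """Illustrative levelized cost of electricity, $ per kWh."""
    costs = capital_cost + sum(
        annual_om_cost / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1)
    )
    energy = sum(
        annual_kwh / (1 + discount_rate) ** t for t in range(1, lifetime_years + 1)
    )
    return costs / energy

# A hypothetical utility-scale solar PV plant; numbers are placeholders, not EIA figures.
print(lcoe(capital_cost=2_000_000_000,   # $2B up front
           annual_om_cost=20_000_000,    # $20M per year operations and maintenance
           annual_kwh=1.8e9,             # about 1.8 billion kWh generated per year
           lifetime_years=30,
           discount_rate=0.07))          # prints roughly $0.10 per kWh for these made-up inputs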

At a CommNexus conference in March 2013 ostensibly about “migrating to the cloud,” Dr. Amy Chiu, vice president of IT for San Diego Gas & Electric, made one of the first public connections between what I’ve been calling “cloud dynamics” and the distribution of electric power.  It may sound like a stretch for me or anyone to correlate a phenomenon of computing with a public utility.  And customer-generated power is not particularly new.  But the phenomenon that’s impacting her industry now is part of the same broader trend of customer empowerment being driven by retail customers, and of pooling resources to overcome or eliminate barriers.  As Dr. Chiu describes:

In San Diego, a lot of us have been putting solar panels on our rooftops.  That has moved [power] generation from bulk generation to distributed generation.  That has a huge impact on the way we keep your power on, because the quality of that current now is no longer in our control.  In the past, we would run the power plant or we would contract with power plants who would put out power at 400 or 500 megawatts/block.  We actually have services, or they can actually do ramping up and down, to make sure the current is very steady.  But when you have solar panels on your rooftop, we can’t control that.  I can’t control when a cloud passes over.  [The atmospheric type.]  What that does is create a lot of static and a lot of noise in the current.  And... this noise is introduced at your homes rather than one central location.  So we have to manage that.

In the old days, when we just had refrigerators, that was okay.  But now, when we have sensitive equipment like computers or... very highly calibrated instruments, that current has to be of certain integrity.

Dr. Chiu went on to describe how SDG&E was installing a new technology called synchrophasors on its transmission lines.  It’s a real-time logistics network for the PDS, and it’s an upgrade of the existing power grid to support a machine-to-machine (M2M) communications network — very literally, an “Internet of things.”  Despite how M2M is being played elsewhere in the press, it’s an existing technology, not some future prediction.  It’s a way of employing a sensor network to deliver information about any geographically dispersed system in real time, including oil pipelines, automobile traffic systems, transportation networks, inventory control systems, and now electric power grids.
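To make that concrete: each synchrophasor streams GPS-timestamped readings of voltage magnitude, phase angle, and frequency.  The record below is a simplified stand-in for what one reading carries, not the actual IEEE C37.118 frame format, and the station identifier is invented:

# An illustrative record for a single synchrophasor reading -- a simplified
# stand-in for a real measurement frame, not the standard's wire format.
from dataclasses import dataclass

@dataclass
class PhasorReading:
    station_id: str       # which unit on the grid reported this (made-up ID below)
    timestamp_us: int     # GPS-synchronized timestamp, microseconds since the epoch
    voltage_kv: float     # voltage magnitude, kilovolts
    phase_deg: float      # phase angle, degrees
    frequency_hz: float   # measured line frequency (nominally 60 Hz in the U.S.)

reading = PhasorReading("SDGE-TL23041", 1363046400_000_000, 230.1, -12.4, 59.98)
print(reading)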

This kind of sensor networking is what some engineers were starting to call “the cloud” before that term was appropriated by marketers to refer to data centers.

Synchrophasors have become necessary for power providers because traffic over the power grid, like the traffic on the Internet, is two-way.  What will make the future “smart grid” smart — assuming it ever comes to fruition — is the ability to sense instabilities in the system and route around them, using routing protocols that are not only inspired by the Internet, but may actually be Internet Protocol version 6 (IPv6).

In a 2010 white paper (PDF available here), Cisco makes its case for implementing IP as the routing system for not only the smart grid’s data, but the energy it would carry:

IP is the proven, scalable, secure, cost-effective, and interoperable foundation for the communications, information, and commercial networks around the world.  Cisco believes that the Internet architecture should similarly serve as the foundation for the smart grid.  The IP protocol suite includes a number of protocols and mechanisms to ensure high quality of service that meets the requirements of the most stringent applications, high availability, and a very strong and secure architecture.  The flexibility of the layered architecture also ensures investment protection for utilities for decades to come.  IP has also demonstrated its ability to scale with billions of connected devices, and smart grid networks have similar scalability requirements.

Until the components of the nation’s current power grid can recognize one another, synchrophasors use Internet Protocol to multicast data among themselves, reducing single points of failure by ensuring that multiple points in the network hold the latest status reports.  But the amount of data these systems generate, according to SDG&E’s Dr. Chiu, is beyond the capacity of its existing data centers to contain.  Keep in mind that many of the programs used to maintain that 500 MW-per-block flow, back when it was all downstream, still run in their original COBOL.  Meanwhile, complex data sets are being compiled by each synchrophasor, in some cases as frequently as 100 times per second.  She continues:
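The multicast pattern described here can be sketched in a few lines of Python.  The group address, port, and JSON payload are arbitrary choices for illustration; production synchrophasor networks use standardized framing rather than JSON, over networks engineered for the purpose:

# Bare-bones sketch of the multicast idea: one sender pushes each reading to a
# group address, and every subscribed node receives it, so no single collector
# becomes a point of failure.  Group, port, and payload are invented.
import json
import socket

GROUP, PORT = "239.1.2.3", 50000   # arbitrary multicast group for the example

def publish(reading: dict) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(json.dumps(reading).encode(), (GROUP, PORT))
    sock.close()

publish({"station": "SDGE-TL23041", "freq_hz": 59.98, "phase_deg": -12.4})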

Once we collect that data, we want to quickly digest the data and diagnose problems in the system that we have.  This is a problem we never had before, because we didn’t have these types of needs, nor the equipment.  Those types of computing will be done in a very real time...  People talk about big data.  There’s one aspect of data that we care a lot about: velocity.  What is the speed of the data coming in, and what is the speed [in which] that digested data has to go out?  And in that particular scenario, for the grid operation, that velocity’s very, very high.  We have a ton, and we have to get it out right away, so if there’s a system fault, they can do something about it.
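Dr. Chiu’s “velocity” point is, in computing terms, a streaming problem: each reading has to be digested the instant it arrives rather than batched for later analysis.  A toy version of such a check follows; the 0.05 Hz threshold and two-second window are invented for the example, not drawn from utility practice:

# Digest each reading the moment it arrives: keep a short rolling window and
# flag the moment the average line frequency drifts off nominal.
from collections import deque

NOMINAL_HZ = 60.0
WINDOW = deque(maxlen=200)   # roughly 2 seconds of readings at 100 samples/second

def ingest(freq_hz: float) -> bool:
    """Return True if the rolling average drifts more than 0.05 Hz off nominal."""
    WINDOW.append(freq_hz)
    avg = sum(WINDOW) / len(WINDOW)
    return abs(avg - NOMINAL_HZ) > 0.05

for sample in (60.01, 59.99, 59.93, 59.90, 59.89):
    if ingest(sample):
        print("possible fault -- alert the grid operator")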

Power Drain

In time, the costs of keeping the current grid on life support may outweigh those of replacing it.  But what are those costs exactly?

In 2011, the Electric Power Research Institute (EPRI) estimated the general cost of a massive, grid-related blackout event to be about $10 billion (PDF available here).  The everyday costs sustained by the U.S. economy related to ordinary power disturbances, EPRI said, range between $119 billion and $188 billion annually.  In that same document, EPRI estimated the maximum net investment required for the country to realize a fully operational smart grid to be $476 billion over 20 years.  Other sources have pegged this number as high as $1.5 trillion.

The amazing fact is that these costs may not directly result from the power events themselves.  One provider of geospatial mapping services for power companies is ESRI, which now offers its ArcGIS mapping software as a cloud-based SaaS service hosted on Amazon AWS.  In a June 2012 blog post, Bill Meehan, who directs ESRI’s worldwide utility practice, made a point which should be obvious but which is completely underappreciated:  Because the nation’s current PDS is not smart, the costs incurred by customers in the wake of power interruptions can be completely arbitrary — whatever they’re willing to pay.  Sure, power companies budget for failures in advance, but whatever they’ve budgeted tends to be the minimum cost in the end.  Meehan writes:

In the midst of an outage, most customers would probably say they are willing to pay more for power that doesn’t go out during a heavy snowstorm or a substation fire.  A week after power is restored, about 30 percent fewer customers would agree to the higher payment.  Why is that?  No one calculates the cost of power failures to the customers.

Meehan’s suggestion is that utility providers upgrade the geographic information systems (GIS) they already use to keep track of their assets so that they can build spatial analysis models for power failures.  That way, the cost of a failure can be determined at the point of failure.  “The results could be shared with everyone,” he writes.  “Then they would know.  We all would know.”
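Computationally, what Meehan describes reduces to a spatial join and a sum: intersect the outage footprint with the customer layer, then total each customer class’s interruption cost for the duration of the event.  A toy version, with placeholder figures standing in for the interruption-cost studies a real model would draw on:

# Toy outage-cost model: total the interruption costs of the customers inside an
# outage footprint for its duration.  Cost-per-hour figures are placeholders, and
# the "spatial join" is faked with a plain list rather than a GIS query.

outage_footprint = {"feeder": "F-112", "duration_hours": 6.0}

affected_customers = [
    {"type": "residential", "count": 1200, "cost_per_hour": 4.0},
    {"type": "commercial",  "count": 80,   "cost_per_hour": 210.0},
    {"type": "industrial",  "count": 3,    "cost_per_hour": 9500.0},
]

total = sum(c["count"] * c["cost_per_hour"] * outage_footprint["duration_hours"]
            for c in affected_customers)
print(f"Estimated cost of this outage: ${total:,.0f}")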

ESRI’s estimates of the cost of power events evidently come from what utility companies have budgeted and paid to recover from them, and how much their customers in turn reimbursed them.  Using that logic, it estimates that in the first 20 years of its use, the net benefits realized from the new smart PDS could be as high as $2 trillion.  Yet these estimates may also need to be reconsidered, not in light of failures with arbitrary costs that no longer happen, but instead in light of new efficiencies gained by making the PDS into an internet unto itself.

The U.S. Dept. of Energy white paper introduces this new class of efficiencies as “power from the people.”  It’s a type of smart routing that enables power generation resources to be... pooled together.  Is this starting to sound very familiar?  DOE calls this methodology islanding:

Islanding is the ability of distributed generation to continue to generate power even when power from a utility is absent.  Combining distributed resources of every description – rooftop PV (solar), fuel cells, electric vehicles – the community can generate sufficient electricity to keep the grocery store, the police department, traffic lights, the phone system and the community health center up and running.

While it may take a week to restore the lines, the generation potential resident in the community means that citizens still have sufficient power to meet their essential needs.
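Strip away the terminology and islanding is, at bottom, a balancing question: can the community’s pooled distributed generation carry its essential loads while the utility feed is down?  A toy check, with every figure invented for illustration:

# Can the island's pooled distributed generation cover its essential loads?
distributed_sources_kw = {"rooftop_pv": 850, "fuel_cells": 300, "ev_batteries": 450}
essential_loads_kw = {"grocery": 400, "police": 150, "traffic_lights": 40,
                      "phone_system": 120, "health_center": 500}

supply = sum(distributed_sources_kw.values())
demand = sum(essential_loads_kw.values())
print(f"Island supply {supply} kW vs. essential demand {demand} kW:",
      "can island" if supply >= demand else "must shed load")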

It’s the same general resilience concept that maintains data availability in a cloud computing infrastructure, only here applied as a way to enable local resources to serve as backups.  It’s also one of the principal functions of what EPRI calls its Renewable Energy Grid Interface (REGI).  Yes, it’s a router.  And EPRI believes the cost of using REGI as the PDS power routers in 2030 could be just 10 percent of the present cost of running the units that REGI would replace.

One huge reason is resilience — the ability of the system, probably using IPv6, to contain and route around failures, replacing the habit of budgeting for failures as massive events in advance.

Yet one missing element from most estimates of the cost of implementing the smart grid involves a class of problem that, by design, the smart grid cannot route around: namely, the interface between the existing system and the new one under construction.  Unless the nation’s power companies intend to throw a single switch and light up the new grid in one fell swoop, the two systems will need to coexist with each other for at least some period of time, hopefully a manageable one.

Power Play

The current power grid was not designed to be a two-way network.  Retrofitting it for two-way capacity is not some kind of bolt-on attachment; synchrophasors don’t suddenly make the existing grid “virtually” smart.  Rather, it’s more of a life support system – not so much a second brain as an artificial respirator.

When variable energy events occur (and quite possibly they do every single day, though usually on a small scale), the result is called a loop flow — a spillover of sorts of unscheduled energy.  Such spillovers have to go somewhere, and since (until further notice) electricity follows the laws of physics, the overflow follows the path of least resistance.
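The physics can be shown in miniature with the current-divider rule: a flow between two points splits across parallel paths in inverse proportion to their impedances, regardless of which path the transaction was scheduled over.  The impedance values below are invented:

# A loop flow in miniature: flow divides across parallel paths in inverse
# proportion to their impedances, no matter what the schedule says.

def split_flow(total_mw: float, impedances: dict) -> dict:
    """Divide a flow across parallel paths using the current-divider rule."""
    conductances = {path: 1.0 / z for path, z in impedances.items()}
    total_g = sum(conductances.values())
    return {path: total_mw * g / total_g for path, g in conductances.items()}

# A 1,000 MW transfer scheduled over the long path still mostly takes the short one.
print(split_flow(1000.0, {"direct_route": 0.02, "great_lakes_loop": 0.08}))

Scale those two paths up to an interconnected network of thousands of lines and you have the problem Germany, as we’re about to see, now deals with on every blustery day.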

A preview of coming attractions for dealing with loop flows every day comes from Europe, where Germany has made bold and swift moves away from nuclear power and toward wind power.  Germany, like the U.S., has an aging power grid.  But unlike America, it now boasts of generating 20 percent of its electrical power from wind turbines, many of which are stationed in the eastern part of the country.  There, as an Institute for Energy Research report confirms, the combined upstream flow from wind and solar generation sources into the German PDS on a particularly blustery day can result in the system in that region bearing as much as four times the amount of electricity that is actually being consumed.

That overflow has to go somewhere, so the laws of physics send it over paths of least resistance into neighboring Poland and the Czech Republic.  Though parts of the Czech grid have been modernized, it still lacks the capacity to sustain the overflow.  So Czech utility provider CEPS is spending as much as $50 million apiece on the construction of enormous phase-shifting transformers at the interface between the Czech and German power grids.

$50 million apiece is exactly what Michigan power provider ITC has paid for the construction of power transformers to avert another catastrophic loop flow event there.  In August 2003, the Great Lakes region was the epicenter of the most damaging power blackout event in the history of the existing U.S. power grid, when a loop flow tripped a grid shutdown whose ripple effect impacted over 100 power plants in the U.S. and Canada.  Apparently during periods of high grid congestion in the Northeast, and particularly in New York State, traders found it more profitable to route power over a chain of eight paths along the Great Lakes than through a direct route.  But the loop flow over the circuitous route was so great that the overflow took the direct route anyway, triggering a blackout event over the very route the trade was designed to bypass.

A six-year investigation (these things take time) by the Federal Energy Regulatory Commission cleared the traders of suspicion and wrongdoing.  EPRI assessed the cost of that blackout to the nation at $10 billion... again, using estimates based, some experts believe, on the costs Americans are willing to pay.

As this nation builds more wind power generators, the likelihood of loop flow events increases — there’s no way around it, at least with the current PDS.  This little fact may either be mentioned (albeit in small print) or cleverly implied by stories covering Google’s much-publicized moves toward powering its data centers with wind power.

Such small print was correctly spotted by Forbes contributor Tim Worstall, who found it lurking in Google’s recent news of purchasing the entire 240 MW power output of the Happy Hereford wind farm near Amarillo, Texas.  Ostensibly, the move was made in conjunction with the building out of its Oklahoma data centers.

But wind power has the misfortune of being as predictable as the wind itself.  Granted, in Oklahoma and northern Texas, the problem isn’t always whether there’s wind, but how much.  Yet because wind power is not normalized, and cannot become so (unless you want to supplement the wind with a giant motorized fan), neither Google nor anyone else can directly draw upon wind-based power for its data centers.  Instead, as Google admitted in its small print, it will feed off the regular power grid, warts and all.  “But the impact on our overall carbon footprint and the amount of renewable energy on the grid is the same as if we could consume it,” the company says.

As we’ve seen, that isn’t exactly right.  The interface between wind power generators and the existing power grid is too fragile.  Furthermore, the shortfalls left by wind power’s variable contribution to the PDS must be made up with coal, hydroelectric, and nuclear power; and the transformers necessary to contain loop flow events draw upon power sources with high carbon footprints.

Finally, and perhaps most importantly, the interface of Happy Hereford power with the rest of the existing PDS could actually trigger the kinds of power events which Google must take care to avoid.  Assuming Google is right, and the power draw is virtually the same as if the data center were plugged into the wind farm directly, the likelihood of an outage would increase with every tornado warning.  Which makes the location of this pairing — near the very center of “tornado alley” — cause for concern in itself.

Mind Power

Technically, there are three possible sources of computing power at the scale utility companies require to manage even the current PDS and reduce the likelihood of loop flow events.  First, they could continue the course they started on in the 2000s and purchase new data center equipment.  For some utility providers, including SDG&E, the cost of this course is already prohibitive.  Second, there’s supercomputing power, which can be purchased on a time-sharing basis.  But not every utility company in America has the skills or expertise to run its own Los Alamos facility, even by remote control.

This leaves cloud computing as the only option available for most public utilities.  But it may mean adopting the same migration agendas as some financial services firms, which we discussed in an earlier article in this series.  They’re deploying virtual COBOL interpreters on cloud platforms, capable of emulating mainframe systems for the benefit of billing routines and cost control structures from the 1950s and ‘60s.

Or, as SDG&E’s Dr. Chiu suggested to the CommNexus panel, it could go in entirely the opposite direction.  The type of platform she suggested should be built is a cloud-based neural network, utilizing algorithms that were the subject of artificial intelligence research throughout the 1990s, prior to the advent of Web search engines.  Neural networks are software-based constructs that build relationships between points in a database based on the “flow” of data over those points.  The idea behind them is that, given enough training, a neural network can anticipate or predict a pattern of events even when it isn’t mathematically or statistically clear how such a prediction could be made.

The basis of neural networking theory is the research of Canadian psychologist Dr. Donald O. Hebb, who hypothesized that electrochemical activity caused by the repeating patterns people naturally observe and sense could strengthen the bonds between axons and dendrites in the brain, leading to memory.  Here’s how I explained computer-based neural networks for a client of mine back in 1994:

The part of Hebb's neuron interaction theory that a neural network program attempts to simulate is this:  The computer neural network program maintains a matrix of “neurons” (also called “neurodes” by some who wish to distinguish between the brain and the computer) which are, for all intents and purposes, simple fractional values... The user of a neural network “trains” it by feeding it series of numerical data, preferably large numbers of record entries from a database.  Certain computer neurons are “connected,” in a symbolic sense, to other neurons not by means of axons and dendrites but by relative memory addresses.  The values of the connections between each pair of neurons along successive layers are memorized and retained within a separate region of memory.  These values govern how much the values within the computer neurons are affected by continuing similarities in the data input patterns.

The training process is declared complete after an arbitrary period of time.  At this point, the values originally absorbed by the input layer have been passed through any number of hidden layers (those with which the user need not interact directly), so a pattern of data representing medians or trends or other such general observations about the patterns in the input data will be reflected in the output layer of neurons.  Any number of realistic conclusions can be drawn concerning the meaning of the patterns found in the output layer.
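For readers who would rather see that description run than read it, here is a minimal feedforward network in the same spirit: an input layer, one hidden layer, an output layer, and plain backpropagation.  The training task (flagging an invented overflow condition when two input signals are jointly high) exists only to give the sketch something to learn; it bears no relation to SDG&E’s actual data:

# A minimal two-layer feedforward network trained by backpropagation on toy data.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: two input signals scaled 0-1 (imagine wind output and a
# scheduled transfer); the label is 1 when their sum crosses an invented threshold.
X = rng.random((200, 2))
y = ((X[:, 0] + X[:, 1]) > 1.2).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> output layer

for _ in range(5000):                            # full-batch gradient descent
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    d_out = (output - y) * output * (1 - output)          # backpropagated error
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 2.0 * hidden.T @ d_out / len(X)
    b2 -= 2.0 * d_out.mean(axis=0)
    W1 -= 2.0 * X.T @ d_hidden / len(X)
    b1 -= 2.0 * d_hidden.mean(axis=0)

predictions = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
print("agreement with training labels:", (predictions == (y > 0.5)).mean())

Swap in years of synchrophasor readings for the toy inputs and this is, in outline, the kind of workload Dr. Chiu was describing.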

A neural net that predicts in real time the circumstances in which a loop flow event will take place, before it happens, could conceivably become the largest category of cloud computing task ever deployed.  The very implementation of this task could call into question whether the current model of utility companies, which are regional in structure and tied to their current stations in the old grid, should continue to be “virtualized” on the new system.

Put another way:  Suppose the nation’s power system truly does become an “Internet of things,” reaping all the benefits of smart routing, passing on the cost benefits of cloud dynamics to the consumer, and implementing a holistic system of event prediction that’s modeled on a concept of the human brain.  Why should its utility providers remain regional and limited by statute?  When the nation’s financial transactions became all-electronic, its banks soon became nationwide.  Are we ready for disruption on this scale?  Or do we go back to praying for a second Great Depression to stave it off?

Previous: State of Change, Chapter 15: Retail
Next: State of Change, Chapter 17: Marketing