State of Change, Chapter 13: Manufacturing
Technically, there is a significant difference between an application that is provided to your business through a hosted data center, and one that is genuinely “in the cloud.” That technical distinction is qualitative; it directly impacts the quality of the service your business receives.
This is particularly important for the manufacturing sector of the economy — the sector at the root of supply chains, as discussed in the previous article in this series. For nearly two decades, manufacturers have been served by a number of “all-in-one” or “suite” applications providers, beginning with on-premise data center installations. In the late 1990s, some vendors including NetSuite began offering hosted alternatives, such as enterprise resource planning (ERP) applications, designed for use through Web browsers.
They were among the first genuine Web applications. But they were woefully underpowered and unreliable. The immaturity of the Web platform was only one reason why.
Through the 2000s, as Web engineers finally figured out how the Web was supposed to work, many of these same ERP vendors began offering their applications “as a service.” Some extended their marketing metaphors to include “in the cloud.” Thus began the trend of cloud confusion that we’re experiencing today, a scenario where customers are led to believe that almost anything that goes on behind the magic curtain (imagine the Wizard of Oz) is cloud computing.
True Clouds, and Others
There are certain virtues that a true cloud-based deployment should bring to the table — virtues that cannot be realized by simply obfuscating the identity of the single, off-site data center where the application is being hosted:
A cloud-based service should be extensible and flexible enough to adjust to the customer organization’s varying needs, at or near the time those needs change. By contrast, a hosted application suite is typically a virtualized version of the same suite that would run in the customer’s data center on-premise, and therefore is designed for an operating system rather than designed for a cloud platform. That measurable difference in design means that improvements to applications take place at the vendor’s exclusive discretion, and on that vendor’s marketing itinerary.
When a true cloud platform (e.g., Microsoft Windows Azure, Salesforce Heroku) hosts business services, the “endpoints” for those services have connectors that link the output and the products of those services to other business applications. By comparison, when an on-premise application suite is migrated to a virtualization layer, all the ways it communicates with the outside world move with it. It “thinks” it’s home, so it exports data to spreadsheets and imports data from normalized databases on a file-by-file basis — just one step up from MS-DOS. This type of data translation typically lacks an audit trail. As a result, when information gets lost in translation (however that may happen), it’s not obvious who or what is responsible. Executives who are accustomed to Excel may believe they can validate any exported spreadsheet that passes over their desks, as long as it’s formatted properly. (A minimal sketch of this contrast appears after this list.)
Cloud services provided on a platform may be engineered to work in tandem with one another — to “know” of each other’s presence. So they are pre-equipped with the ability to communicate using best practices, which may include the use of Electronic Data Interchange (EDI, which featured very prominently in the previous article in this series). By contrast, many ERP suites lack this means of data exchange, at least without the use of an add-on. So when a suite processes a shipping transaction or a purchase order, its output may very well be entrusted to a person who serves as a bridge between the application and EDI, not unlike a telegraph office from the 19th century. Recently, actual businesses have been established to handle, for a fee, the translation between each transaction recorded by a customer’s business management application and an EDI document in processing... and then, of course, to handle the reverse. Conceivably, an ERP platform can automate and standardize this transaction process, to such an extent that some business leaders may begin questioning the applicability of EDI — a system predicated on separate systems rather than converged ones — in a modern cloud scenario.
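To make that first contrast concrete, here is a minimal sketch, in Python, of the two integration styles described above. Every name, URL, and field in it is hypothetical; it illustrates the architectural difference, not any vendor’s actual interface.

# Hypothetical sketch: a platform service delivering a purchase order directly
# to a partner's endpoint (with the response serving as an audit record),
# versus the legacy pattern of flattening the same order to a file for
# someone else to re-import.

import csv
import json
from urllib import request

purchase_order = {
    "po_number": "PO-1001",   # invented example data
    "supplier": "ACME Fasteners",
    "lines": [{"part": "M6-BOLT", "qty": 5000, "due": "2013-07-01"}],
}

def send_to_partner_endpoint(po, url):
    # Cloud-platform style: the service "knows" its counterpart and posts
    # structured data; the HTTP exchange itself leaves an auditable trace.
    req = request.Request(url, data=json.dumps(po).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

def export_to_spreadsheet(po, path):
    # Virtualized-suite style: the order becomes a file on disk, and the
    # audit trail ends the moment the file leaves the application.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["po_number", "part", "qty", "due"])
        for line in po["lines"]:
            writer.writerow([po["po_number"], line["part"], line["qty"], line["due"]])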
So there is a distinct difference between the concepts of simply moving manufacturing ERP applications to the cloud, and of deploying ERP services on a cloud-based platform. Those differences may seem subtle on the surface. An ERP software company named IFS North America recently polled some 200 executives from companies with $50 million or more in annual revenue, asking two different questions about whether they ran their ERP software as a SaaS application. For one question, 5 percent of respondents said yes; for another, phrased a bit differently, 20 percent of the same group said yes.
This confusion was something IFS wanted to illuminate, especially with respect to advising its customers on how to engineer whatever cloud migration they may undertake. Its advice is for customers to investigate a private cloud deployment in order to maintain ownership and control over their digital assets. Here’s an excerpt from the company’s April 2013 white paper:
This enhanced degree of control is something study respondents seem to find appealing as control was one of the main concerns about SaaS expressed in responses to an open-ended question. But control can mean different things to different people. Control may involve an ability to make decisions about how to configure the software, when to upgrade, and when to roll out new functionality. SaaS vendors, for instance, upgrade the software centrally, and position the fact that all users are always on the latest version as a product benefit. But new versions mean change, and change can be disruptive. Relatively subtle changes to the way an application works and how users interact with the system may introduce a learning curve that impacts performance.
And if that centralized upgrade happens at a mission critical time for your business, you may feel like you lack control. A SaaS vendor may provide ample notice and training around pending changes, but customers still find themselves conforming to the SaaS vendor’s timeline instead of a timeline determined by the customer... Control is important because ERP is more than just a piece of technology. It is the heart and soul of middle-market to large businesses.
These are not the words of a company that’s ready to sound the trumpet of the revolution right about now, but rather one that’s ready to play the violins for executives who want to hold onto the last relic of the 1980s. It’s important to note that IFS does not offer its ERP as a cloud platform — more to the point, not as a suite of networked components engineered for the cloud, but rather as an application running on a virtualized operating system.
The ERP market is still considered a “software” market. It’s so stable that industry analyst firm Gartner computed its annual growth rate at an anemic 2.2 percent. And there are so many players in this market that some of its best-known and most publicized names in recent years, including NetSuite and Workday, hold so little market share that they each fall within the “Others” segment on Gartner’s pie chart. Many of the established players are evidently so content with the glacial flow of change that they’re happy to promote a world view which essentially says that change isn’t happening, isn’t wanted, and/or is overrated.
Narrow Forecasts
The bad news for the established order is that the trumpet call has been sounding, and here is its message: Material requirements planning (MRP) has long been perceived as a critical function of manufacturing. It’s the logic that examines what is known thus far about the goods that customers will be purchasing, and determines whether enough raw materials are on hand or in transit for the manufacturer to make those goods.
A surprising number of the world’s ERP applications (especially those applications installed on-premise, but also hosted ERP and virtualized SaaS) do not actually provide enough (or any) real-time visibility into customer or partner requirements to be able to truly implement MRP as it was originally envisioned. For their users, MRP — or something using that abbreviation — is an add-on module. As a result, even though MRP is the older science (dating back to the 1950s) and the parent of ERP, the two classes of software have historically been separate, and in some instances actually compete with one another.
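For readers who have never seen the original logic spelled out, here is a minimal sketch of classic MRP netting for a single part, written in Python with invented figures. It is an illustration of the calculation as textbooks describe it, not the code of any commercial ERP package: gross requirements are netted against on-hand stock and scheduled receipts, and any shortfall becomes a planned order released one lead time earlier.

# Minimal sketch of classic MRP netting for one part (hypothetical data).
def mrp_netting(gross_requirements, on_hand, scheduled_receipts, lead_time):
    # gross_requirements: units needed in each period (from known orders)
    # scheduled_receipts: units already ordered and in transit, per period
    # Returns the planned order releases per period.
    planned_releases = [0] * len(gross_requirements)
    available = on_hand
    for period, demand in enumerate(gross_requirements):
        available += scheduled_receipts[period]
        shortfall = demand - available
        if shortfall > 0:
            release_period = max(0, period - lead_time)
            planned_releases[release_period] += shortfall
            available += shortfall   # assume the planned order arrives in time
        available -= demand
    return planned_releases

# Four weekly periods, 100 units on hand, one open PO of 50 due in week 2,
# and a one-week procurement lead time:
print(mrp_netting([80, 120, 60, 90], on_hand=100,
                  scheduled_receipts=[0, 50, 0, 0], lead_time=1))
# -> [50, 60, 90, 0]

The point of the sketch is the input it takes for granted: real, known requirements. When those requirements are invisible to the application, everything downstream degrades into estimation.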
In this sample chapter from a book about SAP’s own ERP product (PDF available here), SAP Press explains the various levels of planning processes that pertain to SAP’s materials planning functions. The chapter explains that standard sales and operations planning (SOP) “is, to a large extent, preconfigured when the system is delivered and can be used without much effort.” Flexible planning — that is, the ability to base materials planning on customer needs — is an option. And while SAP suggests using this option, it clearly explains that the ERP application’s comprehension of customer demand is based not upon actual visibility into the demand itself, but instead upon forecasts:
In demand planning, forecasts are usually based on aggregated historical data, which can, for example, come from SAP ERP-LIS (Logistics Information System). The corresponding data structures in LIS are used to structure and prepare the plan figures. They are based on the use of characteristics (e.g., plant and sold-to party) that can be used to break down key figures (e.g., production quantity).
Demand planning can be the starting point for the entire production planning process. For example, production plans can be created within demand planning (e.g., in a production key figure) and later transferred to operative planning as planned independent requirements. Because of this, the planned independent requirements form the basis for procurement planning and production planning and can, for example, be offset against the current sales orders, depending on the chosen planning strategy.
A cottage industry has formed around the function of predicting future demand patterns based on past data, and retrofitting existing ERP tools to handle the estimates. This may not be so much material requirements planning as material requirements guessing. It assumes that past behavior is good enough to indicate how much and how often raw materials will need to be procured; but in the absence of past behavior (as would be the case with a default system with its preconfigured settings) SAP advises that users devise as much of a planning strategy as possible without waiting for a history of sales orders to be built up. There’s an actual catch-phrase for this sort of predilection, which appears here:
In make-to-stock production, you want to initiate procurement of the relevant materials without having to wait for actual sales orders. Such a procedure can shorten delivery times and, thanks to future-oriented planning, evenly spread the burden across the production resources.
Thus “future-oriented planning,” by definition, is something done without any knowledge of the future besides forecasts.
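Stripped of its terminology, forecast-driven demand planning reduces to something like the following sketch, with invented figures and function names (this is not SAP’s code): project last year’s aggregated history forward, whether or not any customer actually intends to buy that much again.

# Hypothetical sketch: "planned independent requirements" derived purely from
# aggregated historical data, with no visibility into actual customer demand.
def forecast_requirements(historical_shipments, periods_ahead):
    average = sum(historical_shipments) / len(historical_shipments)
    return [round(average)] * periods_ahead

last_year = [410, 395, 450, 380, 420, 405, 390, 460, 415, 400, 385, 430]
print(forecast_requirements(last_year, periods_ahead=3))   # -> [412, 412, 412]

Every procurement decision downstream then inherits whatever error is baked into that average, which is the “material requirements guessing” described above.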
The New Constraints
“The world that existed when MRP was developed no longer exists,” write Carol Ptak and Chad Smith, co-authors of the third edition of Orlicky’s Material Requirements Planning. Yes, the very people responsible for the production of the world’s MRP bible are the loudest voices speaking against the use of past patterns to predict the future, and even to justify the present. In a 2008 article entitled “Beyond MRP: Meeting the Current Materials Synchronization Challenge” (PDF available here), Ptak and Smith continue:
The sheer size of ERP systems today hides the reality that for most mid-range and large manufacturers, MRP remains a critical module in their ERP system, and the changing global manufacturing environment has exposed critical shortcomings in most MRP implementations and tools. Variability and volatility are on a dramatic rise and the implementations of pull-based philosophies like Lean and TOC [Theory of Constraints] are proliferating. These conditions and approaches are putting extreme pressure on MRP systems and even creating conflicting modes of operation (push versus pull). MRP was designed in the 1950’s, commercially coded in the 1970’s and really hasn’t changed since. It was never designed with today’s factors in mind...
We are now in a world with global capacity far exceeding global demand. Customers can purchase what they want, when they want it, at a price they want to pay due to the lack of transactional friction available through the Internet. In addition, customers are increasingly fickle. The push strategy of produce and promote just does not work anymore.
Or to put it bluntly: Technology has changed the marketplace at such a drastic pace that technology has yet to keep up with it.
Ptak and Smith refer to something called the Theory of Constraints. This is an idea first put forward in 1984 by Eliyahu M. Goldratt and Jeff Cox in a book called The Goal: A Process of Ongoing Improvement. Unusually, the book is not an essay, but rather a novel that follows the story of a central narrative character named Alex Rogo on his journey through all the business functions of a fictitious manufacturer, interviewing their various practitioners in an effort to understand how the company’s systems work and fail to work. Rogo has a three-month deadline imposed upon him by his superior. His goal, although he can’t articulate it at first, is to create some kind of interoperative system that pulls all these processes together.
At one point in the book, Rogo encounters a plant inventory manager who’s enthused about how the changes Rogo already made to the production cycle gave her the opportunity to implement tactical changes in the workplace. (Yes, there are times you may be reminded of watching a 1950s army training film.) Those changes prompted her and her staff to periodically check the assembly queues to see where work orders were stacking up. She dubbed these bottlenecks buffers (keep that word in mind). Whenever the work stacked up on a particular work center’s buffer, she would respond by notifying that work center’s foreman to direct his attention to those delayed processes.
Soon she discovered that she tended to visit the same foremen far more often than others, so much so that the process of reassigning their work sequences was becoming standardized. In a feat of nomenclature, she christened these foremen’s work centers capacity constraint resources (CCRs), referring to the parts of the production chain that feel the constraints first — the weakest links in the chain. People on the shop floor who handle materials can’t always be expected to simply “go faster,” she concluded.
Her solution is, in effect, an application of this new Theory of Constraints: examining how work processes are reassigned in order to alleviate bottlenecks, and redesigning those few processes to make them the norm. The idea is that the management of any system boils down to alleviating the bottlenecks precluding effective processing at the weakest links of the chain... and not much else.
Rather than a novel, Ptak and Smith wrote a textbook — or rather, they rewrote it, and then redefined its very purpose. In 2008, they proposed that systems management principles that concentrated mainly on meeting projected forecasts neglected how the evolution of the modern economy made accurate forecasting next to impossible. Their suggested alternative (first articulated just as cloud computing was taking hold, and cloud dynamics had yet to be examined) is something they originally dubbed Actively Synchronized Replenishment (ASR), and have since re-named Demand-Driven MRP (DDMRP). It is a vastly different philosophy from the MRP they inherited, and is based on five (previously four) principal components:
Strategic inventory positioning is a concept of evenly distributing inventory (including raw materials) among warehouses, so that the ability to absorb unforeseeable fluctuations in customer demand is spread throughout the supply chain. Its goal is to focus on the buffers (there’s that word) representing the constraints inherent in the current system, and to re-engineer the distribution of resources to minimize bottlenecks. The result is something I called, in an earlier article in this series, equilibrium. Ptak and Smith advise planners to pay attention to the aggregate bill of material (ABoM) — the virtual bill for all the components that comprise a customer’s order, plus the components of those components, and so on — and to station parts and raw materials in such a way as to maximize your options for fulfilling future orders.
Dynamic buffer level profiling and maintenance is a science of characterizing the seasonality, demand, and volatility (“freshness”) traits not just of parts in your inventory, but groups of parts that share several of those same characteristics. An ideal MRP application that follows ASR principles would color-code parts and parts groups on a rainbow scale ranging from blue (overstocked) through green (properly stocked) and through to red (understocked). A chart of customer demand for parts or items with these parts over time would use these color codes to explain buying cycles better than a thousand-word essay ever could. A model for responding to increasing demand on account of seasonality, or a successful product promotion, can then literally be drawn in color on a time-scale chart that, once you understand the coding system, looks completely self-explanatory.
Dynamic adjustments involve the application of these models to the supply chain-oriented plan, so that variability becomes part of the program. A part or group of parts is given a “buffer profile,” but as demand shifts on a day-by-day basis, these buffers are adjusted (hopefully automatically), resulting in new profiles representing when parts can or should be replenished. The objective is to move away from replenishing supply levels based on anticipated or forecast demand, and toward replenishment based upon actual demand (“pull”). While that may sound dangerous, it isn’t actually the same as throwing caution to the wind when you consider that the real demand is modeled ahead of time. The economic forces of actual demand should naturally push buffers into the yellow zone — signals to replenish — but hopefully not all the way to the red zone. (A minimal sketch of this zone logic appears after this list.)
Pull-based demand generation triggers the ERP application to account for demand as it happens, and integrate any variations it encounters into the existing buffer profiles. Here, the demand levels for parts that are components of other parts are modeled cumulatively in advance, rather than as separate buffers representing individual demands. So when the ERP refreshes the demand chart, a part that’s a part of a part way down the chain doesn’t accumulate urgency along with the demand for the part in its own right. Ptak and Smith call this “BoM decoupling,” merging all the dependent side-effects into one graph and thereby “minimizing the bull-whip effect,” to borrow their phrase.
Highly visible and collaborative execution (there’s that “C” word again), which is heavily dependent upon the introduction of another “C” word: communication. Typically a trading partner asserts the priority of a purchase or manufacturing order based on how much time they’re given to fulfill it. If they’re given the standard interval, in other words, it must not be urgent. This creates a situation where the normal flow of business is, by definition, non-critical. As Ptak and Smith demonstrate, this actually creates opportunities for adversities, the only solutions to which come in the form of manual workarounds. The final solution to this, they suggest, comes in the form of a system that color-codes visible buffer status, based on data that is collectively supplied by the multiple trading partners in the supply chain.
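Here is the minimal sketch of the zone logic promised above. The thresholds, quantities, and the “net flow” shorthand are simplified and hypothetical; real buffer profiles are derived from lead times, variability, and order frequency, and this is not Ptak and Smith’s published formula set.

# Hypothetical sketch of buffer zone coloring for a single part.
def buffer_zone(on_hand, open_supply, open_demand,
                top_of_red, top_of_yellow, top_of_green):
    # Net flow position: what we have, plus what is already on order,
    # minus actual (not forecast) demand against it.
    net_flow = on_hand + open_supply - open_demand
    if net_flow > top_of_green:
        return "blue"     # overstocked: capital tied up in excess inventory
    if net_flow > top_of_yellow:
        return "green"    # properly stocked: no action required
    if net_flow > top_of_red:
        return "yellow"   # time to replenish
    return "red"          # at risk of a stock-out

# A part with 120 on hand, 200 on order, and 260 units of actual demand:
print(buffer_zone(120, 200, 260,
                  top_of_red=40, top_of_yellow=90, top_of_green=180))  # -> "yellow"

The charts described above amount to this function evaluated continuously across every buffered part, and rendered as color over time.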
The concept of DDMRP would replace the entire methodology of modern resource planning with a system that responds programmatically to events in real-time. The key introduction here is the notion of real-time facts, which has only recently become feasible by way of inexpensive bandwidth and plentiful storage. As a case study, DDMRP advocates point to the largest freeze-drying company in the US, Oregon Freeze Dry, which credits the methodology with improving annual sales by 20 percent while reducing both inventory and lead time by 60 percent.
Whether or not an organization chooses to implement DDMRP to the letter, at some near-future date, it will need to consider the implications of utilizing real-time supply chain data. It is here that the traditional model of the ERP application begins to fail. The most sophisticated desktop ERP systems give users what appear to be flexible, rules-based functions for planning parts replenishments. For instance, with something like Microsoft Dynamics NAV, when an inventory level falls below a given percentage or quantity, or when a number of pending, planned POs will come due at a certain time, a rule can set up a requisition on standby until someone with the proper authority signs off on it.
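The logic of such a rule is easy to sketch. The part numbers, thresholds, and function below are hypothetical, and this is not Dynamics NAV’s actual configuration or API; it simply shows the shape of the per-part trigger being described.

# Hypothetical sketch of a conventional per-part reorder rule.
def check_reorder_rule(part_no, on_hand, reorder_point, reorder_qty):
    # If stock has fallen below the reorder point, stage a requisition that
    # waits for someone with the proper authority to sign off on it.
    if on_hand < reorder_point:
        return {"part": part_no, "quantity": reorder_qty,
                "status": "pending approval"}
    return None

print(check_reorder_rule("M6-BOLT", on_hand=140,
                         reorder_point=200, reorder_qty=1000))
# -> {'part': 'M6-BOLT', 'quantity': 1000, 'status': 'pending approval'}

Note that the rule sees exactly one part and exactly one condition at a time.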
That sort of rule seems perfectly logical, until you take Ptak and Smith’s suggestions into account and realize that all those rules are effectively triggers, each dealing with the replenishment of one part under one set of conditions. By contrast, an actively synchronized system examines the entire inventory picture as a whole, taking all of the interdependencies into account at once. So the objective moves from deciding when to reorder one part, multiplied by the total number of parts, to deciding how to reallocate inventory among multiple warehouses and suppliers in advance, ordering only what may be necessary based not on trends but on facts, to forestall contingencies and achieve equilibrium.
It’s the level of revelation that cries out for a cloud-based solution.
Bottlenecks
A cloud platform could provide the multi-tenancy and real-time supply chain visibility that such a collaborative system would require, with the security necessary to prevent shared data from inadvertently leaking trade secrets between partner companies. For the moment, at least, there may not be such a solution. Although one software package has earned the “ASR Compliant” seal of approval, Replenishment+ is a traditional desktop application — the product of an ingenious idea that came to light before the dawn of the cloud computing era. The ideal concepts are there, but the execution is tailored for a market that’s one or two stages behind SAP, Salesforce, and Dynamics NAV.
NetSuite stops short of endorsing DDMRP outright. But unlike its competition, in December 2012 the company officially acknowledged the concept, tied it to cloud dynamics, and has actually been asking its own users whether it is worth integrating into its software. It paints a broad picture of DDMRP as “planning by exception,” and then ties the rule-based Reminders system of its ERP application to that picture, by way of attempting to demonstrate that NetSuite does something in the vicinity of DDMRP right now.
A few months earlier, Microsoft wrote off DDMRP as “just [an] acronym,” or something that Dynamics AX not only does now but has always done. It also inaccurately defined the term as a forecast-driven system, even though DDMRP’s own creators explicitly say, “Forecast accuracy at the individual SKU and part levels is highly inaccurate.”
Although Salesforce ostensibly constructed its original cloud platform for CRM, the depth that Force.com eventually attained has enabled third parties to adapt that platform for MRP. This is where a company called Rootstock attracts our attention. Because Rootstock leverages Salesforce’s sales leads and opportunities database, data generated by a customer’s sales division can translate directly into purchase orders within Rootstock Cloud MRP, without the intermediate step of importing or data integration. And Rootstock also integrates work center production management, using a color-coded chart system that should catch the eye of any developer familiar with DDMRP.
While these three cloud platforms consider their position on the future of planning, a company called GT Nexus is assembling the one component they sorely lack to enable DDMRP at the scale it requires: a network of interoperable trading partners, willing to share information over a cloud platform so that they all have supply chain visibility. This platform is called Procure-to-Pay (PDF available here), and serves as a negotiating exchange between purchasers and suppliers, searching and bargaining for the best prices in real-time. GT Nexus argues that, by settling bills of materials instantly, it will render wholly obsolete the forecasting engines at the heart of the ERP applications now being moved to cloud platforms. Company co-founder and executive vice president Greg Johnson had this to say in a recent podcast:
Globalization has changed so much. Companies have outsourced a lot of their functions — whether it’s manufacturing or sourcing. So the very processes that they might have held inside their four walls have been pushed out beyond their four walls. And they’ve also been pushed out globally, so other companies and partners take on manufacturing, or take on sourcing, and they are located 10-, 12-, 13,000 miles away.
So the information needs and topology have changed radically. It’s no longer a game about managing your information inside your four walls; it’s about how you manage the information and the business process flows between you and your customers, or you and your suppliers, or you and your service providers? And ERP just wasn’t designed for that kind of world. You can almost draw a bright line in the sand, and on one side of that line are all the information systems that were built for the single company, “enterprise systems;” and on the other side of the line... systems that were designed for inter-company collaboration on a global scale.
There is indeed a line in the sand, but on both sides lie some of the key resources that vendors on the other side desperately require. Salesforce, Microsoft, and now NetSuite all have developer platforms for their ERP engines in their respective clouds. So if none of these companies ever get a handle on what DDMRP is, or recognize that their customers have already begun implementing it, outside developers could build DDMRP applications around those platforms anyway. Cloud dynamics enables situations where smarter customers can become service providers despite vendor reticence. Rootstock already has a handle on how to make Salesforce CRM into a viable ERP platform. And all three of these platforms have the innate capability, however it may be exploited, to exchange data between trading partners.
What GT Nexus lacks at present is an open development system for existing ERP customers to transition their existing databases and practices to a real-time methodology. But it has built an actual working exchange, along with the methods behind it. It’s slowly building a trading community for this exchange. And it’s phasing in support for RFID tagging, to enable real-time tracking of parts, products, items in transit, and shipping logistics. This way, rather than auditing the EDI document trail, trading partners can learn what one another has on hand and what each needs now. It’s this type of real-time data that would be the lifeblood of a future cloud-based DDMRP platform.
It’s also the culmination of what’s been termed the “Internet of Things” — a new universe of communication where trillions of tagged items are signaling their heartbeats and confirming their existence to billions of hubs. If any communications system ever seemed deserving of the metaphor “cloud,” this would be the one.
The prospect is this: If SAP or Microsoft or Salesforce or NetSuite doesn’t build the next generation of planning applications that replaces ERP for the manufacturing sector, someone else will. And that’s not bad news for any of those companies, because that someone else could build it on one or more of their platforms. For it to truly work in the cloud, however, it needs to have a viable business model. Once they’ve matured, exchanges earn revenue through transactions. Conceivably, a cloud-based supply chain partner exchange would earn revenue this way, by handling transaction volumes at scale.
But here’s the problem: Good ideas rarely germinate into mature business practices on their own. They often have to render old ideas obsolete first, and those old ideas are someone else’s intellectual property. Cloud platforms could conceivably host old business practices on virtualization layers for as long as we live. For good ideas — or, to be honest, for any ideas — to take root, there must be a significant business incentive for the keeper of the grounds where those roots are planted. As things stand now, there are still significant, if not obvious, disincentives to implementing such a broad cloud coalition, one of which we’ll explore as this series continues: For service providers, the revenue model for this system may not yet be enticing enough.