How Low Can Public Cloud Computing Prices Go?

by Bernard Golden on March 7, 2014


Cloud Computing Prices: The Topic Du Jour Quotidiennement

The topic of cloud computing pricing is evergreen: controversy about how much public cloud computing really costs, and whether on-premise cloud environments can be less expensive, is endless. And certainly the real answer is not obvious. If we look at analogous situations, it's clear that buying can be less expensive than renting over a given time period; for example, over a four-year period, it is cheaper to buy a car than to rent one from Hertz.

Many people assert that running your own hardware infrastructure is bound to be less expensive than using a public cloud provider, for the same reason that buying a car is cheaper than renting one. The justifications for this run from the reasoned (as in Chuck Hollis’ blog) to the ludicrous (as in this recent blog posting that claimed running your own hardware is less expensive because you don’t have to hire anyone to manage it).

The rationale for those holding this opinion boils down to:

  • Cloud provider costs are roughly comparable to those for on-premise, so they have no resource cost advantage;
  • Cloud providers charge a premium for making the capacity available on an on-demand basis, while the on-premise offering does not need to impose a convenience surcharge; and
  • Cloud providers are for-profit businesses, while internal IT organizations run on a non-profit basis and therefore don’t add a margin for profit.

Take the same cost structure, tack on an on-demand surcharge, and add a profit margin, and on-premise is bound to be less expensive. Right?

I have to say, I am not convinced by this reasoning. In fact, I’m not sure that any of the factors cited are true, and find it easy to believe that they’re almost always false — particularly for the webscale public providers, which are the only ones really worth discussing, and likely to be the only ones left in five years.

Let’s look at the cost input structure. The figure below lists the major inputs into running a cloud environment.

[Figure: Cloud Comparison Criteria]

Taking each factor in turn:

  • Hardware infrastructure refers to the physical infrastructure used to house and operate a cloud environment. It includes both computing equipment (e.g., servers, switches, and storage) and the environment around it (e.g., the data center building, HVAC, and racks). It’s not at all obvious that an internal IT group can achieve the same hardware cost structure that a webscale provider can. Providers buy in much larger quantities than almost any IT organization, so they receive volume discounts. Moreover, most of them have moved to internal design and direct purchase from Asian ODMs, so they avoid paying the profit margin imposed by a systems vendor. And for sure, none of the webscale providers buy “enterprise-grade” equipment, unlike most enterprises. Cost advantage: public cloud provider.
  • Software infrastructure refers to the orchestration software used by the provider to operate its environment. All of the large providers write their own, which enables them to deliver a differentiated and highly scaled offering. Of course, this means the providers have to employ sophisticated software developers, an expense internal clouds may be able to avoid. However, this isn’t as clear-cut as it might seem. Enterprises often choose an orchestration product from a software vendor who achieves 80% gross and 20% net margins, all of which adds to the internal cloud cost structure. An enterprise can avoid that license tax by choosing an open source orchestration product, but, given the relative immaturity of the open source offerings available, it typically ends up employing its own software developers, obviating the putative cost advantage of going open source. At scale, the cost of software development is amortized across enormous volume, so the effect of internal software development is relatively small; by contrast, proprietary license fees drop much less as scale grows (each new node requires additional license fees, although per-node costs drop somewhat due to volume discounts). Cost advantage: public cloud provider.
  • Internet bandwidth refers to general connectivity to the outside world. Public cloud providers purchase Internet connectivity in much higher volumes than any individual enterprise and therefore achieve a cost advantage. Moreover, Google and Amazon (and no doubt Microsoft) have moved to purchasing their own fiber, which further reduces their Internet bandwidth cost, as they no longer have to fund a carrier’s profit margin as part of that cost. Cost advantage: public cloud provider.
  • Energy refers to the cost of electricity used to run the cloud environment. This is clearly an area in which very large-scale cloud providers achieve better pricing than any enterprise. In a study originally published by Virtual Strategy magazine (alas, no longer available online), author Steve Denegri analyzed CSP energy costs in detail. In his analysis, he described how, when building its San Antonio data center, Microsoft negotiated a new super-large-user electricity rate, lower even than the rate previously available to the largest electricity users. This cost advantage does not even touch on the reduced energy costs available to very large providers by operating data centers at very high efficiency. Facebook achieves a sub-1.10 PUE in its Oregon data centers; the typical skilled enterprise operates at over 1.5 PUE. Cost advantage: public cloud provider.
  • Labor refers to the cost of the employees required to build and operate a cloud environment. It’s not immediately obvious whether enterprises or public cloud providers can achieve the lower labor cost. From a pure salary viewpoint, it’s likely that public providers pay more; after all, they employ the cream-of-the-crop talent required to build the largest computing environments. By contrast, most enterprises select from a less-heady applicant pool and therefore probably don’t have to pay as much. On the other hand, the question isn’t the raw cost per employee; it’s how much labor cost is imputed for a given amount of computing capacity. Seen in this light, it’s likely that public providers have a significant cost advantage: because their environments are so highly automated, they require far fewer employees per 1,000 servers. While most enterprises strain to achieve server-to-sysadmin ratios of 30:1, public providers commonly achieve 10,000:1 (the rough sketch after this list shows how these differences compound). Cost advantage: public cloud provider.
  • Cost of capital refers to how much an entity must spend to obtain investment capital. The cost of capital is typically a blended rate of what an entity pays to borrow money (its interest rate) combined with its equity valuation. A high-PE company has, in effect, a lower equity cost of capital than a low-PE company. Of the large providers, Microsoft has a relatively low PE ratio, while Amazon and Google sport sky-high ratios. Large balance-sheet cash reserves also reduce a company’s cost of capital, and Microsoft has an enormous warchest of savings. By contrast, enterprises (and most non-webscale cloud providers) have much higher costs of capital — in fact, I once had a vendor explain to me how unfair the cloud marketplace is because “Amazon is subsidized by the stock market and can offer lower prices than we can.” Another way of saying that is that Amazon has a much lower cost of capital than his company, and benefits accordingly. Cost advantage: public cloud provider.
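To make the compounding concrete, here is a minimal back-of-the-envelope sketch that sums three of the inputs above (hardware, energy, and labor) for a single server-month. Every figure in it is an assumption chosen purely for illustration, not data from any provider or enterprise; change the inputs and the ratio moves, but the direction doesn’t.

```python
# Illustrative only: rough per-server monthly cost under assumed inputs.
# None of these numbers come from a real provider; they are placeholders
# chosen to show how the individual advantages compound.

def monthly_server_cost(hardware_price, amortization_months,
                        server_watts, pue, power_price_per_kwh,
                        admin_cost_monthly, servers_per_admin):
    """Sum hardware, energy, and labor costs for one server-month."""
    hardware = hardware_price / amortization_months        # capex spread over useful life
    energy = (server_watts / 1000.0) * 24 * 30 * pue * power_price_per_kwh
    labor = admin_cost_monthly / servers_per_admin
    return hardware + energy + labor

# Assumed enterprise inputs: list-price server, 1.5 PUE, 30:1 admin ratio.
enterprise = monthly_server_cost(9000, 36, 400, 1.5, 0.10, 10000, 30)

# Assumed webscale inputs: discounted ODM server, 1.1 PUE,
# cheaper negotiated power, 10,000:1 admin ratio (better-paid admins).
webscale = monthly_server_cost(5500, 36, 400, 1.1, 0.05, 15000, 10000)

print(f"enterprise: ${enterprise:,.2f} per server-month")
print(f"webscale:   ${webscale:,.2f} per server-month")
print(f"ratio:      {enterprise / webscale:.1f}x")
```

Even with these deliberately modest assumptions the gap comes out at several fold, and that’s before software licensing or cost of capital is counted.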

Reviewing the cost categories, it’s public cloud providers 6 – 0 over private cloud. So you can understand why I’m perplexed when I hear people assert that they can operate a private cloud less expensively than one of the large public providers. And this doesn’t even address the fact that most IT organizations have no detailed understanding of their costs; their budgets are often spread across several departments and are never analyzed to produce true marginal costs broken down to specific resource units (e.g., cost per server-hour). In fact, most of the time, the putative cost advantage of a private cloud is breezily asserted with no evidence save for an unsupported certainty that, by being “on the side” of users (i.e., part of the same corporate entity), internal IT must be more cost-effective than a profit-seeking external entity.

The problem with that perspective is that it assumes cloud operating costs are analogous to the situation described at the beginning of this post: renting a car vs. buying one. In the car world, while Hertz can buy a car less expensively than an end customer can, there’s only a small difference between what the car actually costs Hertz and what it costs an end user. I don’t believe that analogy is at all accurate; while the savings a cloud provider achieves by designing and directly procuring servers may only be 15%, the total savings available to a cloud provider summed across all six categories might be as much as 75%. That is to say, a virtual machine might cost a cloud provider one quarter of what it costs a private cloud operator: 25 cents for every dollar the private operator spends. Even adding a 10% profit margin leaves the public provider at well under half the cost of the private cloud alternative.
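The arithmetic, spelled out (remember that the 75% figure is my hypothetical, not a measured number):

```python
# Quick check of the paragraph above; the 75% savings figure is hypothetical.
private_cost_per_vm = 1.00                    # normalize the private cloud cost to $1.00
provider_cost_per_vm = 0.25                   # assumed 75% lower cost structure
provider_price = provider_cost_per_vm * 1.10  # add a 10% profit margin

print(f"public provider price per VM: ${provider_price:.3f}")                        # $0.275
print(f"as a fraction of private cost: {provider_price / private_cost_per_vm:.1%}")  # 27.5%
```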

Cloud Computing Price Drops: Gated by Investment?

Of course, even if public providers have a cost advantage, that doesn’t necessarily mean they will reduce their prices enough to make using a public provider less expensive than an on-premise environment. Despite Amazon’s (and now Microsoft’s and Google’s) ongoing price cuts, there are those who assert (as in this piece by competitor ProfitBricks) that Amazon, far from offering really low prices, is actually milking its offering for very high margins. This perspective is somewhat supported by this article, which quotes Gartner’s Kyle Hilgendorf saying that cloud users aren’t really interested in price cuts and are more focused on the richness of provider services, which benefits Amazon as the provider with the richest offering functionality.

I think this perspective has some validity. The big three public providers (AMG) are building out capacity like crazy, and Amazon in particular is experiencing sky-high growth. While Amazon is extremely secretive about its revenues and infrastructure numbers, its one public metric, the number of S3 objects it manages, indicates the scale and growth it is experiencing.

[Chart: growth in the number of S3 objects stored, through Q2 2013]

Using S3 object counts as a proxy for overall AWS scale and growth, it seems clear that, if ProfitBricks’ assertion is correct, too-high AWS prices aren’t deterring users from embracing the service. In fact, it’s not clear that lower prices would do Amazon any good; they might, in fact, alienate users. In my CIO.com blog, I did an analysis of Amazon’s capital investment, based on its frequent statement that each day it installs as much AWS capacity as it used to run all of Amazon in 2003, and concluded that it has to install $6.5 million of equipment each and every day to achieve that capacity. I could be off significantly from the real number, but it’s clear that Amazon is investing a very large amount in its computing infrastructure, day in and day out. It’s likely that just keeping up with that level of investment taxes Amazon’s ability to the limit; lowering prices further would just result in even more users piling on, which would make Amazon’s daily struggle even worse. In other words, Amazon can barely keep up with demand as it is, so there’s no need (or point) in cutting prices to some supposedly “fairer” level.
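For a sense of what that daily figure implies over a year (again, using my own rough estimate, which could easily be off):

```python
# Annualizing the rough estimate above: ~$6.5M of equipment installed per day.
daily_capex = 6_500_000
annual_capex = daily_capex * 365
print(f"implied annual AWS infrastructure investment: ${annual_capex / 1e9:.2f}B")  # ~$2.37B
```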

So, public providers may not have prices as low today as they theoretically could be. I’m not convinced that even at today’s prices public providers are more expensive than private clouds, but I am convinced that they are not done lowering prices. If you are an IT organization creating a business case that says you can provide cloud computing services less expensively than a public provider, you need to consider what that business case would look like if public provider prices were 50% of what they are today. Put another way: can you drop your prices as low as your public cloud competition can?

Cloud Computing Prices: How Low Can They Go?

The big question is: just how low can public cloud computing prices go? There is, of course, an irreducible minimum price below which a public provider cannot go, absent a willingness to absorb losses on a long-term basis. I think it’s fair to say that we are not yet at that level, and of course the level itself drops as hardware gets cheaper through the workings of Moore’s Law. The providers treat their cost structures as proprietary information, so it’s unlikely that any of them will publish numbers.

However, there are some guidelines we can look to for how public providers will behave on pricing. In general, I believe public cloud providers will follow these rules:

  • Price at marginal cost. When I was in college, in my microeconomics class, the instructor spelled out this rule: a company should be willing to take any price that allows it to cover its marginal cost and make some contribution, no matter how small, to cover its capital cost. A more sophisticated way to put this is that a firm should accept any offer that allows it to pay for the variable cost of the offer as well as provide some payment against its fixed cost, even if the payment does not cover the fully loaded offer cost.
  • Increase utilization to reduce fixed cost. The amount of fixed cost that any cloud resource has to carry varies with the utilization rate of the provider’s infrastructure: the overhead is spread across all of the resources sold at a price that covers their variable costs, so the more of those resources are sold, the less fixed-cost overhead is assigned to each one, making it profitable (or reducing its net loss) at a lower price. Consequently, we can expect public cloud providers to execute sophisticated pricing schemes designed to push infrastructure utilization as high as service quality allows. We’ve seen this already with AWS’s reserved and spot instances, which are designed to drive up utilization of Amazon’s computing infrastructure. We are nowhere near the end of this sophisticated demand management; to understand its nuances, we have only to look at the airline industry, where the practice is called yield management and accounts for situations in which a passenger in one seat pays ten times what the one in the next seat paid. (The sketch after this list works through both of these pricing rules with made-up numbers.)
  • Use game theory to increase market share. Certainly Amazon and the other webscale providers have used price-drop announcements to signal that they are in the provider market for good. One can expect them to increase the frequency of those announcements to communicate to other providers, both current and potential, that this is a highly competitive market with unrelenting price pressure, and that unless a provider is prepared to make a huge capital investment with an unknown outcome (i.e., whether it will be one of the ultimate winners or just another loser that retires after throwing away untold millions of dollars), it might be better to withdraw from, or never enter, the market. We’ve already seen a couple of providers (e.g., GoDaddy) announce that they will leave the CSP market, and a couple of others announce a renewed focus on “their core markets” (e.g., hosting), with a clear implication that they will de-emphasize cloud computing. Essentially, the public providers will force or frighten other entrants from the market; once the easy marks are out of the game, the AMG players will turn on one another with a vengeance.
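Here is the minimal sketch promised above, with entirely made-up numbers, showing how the first two rules interact: any price above variable cost is worth taking, and higher utilization lowers the break-even price per instance-hour.

```python
# Illustrative only: marginal-cost pricing plus fixed-cost amortization,
# using made-up numbers for a hypothetical pool of 10,000 instances.

VARIABLE_COST_PER_INSTANCE_HOUR = 0.01   # assumed power, bandwidth, wear per hour
FIXED_COST_PER_HOUR = 1000.0             # assumed hourly share of capex, staff, facilities
POOL_SIZE = 10_000

def accept_offer(price):
    """Rule 1: take any price that covers variable cost and contributes to fixed cost."""
    return price > VARIABLE_COST_PER_INSTANCE_HOUR

def breakeven_price(utilization):
    """Rule 2: fixed cost is spread over the instance-hours actually sold."""
    sold_per_hour = POOL_SIZE * utilization
    return VARIABLE_COST_PER_INSTANCE_HOUR + FIXED_COST_PER_HOUR / sold_per_hour

print(accept_offer(0.02))   # True: covers variable cost and contributes something to fixed cost
for utilization in (0.30, 0.60, 0.90):
    print(f"utilization {utilization:.0%}: break-even ${breakeven_price(utilization):.4f}/instance-hour")
```

Seen this way, spot instances look like rule 1 applied to capacity that would otherwise sit idle, and reserved instances like rule 2: buying predictable utilization up front.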

This has been a relatively lengthy blog post, because I think price, despite all of the discussions, is an underappreciated aspect of cloud computing. To my mind, it is the most important characteristic of cloud computing, and the one all other characteristics flow from or are enabled by. As I noted in this blog post, everyone talks about how cloud computing enables agility, but the reality is that cloud computing makes agility affordable. Absent its cost advantages, its ability to enable users to more rapidly access resources and roll out applications would be meaningless.

We’re still early enough in the cloud computing revolution that the realities of its economic impact on running IT are not yet fully understood. As the new pricing levels and expectations are better comprehended, we will see an enormous shift in IT infrastructure investment and in where applications are deployed. The next ten years will see more dislocation in IT than all the previous platform revolutions combined — and I may be understating the revolution.

The Second Machine Age and Cloud Computing

by Bernard Golden on February 23, 2014


I’ve just finished the extremely interesting “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,” written by two MIT-affiliated academics, Erik Brynjolfsson (@erikbryn) and Andrew McAfee (@amcafee). The book asserts that we are in the second stage of the Industrial Revolution — the first revolution was based on physical machine manifestations like steam engines, electrical motors, and mechanized farming, while the second is based on a digital revolution, providing innovations like genetic analysis, self-driving cars, and automated language translation.

In a sense, the book is a response to a school of economic thought, made most prominent by Robert Gordon, who studies economic productivity and asserts that we face a long-term decline in productivity growth, based on the exhaustion of the first Industrial Revolution and an assessment that the benefits of the second are much lower than those of the first. A gloss on this theory might be: the first Industrial Revolution brought us this:

[Image: Salk polio vaccine newspaper headlines]

The second brought us the ability to watch Miley Cyrus twerking.

OK, that may overstate the case, but the core of the perspective is that, despite hundreds of billions (if not trillions) of dollars invested in computer hardware, software, and services, there is little evidence of a significant productivity increase due to computerization — and the fact that there’s no productivity increase is a big deal, because that’s the main driver of improved living standards. To this end, the authors quote Nobel Prize-winning economist Paul Krugman, who said “Productivity isn’t everything, but in the long run it’s nearly everything.”

The authors disagree with Gordon (as does another well-known economic historian, Joel Mokyr, with whom I had dinner in October; I wrote this about our conversation). Their perspective is that we are just beginning the digital revolution, and that it will have effects on our society (and our productivity) as profound as the Industrial Revolution — that we are, in fact, in the second machine age. They assert that the digital revolution is so powerful due to three factors:

  1. Exponential growth in computing power. Moore’s Law is well known, but less appreciated is that, given the 40+ years of its progression, it is now operating “on the back half of the chessboard,” which is to say the exponential effect is huge because the doubling is no longer from four to eight, but from 524,288 to 1,048,576, and the next generation goes from 1,048,576 to 2,097,152. In other words, each generation’s jump in power is now enormous (the short sketch after this list shows the arithmetic). Consequently, while self-driving cars were impossible to envision even a decade ago, the incredible increase in processing power means they exist today and will be commonplace in a decade.
  2. Digitization of everything. The authors quote Marc Andreessen’s “Software is eating the world,” which means that every industry, and every company in every industry, is infusing or extending or wrapping products and services in digitized capability. The rise of genomics and the potential for drugs prescribed to address the specific genetic makeup of a patient is one example of this. And yes, another example is being able to watch Miley Cyrus twerk, since digital video is easy to capture and cheap to distribute, unlike the old world of analog film stocks and limited distribution outlets. I have seen this myself: my recent YouTube video on the new AWS Kinesis service has been viewed over 1500 times since going up a couple of months ago. In a year’s time, over 10,000 people will have watched the video and seen my perspective on why I think it’s such a revolutionary service. I could never have shared my thoughts on video before the 21st century — it would have been difficult technically and unaffordable economically. Today it’s easy and costs nothing.
  3. The combination of different large digital capabilities. In essence this can be summarized as: smart people will put stuff together and develop new offerings that none of us can predict, and those offerings will transform our lives. An example is polymerase chain reaction, a method for replicating DNA sequences. The inventor, Kary Mullis, didn’t invent PCR from whole cloth; instead, he combined several existing and well-known technologies to create a revolution in genetic analysis. So, put simply, factors one and two, shaken and stirred together in the minds of millions of people, will result in world-changing products and services — a perspective of which another economist, Julian Simon, would have wholeheartedly approved.
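A tiny illustration of the chessboard arithmetic from item 1 (grains of rice per square, where square n holds 2^(n-1) grains):

```python
# "Back half of the chessboard": square n holds 2**(n-1) grains of rice.
# Early doublings are modest; doublings past square 20 are enormous.
for square in (3, 4, 20, 21, 22):
    print(f"square {square:2d}: {2 ** (square - 1):>9,} grains")
```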

Another reason they believe the computer productivity paradox is misguided has to do with the digital revolution’s relative youth (after all, we’ve only gotten through the front half of the chessboard!). They note that it often takes time for new technologies to diffuse and improve productivity, citing the extended period it took for factory design to change when factory power shifted from steam to electricity. Traditional factories had been built around a large, central steam engine. The engine drove large shafts, which in turn drove the pulleys and belts that distributed its power to machine tools. Because of the shafts’ limitations, machine tools requiring large amounts of torque had to be located near the engine; this led factories to place their largest tools in a central location and to be designed and built in a three-dimensional arrangement, so that the short shafts capable of handling large torque could drive the largest number of machines.

When electric motors, which could be operated efficiently in multiple, much smaller sizes, came onto the scene, factory designers continued to install very large electric motors in a central location and place the largest machine tools nearby. It was only 30 years later, according to the authors, when a new generation of factory managers who could appreciate the different capabilities that electric power enabled came into power, that the design of factories changed: small electric motors were distributed throughout the factory, and the productivity this arrangement enabled began to be realized.

It was this example that brought cloud computing to my mind. The authors, of course, discuss cloud computing in their book and give it its due. In terms of their three factors, cloud computing falls into factor three — the combination of already existing technologies: virtualization combined with automated software operation. However, I think it’s hard for someone not working directly in IT to appreciate how profound a change cloud computing represents in computing practices, and how profoundly it will enable factors one and two to advance — because most people outside IT don’t understand how much friction there is in using IT, and how that friction retards the use of IT to solve problems.

To offer one example: in a recent meeting, the infrastructure manager of a very large financial institution noted that they were moving to nearly universal virtualization, “now that the guy who used to manage the databases and insisted that they had to run on bare metal because virtualization couldn’t perform has retired.” Has retired! In other words, this organization was stuck at the productivity level of the 1970s, with people installing and configuring software by hand, because this one person didn’t believe virtualization worked well enough. Now that he’s gone, the organization can get on with using virtualization throughout the stack and achieve the efficiencies it brings. Interestingly, with respect to cloud computing, the manager said that his company was not going to try to implement a private cloud, because he believed there was no way his company could run a cloud as cheaply as the largest providers — in other words, he didn’t believe he could operate an automated environment as efficiently as specialists like Amazon, Microsoft, or Google.

As I wrote in my last CIO blog, until recently the profound friction of traditional IT processes hasn’t really been much of an issue, because IT infrastructure was so unmalleable. It was so much work to install and configure hardware and infrastructure that using it efficiently, in terms of scalable and easily modified applications, was a second-order problem. Today, cloud computing removes infrastructure friction to enable agile applications — assuming that IT personnel learn how to use cloud computing effectively and restructure their work practices to take advantage of its characteristics.

We’ve seen how the new model of applications based on cloud computing can result in profoundly different applications. For example, the way Cycle Computing used AWS to construct a pop-up 156,000-core computer for a drug company to do genetic analysis over a weekend. Or how Dropbox leverages AWS to enable efficient document sharing and syncing. Or, of course, how Netflix uses AWS to entertain more people than HBO.

The point is, cloud computing is in its infancy. Just as it took years — decades, even — for the new capabilities enabled by electric motors to work their way into quotidian business operations and thereby increase productivity, so too will cloud computing gradually — but unmistakably — transform our new second machine age and bring forth vast improvements in our society and our lives.

Critical implications of the 01 24 14 AWS price reduction

February 3, 2014

Amazon has a long history of reducing prices on AWS services, citing reduced service delivery costs based on infrastructure performance improvements and lower costs based on increased infrastructure purchase volumes. The AWS price reductions announced on January 24, 2014 reflect these elements, but carry further implications important to understand in the context of how Amazon […]


My Amazon 2013 Orders Analysis

January 17, 2014

  If you saw this post from July, you know that I track the number of orders we receive from Amazon. In July 2013, we received 20 shipments. In the post, I discussed the amortized cost of our Prime membership, as well as how the availability of Prime shipping has motivated us to direct more […]


AWS Kinesis Introduction Tutorial

December 17, 2013

AWS has introduced a real-time event processing service called Kinesis. Amazon’s description of Kinesis: “A fully managed service for real-time processing of streaming data at massive scale.” Put another way, Kinesis is designed to support applications that generate enormous numbers of events that an application or organization wants to store, analyze, or operate upon. […]


AWS vs. CSPs: Software Infrastructure

November 28, 2013

In my last post I discussed Amazon Web Services’ approach to its hardware infrastructure, noting that Amazon appears to view its hardware infrastructure quite differently from other CSPs. In that post, I said: Amazon, however, appears to hold the view that it is not operating an extension to a well-established industry, in which effort is […]


AWS vs. CSPs: Hardware Infrastructure

November 20, 2013

    At the 2013 AWS Re:invent conference, the company put its data center guru, James Hamilton, on stage to discuss how AWS designs and operates its data center infrastructure. In his fascinating presentation, Hamilton described some of the things AWS does in its hardware infrastructure, all in the service of scale and efficiency. It’s […]


My Dinner with Joel (Mokyr)

October 25, 2013

  I had the privilege of having dinner with Joel Mokyr yesterday evening. Joel is the real deal — an economic historian who is a Northwestern professor and is, perhaps, best known for his enlightening book, The Lever of Riches, which examines this question: “How did the astounding technological progress of the industrial revolution come about?” […]


AWS Improves Reserved Instances

October 12, 2013

One of the biggest issues potential AWS users raise is its cost. AWS is convenient, they’ll admit, but renting by the hour can really add up. One way AWS addresses this issue is reserved instances (called, for convenience in the rest of this post, RI) — in return for an upfront fee, Amazon reduces the […]


What the healthcare.gov website debacle really tells us

October 11, 2013

An endless amount of ink (electrons?) has been spilled about the performance and availability problems healthcare.gov has experienced with the launch of the Affordable Care Act (ACA), aka Obamacare. The site could not stand up to the traffic it experienced on October 1, the first day it was possible to enroll for coverage. Digital […]
