Building Information Technology Liquidity

Creative Commons. Courtesy of Flickr. By Ze'ev Barkan

Turbulent markets offer companies both challenges and opportunities. But with rigid and aging IT infrastructures, it’s hard for companies to turn on a dime and respond to fluctuations in supplies and consumer demand. A corporate culture built on agile principles helps, but companies really need to build information technology “liquidity” to meet global disturbances head on.

Liquidity is a term often used in financial markets. When markets are deep and liquid, they have assets that can be exchanged or sold at a moment’s notice with very little price fluctuation. In liquid markets, participants usually have the flexibility to buy or sell a position very rapidly, using cash or another accepted financial instrument.

Companies with liquid assets—such as lots of cash—can take advantage of market opportunities like picking up ailing competitors cheaply, or buying out inventory that another competitor desperately needs. Liquidity, then, allows companies to take advantage of unplanned scenarios, and in some cases—to stay afloat when other companies are failing!

In the same way, IT organizations desperately need to embrace the concept of “liquidity”—not by having extra cash lying around, but by creating agile and flexible infrastructures that can take advantage of unplanned demand. This is especially hard when an estimated 75% of the IT budget is already spent maintaining legacy infrastructure.

Even worse, IT capacity planning efforts are often based on simple linear regression models or other quick-and-dirty heuristics that don’t account for huge spikes in demand, such as a major corporate merger or a “one-hit wonder” product.
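
To make the problem concrete, here’s a minimal Python sketch of a least-squares capacity forecast blindsided by a merger; the storage figures are invented for illustration:

```python
# Sketch: why a simple linear-regression capacity forecast misses a demand
# spike. The monthly storage figures below are illustrative, not real data.

months = [1, 2, 3, 4, 5, 6]
usage_tb = [100, 110, 121, 130, 141, 150]  # steady ~10 TB/month growth

# Ordinary least-squares fit by hand (no libraries needed).
n = len(months)
x_mean = sum(months) / n
y_mean = sum(usage_tb) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(months, usage_tb))
         / sum((x - x_mean) ** 2 for x in months))
intercept = y_mean - slope * x_mean

forecast_month_9 = slope * 9 + intercept  # the model's month-9 prediction
actual_month_9 = 150 + 3 * 10 + 160       # trend growth plus a merger's 160 TB

print(f"forecast: {forecast_month_9:.0f} TB, actual: {actual_month_9} TB")
```

A model fit on steady history plans for roughly 180 TB in month nine, while the merger pushes real demand near 340 TB—the spike simply isn’t in the trend line.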

Companies need to build a “liquid” information technology capability that can respond quickly to market and competitive agitations. Richard Villars, Vice President at IDC, says that in building liquidity, IT must “enable variable workloads, handle the data explosion, and (be able to promptly) partner with the business (when unplanned opportunities arise).”

What are some examples of IT liquidity? One scenario could be extra compute and storage available on-premises, reserved for unplanned demand. These resources could be “hidden” from the business, for example by throttling back CPU allocations, and then “released” when needed.

A second scenario might be having contracts signed and cloud resources at the ready so you can “burst into” extra processing on a moment’s notice. A third option could be using outside service contractors on a retainer basis to provide a ready set of skills when your IT staff is crunched with too many projects.
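
The “burst” trigger in the second scenario can be sketched as a simple threshold policy; the thresholds and the one-node-at-a-time model here are illustrative assumptions, not a vendor API:

```python
# Sketch of a cloud-burst trigger: above a utilization threshold, add
# pre-contracted cloud capacity; below a lower threshold, release it.
# The thresholds and single-node steps are illustrative choices.

BURST_THRESHOLD = 0.85    # start bursting above 85% on-prem utilization
RELEASE_THRESHOLD = 0.60  # release cloud nodes once load falls below 60%

def burst_decision(utilization: float, cloud_nodes: int) -> int:
    """Return the desired cloud node count for the current utilization."""
    if utilization > BURST_THRESHOLD:
        return cloud_nodes + 1        # scale out into the cloud
    if utilization < RELEASE_THRESHOLD and cloud_nodes > 0:
        return cloud_nodes - 1        # scale back in to control spend
    return cloud_nodes                # hold steady between the thresholds

# A demand spike triggers bursting; capacity is released as load subsides.
nodes = 0
for load in [0.70, 0.90, 0.95, 0.75, 0.50]:
    nodes = burst_decision(load, nodes)
print(nodes)  # one cloud node still held at the end of this trace
```

The gap between the two thresholds acts as a hysteresis band, so capacity isn’t thrashed on every small fluctuation in load.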

In the financial world, liquid assets allow companies to react to and capitalize on market opportunities. Liquidity in IT means that companies have enough extra compute firepower and people resources, and are agile enough with IT processes, to respond to unplanned events and demand in whatever shape, form or order they arrive.

The ability to withstand and combat market disruptions is essential—in some cases to thrive, and in others, simply to survive.

Adapting to Winds of Change with Cloud

Look around at global economic conditions. More than skirmishes—near flat-out war—in Ukraine, Gaza, Iraq and Syria. China pushing up GDP numbers by loading local provinces with more debt. European economies on the mend, but not yet turning the corner. Fickle western consumers more preoccupied with the latest smartphone app than the new product you’re selling. It’s in stressful economic conditions like these that you need to make sure your business can cycle capacity up or down when needed. You need cloud computing.

According to an article in the Financial Times, during the World Cup, Ghana’s authorities had to “import 50 megawatts of energy from neighboring Ivory Coast” just to keep televisions on during Ghana’s National Soccer team’s games. Fortunately, Ivory Coast had enough spare electricity to sell to Ghana, because there might have been riots in the streets had Ghanaian authorities not figured out a way to meet the demands of thousands of televisions.

Just like Ghanaian authorities, many businesses are unprepared for volatile capacity needs and capricious consumers who want what they want, and now.  That’s why enterprises that not only have a cloud computing strategy, but the ability to quickly deploy cloud resources on a whim, will ultimately fare better than those still trying to spell “c-l-o-u-d”.

This means having a documented information architecture that includes cloud; signed agreements with providers; an understanding of the applications, databases and file systems needed; security policies in place; applications written and ready to take advantage of cloud resources; data loading strategies (VPN or dedicated circuit?); processes to scale cloud resources up and down (and triggers for when to do so); data governance for onsite and cloud systems; business continuity plans; and more.

There’s much work to do before you can take advantage of cloud resources, and just-in-time planning doesn’t cut it. With the flexibility, speed and power that cloud offers, there’s really no excuse to let opportunities to capture unplanned demand pass you by.

Can you ramp up and down based on erratic business conditions? Can you weather economic fluctuations? Are you flexible enough to point resources towards unmet consumer demand?  Can you quickly adapt to global winds of change? Cloud computing infrastructures are ready. Are you?

It’s Time to Ditch Scarcity Thinking

Image courtesy of Flickr.  By SolidEther

In J.R.R. Tolkien’s “The Hobbit,” Smaug the magnificent dragon sits on his nearly unlimited hoard of treasure and coins and tells “burglar” Bilbo Baggins to “help (himself) again, there’s plenty and to spare.” While it’s certainly true there are many things in this world that are physically scarce, when it comes to living in the information age, we need to retrain our minds to ditch scarcity thinking and instead embrace “sky’s the limit” abundance.

Most of us have been taught there are resource constraints for things such as time, talent and natural items such as land, fresh water and more. And of course, there are very real limits to some of these items. However, we currently live in an information age. And in this era, some of our previous thought patterns no longer apply.

Take, for instance, the ability to have an ocean of knowledge at our fingertips. With non-networked computers and other devices, we’re limited to the data at hand, or the storage capacity of those devices. But add in a dash of wired or wireless networking and suddenly the physical limits to knowledge disappear.

Apple’s Siri technology is a compelling case in point. Using only the available processing power of an iPhone (which, by the way, is considerable), Siri could arguably answer a limited number of questions based on data in flash storage.

But open up Siri’s natural language processing (the bulk of which is done in the cloud) and suddenly if Siri can’t understand you, or doesn’t know an answer, the web may provide assistance. By leveraging cloud computing and access to the internet, Siri brings a wealth of data to users, and even more intelligence to Apple by capturing all queries “in the cloud” and offering an immense data set for programmers to tune and improve Siri’s capabilities.
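
The local-first, cloud-fallback pattern described here can be sketched in a few lines; `ask_cloud` and the canned answers are hypothetical stand-ins, not Apple’s actual implementation:

```python
# Sketch of the local-first / cloud-fallback pattern: answer from on-device
# data when possible, otherwise escalate to a remote service. `ask_cloud` is
# a hypothetical stand-in for a real network round-trip.

LOCAL_ANSWERS = {"set a timer": "Timer set.", "what time is it": "3:00 PM"}

def ask_cloud(query: str) -> str:
    # Placeholder for a call to a cloud natural-language service.
    return f"(cloud answer for: {query})"

def answer(query: str) -> str:
    """Try on-device knowledge first; fall back to the cloud otherwise."""
    local = LOCAL_ANSWERS.get(query.lower())
    return local if local is not None else ask_cloud(query)

print(answer("Set a timer"))        # resolved on-device
print(answer("who won the match"))  # escalated to the cloud
```

The side effect is the one the paragraph notes: every escalated query lands server-side, building exactly the kind of data set that lets engineers keep improving the service.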

It used to be that TV airtime was in short supply. After all, there are only so many channels and airtime programming slots for content, especially during primetime hours. And there’s still an arduous process to create, discover and produce quality content that viewers will want to watch during these scarce blocks of time.

Without regard to conventional thinking, YouTube is turning this process on its head. A New Yorker article details how YouTube is growing its market presence by offering unlimited “channels” that can be played on-demand, anytime and anywhere. “On YouTube, airtime is infinite, content costs almost nothing for YouTube to produce and quantity, not quality is the bottom line,” explains author John Seabrook. Content watching, then (whether via YouTube, Netflix, DVR, Slingbox, etc.), is no longer constricted to certain hours, and in effect time is no longer a constraint.

In the past, the music we liked was confined to physical media such as records or compact discs. Then MP3 players such as the iPod expanded our ability to listen to more music, but we were still confined to available device storage. That’s scarcity thinking. Now, with wireless networking, there are few limits to listening to our preferred music through streaming services such as Pandora, or renting music instead of owning it on CD. Indeed, music subscription services are becoming the dominant model for how music is “acquired”.

There are still real limits to many valuable things in the world (e.g. time, talent, money, physical resources, and even human attention spans). Yet even some of these items are artificially constrained by either politics or today’s business cases.

The information age has brought persons, businesses and societies elasticity, scalability, and the removal of many earlier capacity constraints. We seem to be sitting squarely on Smaug’s unending stack of treasure. But even the great Smaug had a gaping vulnerability in his armor. We’ll still need to use prudence, intelligence and far-sighted thinking in this age of abundance, with the understanding that just because some of our constraints are removed, it doesn’t necessarily mean we should become gluttonous and wasteful in our use of today’s resources.


Debunking Five Cloud Computing Myths

For the third year in a row, cloud computing is one of the top three technology investments for CIOs. However, there are many misconceptions about “the cloud”. Indeed, in my travels through public speaking sessions and corporate training seminars on cloud computing, I have encountered five common myths or misconceptions. It’s high time to debunk them.

Myth #1: “The Cloud” is Just One Big Cloud

With the exception of Amazon Web Services, which is constantly expanding its data center presence, there is no single cloud of record. Companies and vendors are standing up cloud computing infrastructures, which they make available to the public or to internal stakeholder audiences such as employees or suppliers.

In fact, there are hundreds if not thousands of “clouds” in the United States alone (especially when one considers private cloud infrastructures). For example, on the software vendor side, Oracle, HP, IBM, Teradata (full disclosure: the author works for Teradata Corporation) and others are building and maintaining their own clouds. And of course there are B2C “clouds” such as iCloud and Dropbox. So the next time someone says, “I’m performing analytics in the cloud,” you may wish to ask, “which one?”

Myth #2: One Day Soon, All Our Data Will Be in the Public Cloud

Many cloud experts and prognosticators believe the march to public cloud computing infrastructures—for corporations and consumers alike—is inevitable. Reasons for this line of thinking range from the growing size and complexity of data volumes (i.e. who can afford all this storage?) to the belief that public cloud providers can monitor, manage and secure IT infrastructures better and more cheaply than individual companies.

While I don’t doubt that public cloud computing will take more market share in the future, I certainly am under no illusion that one day soon all data will be stored in the public cloud—mainly because of bandwidth costs for data transport and the costs of doing all your processing on a pay-per-use basis. And of course, recent government snooping revelations make it easy to predict that plenty of data will stay right where it’s currently located.

Myth #3: Cloud is Cheaper than On-Premises Computing

This particular misconception is a big one to overcome. Corporate buyers hear the word “cloud” and assume it equates to cheaper IT costs. This may be true on a low-utilization basis—meaning you only plan on using compute power infrequently—but on a full-utilization basis you’ll most likely pay more for computing on a pay-per-use basis than for maintaining your own IT infrastructure and applications. For a deeper discussion on this topic, see “The Cloud Conundrum: Rent or Buy?”
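
A rough break-even sketch makes the point; the dollar figures are illustrative assumptions, not vendor prices:

```python
# Break-even sketch for Myth #3: pay-per-use beats owning only below a
# certain utilization. All figures are illustrative assumptions.

OWNED_ANNUAL_COST = 120_000   # servers + power + staff, fully loaded ($/yr)
CLOUD_RATE_PER_HOUR = 40      # equivalent capacity, pay-per-use ($/hr)
HOURS_PER_YEAR = 8_760

def cheaper_option(utilization: float) -> str:
    """Compare annual cloud spend at a given utilization to owning outright."""
    cloud_cost = CLOUD_RATE_PER_HOUR * HOURS_PER_YEAR * utilization
    return "cloud" if cloud_cost < OWNED_ANNUAL_COST else "on-premises"

break_even = OWNED_ANNUAL_COST / (CLOUD_RATE_PER_HOUR * HOURS_PER_YEAR)
print(f"break-even utilization: {break_even:.0%}")
print(cheaper_option(0.10))  # infrequent use: cloud wins
print(cheaper_option(0.90))  # near-constant use: owning wins
```

With these assumed numbers the crossover sits around one-third utilization; the exact threshold will move with real pricing, but the shape of the decision doesn’t.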

Myth #4: Cloud Computing Means Someone Else Now Has My IT Headaches

Of course, while moving your workloads to “the cloud” means that another vendor—that “someone else”—is responsible for monitoring, maintaining and supporting the information technology infrastructure, it certainly doesn’t mean that your IT headaches go away. In fact, while you may no longer have day-to-day responsibility for availability, software and hardware upgrades and more, you never really lose complete “responsibility” for IT.

Instead, your day is now consumed with vendor, contract (SLA) and incident management, workload balancing, application development (for the cloud), and security items such as roles, profiles, authentication processes and more. Long story short: you don’t abdicate responsibility for IT when you move workloads to the cloud.

Myth #5: If it’s not Multi-Tenant, It’s Not Cloud

I hear this particular comment quite a bit. Really, the person suggesting this “truth” is stating that the real beauty of cloud computing is taking a bunch of commodity hardware, virtualizing it, and pooling resources to keep costs down for everyone. To be sure, resource pooling is a key criterion for cloud computing, but virtualization software isn’t the only route to success (workload management might fit the bill just fine).

In addition, while multi-tenant most commonly means “shared”, it’s important to define how many components of a cloud infrastructure you’re actually willing to share. To be sure, economies of scale (and lower end-user prices) can result from a cloud vendor sharing the costs of physical buildings, power, floor space, cooling, physical security systems and personnel, racks, a cloud operations team and more. But I’ll also mention that there are customers I’ve talked to who have zero intention of sharing hardware resources—mostly for security and privacy reasons.

These are just five cloud computing myths that I’ve come across. There are certainly more that I failed to mention. And perhaps you don’t agree with my efforts to debunk some of these themes?  Please feel free to comment, I’d love to hear from you!

Rent vs. Buy? The Cloud Conundrum

Courtesy of Flickr. By IntelFreePress

Over the long run, is cloud computing a waste of money? Some startups and other “asset lite” businesses seem to think so. However, for specific use cases, cloud computing makes a lot of sense—even over the long haul.

A Wired Magazine article emphasizes how some Silicon Valley startups are migrating from public clouds to on-premises deployments. Yes, read that again. Cash poor startups are saying “no” to the public cloud.

On the whole this trend seems counterintuitive. That’s because it’s easy to see how capital-disadvantaged startups would be enchanted with public cloud computing: little to no startup costs, no IT equipment to buy, no data centers to build, and no software licensing costs. Thus, for startups, public cloud computing makes sense for all sorts of applications, and it’s easy to see why entrepreneurs would start—and then stick—with public clouds for the foreseeable future.

However, after an initial “kick the tires” experience, various venture capital sponsored firms are migrating away from public clouds.

The Wired article cites how some start-ups are leaving the public cloud for their own “fleet of good old fashioned computers they could actually put their hands on.” That’s because, over the long run, it’s generally more expensive to rent than to buy computer resources. The article mentions how one tech start-up “did the math” and came up with internal annual costs of $120K for the servers it needed, vs. $320K in public cloud costs.

For another data point, Forbes contributor Gene Marks cites how six of his clients analyzed the costs of public cloud vs. an on-premises installation, monitored and managed by a company’s own IT professionals. The conclusion? Overall, it was “just too expensive” for these companies to operate their workloads in the public cloud as opposed to capitalizing new servers and operating them on a monthly basis.

Now, to be fair, we need to make sure we’re comparing apples to apples. For an on-premises installation, hardware server costs may be significantly less over the long run, but it’s also important to include costs such as power, floor space, cooling, and the employee costs of monitoring, maintaining and upgrading equipment and software. In addition, there are sometimes “hidden” costs: employees spending cycles procuring IT equipment, efforts for capacity sizing, and the hassle of going through endless internal capitalization loops with the Finance group.
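
A toy annualized TCO calculation shows how those “hidden” costs change the picture; every figure below is an illustrative assumption, not a benchmark:

```python
# Apples-to-apples sketch: the on-premises number must include more than
# server hardware. All figures are illustrative assumptions for a small fleet.

def on_prem_annual_tco(server_capex: float, years_of_life: int = 4) -> float:
    """Annualized on-prem cost: hardware plus costs that often get left out."""
    hardware = server_capex / years_of_life  # straight-line over useful life
    power_cooling = 0.20 * hardware          # rough rule of thumb
    floor_space = 15_000                     # assumed annual facilities cost
    admin_staff = 0.25 * 120_000             # quarter of a loaded sysadmin
    return hardware + power_cooling + floor_space + admin_staff

naive = 120_000 / 4                    # hardware-only view: $30K/yr
loaded = on_prem_annual_tco(120_000)   # with power, space and people included
print(f"hardware only: ${naive:,.0f}/yr, full TCO: ${loaded:,.0f}/yr")
```

Even with these made-up inputs, the fully loaded number is more than double the hardware-only figure—which is why hardware-only comparisons flatter the on-premises option.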

Thus, cloud computing still makes a lot of financial sense, especially when capacity planning cycles aren’t linear, when there is need for “burst capacity”, or even when there is unplanned demand (as there often is with fickle customers). And don’t forget about use cases such as test and development, proof of concept, data laboratory environments and disaster recovery.

Another consideration is resource utilization.  As I have stated before, if you plan on using IT resources for a brief period of time, cloud computing makes a lot of sense. Conversely, if you plan on operating IT resources at 90-100% utilization levels, on a continual and annual basis, it probably makes sense to acquire and capitalize IT assets instead of choosing “pay per use” cloud computing models.

Ultimately, the cloud rent vs. buy decision comes down to more than just the price of servers. Enterprises should be careful to understand their use cases for cloud vs. on premises IT. In addition, watch for hidden costs in your TCO calculation that underestimate how much time and effort it really takes to get an IT environment up, running and performing.

What the Sharing Economy Means for Cloud Computing

Courtesy of Flickr. By laura_m_billings

The sharing movement is in full swing. Innovative “collaborative consumption” companies are helping pool under-utilized assets such as homes, boats and cars, and then renting them out as services. With the rise of peer-to-peer sharing, it also makes sense that cloud computing—which is compute and storage “resource pooling” and renting—would gain traction. But just as there are risks in sharing property and other assets, there are risks in sharing cloud computing infrastructures.

Jessica Scorpio of Fast Company has it right when she says, “A few years ago, no one would have thought peer-to-peer asset sharing would become such a big thing.”

Indeed, since the launch of Airbnb, more than 4 million people have rented rooms—in their own houses—to complete strangers. And in San Francisco, a new company called FlightCar offers to park and wash your car at the airport, with a catch: while you’re away on a business trip, your car is available as a “rental” to others (at half the cost of other companies).

Intrinsically, the rise of the sharing economy makes sense. Why not take underutilized assets and make them available to others temporarily, gaining higher utilization and earning extra income?

But to make a sharing economy work, trust is essential. In the case of Airbnb, homeowners must trust that the company has carefully vetted those who rent the rooms, especially when security and privacy concerns are very real. However, while there have been a few scary tales in terms of sharing homes, cars and other services, for the most part the marketplace has run smoothly.

In a similar vein, the big target on the back of cloud computing is trust. Cloud computing providers are still wrestling with perceptions that they are not as safe and trustworthy in terms of privacy, security and availability. And while it’s true that cloud providers have greatly improved in these areas, myriad surveys show there’s still significant work to do in overcoming initial perceptions that sensitive corporate data is often “lost, corrupted or accessed by unauthorized individuals”.

For both cloud computing and the sharing economy, overcoming trust issues is job one. That said, the trend towards sharing is unmistakable. Neal Gorenflo, publisher of Shareable Magazine, says, “People don’t want the cognitive load associated with owning.” The same mindset can also be attributed to global CIOs and CFOs who want someone else to do the work of capitalizing, maintaining, updating and running their IT systems in the cloud while they focus on driving business value.

Forbes estimates that in 2013, $3.5 billion will change hands in the sharing economy. We also know that cloud revenues are on a torrid trajectory. If peer-to-peer sharing and cloud computing providers can overcome trust issues, there are few constraints on how big these markets can really be.

CAPEX for IT – Why So Painful?

Courtesy of Flickr. By FCAtlantaB13

CAPEX dollars reserved for investments in plant, property or equipment (including IT) are notoriously hard to secure. In fact, IT leaders often express dismay at the process involved in not only forecasting for CAPEX needs, but then stepping through arduous internal CAPEX budget approvals. What’s all the fuss with CAPEX, and why is it so difficult to obtain?

An investment analyst says that 2013 should be a banner year for capital investments. And another analyst, Mark Zandi of Moody’s, said in late 2012, “Businesses are flush and highly competitive and this will shine through in a revival of investment spending by this time next year…”

So where’s the CAPEX? Apparently in short supply. A New York Times article says that companies are stockpiling cash, and taking on debt, but investing very little in themselves. For now, if there are significant IT investments, it appears OPEX is the preferred route.

First, let’s be clear: the CAPEX vs. OPEX debate is really about a shift in cash flows and outlays; there are few other financial advantages. Choosing one over the other is a matter of company policy: in one case (CAPEX), assets are carried on the balance sheet and depreciated, and in the other (OPEX), purchases are expensed through daily operations.
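
The accounting difference can be sketched numerically; the dollar amounts below are illustrative, not drawn from any real budget:

```python
# Sketch of the CAPEX vs. OPEX difference: the same IT capability either hits
# the P&L as straight-line depreciation of a purchased asset (CAPEX) or as a
# pay-as-you-go subscription expense (OPEX). All figures are illustrative.

def capex_depreciation(cost: float, useful_life_years: int) -> list:
    """Straight-line depreciation: equal expense each year of the asset's life."""
    return [cost / useful_life_years] * useful_life_years

def opex_expense(annual_subscription: float, years: int) -> list:
    """Subscription spend is simply expensed in the year it is incurred."""
    return [annual_subscription] * years

capex = capex_depreciation(200_000, 5)  # $40K/yr hits the income statement
opex = opex_expense(45_000, 5)          # $45K/yr subscription, no asset booked
print(capex[0], opex[0], sum(capex), sum(opex))
```

Note what the sketch omits: the CAPEX route still requires the full $200K of cash up front, which is exactly the outlay-timing difference the paragraph describes—and why CAPEX requests draw so much more scrutiny.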

Certainly, there are capital-intensive businesses such as telecoms, manufacturers and utilities that must continually invest in infrastructure. These types of companies will always spend significantly on CAPEX. On the other hand, there are companies that are CAPEX-restricted, such as start-ups, companies under the watchful eye of private equity firms, and medium-sized businesses that don’t have much CAPEX as a matter of course.

Obtaining CAPEX can also be painful for IT leaders. At the TDWI Cloud Summit in December 2012, one stage presenter in charge of IT mentioned that getting an idea from the “back of a napkin to (capitalization budget) approval” could take 18 months.

This is why cloud computing options are attractive. With cloud, companies that either have capital to spend (but don’t want to), or that are CAPEX-constrained, can take advantage of existing compute infrastructures on a subscription basis. With cloud, investments in IT capabilities are easier to digest via OPEX rather than front-loading a significant chunk of change into a business asset. And of course, there are other reasons to choose cloud computing as well, such as elastic provisioning and full resource utilization.

Regardless, it appears that for the present day, CAPEX dollars (especially for IT) will be in short supply. Perhaps this is just one of the many reasons why there’s a flurry of M&A activity in the cloud computing space?


  • Why is CAPEX so hard to come by for information technology?
  • Do you have any horror stories (post anonymously if you’d like) about trying to get CAPEX for IT? Was cloud computing an easier discussion with your CFO?
  • When will larger companies relax their CAPEX spending, or is OPEX for IT a long term trend?
