Building Information Technology Liquidity

Creative Commons. Courtesy of Flickr. By Ze'ev Barkan

Turbulent markets offer companies both challenges and opportunities. But with rigid and aging IT infrastructures, it’s hard for companies to turn on a dime and respond to fluctuations in supply and consumer demand. A corporate culture built on agile principles helps, but companies really need to build information technology “liquidity” to meet global disturbances head-on.

Liquidity is a term often used in financial markets. When markets are deep and liquid, assets can be exchanged or sold at a moment’s notice with very little price fluctuation. In liquid markets, participants usually have the flexibility to buy or sell a position very rapidly, using cash or another accepted financial instrument.

Companies with liquid assets—such as lots of cash—can take advantage of market opportunities like picking up ailing competitors cheaply, or buying out inventory that another competitor desperately needs. Liquidity, then, allows companies to take advantage of unplanned scenarios, and in some cases, to stay afloat when other companies are failing!

In the same way, IT organizations desperately need to embrace the concept of “liquidity”—not by having extra cash lying around, but by creating agile and flexible infrastructures that can take advantage of unplanned demand. This is especially hard when an estimated 75% of the IT budget is already spent on maintaining legacy infrastructure.

Even worse, IT capacity planning efforts are often based on simple linear regression models or other quick-and-dirty heuristics that don’t account for huge spikes in demand, such as those from a major corporate merger or a “one-hit wonder” product.
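To see why a simple linear model falls short, the sketch below fits a least-squares trend to twelve months of invented demand figures and extrapolates one month ahead. The numbers are purely illustrative, but the point generalizes: a trend line built on calm history has no way to anticipate a spike.

```python
# Hypothetical illustration: a linear trend fitted to steady historical
# demand badly underestimates a sudden spike (e.g. a merger or hit product).
def linear_forecast(history, periods_ahead):
    """Ordinary least-squares line over periods 0..n-1, extrapolated forward."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Twelve months of gently rising compute demand (arbitrary units)
history = [100, 102, 104, 107, 109, 111, 114, 116, 118, 121, 123, 125]
forecast = linear_forecast(history, 1)   # the model expects modest growth
actual_spike = 400                       # an unplanned event triples demand
shortfall = actual_spike - forecast

print(f"forecast: {forecast:.0f}, actual: {actual_spike}, shortfall: {shortfall:.0f}")
```

The fitted line predicts roughly 128 units for the next month; the spike leaves the plan short by well over twice the forecast itself, which is exactly the gap that “liquid” spare capacity is meant to absorb.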

Companies need to build a “liquid” information technology capability that can respond quickly to market and competitive agitations. Richard Villars, Vice President at IDC, says that in building liquidity, IT must “enable variable workloads, handle the data explosion, and (be able to promptly) partner with the business (when unplanned opportunities arise)”.

What are some examples of IT liquidity? One scenario could be extra compute and storage available on-premises and reserved for unplanned demand. These resources could be “hidden” from the business by throttling back CPU for example, and then “released” when needed.

A second scenario might be having contracts signed and cloud resources at the ready on a moment’s notice to “burst into” extra processing when required. A third option could be using outside service contractors on a retainer model basis to provide a ready set of skills when your IT staff is crunched with too many extra projects.
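The “burst into” scenario above amounts to simple trigger logic. Here is a minimal sketch of what such a trigger might look like; the thresholds, readings, and scale-out/scale-in actions are illustrative assumptions, not any real provider’s API.

```python
# Hypothetical cloud-burst trigger: when on-premises utilization crosses a
# threshold, pre-contracted cloud capacity is requested; when demand
# subsides, the burst capacity is released to stop pay-per-use charges.
BURST_UP_THRESHOLD = 0.85    # assumed: burst out above 85% utilization
BURST_DOWN_THRESHOLD = 0.60  # assumed: release capacity below 60%

def burst_decision(utilization, burst_active):
    """Return 'scale_out', 'scale_in', or 'hold' for one utilization reading."""
    if utilization >= BURST_UP_THRESHOLD and not burst_active:
        return "scale_out"   # call on the pre-negotiated cloud capacity
    if utilization <= BURST_DOWN_THRESHOLD and burst_active:
        return "scale_in"    # hand the burst capacity back
    return "hold"

readings = [0.55, 0.72, 0.91, 0.88, 0.70, 0.50]
actions = []
burst_active = False
for u in readings:
    action = burst_decision(u, burst_active)
    if action == "scale_out":
        burst_active = True
    elif action == "scale_in":
        burst_active = False
    actions.append(action)
    print(f"utilization={u:.2f} -> {action}")
```

Note the two different thresholds: the gap between them (hysteresis) keeps the trigger from thrashing in and out of the cloud on every small fluctuation.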

In the financial world, liquid assets allow companies to react to and capitalize on market opportunities. Liquidity in IT means that companies have enough extra compute firepower and people resources, and are agile enough with IT processes, to respond to unplanned events and demand in whatever shape, form or order they arrive.

Building resistance to market disruptions is an essential capability—in some cases to thrive, and in others simply to survive.

Adapting to Winds of Change with Cloud

Look around at global economic conditions. More than skirmishes—near flat-out war—in Ukraine, Gaza, Iraq and Syria. China pushing up GDP numbers by loading local provinces with more debt. European economies on the mend, but not yet turning the corner. Fickle western consumers more preoccupied with the latest smartphone app than the new product you’re selling. It’s in stressful economic conditions like these that you need to make sure your business can cycle capacity up or down when needed. You need cloud computing.

According to an article in the Financial Times, during the World Cup, Ghana’s authorities had to “import 50 megawatts of energy from neighboring Ivory Coast” just to keep televisions on during the national soccer team’s games. Fortunately, Ivory Coast had enough spare electricity to sell; had Ghanaian authorities not figured out a way to meet the demand of thousands of televisions, there might have been riots in the streets.

Just like Ghanaian authorities, many businesses are unprepared for volatile capacity needs and capricious consumers who want what they want, now. That’s why enterprises that not only have a cloud computing strategy, but also the ability to quickly deploy cloud resources, will ultimately fare better than those still trying to spell “c-l-o-u-d”.

This means having a documented information architecture that includes cloud, signed agreements with providers, an understanding of the applications, databases and file systems needed, security policies in place, applications written and ready to take advantage of cloud resources, data loading strategies (VPN or dedicated circuit?), processes to scale cloud resources up and down (and triggers for when to do so), data governance for onsite and cloud systems, business continuity plans and more.

There’s much work to do before you can take advantage of cloud resources, and just-in-time planning doesn’t cut it. With the flexibility, speed and power that cloud offers, there’s really no excuse to let opportunities to capture unplanned demand pass you by.

Can you ramp up and down based on erratic business conditions? Can you weather economic fluctuations? Are you flexible enough to point resources towards unmet consumer demand?  Can you quickly adapt to global winds of change? Cloud computing infrastructures are ready. Are you?

It’s Time to Ditch Scarcity Thinking

Image courtesy of Flickr.  By SolidEther

In J.R.R. Tolkien’s “The Hobbit,” Smaug the magnificent dragon sits on his nearly unlimited hoard of treasure and coins and tells “burglar” Bilbo Baggins to “help (himself) again, there’s plenty and to spare.” While it’s certainly true there are many things in this world that are physically scarce, when it comes to living in the information age, we need to retrain our minds to ditch scarcity thinking and instead embrace “sky’s the limit” abundance.

Most of us have been taught that there are constraints on resources such as time, talent, and natural assets like land and fresh water. And of course, there are very real limits to some of these. However, we now live in an information age, and in this era some of our previous thought patterns no longer apply.

Take, for instance, the ability to have an ocean of knowledge at our fingertips. With non-networked computers and other devices, we’re limited to the data at hand, or the storage capacity of those devices. But add in a dash of wired or wireless networking and suddenly the physical limits to knowledge disappear.

Apple’s Siri technology is a compelling case in point. Using only the available processing power of an iPhone (which, by the way, is considerable), Siri could arguably answer a limited number of questions based on data in flash storage.

But open up Siri’s natural language processing (the bulk of which is done in the cloud) and suddenly if Siri can’t understand you, or doesn’t know an answer, the web may provide assistance. By leveraging cloud computing and access to the internet, Siri brings a wealth of data to users, and even more intelligence to Apple by capturing all queries “in the cloud” and offering an immense data set for programmers to tune and improve Siri’s capabilities.

It used to be that TV airtime was in short supply. After all, there are only so many channels and airtime programming slots for content, especially during primetime hours. And there’s still an arduous process to create, discover and produce quality content that viewers will want to watch during these scarce blocks of time.

Without regard to conventional thinking, YouTube is turning this process on its head. A New Yorker article details how YouTube is growing its market presence by offering unlimited “channels” that can be played on demand, anytime and anywhere. “On YouTube, airtime is infinite, content costs almost nothing for YouTube to produce and quantity, not quality is the bottom line,” explains author John Seabrook. Content watching, then (whether via YouTube, Netflix, DVR, Slingbox, etc.), is no longer constricted to certain hours; in effect, time is no longer a constraint.

In the past, the music we liked was confined to physical media such as records or compact discs. Then MP3 players such as the iPod expanded our capacity to listen to more music, but we were still confined to available device storage. That’s scarcity thinking. Now, with wireless networking, there are few limits to listening to our preferred music through streaming services such as Pandora, or renting music instead of owning it on CD. Indeed, subscription services are becoming the dominant model for how music is “acquired”.

There are still real limits to many valuable things in the world (e.g. time, talent, money, physical resources, and even human attention spans). Yet even some of these are artificially constrained by politics or today’s business cases.

The information age has brought persons, businesses and societies elasticity, scalability, and the removal of many earlier capacity constraints. We seem to be sitting squarely on Smaug’s unending stack of treasure. But even the great Smaug had a gaping vulnerability. We’ll still need prudence, intelligence and far-sighted thinking in this age of abundance, understanding that just because some of our constraints are removed doesn’t mean we should become gluttonous and wasteful in our use of today’s resources.


Debunking Five Cloud Computing Myths

For the third year in a row, cloud computing is one of the top three technology investments for CIOs. However, there are many misconceptions about “the cloud”. Indeed, in my travels through public speaking sessions and corporate training seminars on cloud computing, I have encountered five common myths or misconceptions. It’s high time to debunk them.

Myth #1: “The Cloud” Is Just One Big Cloud

With the exception of Amazon Web Services, which is constantly expanding its data center presence, there is no single cloud of record. Companies and vendors are standing up cloud computing infrastructures which they make available to the public or to internal stakeholder audiences such as employees or suppliers.

In fact, there are hundreds if not thousands of “clouds” in the United States alone (especially when one considers private cloud infrastructures). For example, on the software vendor side, Oracle, HP, IBM, Teradata (full disclosure: the author works for Teradata Corporation) and others are building and maintaining their own clouds. And of course there are B2C “clouds” such as iCloud and Dropbox. So the next time someone says, “I’m performing analytics in the cloud”, you may wish to ask “which one?”

Myth #2: One Day Soon, All Our Data Will Be in the Public Cloud

Many cloud experts and prognosticators believe the march to public cloud computing infrastructures—for everyone, corporations and consumers alike—is inevitable. Reasons for this line of thinking range from the growing size and complexity of data volumes (i.e. who can afford all this storage?) to the belief that public cloud providers can monitor, manage and secure IT infrastructures better and cheaper than individual companies.

While I don’t doubt that public cloud computing will take more market share in the future, I am under no illusion that one day soon all data will be stored in the public cloud—mainly because of bandwidth costs for data transport and the costs of doing all your processing on a pay-per-use basis. And of course, recent government snooping revelations make it easy to predict that plenty of data will stay right where it’s currently located.

Myth #3: Cloud is Cheaper than On-Premises Computing

This particular myth is a big misconception to overcome. Corporate buyers hear the word “cloud” and assume it equates to cheaper IT costs. This may be true on a low-utilization basis—meaning you only plan on using compute power infrequently—but on a full-utilization basis you’ll most likely pay more for computing on a pay-per-use basis than for maintaining your own IT infrastructure and applications. For a deeper discussion on this topic, see “Rent vs. Buy? The Cloud Conundrum”.

Myth #4: Cloud Computing Means Someone Else Now Has My IT Headaches

Of course, while moving your workloads to “the cloud” means that another vendor—that “someone else”—is responsible for monitoring, maintaining and supporting the information technology infrastructure, it certainly doesn’t mean your IT headaches go away. In fact, while you may no longer have day-to-day responsibility for availability, software and hardware upgrades and more, you never really lose complete responsibility for IT.

Instead, your day is now consumed with vendor, contract (SLA) and incident management, workload balancing, application development (for the cloud), and security items such as roles, profiles, authentication processes and more. Long story short: you don’t abdicate responsibility for IT when you move workloads to the cloud.

Myth #5: If It’s Not Multi-Tenant, It’s Not Cloud

I hear this particular comment quite a bit. Really, the person suggesting this “truth” is stating that the real beauty of cloud computing is taking a bunch of commodity hardware, virtualizing it, and pooling resources to keep costs down for everyone. To be sure, resource pooling is a key criterion for cloud computing, but virtualization software isn’t the only route to success (i.e. workload management might fit the bill just fine).

In addition, while multi-tenant most commonly means “shared”, it’s important to define how many components of a cloud infrastructure you’re actually willing to share. To be sure, economies of scale (and lower end-user prices) can result from a cloud vendor spreading the costs of physical buildings, power, floor space, cooling, physical security systems and personnel, racks, a cloud operations team and more. But I’ll also mention that there are customers I’ve talked to who have zero intention of sharing hardware resources—mostly for security and privacy reasons.

These are just five cloud computing myths I’ve come across. There are certainly more that I failed to mention. And perhaps you don’t agree with my efforts to debunk some of these themes? Please feel free to comment; I’d love to hear from you!

Text Analytics for Tracking Executive Hubris?

Courtesy of Flickr. By NS Newsflash.

The next audacious “off the cuff” statement your CEO makes could tank your company’s stock price in minutes. That’s because machines are increasingly analyzing press conferences, earnings calls and more for “linguistic biomarkers” and possibly placing sell orders accordingly. Indeed, with technology’s ability to mine the speech patterns of corporate, political, and social leaders, the old proverb “a careless talker destroys himself” rings truer than ever.

Financial Times columnist Gillian Tett writes about how researchers are starting to analyze corporate and political speech for signs of hubris. By analyzing historical speeches alongside current speeches from today’s leaders, researchers are identifying “markers of hubris”, points where a particular leader may be getting a bit too full of his or her own accomplishments.

Such communications, according to experts in Tett’s article, increasingly feature words such as “I”, “me” and “sure”, tell-tale signs of leaders starting to believe their own hype. Consequently, if such “markers of hubris” can be reliably identified, they could indicate to stakeholders that it’s time to take a course of action (e.g. liquidating a given stock position).

Now as you can imagine, there are challenges with this approach. The first difficulty is in identifying which linguistic markers equate to hubris—an admittedly subjective process. The second challenge is establishing hubris as a negative trait. In other words, should increasing hubris and/or narcissism mean that the executive has lost touch with reality? Or that he or she is incapable of driving even better results for their company, agency or government? Surely, the jury is still out for these questions.

Today’s technology has made endeavors such as text mining of executive, political and other communications much more feasible en masse. Streaming technologies can enable near real time analysis, map-reduce type operators can be used for word counts and text analysis, and off the shelf sentiment applications can discern meaning and intent on a line-by-line basis.
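As a toy illustration of the word-counting approach described above, the sketch below tallies first-person and certainty words as a share of all words spoken. The marker list and sample sentences are invented for illustration; real research would need a validated lexicon and far more context.

```python
import re
from collections import Counter

# Illustrative "markers of hubris": first-person pronouns and certainty
# words, per the markers named in Tett's article. This list and any score
# threshold are assumptions, not a validated linguistic model.
HUBRIS_MARKERS = {"i", "me", "my", "sure", "certainly"}

def hubris_score(speech: str) -> float:
    """Fraction of words in the speech that are hubris markers."""
    words = re.findall(r"[a-z']+", speech.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    marker_hits = sum(counts[m] for m in HUBRIS_MARKERS)
    return marker_hits / len(words)

humble = "Our team delivered these results together this quarter."
boastful = "I did this. I am sure my vision, and me alone, made it happen."

print(f"humble: {hubris_score(humble):.2f}")
print(f"boastful: {hubris_score(boastful):.2f}")
```

The same counting logic scales naturally to the streaming and map-reduce style processing mentioned above, since per-speech word counts can be computed independently and merged.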

When computers are tuned to pore over executive speeches, corporate communications, press interviews and more, such analysis could ultimately indicate whether a company is prone to “excessive optimism”, and help investors and other stakeholders punch through the hype of corporate speak. To the naked ear, the speech patterns of executives, politicians and other global players probably change little over time. However, if data scientists can run current and past communications through text analytics processes, interesting and actionable patterns may emerge.

The largest challenge in analyzing executive hubris doesn’t appear to be standing up a technology infrastructure for analytics, especially when cloud based solutions are available. Nor does the actual sentiment analysis seem to be the sticking point, because with enough data scientists, such algorithms can be tweaked for accuracy over time.

The ultimate difficulty is deciding what—if anything—to do when analyses of a leader’s speech patterns reveal hubris. As an employee, does this mean it’s time to look for another job? As an investor, does this mean it’s time to sell? As a citizen, does this mean it’s time to pressure the government to change course—and if so, how? All good questions for which there are few clear answers.

Regardless, with computers reading the news, it’s more important than ever for leaders of all stripes to be cognizant that stakeholders are watching and acting on their words—often in near real time.

Writer Gillian Tett says that we humans “instinctively know, but often forget, that power not only goes to the head, but also to the tongue.” With this in mind, leaders in political, business and social circles need to understand that when it comes to signs of arrogance, we’ll not only be watching and listening, but counting too.

The Dirty (Not so Secret) Secret of IT Budgets

Courtesy of Flickr. By Val.Pearl

Some business users believe that every year IT is handed a budget which is then fully used to drive new and productive ways to enable more business. This is, however, far from reality. In fact, in most instances the lion’s share of the IT budget is dedicated to supporting legacy code and systems. With so precious few IT dollars left to support experimentation with new technologies, it’s easy to see why pay-per-use cloud computing options are so alluring.

There’s an interesting story in the Financial Times in which author Gillian Tett discusses how, in western economies, most of the dollars lent by banks go to supporting existing assets, not innovation. The problem is highlighted by former UK regulator Adair Turner, who notes that of every dollar of credit “created” by UK banks, only 15% of financial flows go into “productive investment”. The other 85% goes to supporting “existing corporate assets, real estate and unsecured personal finance.”

Essentially, there are fewer dollars lent by banks for innovative projects, startups, and new construction with most of the monies dedicated to maintaining existing assets. Sounds a lot like today’s IT budgets.

As evidence, a Capgemini report notes: “Most organizations do not have a clear strategy for retiring legacy applications and continue to spend up to three quarters of their IT budgets just ‘keeping the lights on’ – supporting outdated, redundant and sometimes entirely obsolete systems.” If this “75%” statistic is accurate (and there is evidence that it is), it leaves very little funding for high-potential projects like mobile, analytics, and designing new algorithms that solve business problems.

Here’s where cloud computing can make a difference. Cloud infrastructures, platforms and applications often allow users to try before they buy, with very little risk. Users can test applications, explore new functions and features, experiment with data, and stand up analytic capabilities with much less fuss than via traditional IT routes. Best of all, much of this experimentation can be funded with operating budgets instead of going through the painful process of asking the CFO for a CAPEX check.

Speaking of innovation, the real value of cloud isn’t just that information infrastructure is ready and scalable, but what you can use it for. Take, for example, the use of cloud-based analytics to drive business value: sniffing out fraud in banking and online payment systems, exploring relationships between customers and products, optimizing advertising spend, analyzing warranty data to produce higher-quality products, and many more types of analyses.

These kinds of analytics stretch far beyond the mundane “keeping the lights on” mindset that IT is sometimes associated with, and can really show line-of-business owners that IT can be more than just a “game manager”; it can be a “play-maker”.

Fortunately, the modernization of legacy systems is a top priority for CIOs. But much like turning an aircraft carrier, it’s going to take time to make a complete switch from maintaining legacy systems to supporting innovative use cases.

But in the meantime, with cloud, there’s no need to wait to experiment with the latest technologies and/or try before you buy. Cloud infrastructures, platforms and applications are waiting for you. The better question is, are you ready to take advantage of them?

Rent vs. Buy? The Cloud Conundrum

Courtesy of Flickr. By IntelFreePress

Over the long run, is cloud computing a waste of money? Some startups and other “asset-lite” businesses seem to think so. However, for specific use cases, cloud computing makes a lot of sense—even over the long haul.

A Wired Magazine article emphasizes how some Silicon Valley startups are migrating from public clouds to on-premises deployments. Yes, read that again. Cash poor startups are saying “no” to the public cloud.

On the whole this trend seems counterintuitive. That’s because it’s easy to see how capital-disadvantaged startups would be enchanted with public cloud computing: little to no startup costs, no IT equipment to buy, no data centers to build, and no software licensing costs. Thus, for startups, public cloud computing makes sense for all sorts of applications, and it’s easy to see why entrepreneurs would start—and then stick—with public clouds for the foreseeable future.

However, after an initial “kick the tires” experience, various venture capital sponsored firms are migrating away from public clouds.

The Wired article cites how some startups are leaving the public cloud for their own “fleet of good old fashioned computers they could actually put their hands on.” That’s because, over the long run, it’s generally more expensive to rent computer resources than to buy them. The article mentions how one tech startup “did the math” and came up with internal annual costs of $120K for the servers it needed, vs. $320K in public cloud costs.

For another data point, Forbes contributor Gene Marks cites how six of his clients analyzed the costs of public cloud vs. an on-premises installation, monitored and managed by a company’s own IT professionals. The conclusion? Overall, it was “just too expensive” for these companies to operate their workloads in the public cloud as opposed to capitalizing new servers and operating them on a monthly basis.

Now, to be fair, we need to make sure we’re comparing apples to apples. For an on-premises installation, hardware server costs may be significantly less over the long run, but it’s also important to include costs such as power, floor space, cooling, and the employee costs of monitoring, maintaining and upgrading equipment and software. In addition, there are sometimes “hidden” costs: employees spending cycles procuring IT equipment, capacity-sizing efforts, and the hassle of going through endless internal capitalization loops with the Finance group.

Thus, cloud computing still makes a lot of financial sense, especially when capacity planning cycles aren’t linear, when there is need for “burst capacity”, or even when there is unplanned demand (as there often is with fickle customers). And don’t forget about use cases such as test and development, proof of concept, data laboratory environments and disaster recovery.

Another consideration is resource utilization. As I have stated before, if you plan on using IT resources for a brief period of time, cloud computing makes a lot of sense. Conversely, if you plan on operating IT resources at 90-100% utilization on a continual, year-round basis, it probably makes sense to acquire and capitalize IT assets instead of choosing “pay per use” cloud computing models.
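A back-of-the-envelope comparison makes the utilization point concrete. Every figure below is an invented assumption, not real provider pricing, but the crossover pattern it shows is the general shape of the rent-vs-buy decision.

```python
# Back-of-the-envelope rent-vs-buy comparison. All dollar figures and the
# three-year depreciation schedule are illustrative assumptions.
def annual_cloud_cost(rate_per_hour, avg_utilization):
    """Pay-per-use cost: you pay only for the hours you actually run."""
    return rate_per_hour * 24 * 365 * avg_utilization

def annual_owned_cost(capex, years_of_life, annual_opex):
    """Straight-line depreciation plus power, space, cooling, staff time."""
    return capex / years_of_life + annual_opex

cloud_rate = 2.50   # assumed $/hour for the needed capacity
capex = 30_000      # assumed server purchase, depreciated over 3 years
opex = 8_000        # assumed annual power/cooling/admin overhead

results = {}
for utilization in (0.10, 0.50, 0.95):
    cloud = annual_cloud_cost(cloud_rate, utilization)
    owned = annual_owned_cost(capex, 3, opex)
    cheaper = "cloud" if cloud < owned else "owned"
    results[utilization] = cheaper
    print(f"utilization {utilization:.0%}: cloud ${cloud:,.0f} "
          f"vs owned ${owned:,.0f} -> {cheaper}")
```

With these assumed numbers, cloud wins easily at low utilization, while near-continuous utilization tips the math toward owned hardware—and a real comparison would also need to fold in the hidden costs discussed above.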

Ultimately, the cloud rent vs. buy decision comes down to more than just the price of servers. Enterprises should be careful to understand their use cases for cloud vs. on premises IT. In addition, watch for hidden costs in your TCO calculation that underestimate how much time and effort it really takes to get an IT environment up, running and performing.

