Building Information Technology Liquidity

Turbulent markets offer companies both challenges and opportunities. But with rigid and aging IT infrastructures, it’s hard for companies to turn on a dime and respond to fluctuations in supplies and consumer demand. A corporate culture built on agile principles helps, but companies really need to build information technology “liquidity” to meet global disturbances head on.

Creative Commons. Courtesy of Flickr. By Ze'ev Barkan

Liquidity is a term often used in financial markets. When markets are deep and liquid, assets can be exchanged or sold at a moment's notice with very little price fluctuation. In liquid markets, participants usually have the flexibility to buy or sell a position very rapidly, using cash or another accepted financial instrument.

Companies with liquid assets—such as lots of cash—can take advantage of market opportunities, like picking up ailing competitors cheaply or buying out inventory that another competitor desperately needs. Liquidity, then, allows companies to capitalize on unplanned scenarios and, in some cases, to stay afloat when other companies are failing!

In the same way, IT organizations desperately need to embrace the concept of "liquidity"—not by keeping extra cash lying around, but by building agile, flexible infrastructures that can absorb unplanned demand. This is especially hard when an estimated 75% of the IT budget is already spent maintaining legacy infrastructure.

Even worse, IT capacity planning efforts are often based on simple linear regression models or other quick-and-dirty heuristics that don't account for huge spikes in demand, such as a major corporate merger or a "one-hit wonder" product.
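To see why a straight-line forecast breaks down, here is a minimal sketch in Python; the demand figures are illustrative assumptions, not measurements from any real system:

    # A straight-line capacity forecast vs. an unplanned demand spike.
    import numpy as np

    months = np.arange(12)                      # one year of history
    demand_tb = 100 + 5 * months                # steady ~5 TB/month growth
    slope, intercept = np.polyfit(months, demand_tb, 1)

    forecast_18 = slope * 18 + intercept        # model says ~190 TB by month 18
    spike = demand_tb[-1] * 2.5                 # a merger multiplies demand overnight

    print(f"Linear forecast for month 18: {forecast_18:.0f} TB")
    print(f"Demand after a merger-style spike: {spike:.0f} TB")
    # The regression cannot anticipate the spike, so planned capacity
    # falls far short of actual need.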

Companies need to build a "liquid" information technology capability that can respond quickly to market and competitive agitations. Richard Villars, Vice President at IDC, says that in building liquidity, IT must "enable variable workloads, handle the data explosion, and (be able to promptly) partner with the business (when unplanned opportunities arise)."

What are some examples of IT liquidity? One scenario could be extra compute and storage available on-premises and reserved for unplanned demand. These resources could be "hidden" from the business, by throttling back CPU for example, and then "released" when needed.
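As a minimal sketch of that "throttle and release" idea, assuming a Linux host with cgroups v2 and a pre-created control group named "reserve-pool" (both assumptions for illustration), the cap can be set and lifted by writing to the group's cpu.max file:

    # Hide and release reserve CPU capacity via Linux cgroups v2.
    # Assumes /sys/fs/cgroup/reserve-pool exists and the script runs as root.
    CPU_MAX = "/sys/fs/cgroup/reserve-pool/cpu.max"

    def hide_reserve_capacity():
        # Cap the group at 25% of one CPU (25 ms of every 100 ms period).
        with open(CPU_MAX, "w") as f:
            f.write("25000 100000")

    def release_reserve_capacity():
        # Lift the cap so pent-up demand can use the reserved cores.
        with open(CPU_MAX, "w") as f:
            f.write("max 100000")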

A second scenario might be having contracts signed and cloud resources at the ready, able to "burst into" extra processing the moment it's required. A third option could be keeping outside service contractors on retainer to provide a ready set of skills when your IT staff is crunched with too many extra projects.
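As a rough sketch of the burst scenario, flexing a pre-provisioned pool might look like the following with AWS Auto Scaling and boto3; the group name and sizes are hypothetical, and the contracts, machine images and launch configuration are assumed to be in place ahead of time:

    # "Burst into" pre-contracted cloud capacity, then scale back down.
    import boto3

    autoscaling = boto3.client("autoscaling")

    def set_pool_size(desired: int) -> None:
        # Resize a pre-provisioned Auto Scaling group; the readiness work
        # (contracts, images, networking) is the "liquidity" part.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName="burst-pool",   # hypothetical group name
            DesiredCapacity=desired,
            HonorCooldown=False,
        )

    set_pool_size(20)  # unplanned demand arrives
    set_pool_size(2)   # back to the baseline reserve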

In the financial world, liquid assets allow companies to react to and capitalize on market opportunities. Liquidity in IT means that companies have enough spare compute firepower, enough people, and IT processes agile enough to respond to unplanned events and demand, in whatever shape, form or order they arrive.

Building resistance to market disruptions, and combating them when they arrive, is an essential capability—in some cases to thrive and in others simply to survive.

Adapting to Winds of Change with Cloud

Look around at global economic conditions. More than skirmishes—near flat-out war—in Ukraine, Gaza, Iraq and Syria. China pushing up GDP numbers by loading local provinces with more debt. European economies on the mend, but not yet turning the corner. Fickle Western consumers more preoccupied with the latest smartphone app than with the new product you're selling. It's in stressful economic conditions like these that you need to make sure your business can cycle capacity up or down when needed. You need cloud computing.

According to an article in the Financial Times, during the World Cup Ghana's authorities had to "import 50 megawatts of energy from neighboring Ivory Coast" just to keep televisions on during the national soccer team's games. Fortunately, Ivory Coast had enough spare electricity to sell, because there might have been riots in the streets had Ghanaian authorities not found a way to meet the demand of thousands of televisions.

Just like Ghana's authorities, many businesses are unprepared for volatile capacity needs and capricious consumers who want what they want, now. That's why enterprises that not only have a cloud computing strategy but also the ability to deploy cloud resources at a moment's notice will ultimately fare better than those still trying to spell "c-l-o-u-d".

This means having an information architecture documented that includes cloud; signed agreements with providers; an understanding of the applications, databases and file systems needed; security policies in place; applications written and ready to take advantage of cloud resources; data-loading strategies (VPN or dedicated circuit?); processes to scale cloud resources up and down (and triggers for when to do so); data governance for on-site and cloud systems; business continuity plans; and more.

There's much work to do before you can take advantage of cloud resources, and just-in-time planning doesn't cut it. With the flexibility, speed and power that cloud offers, there's really no excuse for letting opportunities to capture unplanned demand pass you by.

Can you ramp up and down based on erratic business conditions? Can you weather economic fluctuations? Are you flexible enough to point resources towards unmet consumer demand?  Can you quickly adapt to global winds of change? Cloud computing infrastructures are ready. Are you?

Are You Using Tricolons in Your Rhetoric?

If you're a presenter, or simply someone wanting to convey information in a memorable way, you have probably used the rule of three, whether inadvertently or intentionally. The rule of three is a teaching, writing or presenting device in which a key concept is broken into three easy-to-remember pieces. Does the rule of three apply to the fields of technology and business? Let's dive a little deeper to find out.

By Don McCullough. Creative Commons. Courtesy of Flickr.

Financial Times columnist Sam Leith offers executives a few hints on how to make business presentations and documents more interesting. He says that by using a rhetorical device called a “tricolon”, anyone looking to influence or persuade can make their ideas easier to consume and comprehend.

What's a good example of a tricolon? How about Thomas Jefferson's prose in the US Declaration of Independence, where he writes: "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." Notice the tricolon, "life, liberty and the pursuit of happiness", and how easy it is to forget the first part of the sentence and remember the second. Why is this?

Leith advances the concept that humans accept and retain information better when the Rule of Three is used.  “For reasons that remain neurologically obscure, the human mind adores things in groups of three: tricolons sound strong, memorable and coherent,” he says.

Tricolons are found in all types of rhetoric from political speeches to children’s books. Take a look at this gem in Quentin Blake’s Angelica Sprocket’s Pockets:

      “There’s a pocket for mice,”

      “and a pocket for cheese”

      “and a pocket for hankies in case anyone feels that they’re going to sneeze”

Here we have three pockets, but we mostly remember what is supposed to go in them, namely mice, cheese and hankies.

We can use this rhetorical device in our business presentations and messaging to make our key points stick. For example, most readers of this column know that I have marketing duties for Teradata Cloud.

While there are many compelling aspects of this particular solution, I've boiled them down to "fast, flexible and powerful": deployment in the cloud is faster than you'd expect, the solution is flexible enough to meet your needs for a little or a lot of analytic capability, and it is powerful, with three analytic engines available. While it's terribly tempting to create a longer checklist of all the benefits of this solution, I've intentionally limited myself to only three (and arguably even these require more refinement!).

Want to make your next presentation more compelling? An added effect of the tricolon is that it lends rhythm to our discourse. Rhythmically, we can use tricolons to break up the monotony of an otherwise bland presentation (especially the kind technology executives are prone to deliver!).

Going forward, let's be sure to use more tricolons (i.e., the Rule of Three) in our training materials, internal presentations, customer whitepapers, conference presentations and more. I'm pretty sure that by doing so, we'll end up much more interesting, memorable, and effective.

Beware Big Data Technology Zealotry

Undoubtedly you’ve heard it all before: “Hadoop is the next big thing, why waste your time with a relational database?” or “Hadoop is really only good for the following things” or “Our NoSQL database scales, other solutions don’t.” Invariably, there are hundreds of additional arguments proffered by big data vendors and technology zealots inhabiting organizations just like yours. However, there are few crisp binary choices in technology decision making, especially in today’s heterogeneous big data environments.

Courtesy of Flickr. Creative Commons. By Eden, Janine, and Jim.

Teradata CTO Stephen Brobst has a great story regarding a Stanford technology conference he attended. Apparently, in one session there were "shouting matches" between relational database and Hadoop fanatics as to which technology would better serve customers going forward. Mr. Brobst wasn't amused, concluding: "As an engineer, my view is that when you see this kind of religious zealotry on either side, both sides are wrong. A good engineer is happy to use good ideas wherever they come from."

Considering various technology choices for your particular organization is a multi-faceted decision-making process. For example, suppose you are investigating a new application and/or database for a mission-critical job. Let's also suppose your existing solution is working "good enough." However, industry pundits, bloggers and analysts are hyping and luring you toward the next big thing in technology. At this point, alarm bells should be ringing. Let's explore why.

First, for companies that are not start-ups, the idea of ripping and replacing an existing and working solution should give every CIO and CTO pause. The use cases enabled by this new technology must significantly stand out.

Second, unless your existing solution is fully depreciated (for on-premises, hardware-based solutions), you're going to have a tough time getting past your CFO. Regardless of your situation, you'll need compelling calculations for TCO, IRR and ROI.
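For illustration, here is a minimal sketch of that back-of-envelope math; every dollar figure is an assumption invented for the example:

    # Back-of-envelope TCO, ROI and IRR for a proposed replacement platform.
    def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
        # Bisection on net present value; cashflows[0] is the upfront cost.
        def npv(rate):
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
        return (lo + hi) / 2

    upfront = -500_000                    # licenses plus migration (assumed)
    annual_benefit = 180_000              # yearly savings minus run costs (assumed)
    flows = [upfront] + [annual_benefit] * 5

    tco = -upfront + 5 * 60_000           # purchase plus five years of support
    roi = (sum(flows[1:]) + upfront) / -upfront
    print(f"5-year TCO: ${tco:,.0f}  ROI: {roi:.0%}  IRR: {irr(flows):.1%}")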

Third, you will need to investigate whether your company has the skill sets to develop and operate this new environment, or whether they are readily available from outside vendors.

Fourth, consider your risk tolerance or appetite for failure. If this new IT project fails, will it be considered a "drop in the bucket," or could it take down the entire company?

Finally, consider whether you’re succumbing to technology zealotry pitched by your favorite vendor or internal technologist. Oftentimes in technology decision making, the better choice is “and”, not “either”.

For example, more companies are adopting a heterogeneous technology environment for unified information where multiple technologies and approaches work together in unison to meet various needs for reporting, dashboards, visualization, ad-hoc queries, operational applications, predictive analytics, and more. In essence, think more about synergies and inter-operability, not isolated technologies and processes.

In counterpoint, some will argue that technology capabilities increasingly overlap, and with a heterogeneous approach companies might be paying for some features twice. It is true that lines are blurring regarding technology capabilities as some of today’s relational databases can accept and process JSON (previously the purview of NoSQL databases), queries and BI reports can run on Hadoop, and “discovery work” can complete on multiple platforms. However, considering the maturity and design of various competing big data solutions, it does not appear—for the immediate future—that one size will fit all.

When it comes to selecting big data technologies, objectivity and flexibility are paramount. You’ll have to settle on technologies based on your unique business and use cases, risk tolerance, financial situation, analytic readiness and more.

If your big data vendor or favorite company technologist lacks a toolbox mentality and multi-faceted perspective, and instead seems to employ a "to a hammer, everything looks like a nail" approach, you might want to look elsewhere for a competing point of view.

When Ideology Reigns Over Data

Increasingly, the mantra of "let the data speak for themselves" is falling by the wayside, and ideology promotion is zooming down the fast lane. There are dangers to reputations, companies and global economies when researchers and/or statisticians see what they want to see despite the data or, worse, gently massage data to get "the right results."

Courtesy of Flickr. By Windell Oskay

Economist Thomas Piketty is in the news. After publishing his treatise "Capital in the Twenty-First Century," Mr. Piketty was lauded by world leaders, fellow economists, and political commentators for bringing data and analysis to the perceived problem of growing income inequality.

In his book, Mr. Piketty posits that while wealth and income were grossly unequally distributed through the industrial revolution era, the advent of World Wars I and II changed the wealth dynamic, as tax increases helped pay for war recovery and social safety nets. Then, after the early 1970s, Piketty claims, his data once again show the top 1-10% of earners taking more than their fair share. In Capital, Piketty's prescriptions to remedy wealth inequality include an annual tax on capital and harsh taxation of up to 80% for the highest earners.

In this age of sharing and transparency, Mr. Piketty received acclaim for publishing his data sets and Excel spreadsheets for the entire world to see. However, this bold move could also prove to be his downfall.

The Financial Times, in a series of recent articles, claims that Piketty’s data and Excel spreadsheets don’t exactly line up with his conclusions. “The FT found mistakes and unexplained entries in his spreadsheet,” the paper reports. The articles also mention that a host of “transcription errors”, “incorrect formulas” and “cherry-picked” data mar an otherwise serious body of work.

Once all the above errors are corrected, the FT concludes: "There is little evidence in Professor Piketty's original sources to bear out the thesis that an increasing share of total wealth is held by the richest few." In other words, ouch!

Here's part of the problem: while income data are somewhat hard to piece together, wealth data for the past 100 years are even harder to find because of data quality and collection issues. As such, the data are bound to be of dubious quality and/or incomplete. In addition, it appears that Piketty could have used some friends to check and double-check his spreadsheet calculations to spare him the Kenneth Rogoff/Carmen Reinhart treatment.

In working with data, errors come with the territory, and hopefully they are minimal. There is a more serious issue for any data worker, however: seeing what you want to see, even if the evidence says otherwise.

For example, Nicolas Baverez, a French economist, raised issues with Piketty's data collection approach and "biased interpretation" of those data long before the FT report. Furthermore, Baverez thinks that Piketty had a conclusion in mind before he analyzed the data. In the magazine Le Point, Baverez writes: "Thomas Piketty has chosen to place himself under the shadow of (Karl Marx), placing unlimited accumulation of capital in the center of his thinking."

The point of this particular article is not to knock down Mr. Piketty, nor his lengthy and well-researched tome. Indeed, we should not be so dismissive of Mr. Piketty's larger message that there appears to be an increasing gap between haves and have-nots, especially in terms of exorbitant CEO pay, stagnant middle-class wages, and a reduced safety net for the poorest Western citizens.

But Piketty appeared to have a solution in mind before he found a problem. He readily admits: "I am in favor of wealth taxation." When ideology drives any data-driven approach, it becomes just a little easier to discard data, observations and evidence that don't exactly line up with what you're trying to prove.

In 1977, statistician John W. Tukey said: "The greatest value of a picture is when it forces us to notice what we never expected to see." Good science is the search for causes and explanations, sans dogma, with a willingness to accept outcomes contrary to our initial hypothesis. If we want true knowledge discovery, there can be no other way.

Privacy Ramifications of IT Infrastructure Everywhere

Most people don't notice that information technology pervades our daily lives. Granted, some IT infrastructure is out in the open and easy to spot, such as the computer and router on your desk hooked up via network cables. However, plenty of IT infrastructure is nearly invisible, residing in locked network rooms or heavily guarded data centers. And some is bundled underneath city streets, arrayed on rooftops, or even camouflaged as trees at the local park. Let's take a closer look at a few ramifications of IT infrastructure everywhere.

Courtesy of Flickr. By Jonathan McIntosh.

1.  Technology is pervasive and commonplace in our daily lives. Little is seen, much is hidden.

Good news: Companies have spent billions of dollars investing in wired and wireless connections that span cities, countries and oceans. This connectivity has enabled companies to ship work to lower-cost providers in developing countries, and certain IT projects to "follow the sun" and thus finish faster. IT infrastructure everywhere also makes it that much easier for police forces and/or governments to identify and prosecute perpetrators of crime.

Bad news: This same IT infrastructure can also be used to monitor and analyze where and how people gather, what they say, their relationships, how they vote, their religious and political views and more. Closed-circuit TV cameras on street corners (or concealed as mailboxes), ATMs, POS systems, red-light cameras, and drones make up a pervasive and possibly invasive infrastructure that never sleeps. You may be free to assemble; however, IT infrastructure might be watching.

2.  Some information technology is affordable or, in some cases, "free," but the true costs may be hidden.

Good news: Google's G+ and Gmail, Facebook, and Yahoo's portal and email services are no- to low-cost for consumers and businesses. In addition, plenty of cloud providers such as Amazon, Google and Dropbox offer a base level of storage for documents or photos with no upfront hard-dollar cost. On the surface, it appears we are getting something for practically nothing.

Bad news: There's no such thing as a free lunch, as Janet Vertesi, assistant professor of sociology at Princeton, can attest. For months she tried to hide her pregnancy from Big Data, but she realized that Facebook, Google and other free "services" were watching her every post, email, and interaction in search of ways to advertise and sell her something. While she was not paying a monthly fee for these online services, there was in fact a "cost": Vertesi was exchanging her online privacy for advertisers' ability to better target her and serve appropriate advertising.

3. IT infrastructure is expected to be highly available. Smartphones, internet access and computers are simply expected to work and to be immediately available for use.

Good news: With IT infrastructure, high availability (four to five 9's) is the name of the game; anything less doesn't cut it. Cloud services from IaaS to SaaS are expected to stay up and running, and phone networks are expected to have enough bandwidth to support our phone calls and web browsing, even at busy sporting events. And for the most part, IT infrastructure delivers, time and again, meeting consumers' and businesses' expectation that technology is highly available.
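For a sense of what those 9's buy, the downtime arithmetic is simple:

    # Allowed downtime per year at three, four and five 9's of availability.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{nines} nines ({availability:.3%}): "
              f"{downtime:,.1f} minutes of downtime per year")

    # 3 nines -> ~526 minutes (~8.8 hours); 4 nines -> ~53; 5 nines -> ~5.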

Bad news: Not only is IT infrastructure always on, but because of Moore's Law and the plummeting cost of disk, it never forgets. For example, when disk and tape space was expensive, closed-circuit TV systems would record a day's worth of coverage and then write over it the next day. Now, multiple cameras can record 30 days of surveillance on an 80 GB hard drive. And we haven't even mentioned offsite or cloud storage, which makes it possible to store audio, video, documents, photos, call detail records and more, essentially forever. Youthful transgressions can be published for all time, and today's mistakes are recorded for years to come. The internet never forgets, unless you live in the European Union.
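The storage arithmetic behind that 80 GB figure is easy to check; here is a quick sketch, with the bitrate an assumed value for a modest, motion-compressed surveillance feed:

    # How many days of video fit on an 80 GB drive at a modest bitrate?
    bitrate_kbps = 256                    # assumed low-resolution camera feed
    seconds_per_day = 24 * 60 * 60

    gb_per_day = bitrate_kbps * 1000 / 8 * seconds_per_day / 1e9
    days_on_80_gb = 80 / gb_per_day
    print(f"{gb_per_day:.1f} GB/day -> about {days_on_80_gb:.0f} days on 80 GB")
    # ~2.8 GB/day, or roughly a month of footage; cloud storage removes
    # even that ceiling.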

In the book Sorting Things Out, Geoffrey C. Bowker and Susan Leigh Star use the term "infrastructural inversion" for the act of focusing on invisible systems, how they work, and how "people can change this invisibility when necessary." IT infrastructure is one such system, permeating our daily lives, often unseen but ever so critical to our societies.

There are undoubtedly other ramifications to this unseen IT infrastructure. Here’s hoping you’ll join the conversation with your thoughts!

The High Cost of Low Quality IT

In times of tight corporate budgets, everyone wants “a deal.” But there is often a price to be paid for low quality, especially when IT and purchasing managers aren’t comparing apples to apples in terms of technology capability or experienced implementation personnel.  Indeed, focusing on the lowest “negotiated price” is a recipe for vendor and customer re-work, delayed projects, cost overruns and irrecoverable business value.

Courtesy of Flickr. By Rene Schwietzke

Financial Times columnist Michael Skapinker recently lamented the terrible quality of his dress shirts. In years prior, his shirts would last two to three years. Lately, however, his shirts, laundered once a week, last only three months.

Of course, this equates to a terrible hit to Mr. Skapinker's clothing budget, not to mention the environmental costs of producing, packaging, and discarding sub-standard clothing. Consumers, Skapinker says, should "start searching out companies that sell more durable clothes. They may cost more, but should prove less expensive in the long run."

Much as it's short-sighted to buy low-quality shirts that don't last very long, it's also very tempting to select the low-cost provider for technology or implementation, especially if they meet today's immediate needs. The mindset, then, is that tomorrow can worry about itself.

This myopic thinking is exacerbated by the rise of the procurement office. Today's procurement offices are highly motivated by cost control; in fact, some are given explicit goals to keep costs down. This, of course, can be dangerous, because in this model procurement professionals have little to no "skin in the game": if something goes wrong with the IT implementation, procurement bears none of the damage.

Now, to be fair, some procurement offices are more strategic and are involved in IT lifecycle processes. From requirements, to request for proposal, to final sign-off on the deal, procurement works hand-in-hand with IT the entire time. In this model, the procurement department (and IT) wants the best price, of course, but it is also looking for the best long-term value. However, the cost-conscious procurement department seems to be gaining momentum, especially in this era of skimpy corporate budgets.

Ultimately, technology purchases and implementations aren't like buying widgets. A half-baked solution full of "second-choice" technologies may end up being unusable for end users, especially over a prolonged period of time. And cut-rate implementations that are seriously delayed or over budget can translate into lost revenues and/or delayed time to market.

When evaluating information technology (especially for new solutions), make sure to compare specs to specs, technical capabilities to capabilities, and implementation expertise to expertise.

Some questions to consider: Is there a 1:1 match between each vendor's technologies? Will the technical solution implemented today scale to business users' needs next year, or in three years? What does the technology support model look like, and what are the initial versus long-term costs? Is the vendor itself supporting the product, or has it outsourced support to a third party?

For the implementation vendor, make sure to evaluate personnel, service experience, customer references, methodologies, and overall capabilities. Also be wary of low service prices, as some vendors arrive at cut rates by dumping a school bus of college graduates on your project (who, of course, learn on your dime!). The more complex your project, the more you should insist on experienced service companies.

A discounted price may initially look like a bargain, but low quality carries a cost. If you're sold on a particular (higher-priced) technology or implementation vendor, don't let procurement talk you out of it. And if you cannot answer the questions listed above with confidence, it's likely that the bargain price you're offered by technology or implementation vendor X is really no bargain at all.
