Is Your IT Architecture Ready for Big Data?

Built in the 1960s, California’s aqueduct is an engineering marvel that transports water from Northern California mountain ranges to thirsty coastal communities. But faced with a potentially lasting drought, the aqueduct is running below capacity because not enough water is flowing in from its sources. With big data, just the opposite is likely happening in your organization: too much of it, overflowing the banks and causing havoc. And it’s only getting worse.

Courtesy of Flickr. Creative Commons. By Herr Hans Gruber

The California aqueduct is a thing of beauty. As described in an Atlantic magazine article:

“A network of rivers, tributaries, and canals deliver runoff from the Sierra Mountain Range’s snowpack to massive pumps at the southern end of the San Joaquin Delta.” From there, these hydraulic pumps push water to California cities via a 444-mile aqueduct that traverses the state and empties into various local reservoirs.

You likely have something analogous to a big data aqueduct in your organization. For example, source systems emit data in various formats, which probably go through some refining process and end up in relational format. Excess digital exhaust is conceivably kept in compressed storage onsite or at a remote location. It’s a continuous process whereby data are ingested, stored, moved, processed, monitored and analyzed throughout your organization.
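
To make that refining step concrete, here is a minimal, purely illustrative Python sketch. It assumes newline-delimited JSON events landing in a hypothetical landing_zone/ directory and flattens them into relational-friendly CSV rows; the field names and paths are made up for the example.

```python
import csv
import json
from pathlib import Path

def refine_record(raw: dict) -> dict:
    """Normalize one raw event into a flat, relational-friendly row (illustrative fields)."""
    return {
        "event_id": raw.get("id"),
        "source": raw.get("source", "unknown"),
        "timestamp": raw.get("ts"),
        "payload_bytes": len(json.dumps(raw.get("payload", {}))),
    }

def ingest(source_dir: str, out_csv: str) -> None:
    """Read newline-delimited JSON event files and land them as tidy CSV rows."""
    rows = []
    for path in Path(source_dir).glob("*.json"):
        with open(path) as fh:
            rows.extend(refine_record(json.loads(line)) for line in fh if line.strip())
    if not rows:
        return
    Path(out_csv).parent.mkdir(parents=True, exist_ok=True)
    with open(out_csv, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    # Hypothetical source and destination locations.
    ingest("landing_zone/", "refined/events.csv")
```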

But with big data, there’s simply too much of it coming your way. Author James Gleick describes it this way: “The information produced and consumed by humankind used to vanish—that was the norm, the default. The sights, the sounds, the songs, the spoken word just melted away. Now expectations have inverted. Everything may be recorded and preserved, at least potentially: every musical performance; every crime in a shop, elevator, or city street; every volcano or tsunami on the remotest shore.” In short, everything that can be recorded is fair game, and likely sits on a server somewhere in the world.

The point is that the IT architecture that got us here won’t be able to handle the immense data flood coming our way without a serious upgrade in capability and alignment.

IT architecture can essentially be thought of as a view from above, or a blueprint of various structures and components and how they function together. In this context, we’re concerned with what an overall blueprint of business, information, applications and systems looks like today and what it needs to look like to meet future business needs.

We need to rethink our architectural approaches for big data. To be sure, some companies, maybe 10%, will never need to harness multi-structured data types. They may never need to dabble with or implement open source technologies. Recommending some sort of “big data” architecture for these companies is counterproductive.

However, the other 90% of companies are waking up and realizing that today’s IT architecture and infrastructure won’t meet their future needs. These companies desperately need to assess their current situation and future business needs, and then design an architecture that will deliver insights from all data types, not just those that fit neatly into relational rows and columns.

The big data onslaught will continue for the foreseeable future, and exponential data growth will only make it more intense. But here’s the challenge: the human mind tends to think linearly, so we simply don’t know how to plan for, much less capitalize on, that exponential growth. As a result, the business, information, application and systems infrastructures at most companies aren’t equipped to cope with, much less harness, the coming big data flood.

Want to be prepared? It’s important to take a fresh look at your existing IT architecture—and make sure that your data management, data processing, development tools, integration and analytic systems are up to snuff. And whatever your future plans are, consider doubling down on them.

Until convincing proof shows otherwise, it’s simply too risky not to have a well-thought-out plan for the stormy days ahead when there’s too much big data.

Changing Your Mind About Big Data Isn’t Dumb

After all the hype about big data and its technological cousin Hadoop, some CIOs are getting skittish about investing additional money in a big data program without a clear business case. Indeed, when it comes to big data it’s OK to step back and think critically about what you’re doing, pause your programs for a time if necessary, and, yes, even change your mind.

Courtesy of Flickr. Creative Commons. By Steven Depolo

Economist and former Federal Reserve chairman Alan Greenspan has changed his mind many times. In a Financial Times article, columnist Gillian Tett chronicles Greenspan’s multiple positions on the value of gold. Tett says that in his formative years, Greenspan was fascinated with the idea of the gold standard (i.e. pegging the value of a currency to a given amount of gold), but later he was a staunch defender of fiat currencies. And now, in his sunset years, Greenspan has shifted again, saying: “Gold is a currency. It is still, by all evidence, a premier currency. No fiat currency, including the dollar, can match it.”

To me at least, Greenspan’s fluctuating positions on gold reflect a mind that continually adapts to new information. Some would view Greenspan as a “waffler”, someone who cannot make up his mind. I don’t see it that way. Changing your mind isn’t a sign of weakness; rather, it shows pragmatic, adaptive thinking that evolves as market or business conditions shift.

So what does any of this have to do with big data? While big data and its associated technologies have enjoyed plenty of hype, a new reality is setting in about how hard it is to get value from big data investments.

Take, for example, a Barclays survey in which a large percentage of CIOs were “uncertain”, thus far, as to the value of Hadoop because of the ongoing costs of support, training, hiring hard-to-find operations and development staff, and the work needed to integrate Hadoop with existing enterprise systems.

In another survey of 111 U.S. data scientists, sponsored by Paradigm4, twenty-two percent of those surveyed said Hadoop and Spark were not well suited to their analytics. And in the same survey, thirty-five percent of data scientists who had tried Hadoop or Spark had stopped using them.

And earlier in the year, Gartner analyst Svetlana Sicular noted that big data had fallen into Gartner’s trough of disillusionment, commenting: “My most advanced with Hadoop clients are also getting disillusioned…these organizations have fascinating ideas, but they are disappointed with a difficulty of figuring out reliable solutions.”

With all this in mind, I think it makes sense to take a step back and assess your big data progress.  If you are one of those early Hadoop adopters, it’s a good time to examine your current program, report on results, and test against any return on investment (hard dollar or soft benefits) projections you’ve made. Or maybe you have never formalized a business case for big data? Here’s your chance to work up that business case, because future capital investments will likely depend on it.

In fact, now’s the perfect opportunity for deeper thinking about your big data investments. It’s time to go beyond big data pilots and put effort into strategies for integrating them with the rest of your enterprise systems. And it’s also time to think long and hard about how to make your analytics “consumable by the masses”, in other words, accessible to many more business users than those currently using your systems.

And maybe you are in the camp of charting a different course for big data investments. Perhaps business conditions aren’t quite right at the moment, or an executive shift warrants a six-month reprieve to focus on other core items. If this is your situation, it might not be a bad idea to let an ever-changing big data technology and vendor landscape shake out a bit before jumping back in.

To be clear, there’s no suggestion—whatsoever—to abandon your plans to harness big data. Now that would be dumb. But much like Alan Greenspan’s shifting opinions on gold, it’s also perfectly OK to re-assess your current position, and chart a more pragmatic and flexible course towards big data results.

Storytelling with the Sounds of Big Data

Trying to internally “sell” the merits of a big data program to your executive team? Of course, you will need your handy Solution Architect by your side, and a hard-hitting financial analysis vetted by the CFO’s office. But numbers and facts probably won’t make the sales pitch complete. You’ll need to appeal to the emotional side of the sale, and one way to make that connection is to incorporate the sounds of big data.

By Tess Watson. Creative Commons. Courtesy of Flickr.

There’s an interesting review of “The Sonic Boom” by Joel Beckerman in the Financial Times. In the book, Beckerman asserts that “sound is really the emotional engine for any story”—meaning that if you’re going to create a powerful narrative, it needs an element of sound.

Beckerman cites examples where sound is intentionally amplified to portray the benefits of a product or service, or even to associate a jingle with a brand promise: the sizzling fajitas a waiter brings to your table, the boot-up sound on an Apple Mac, or the four closing notes of AT&T’s commercials.

Of course, an analytics program pitch to senior management requires your customary facts and figures. For example, when pitching the merits of an analytics program you’ll need slides on use cases, a few diagrams of the technical architecture (on-premises, cloud-based or a combination thereof), prognostications of payback dates and return-on-investment calculations, and a plan to manage the program from an organizational perspective, among other things.

But let’s not overlook the value of telling senior management a good story that humanizes the impact of investing more deeply in an analytics program. And that good story can be delivered more successfully when sound is incorporated into the pitch.

So what are the sounds of big data? I can think of a few that, when experienced, can add a powerful dimension to your pitch. First, take your executives on a tour of your data center, or one you’re proposing to use, so they can hear the hum of a noisy server room where air conditioning ducts pipe in near-freezing air, CPU fans whir in perpetuity, and cable monkeys scurry back and forth stringing fiber-optic lines between machines. Yes, your executive team will be able to see the servers and feel the biting cold of the data center air conditioning, but you also want them to hear the “sounds” of big data in action (i.e. listen to this data center).

As another way to showcase the sound of big data, you might replay for your executive team the audio of a customer phone call in which a call center agent struggles to accurately describe where a customer’s product is in transit, or worse, tries to upsell a product the customer already owns. I’m sure you can think of more “big data” sounds that can accurately depict your daily investment in big data technologies…or lack thereof.

Too often, corporate business cases with a “big ask” for significant headcount, investment dollars and more give too much credence to the left side of the brain, which values logic, mathematics and facts. In the process we end up ignoring the emotional connection where feelings and intuition come into play.

Remember to incorporate the sounds of big data into your overall analytics investment pitch because what we’re aiming for is a “yes”, “go”, “proceed”, or “what are you waiting for?” from the CFO, CEO or other line of business leader. Ultimately, in terms of our analytics pitch, these are the sounds of big data that really matter.

Three Steps to Becoming a Genius Forecaster

Both Ben Bernanke and Edward John Smith got it wrong. They made terrible forecasts that wrecked, in Bernanke’s case, the economy and, in Captain Smith’s case, his ship. Forecasting is hard, and even those who sometimes get it right often fail on a regular basis. But fear not: there are three steps you can take to drastically improve your forecast accuracy, though you’ll have to be willing to put in the work, and possibly set your ego aside, to get there.

Captains of the Titanic. By Jimmy. Courtesy of Flickr Creative Commons.

Simply stated, a forecast is “a prediction…of some future activity, event, or occurrence.” There are many types of forecasts: business, economic, financial, meteorological, political and more. In fact, everyone is a forecaster to some degree, especially when we start thinking about future trends and how they might affect our families, companies, communities and even countries.

But good forecasting is difficult, and even the so-called “experts” and pundits get it wrong more often than they get it right. With this in mind, here are three tips (surely there are more) for becoming a better forecaster.

First, understand that domain knowledge of a particular area doesn’t necessarily mean you’ll see the future better than anyone else. An article from the Financial Times chronicled Canadian psychologist Philip Tetlock’s quest to improve forecasting techniques. Over 18 years, Tetlock accumulated more than 27,500 expert forecasts on politics, geopolitics and economics. His shocking conclusion? According to the FT article, Tetlock discovered that so-called experts were terrible forecasters! These were people with strong opinions and deep knowledge of their spheres of influence, yet their forecasting track records, over time, were no better than chance. So if you believe yourself to be an “expert”, it’s probably better to take a more humble approach.

Second, if you want better forecasts, run your expert opinions by others. Philip Tetlock, Barbara Mellers and Don Moore run “The Good Judgment Project”, a collection of more than 20,000 volunteer participants who offer up opinions on economic and geopolitical events. Through their research, Tetlock, Mellers and Moore learned that when expert forecasters were broken into teams, their discussions and sometimes heated arguments produced better results. In line with the biblical adage that “iron sharpens iron”, the Good Judgment Project’s research shows that when you bounce your expert forecasts off others, you end up with more accurate depictions of future events.

Third, bring your data; in fact, bring all of them. Sometimes, when making expert forecasts, we assume that only the data set we deem relevant (maybe what’s in the corporate data warehouse) is needed for the best decision making.

However, because the world is complex, and many variables often contribute to an event or outcome, it’s better to bring all your data to the task. This means data that might be locked away in non-tabular, “messy” formats, such as call detail records, machine logs, or JSON data sets, can and should be processed, refined and analyzed. And don’t be afraid to look for data sets beyond what you own in your internal data stores. There are plenty of data brokers that might have the data you need to help unlock the puzzle of where to direct your corporate resources next.
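
As a small, hedged illustration of what “refining” a messy format can look like, the Python sketch below flattens some made-up nested machine-log events into an ordinary table with pandas; the device names, fields and threshold are all hypothetical.

```python
import pandas as pd

# Hypothetical nested machine-log events; real call detail records,
# machine logs or broker feeds would be read from files or an API.
raw_events = [
    {"device": "pump-01", "ts": "2015-03-02T08:00:00",
     "metrics": {"temp_c": 71.2, "vibration": 0.04}},
    {"device": "pump-02", "ts": "2015-03-02T08:00:05",
     "metrics": {"temp_c": 88.9, "vibration": 0.31}},
]

# Flatten the nested JSON into ordinary columns so it can sit alongside
# the relational data already in the warehouse.
df = pd.json_normalize(raw_events)
df["ts"] = pd.to_datetime(df["ts"])

# A simple question the flattened data can now answer: which devices look anomalous?
print(df[df["metrics.vibration"] > 0.2])
```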

Looking for more on the latest in forecasting? Tim Harford’s FT article is a great place to start. I’m also a fan of Cullen Roche’s macro approach to understanding markets and financial flows. And no discussion of forecasting would be complete without referencing Nassim Taleb’s sometimes caustic critiques of the forecasting profession.

So if you want to be a genius forecaster, follow these three steps. First, drop any bit of hubris that comes with the forecasting profession and be open to other opinions. After all, as John Kenneth Galbraith once said, “There are two kinds of forecasters: those who don’t know, and those who don’t know they don’t know.”

Next, once your guard is down, you’ll be able to run your ideas by other experts and maybe come up with a better idea than your original one. Don’t be afraid to argue your point. But also be wise enough to be quiet and listen.  You can learn a lot by simply closing your mouth and opening your mind.

Finally, bring all the data you need to solve a problem, not just the clean data, or those that are easily sourced. Sometimes, there’s signal in the noise. But if you want better forecasts, you’re going to have to do the really hard work to find it.

CAPEX Deferred Eventually Makes the Company Sick

Wall Street analysts keep waiting for companies to spend money upgrading their infrastructures, but they shouldn’t hold their collective breath. Instead of investing in IT, machinery, buildings and more, CEOs and CFOs are content to spend cash predominantly on stock buybacks and/or dividends. Deferring CAPEX spend, or “sweating the assets”, will work for a little while, but it’s not a strategy for long-term success.

Creative Commons. Flickr. By Jeff Hitchcock.

Poor Los Angeles. Decades of not spending enough money to upgrade its infrastructure are really catching up with the city.

According to a New York Times article, “Infrastructure Cracks as Los Angeles Defers Repairs”, there’s a real breakdown happening in the city’s public works infrastructure. Take, for example, the massive flooding when a 90-year-old water main broke outside UCLA, inundating the campus with 10-20 million gallons of water and causing millions of dollars in damages.

Deferring necessary upgrades and repairs is costing Los Angeles. The New York Times article notes: “With each day…another accident illustrates the cost of deferred maintenance on public works, while offering a frustrating reminder to this cash-strained municipality of the daunting task it faces in dealing with the estimated $8.1 billion it would take to do the necessary repairs.”

In the same manner, since 2012 companies have clamped down on CAPEX spending for their own infrastructures, choosing instead to spend cash on stock buybacks or simply to hoard it on the balance sheet. Granted, some of these monies are locked up offshore and cannot be repatriated without significant tax hits, but for now, companies are choosing not to spend much on upgrading their own infrastructure.

In terms of information technology, deferring upgrade expenditures has the following implications:

• Big data are only getting bigger
• Moore’s Law keeps marching on, yet some companies are running IT equipment long past its depreciation cycle
• Software advancements continue
• SLAs demanded by business units are in jeopardy of not being met

Perhaps the slowdown in IT CAPEX has something to do with the rise of cloud computing. After all, Amazon’s cloud business has grown into a business worth $2 billion or more. That said, survey after survey still shows reluctance among companies to move everything to the cloud, so perhaps there’s more to the story.

The constant deferral of CAPEX has the real potential to make your company sick. Investments in computers, machines, plants, equipment, buildings and more are the backbone of a company. When CAPEX is intentionally constrained in favor of parking cash for a rainy day or buying back stock (at already high prices), much-needed upgrades are deferred.

Worse, constant deferral of capital upgrades is like a “hidden tax”: by not spending cash to upgrade creaking systems and infrastructure, you make it highly likely that something much worse will happen down the road (e.g. the extra millions Los Angeles must spend just to clean up the messes resulting from infrastructure failures).

Getting back to Los Angeles and its years of deferred infrastructure spending, Donald Shoup, a professor of urban planning at UCLA, says: “It’s part of a pattern of failing to provide for the future.”

The problem is quite clear. Investments in CAPEX can only be delayed so long. Eventually, failure to spend means missed growth opportunities, frustrated customers, irritated employees, and exposure to much more downside risk if things “blow up” from trying to get by just one more quarter with aging infrastructure. Sooner or later, the piper must be paid. And when he finally gets paid, he usually asks for double.

Building Information Technology Liquidity

Turbulent markets offer companies both challenges and opportunities. But with rigid and aging IT infrastructures, it’s hard for companies to turn on a dime and respond to fluctuations in supply and consumer demand. A corporate culture built on agile principles helps, but companies really need to build information technology “liquidity” to meet global disturbances head on.

Creative Commons. Courtesy of Flickr. By Ze'ev Barkan

Liquidity is a term often used in financial markets. When markets are deep and liquid, they contain assets that can be exchanged or sold at a moment’s notice with very little price fluctuation. In liquid markets, participants usually have the flexibility to sell or buy a position very rapidly, using cash or another accepted financial instrument.

Companies with liquid assets, such as lots of cash, can take advantage of market opportunities like picking up ailing competitors cheaply or buying out inventory that a competitor desperately needs. Liquidity, then, allows companies to take advantage of unplanned scenarios and, in some cases, to stay afloat when other companies are failing!

In the same way, IT organizations desperately need to embrace the concept of “liquidity”, not by having extra cash lying around, but by creating agile and flexible infrastructures that can absorb unplanned demand. This is especially hard when an estimated 75% of the IT budget is already spent maintaining legacy infrastructure.

Even worse, IT capacity planning efforts are often based on simple linear regression models or other quick-and-dirty heuristics that don’t account for huge spikes in demand, such as those from a major corporate merger or a “one-hit wonder” product.
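
To make that point concrete, here is a small, illustrative Python sketch using only synthetic, assumed numbers: a straight-line fit to historical demand extrapolates smoothly and completely misses a step change such as a merger or a runaway product.

```python
import numpy as np

# Twelve months of hypothetical storage demand in TB, growing roughly linearly.
rng = np.random.default_rng(42)
months = np.arange(12)
demand_tb = 100 + 8 * months + rng.normal(0, 3, size=12)

# The quick-and-dirty approach: fit a straight line and extrapolate.
slope, intercept = np.polyfit(months, demand_tb, 1)
forecast_18 = slope * 18 + intercept
print(f"Linear forecast for month 18: {forecast_18:.0f} TB")

# A merger or one-hit-wonder product shows up as a step change the line
# never anticipates: say an overnight doubling of demand.
actual_18 = 2 * forecast_18
print(f"Shortfall if demand doubles:  {actual_18 - forecast_18:.0f} TB")
```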

Companies need to build a “liquid” information technology capability that can respond quickly to market and competitive agitation. Richard Villars, Vice President at IDC, says that in building liquidity, IT must “enable variable workloads, handle the data explosion, and (be able to promptly) partner with the business (when unplanned opportunities arise)”.

What are some examples of IT liquidity? One scenario could be extra compute and storage available on-premises and reserved for unplanned demand. These resources could be “hidden” from the business, by throttling back CPU for example, and then “released” when needed.

A second scenario might be having contracts signed and cloud resources at the ready so you can “burst into” extra processing at a moment’s notice when required. A third option could be keeping outside service contractors on a retainer basis to provide a ready set of skills when your IT staff is crunched with too many extra projects.
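
Here is a hedged sketch of how the first two scenarios might be expressed in code; the capacity figures, threshold and function are invented for illustration, showing demand spilling first into reserved on-premises headroom and only then into a cloud burst.

```python
# Illustrative numbers only; real values would come from your own capacity planning.
ON_PREM_CAPACITY_CORES = 800
BURST_THRESHOLD = 0.85          # start escalating once sustained utilization passes 85%
RESERVED_HEADROOM_CORES = 120   # "hidden" on-premises capacity held back from the business

def plan_capacity(demand_cores: int) -> dict:
    """Serve a demand spike from the base pool, then reserved headroom, then cloud burst."""
    base = min(demand_cores, int(ON_PREM_CAPACITY_CORES * BURST_THRESHOLD))
    remaining = demand_cores - base
    reserved = min(remaining, RESERVED_HEADROOM_CORES)
    cloud_burst = max(0, remaining - reserved)
    return {"base": base, "reserved": reserved, "cloud_burst": cloud_burst}

# A merger-sized spike blows past normal capacity and spills into the cloud.
print(plan_capacity(1000))  # {'base': 680, 'reserved': 120, 'cloud_burst': 200}
```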

In the financial world, liquid assets allow companies to react to and capitalize on market opportunities. Liquidity in IT means that companies have enough extra compute firepower and people resources, and are agile enough in their IT processes, to respond to unplanned events and demand in whatever shape, form or order they arrive.

Building resistance to market disruptions, and combating them when they arrive, is an essential capability: in some cases to thrive, and in others simply to survive.
