The High Cost of Low Quality IT

In times of tight corporate budgets, everyone wants “a deal.” But there is often a price to be paid for low quality, especially when IT and purchasing managers aren’t comparing apples to apples in terms of technology capability or implementation experience. Indeed, focusing on the lowest “negotiated price” is a recipe for vendor and customer rework, delayed projects, cost overruns, and lost business value.

Courtesy of Flickr. By Rene Schwietzke

Financial Times columnist Michael Skapinker recently lamented the terrible quality of his dress shirts. In years past, his shirts would last two to three years. Lately, however, his shirts, laundered once a week, last only three months.

Of course, this equates to a terrible hit to Mr. Skapinker’s clothing budget, not to mention environmental costs in producing, packaging, and discarding sub-standard clothing.  Consumers, Skapinker says, should “start searching out companies that sell more durable clothes. They may cost more, but should prove less expensive in the long run.”

Much as it’s short-sighted to buy low-quality shirts that don’t last very long, it’s also very tempting to select the low-cost provider for technology or implementation, especially if they meet today’s immediate needs. The mindset is that tomorrow can worry about itself.

This myopic thinking is exacerbated by the rise of the procurement office. Today’s procurement offices are highly motivated by cost control; in fact, some are measured primarily on keeping costs down. This, of course, can be dangerous, because in this model procurement professionals have little to no “skin in the game”: if something goes wrong with the IT implementation, procurement has no exposure to the damage.

Now, to be fair, some procurement offices are more strategic and are involved across the IT lifecycle. From requirements, to request for proposal, to final sign-off on the deal, procurement works hand-in-hand with IT the entire time. In this model, the procurement department (and IT) wants the best price, of course, but they’re also looking for the best long-term value. However, the cost-conscious procurement department seems to be gaining momentum, especially in this era of skimpy corporate budgets.

Ultimately, technology purchases and implementations aren’t like buying widgets. A half-baked solution full of “second choice” technologies may end up being unusable to end-users, especially over a prolonged period of time. And cut-rate implementations that are seriously delayed or over-budget can translate into lost revenues, and/or delayed time to market.

When evaluating information technology (especially for new solutions), make sure to compare specs to specs, technical capabilities to capabilities, and implementation expertise to expertise.

Some questions to consider: Is there a 1:1 match in each vendor’s technologies? Will the technical solution implemented today scale for business user needs next year or in three years? What does the technology support model look like, and what are initial versus long term costs? Is the actual vendor supporting the product or have they outsourced support to a third party?

For the implementation vendor, make sure to evaluate personnel, service experience, customer references, methodologies, and overall capabilities. Also be wary of low service prices, as some vendors arrive at cut rates by dumping a school bus full of new college graduates on your project (who then learn on your dime!). The more complex your project, the more you should be concerned with hiring experienced service companies.

A discounted price may initially look like a bargain. But quality has a cost. If you’re sold on a particular (higher-priced) technology or implementation vendor, don’t let procurement talk you out of it. And if you cannot answer the questions listed above with confidence, it’s likely that the bargain price you’re offered by technology or implementation vendor X is really no bargain at all.

 

It’s Time to Ditch Scarcity Thinking

In J.R.R. Tolkien’s “The Hobbit,” Smaug the magnificent dragon sits on his nearly unlimited hoard of treasure and coins and tells “burglar” Bilbo Baggins to “help (himself) again, there’s plenty and to spare.” While it’s certainly true there are many things in this world that are physically scarce, when it comes to living in the information age, we need to retrain our minds to ditch scarcity thinking and instead embrace “sky’s the limit” abundance.

Image courtesy of Flickr. By SolidEther

Most of us have been taught that there are resource constraints on things such as time, talent, and natural resources such as land and fresh water. And of course, there are very real limits to some of these. However, we now live in an information age, and in this era some of our previous thought patterns no longer apply.

Take, for instance, the ability to have an ocean of knowledge at our fingertips. With non-networked computers and other devices, we’re limited to the data at hand, or to the storage capacity of those devices. But add in a dash of hard-wired or wireless networking and suddenly the physical limits to knowledge disappear.

Apple’s Siri technology is a compelling case in point. Using the available processing power of an iPhone (which, by the way, is considerable), Siri could arguably answer a limited number of questions based on data in flash storage.

But open up Siri’s natural language processing (the bulk of which is done in the cloud) and suddenly if Siri can’t understand you, or doesn’t know an answer, the web may provide assistance. By leveraging cloud computing and access to the internet, Siri brings a wealth of data to users, and even more intelligence to Apple by capturing all queries “in the cloud” and offering an immense data set for programmers to tune and improve Siri’s capabilities.

It used to be that TV airtime was in short supply. After all, there are only so many channels and airtime programming slots for content, especially during primetime hours. And there’s still an arduous process to create, discover and produce quality content that viewers will want to watch during these scarce blocks of time.

Without regard to conventional thinking, YouTube is turning this process on its head. A New Yorker article details how YouTube is growing its market presence by offering unlimited “channels” that can be played on demand, anytime and anywhere. “On YouTube, airtime is infinite, content costs almost nothing for YouTube to produce and quantity, not quality is the bottom line,” explains author John Seabrook. Content watching (whether via YouTube, Netflix, DVR, Slingbox, etc.) is no longer constricted to certain hours, and in effect time is no longer a constraint.

In the past, the music we liked was confined to physical media such as records or compact discs. Then MP3 players such as the iPod expanded our capabilities to listen to more music but were still confined to available device storage. That’s scarcity thinking. Now with wireless networking access, there are few limits to listening to our preferred music through streaming services such as Pandora, or renting music instead of owning it on CD.  Indeed, music subscription services are becoming the dominant model for how music is “acquired”.

There are still real limits to many valuable things in the world (e.g. time, talent, money, physical resources, and even human attention spans). Yet even some of these are artificially constrained by politics or by today’s business cases.

The information age has brought persons, businesses, and societies elasticity, scalability, and the removal of many earlier capacity constraints. We seem to be sitting squarely on Smaug’s unending stack of treasure. But even the great Smaug had a gaping vulnerability. We’ll still need prudence, intelligence, and far-sighted thinking in this age of abundance: just because some of our constraints have been removed doesn’t mean we should become gluttonous and wasteful with today’s resources.

 

CAPEX for IT – Why So Painful?

CAPEX dollars reserved for investments in property, plant, or equipment (including IT) are notoriously hard to secure. In fact, IT leaders often express dismay at the process involved in not only forecasting CAPEX needs, but then stepping through arduous internal CAPEX budget approvals. What’s all the fuss with CAPEX, and why is it so difficult to obtain?

Courtesy of Flickr. By FCAtlantaB13

One investment analyst says that 2013 should be a banner year for capital investments. And another, Mark Zandi of Moody’s, said in late 2012: “Businesses are flush and highly competitive and this will shine through in a revival of investment spending by this time next year…”

So where’s the CAPEX? Apparently in short supply. A New York Times article says that companies are stockpiling cash, and taking on debt, but investing very little in themselves. For now, when there are significant IT investments, OPEX appears to be the preferred route.

First, let’s be clear: the CAPEX vs. OPEX debate is really about a shift in cash flows and outlays; there are few other financial advantages either way. Choosing one over the other is largely a matter of company policy, since in one case (CAPEX) assets are carried on the balance sheet and depreciated, and in the other (OPEX) purchases are expensed through daily operations.
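
To make the cash-flow difference concrete, here is a minimal Python sketch. The figures (a $500,000 up-front purchase depreciated straight-line over five years versus a $120,000 annual subscription) are assumptions invented for illustration; the point is that roughly similar money goes out over the period, but when the cash leaves and where the cost shows up differ.

```python
# Illustrative only: compare the treatment of a capitalized IT purchase
# (CAPEX) versus a subscription expensed as OPEX.
# All dollar figures are made-up assumptions for the example.

CAPEX_PURCHASE = 500_000      # up-front hardware/software purchase
USEFUL_LIFE_YEARS = 5         # straight-line depreciation period
OPEX_SUBSCRIPTION = 120_000   # annual cloud subscription fee

def capex_profile(purchase, years):
    """Cash leaves in year 1; depreciation hits the P&L evenly each year."""
    cash_outlay = [purchase] + [0] * (years - 1)
    depreciation = [purchase / years] * years
    return cash_outlay, depreciation

def opex_profile(annual_fee, years):
    """Cash outlay and expense are the same amount every year."""
    return [annual_fee] * years, [annual_fee] * years

capex_cash, capex_expense = capex_profile(CAPEX_PURCHASE, USEFUL_LIFE_YEARS)
opex_cash, opex_expense = opex_profile(OPEX_SUBSCRIPTION, USEFUL_LIFE_YEARS)

for year in range(USEFUL_LIFE_YEARS):
    print(f"Year {year + 1}: CAPEX cash {capex_cash[year]:>9,.0f} "
          f"(P&L {capex_expense[year]:>9,.0f}) | "
          f"OPEX cash {opex_cash[year]:>9,.0f}")
```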

Certainly, there are capital-intensive businesses such as telecoms, manufacturers, and utilities that must continually invest in infrastructure. These types of companies will always spend significantly on CAPEX. On the other hand, there are companies that are CAPEX-restricted, such as start-ups, companies under the watchful eye of private equity firms, and medium-sized businesses that simply don’t have much CAPEX as a matter of course.

Obtaining CAPEX can also be painful for IT leaders. At the TDWI Cloud Summit in December 2012, one stage presenter in charge of IT mentioned that getting an idea from the “back of a napkin to (capitalization budget) approval” could take 18 months.

This is why cloud computing options are attractive. With cloud, companies that either have capital to spend (but don’t want to), or that are CAPEX-constrained, can take advantage of existing compute infrastructures on a subscription basis. With cloud, investments in IT capabilities are easier to digest via OPEX than by front-loading a significant chunk of change into a business asset. And of course, there are other reasons to choose cloud computing (such as elastic provisioning and full resource utilization), as listed here.

Regardless, it appears that for the present day, CAPEX dollars (especially for IT) will be in short supply. Perhaps this is just one of the many reasons why there’s a flurry of M&A activity in the cloud computing space?

Questions:

  • Why is CAPEX so hard to come by for information technology?
  • Do you have any horror stories (post anonymously if you’d like) about trying to get CAPEX for IT? Was cloud computing an easier discussion with your CFO?
  • When will larger companies relax their CAPEX spending, or is OPEX for IT a long term trend?

Societal Remedies for Algorithms Behaving Badly

In a world where computer programs are responsible for wild market swings, advertising fraud and more, it is incumbent upon society to develop rules and possibly laws to keep algorithms—and programmers who write them—from behaving badly.

Courtesy of Flickr. By 710928003

In the news, it’s hard to miss cases of algorithms running amok. Take for example, the “Keep Calm and Carry On” debacle, where t-shirts from Solid Gold Bomb Company were offered with variations on the WWII “Keep Calm” propaganda phrase such as “Keep Calm and Choke Her” or “Keep Calm—and Punch Her.” No person in their right mind would sell, much less buy, such an item. However, the combinations were made possible by an algorithm that generated random phrases and added them to the “Keep Calm” moniker.

In another instance, advertising agencies are buying online ads across hundreds of thousands of web properties every day. But according to a Financial Times article, PC hackers are deploying “botnet” algorithms to click on advertisements and run up advertiser costs.  This click-fraud is estimated to cost advertisers more than $6 million a month.

Worse, the “hash crash” on April 23, 2013, trimmed 145 points off the Dow Jones index in a matter of minutes. In this case, the Associated Press Twitter account was hacked by the Syrian Electronic Army, and a post went up mentioning “Two Explosions in the White House…with Barack Obama injured.”  With trading computers reading the news, it took just a few seconds for algorithms to shed positions in stock markets, without really understanding whether the AP tweet was genuine or not.

In the case of the “Keep Calm” and “hash crash” fiascos, companies quickly trotted out apologies and excuses for algorithms behaving badly. Yet while admissions of guilt and promises to “do better” are appropriate, society can and should demand better outcomes.

First, it is possible to program algorithms to behave more honorably. For example, IBM’s Watson team noticed, in preparation for its televised Jeopardy! appearance, that Watson would sometimes curse. This was simply a programming issue: Watson would scour its data sources for the most likely answer to a question, and sometimes those answers contained profanities. Watson’s programmers realized that a machine cursing on national television wouldn’t go over very well, so they gave Watson a “swear filter” to avoid offensive words.
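
The fix doesn’t have to be exotic. As a purely hypothetical sketch (not IBM’s actual implementation), an output filter can be as simple as screening candidate answers against a blocklist before anything is spoken aloud:

```python
# Hypothetical sketch of an output filter for a question-answering system.
# The blocklist, candidates, and scoring are illustrative, not Watson's code.

BLOCKLIST = {"damn", "hell"}  # stand-in entries; a real list would be longer

def is_safe(answer: str) -> bool:
    """Return True if no blocklisted word appears in the candidate answer."""
    words = {w.strip(".,!?").lower() for w in answer.split()}
    return words.isdisjoint(BLOCKLIST)

def choose_answer(candidates):
    """Pick the highest-scoring candidate that passes the safety filter."""
    safe = [c for c in candidates if is_safe(c["text"])]
    if not safe:
        return None  # better to stay silent than to curse on television
    return max(safe, key=lambda c: c["score"])

candidates = [
    {"text": "What the hell is a leotard?", "score": 0.91},
    {"text": "What is a leotard?", "score": 0.88},
]
print(choose_answer(candidates)["text"])  # -> "What is a leotard?"
```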

Second, public opprobrium is a valuable tool. The “Keep Calm” algorithm nightmare was written up in numerous online and mainstream publications such as the New York Times. Companies that don’t program algorithms in an intelligent manner could find their brands highlighted in case studies of “what not to do” for decades to come.

Third, algorithms that perform reckless behavior could (and in the instance of advertising fraud, should) get a company into legal hot water. That’s the suggestion of Scott O’Malia, Commissioner of the Commodity Futures Trading Commission. According to a Financial Times article, O’Malia says that in stock trading, “reckless behavior” might be “replacing market manipulation” as the standard for prosecuting misbehavior. What constitutes “reckless” might be up for debate, but it’s clear that more financial companies are trading based on real-time news feeds. Therefore, wherever possible, Wall Street quants should program their algorithms not to take actions that could wipe out the financial holdings of others.

Algorithms, by themselves, don’t actually behave badly; after all, they are simply coded to perform actions when a specific set of conditions occurs.

Programmers must realize that in today’s world, with 24-hour news cycles, variables are increasingly correlated. In other words, when one participant moves, a cascade effect is likely to follow. Brands can also be damaged in the blink of an eye when poorly coded algorithms run wild. With this in mind, programmers, and the companies that employ them, need to be more responsible in their algorithmic development and use scenario thinking to ensure a cautious approach.

Preserving Big Data to Live Forever

If anyone knows how to preserve data and information for long term value, it’s the programmers at Internet Archive, based in San Francisco, CA.  In fact, Internet Archive is attempting to capture every webpage, video, television show, MP3 file, or DVD published anywhere in the world. If Internet Archive is seeking to keep and preserve data for centuries, what can we learn from this non-profit about architecting a solution to keep our own data safeguarded and accessible long-term?

Long term horizon by Irargerich. Courtesy of Flickr.

There’s a fascinating 13-minute documentary on the work of data curators at the Internet Archive. The mission of the Internet Archive is “universal access to all data”. In their efforts to crawl every webpage, scan every book, and make information available to any citizen of the world, the Internet Archive team has designed a system that is resilient, redundant, and highly available.

Preserving knowledge for generations is no easy task. Key components of this massive undertaking include decisions in technology, architecture, data storage, and data accessibility.

First, just about every technology used by Internet Archive is either open source software or commodity hardware. For web crawling and adding content to its digital archives, Internet Archive developed Heritrix. To enable full-text search on its website, Nutch running on Hadoop’s file system is used to “allow Google-style full-text search of web content, including the same content as it changes over time.” Some sites also mention that HBase may be in the mix as a database technology.
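
For the curious, the fetch-and-archive loop at the heart of any crawler can be boiled down to a few lines. The Python sketch below is a toy illustration using the requests library with placeholder seed URLs; it is not Heritrix’s or Nutch’s actual interface.

```python
# Toy illustration of a fetch-and-archive loop; real crawlers such as
# Heritrix add politeness rules, deduplication, WARC output, and much more.
import hashlib
import json
import time

import requests  # third-party; pip install requests

SEED_URLS = ["https://example.org/"]  # placeholder seed list

def archive(url: str) -> dict:
    """Fetch a page and return a record with a timestamp and content hash."""
    response = requests.get(url, timeout=10)
    return {
        "url": url,
        "fetched_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "status": response.status_code,
        "sha256": hashlib.sha256(response.content).hexdigest(),
        "content": response.text,
    }

if __name__ == "__main__":
    records = [archive(url) for url in SEED_URLS]
    # Append-only storage mirrors the "capture everything, discard nothing" idea.
    with open("archive.jsonl", "a", encoding="utf-8") as out:
        for record in records:
            out.write(json.dumps(record) + "\n")
```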

Second, the concepts of redundancy and disaster planning are baked into the overall Internet Archive architecture. The non-profit has servers located in San Francisco, but in keeping with its multi-century vision, Internet Archive also mirrors data in Amsterdam and Egypt to weather the volatility of historical events.

Third, many companies struggle to decide which data they should use, archive, or throw away. However, with the plummeting cost of hard disk storage and open source Hadoop, capturing and storing all data in perpetuity is more feasible than ever. For Internet Archive, all data are captured and nothing is thrown away.

Finally, it’s one thing to capture and store data, and another to make it accessible. Internet Archive aims to make the world’s knowledge base available to everyone. On the Internet Archive site, users can search and browse ancient documents, view recorded video from years past, and listen to music from artists who no longer walk planet earth. Brewster Kahle, founder of the Internet Archive, says that with a simple internet connection, “A poor kid in Kenya or Kansas can have access to…great works no matter where they are, or when they were (composed).”

Capturing a mountain of multi-structured data (currently 10 petabytes and growing) is an admirable feat; however, the real magic lies in Internet Archive’s multi-century vision of making sure the world’s best and most useful knowledge is preserved. Political systems come and go, but with Internet Archive’s Big Data preservation approach, the treasures of the world’s digital content will hopefully exist for centuries to come.

For Simplicity’s Sake – Learn from Peyton Manning!

Future NFL Hall of Fame quarterback Peyton Manning is tough to beat. What’s his secret? Is it accuracy, the ability to throw a “catchable ball,” or the capability to diagnose defenses quickly? The answer is probably all of the above, to some degree. Stated another way, though, Manning’s offensive excellence comes down to two things: simplicity and the ability to execute.

As Chris Brown writes for Grantland, Peyton Manning’s offense is simple, simple, simple. Brown says: “(Manning runs) the fewest play concepts of any offense in the league. Despite having one of the greatest quarterbacks of all time under center, the Colts eschewed the conventional wisdom of continually adding volume to their offense in the form of countless formations and shifts.”

Image courtesy of USA Today and NFL.

A small number of plays that “fit together,” run from various personnel groupings. That’s it. Chris Brown mentions that sometimes Manning’s offense uses three wide receivers and a tight end; sometimes, two wide receivers and two tight ends. The simplicity of the offense means that Manning can quickly come to the line, diagnose what the defense is doing, and then execute the best play possible.

Manning is essentially saying, “You’ve studied up on what I’m going to run, now try to beat me.” And while the Baltimore Ravens did just that in 2013’s NFL Divisional playoff game, Manning’s regular season record of 13-3 suggests few teams could “out-execute” the Broncos.

There are parallels in commerce. For many years, the rage was to get bigger and more diversified. For example, under CEO Dennis Kozlowski, Tyco Corporation acquired more than 1,000 companies in a ten-year stretch. But too much growth and diversity resulted in an unwieldy business model, and in 2002 Tyco was forced to shed businesses and consolidate into four companies to better meet customer needs.

Some companies are known for a simple business model. Case in point: Priceline is a discount travel site. But during the early-2000s dot-com boom, Priceline got off track as its founders believed the “name your own price” model could be translated to car sales, travel insurance, and groceries. Only after re-focusing on its core business of booking unsold hotel rooms did Priceline’s market value zoom from $226 million (January 2000) to $33.41 billion today.

The concept of simplicity also holds for products and services. I’ve previously written about how some companies such as AWS are subtracting mental clutter from cloud computing services. From the design of the cloud management console to the actual service offerings, AWS is baking simplicity into very complex “behind the scenes” products.

Please don’t get me wrong. Once in a while you’ll have to add complexity to your business. And of course, there’s nothing wrong with purposeful M&A or adding new product lines to your business stack.

However, in the dash to grow at all costs, take a page from the Peyton Manning playbook instead and choose simplicity (possibly through consolidation, better design, or being pickier about additions) and precise execution (doing things better than competitors).

Technologies and Analyses in CBS’ Person of Interest

Person of Interest is a broadcast television show on CBS in which a “machine” predicts the person most likely to die within 24-48 hours. Then it’s up to a mercenary and a data scientist to find that person and help them escape their fate. A straightforward plot, really, but not so simple in terms of the technologies and analyses behind the scenes that could make a modern-day prediction machine a reality. I have taken the liberty of framing some components that could be part of such a project. Can you help discover more?

Image courtesy of CBS.

In Person of Interest, “the machine” delivers either a single name or a group of names predicted to meet an untimely death. However, in order to predict such an event, the machine must collect and analyze reams of big data and then produce a result set, which is then delivered to “Harold” (the computer scientist).

In real life, such an effort would be a massive undertaking on a national basis, much less by state or city. However, let’s dispense with the enormity, or plausibility, of such a scenario and instead see if we can identify the various technologies and analyses that could make a modern-day “Person of Interest” a reality.

It is useful to think of this analytics challenge in terms of a framework: data sources, data acquisition, data repository, data access and analysis and finally, delivery channels.

First, let’s start with data sources. In Person of Interest, the “machine” collects data from sources such as cameras (images, audio, and video), call detail records, voice (landline and mobile), GPS location data, sensor networks, and text (social media, web logs, newspapers, the internet, etc.). Data sets stored in relational databases, both publicly available and private, might also be used for predictive purposes.

Next, data must be assimilated or acquired into a data management repository (most likely a multi-petabyte bank of computer servers). If data are acquired in near real time, they may go into a data warehouse and/or Hadoop cluster (perhaps cloud based) for analysis and mining purposes. If data are analyzed in real time, it’s possible that complex event processing (CEP) technologies (i.e. streams analyzed in memory) are used to analyze data “on the fly” and make instant decisions.
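
As a rough illustration of the “on the fly” idea, a CEP rule can be thought of as a predicate applied to each event as it streams through memory. The event fields and thresholds in this Python sketch are invented for the example:

```python
# Minimal sketch of a complex-event-processing-style rule evaluated in memory.
# Event fields and the alerting rule are hypothetical, for illustration only.

def threat_rule(event: dict) -> bool:
    """Flag an event when several weak signals co-occur within it."""
    return (
        event.get("panic_words", 0) >= 2
        and event.get("known_associates_nearby", 0) >= 1
        and event.get("location_risk_score", 0.0) > 0.8
    )

def process_stream(events):
    """Apply the rule to each event as it arrives; yield alerts immediately."""
    for event in events:
        if threat_rule(event):
            yield {"alert": "possible person of interest", "event": event}

sample_stream = [
    {"panic_words": 3, "known_associates_nearby": 2, "location_risk_score": 0.9},
    {"panic_words": 0, "known_associates_nearby": 0, "location_risk_score": 0.1},
]
for alert in process_stream(sample_stream):
    print(alert)
```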

Analysis can be done at various points—during data streaming (CEP), in the data warehouse after data ingest (which could be in just a few minutes), or in Hadoop (batch processed).  Along the way, various algorithms may be running which perform functions such as:

  • Pattern analysis – recognizing and matching voice, video, graphics, or other multi-structured data types. Could be mining both structured and multi-structured data sets.
  • Social network (graph) analysis – analyzing nodes and links between persons, possibly using call detail records and web data (Facebook, Twitter, LinkedIn, and more); a minimal sketch follows this list.
  • Sentiment analysis – scanning text to reveal meaning, as in when someone says, “I’d kill for that job” – do they really mean they would murder someone, or is this just a figure of speech?
  • Path analysis – what are the most frequent steps, paths and/or destinations by those predicted to be in danger?
  • Affinity analysis – if person X is in a dangerous situation, how many others just like him/her are also in a similar predicament?
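
To make one of these concrete, here is a minimal, hypothetical sketch of social network (graph) analysis in plain Python: build a contact graph from invented call detail records, rank people by how connected they are, and look for pairs who share contacts but have never called each other.

```python
# Hypothetical sketch of social network (graph) analysis over call records.
# The call detail records and names below are invented for illustration.
from collections import defaultdict
from itertools import combinations

call_records = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob", "carol"),
    ("carol", "dave"),
]

# Build an undirected adjacency list: who has talked to whom.
graph = defaultdict(set)
for caller, callee in call_records:
    graph[caller].add(callee)
    graph[callee].add(caller)

# Degree centrality: the most connected person may be a key node to watch.
degree = {person: len(contacts) for person, contacts in graph.items()}
print(max(degree, key=degree.get))  # -> "carol"

# Simple link prediction: pairs with shared contacts but no direct call.
for a, b in combinations(graph, 2):
    if b not in graph[a]:
        shared = graph[a] & graph[b]
        if shared:
            print(f"{a} and {b} share contacts: {sorted(shared)}")
```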

It’s also possible that an access layer is needed for BI-style reporting, dashboards, or visualization.

Finally, the result set, in this case the name of the person “the machine” predicts is most likely to be killed in the next twenty-four hours, could be delivered to a device in the field: a mobile phone, tablet, computer terminal, etc.

These are just some of the technologies that would be necessary to make a “real life” prediction machine possible, just like in CBS’ Person of Interest. And I haven’t even discussed networking technologies (internet, intranet, compute fabric etc.), or middleware that would also fit in the equation.

What technologies are missing? What types of analysis are also plausible to bring Person of Interest to life? What’s on the list that should not be? Let’s see if we can solve the puzzle together!
