Counteracting Our Obsession with Speed

In the quest to make decisions at something approaching the speed of light, some companies are moving too fast and making very costly mistakes. When windows of time are compressed to near zero, there is no recovery time for critical errors. In fact, some decisions (especially those of a strategic nature) are much better made slowly.

In baseball, scouts love to find pitchers who can throw "the heater." Prospects who can hit nearly 100 mph a few times a game are coveted over those who can rarely top 90. The mantra for pitchers now is: "throw it faster and see if hitters can keep up."

However, there is renewed interest in knuckleballers, pitchers who throw the ball with little to no spin. The knuckleball is supposed to "dance" on its way to home plate at speeds of 60-70 miles per hour. For hitters, a dancing knuckleball is extremely tough: it is hard to follow its lively movement, much less adjust to the low speed at which it is thrown.

Why the revived interest in knuckleball pitchers? Sports Illustrated writer Phil Taylor says: "(In baseball) we need the knuckleball to help counteract the obsession with speed, to prove there still is a place for nuance and skill."

Phil Taylor has it exactly right, in baseball and in the business world.

Our world is obsessed with speed. Faster food, hurry-up offenses in football, faster computers, and even faster war-making. As I have detailed before, it's everything, faster.

But sometimes moving too fast is dangerous. Some decisions should not be made quickly, especially those that could benefit from more data collection, or those marked by ambiguity and complexity. United States President Barack Obama noted in a Vanity Fair article: "Nothing comes to my desk that is perfectly solvable…so you wind up dealing with probabilities, and any given decision you make you'll wind up with 30-40% chance that it isn't going to work."

Even when speed is deemed a competitive advantage, faster isn't always better. For example, Knight Capital Group lost roughly $440 million when a "technology malfunction" launched erroneous trades on its behalf. Trading at near the speed of light, the firm simply had no time to recover from the initial errors, and the losses piled up in the span of just 45 minutes.

The need for speed comes at the price of compressed decision-making windows and no ability to recover from critical errors. Worse, when errors from a few players cascade through complex systems, the feedback effects can severely damage every participant in the ecosystem. It's as if the butterfly flapping its wings really does bring about the Category 4 hurricane.

Not every decision needs to be made faster. There will always be a place for decisions made with "skill and nuance," where it's important to slow down, see the bigger picture, and adjust our swing and timing for the occasional erratic knuckleball thrown our way.

Private Clouds Are Here to Stay—Especially for Data Warehousing

Some cloud experts proclaim that private clouds are "false clouds," or that the term was conveniently conjured to support vendor solutions. Other analysts hedge their bets by proclaiming that private clouds are a good solution for the next 3-5 years, until public clouds mature. I don't believe it. Private clouds are here to stay (especially for data warehousing), and let me tell you why.

For starters, let's define public vs. private cloud computing. NIST and others do a pretty good job of defining public clouds and their attributes: they are remote computing services that are typically elastic, scalable, self-service, metered by use, and built on internet technologies. Private clouds, on the other hand, are proprietary and typically sit behind the corporate firewall, yet they frequently share most of the characteristics of public clouds.

However, there is one significant difference between the two cloud delivery models: public clouds are usually multi-tenant (i.e., shared among other entities, corporations, or enterprises), while private clouds are typically dedicated to a single enterprise and not shared with other firms. I realize these definitions are not accepted by all cloud experts, but they're common enough to set a foundation for the rest of the discussion.
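For readers who prefer a compact summary, here is a minimal sketch of the comparison. The attribute names and values are my own shorthand for the definitions above, not a formal NIST schema.

```python
from dataclasses import dataclass

@dataclass
class CloudModel:
    """Shorthand for the attributes discussed above; not a formal NIST schema."""
    name: str
    elastic: bool
    self_service: bool
    metered_by_use: bool
    multi_tenant: bool               # the significant difference called out above
    behind_corporate_firewall: bool

public_cloud = CloudModel("public", elastic=True, self_service=True,
                          metered_by_use=True, multi_tenant=True,
                          behind_corporate_firewall=False)

private_cloud = CloudModel("private", elastic=True, self_service=True,
                           metered_by_use=True, multi_tenant=False,
                           behind_corporate_firewall=True)

# The two models share most attributes; tenancy (and who controls the firewall)
# is what separates them.
for model in (public_cloud, private_cloud):
    print(model)
```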

With private clouds defined as dedicated environments for a single enterprise, it's easy to see why they'll stick around, especially for data warehousing workloads.

First, there's the issue of security. No matter how "locked down" or secure a public cloud environment is said to be, there will always be an issue of trust that must be overcome with contracts and/or SLAs (and possibly penalties for breaches). Enterprises have to trust that their data is safe and secure, especially if they plan on putting their most sensitive data (e.g., HR, financial, portfolio positions, or healthcare records) in the public cloud.

Second, there's the issue of performance for analytics. Data warehousing requirements such as high availability, mixed workload management, near real-time data loads, and complex query execution are not easily managed or deployed using public cloud computing models. By contrast, private clouds for data warehousing offer the higher performance and predictable service levels expected by today's business users. There are myriad other reasons why public clouds aren't ideal for data warehousing workloads, and analyst Mark Madsen does a great job of explaining them in this whitepaper.

Third, the multi-tenant nature of public cloud computing adds complexity, which will lead to more cloud breakdowns. In a public cloud environment there are many moving pieces interacting with each other (not necessarily in a linear fashion) at any given time. These environments can be complex and tightly coupled, so failures in one area easily cascade to others. For data warehousing customers with high availability requirements, public clouds have a long way to go, and the almost monthly "cloud breakdown" stories blasted across the internet aren't helping their cause.

Finally, there's the issue of control. Corporate IT shops are accustomed to having control over their own IT environments. By flexibly outsourcing some IT capabilities (which is what public cloud computing really is), IT effectively gives up some or all control over its hardware and possibly its software. When there are issues or failures, IT is relegated to opening a trouble ticket and waiting for a third-party provider to remedy the situation (usually within a predefined SLA). In times of harmony and moderation, this approach is all well and good. But when the inevitable hiccup or breakdown happens, it's a helpless feeling to be at the mercy of another provider.

When embarking on a public cloud computing endeavor, a company is effectively tying its fate to another provider for specific IT functions and processes. Key questions to consider (a simple scoring sketch follows the list) are:

  • How much performance do I need?
  • What data do I trust in the cloud?
  • How much control am I willing to give up?
  • How much risk am I willing to accept?
  • Do I trust this provider?
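One lightweight way to make these questions actionable is to score candidate providers and workloads against them. The sketch below is illustrative only; the weights and the 1-5 scale are assumptions, not a formal evaluation framework.

```python
# Illustrative only: the questions mirror the list above; the weights and the
# 1-5 scoring scale are assumptions, not a formal methodology.
QUESTION_WEIGHTS = {
    "performance_needed": 0.25,        # How much performance do I need?
    "data_trusted_to_cloud": 0.25,     # What data do I trust in the cloud?
    "control_i_can_give_up": 0.20,     # How much control am I willing to give up?
    "risk_tolerance": 0.15,            # How much risk am I willing to accept?
    "trust_in_provider": 0.15,         # Do I trust this provider?
}

def public_cloud_fit(answers: dict) -> float:
    """Weighted score from 1 (poor fit for public cloud) to 5 (strong fit)."""
    return sum(weight * answers[q] for q, weight in QUESTION_WEIGHTS.items())

# Example: a workload with highly sensitive data and strict control requirements
# scores low, pointing toward a private cloud deployment for that workload.
sensitive_warehouse = {
    "performance_needed": 2,
    "data_trusted_to_cloud": 1,
    "control_i_can_give_up": 2,
    "risk_tolerance": 2,
    "trust_in_provider": 3,
}
print(f"Public cloud fit: {public_cloud_fit(sensitive_warehouse):.2f} / 5")
```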

There are many reasons why moving workloads to the public cloud makes sense, and in fact your end state will likely be a combination of public and private clouds. But you'll want to consider public cloud only after you carefully think through the questions above.

And inevitably, once answers to these questions are known, you’ll also conclude private clouds are here to stay.

 

Can Big Data Analytics Solve “Too Big to Fail” Banking Complexity?

Despite investing millions upon millions of dollars in information technology systems, analytical modeling, and PhD talent sourced from the best universities, global banks still have difficulty understanding their own business operations and investment risks, much less complex financial markets. Can "Big Data" technologies such as MapReduce/Hadoop, or even more mature technologies like BI/data warehousing, help banks make better sense of their own complex internal systems and processes, let alone tangled and interdependent global financial markets?

British physicist and cosmologist Stephen Hawking said in 2000: "I think the next century will be the century of complexity." He wasn't kidding.

While Hawking was surely speaking of science and technology, there's little doubt he'd also view global financial markets and their players (hedge funds, banks, institutional and individual investors, and more) as a very complex system.

With hundreds of millions of hidden connections and interdependencies, hundreds of thousands of hard-to-understand financial products, and millions if not billions of "actors" each with their own agenda, global financial markets are a perfect example of extreme complexity. In fact, the global financial system is so complex that attempts to analytically model and predict markets may have worked for a time but ultimately failed to help companies manage their investment risks.

Some argue that complexity in markets might be deciphered through better reporting and transparency.  If every financial firm were required to provide deeper transparency into their positions, transactions, and contracts, then might it be possible for regulators to more thoroughly police markets?

Financial Times writer Gillian Tett has been reading the published work of Professor Henry Hu at the University of Texas. In her article "How 'too big to fail' banks have become 'too complex to exist'," Tett says Professor Hu argues that technological advances and financial innovation (i.e., derivatives) have made financial instruments and flows too difficult to map. Moreover, Hu believes financial intermediaries themselves are so complex that they will continually have difficulty making sense of shifting markets.

Is a “too big to fail” situation exacerbated by a “too complex to exist” problem? And can technological advances such as further adoption of MapReduce or Hadoop platforms be considered a potential savior?  Hu seems to believe that supercomputers and more raw economic data might be one way to better understand complex financial markets.

However, even if massive data sets can be better searched, counted, aggregated and reported with MapReduce/Hadoop platforms, superior cognitive skills are necessary to make sense of outputs and then make recommendations and/or take actions based on findings. This kind of talent is in short supply.
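For readers unfamiliar with the pattern, here is a minimal sketch of the kind of counting and aggregating MapReduce makes easy; the trade records and field names are invented purely for illustration.

```python
from collections import defaultdict

# Invented example records; in a real Hadoop job this data would be read
# from files distributed across a cluster rather than an in-memory list.
trades = [
    {"counterparty": "Bank A", "notional": 25_000_000},
    {"counterparty": "Bank B", "notional": 10_000_000},
    {"counterparty": "Bank A", "notional": 5_000_000},
]

# Map step: emit (key, value) pairs, here (counterparty, notional).
mapped = [(t["counterparty"], t["notional"]) for t in trades]

# Shuffle/reduce step: group by key and sum the values.
exposure = defaultdict(int)
for counterparty, notional in mapped:
    exposure[counterparty] += notional

print(dict(exposure))  # {'Bank A': 30000000, 'Bank B': 10000000}
```

Aggregations like this parallelize well across a cluster, but deciding whether a concentrated exposure actually matters, and what to do about it, still requires the scarce human judgment described above.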

It's even highly likely that the scope of complexity in financial markets is beyond today's technology to compute, sort, and analyze. And if that supposition is true, should the next step be to moderate, if not minimize, additional complexity?

Questions:

  • Are “Big Data” analytics the savior to mapping complex and global financial flows?
  • Is the global financial system—with its billions of relationships and interdependencies—past the point of understanding and prediction with mathematics and today’s compute power?

Top Financial Risks of Cloud Computing

Cloud computing definitely has upside: adopters can speed the delivery of analytics, gain flexibility in deployments and costs, and transfer IT headaches to another company. With all of those advantages, however, it's important to keep in mind that cloud computing also carries financial risks, including potential lawsuit costs and reputational damage from a cloud provider's security or privacy breaches, and possible revenue losses from provider downtime and outages.

For any type of business decision, there are various risks to consider: strategic, operational, financial, compliance, and reputational (brand). These risks should also be criteria for any decision to move workloads to cloud computing. However, for the sake of discussion, let's focus on financial risk.

First, cloud computing carries financial risk in terms of potential data or privacy loss, especially in complex multi-tenant environments. If unencrypted personally identifiable information (PII) is breached, many US states have laws that require consumer notification. Companies that suffer a breach also typically provide consumer credit monitoring services for up to one year. One research firm estimates that the total cost of a data breach averages $7.2 million (USD). In addition, such breaches may expose companies to class action lawsuits that could total millions more in damages.
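A rough back-of-the-envelope sketch shows how quickly these costs accumulate. The $7.2 million industry average comes from the paragraph above; the record count and per-record figures below are assumptions chosen purely for illustration.

```python
# All inputs except the cited $7.2M average are illustrative assumptions.
records_exposed = 500_000
notification_cost_per_record = 1.50      # assumed: printing, mailing, call center
credit_monitoring_per_record = 10.00     # assumed: one year of monitoring
legal_and_forensics = 2_000_000          # assumed: outside counsel, investigation

direct_costs = (
    records_exposed * (notification_cost_per_record + credit_monitoring_per_record)
    + legal_and_forensics
)
print(f"Estimated direct breach costs: ${direct_costs:,.0f}")
# Even before class action damages or reputational harm, the total lands in the
# same multi-million-dollar neighborhood as the cited $7.2M average.
```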

To mitigate the risk of data loss or privacy breach, cloud providers do everything in their power to safeguard data, including server hardening, user provisioning and access controls, enforcement of password and data privacy policies, monitoring and logging for intrusion detection, self-audits, third-party security audits (when specified), mandatory personnel training, and in some cases encryption of tables and/or columns.

And while these practices are often more robust in public cloud environments than in most corporate data centers, lingering trust concerns about possible cloud data loss or privacy breach remain. Perhaps this is why, at least for the next 2-3 years, companies will increasingly choose private cloud over public cloud environments.

To mitigate financial risk, some companies seek indemnification, in which the cloud provider agrees to take on or share liability for a security breach, including the costs associated with it. However, cloud financial indemnification is extremely rare, and even when offered, the risk associated with such breaches is often transferred to insurance companies through the purchase of cyber insurance. And of course, those insurance costs will be baked into cloud service fees.

Other financial risks for companies doing business in the cloud include lost revenue from significant availability issues. If a cloud environment is down for hours or days, it can adversely impact a business's ability to perform analytics or reporting and thus may affect revenue opportunities. To offset possible lost revenue, most cloud providers will sign up for availability SLAs and associated penalties (usually redeemable as service credits).
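To see how the availability math plays out, here is a small sketch; the SLA percentage, credit schedule, outage length, and hourly revenue figure are assumptions for illustration, not terms from any real provider's contract.

```python
# All figures below are illustrative assumptions, not real contract terms.
sla_uptime = 0.999                     # assumed "three nines" monthly SLA
hours_in_month = 30 * 24
allowed_downtime = hours_in_month * (1 - sla_uptime)

actual_downtime_hours = 8              # assumed outage length
revenue_per_hour = 25_000              # assumed revenue tied to analytics availability
monthly_service_fee = 40_000           # assumed cloud bill
credit_rate = 0.10                     # assumed 10% credit for missing the SLA

lost_revenue = actual_downtime_hours * revenue_per_hour
service_credit = credit_rate * monthly_service_fee

print(f"Allowed downtime: {allowed_downtime:.1f} h, actual downtime: {actual_downtime_hours} h")
print(f"Lost revenue: ${lost_revenue:,}  vs.  SLA service credit: ${service_credit:,.0f}")
# In this scenario the credit offsets only a small fraction of the lost revenue.
```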

Cloud computing has so much upside that it's very easy for business managers to declare "all things must be cloud." That's well and good, but one must also carefully consider cloud risks. And while risk cannot be eliminated, it can surely be mitigated with proper planning, and with solid execution when things go wrong.

Companies considering cloud computing must remember that, just as in outsourcing, there is no such thing as transferring responsibility. In moving workloads to the cloud, carefully document the upsides and downsides, examine your decision in terms of risks (including financial ones), and then make the best decision possible for your particular organization.

Questions:

  • This article speaks to the financial risks of cloud computing in terms of access and availability. There are certainly more, including project cost overruns for cloud deployments and data quality problems (completeness/accuracy). What others can you think of?

Is Bigger, Better in the Cloud?

"Bigger is better" is a phrase widely assumed to be self-evident. However, whether it's cruise ships capsizing or international banks catching a major cold in the 2008 financial crisis, we know that while there is presumably safety in size, there can also be inherent complexity and attendant risk.

Risk management is a critical topic that business and IT professionals must take into account when evaluating cloud computing. And especially for mission-critical data such as human resources, payroll, financial, or patient data, the security and privacy of sensitive information are paramount concerns when considering cloud delivery models.

But in cloud computing, risk comes in other forms as well, including financial viability, especially when there seems to be a new cloud entrant in the marketplace every week. New and attractive markets usually attract entrants at a sizzling pace; however, when the eventual market shakeout comes, there's also a chance your cloud provider goes out of business completely, taking your data and applications with it.

And let's not forget operational risk in the cloud, where it might be assumed that large cloud providers have the upper hand in hiring the talent and expertise necessary to manage inherently complex cloud environments. However, all the talent in the world is not going to save an environment that's poorly architected, tightly coupled, and one operational mistake (or bad decision) away from catastrophic meltdown.

Ultimately, one cannot master risk. Instead, management of risks is about all we can hope for.

Mark Twain once famously said: "Put all your eggs in one basket – and then watch that basket." However, Mr. Twain surely didn't have cloud computing in mind when he spoke.

For IT and business professionals considering cloud computing solutions, it's probably tempting to short-list providers with a sizable cloud computing presence (e.g., the ten largest and most established vendors). However, for a few of these companies cloud computing is an ancillary business, and there's no guarantee that strategic plans won't shift to the point that a spin-off becomes a possibility. In addition, with cloud computing margins already thinning by some estimates, there's also a good chance investor pressure will force cloud providers to skimp on redundancy or recklessly cut corners elsewhere.

That's a long way of saying that, when it comes to cloud computing, I'm not convinced there's safety in numbers, or that a bigger presence or market share signals a fundamentally better offering.

Questions:

  • When it comes to cloud computing, do you believe bigger is better and possibly safer?
  • What criteria do you look for in assessing a cloud provider?

From Complexity to Simplicity in the Cloud

The inner workings of cloud computing can be quite complex. That's why the founders of Dropbox are on the right path: make cloud computing as simple as possible, with easy-to-understand user interfaces that mask the "behind the scenes" infrastructure and connections.

Open the lid of the "black box" of cloud computing and what you'll see is anything but simple: massive, parallel server farms that never sleep; algorithms worming and indexing their way through global websites; large data sets waiting in analytical stores for discovery; message buses that route, control, and buffer system requests; and massive processing of images, text, and more on a grand scale.

That's why companies that take the complexity out of cloud computing are thriving. Take, for instance, Dropbox, a company that allows users to access their personal or corporate files from any internet-connected device. A Technology Review Q&A with CEO Drew Houston cites Dropbox's efforts to mask the behind-the-scenes work of "having your stuff with you, wherever you are."

With various operating systems, incompatibilities, file formats and more, Dropbox engineers had to wade through mountains of bugs and fixes to make the user experience as seamless as possible. “There are technical hurdles that we had to overcome to provide the illusion that everything is in one place…and that getting it is reliable, fast and secure,” Houston says.

Looking at Dropbox from the outside, a user sees only "visual feedback" via a folder, an icon, or the like on his or her desktop. But under the hood there's a whole gaggle of technologies and code that makes Dropbox work. And creating a seamless experience takes painstaking effort down to the tiniest components, says Houston: "Excellence is the sum of 100 or 1000 of…little details."

If information technology leaders plan to bring "BI to the masses," simplicity will be a necessary requirement to mask the inherent complexity of cloud computing. Ultimately, there are plenty of business users who won't care how their particular applications are delivered, only that they are delivered with efficiency, reliability, and security. Thus, user interfaces designed with clarity, elegance, and ease of use in mind will ultimately put a "wrapper" on complexity and drive further adoption of cloud delivery models.
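In software terms, that "wrapper" is essentially a facade: one simple interface hiding the plumbing underneath. The sketch below is purely illustrative; the class names and the toy in-memory storage and "encryption" stand-ins are invented, and bear no relation to Dropbox's actual implementation.

```python
# Hypothetical facade; every name here is invented for illustration.
class InMemoryTransport:
    """Stand-in for the real plumbing: chunking, retries, network calls."""
    def __init__(self):
        self._store = {}

    def upload(self, path, blob):
        self._store[path] = blob

    def download(self, path):
        return self._store[path]


class CloudFileFacade:
    """One simple interface that hides the storage and (pretend) security details."""
    def __init__(self):
        self._transport = InMemoryTransport()

    def save(self, path: str, data: bytes) -> None:
        self._transport.upload(path, data[::-1])   # byte reversal as a mock "encryption"

    def open(self, path: str) -> bytes:
        return self._transport.download(path)[::-1]


files = CloudFileFacade()
files.save("notes.txt", b"simple on the outside")
print(files.open("notes.txt"))   # b'simple on the outside'
```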

It's also likely that business users will never appreciate the hard work that goes into designing, delivering, and sustaining applications that run 24x7x365 and are accessible from any internet-enabled device. But then again, perhaps that's the point. Application availability, security, reliability, simplicity, and productivity are now the baseline expectations of business users; it's best to deliver "in the cloud" exactly what they want.

Is there Too Much Complexity in the Cloud?

A recent analyst report suggests public clouds are prone to failure because they are inherently complex. However, the mere presence of multiple interacting objects in an environment doesn't necessarily imply complexity.

Cloud computing is all the rage for business users and technology buyers. And why not? It provides a fast and flexible option for delivering information technology services. In addition, cloud computing drives value through higher utilization of IT assets, elasticity for unplanned demand, and scalability to meet business needs today and tomorrow.

However, there are risks in the cloud, especially in the public cloud, where business and news media regale us with case studies of data loss, security issues, failed backups, and more. Perhaps one reason public clouds are prone to failure (and perhaps always will be) is that some analysts consider these environments complex and tightly coupled. And if that is indeed the case, then IT buyers must accept that failure isn't just possible, it's inevitable.

Yet, first we must ask, are public clouds really complex environments?

To determine whether a particular system is complex, we must ask whether it has characteristics such as connected objects (nodes and links with interdependencies), multiple messages and transactions, hierarchies, and behavioral rules (instructions).

Public cloud services from companies such as Microsoft, Google, and Amazon Web Services (AWS) often consist of many components: applications (front end and back end, such as billing), controllers and message-passing mechanisms, hardware configurations (disk, CPU, memory), databases (relational and NoSQL), Hadoop clusters, and more. In addition, there are various management options (dashboards, performance monitoring, identity and access), and these environments typically operate with multiple users and multiple tenants (compute environments shared by more than one application and/or company), sometimes spanning multiple geographies. And from a complexity standpoint, we haven't even discussed the processes of building cloud environments, much less operating them.

In summary, a cloud environment has many moving pieces interacting with each other (not necessarily in a linear fashion) at any given time.

Multiple interacting agents help determine whether a particular environment is complex, but another key determinant is just as important: whether processes are tightly or loosely coupled. Richard Bookstaber, author of A Demon of Our Own Design, writes that tightly coupled systems have critically interdependent components with little to no margin for error. "When things go wrong, (an error) propagates linked from start to finish with no emergency stop button to hit," Bookstaber says. So a tightly coupled system is one where linkages (dependencies) are so "tight" that errors or failures cascade and eventually cause the entire system to fail.
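Here is a tiny sketch of that difference; the services and the failure are invented for illustration. In the tightly coupled path an error propagates straight to the caller, while the loosely coupled path adds a buffer (slack) that absorbs it.

```python
import queue

def billing_service(request):
    # Invented failure standing in for "when things go wrong."
    raise RuntimeError("billing backend down")

# Tightly coupled: the front end calls billing directly, so a single failure
# propagates straight through with "no emergency stop button to hit."
def handle_request_tight(request):
    return billing_service(request)

# Loosely coupled: a queue acts as a buffer between components, so the front
# end can accept the request now and billing can retry after it recovers.
work_queue = queue.Queue()

def handle_request_loose(request):
    work_queue.put(request)          # decoupled hand-off instead of a direct call
    return "accepted; will be processed when billing recovers"

try:
    handle_request_tight("order-123")
except RuntimeError as err:
    print("tightly coupled path failed:", err)

print(handle_request_loose("order-123"))
print("buffered requests:", work_queue.qsize())
```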

This discussion matters from a risk management perspective for cloud computing. If we believe that data is one of a corporation's most valuable assets, and if we believe public clouds are complex environments with tightly coupled components and little to no slack (buffers) to stop failures, then there should be a set of practices and processes in place to manage the potential risk of data breach, theft, loss, or corruption.

So what say you: should public clouds be considered "complex" environments? Are they "high risk" systems prone to failure?