Mobile: It’s Still a Big Deal

With visions of four-legged robots rambling through the forest, a pair of virtual reality goggles for every person, and drones flying overhead beaming cyberspace access, the internet giants are infatuated with “the next big thing.” However, while the internet giants are preoccupied with capital-intensive robotic ventures, everyday, run-of-the-mill companies can still get plenty of value from meeting the needs of the mobile consumer.

Flickr for Android, courtesy of Flickr.

Mobile data are on a torrid trajectory. A CNN Money article cites studies from telecom equipment companies such as Ericsson and Cisco reporting that mobile traffic doubled over the past two years and that traffic on mobile data networks reached a princely 885 petabytes in 2012. The article also notes that wireless traffic is expected to grow 66% annually for the next five years.
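To make that growth rate a little more concrete, here’s a quick back-of-the-envelope sketch in Python. It simply compounds the 885-petabyte 2012 figure at 66% per year; the five-year horizon and starting point are taken from the article’s numbers, not from the underlying Cisco or Ericsson reports.

```python
# Rough projection of the figures cited above (assumed baseline and horizon,
# not official Ericsson or Cisco data).
traffic_pb = 885.0        # 2012 mobile network traffic, in petabytes
annual_growth = 0.66      # 66% year-over-year growth

for year in range(2013, 2018):
    traffic_pb *= 1 + annual_growth
    print(f"{year}: ~{traffic_pb:,.0f} PB")

# Ends around 11,000 PB (roughly 11 exabytes) by 2017 -- about a 12x jump.
```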

These numbers on mobile traffic are too abstract for most of us.  Let’s take a closer look at some trends—closer to home—from a Business Insider report on mobile media consumption:

  • 1/5 of consumer media consumption is done on a mobile device (yes, that means your household TV set is losing ground every single day)
  • Consumers are spending as much time on mobile devices as desktops/laptops (talk about crossing the Rubicon!)
  • Tablet shipments are up 83%, PC shipments are down 13% (just looking around it seems just about everyone has a smartphone/tablet…)
  • Mobile advertisements make up 41% of Facebook’s ad revenue (stunning turnaround for a company that just a few years ago was in danger of being left out of the mobile era)

An Atlantic Magazine article interviewing Walmart’s SVP of Mobile and Digital really brings these statistics to life. For example, the retail giant claims that over the 2013 Christmas shopping season, 50% of the traffic coming to Walmart.com was from mobile devices. Read that again: 50% of the e-commerce site’s traffic came from mobile devices!

And mobile can help with in-store shopping too. Using the Walmart app (downloadable on smartphone or tablet), an individual can use “in-store mode” to search for items on the shelf. That means no longer tracking down Harry or Susan to ask where the Tide laundry pods are located. Shoppers can also use “Scan and Go” to scan items as they walk through the store. The app then tabulates a running total of the items in their cart, which of course can be very beneficial for customers on a tight budget.
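For illustration only, here’s a minimal sketch of the running-total idea behind “Scan and Go.” The item names and prices are made up, and this is obviously not Walmart’s actual app logic.

```python
# Toy "Scan and Go"-style running total (hypothetical items and prices).
cart = []

def scan(item, price):
    """Add a scanned item to the cart and return the running total."""
    cart.append((item, price))
    return sum(p for _, p in cart)

print(f"${scan('Tide Pods', 12.97):.2f}")      # $12.97
print(f"${scan('Bananas', 1.18):.2f}")         # $14.15
print(f"${scan('Milk, 1 gal', 3.49):.2f}")     # $17.64
```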

The tablet craze is just getting started too. Gartner predicts that by 2015, tablets will outsell PCs (desk-based and notebooks). That’s a tough one to believe for those of us tethered to a desktop PC every day! And as mobile data plans get less expensive (via mobile carrier price wars), it’s not hard to fathom that the final 40% of U.S. mobile users still holding out will upgrade to a smartphone. If data like these don’t change the mobile plans of retailers across the globe, I’m not sure what will.

Taking a look at the latest round of capital investments from the internet giants, it’s easy to believe they’re more infatuated with self-driving cars, robots and drones that deliver packages than driving value to mobile consumers.  That should leave a pretty wide open gap for businesses of all stripes to invest in meeting the needs of the mobile consumer. Mobile isn’t anywhere near dead. In fact, it’s just getting started.

 

It’s Time to Ditch Scarcity Thinking

In J.R.R. Tolkien’s “The Hobbit,” Smaug the magnificent dragon sits on his nearly unlimited hoard of treasure and coins and tells “burglar” Bilbo Baggins to “help (himself) again, there’s plenty and to spare.” While it’s certainly true there are many things in this world that are physically scarce, when it comes to living in the information age, we need to retrain our minds to ditch scarcity thinking and instead embrace “sky’s the limit” abundance.

Image courtesy of Flickr. By SolidEther

Most of us have been taught there are resource constraints on things such as time, talent and natural resources such as land, fresh water and more. And of course, there are very real limits to some of these items. However, we currently live in an information age. And in this era, some of our previous thought patterns no longer apply.

Take, for instance, the ability to have an ocean of knowledge at our fingertips. With non-networked computers and other devices, we’re limited to the data at hand, or the storage capacity of those devices. But add in a dash of hard-wired or wireless networking and suddenly the physical limits to knowledge disappear.

Apple’s Siri technology is a compelling case in point. Using only the available processing power of an iPhone (which, by the way, is considerable), Siri could arguably answer a limited number of questions based on data in flash storage.

But open up Siri’s natural language processing (the bulk of which is done in the cloud) and suddenly, if Siri can’t understand you or doesn’t know an answer, the web may provide assistance. By leveraging cloud computing and access to the internet, Siri brings a wealth of data to users, and even more intelligence to Apple, by capturing all queries “in the cloud” and offering an immense data set for programmers to tune and improve Siri’s capabilities.
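As a simplified sketch of that “answer locally, fall back to the cloud” pattern: the tiny on-device lookup below stands in for what fits in flash storage, and the cloud stub stands in for the heavy natural language processing done server-side. All names here are invented for illustration; this is not Apple’s actual Siri design or API.

```python
# Hypothetical sketch of local-first lookup with a cloud fallback.
LOCAL_ANSWERS = {
    "what time is it": "It's 3:42 PM.",
    "set a timer for ten minutes": "Timer set for ten minutes.",
}

def cloud_lookup(query):
    """Stand-in for a cloud NLP service; in reality this would be an HTTPS call,
    and the query would also be logged to improve the service over time."""
    return f"(from the cloud) Here's what the web says about: {query}"

def answer(query):
    """Try the limited on-device data first, then defer to the cloud."""
    key = query.lower().strip(" ?!.")
    return LOCAL_ANSWERS.get(key) or cloud_lookup(query)

print(answer("What time is it?"))                 # answered on the device
print(answer("Who won the 1986 World Series?"))   # falls back to the cloud
```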

It used to be that TV airtime was in short supply. After all, there are only so many channels and airtime programming slots for content, especially during primetime hours. And there’s still an arduous process to create, discover and produce quality content that viewers will want to watch during these scarce blocks of time.

Without regard to conventional thinking, YouTube is turning this process on its head. A New Yorker article details how YouTube is growing its market presence by offering unlimited “channels” that can be played on-demand, anytime and anywhere. “On YouTube, airtime is infinite, content costs almost nothing for YouTube to produce and quantity, not quality is the bottom line,” explains author John Seabrook. Content watching, then (whether via YouTube, Netflix, DVR, Slingbox, etc.), is no longer restricted to certain hours; in effect, time is no longer a constraint.

In the past, the music we liked was confined to physical media such as records or compact discs. Then MP3 players such as the iPod expanded our ability to listen to more music, but we were still confined to available device storage. That’s scarcity thinking. Now, with wireless network access, there are few limits to listening to our preferred music through streaming services such as Pandora, or to renting music instead of owning it on CD. Indeed, music subscription services are becoming the dominant model for how music is “acquired”.

There are still real limits to many valuable things in the world (e.g. time, talent, money, physical resources, and even human attention spans). Yet even some of these items are artificially constrained by either politics or today’s business cases.

The information age has brought persons, businesses and societies elasticity, scalability, and the removal of many earlier capacity constraints. We seem to be sitting squarely on Smaug’s unending stack of treasure. But even in the great Smaug’s neck there was a gaping vulnerability. We’ll still need to use prudence, intelligence and far-sighted thinking in this age of abundance, with the understanding that just because some of our constraints are removed, that doesn’t necessarily mean we should become gluttonous and wasteful in our use of today’s resources.

 

Big Data Technology Training – A Better Approach

Many technology companies begin training by handing employees binders of technical manuals and user guides. Employees are expected to plow through reams of text and diagrams to learn what they need to know to succeed on the job. Instead of just a “core dump” of manuals and online training courses, technical employees should also get “hands-on” simulations, boot camps and courses led by advanced robo-instructors so they can truly hit the ground running.

Courtesy of Flickr. By Colum O’Dwyer

It’s generally accepted that there are two types of knowledge: theoretical knowledge, learned by reading books, whitepapers and other documents (also known as classroom knowledge), and experiential knowledge, learned by doing a specific task or through involvement in daily activities.

All too often, technology employees coming onto the job on day one are either handed a tome or two to assimilate or given a long list of pre-recorded webinars to understand the company’s technology, competitive positioning and go-to-market strategies. In the best-case scenario, they get a week of instructor-led training and possibly some role-playing exercises. However, there is a better way.

A Financial Times article titled “Do it Like a Software Developer” explores new approaches to training and learning for technology companies of all sizes. Facebook, for example, offers new application-development hires 1-2 days of coursework and then turns them loose on adding new features to a new or existing software program. In teams of 30-60, new hires are encouraged to work together to add features and present results to business sponsors at the end of their first week of employment. New hires get hands-on, “real life” experience of how to work in teams to achieve specific business results.

Even better, Netflix has a rogue program called “Chaos Monkey” that keeps new and existing application developers on their toes. The program’s purpose is to intentionally and randomly disable systems that keep Netflix’s streaming service running. Employees then scramble to discover what’s going wrong and make the necessary adjustments. According to the FT article, Chaos Monkey is only let loose on weekdays, when lots of developers are around and streaming traffic is relatively light. Netflix believes that, left alone, the streaming service will break down anyway, so isn’t it better to keep it optimized by having armies of employees scouring for trouble spots?
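In that spirit, here’s a toy, hypothetical version of the idea; the service names and the “terminate” action are placeholders, not Netflix’s actual Chaos Monkey code. Only on weekdays, and only during an assumed low-traffic window, it picks one instance at random and takes it down so someone has to notice and recover.

```python
# Toy chaos-monkey-style failure injection (all names are hypothetical).
import random
from datetime import datetime

INSTANCES = ["api-1", "api-2", "encoder-1", "edge-cache-3"]   # hypothetical fleet

def terminate(instance):
    # Placeholder: a real tool would call the cloud provider's API here.
    print(f"Terminating {instance} -- let's see what breaks.")

def unleash_chaos(now=None):
    """Randomly kill one instance, but only on weekdays during quiet hours."""
    now = now or datetime.now()
    is_weekday = now.weekday() < 5          # Monday through Friday
    is_quiet_window = 9 <= now.hour < 15    # assumed low-streaming hours
    if is_weekday and is_quiet_window and INSTANCES:
        terminate(random.choice(INSTANCES))
    else:
        print("The monkey stays in its cage for now.")

unleash_chaos()
```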

Simulations, fire drills and real-life boot camps should supplement book knowledge for technology companies looking to make new hires fully productive. Of course, such events are often considered a luxury by companies with limited training budgets or a need to get employees on the job as soon as possible. One way or another, however, employees will learn. And mistakes are then made on the customer’s dime. Is it not better to have new employees learn in a safe, controlled “non-production” environment where mistakes can be monitored and quickly corrected by mentors and instructors?

“Hands-on” training and learning activities are not only for application developers. With current and emerging Artificial Intelligence (AI) technologies, it’s feasible for “robo-instructors” to guide technology sales employees through simulated customer sales calls via an online interface (with more than canned responses based on rudimentary decision trees). Or new-hire technology marketing professionals could design a campaign and a feasible budget for a new product line, then present the results to business sponsors or be graded by an advanced algorithm. The possibilities for a more robust and experiential training program for technology associates are endless.

At my first job in Silicon Valley—working for a cable modem company—I was handed five thick and heavy technical manuals on day one. No instructor-led training, no online courses, no mentoring. It was sink or swim, and many employees (me included) sank to the ocean floor.

While these types of lackluster training experiences at tech companies might be more the exception than the rule, there’s an opportunity for increased new-hire productivity and job satisfaction. What’s required is a different mindset toward additional training investment and more focus on ingrained learning through experience and daily immersion in activities, rather than a book-knowledge cram course.

When is CAPEX Coming Back?

Since 2008, many companies have stockpiled cash on their balance sheets instead of spending it on upgrading or building new facilities or buying new equipment. This, of course, is a strategy that can only last so long, as eventually things fall apart and systems grow obsolete. And yet, six years after the global financial crisis, long-lost CAPEX still hasn’t come home. The better question is: will it ever?

Image courtesy of Flickr. By freefotoUK.

Companies are sitting on piles of cash ($2.8 trillion, to be exact). To be fair, cash on a balance sheet isn’t a bad thing—after all, that cash may be stored up for a rainy day, used to buy a competitor, or used to purchase outstanding shares on the market. Even better, cash could be used for capital expenditures (CAPEX) such as upgrading ancient IT equipment or improving manufacturing facilities.

While enterprises clutch the purse strings, investors are pressuring them to spend. A Bank of America/Merrill Lynch survey showed that 58% of fund managers want companies to spend their cash on CAPEX, as opposed to giving money back to shareholders. Regardless of whether the money goes to CAPEX, share buybacks or the like, investors want companies to spend, and spend now.

Exacerbating the issue, the Financial Times reports that cash hoarding is being blamed for a lackluster global economy. After all, one person’s income is often based on another person’s (or corporation’s) spending. In other words, freeing up all that unspent cash could lead to a lot more global economic growth.

Boston fund manager Jim Swanson says that while companies have been right to store up cash—because of economic uncertainty—they’ve taken it too far. He says he’d prefer the cash to go toward refreshing a company’s “aging IT” department.

There are good business cases to spend that cash now. From an IT perspective, plenty of assets have most likely been depreciated past the three-year mark and are nearing the end of their useful life. And from a facilities and equipment point of view, it may make sense to examine possible productivity improvements from advanced robotics, or even to upgrade onsite security systems to prevent unauthorized entry.

Perhaps we’re seeing a sea change, where CAPEX is on permanent vacation. The rise of cloud computing, for instance, essentially means that information technology can be purchased from another provider on a monthly basis and funded from operating expenses. This method of payment undoubtedly allows companies to take advantage of technology improvements with fewer obsolescence risks.

At some point in the near future—and likely through activist and investor pressure—companies will be forced to put their cash to work. And while some analysts are bullish on the global economy because of pent up demand, others are more reserved.  So when will CAPEX come back? It’s quite possible that “flat CAPEX” spending is the new normal.

Questions:

  • Should companies be hoarding or spending their cash?  If spending, what needs the most attention?
  • What trends—besides virtualization—may be contributing to less spending on information technology?
  • Is the move to OPEX instead of CAPEX for IT purchases an “economic revolution”?

I’d love to hear your thoughts!

 

IT Doesn’t Matter… Until It Does

While most business leaders equate the Information Technology (IT) department with simply “keeping the lights on,” IT does much more than administer and operate business systems. In fact, with how quickly technology is changing the game in retail, hospitality, healthcare, manufacturing and more, companies that underinvest in IT are in real danger of becoming yesterday’s news.

In May 2003, Harvard Business Review’s Nicholas Carr came out with a stunning article titled, “IT Doesn’t Matter.”  In the article, Carr argued that as information technology becomes more prevalent and widely used, the strategic advantage from adopting new technologies has waned.

The theme “IT Doesn’t Matter,” of course, clashed heavily with pundits, consultants and vendors of the day who would hear nothing of this offensive concept. Instead, these constituencies argued that while information technology may quickly commoditize, the real value is not in a specific technology, but how various technologies are stitched together and deployed for business improvement.

Courtesy of Flickr. By PNNL – Pacific Northwest National Laboratory

Today, most business and technology leaders have accepted that information technologies quickly commoditize and that implementing the latest and greatest may create only a temporary competitive advantage. But you underestimate the value of a highly available, optimized and business-supporting IT infrastructure at your peril!

Take, for instance, a recent Financial Times column by entrepreneur Luke Johnson. Mr. Johnson is a private equity investor who regularly invests in ailing companies. After investing in one particular retail outlet, Johnson discovered “the core reason (the company) was in decline: its technology strategy was deeply flawed and it suffered from a huge underinvestment in systems.” In other words, on the surface this company may have looked like a quick and easy fix, but Johnson found that years of underinvestment in point-of-sale systems, networking, data center upkeep, servers and retail applications had left it in the dust bin.

For this particular retail company, IT didn’t matter, and it showed. The refusal to invest in people, processes and technologies to keep this retailer well stocked and serving customers efficiently was a key contributor to its demise.

Granted, this is just one case study, but Luke Johnson—an investor in multiple companies—sees a disturbing trend when he says, “most non-tech companies do not take technology seriously enough.”

So does IT matter? The answer is self-evident: IT should be treated as a critical function in your company. IT does more than just “sweat the details” of making sure systems are available and performing. Whatever your business strategy—improving productivity, cutting costs, discovering new markets, changing your product mix, or expanding sales—IT is the enabler. IT allows the business to do what it already does, only better. And companies that fail to understand the significance of a healthy investment in IT will likely be next on Mr. Johnson’s list of companies that need VC help to rise from the dead.

The speed of change—in terms of politics, globalization, technology and markets—is unrelenting. And emerging concepts such as “the internet of things,” wearables, industrial drones and more are coming online quickly. It’s not enough to simply serve today’s customer well; you also need to understand your customer’s needs tomorrow and predict what they’ll be in the next few years.

And that’s not going to happen if you firmly believe that IT doesn’t matter.

Debunking Five Cloud Computing Myths

For the third year in a row, cloud computing is one of the top three technology investments for CIOs. However, there are many misconceptions about “the cloud”. Indeed, in my travels through public speaking sessions and corporate training seminars on cloud computing, I have encountered five common myths or misconceptions. It’s high time to debunk them.

Myth #1: “The Cloud” is Just One Big Cloud

With the exception of Amazon Web Services, which is constantly expanding its data center presence, there is no single cloud of record. Companies and vendors are standing up cloud computing infrastructures, which they make available to the public or to internal stakeholders such as employees or suppliers.

In fact, there are hundreds if not thousands of “clouds” in the United States alone (especially when one considers private cloud infrastructures). For example, on the software vendor side, Oracle, HP, IBM, Teradata (full disclosure: the author works for Teradata Corporation) and others are building and maintaining their own clouds. And of course there are B2C “clouds” such as iCloud and Dropbox. So the next time someone says, “I’m performing analytics in the cloud,” you may wish to ask, “Which one?”

Myth #2: One Day Soon, All Our Data Will Be in the Public Cloud

Many cloud experts and prognosticators believe the march to public cloud computing infrastructures—for everyone, corporations and consumers alike—is inevitable. Reasons for this line of thinking range from the growing size and complexity of data volumes (who can afford all this storage?) to the belief that public cloud providers can monitor, manage and secure IT infrastructures better and more cheaply than individual companies.

While I don’t doubt that public cloud computing will take more market share in the future, I certainly am under no illusion that one day soon all data will be stored in the public cloud—mainly because of bandwidth costs for data transport and costs of doing all your processing on a pay-per-use basis. And of course, recent government snooping revelations help me easily predict that plenty of data will stay right where they’re currently located.

Myth #3: Cloud is Cheaper than On-Premises Computing

This particular myth is a big misconception to overcome. Corporate buyers hear the word “cloud” and assume it equates to cheaper IT costs. That may be true on a low-utilization basis—meaning you only plan on using compute power infrequently—but on a full-utilization basis you’ll most likely pay more for computing pay-per-use than for maintaining your own IT infrastructure and applications. For a deeper discussion of this topic, see “The Cloud Conundrum: Rent or Buy?”
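A simple rent-versus-buy calculation makes the utilization point concrete. All of the prices below are invented assumptions, not any vendor’s actual rates; the shape of the result (cheap at low utilization, expensive at full utilization) is what matters.

```python
# Illustrative rent-vs-buy arithmetic with made-up prices.
HOURS_PER_MONTH = 730
CLOUD_RATE_PER_HOUR = 2.50      # assumed pay-per-use price for a large instance
OWNED_COST_PER_MONTH = 900.00   # assumed amortized hardware, power and admin

def monthly_rental_cost(utilization):
    """Cost of pay-per-use if the workload runs `utilization` of the time."""
    return CLOUD_RATE_PER_HOUR * HOURS_PER_MONTH * utilization

for utilization in (0.10, 0.50, 1.00):
    rent = monthly_rental_cost(utilization)
    print(f"{utilization:4.0%} utilization: rent ${rent:8,.2f}/mo  vs  own ${OWNED_COST_PER_MONTH:,.2f}/mo")

# At 10% utilization, renting (~$183) easily beats owning ($900); at full
# utilization, renting (~$1,825) costs roughly double owning.
```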

Myth #4: Cloud Computing Means Someone Else Now Has My IT Headaches

Of course, while moving your workloads to “the cloud” means that another vendor—that “someone else”—is responsible for monitoring, maintaining and supporting the information technology infrastructure, it certainly doesn’t mean your IT headaches go away. You may no longer have day-to-day responsibility for availability, software and hardware upgrades and more, but you never really lose complete “responsibility” for IT.

Instead, your day is now consumed with vendor management, contracts and SLAs, incident management, workload balancing, application development (for the cloud), and security items such as roles, profiles and authentication processes. Long story short: you don’t abdicate responsibility for IT when you move workloads to the cloud.

Myth #5: If it’s not Multi-Tenant, It’s Not Cloud

I hear this particular comment quite a bit. Really, the person suggesting this “truth” is stating that the real beauty of cloud computing is taking a bunch of commodity hardware, virtualizing it, and pooling resources to keep costs down for everyone. To be sure, resource pooling is a key criterion of cloud computing, but virtualization software isn’t the only route to success (workload management might fit the bill just fine).

In addition, while multi-tenant most commonly means “shared”, it’s important to define how many components of a cloud infrastructure you’re actually willing to “share”. To be sure, economies of scale (and lower end-user prices) can result from a cloud vendor sharing the costs of physical buildings, power, floor space, cooling, physical security systems and personnel, racks, a cloud operations team and more. But I’ll also mention that there are customers I’ve talked to who have zero intention of sharing hardware resources, mostly for security and privacy reasons.

These are just five cloud computing myths that I’ve come across. There are certainly more that I failed to mention. And perhaps you don’t agree with my efforts to debunk some of these themes?  Please feel free to comment, I’d love to hear from you!

Be Wary of the Science of Hiring

Like it or not, “people analytics” are here to stay. But that doesn’t mean companies should put all their eggs in one basket and turn hiring and people management over to the algorithms. In fact, while reliance on experience/intuition to hire “the right person” is rife with biases, there’s also danger in over-reliance on HR analytics to find and cultivate the ultimate workforce.

Courtesy of Flickr. By coryccreamer

The human workforce appears to be ripe with promise for analytics. After all, if companies can figure out a better way to measure the potential “fit” of employees to various roles and responsibilities, the resulting productivity improvements could be worth millions of dollars. In this vein, HR analytics is the latest rage, with algorithms combing through mountains of workforce data to identify the best candidates and predict which ones will have lasting success.

According to an article in Atlantic Magazine, efforts to quantify and measure the right factors in hiring and development have existed since the 1950s. Employers administered tests for IQ, math, vocabulary, vocational interest and personality to find key criteria that would help them acquire and maintain a vibrant workforce. However, with the Civil Rights Act of 1964, some of those practices were pushed aside due to possible bias in test formulation and administration.

Enter “Big Data”. Today, data scarcity is no longer the norm. In actuality, there’s an abundance of data on candidates, who are either eager to supply them or ignorant of the digital footprint they’ve left behind since elementary school. And while personality tests are no longer in vogue, new types of applicant “tests” have emerged in which applicants are encouraged to play games that watch and measure how they solve problems and navigate obstacles in online dungeons or fictitious dining establishments.

Capturing “Big Data” seems to be the least of the challenges in workforce analytics. The larger issues are identifying the key criteria for what makes a successful employee and discerning how those criteria relate to and interplay with each other. For example, let’s say you’ve stumbled upon nirvana and found two key criteria for employee longevity. Hire for those criteria and you may have more loyal employees, but you still need to account for and screen for “aptitude, skills, personal history, psychological stability, discretion”, work ethic and more. And how does one weight all of these criteria in a hiring model?
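To show just how arbitrary the weighting question can get, here is a deliberately naive scoring sketch. The criteria, weights and candidate numbers are all invented for illustration; nothing here is a validated hiring model, which is precisely the point.

```python
# Naive weighted-sum hiring score over hand-picked, hypothetical criteria.
WEIGHTS = {
    "aptitude": 0.35,
    "experience": 0.20,
    "work_sample": 0.30,
    "predicted_longevity": 0.15,   # why 0.15 and not 0.25? the model can't say
}

def score(candidate):
    """Weighted sum of criteria, each already normalized to a 0-1 scale."""
    return sum(WEIGHTS[k] * candidate.get(k, 0.0) for k in WEIGHTS)

candidate_a = {"aptitude": 0.8, "experience": 0.4, "work_sample": 0.7, "predicted_longevity": 0.8}
candidate_b = {"aptitude": 0.6, "experience": 0.9, "work_sample": 0.8, "predicted_longevity": 0.6}

print(f"A: {score(candidate_a):.2f}   B: {score(candidate_b):.2f}")   # A: 0.69   B: 0.72
# Shift weight from experience toward aptitude and the ranking flips -- the
# data didn't change, only the modeler's judgment about what matters.
```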

Next, presuming you’ve developed a reliable analytic model, it’s important to determine under which circumstances the model works. In other words, does a model that works for hiring hamburger flippers in New York also work for the same role in Wichita, Kansas? Does seasonality play a role? Does weather? Does the size of the company matter, or the prestige of its brand? Does the model work in economic recessions and expansions? As you can see, discovering all the relevant attributes for “hiring the right person” in a given industry, much less a given role, and then weighting them appropriately is a challenge for the ages.

Worse, once your company has a working analytic model for human resource management, it’s important not to let it completely substitute for subjective judgment. For example, in the Atlantic Magazine article, a high-tech recruiting manager lamented: “Some of our hiring managers don’t even want to interview anymore, they just want to hire the people with the highest scores.” It probably goes without saying, but this is surely a recipe for hiring disaster.

While HR analytics seems to have room to run, there’s still the outstanding question of whether “the numbers” matter at all in hiring the right person. For instance, Philadelphia Eagles coach Chip Kelly was recently asked why he hired his current defensive coordinator, who posted less-than-stellar numbers in his last stint with the Arizona Cardinals.

Chip Kelly responded: “I think people get so caught up in statistics that sometimes it’s baffling to me. You may look at a guy and say, ‘Well, they were in the bottom of the league defensively.’ Well, they had 13 starters out. They should be at the bottom of the league defensively.”

He continued: “I hired [former Oregon offensive coordinator and current Oregon head coach] Mark Helfrich as our offensive coordinator when I was at the University of Oregon. Their numbers were not great at Colorado. But you sit down and talk football with Helf for about 10 minutes. He’s a pretty sharp guy and really brought a lot to the table, and he’s done an outstanding job.”

Efficient data capture, data quality, sound algorithm development and spurious correlations lurking in big data are just a few of the problems yet to be solved in HR analytics. However, that won’t stop the data scientists from trying. Ultimately, the best hires won’t come exclusively from HR analytics; the numbers will be paired with executive (subjective) judgment to find the ideal candidate for a given role. In the meantime, buckle your seatbelt for much more use of HR analytics. It’s going to be a bumpy ride.

 
