Be Wary of the Science of Hiring

Like it or not, “people analytics” are here to stay. But that doesn’t mean companies should put all their eggs in one basket and turn hiring and people management over to the algorithms. In fact, while reliance on experience/intuition to hire “the right person” is rife with biases, there’s also danger in over-reliance on HR analytics to find and cultivate the ultimate workforce.

Courtesy of Flickr. By coryccreamer

The human workforce appears to be ripe with promise for analytics. After all, if companies can figure out a better way to measure the potential “fit” of employees to various roles and responsibilities, subsequent productivity improvements could be worth millions of dollars. In this vein, HR analytics is the latest rage, where algorithms comb through mountains of workforce data to identify the best candidates and predict which ones will have lasting success.

According to an article in Atlantic Magazine, efforts to quantify and measure the right factors in hiring and development have existed since the 1950s. Employers administered tests for IQ, math, vocabulary, vocational interest and personality to find key criteria that would help them acquire and maintain a vibrant workforce. However, with the Civil Rights Act of 1964, some of those practices were pushed aside due to possible bias in test formulation and administration.

Enter “Big Data”. Today, data scarcity is no longer the norm. In actuality, there’s an abundance of data on candidates, who are either eager to supply it or ignorant of the digital footprint they’ve left since leaving elementary school. And while personality tests are no longer in vogue, new types of applicant “tests” have emerged in which applicants are encouraged to play games, set in online dungeons or fictitious dining establishments, that watch and measure how they solve problems and navigate obstacles.

Capturing “Big Data” seems to be the least of the challenges in workforce analytics. The larger issues are identifying key criteria for what makes a successful employee, and discerning how those criteria relate to and interplay with each other. For example, let’s say you’ve stumbled upon nirvana and found two key criteria for employee longevity. Hire for those criteria and you may now have more loyal employees, but you still need to account for and screen for “aptitude, skills, personal history, psychological stability, discretion”, work ethic and more. And how does one weight these criteria in a hiring model?
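To make the weighting question concrete, here is a minimal sketch of a linear scoring model in Python. The criteria names, weights, and scores below are invented purely for illustration; they are not drawn from any real hiring system, and choosing the weights is exactly the hard part described above.

```python
# Hypothetical weighted-scoring sketch; criteria and weights are invented
# for illustration only, not taken from any real hiring model.

CRITERIA_WEIGHTS = {
    "aptitude": 0.30,
    "skills": 0.25,
    "work_ethic": 0.20,
    "psychological_stability": 0.15,
    "discretion": 0.10,
}

def hiring_score(candidate_scores):
    """Combine per-criterion scores (0-100) into a single weighted score."""
    return sum(
        weight * candidate_scores.get(criterion, 0)
        for criterion, weight in CRITERIA_WEIGHTS.items()
    )

candidate = {"aptitude": 85, "skills": 70, "work_ethic": 90,
             "psychological_stability": 75, "discretion": 60}
print(f"{hiring_score(candidate):.2f}")  # 78.25
```

The arithmetic is trivial; the judgment calls (which criteria, which weights, measured how) are where the model succeeds or fails.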

Next, presuming you’ve developed a reliable analytic model, it’s important to determine under which circumstances the model works. In other words, does a model that works for hiring hamburger flippers in New York also work for the same role in Wichita, Kansas? Does seasonality play a role? Does weather? Does the size of the company matter, or the prestige of its brand? Does the model work in economic recessions and expansions? As you can see, discovering all relevant attributes for “hiring the right person” in a given industry, much less a given role, and then weighting them appropriately is a challenge for the ages.
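One hedged way to probe the “does it travel?” question is to train a model on one segment and evaluate it on another. Everything below (the cities, features, and outcome) is randomly generated for illustration only; the idea is that a sharp drop in out-of-segment accuracy signals that the model does not generalize.

```python
# Hypothetical check of whether a hiring model fit in one city transfers
# to another; all data here is randomly generated for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "city": rng.choice(["new_york", "wichita"], size=n),
    "aptitude": rng.normal(70, 10, size=n),
    "tenure_prior_job_months": rng.integers(1, 60, size=n),
})
# Fake outcome: stayed at least a year (a purely synthetic relationship).
df["stayed_one_year"] = (
    0.02 * df["aptitude"] + 0.01 * df["tenure_prior_job_months"]
    + rng.normal(0, 1, size=n)
) > 1.8

features = ["aptitude", "tenure_prior_job_months"]
train = df[df["city"] == "new_york"]
test = df[df["city"] == "wichita"]

model = LogisticRegression().fit(train[features], train["stayed_one_year"])
auc = roc_auc_score(test["stayed_one_year"],
                    model.predict_proba(test[features])[:, 1])
print(f"AUC on the out-of-town segment: {auc:.2f}")
```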

Worse, once your company has a working analytic model for human resource management, it’s important not to let it completely replace subjective judgment. For example, in the Atlantic Magazine article, a high-tech recruiting manager lamented: “Some of our hiring managers don’t even want to interview anymore, they just want to hire the people with the highest scores.” It probably goes without saying, but this is surely a recipe for hiring disaster.

While HR analytics seems to have room to run, there’s still the outstanding question of whether “the numbers” matter at all in hiring the right person. For instance, Philadelphia Eagles coach Chip Kelly was recently asked why he hired his current defensive coordinator, who had less than stellar numbers in his last stint with the Arizona Cardinals.

Chip Kelly responded: “I think people get so caught up in statistics that sometimes it’s baffling to me. You may look at a guy and say, ‘Well, they were in the bottom of the league defensively.’ Well, they had 13 starters out. They should be at the bottom of the league defensively.”

He continued: “I hired [former Oregon offensive coordinator and current Oregon head coach] Mark Helfrich as our offensive coordinator when I was at the University of Oregon. Their numbers were not great at Colorado. But you sit down and talk football with Helf for about 10 minutes. He’s a pretty sharp guy and really brought a lot to the table, and he’s done an outstanding job.”

Efficient data capture, data quality, proper algorithmic development and spurious correlations lurking in overabundant data are just a few of the problems yet to be solved in HR analytics. However, that won’t stop the data scientists from trying. Ultimately, the best hires won’t come from HR analytics alone; analytics will be paired with executive (subjective) judgment to find the ideal candidate for a given role. In the meantime, buckle your seatbelt for much more use of HR analytics. It’s going to be a bumpy ride.


No Gold Medals for “Black Swan” Criers?

It’s extremely unfashionable to be the “Black Swan” crier in your organization, or the person who warns line-of-business managers about the heavy impact of extreme but unlikely events. In fact, just the opposite is the norm: plenty of company executives get rewarded in career growth and compensation for ignoring risks, or for sweeping them under the rug for others to tackle down the road. It’s time to listen, really listen, to what the Black Swan criers in your own company are saying.

Courtesy of Flickr. By Al S

In 18th-century England, the town crier would be dressed in fine clothing, given a bell, and told to “cry”, or proclaim, significant news to merchants and citizens alike. Sometimes the town crier brought bad news, such as tax increases. Fortunately, such a person was protected by laws stating that anyone causing harm to the town crier could be convicted of treason. Wikipedia notes that the phrase “don’t shoot the messenger” was a real command!

Fast forward to the present: there are few rewards for those who “cry”, or warn, about the dangers of “Black Swans”, extreme but rare events that carry a high impact. See here for a list of “Black Swan” events since 2001.

Case in point: leading up to the September 2008 financial crisis, only a few prognosticators could see that quasi-government agencies such as Fannie Mae and Freddie Mac were buying too many no-documentation, no-income (NINJA) loans that could go bust if the US economy went into recession. Nassim Taleb, author of The Black Swan, was a key figure who needed no more than a glance at these agencies’ financials in 2007 to declare, “(They seem) to be sitting on a barrel of dynamite, vulnerable to the slightest hiccup”.

And of course, that dynamite was lit as the global economy teetered on the edge of a major depression, and the agencies ultimately lost a combined $15B. For his trouble, Mr. Taleb was ridiculed as a “clown” and a “rabble rouser” for many of his prognostications.

Today’s potential corporate whistleblowers don’t fare much better when warning about everyday risks, whether those risks reside in supply chains, nuclear power plants, cloud computing infrastructures or other complex systems prone to fragility. It’s much easier to carry on with business as usual than to plan and prepare for events that, however unlikely, could end up disabling or dismantling your organization in one fell swoop.

Indeed, Taleb argues it’s much easier for managers to tout what they “did”, rather than what they avoided by taking proper risk management precautions.  “The corporate manager who avoids a loss will often not be rewarded,” he says.

Business executives should not turn their eyes and ears from their own “town criers” preaching Black Swans. While painful to listen to, and sometimes counter-intuitive to today’s “business wisdom”, those closest to your business operations often see what can blow up long before your mid-level and corporate executives gain visibility.

These “Black Swan” criers may never be personally rewarded with a gold medal for highlighting key risks, but it’s the smart business that ultimately finds a way to seek their opinions and at least scenario plan for their noted “worst case event” outcomes.

Real-Time Pricing Algorithms – For or Against Us?

In 2012, Cyber Monday sales climbed 30% over the previous year’s results. Indeed, Cyber Monday benefits both online retailers, who gain massive Christmas spend in one day, and consumers, who can shop at work or home and skip the holiday crowds.

And yet, underneath the bustle of ringing “cyber cash registers”, a battle brews as retailers can now easily change prices, even by the second, using sophisticated algorithms to outsell competitors. Consumers aren’t standing still, though. They also have algorithmic tools available to help them determine the best prices.

Let’s say you are thinking about buying a big screen television from a major online retailer. The price at noon is $546.40, but you decide to go get some lunch and think about it. An hour later, you check back on that same item and now it’s priced at $547.50. What gives? Depending on your perspective, you’ll end up being either the beneficiary of algorithmic pricing models or the victim.

A Financial Times article notes the price of an Apple TV device sold by three major online retailers changed anywhere from 5-10% daily (both up and down) in late November. Some HDTVs changed prices by the hour.

These up-to-the-minute changes are made possible by real-time pricing algorithms that collect data from competitor websites and from customer interactions on retailers’ own sites, and then make pricing adjustments based on inventory, margins, and competitive strategies.

An algorithm is really just a recipe, if you will, codified into steps and executed at blinding speed by computers. A pricing algorithm may take inputs from competitor websites and other data sources and then, based on pre-defined logic, churn out a “price” that is posted on a website. Typically this process is executed in seconds.
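As a purely illustrative example (not any retailer’s actual logic), here is what such a recipe might look like in a few lines of Python. The inputs and thresholds, such as the competitor price list, the margin floor, and the low-inventory cutoff, are hypothetical.

```python
# Hypothetical repricing "recipe": undercut the cheapest competitor slightly,
# but never price below a margin floor, and lean higher when stock is scarce.
# All inputs and thresholds are invented for illustration.

def reprice(competitor_prices, unit_cost, inventory, min_margin=0.08):
    floor = unit_cost * (1 + min_margin)    # never sell below this price
    target = min(competitor_prices) - 0.01  # undercut the cheapest rival by a cent
    if inventory < 20:                      # scarce stock: stop discounting
        target = max(competitor_prices)
    return round(max(target, floor), 2)

# Re-run on a schedule (or on every competitor-price update) and prices
# can move by the hour, or even the minute.
print(reprice([547.50, 551.99, 559.00], unit_cost=480.00, inventory=140))  # 547.49
print(reprice([547.50, 551.99, 559.00], unit_cost=480.00, inventory=12))   # 559.0
```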

Thus, depending on the specific item, day, hour, or even minute, it is increasingly common for the prices of online items to change at a moment’s notice. If keeping up with rapidly rising and falling prices seems like a shopper’s nightmare, you’re right. However, consumers also have tools to fight back.

The same FT article points out that some consumers are using websites such as Decide.com to determine the best, if not the most “fair”, price points. For an annual fee of $30, a consumer can use either Decide.com or Decide’s convenient smartphone app to access price predictions based on Decide’s predictive pricing algorithms. Simply look up an item, and Decide.com gives its best prediction of when to buy that item and where.

Today, we take for granted that grocery store prices generally don’t change within the hour, and that prices at the gas pump (while sometimes changing intra-day) generally don’t change by the minute. As data collection processes move from overnight batch to near real time, expect more aggressive algorithmic pricing, coming to a grocer, gas pump—or theater near you!

The Softer Side of Risk Management Means Fewer Analytics

For the past 25 years, with their elegant analytical models, quantitative analysts and transplanted physicists have ruled the roost in finance. However, as global financial flows (and financial products) get more interconnected, complex and opaque, investors and managers are finding this new paradigm terrifying. It’s high time to supplement quantitative strategies with the softer side of risk management, before it’s too late.

Courtesy of Flickr

Since 2008, Wall Street’s astrophysicists have been in shock. Analytical models such as Value-at-Risk (VaR) have proven time and again to be of limited value in understanding the true risk of banking products and overall portfolios.
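For readers who haven’t met VaR, a minimal historical-simulation sketch is below, using simulated returns rather than real market data. The illustration’s point is that the estimate is only as good as the return history (and distributional assumptions) behind it, which is precisely where these models broke down.

```python
# Minimal historical-simulation Value-at-Risk sketch on simulated returns.
# Daily returns are drawn from a normal distribution for illustration only;
# real markets have fatter tails, which is exactly where VaR tends to fail.
import numpy as np

rng = np.random.default_rng(42)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=2500)  # ~10 years of fake data

portfolio_value = 1_000_000
confidence = 0.95
var_95 = -np.percentile(daily_returns, (1 - confidence) * 100) * portfolio_value
print(f"1-day 95% VaR: ${var_95:,.0f}")
# Reads as "on 95% of days we expect to lose less than this amount",
# and says nothing about how bad the other 5% of days can get.
```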

Whereas Wall Street’s quants relied heavily on analytical models to help them judge volatility and riskiness of their investment portfolios, today’s business managers and quants must look at a whole new set of variables—many of which cannot be quantified.

Financial Times author Gillian Tett writes that today’s global banks, hedge funds, pension funds and investors are in a state of “cognitive shock”. That’s because crises such as what’s happening in the Eurozone cannot be explained or predicted with models devised by Wall Street’s quantitative analysts. Instead, she says, what really matters now are non-quantitative issues such as “political values, social cohesion and civic identity.”

And these issues get back to an underlying and fundamental premise that all fiat money is based on—trust. When trust and faith among individuals, groups and nations disappear, rest assured most forms of money and wealth go with it.

Tett goes on to cite how “soft social issues” are suddenly replacing the all-important quantitative variables Western bankers have relied on for so many years. A new “mental shift” is taking place, she says, in which rating agencies are modifying their equations, asset managers are reducing their emphasis on quantitative models, and even central bankers are monitoring social and political trends.

Indeed, when it comes to understanding our complex and global financial system, quantitative measures are being put in their rightful place: as an input to decision making, not the sole, all-important element.

The softer side of financial risk management means harkening back to the days when bankers needed feet on the street and relationships in courthouses, statehouses and dining rooms to sense political and economic winds.

Probability models based on historical data remain important inputs, but it’s now also critical to assess factors such as societal trends, politics and even personal character in decision making. This is tough medicine for bankers, especially because these types of “softer data” are time consuming to capture, tough to categorize and analyze, and definitely don’t scale.

Gillian Tett says that because of the sheer complexity of global financial markets, we’ve entered a new “age of volatility” in which risk is difficult, if not impossible, to model accurately. In effect, we’ve moved from a “plug and play” world, where we do one thing and expect a consistent reaction, to more of a “plug and pray” pattern, where we turn the dial and hold onto our seats for dear life. In this new world Bayesian inference may be of some assistance, but we may also have to accept that some things are just too complex to effectively model.

Fight Back Against Black Swan Fatigue

In today’s lean, just-in-time, and over-optimized world, it’s not uncommon for executives to roll their eyes when the term “Black Swan” is brought up in risk management discussions. That’s because even though preparing for extreme events makes logical sense, there’s also a cost associated with redundancy and robust disaster planning. In addition, no one is ever judged a hero for saving the company from what never happens (or is never supposed to happen). Business executives must fight back against Black Swan fatigue, because in today’s interconnected and highly correlated world, the next extreme event could be the one that shoves your company off a cliff.

Image Courtesy of Flickr

When it comes to preparing for low-probability but high-impact events (i.e. Black Swans), the sad truth is most business executives will do nothing. Why? Nassim Taleb, author of The Black Swan, explains: “It is difficult to motivate people in the prevention of Black Swans. Prevention is not easily perceived, measured, or rewarded; it is generally a silent and thankless activity. History books do not account for heroic preventive measures.”

Taleb is right. No one will be labeled a hero for keeping extra inventory on hand. No one will be characterized as a hero for divvying up orders among various suppliers just in case the favored and most cost-effective supplier goes belly up. And spending money on strategy and risk management consultants to run disaster and scenario planning for worst-case developments? Forget about it. These are all just costs, and cannot be afforded in today’s bottom-line economy, right?

Someone wise once said that risk management is much like insurance. You hate to spend money on it, but you’re darn glad you have it when all hell breaks loose.

But wait, you say, don’t most business executives plan for disaster? Perhaps, but there’s a difference between hiring a consultant to produce a disaster planning report that promptly collects dust, and actually preparing for and assuming extreme events will occur as part of your overall business plan. And even when managers believe they’re prepared for worst-case events, sometimes it’s not enough, with potentially horrific consequences.

As detailed in the March 27, 2011 issue of the Financial Times, executives at Tepco’s Fukushima Daiichi plant were prepared for an earthquake. In fact, they were also prepared for a tsunami, having built a seawall 20 feet tall. What they did not expect is that the March 11 magnitude-9.0 earthquake would cause a tsunami wave 40 feet tall! The tsunami promptly washed away the seawall along with the diesel-powered generators cooling the spent nuclear fuel rods housed at Fukushima.

Executives at Fukushima had planned for disaster. They had built a 20-foot seawall. They had redundancy, with backup generators in case the cooling system failed. And the nuclear plant powered down once the magnitude-9.0 earthquake hit. Everything worked as planned. But they were not prepared for the “unthinkable” extreme event.

Predictive modeling based on historical data will only take you so far. Even extrapolating with Bayes isn’t going to be of much use for “unknown unknowns”. As business managers we must fight Black Swan lethargy, especially when all oars in the boat are pulling towards the lands of “optimization” and “cost effectiveness”. As managers, we must continue to sound the alarm, even when the probability of the extreme event is vanishingly small.

Play up the risk of future Black Swans and then prepare for the extreme event. Here’s to hoping you’ll never be proved right.

Blasphemy? Quantitative Approaches Don’t Always Work Best

Ray Dalio’s Bridgewater Associates hedge fund, Pure Alpha II, is up 25% in a year that hasn’t been kind to competitors. How did he do it? Hint: it wasn’t through a purely quantitative approach.

Hedge fund manager Ray Dalio is a rare breed in financial investing. Dalio is known as a “macro” investor, or someone who takes a “big picture” approach to investing as opposed to math whiz “quants” who rely on quantitative/numerical techniques.

The July 25, 2011 issue of The New Yorker highlights Dalio’s investment methods as he looks for hidden profit opportunities: “(Dalio) spends most of his time trying to figure out how economic and financial events fit together in a coherent framework. His constant goal (is to) understand how the economic machine works.”

Dalio isn’t concerned with the nuts and bolts of companies. He doesn’t want to scrub the bowels of the machine to see how it works. And he shuns frequency based probability techniques used by financial quants to estimate whether stocks will move up or down in penny increments.

While other hedge funds and investment banks control risks with sophisticated Value-at-Risk (VaR) models and the use of derivatives, Dalio suggests that studying the big picture is a better approach. “Risky things are not in themselves risky if you understand them and control them,” he says. Instead of statistical distributions, it appears Dalio is more focused on what he calls the “probability of knowing”. He never places all his eggs in one basket, especially because he understands that a complex and global world can shift course at a moment’s notice.

This is not to say, however, that Dalio doesn’t use analytical techniques. Of course, Dalio crunches the numbers and uses computers for much of his work. But he’s not driven by making money with techniques such as high-frequency trading, where supercomputers trade liquid instruments at near light speed. Instead, his algorithmic trading models are written with his investment philosophy of components and relationships in mind, and they help supplement decision making for broad and big bets.

Dalio is doing much more than guesswork here, but it’s a different kind of analysis, based on a rules-based framework codified over thirty years of investment experience. “It’s the commitment to systematic analysis and investment (within the boundaries of his mental framework) that makes the difference,” he says.

The contrast between Dalio’s approach and those of data-driven quants couldn’t be clearer. Quants model investment decisions based on math and use computers to move volumes of liquid securities, making money on tight spreads. Dalio seeks to understand “larger underlying forces”, interrelationships and historical context. His main advantages appear to be a “top down” rather than “bottom up” approach to investing and the pursuit of a longer time line for decision making.

In 2008, during the worst of the Great Recession, Dalio was up 9.5%; in 2010 the fund was up 45%; and Dalio’s $122B fund is up 25% this year (2011) based on macro bets on Treasuries, the Japanese yen and gold.

It may be blasphemy, but for one investor, a macro “big picture” approach is proving much more profitable than one driven by (normal-distribution) probability models.

Newsflash: Correlation is Not a Cause!

Just about every data scientist and statistician knows that correlation doesn’t necessarily confirm causation. However, popular business and social literature often confuse the two concepts. By understanding the maxim that “correlation is not a cause” more clearly, it’s possible to let loose the creativity and imagination of the questioning mind.

Humans like to think and speak in declarative terms. For a sampling, imagine statements bandied about by pundits such as “global warming is caused by humans”, or “the Financial Crisis of 2008 was caused by greedy bankers”, or “Republicans lost the 2008 election because they didn’t pay enough attention to immigration issues.”

Psychologist and author Sue Blackmore says the simple reminder that “correlation is not a cause” (CINAC) would improve just about everyone’s mental toolkit. Case in point: in an Edge.org article, she gives an example of people filling up a railway station as a scheduled train approaches. She asks, “Did the people cause the train to arrive (A causes B)? Or did the train cause people to arrive (B causes A)?” The answer, she says, is that both depended on a railway timetable (C caused both A and B)!
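Blackmore’s timetable example is easy to reproduce numerically. The small simulation below (all numbers invented) lets a hidden timetable C drive both the crowd on the platform (A) and the train’s proximity (B); A and B end up strongly correlated even though neither causes the other.

```python
# Simulated illustration of "correlation is not a cause" (CINAC):
# a hidden timetable C drives both the crowd on the platform (A) and
# how near the train is (B), so A and B correlate strongly even though
# neither causes the other. Numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(7)
minutes_to_scheduled_train = rng.uniform(0, 30, size=500)  # C: the timetable

crowd_size = 200 - 6 * minutes_to_scheduled_train + rng.normal(0, 10, size=500)            # A
train_proximity = 1.0 - minutes_to_scheduled_train / 30 + rng.normal(0, 0.05, size=500)    # B

correlation = np.corrcoef(crowd_size, train_proximity)[0, 1]
print(f"corr(crowd, train) = {correlation:.2f}")  # close to +1, yet A does not cause B
```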

In linear systems, cause and effect are much easier to pinpoint. However, the world around us is a complex system in which there are often multiple variables pushing an outcome to occur. Nigel Goldenfeld, a professor of physics at the University of Illinois, sums it up best: “For every event that occurs, there are a multitude of possible causes, and the extent to which each contributes to the event is not clear. One might say there is a web of causation.”

And author Richard Bookstaber says that it’s difficult to pinpoint cause in complex systems, especially because “a change in one component can propagate through the system to lead to surprising and apparently disproportionate effect elsewhere, e.g. the famous ‘butterfly effect’”.

The concept that in complex systems there is a “web of causation” may not sit right with some individuals, especially since newscasters, publishers, and even a fair portion of scientists prefer to insist on simple declarations of true and false. However, the very nature of complex systems is that every object is in some way linked to another with either weak or strong ties, and the connections are often opaque and mysterious. So even a correlation of one may not necessarily mean A causes B.

Embracing CINAC thinking frees us from the shackles of declarative certainty and gives us the possibility to let loose our imaginations, says psychologist Sue Blackmore. Each definitive report should be greeted with skepticism. And when “A causes B” is held up as the answer, Blackmore cautions, the critical mind automatically gets to work thinking: “Maybe instead, B actually causes A! And if not, what are the other possibilities?”

It’s human nature to try and explain the world around us. However, when it comes to complexity, we should lead discussions with a measure of humility to include questions and possibilities rather than declarations of certainty.

Questions:

  • How does our “aim to explain” end up stifling innovation and creativity?
  • The US Securities and Exchange Commission posited an explanation for the May 6, 2010 “Flash Crash”, but experts are not buying its simplistic explanation of a single “trigger event”. What are your thoughts?