Be Wary of the Science of Hiring

Like it or not, “people analytics” are here to stay. But that doesn’t mean companies should put all their eggs in one basket and turn hiring and people management over to the algorithms. In fact, while reliance on experience/intuition to hire “the right person” is rife with biases, there’s also danger in over-reliance on HR analytics to find and cultivate the ultimate workforce.

Courtesy of Flickr. By coryccreamer

The human workforce appears ripe for analytics. After all, if companies can figure out a better way to measure the potential “fit” of employees to various roles and responsibilities, the subsequent productivity improvements could be worth millions of dollars. In this vein, HR analytics is the latest rage, where algorithms comb through mountains of workforce data to identify the best candidates and predict which ones will have lasting success.

According to an article in The Atlantic, efforts to quantify and measure the right factors in hiring and development have existed since the 1950s. Employers administered tests for IQ, math, vocabulary, vocational interest, and personality to find key criteria that would help them acquire and maintain a vibrant workforce. However, with the Civil Rights Act of 1964, some of those practices were pushed aside due to possible bias in test formulation and administration.

Enter “Big Data.” Today, data scarcity is no longer the norm. In actuality, there’s an abundance of data on candidates, who are either eager to supply it or ignorant of the digital footprint they’ve left since leaving elementary school. And while personality tests are no longer in vogue, new types of applicant “tests” have emerged, in which applicants are encouraged to play games, set in online dungeons or fictitious dining establishments, that watch and measure how they solve problems and navigate obstacles.

Capturing “Big Data” seems to be the least of the challenges in workforce analytics. The larger issues are identifying the key criteria for what makes a successful employee, and discerning how those criteria relate to and interplay with each other. For example, let’s say you’ve stumbled upon nirvana and found two key criteria for employee longevity. Hire for those criteria and you may have more loyal employees, but you still need to account and screen for “aptitude, skills, personal history, psychological stability, discretion,” work ethic, and more. And how does one weight these criteria in a hiring model?
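
To make the weighting question concrete, here is a minimal sketch of a linear scoring model. The criteria names, weights, and applicant scores below are hypothetical stand-ins; choosing and justifying the weights is exactly the unsolved part.

```python
# Minimal sketch of a weighted hiring-score model.
# Criteria names, weights, and scores are hypothetical, for illustration only.

CRITERIA_WEIGHTS = {
    "aptitude": 0.30,
    "skills": 0.25,
    "work_ethic": 0.20,
    "psychological_stability": 0.15,
    "discretion": 0.10,
}

def candidate_score(candidate: dict) -> float:
    """Combine normalized criterion scores (0.0 to 1.0) into one weighted score."""
    return sum(weight * candidate.get(criterion, 0.0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

applicant = {"aptitude": 0.8, "skills": 0.9, "work_ethic": 0.7,
             "psychological_stability": 0.6, "discretion": 0.75}
print(f"{candidate_score(applicant):.2f}")  # 0.77
```

Everything difficult about such a model lives in that weights dictionary; the arithmetic itself is trivial.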

Next, presuming you’ve developed a reliable analytic model, it’s important to determine under which circumstances the model works. In other words, does a model that works for hiring hamburger flippers in New York also work for the same role in Wichita, Kansas? Does seasonality play a role? Does weather? Does the size of the company matter, or the prestige of its brand? Does the model work in economic recessions and expansions? As you can see, discovering all the relevant attributes for “hiring the right person” in a given industry, much less a given role, and then weighting them appropriately is a challenge for the ages.
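
One way to probe those questions is to evaluate the same model separately on every segment where it will be used. A sketch, with made-up segment labels and outcomes, might look like this:

```python
# Sketch: score one hiring model separately per segment to see whether
# it generalizes. Segments, features, and outcomes are all made up.
from collections import defaultdict

def accuracy_by_segment(records, predict):
    """records: iterable of (segment, features, actual_outcome) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for segment, features, outcome in records:
        totals[segment] += 1
        hits[segment] += int(predict(features) == outcome)
    return {seg: hits[seg] / totals[seg] for seg in totals}

# A model tuned on New York data might report, hypothetically:
# {"new_york": 0.81, "wichita": 0.58} -- a sign it doesn't travel well.
```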

Worse, once your company has a working analytic model for human resource management, it’s important not to substitute it entirely for subjective judgment. For example, in The Atlantic article, a high-tech recruiting manager lamented: “Some of our hiring managers don’t even want to interview anymore, they just want to hire the people with the highest scores.” It probably goes without saying, but this is surely a recipe for hiring disaster.

While HR analytics seems to have room to run, there’s still the outstanding question of whether “the numbers” matter at all in hiring the right person. For instance, Philadelphia Eagles coach Chip Kelly was recently asked why he hired his current defensive coordinator, who had less-than-stellar numbers in his last stint with the Arizona Cardinals.

Chip Kelly responded: “I think people get so caught up in statistics that sometimes it’s baffling to me. You may look at a guy and say, ‘Well, they were in the bottom of the league defensively.’ Well, they had 13 starters out. They should be at the bottom of the league defensively.”

He continued: “I hired [former Oregon offensive coordinator and current Oregon head coach] Mark Helfrich as our offensive coordinator when I was at the University of Oregon. Their numbers were not great at Colorado. But you sit down and talk football with Helf for about 10 minutes. He’s a pretty sharp guy and really brought a lot to the table, and he’s done an outstanding job.”

Efficient data capture, data quality, proper algorithmic development, and spurious correlations lurking in big data are just a few of the problems yet to be solved in HR analytics. However, that won’t stop the data scientists from trying. Ultimately, the best hires won’t come from HR analytics alone; analytics will be paired with executive (subjective) judgment to find the ideal candidate for a given role. In the meantime, buckle your seatbelt for much more use of HR analytics. It’s going to be a bumpy ride.

The Math Says Yes, But Human Behavior Says No

Data scientists are busy writing algorithms to optimize employee productivity, improve trucking routes, and update retail prices on the fly. But those pesky humans, with their demands for a reasonable schedule and consistent pricing, keep getting in the way. Which proves that when it comes to algorithmic model development, “real world” human behavior is the hard part.

Courtesy of Flickr. By Nathan Gibbs

The traveling salesperson problem is still one of the most interesting optimization problems in mathematics. It can be summarized this way: take a salesperson and their accounts in various cities. Now find the shortest possible route that lets the salesperson visit each account once and then come back home, all within a defined time period. What may sound like an easy problem is actually one that bedevils planners to this day, and it ultimately involves a lot more than math.
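
For the math portion alone, a brute-force sketch with invented distances states the problem precisely, and shows why it gets hard fast: n cities mean n! candidate routes.

```python
# Brute-force traveling-salesperson sketch. Distances are invented;
# with n cities there are n! orderings, so this only works for tiny n.
from itertools import permutations

DIST = {("home", "a"): 10, ("home", "b"): 15, ("home", "c"): 20,
        ("a", "b"): 35, ("a", "c"): 25, ("b", "c"): 30}

def dist(x, y):
    return DIST[(x, y)] if (x, y) in DIST else DIST[(y, x)]

def shortest_tour(cities):
    best_len, best_route = None, None
    for order in permutations(cities):
        route = ("home",) + order + ("home",)
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        if best_len is None or length < best_len:
            best_len, best_route = length, route
    return best_len, best_route

print(shortest_tour(["a", "b", "c"]))
# -> (80, ('home', 'a', 'c', 'b', 'home'))
```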

While the math behind the traveling salesperson problem has been painstakingly improved over the years, the human element is still a very large factor in real-world implementations. The most mathematically optimized route might have a salesperson visiting three accounts in one day, but it doesn’t take into account the schedules of the customers he or she intends to visit, necessary employee bathroom breaks, hotel availability, and the fact that the salesperson also wants to visit their ailing grandmother after stop two.

The traveling salesperson problem also applies to transportation optimization. And again, sometimes the math doesn’t add up for human beings. For example, at a particular shipping company, optimization software showed the best route combination for delivering packages. However, there was one small catch: the most optimized route ignored Teamster and federal safety rules requiring drivers to take pre-defined breaks, and even naps, after a certain number of hours on the road.

Modeling is getting better, though. An article in Nautilus shows how transportation models are now incorporating not only the most mathematically optimized route but also human variables such as the “happiness” of drivers. For instance, did the driver have a life event such as a death in the family? Do they prefer a certain route? How reliable are they in terms of delivering the goods on time? And plenty of other softer variables.

Sometimes optimization software just flat-out misses the mark. I’m reminded of a big-chain retail store that tried to use software to schedule employee shifts. The algorithm looked at variables such as store busyness, employee sales figures, weather conditions, and employee preferences, and then mapped out an “ideal” schedule.

Too bad the human element was missing: some employees were scheduled 9am to 1pm and then 5pm to 9pm the same day, essentially swallowing their mornings and evenings whole. The algorithm ignored the cost to employees of traveling back and forth to work, much less the softer quality-of-life toll on employees struggling to balance their day around two shifts with a four-hour gap in between. Rest assured that while the store’s employee schedule was “optimized,” employee job satisfaction took a tumble.
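
The missing piece is easy to state as a constraint. Here is a sketch of the kind of rule the scheduler lacked; the two-hour maximum gap is a hypothetical policy choice, not the retailer’s actual rule.

```python
# Sketch of the hard constraint the retail scheduler was missing.
# The two-hour maximum gap is a hypothetical policy choice.
MAX_GAP_HOURS = 2

def is_humane(shifts):
    """shifts: list of (start_hour, end_hour) tuples for one employee-day."""
    shifts = sorted(shifts)
    return all(next_start - prev_end <= MAX_GAP_HOURS
               for (_, prev_end), (next_start, _) in zip(shifts, shifts[1:]))

print(is_humane([(9, 13), (17, 21)]))  # False: four-hour gap mid-day
print(is_humane([(9, 13), (14, 18)]))  # True: a single short break
```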

Lastly, online retailers are experimenting with pricing optimization in near real time. You’ve undoubtedly seen such pricing models in action: you place an item in your shopping cart but don’t buy it. Then, a couple of hours later, you come back to your shopping cart and the price has jumped a few dollars. This dynamic pricing has caused some customers to cry foul, especially because, to some, it feels a lot like “bait and switch.” And while dynamic online pricing is becoming more commonplace, that doesn’t mean consumers are going to like it, especially because humans have a preference for consistency.

Thus, from pricing to employee scheduling to trucking-route optimization, the computers say one thing, but sometimes humans beg to differ. Indeed, there’s a constant push-pull between mathematics and the human element of what’s practical and reasonable. As our society becomes more numbers- and computer-driven, and thereby “optimized,” expect such battles to continue until a comfortable equilibrium can be achieved. That is, until the computers don’t need us anymore. Then all bets are off.

Societal Remedies for Algorithms Behaving Badly

In a world where computer programs are responsible for wild market swings, advertising fraud and more, it is incumbent upon society to develop rules and possibly laws to keep algorithms—and programmers who write them—from behaving badly.

Courtesy of Flickr. By 710928003

In the news, it’s hard to miss cases of algorithms running amok. Take, for example, the “Keep Calm and Carry On” debacle, where t-shirts from the Solid Gold Bomb Company were offered with variations on the WWII “Keep Calm” propaganda phrase, such as “Keep Calm and Choke Her” or “Keep Calm—and Punch Her.” No person in their right mind would sell, much less buy, such an item. However, the combinations were made possible by an algorithm that generated random phrases and appended them to the “Keep Calm” moniker.

In another instance, advertising agencies are buying online ads across hundreds of thousands of web properties every day, but according to a Financial Times article, hackers are deploying “botnet” algorithms to click on those advertisements and run up advertiser costs. This click-fraud is estimated to cost advertisers more than $6 million a month.

Worse, the “hash crash” of April 23, 2013, trimmed 145 points off the Dow Jones index in a matter of minutes. In this case, the Associated Press Twitter account was hacked by the Syrian Electronic Army, and a post went up mentioning “Two Explosions in the White House…with Barack Obama injured.” With trading computers reading the news, it took just a few seconds for algorithms to shed positions in stock markets, without really understanding whether the AP tweet was genuine or not.

In the case of the “Keep Calm” and “hash crash” fiascos, companies quickly trotted out apologies and excuses for algorithms behaving badly. Yet, while admissions of guilt with promises to “do better” are appropriate, society can and should demand better outcomes.

First, it is possible to program algorithms to behave more honorably. For example, IBM’s Watson team noticed, in preparation for Watson’s televised Jeopardy! event, that the machine would sometimes curse. This was simply a programming issue: Watson would scour its data sources for the most likely answer to a question, and sometimes those answers contained profanities. Realizing that a machine cursing on national television wouldn’t go over very well, the programmers gave Watson a “swear filter” to avoid offensive words.
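
A filter of this sort is conceptually simple. Here is a minimal sketch of the idea; the blocklist entries and the best-first candidate list are stand-ins, not Watson’s actual implementation.

```python
# Minimal sketch of an output filter like the "swear filter" described.
# The blocklist entries and best-first candidate list are stand-ins.
BLOCKLIST = {"darn", "heck"}  # placeholder words, not IBM's actual list

def filter_answer(ranked_candidates):
    """Return the best-ranked candidate answer containing no blocked word."""
    for answer in ranked_candidates:  # assumed sorted best-first
        words = {w.strip(".,!?").lower() for w in answer.split()}
        if not words & BLOCKLIST:
            return answer
    return "I don't know."  # safe fallback if every candidate is blocked
```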

Second, public opprobrium is a valuable tool. The “Keep Calm” algorithm nightmare was written up in numerous online and mainstream publications such as the New York Times. Companies that don’t program algorithms in an intelligent manner could find their brands highlighted in case studies of “what not to do” for decades to come.

Third, algorithms that perform reckless behavior could (and in the instance of advertising fraud, should) get a company into legal hot water. That’s the suggestion of Scott O’Malia, Commissioner of the Commodity Futures Trading Commission. According to a Financial Times article, O’Malia says that in stock trading, “reckless behavior” might be “replacing market manipulation” as the standard for prosecuting misbehavior. What constitutes “reckless” might be up for debate, but it’s clear that more financial companies are trading based on real-time news feeds. Therefore, wherever possible, Wall Street quants should be careful to program algorithms not to perform actions that could wipe out the financial holdings of others.

Algorithms, by themselves, don’t actually behave badly; after all, they are simply coded to perform actions when a specific set of conditions occurs.

Programmers must realize that in today’s world, with 24-hour news cycles, variables are increasingly correlated. In other words, when one participant moves, a cascade effect is likely to follow. Brands can also be damaged in the blink of an eye when poorly coded algorithms run wild. With this in mind, programmers, and the companies that employ them, need to be more responsible in their algorithmic development and use scenario thinking to ensure a cautious approach.

Are Computers the New Boss in HR?

Too many resumes, too few job openings. What’s an employer in today’s job market to do? Turn to computers, of course! Sophisticated algorithms and personality tests are the new rage in HR circles as a method of separating the wheat from the chaff in finding the right employee. However, there is danger in relying on machines to pick the best employees for your company, especially because hiring is such a complex process, full of nuance, hundreds of variables, and multiple predictors of success.

The article “The New Boss: Big Data” in Maclean’s, a Canadian publication, discusses the challenges human capital professionals face in using machines for the hiring process, and coincidentally includes a quote or two from me.

Net-net, with hundreds if not thousands of resumes to sort through and score for one or two open positions, this does appear to be an ideal task for machines. However, I believe a careful balance is in order between relying on machines to solve the problem and using intuition or “gut” decision making, especially to determine cultural fit. This is a complex problem where the answer isn’t machine or HR professional; in fact, both are necessary.

Three Implications for the Rise of E-Readers

For the first time ever, Amazon.com sold more electronic books than printed ones. In other news, Kindle e-readers are flying off the shelves, and one article suggests Barnes & Noble’s saving grace will be its Nook reader. What gives with this sudden transition to e-books, and what are the implications of this e-reading trend?

An Economist article titled “Great Digital Expectations” highlights the rapid rise of e-books. And “rapid” is exactly the right word: in 2006, e-reader sales were a measly 100,000 units, whereas 25 million units are expected to be sold in 2011.

The Economist article mentions some startling ramifications of this trend towards electronic reading and publishing (paraphrased):

  • E-Books have higher profit margins
  • E-Romance novels are selling like hot-cakes
  • Digital piracy is a threat
  • Pricing is all over the map

As more people switch to e-readers, and the tablet craze really takes off, there are certainly some implications.

First, the Long Tail will be much more of a selling force. In the past, publishers would rely on big-box stores such as Borders to prominently display their wares. In addition, publishers would expect discount stores and warehouse firms such as Costco to move book volumes. With digital publishing, it’s conceivable that more players will have a “fair shot” at publishing success, as clustering algorithms on Amazon and BN.com suggest books based on our browsing history or past purchases. For sure, blockbuster titles will continue to enjoy conspicuous display on Kindle and Nook homepage screens, but readers will discover more book options as expert recommendation engines suggest likely interests that can be purchased in seconds.
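
Those suggestion engines are often built on simple co-occurrence counting. A minimal sketch, with made-up purchase baskets, shows how a long-tail title can surface right next to a blockbuster:

```python
# Sketch of a "readers who bought this also bought" recommender built on
# co-occurrence counts. The purchase baskets below are made up.
from collections import Counter
from itertools import combinations

baskets = [{"obscure_novel", "blockbuster"},
           {"obscure_novel", "indie_history"},
           {"blockbuster", "indie_history", "obscure_novel"}]

co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(book, k=2):
    scored = [(n, other) for (b, other), n in co_counts.items() if b == book]
    return [other for _, other in sorted(scored, reverse=True)[:k]]

print(recommend("blockbuster"))  # long-tail titles surface next to hits
```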

Second, pricing will take on added importance. Today, print publishers wrestle with initial price setting, as they must deliver books to stores at prices that will sell and still create profits. Pricing must be decided before printing, because each book has a list price printed on the back cover.

With digital publishing, however, there is essentially no need to establish a “set in stone” price. In a virtual world, publishers (and online retailers) can experiment with pricing every day, perhaps setting different rates by country, discounting “on the fly” based on daily e-book sales, or offering deals to Amazon customers through the “Special Offers” Kindle. Amazon shoppers know it’s not uncommon to view a book, say “Harry Potter and the Deathly Hallows,” at $21.24 one day, and then come back to the site the next and see it listed for $22.09. Pricing experimentation will happen instantaneously, based on near-real-time data analysis, without the need to change store signage or update retail POS systems.
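
Here is a hedged sketch of what that experimentation could look like, using an epsilon-greedy test over hypothetical price points; the epsilon value and revenue bookkeeping are assumptions, not any retailer’s actual method.

```python
# Sketch: epsilon-greedy price experimentation for a single e-book.
# Price points, epsilon, and revenue bookkeeping are all assumptions.
import random

PRICES = [19.99, 21.24, 22.09]
views = {p: 1 for p in PRICES}      # start at 1 to avoid division by zero
revenue = {p: 0.0 for p in PRICES}

def choose_price(epsilon=0.1):
    if random.random() < epsilon:                 # explore a random price
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: revenue[p] / views[p])  # exploit best

def record_view(price, bought):
    views[price] += 1
    if bought:
        revenue[price] += price
```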

Third, as e-readers take over the market, there is danger of widening the digital knowledge divide between the “haves” and “have-nots.” If a person needs a $100 e-reader to check out a digital library book, will this create a knowledge gap between socio-economic groups?

These are just three implications for rising e-reader ownership and there are certainly dozens more. Can you think of other implications for the rise of e-books?

Has Personalized Filtering Gone Too Far?

In a world of plenty, algorithms may be our saving grace as they map, sort, reduce, recommend, and decide how airplanes fly, packages ship, and even who shows up first in online dating profiles. But in a world where algorithms increasingly determine what we see and don’t see, there’s danger of filtering gone too far.

The global economy may be a wreck, but data volumes keep advancing. In fact, there is so much information competing for our limited attention that companies are increasingly turning to compute power and algorithms to make sense of the madness.

The human brain has its own methods for dealing with information overload. For example, think about the millions of daily inputs the human eye receives, and how it transmits and coordinates that information with the brain. A task as simple as descending a shallow flight of stairs takes incredible information processing. Of course, not all received data points are relevant to the task of walking a stairwell, so the brain must decide which data to process and which to ignore. And with our visual systems bombarded with sensory input from the time we wake until we sleep, it’s amazing the brain can do it all.

But the brain can’t do it all, especially not with data and information exploding at exponential rates. We need what author Rick Bookstaber calls “artificial filters”: computers and algorithms that help sort through mountains of data and present the best options. These algorithms are programmed with decision logic to find needles in haystacks, ultimately presenting us with more relevant choices in an ocean of data abundance.
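
At its core, an artificial filter of this kind is just scoring plus truncation. A toy sketch, with an invented keyword-overlap scoring rule, makes the point:

```python
# Toy sketch of an "artificial filter": score every item, keep the top few.
# The keyword-overlap scoring rule is invented purely for illustration.
import heapq

def top_k(items, interests, k=3):
    def score(item):
        return len(set(item.lower().split()) & interests)
    return heapq.nlargest(k, items, key=score)

stories = ["big data flood continues", "local dog show results",
           "new filtering algorithm debuts", "markets rally on data"]
print(top_k(stories, {"data", "algorithm", "filtering"}, k=2))
# The dog show never reaches the reader; the filter decided for them.
```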

Algorithms are at work all around us. Google’s PageRank presents us with relevant results, in real time, captured from web server farms across the globe. Match.com sorts through millions of profiles, seeking compatible matches for subscribers. And Facebook shows us friends we should “like.”

But algorithmic programming can go too far. As humans are more and more inundated with information, there’s danger in turning over too much “pre-cognitive” work to algorithms. When we have computers sort the friends we would “like,” pick the most relevant advertisements or best travel deals, and choose ideal dating partners for us, we risk missing the completely unexpected discovery, or the unlikely, perfectly negative correlation nobody thought to look for. And even as algorithms “watch” and process our online behavior and learn what makes us tick, there’s still a high possibility that the results presented will be far from what we might consider “the best choice.”

With a data flood approaching, there’s a temptation to let algorithms do more and more of our pre-processing cognitive work. And if we continue to let algorithms “sort and choose” for us, we should be extremely careful to understand who is designing these algorithms and how they decide. Perhaps it’s cynical to ask, but with regard to algorithms we should always wonder: are we really getting the best choice, or getting the choice that someone or some company has ultimately designed for us?

Question:
*  Rick Bookstaber makes the case that personalized filters may ultimately reduce human freedom. He says, “If filtering is part of thinking, then taking over the filtering also takes over how we think.” Are there dangers in too much personalized filtering?

The Next Wave in Recommendation Systems?

While some internet privacy experts fret over use of cookies and web profiles for targeted advertising, the quest for personalization is about to go much deeper as web companies create new profiling techniques based on the science of influence.

Behavioral targeting on the web, using cookies, HTTP referrer data, registered user accounts, and more, is about to be significantly enhanced, says columnist Eli Pariser. In the May 2011 issue of Wired, in an article titled “Mind Reading,” Pariser discusses how website recommendation and targeting algorithms “analyze our consumption patterns and use that information to figure out (what to pitch us next).” However, Pariser notes that the next chapter for recommendation systems is to discern the best approach for influencing online shoppers to buy.

In the article, Pariser cites an experiment by a doctoral student at Stanford in which online shopping sites attempted not only to track clicks and items of interest, but also to determine the best way to pitch a product. For example, pitches would alternate between an “appeal to authority” (someone you respect says you’ll like this product) and “social proof” (everyone’s buying this product, so should you!).

Taking a cue from the work of Dr. Robert Cialdini, it appears that the next wave in recommendation algorithms is to learn our “decision triggers,” or the best way to persuade us to act. In his book “Influence: Science and Practice,” Cialdini documented six decision triggers (consistency, reciprocation, social proof, liking, authority, and scarcity) as mental shortcuts that help humans deal with the “richness and intricacy of the outside environment.”

Getting back to the Wired article, Eli Pariser says this means that websites will home in on the best pitch for a particular online consumer and, if effective, continue to use it. To illustrate the concept, Pariser says: “If you respond a few times to a 50% off in the next ten minutes deal, you could find yourself surfing a web filled with blaring red headlines and countdown clocks.”
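
That description, find the pitch that works on a given consumer and keep using it, maps naturally onto a bandit algorithm. Below is a speculative sketch using Thompson sampling over Cialdini’s six triggers; the per-shopper bookkeeping is purely an assumption for illustration.

```python
# Speculative sketch: Thompson sampling to learn which persuasion
# "trigger" works on a given shopper. All bookkeeping is hypothetical.
import random

TRIGGERS = ["consistency", "reciprocation", "social_proof",
            "liking", "authority", "scarcity"]

stats = {}  # (shopper_id, trigger) -> (successes + 1, failures + 1)

def pick_trigger(shopper_id):
    """Sample a plausible conversion rate per trigger; show the best draw."""
    def draw(trigger):
        wins, losses = stats.get((shopper_id, trigger), (1, 1))
        return random.betavariate(wins, losses)
    return max(TRIGGERS, key=draw)

def record_outcome(shopper_id, trigger, converted):
    wins, losses = stats.get((shopper_id, trigger), (1, 1))
    stats[(shopper_id, trigger)] = (wins + int(converted),
                                    losses + int(not converted))
```

Once one trigger’s conversion record dominates for a shopper, the sampler keeps drawing it, which is exactly the “countdown clocks everywhere” outcome Pariser describes.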

Of course, shoppers buy in various ways and not always in the same manner. However, Robert Cialdini’s work shows that in the messy and complicated lives of most consumers, mental shortcuts help with the daily deluge of information. Therefore, this new approach of recommendation systems using principles of psychology to tailor the right way to “pitch” online shoppers might just work.

There’s no doubt that recommendation systems already take into account principles of social proof and liking, but there’s a lot more room for improvement, especially in the other areas Cialdini has researched. The answer to “why we buy” is about to be taken to a whole new level.

Questions:

  • What’s your take on this next development in recommendation systems? Benefit or too much “Big Brother”?
  • Are you moved by “act now” exhortations? What persuasion technique/s work best on you?