Two noted economists, Kenneth Rogoff and Carmen Reinhart, recently had their findings on country debt-to-GDP ratios questioned after it was discovered that an Excel spreadsheet error had led to some grave miscalculations. And while plenty of financial bloggers and economists took the opportunity to gloat over Rogoff and Reinhart’s misfortune, there is a larger point here: just because the mathematical calculations supporting an idea are wrong, it doesn’t mean the idea isn’t directionally sound.

In 2010, Rogoff and Reinhart published a paper on the link between high public debt and slower economic growth. Their findings showed that when a country’s debt exceeded 90% of GDP, its growth would slow to a crawl. The paper was subsequently used as the empirical basis for fiscal austerity, or belt tightening, in many European countries.

However, since the publication of Rogoff and Reinhart’s 2010 paper, their findings have come under intense scrutiny. Facing pressure to release their methodology and data, Rogoff and Reinhart finally let other statisticians examine the study’s underlying calculations.

When Rogoff and Reinhart’s Excel spreadsheets were released, researchers at the University of Massachusetts, led by a graduate student, discovered some coding errors. One key error omitted five countries from the calculations, which changed the mean economic growth figure from negative 0.1% to positive 2.2%, a pretty significant switch! In other words, the conclusion that the “magic number” of 90% debt to GDP equates to slow growth wasn’t so magical after all.
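To see how a formula range that silently drops rows can flip a result’s sign, here is a minimal Python sketch. The growth figures below are made up for illustration; they are not the actual Reinhart-Rogoff country data.

```python
# Illustrative sketch of how omitting rows shifts a mean.
from statistics import mean

# Hypothetical growth rates (%) for countries above 90% debt-to-GDP.
growth = [-7.9, -2.0, 0.3, 1.5, 2.6, 3.2, 3.9, 4.1, 4.6, 5.0]

full_mean = mean(growth)           # averages all ten countries
truncated_mean = mean(growth[:5])  # an "AVERAGE(A1:A5)"-style range
                                   # that silently drops five rows

print(f"full sample:      {full_mean:.2f}%")
print(f"truncated sample: {truncated_mean:.2f}%")
```

With these made-up numbers, the truncated range turns a clearly positive average negative, which is the same kind of sign flip the omitted-countries error produced.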

Predictably, mainstream economists like Paul Krugman were quick to pounce. In his column, “Holy Coding Error, Batman”, Krugman called the error “embarrassing” and a “failure”, and concluded it was reason enough to discount the underlying message that countries with higher debt could see slower growth in the future.

Krugman’s gloating aside, we should note that just because the calculations supporting a particular idea are wrong, it doesn’t necessarily mean the proverbial “baby” should be thrown out with the “bathwater”.

Here’s why: an article on MarketWatch cites a few studies showing that 88% of spreadsheets contain errors of some kind. Ray Panko, a professor of IT management at the University of Hawaii, says that spreadsheet “errors are pandemic”.

Now whether or not you believe the 88% figure is correct, even if you discount it by half, as a consultant friend of mine suggests, it’s still a whopper of a number!

Going forward, knowing that a fair percentage of Excel calculations are likely flawed in some manner, we should expect the numbers supporting an idea to be accurate while also understanding that errors are possible. And because calculation errors are possible, we need to judge whether the idea itself, set apart from any erroneous calculations, is sound or not.

Of course, there are instances where it’s critical to get mathematical calculations correct: launching rockets, landing planes, engineering a building or bridge, and so on. But let’s also be careful not to immediately dismiss an idea as “false” simply because it’s discovered someone made a correctable Excel spreadsheet error.

Getting back to the Rogoff and Reinhart commotion, this is exactly what *Financial Times* columnist Anders Aslund has in mind when he writes, “(While) the critique of Reinhart and Rogoff correctly identifies some technical errors in their work, one cannot read it and conclude the case for austerity is much weakened. High public debt is still a serious problem.” I would add this is especially true for countries whose debt is not denominated in their own currency.

With the realization that most spreadsheets have errors, we should check, double check, and triple check Excel calculations to ensure accuracy. Peer review of Excel calculations is also a recommended approach.
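One practical form of double-checking is to recompute a spreadsheet’s reported summary through an independent path and flag any mismatch. The sketch below is a minimal example of the idea; the data and the “reported” value are hypothetical.

```python
# Cross-check a spreadsheet's reported mean against an
# independent recomputation.
from statistics import mean

def check_reported_mean(values, reported, tolerance=1e-9):
    """Return (ok, recomputed), comparing a reported mean
    against an independently recomputed one."""
    recomputed = mean(values)
    return abs(recomputed - reported) <= tolerance, recomputed

# Suppose the spreadsheet reports -0.1 because its formula range
# accidentally skipped some rows:
values = [2.5, 3.1, -0.4, 1.8, 4.0]
ok, recomputed = check_reported_mean(values, reported=-0.1)
print(ok, recomputed)
```

A check like this won’t catch every mistake (the independent recomputation can share the same wrong inputs), but it does catch the common case of a formula range drifting out of sync with the data.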

But let’s also not be so quick to throw out perfectly good ideas simply because some Excel miscalculations or omissions are discovered to have skewed the results. After all, a key idea may not be precisely supported by the math, but it may still be directionally correct. Or as New York fund manager Daniel Shuchman says, we don’t need to touch the stove to prove it’s hot.

Questions:

What are the key lessons in the Rogoff and Reinhart debacle? Mistaking correlation for causation? Sloppy coding? Applying too much historical data where conditions may have changed? Applying too little data (cherry-picking)? What say you?

1. What are the key lessons in the Rogoff and Reinhart debacle?

Rogoff and Reinhart could have prevented the miscalculation by allowing a third party to independently audit their calculations and/or process. Their key findings may still support the idea that a country’s GDP growth slows if the country carries too much debt.

2. Mistaking correlation for causation?

Challenging problems should be approached and reviewed objectively. Perhaps outliers exist. Perhaps the perceived whole is actually just one piece of the story. Support your argument with objective data, and seek reviews and critiques from others.

3. Sloppy coding?

In relation to Rogoff and Reinhart, perhaps they were working within a budget. In relation to system efficiency, it is important to know the fundamentals and to write optimized code when resources allow.

4. Applying too much historical data where conditions may have changed?

Personally, I struggle with this. In forecasting, I believe the perceived value of the variables, both independent and dependent, decreases over time. I prefer to work with recent historical periods and forecast for the near future. I discourage the use of long ranges of historical data, especially in rapidly changing environments where prediction is difficult.
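One way to formalize “weight recent periods more heavily” is an exponentially weighted mean, where each period further in the past is discounted by a decay factor. The quarterly series and decay value below are hypothetical, chosen only to show the effect.

```python
# Exponentially weighted mean: older observations are discounted
# by decay**k, where k is how many periods back they sit.

def exp_weighted_mean(series, decay=0.8):
    """Weighted mean of a series ordered oldest to newest;
    the newest value gets weight 1, each older one decay**k."""
    n = len(series)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * x for w, x in zip(weights, series)) / sum(weights)

quarterly_growth = [3.0, 3.2, 2.8, 1.0, 0.2]  # oldest -> newest
plain = sum(quarterly_growth) / len(quarterly_growth)
weighted = exp_weighted_mean(quarterly_growth, decay=0.5)
print(plain, weighted)  # the weighted figure leans toward the recent slowdown
```

With a recent slowdown in the series, the weighted mean sits well below the plain average, which is exactly the “recent data matters more” behavior described above.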

5. Applying too little data (cherry-picking)?

Finding the balance between too little and too much data really depends on the complexity of the model or formula. Sometimes a few data sets will satisfy a simple ad-hoc analysis; more complex equations and calculations need more data. For some theories there are guidelines on how many variables to use, and for some metrics there are standard formulas. Sometimes, however, you have to blend best practices and standard formulas with custom equations to track a certain activity. Lastly, data can only accomplish so much. Intuition and qualitative analysis are equally important, assuming the source is credible. Even then, after being careful, thoroughly reviewing the process and calculations, and commissioning a third-party audit, we can still get it wrong. Admit when this happens. Take notes and identify what went wrong to minimize future occurrences. Then simply move forward.

What a terrific response, Jonathan! You really know your stuff. In regards to sloppy coding, I wonder how much of this could have been avoided with the use of SQL vs. Excel? How many economists use SQL – probably not that many? It appears to me that you’re ready to start your own blog if you haven’t already! What’s stopping you? 🙂

“How many economists use SQL – probably not that many?” Excellent point! No blog for me at the moment. What’s stopping me is time and motivation.