More Magic Formula Analysis


As a professor by day, I spend a lot of time doing research and studying financial data. Recently, my research assistant and I have been digging deeper into the magic formula. We simply can’t figure out why the results posted in the Little Book are so extraordinary compared to what we have found.

Granted, there are likely to be differences at the margin, but any robust strategy should be fairly impervious to slight changes in technique, time period, and method.

Here are the returns presented in the Little Book that Beats the Market:

A 30.8% CAGR is breathtaking and would make you a multi-billionaire very quickly. Nonetheless, we can’t replicate the results under a variety of methods.

We’ve hacked and slashed the data, dealt with survivorship bias, point-in-time bias, and erroneous data, and applied all the other standard techniques used in academic empirical asset pricing analysis. Still no dice.

In the preliminary results presented below, we analyze a stock universe consisting of large-caps (defined as being above the 80th percentile of NYSE market capitalization in a given year). We test a portfolio that is rebalanced annually: equal-weight invested across 30 stocks on July 1st and held until June 30th of the following year.
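The portfolio accounting behind that test can be sketched in a few lines. This is an illustrative toy, not our research code: the returns are made up, and we use three stocks per year instead of 30.

```python
# Toy sketch of the backtest accounting described above: each July 1 the
# portfolio is reformed equal-weight and held for one year. All returns
# here are invented; the real test uses 30 stocks per year.

def equal_weight_backtest(yearly_stock_returns):
    """yearly_stock_returns: one list per year of simple July-to-June
    returns for that year's holdings. Returns (ending wealth, CAGR)."""
    wealth = 1.0
    for returns in yearly_stock_returns:
        portfolio_return = sum(returns) / len(returns)  # equal weight
        wealth *= 1.0 + portfolio_return                # annual rebalance
    cagr = wealth ** (1.0 / len(yearly_stock_returns)) - 1.0
    return wealth, cagr

# Two hypothetical years, three holdings each
wealth, cagr = equal_weight_backtest([[0.10, 0.20, 0.30], [-0.05, 0.05, 0.15]])
```

Equal weighting makes each year's portfolio return the simple average of its holdings' returns, and annual rebalancing means yearly portfolio returns compound multiplicatively.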

We show major differences between our results and the magic formula results (30.75% CAGR vs. 13.80% CAGR). As a robustness check, we also analyze the performance of the Profit & Value strategy, the “academic equivalent” of the Magic Formula strategy. Both the Magic Formula and the Profit & Value strategy outperform the market, but neither comes close to DOUBLING market returns.
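To see why that CAGR gap matters so much, compound each rate over a hypothetical 20-year horizon (the horizon is our illustration, not the book's actual test period):

```python
# Illustrative arithmetic only: $1 compounded for 20 years at each CAGR.
# The 20-year horizon is an arbitrary choice for illustration.
book_cagr, replicated_cagr = 0.3075, 0.1380
years = 20

book_wealth = (1 + book_cagr) ** years              # on the order of $200+
replicated_wealth = (1 + replicated_cagr) ** years  # on the order of $13
```

A roughly 17-percentage-point CAGR difference compounds into more than an order of magnitude in ending wealth, which is why the discrepancy is worth chasing down.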

Perhaps this is a size effect?

As an additional check, we looked at the results when the universe consists of small, mid, and large caps (defined as being above the 20th percentile of NYSE market capitalization in a given year):

Same story here: definitely some serious outperformance from the special formulas, but nowhere near the 31% CAGR outlined in the book.

So what gives?

There are several possible conclusions we can draw from this analysis:

  1. We screwed something up in our analysis.
  2. Greenblatt & Co. screwed something up in their analysis.
  3. The strategy is highly unstable (i.e., small backtesting procedure changes have large effects).

I am fairly confident that we did a careful job in our analysis, and I am confident that Greenblatt & Co. did a solid analytical job. My guess is the magic formula backtest performance is simply highly unstable, and small changes in assumptions/analysis can have dramatic effects on the performance.

Interestingly enough, I visited the website and clicked on the live results of the Magic Formula:

I then compared the 2009 (partial year) and 2010 (full year) results against our backtested 2009 (partial year) and 2010 results (using an 80th percentile NYSE cutoff for size):

From May 1 to December 31, 2009, our backtest of the magic formula lagged the live performance of the Magic Formula. In 2010, our backtest shows a 13.74% return, whereas the live Magic Formula earned 12.64% after fees.

But here are the results when we extend the universe down to the 20th percentile market-cap cutoff:

2009 absolutely kills it, as does 2010: it is obvious that ‘magic small-caps’ are driving the backtested performance here. As one can see, the live magic formula dramatically underperforms the backtested performance (likely because it had limited small/mid-cap exposure).

Although anecdotal in nature, we can see from a very limited out-of-sample test (2009 and 2010) that the backtested returns to a Magic Formula strategy are VERY unstable, and results should be analyzed with a skeptical eye.

I’m a huge believer that there is slack in asset prices and that the market is never perfectly efficient; however, I also believe that markets are highly competitive and prices tend to stay reasonably close to efficient. Applying this thought to the Magic Formula results, I would conclude that the strategy probably works at the margin, but expecting massive outperformance, after controlling for risk, is foolhardy.
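For readers who want to replicate the universe construction above, here is a minimal sketch of an NYSE-percentile size cutoff. All market caps are invented, and the function and variable names are ours, not from any research code.

```python
# Minimal sketch (invented data): define the investable universe as all names
# whose market cap exceeds a given NYSE percentile breakpoint, as in the
# 80th- and 20th-percentile universes tested above.

def nyse_percentile_universe(caps, nyse_caps, pct):
    """caps: {name: market cap}; nyse_caps: NYSE market caps for the year.
    Keeps every name at or above the pct-th NYSE breakpoint."""
    cutoff = sorted(nyse_caps)[int(pct * len(nyse_caps))]
    return {name for name, cap in caps.items() if cap >= cutoff}

caps = {"micro": 60, "small": 1_500, "mid": 4_000, "large": 40_000}  # $MM
nyse = [300, 900, 3_200, 8_800, 30_000]                              # $MM

large_cap_universe = nyse_percentile_universe(caps, nyse, 0.80)
broad_universe = nyse_percentile_universe(caps, nyse, 0.20)
```

Note that because the breakpoints come from NYSE names only (which skew large), even the "broad" 20th-percentile universe can exclude genuinely small companies.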


16 Responses to More Magic Formula Analysis

  1. Trustamind says:

    Thanks for the post.

    Yes, size does matter. Actually, I think it’s liquidity that matters. I have a fundamental ranking system that uses a similar approach to the Magic Formula. Roughly speaking, the Magic Formula consists of two components: valuation and return on capital. My ranking system has one more component: financial condition. According to my own research, there is an inverse relation between liquidity and performance: annualized return drops if I require more liquidity. Because large market caps generally fetch better liquidity, what I observed in my research generally echoes what you have observed here. I think a 30% annualized return is doable if we loosen the requirement on liquidity. However, investors may or may not be able to get rich with that, because they couldn’t invest too much money in a thinly traded stock. At least that is the case with my ranking system, as I rebalance weekly. But it is still arguable that if the holding period is 6 months to 1 year, investors can spend weeks or even months accumulating a position, so liquidity may not matter.

  2. Pingback: Wednesday links: platform vs. products | Abnormal Returns

  3. Pingback: Where’s the Magic? – World Beta – Engineering Targeted Returns and Risk

  4. Great post. I respect your non-combative, analytical stance.

    I’m constantly coaching my clients to not invest blindly just because someone publishes something that appears statistically solid on the surface. The devil is in the details and small differences can compound into large differences. There are so many ways to slice the data with assumptions that can make or break validity in real time.

    Thanks for sharing these insights.

  5. Great analysis! I am a little perplexed by the wide range of results and the inconsistencies with the Little Book, but I am not an economist and thus have only a very limited understanding of how you / Greenblatt & Co. came to these conclusions! Let’s hope the magic formula is a little more stable than you believe!

    Thanks for the educational post!

  6. Trustamind says:

    It is so obvious why liquidity matters. I’m a little embarrassed not being able to articulate it before.

    Liquidity generally reflects the popularity of a stock. Popularity means that a lot of investors pay attention to the stock. There is a saying in the world of computer geeks that “given enough eyeballs, all bugs are shallow.” Similarly, all pricing errors are shallow when a lot of investors are watching. The less pricing error, the less profit left for value investors; thus the inverse relation between liquidity and performance.

  7. I thought Greenblatt ruled out a size effect. So liquidity shouldn’t matter. If it does, then isn’t it possible that a lot of outperformance would be consumed by the spread?

    I must say, I am rather disillusioned with the results you presented, as it is beginning to look like the magic formula isn’t so magic after all. Anecdotally, it seems that others have trouble producing the stellar returns that Greenblatt cites. Maybe he should open up his data to more scrutiny.

    Based on your results, it doesn’t seem that the Magic Formula can do any better than simple value strategies like low P/BV, P/E, etc.

  8. very nice idea so thanks

  9. Steve Clements says:

    I’ve independently come to roughly the same results and conclusions. I’ve backtested the Magic Formula using two different tools. You can also look at the AAII website and see that their version of the model doesn’t approach the lofty 31% returns. As one of the earlier posters wrote, Greenblatt’s model is roughly composed of a value screen and a return on capital screen. Low valuation and high momentum are widely documented as generating excess returns; return on capital, not so much. I’ve isolated and backtested various measurements of return on capital without a great deal of success.

  10. Pingback: Avoid Losses: 5 Ways Investment Researchers Lie To You

  11. David Miller says:

    The reason you are coming up with numbers that are dramatically lower than the Magic Formula backtest in the book is that you are looking at large-caps and larger mid-caps instead of the small-caps and mid-caps the book used. The book never said to screen by whether a company is on the NYSE. The average market cap of companies on the NYSE is $8.8 billion, and the lowest market cap of any company on the NYSE is $3.2 billion. The backtest in the book that produced a 30.8% return looked at US-traded companies with a market cap of $50 million or greater. There is clearly a huge difference between $50 million market-cap companies and $8.8 billion market-cap companies. Even if you look at companies in the bottom 20% of size on the NYSE, you are looking at companies nearly 100X larger in market cap than those in the backtest in The Little Book That Beats the Market. If you run the backtest with the same criteria as the book, you will come to very similar results.

  12. Ethan Ard says:

    I echo David’s hunch regarding the likely source of the discrepancy. The book does break out the results of both the 50M+ group, with about 30% returns, as well as the results of the top 1000 companies by market cap, with a more modest 23% average return. In other words, Greenblatt does acknowledge that there is a large size effect at work, which is consistent with what you are showing. It’s interesting to note that the Formula Investing mutual funds restrict themselves to the top 1400 US-listed companies by market cap, I guess to keep expenses reasonable. There are some non-affiliated micro-cap oriented magic formula funds (e.g. Catalyst Value), with concomitantly higher expenses than the official funds.

  13. Maybe you need to read the backtest studies done on the Euro markets (by MFIE Capital) from 13/06/1999 until 13/06/2010.

  14. Mike D says:

    As an avid follower of the Magic Formula, I was very interested in this article. I was also surprised to see the results did not live up to the backtest in the Little Book, so I dug it out and reread it from cover to cover. I believe there are three factors that could explain why these test results differ from what Greenblatt did in the book. First, as already pointed out by others, the size effect: Greenblatt uses $50 million as one cutoff and $1 billion as a second cutoff for larger caps. Second, the cutoff for data here is December 31st, but the investment date is not until July 1st; in Greenblatt’s tests, he used the most recently available quarter. I follow the Magic Formula monthly and companies fall off the list regularly, so a six-month lag seems much too long. Third, and most importantly, in the book he states that to duplicate the results shown, you have to buy 5-7 companies every 2-3 months. This prevents any seasonal effects. If anyone remembers the Foolish Four (check Motley Fool’s website for details), it was a market-crushing method, but it only worked if you bought the stocks around year end; if you did it any other time of the year, it failed miserably. As a point of reference, if you look at the AAII website from 1998 to 2004, which are the only overlapping years, the Magic Formula version they track returned about 18% compounded vs. 5% for the S&P (13% annualized since 1998). The AAII version rebalances monthly and their methodology differs slightly from the Little Book’s, but it is directionally correct. I would really like to see a backtested comparison that takes these factors into account.

  15. Tom Knudtson says:

    I am an amateur, but I can’t make sense of the data in the two tables presented. (The first is for large-cap and the second is for “~all” cap company sizes.)

    To me it looks like the Magic formula data stays exactly the same (to four significant figures) in the two tables while other strategies, like Profit and Value, change significantly. Is this a typo or am I missing something? (Apologies in advance if I am “missing the obvious”.)
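Several comments above describe the Magic Formula as a two-factor ranking: valuation (earnings yield) plus return on capital. A minimal sketch of that combined ranking follows; the tickers and numbers are invented for illustration.

```python
# Minimal sketch (invented data): rank stocks on earnings yield and on
# return on capital separately, then sort by the sum of the two ranks,
# lowest combined rank being best.

def combined_rank(stocks):
    """stocks: list of (name, earnings_yield, return_on_capital).
    Returns names sorted best-first; ties keep input order (sort is stable)."""
    by_ey = sorted(stocks, key=lambda s: s[1], reverse=True)
    by_roc = sorted(stocks, key=lambda s: s[2], reverse=True)
    ey_rank = {s[0]: i for i, s in enumerate(by_ey)}
    roc_rank = {s[0]: i for i, s in enumerate(by_roc)}
    total = {s[0]: ey_rank[s[0]] + roc_rank[s[0]] for s in stocks}
    return sorted(total, key=total.get)

ranked = combined_rank([("A", 0.15, 0.10), ("B", 0.12, 0.30), ("C", 0.10, 0.20)])
# "B" ranks first: middling earnings yield but best return on capital.
```

The point of summing ranks rather than raw values is that a stock only needs to be good on both dimensions, not extreme on either, which is why the two factors are hard to evaluate in isolation.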
