SSRN Id3254580 PDF
Neither the Research Foundation, CFA Institute, nor the publication’s edito-
rial staff is responsible for facts and opinions presented in this publication.
This publication reflects the views of the author(s) and does not represent
the official views of the CFA Institute Research Foundation.
The CFA Institute Research Foundation and the Research Foundation logo are trademarks
owned by the CFA Institute Research Foundation. CFA®, Chartered Financial Analyst®,
AIMR-PPS®, and GIPS® are just a few of the trademarks owned by CFA Institute. To view
a list of CFA Institute trademarks and the Guide for the Use of CFA Institute Marks, please
visit our website at www.cfainstitute.org.
© 2017 The CFA Institute Research Foundation. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted,
in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise,
without the prior written permission of the copyright holder.
This publication is designed to provide accurate and authoritative information in regard to
the subject matter covered. It is sold with the understanding that the publisher is not engaged
in rendering legal, accounting, or other professional service. If legal advice or other expert
assistance is required, the services of a competent professional should be sought.
Cover Image Photo Credit: Jay_Zynism/iStock/Getty Images
ISBN 978-1-944960-33-9
Acknowledgments
Foreword
Preface
References
The authors wish to thank those from academia and the industry who accepted
the challenge to articulate their views on equity valuation. Their perspectives
are cited and attributed throughout this monograph.
We are also grateful to the CFA Institute Research Foundation for
funding this project and, in particular, to its director of research, Laurence
B. Siegel, and its executive director, Walter (Bud) Haslett, CFA, for their
encouragement, assistance, and insightful comments.
Where do stock prices come from? The easy answer is that they are the out-
come of supply and demand; that is, the price of any given stock is that which
causes the quantity supplied to equal the quantity demanded.
Of course, that answer, like most easy answers, is unsatisfying. What
causes the supply and demand for a stock to be what it is? There are two sets
of players in the game that we call the stock market: (1) the sell side, consisting
of issuers (seekers of capital) and their agents, called “investment bankers” or
“stockbrokers,”1 and (2) the buy side, consisting of saver-investors (providers of
capital) and their agents, called “investment managers.”
Each of these characters in the financial zoo presumably has a view on
every asset in the world, the view generally being that one should ignore the
asset. There are simply too many assets for everyone to analyze, so they delegate
the task of analyzing most assets to other investors, who—through their col-
lective wisdom, it is believed—will agree on a price that makes the asset a fair
deal; in that case, the asset is held in an index fund. (In an environment where
index funds exist, simply not holding an asset expresses a strong negative view on
that asset and is in no way equivalent to holding the asset at its index weight.)
It is only when an investor believes an asset offers a better-than-fair deal—
either because it is cheap, justifying an above-index weight, or because it is
expensive, requiring a below-index weight—that he or she becomes an active
manager with respect to that asset.
Active managers contribute to the price discovery process by shifting the
demand curve for that asset—outward, if they are buyers—in a way that an
index fund investor does not do. Active managers also affect the supply curve
by offering stocks for sale. Finally, sell-siders (corporations and their invest-
ment banker agents) may also be regarded as active managers in that they,
too, hold nonindex weights of the assets they trade and thus move the supply
and demand curves for stocks in exactly the same way that buy-siders do.
Well, in almost the same way. A corporation has only one stock to sell; an
investment manager can choose from among all the stocks offered for sale in
the world. Thus, the corporate issuer’s impact is more concentrated, and the
investment manager’s impact is more diffuse. But the underlying Economics
101 of asset price discovery is the same when viewed from either side.
1. They are also called “broker/dealers”—the word “dealer” highlighting the market-making or principal function of the institution, in contrast to the buyer–seller matching or agent function.
2. M. Barton Waring and Laurence B. Siegel, “The Dimensions of Active Management,” Journal of Portfolio Management (Spring 2003): 35–51.
3. Frank J. Fabozzi, Sergio M. Focardi, and Caroline Jonas, Trends in Quantitative Finance (Charlottesville, VA: CFA Institute Research Foundation, 2006); Frank J. Fabozzi, Sergio M. Focardi, and Caroline Jonas, Challenges in Quantitative Equity Management (Charlottesville, VA: CFA Institute Research Foundation, 2008); Frank J. Fabozzi, Sergio M. Focardi, and Caroline Jonas, Investment Management after the Global Financial Crisis (Charlottesville, VA: CFA Institute Research Foundation, 2010); Frank J. Fabozzi, Sergio M. Focardi, and Caroline Jonas, Investment Management: A Science to Teach or an Art to Learn? (Charlottesville, VA: CFA Institute Research Foundation, 2014).
Laurence B. Siegel
Gary P. Brinson Director of Research
CFA Institute Research Foundation
November 2017
Fundamental analysts and fundamental active managers believe that one can
determine the intrinsic (fundamental) value of a company’s stock by analyzing
the company. They argue that their ability to estimate the difference between
a stock’s fundamental price and its market price allows them to outperform
the market, thereby keeping markets at least somewhat efficient while creat-
ing value (alpha) for their clients. Much of the academic community agrees
and has developed valuation models, such as present value models based on
fundamental analysis. Such models include the widely used dividend discount
model, market-multiplier models (such as the popular price-to-earnings
models), and asset- and options-based valuation models.
Studies show, however, that on average and over time, despite all the fundamental research, active traditional managers fail to outperform markets. This fact, plus a number of other factors, including widely published studies showing that, on average, high-fee funds do not perform as well as less expensive funds,4 has resulted in what the data provider Morningstar
(Morningstar Manager Research 2017) calls “a sea-change in investor prefer-
ences” (p. 1). The Morningstar Manager Research report on annual US asset
flows shows that for the full year 2016, investors withdrew almost $264 bil-
lion from actively managed US equity funds. And the withdrawals occurred
despite a trend toward stocks versus bonds as the S&P 500 Index closed up
12% for the year. For the same period, passively managed US equity funds
saw net inflows of almost $237 billion. In their Wall Street Journal article,
Tergesen and Zweig (2016) cite Morningstar estimates that passive assets
under management since 2007 have tripled to $5.7 trillion, whereas assets in
active funds have increased by only 54%, to $23.2 trillion.
Still, according to Morningstar, 66% of US mutual fund and
exchange-traded fund assets are actively invested, albeit down from 84% just
10 years ago.
But these percentages could change soon. Passive funds, which represented less than 20% of US equity holdings by retail (individual) investors only a decade ago, are expected to reach a 50% share of US equities in 2018–2019 (Morningstar Manager Research 2017).
While the trend to passive equity funds is particularly pronounced in the United States, it is also present in Europe, where passive equity funds now attract more net flows than active equity funds, and in the Asia-Pacific region, where net flows to passive strategies now almost match flows to active strategies.

4. According to Morningstar, the average asset-weighted annual fee for actively managed US stock funds is currently 0.78%, compared with 0.11% for the average passive stock fund.
This monograph addresses a number of questions these trends raise:
• Public stocks are traded in competitive markets and subject to the law of supply and demand: Is there really such a thing as an intrinsic (or fundamental) value of a stock? If yes, can we determine this value using the tools we presently have? Or can we determine only relative values?
• What about determining the value of hard-to-value assets, such as initial public offerings or private equity? What is the role of “hype” (hyperbole) or information asymmetry in determining value?
• Assuming that fundamental active analysts/managers can estimate the intrinsic value of a stock and spot mispricings by comparing the intrinsic price to the market price, can they execute an advantageous trade that delivers value to the investor? What tools and heuristics do such analysts use, and how effective are they?
• Do economic or other phenomena, such as quantitative easing by central banks or corporate stock buybacks, distort market prices, taking them far from a stock’s fundamental price?
• What is the equilibrium between the cost and the benefit of doing fundamental analysis, where the benefit is alpha or extra return to the investor?
• Do fundamental analysts/managers really play an important role in keeping markets (quasi) efficient?
• Will news sources, data sources, new tools, or new technology not yet (widely) used allow fundamental managers to better estimate a stock’s fundamental value?
• Has the global investment universe changed so much that the role of the fundamental analyst/manager is no longer central? In other words, are there better ways to generate returns for investors than traditional value investing?
Quotations in this book that are not attributed to a source listed in the
References are from academic and industry colleagues who provided us with
their views on equity valuation in a series of interviews during the first half
of 2017.
However, I think that intrinsic values are dynamic and can change with
major nonanticipated shocks in supply and demand. For example, the
recent tax advantage on long-term investments in Italian small caps drove
the stock prices up to very high valuation multiples. Thus, in my opinion,
both considerations have to be taken into account when thinking about
intrinsic value.
The €205 billion Dutch fund PGGM Investments is a long-term investor
and uses the notion of intrinsic value to construct fundamental portfolios. Jaap
van Dam, the firm’s principal director of investment strategy, commented,
Intrinsic value can be used in a very narrow sense, to measure book value.
But you can broaden this a bit in the sense that intrinsic value can be used,
for example, if you invest deeply in understanding how firms create long-
term value. This requires an understanding of the firm, its markets, strate-
gies, investor stewardship; in the long term, there are a lot of agency issues.
The market price is a question of supply and demand, but behind the market
price, one can reasonably estimate a ballpark number that represents the
firm’s value. We use intrinsic price not to “beat the market”—a term I don’t
like—but to generate better returns.
Christian Kjaer, head of global equities and volatility at the $113 billion
Danish state pension fund ATP, remarked,
In the space of global equities, we consider the uncertainties around various
valuation models to be huge. Consequently, we need to observe rather extreme
mispricing relative to the fundamental model value in order to have sufficient
conviction in the perceived mispricing. In areas where we consider our fun-
damental understanding to be stronger—for example, Danish equities—we
have significantly more confidence in perceived fundamental mispricings and
use it [this information] more actively to outperform markets.
The role of theory is to provide rules that allow a financial analyst to
make forecasts. Consider sending a satellite into orbit. Doing so would be
impossible without the theory of gravity. Theory allows a physicist to use
data to determine the satellite’s future path. In finance, however, we are in
an intermediate situation; that is, to determine the future path of an asset’s
price, we need knowledge of some basic facts, plus the ability to test whether
those facts are true. Unlike in physics, in finance (and economics), we do not
have the ability to test our theories. In equity valuation, we can avoid abstract
mathematical and statistical principles, but we need an understanding of the
theoretical underpinnings of valuation.
In free markets, the price of “things” is determined by the interaction
between supply and demand—with links, possibly complex ones, between the
characteristics of “things” and their prices. Largely dependent on objective
are considered risk free; any other price would result in arbitrage opportuni-
ties. For example, in discrete time, assuming n periods, if the risk-free rate in
period i is ri, then the present value, PV, of a government bond with coupon C
and principal M is
$$PV = \frac{C}{1+r_1} + \frac{C}{(1+r_1)(1+r_2)} + \cdots + \frac{C+M}{(1+r_1)(1+r_2)\cdots(1+r_n)}. \qquad (1)$$
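Equation 1 is simple enough to sketch in code. The function below discounts each coupon by the running product of one plus the per-period risk-free rates; the coupon, principal, and rates in the example are illustrative assumptions, not figures from the text.

```python
# Sketch of Equation 1: present value of a government bond discounted
# with per-period risk-free rates r_1, ..., r_n. Inputs are illustrative.

def bond_pv(coupon, principal, rates):
    """Discount each cash flow by the product of (1 + r_i) up to its date."""
    pv = 0.0
    discount = 1.0
    for i, r in enumerate(rates, start=1):
        discount *= 1.0 + r
        cash_flow = coupon + (principal if i == len(rates) else 0.0)
        pv += cash_flow / discount
    return pv

# Example: 3-year bond, coupon of 5 on principal of 100, flat 3% short rate.
print(round(bond_pv(5.0, 100.0, [0.03, 0.03, 0.03]), 2))
```

With a flat rate, the result agrees with the familiar single-rate bond pricing formula, which makes the flat case a convenient sanity check.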
5. In modern monetary systems, the vast majority of money is created when agents take out loans. A study by Moore (1988) found that banks are not constrained by reserves in their lending. Contrary to the theory of the multiplier, he proposed a theory called Horizontalism by which money is created endogenously by the banking system in granting loans and simultaneously crediting the client’s account. For a complete analysis of the modern process of money generation, see McLeay, Radia, and Thomas (2014a, 2014b).
[Figure: Three-Month Treasury Bill: Secondary Market Rate, 1934–2004 (percent, 0–18 scale).]
Source: Constructed by the authors using data obtained from Federal Reserve Economic Data, Three-Month Treasury Bill: Secondary Market Rate.
the market valuation relatively stable. In addition, stocks are subject to risks
other than solely credit risk.
The price of a firm’s stock is (essentially) determined by expectations, for example, of the firm’s ability to produce a steady flow of dividends and/or to command a high price in the future. In the absence of arbitrage opportunities,
the market price of any financial asset is the sum of the expected discounted
value of future cash flows. The discount rate is the risk-free rate plus a risk
premium.
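A minimal sketch of this pricing rule, with invented cash flows and rates, is:

```python
# Price as the sum of expected discounted cash flows, with the discount
# rate built as the risk-free rate plus a risk premium. All inputs below
# are illustrative assumptions.

def discounted_value(expected_cash_flows, risk_free_rate, risk_premium):
    """Discount the cash flow at date t by (1 + k)^t, where k = rf + premium."""
    k = risk_free_rate + risk_premium  # required rate of return
    return sum(cf / (1.0 + k) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))

# Example: five years of expected dividends plus a terminal sale price.
expected = [2.0, 2.1, 2.2, 2.3, 2.4 + 60.0]
print(round(discounted_value(expected, 0.02, 0.05), 2))
```

Raising the risk premium lowers the price, which is the mechanism behind the discussion that follows of discount rates set by the market and by central banks.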
Discount rates for stocks are determined by the market—that is, by the
interplay of supply and demand—and by the action of central banks and gov-
ernments. Central banks determine the rate they pay on reserves—a basic
interest rate that affects all other interest rates. Other considerations, includ-
ing fiscal policies and risk regulations, affect all rates in the economy. The
notion that assets’ market prices are equal to the sum of expected discounted
cash flows is general; it does not characterize the intrinsic price. In fact, prices
can always be represented as the sum of expected discounted cash flows with
appropriate discount rates.
To do the work that has traditionally been considered the job of an equity
analyst and for which investment professionals are still widely considered to
be paid (i.e., to “beat the market”), the profession tries to identify under- and
overpriced assets and take advantage of that knowledge to realize a profit.
such as quantitative easing (see McLeay, Radia, and Thomas 2014b). But, as
years of prolonged near-zero interest rates by central banks failed to produce
the desired economic growth, central bankers began to question just what the
ideal interest rate should be.
If we could identify a rate of interest at which no excess demand for invest-
ments occurs, we might reasonably assume that we could identify the intrinsic
value of assets as the price obtained by discounting cash flows with the natural
rate of interest. Of course, the problem of forecasting cash flows remains.
Central banks have developed models to compute the natural rate of
interest. Lubik and Matthes (2015), from the US Federal Reserve Bank of
Richmond, compared two approaches: (1) their approach, which uses a time-
varying parameter vector autoregressive model, and (2) the Laubach–Williams
model (see Laubach and Williams 2015), which uses a state–space approach.
The idea of a natural rate of interest is not without critics, who observe
that no single rate of interest is able to guarantee stable prices and equilibrium
between investments and savings.
The question of the natural rate of interest bears on the question of the
relationship between stock returns and economic growth. Studying the
period 1872–2014, Straehl and Ibbotson (2015) found that the increase of
total payout (dividends plus share buybacks) follows the increase in per capita
GDP over very long periods. Others, including Ritter (2005), have found no
relationship (not even a negative relationship) between stock returns and eco-
nomic growth. This question will be discussed in Chapter 4.
Following the notion that the intrinsic value of a financial asset is its price
under some equilibrium condition led to a revision of the efficient market
hypothesis (EMH). LeRoy (1976) was the first to point out that the original
formulation of the EMH was a tautology.6 In response to LeRoy, Fama (1976)
introduced the idea that the true price is the price in economic equilibrium.
More recently, Pilkington (2014) posited the idea that the EMH is really a
hypothesis on an equilibrium situation of economies; that is, the EMH states
that actual prices are equal to the prices of an economy where savings and
investments are in equilibrium.
6. In his 1970 paper, Fama (p. 384) suggested that markets are efficient if prices satisfy the following equation:
$$E(p_{j,t+1} \mid \Phi_t) = \left[1 + E(r_{j,t+1} \mid \Phi_t)\right] p_{j,t},$$
where $p_{j,t+1}$ and $r_{j,t+1}$ are the price and return at time $t+1$ anticipated at time $t$ and $\Phi_t$ is the information set at time $t$. LeRoy (1976) observed that this equation is a tautology because the expectation of the price at time $t+1$ given the information set at time $t$ is, by definition, equal to the price at time $t$ multiplied by one plus the expected return at time $t+1$.
7. The accuracy of this rule is subject to changes in the amount of equity risk in a given economy that is publicly traded. For example, in the 1980s, Germany appeared to be undervalued, but that appearance was because the equities were held privately by families, while corporate debt was held by banks.
8. A total of 1,980 practitioners in the Americas (66% of the total), Asia Pacific (12% of the total), and Europe, the Middle East, and Africa (22% of the total) participated in the CFA Institute Survey (Pinto et al. 2015).
Table 2.1. Most Widely Used Valuation Approaches among Respondents to the 2015 CFA Institute Study (global ranking; N = 1,980)

Question: “In evaluating individual equity securities, which of the following approaches to valuation do you use?”

Valuation Approach                     Percentage of   Percentage of Cases in Which the
                                       Respondents     Respondent Uses the Approach (mean)a
A market multiples approach                 92.8            68.6
A discounted present value approach         78.8            59.5
An asset-based approach                     61.4            36.8
A (real) options approach                    5.0            20.7
Other approach                              12.7            58.1

a Respondents using an approach were asked for the percentage of valuation cases in which the approach is used. Thus, this column reports conditional frequencies.
Source: CFA Institute.
9. This reference is to Fernandez (2015).
10. The Kalman filter is a technique for estimating hidden variables in linear systems.
11. We thank Birinyi’s Chris Costelleo for providing us with this information in an Excel spreadsheet.
$$P = \frac{P}{E}\,E + \varepsilon, \qquad (2)$$
where P is the price of the stock, E represents the earnings per share, and
ε is random noise. Let’s leave unanswered for the moment the question of
the timeframe over which we compute E. If a true intrinsic P/E exists, then
Equation 2 would allow us to understand whether the stock is cheap or
expensive.
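If one did assume a known intrinsic P/E, the comparison that Equation 2 invites could be sketched as below; the intrinsic multiple, earnings, and market price are hypothetical numbers, not estimates from the text.

```python
# Sketch of using Equation 2: compare the market price with the price
# "fitted" by an assumed intrinsic P/E. The noise term ε is ignored,
# and all inputs are hypothetical.

def relative_mispricing(market_price, eps, intrinsic_pe):
    """(market - fitted) / fitted; negative suggests 'cheap'."""
    fitted_price = intrinsic_pe * eps  # P = (P/E) * E
    return (market_price - fitted_price) / fitted_price

# Example: earnings per share of 4 and an assumed intrinsic P/E of 15
# give a fitted price of 60; a market price of 48 is 20% below it.
print(round(relative_mispricing(48.0, 4.0, 15.0), 2))
```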
But determining a natural, intrinsic P/E is akin to determining a natural
rate of return. Sometimes the average P/E of a market is compared with a
historical average of the P/E of the same market. Figure 2.1 shows the cross-
sectional average P/E for the S&P 500 for the 146-year period 1871–2017.
As can be seen from Figure 2.1, for this 146-year period, the P/E had
a mean of 15.64, with values as low as 5.31 (December 1917) and as high
as 123.73 (May 2009, truncated in the graph). In the two most recent
decades, not only did the P/E increase, but fluctuations in the ratio also
grew. Clearly, considering the 146-year average (15.64) a natural benchmark
is problematic.
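The comparison just described, a current P/E against its long-run history, is often expressed as a z-score. The toy series below is illustrative and is not the 1871–2017 series shown in Figure 2.1.

```python
# Z-score of the current P/E against a historical sample: how many
# sample standard deviations today's multiple sits from the mean.
# The history is a made-up toy series.
from statistics import mean, stdev

def pe_zscore(history, current):
    """Standardized distance of the current P/E from its historical mean."""
    return (current - mean(history)) / stdev(history)

history = [12.0, 14.0, 16.0, 18.0, 15.0, 13.0, 17.0]
print(round(pe_zscore(history, 25.0), 2))
```

As the text notes, the difficulty is not the arithmetic but the premise: the long-run mean drifts, so treating it as a natural benchmark is problematic.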
Given these difficulties and the fact that stocks in different sectors often
exhibit considerably different P/Es, in using a multiples approach, analysts
typically create small groups of similar (comparable) firms. A multiples
Figure 2.1. The Cross-Sectional P/E for the S&P 500 and Predecessor Indexes,
1871–2017
[Line chart of the cross-sectional P/E, 0–80 scale, plotted from 1871 to 2017.]
Using a sample of US stocks from CRSP12 for the 1970–2014 period, the
authors performed an empirical study to determine the sensitivity of prices to
a number of factors and concluded,
Stock prices are, on average, affected by short-term earnings. … We find
that cash flow pricing is used primarily to price what we classify as “nega-
tive” stocks—stocks that are generally characterized as illiquid, mispriced,
or having a shorter trading history, negative earnings, or negative market
performance. Thus, the practice appears to collide with modern finance
theories. (p. 511)
Nevertheless, the authors consider the wide use of earnings rational: The
use of earnings is part of conforming to the majority.
The CAPE model—or rather, the data used to estimate the model when
valuing US equities—was the subject of a recent critique by Jeremy Siegel
(2016), professor of finance at the Wharton School of the University of
Pennsylvania. Siegel suggests that even though CAPE is among the best
forecasting models for long-term future stock returns, the CAPE model
is “overpessimistic” (p. 41) in its return forecasts because of changes in the
way GAAP earnings used in the model are calculated.13 He advocates using
National Income and Product Account after-tax corporate profits to estimate
the model. This approach, Siegel believes, will result in higher explanatory
power and significantly higher stock return forecasts.
This idea raises a general question regarding the input data when using
multiples: Do we use trailing or forward-looking multiples? A trailing multi-
ple is a multiple based on historical data; a forward-looking multiple is a mul-
tiple computed on forecast data. Value investors, including Benjamin Graham
and Warren Buffett, prefer historical data. Janet Lowe (2010) reports that
Buffett commented, “I have no use whatsoever for projections or forecasts.
They create an illusion of apparent precision. The more meticulous they are,
the more concerned you should be. We never look at projections, but we care
very much about, and look very deeply at, track records.”
A problem with using historical data is that for a firm whose earnings
change rapidly, the measure will lag.
A problem with using future market multiples is the universal problem
with forecasts—that they may be inaccurate. In their 2002 paper, how-
ever, Liu, Nissim, and Thomas report that forward earnings measures using
12. CRSP is the Center for Research in Security Prices at the University of Chicago Booth School of Business.
13. GAAP is a standard framework of guidelines for financial accounting used in Canada, the United Kingdom, and the United States.
market–equity breakpoint, they found that for the period under study, an
annually rebalanced equal-weighted portfolio of high-EBITDA/TEV stocks
earned annual returns of 17.66%, with a 2.91% annual three-factor alpha.
Gray and Vogel concluded that this measure compares favorably with E/M;
cheap-E/M stocks earned 15.23% a year.
Actually, an equal-weighted portfolio is, in itself, a good active strategy.
DeMiguel, Garlappi, and Uppal (2009) claim that the equal-weighted port-
folio is very difficult to beat.
In the aforementioned Gray and Vogel study, the authors also observe
that value-weighted portfolios exhibit similar results, though returns are
smaller than those of equal-weighted portfolios. This result is reasonable given
that equal-weighted portfolios take advantage of the relative mean-reverting
behavior of stocks. Interestingly, they also found that using forward estimates based on analysts’ consensus produced the worst performance.
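The equal- versus value-weighted comparison can be sketched with a toy cross section. The returns and market caps below are invented, chosen so that smaller stocks do better, the case in which 1/N weighting helps.

```python
# Compare an equal-weighted (1/N) portfolio with a value- (cap-) weighted
# one over a single period. All numbers are illustrative assumptions.

def portfolio_return(returns, weights):
    """Weighted sum of individual stock returns."""
    return sum(w * r for w, r in zip(weights, returns))

returns = [0.02, 0.06, 0.12]   # largest stock listed first
caps = [50.0, 30.0, 20.0]      # market capitalizations

equal_w = [1.0 / len(returns)] * len(returns)
value_w = [c / sum(caps) for c in caps]

print(round(portfolio_return(returns, equal_w), 4))  # 1/N return
print(round(portfolio_return(returns, value_w), 4))  # cap-weighted return
```

Here the 1/N portfolio earns more because it overweights the smaller, better-performing stocks relative to cap weighting; periodic rebalancing back to equal weights is also what captures the mean-reverting behavior mentioned above.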
Some, including McKinsey’s corporate finance practice in New York,
consider the P/E—ubiquitous as it is—distorted in its traditional form by dif-
ferences in capital structure and other nonoperating items, such as restructur-
ing charges and write-offs. They advise using EV/EBITA or EV/EBITDA,
the most widely used market multiples after P/E, according to participants
in the 2015 CFA Institute Survey (see Pinto et al. 2015). McKinsey’s Nolen
Foushee, Koller, and Mehta (2012) believe that these multiples do not suffer
from distortions that affect earnings ratios. Nevertheless, they write,
Comparisons based on enterprise-value multiples typically reveal a very
narrow range of peer-company multiples. A closer look at the US consumer-
packaged-goods industry is illustrative. From 1965 to 2010, the difference
in EV/EBITA multiples between top- and bottom-quartile companies was,
for the most part, less than four points, even though the industry is fairly
diverse, including companies that manufacture and sell everything from
household cleaners to soft drinks.
[Chart: median revenue growth rates of the five portfolios, plotted from Year 0 to Year 15 since inception of portfolio.]
Notes: Companies were those with inflation-adjusted revenue of at least $200 million that were
publicly listed from 1963 to 2000. Companies were divided into five portfolios based on their
growth rate at the midpoint of each decile (1965, 1975, 1985, and 1995). Portfolios were then
aligned chronologically from Year 0 to Year 15, and their median growth rates were compared.
Source: Nolen Foushee et al. (2012).
easily (is the P/E ratio of a small firm comparable to the P/E ratio of a
large firm?) and are overly sensitive to the choice of a universe of compa-
rable firms (can we compare the EV-to-EBITDA ratio of two firms with
different strategies and product mix even if they appear to compete in the
same sector?).
This means that we cannot rely solely on relative measures to value stocks. Their main use is to complement DCF methodologies by providing additional vantage points from which we can assess the value of a corporation.
Cornell and Gokhale (2016) developed a corporate valuation model
that uses both market comparables and the DCF method. Their valuation
valuation; they are for challenging the market price. … But that challenge
is successful only to the extent of the quality of the accounting in the valu-
ation model. (p. 22)
Richard Bernstein, chief executive and CIO at Richard Bernstein
Advisors and formerly chief investment strategist at Merrill Lynch, offered
another angle: valuation models as a tug of war between buyers and sellers.
Bernstein commented,
Valuation is in the eye of the beholder, and there will always be a bid–ask
spread between the buyer’s and the seller’s valuation. There is a tug-of-war
between the seller of the asset and the buyer, and how high up the income
statement one values a company (i.e., sales instead of earnings) demonstrates
who is winning that tug of war. As an investor, one wants to skew the anal-
ysis as much as possible in one’s own favor. Yet, most valuation models are
based on a “pure” valuation, which typically favors the seller.
He added that, in his experience, using GAAP earnings rather than
operating earnings, EBIT, EVA, or other measures resulted in better portfo-
lio performance.
What Drives Valuations? Even assuming that we can determine the
intrinsic or at least the relative price of a firm’s stock by using valuation mod-
els, a number of important questions remain. First, what drives the valua-
tion methods? Although the idea that growth alone drives multiples is widely
believed, McKinsey consultants Goedhart, Koller, and Wessels (2005)
write, “In reality, growth rates and multiples don’t move in lockstep. Growth
increases the P/E multiple only when combined with healthy returns on
invested capital, and both can vary dramatically across companies” (p. 8).
MFS Investment Management’s institutional portfolio manager Robert
M. Almeida, Jr. (2016) writes, “Fundamentals drive cash flow, cash flow
drives profits, and profits drive stock prices” (p. 3). Citing Compustat’s EPS
data for the 1994–2015 period, which are shown in Figure 2.3, Almeida
continues,
When we look back at companies that have made money versus those that
haven’t, we see those with profits outperforming those that lose money,
which isn’t surprising. But the magnitude of the performance is significant.
Over the past 20 years companies that were profitable were up more than
650% (cumulative), while unprofitable ones were down 23%. (p. 2)
Robert Jarrow, professor of investment management at Cornell University,
suggests that market prices are set not by fundamental values but by expected
(or desired) resale values. In a recent paper on equity prices, Jarrow (2016)
Figure 2.3. Cumulative Return for Positive and Negative Earnings, 1994–2015
[Line chart: cumulative return (%), on a scale from –200 to 600, for portfolios of positive- and negative-earnings companies, 1994–2015.]
Note: Each portfolio of positive and negative earnings companies was rebalanced monthly and
market cap weighted.
Sources: Almeida (2016); Compustat EPS data as of 31 December 2015.
which reversion to intrinsic value is likely to take place, nor do we know the
stability of intrinsic value:
Intrinsic value is supposed to serve as a type of gravitational pull for the
market price of a stock, but too few analysts try to figure out how strong
that pull is. How long will it take to have the market price converge to
intrinsic value? How does the intrinsic value change as you wait? Factoring
in the uncertainty around how long you might have to wait and the uncer-
tainty around how the intrinsic value might change while you wait should
make even the best of analysts more humble in the way they put client capi-
tal at risk.
Ang and Bekaert (2007) looked at the predictive power of the present
value model. In their widely cited paper, they report that they did find predict-
ability in stock returns but suggest refocusing the debate in three directions:
First, our results suggest that predictability is mainly a short-horizon,
not a long-horizon, phenomenon. Second, the strongest predictability
comes from the short rate and not from yield variables with price in the
denominator. … Third, there are tantalizing cross-country predictability
patterns that appear stronger than domestic predictability patterns. (p. 47)
ATP’s Kjaer commented,
In the inter-sector cross section, we do find some predictive power in rela-
tive valuation models. Obviously, stocks can be/are cheap/expensive for
a reason, but on average, we still find value in these models in the cross
section. Across time or sector, we have at present a relatively low convic-
tion in these types of models. However, in the case of extreme mispricing
relative to the model, we do have some confidence in these types of signals
across time.
As for the widely used price-to-earnings ratio in its 10-year cyclically
adjusted version (CAPE), the model was found to have little short-term pre-
dictive ability in a study by Antti Ilmanen (2016), principal of the quanti-
tative asset management firm AQR Capital Management. Using the range
of CAPE values without look-ahead bias (i.e., using only information that would have been known at the time the model was applied to time the market) and going back to 1900, Ilmanen found that, over the full period,
the use of CAPE as a predictive model only “mildly” outperformed a buy-
and-hold strategy, with all the outperformance occurring in the first half of
the sample. It would have underperformed for the last 50 years of the 20th
century. His conclusion: Neither a doubling of the market nor a historically high valuation is a reliable sell signal; only with hindsight does CAPE appear useful for predicting future returns and hence as a market-timing measure.
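The CAPE ratio itself is simple to compute: divide the current price by the average of the last ten years of inflation-adjusted earnings. A minimal sketch follows; all price, earnings, and CPI figures are made up for illustration:

```python
def cape(price, earnings, cpi):
    """Cyclically adjusted P/E: price over the 10-year average of real earnings.

    price    -- current index level
    earnings -- the last 10 annual (nominal) earnings figures
    cpi      -- CPI levels for those same 10 years; cpi[-1] is the current level
    """
    current_cpi = cpi[-1]
    # Restate each year's earnings in today's dollars before averaging.
    real_earnings = [e * current_cpi / c for e, c in zip(earnings, cpi)]
    avg_real_earnings = sum(real_earnings) / len(real_earnings)
    return price / avg_real_earnings

# Hypothetical data: earnings and prices rising with inflation.
earnings = [50 + 3 * i for i in range(10)]
cpi = [100 + 2 * i for i in range(10)]
print(round(cape(2000, earnings, cpi), 2))
```

Timing rules built on CAPE then compare the current reading with its historical range; as Ilmanen's results suggest, the sample period over which that range is estimated matters greatly.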
What If Peer Firms or Whole Markets Are Mispriced? The fourth key
question: What do investors do if the comparables are mispriced or, more spe-
cifically, when valuations of a sector or the whole market are high? Consider,
for example, the telecommunications, media, and technology (TMT) sector
in the period 1995–2001, when valuations were detached from economic fun-
damentals. Before the bubble burst, leaving investors with a loss of $5 trillion
in the market value of TMT companies from their high of March 2000 to
their low in October 2002, value investors saw clients walk out the door in
search of higher returns. Alpha Architect blogger David Foulke (2016) notes
that in the period of July 1998 to the end of February 2000, the NASDAQ
was up 145%, while Warren Buffett’s Berkshire Hathaway was down 44%,
underperforming the NASDAQ by 189 percentage points. Lleo remarked,
Overall, the return of financial assets, such as stocks, cannot stay discon-
nected very long from economic growth. This was Warren Buffett’s central
thesis when he proposed the market-value-to-GNP ratio as a measure of
mispricing shortly after the dot-com bubble burst.
14 Chee et al. (2013) formalized the Graham–Dodd approach to building a valuation-based framework to explain stock returns and show that returns can be decomposed into expected returns, returns driven by unexpected cash flow news, and unexpected discount rate news.
models explained just 3% of the returns, rising to 9%, 17%, and 25%, respec-
tively, as they increased the investment horizon to 3-, 6-, and 12-month peri-
ods. The Macquarie Equities Research group cited the study by Chee et al.
(2013) showing that company fundamentals can explain about 60% of stock
returns over a five-year period.
A recent blog post by Damodaran (2016) raised a number of issues rela-
tive to the use of mean reversion, widely considered a robust underpinning of
many investment strategies. Damodaran remarked that, even if one believes
stock returns are mean reverting, using reversion might be tricky. First, the
mean will critically depend on the time period for which it is estimated.
Second, estimating the time to reversion is difficult but critical from the point
of view of investment strategies. In addition, structural breaks in the markets
can invalidate mean reversion. Creating a test strategy to understand the abil-
ity of CAPE to predict returns, Damodaran found that CAPE is better at
predicting short-horizon returns than long-term returns. Results, however,
depend on the choices made in estimation: Different results can be obtained
if slight changes are made in the estimation parameters. Damodaran con-
cluded with a warning that in these times of economic change, one has to
be particularly attentive in using ideas such as the mean reversion of CAPE.
“Statistical significance is not cash in the bank,” he wrote.
Another skeptical view on the use of mean reversion in asset management
comes from Charles Schwab’s Greiner:
I personally subscribe to the concept that financial asset prices and metrics
like valuation generally never mean-revert. Mean reversion implies station-
ary behavior in the time series. Since market prices are nonstationary, there
really is no mean to revert to. In physics (and in finance), the behavior called
“mean reverting” is more correctly called “anti-persistent,” as in the physics
of signal processing.
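Greiner's stationarity point can be made concrete: a stationary, mean-reverting series has a first-order autoregressive coefficient strictly below one, while a random walk (nonstationary) has a coefficient of one, so there is no fixed mean to revert to. The sketch below estimates that coefficient by ordinary least squares on simulated data; it is illustrative only (formal tests such as the augmented Dickey–Fuller test add lagged differences and use proper critical values):

```python
import random

def ar1_coefficient(series):
    """OLS slope of x[t] on x[t-1]; < 1 suggests mean reversion, ~1 a random walk."""
    x = series[:-1]
    y = series[1:]
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

random.seed(0)
# Mean-reverting series: each step pulls halfway back toward zero.
mr = [0.0]
for _ in range(5000):
    mr.append(0.5 * mr[-1] + random.gauss(0, 1))
# Random walk: shocks accumulate, no pull toward any mean.
rw = [0.0]
for _ in range(5000):
    rw.append(rw[-1] + random.gauss(0, 1))

print(round(ar1_coefficient(mr), 2))  # near 0.5
print(round(ar1_coefficient(rw), 2))  # near 1.0
```

On a nonstationary price series, the estimated "mean" is an artifact of the chosen sample window, which is essentially Damodaran's first objection above.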
15 For his papers, see http://som.yale.edu/nicholas-c-barberis.
same trend, the trading activity creates a feedback loop, which may lead
to an amplification and acceleration of the trend. The risk of a flash crash
becomes substantial. These phenomena are particularly dangerous because
they are nonlinear, which makes them difficult to predict.
Lleo suggested putting safeguards in place: “Computing an intrinsic
value and using this value as an anchor in a trend-following algorithm—for
example, if you are more than X% above/below the intrinsic value, then stop
following the trend—can reduce the risk of following the crowd into flash
bubbles and crashes.”
What about Investor Objectives? A general question is, What are
the objectives of the investor? Is the investor striving to beat the market or
achieve some other goal? For Slager, the objective of a pension fund is not to
beat the market—a chapter he considers time to close—but to help the fund
achieve its goals. He cited added value for active managers in working with
funds to create new metrics and strategies to aid the fund to reach its goal.
“Trustees,” he said, “no longer need to be drawn into overly technical asset-
pricing discussions but can focus, instead, on what matters—assessing where
active management can work.”
Valuing IPOs
Clearly, valuing IPOs can be problematic. Warren Buffett once famously said
that if he were teaching a finance course, he would ask students to evaluate
an internet stock, and any student giving an answer would flunk.16 (He made
this statement at a time when no internet stock had yet made a profit.)
In May 2016, Investopedia’s John Burke noted that 72% of the IPOs
issued in 2015 were trading below the issuance price a year later and that the
average return for a 2015 IPO stock issued in the United States was –19%.
According to data from FactSet, from 1 January to 23 December 2016, while
the S&P 500 Index was up 10.8%, the First Trust US IPO exchange-traded
fund (ETF) was up only 6.6% and the Renaissance IPO ETF was actually
down 0.4% on a year-to-date basis. The problem of IPOs trading below their
offer price and/or underperforming with respect to the overall market has
led to a loss of investor appetite, which is reflected in the number of compa-
nies going public on US exchanges. According to FactSet analyst Andrew
Birstingl (2016), only 106 IPOs were issued in 2016—the lowest number
since 2009, when 64 companies went public. The amount of money collec-
tively raised by these 106 IPOs was also down—to $20.2 billion (a 38.1%
decline from 2015), the smallest annual total since 2002, when gross pro-
ceeds were $19.5 billion.
Perhaps the IPOs that most retain the media’s attention are technology
IPOs, where performance has not been stellar. According to Reuters reporter
Dan Burns (2017), globally, shares of the 25 largest technology IPOs per-
formed poorly in their first 12 months on the public market: 16 of the 25
suffered declines from their debut-day closing price, with 8 of the 10 big-
gest falling by 25%–71%. The median one-year performance of the largest
technology IPOs was –22.3%. The medium-run performance of Snap’s stock
following the 1 March 2017 IPO will likely affect investors’ appetite for IPOs
throughout the year.
Another explanation for the recent dearth of IPOs is offered by Gao,
Ritter, and Zhu in “Where Have All the IPOs Gone?” (2013). They note that
16 To view a related video, see www.youtube.com/watch?v=nrSB1sLgWLE.
the drop in IPO offerings was especially high among small firms and hypoth-
esize that the advantages of selling out to a larger organization have increased
relative to the benefits of operating an independent firm.
Back in 1994, Ibbotson, Sindelar, and Ritter (1994) wrote, “The
market has a great deal of difficulty in valuing issuing firms appropriately”
(p. 66). They identified three anomalies still present in IPO valuations
today: (1) short-run underpricing resulting in first-day returns that average
10%–15%, (2) cycles in the volume of new issues and the magnitude of first-
day returns, and (3) long-run (five-year) underperformance. Ibbotson et al.
consider these anomalies a challenge to the efficient market hypothesis and
conclude that raising capital “is subject to the whims of the market, as well as
the fundamentals of the company” (p. 74).
So, what tools do analysts have for valuing IPOs? Essentially, the same
tools discussed in Chapter 2 that sell-side and buy-side analysts use to value
publicly traded companies, but with some additional problems.
Penman (2016) comments thus on the valuation of IPOs:
While one cannot hope to pin down “intrinsic value” with certainty, valua-
tion aims to reduce uncertainty in investing, and standard approaches that
often introduce uncertainty do not serve us well. They even lend themselves
to “playing with mirrors.” Sell-side bankers like the models; set with the
“due-diligence” task of supporting an issue price with a formal valuation,
they look for a model that can establish, or rather justify, practically any
value one wishes, however high, for a really outstanding issue. But the
investor on the buy side of that issue, or a fiduciary of other people’s money,
is cautioned: caveat emptor; beware. (p. 4)
In “Valuing IPOs,” Kim and Ritter (1999) consider the usefulness of
various approaches as benchmarks for valuing IPOs. They report that valuing
IPOs on the basis of P/E, price-to-sales (P/S), enterprise value-to-sales, and
enterprise value-to-operating cash flow ratios has some predictive value when
used with earnings forecasts and adjusted for differences in growth and prof-
itability. However, the authors found that, when used with historical account-
ing numbers, multiples are imprecise in their ability to forecast future cash
flows of IPOs. They report a similar finding for another widely used valuation
method—the discounted cash flow (DCF) method.
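The comparable-multiples approach Kim and Ritter test reduces to a few lines of arithmetic: take the median of a chosen multiple across peer firms and apply it to the issuer's forecast (not historical) figure. All peer ratios and the earnings forecast below are hypothetical:

```python
from statistics import median

def value_by_multiple(peer_multiples, issuer_metric):
    """Implied value = median peer multiple times the issuer's forecast metric."""
    return median(peer_multiples) * issuer_metric

# Hypothetical peers' forward P/E ratios and the issuer's forecast earnings.
peer_pe = [18.0, 22.0, 25.0, 31.0]
forecast_earnings = 40e6          # next-year earnings forecast, USD

equity_value = value_by_multiple(peer_pe, forecast_earnings)
print(f"Implied equity value: ${equity_value / 1e6:.0f}M")  # median P/E of 23.5 -> $940M
```

Kim and Ritter's finding is precisely about the inputs: earnings forecasts with growth and profitability adjustments make such estimates useful, whereas raw historical accounting numbers do not.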
Commenting on the use of market multiples for valuing IPO-issuing
firms, An Yan, a professor of finance at Fordham University’s Gabelli School
of Business, said,
The use of multiples to value listed firms where you have reliable earn-
ings and information sources is OK, but using multiples to value IPOs is
problematic: It is hard to define comparable firms, firms in an early stage of
17 Unlike US or UK law, French law makes available how underwriters value companies and the methods they use at a stage prior to taking a company public. Several other European countries have such laws.
public. Part of underpricing is due to the deliberate price discount that attracts
investors in the early stage of the IPO process.”
Yan discussed the two-step process in valuing and pricing IPOs. In the
first step, the investment bank performs due diligence—the S-1 filings and
balance sheet analysis—to arrive at a rough idea of value based on fundamen-
tals. The second step is the “road show,” during which the investment bank
tests initial investor sentiment. “This,” Yan noted, “is key.” He continued,
The price is determined by the industry perspective and the marketing
environment. This is very important in pricing an IPO. In the end, the
price is determined by the information exchange between the underwriter
and investors, not so much by fundamentals. An investment bank cannot
underprice an IPO on grounds of the fundamentals—which might be det-
rimental to the interests of the issuing firm. The underwriter must find the
match between demand and the price. It is a question of market timing.
Firms wanting to issue an IPO find a window to go public and do so when
the market is receptive. The price depends on market sentiment, demand at
the moment of going public. It is not a question of fundamental value.
The positive role of investment banks as underwriters was studied by
Bajo, Chemmanur, Simonyan, and Tehranian (2016) in their paper on under-
writer networks, investor attention, and IPOs. The authors studied how
central lead underwriters arrive at pricing through an information exchange
with their investment banking network. This information exchange allows
the underwriter to both disseminate information on the issuing firms and
simultaneously extract information from institutional investors that will
prove useful in pricing the IPO. Bajo et al. found that IPOs underwritten by
more central lead underwriters are associated with higher absolute values of
offer price revisions, higher IPO and secondary market valuations, and higher
IPO initial returns. The authors also found that IPOs underwritten by cen-
tral lead underwriters are typically covered by a larger number of financial
analysts, have large institutional investors holding shares, and (subsequently)
have greater secondary market liquidity and better returns over a period of six
months to one year after issuance.
Matteo Bonaventura (now a buy-side analyst at Banor SIM) and
Giancarlo Giudici (2016) documented the positive role of pre-IPO book-
building activity in valuing and pricing IPOs in the Italian market for the
2000–09 period.18 Noting that one of the most common techniques used in
18 As in France, Italian law makes available information about how underwriters value companies and the methods they use at a stage prior to taking a company public. Book building is the process underwriters use to assist in price discovery when seeking to raise equity for clients via a public offering—either an IPO or a follow-on public offering. The bids and the number of shares that a bidder wants at the bid are collected from both institutional and retail investors during the period the offer is open. After the bidding process is closed, the issue price to be used by the underwriter is then determined by the demand generated from the book-building process.
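The demand-aggregation step at the heart of book building can be sketched as follows: sort the collected bids from highest to lowest limit price, then find the highest price at which cumulative demand covers the shares on offer. A real book build also weighs investor quality and deliberately underprices; the bids here are invented:

```python
def clearing_price(bids, shares_offered):
    """bids: list of (limit_price, shares) from the book.
    Returns the highest price at which cumulative demand >= shares offered,
    or None if the book does not cover the deal."""
    demand = 0
    for price, shares in sorted(bids, reverse=True):
        demand += shares
        if demand >= shares_offered:
            return price
    return None

book = [(12.0, 300_000), (11.5, 500_000), (11.0, 400_000), (10.5, 800_000)]
print(clearing_price(book, 1_000_000))  # cumulative demand reaches 1.2M shares at 11.0
```

In practice, the underwriter sets the final offer price at or below this market-clearing level, which is one source of the deliberate first-day discount discussed earlier.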
19 Purnanandam is now a professor of finance at the University of Michigan’s Ross School of Business; Swaminathan is now a partner and the director of research at LSV Asset Management in Chicago and an adjunct professor of finance at Northwestern University’s Kellogg School of Management. Swaminathan’s work on valuing the Dow Jones Industrial Average won a Graham and Dodd Award of Excellence from CFA Institute.
the 2008 market crash and beyond. His study includes both technology and
nontechnology companies, loss-making as well as profitable IPO issuers, and
already listed firms. Loss-making IPO issuers made up roughly half of all
IPOs in the United States during the period studied. Using a variety of valu-
ation methods adopted by researchers working on similar studies and valu-
ation methods from the mergers and acquisitions field (e.g., Rhodes-Kropf,
Robinson, and Viswanathan 2005), Zörgiebel found that IPOs are, in gen-
eral, valued higher than listed peer companies and that IPOs with negative
earnings were valued higher than IPOs with positive earnings. Figure 3.1
shows the discontinuity of the market-to-book (M/B) premium versus the
net income margin.
Another finding is that IPOs with negative earnings provide long-term
underperformance relative to both listed companies and IPOs with positive
earnings.
Zörgiebel suggests that factors other than higher growth expectations
might be part of this phenomenon and identifies media coverage and hetero-
geneous beliefs as playing a substantial role in IPO valuations.
Why are investors ready to accept high valuations for IPOs? Degeorge,
Derrien, and Womack (2007) explored the role of “analyst hype” (p. 1021)
[Figure 3.1: M/B premium plotted against net income margin (x-axis from –0.5 to 0.4).]
in the IPO process. In a study of the French market, they found that book
building as a selling procedure was related to the perceived benefit analyst
coverage provided to the success of the issuance and (possibly) post-IPO
coverage in an attempt to ensure aftermarket liquidity. They also found that
analysts affiliated with the lead underwriter issue more (and more favorable)
recommendations for the book-built IPOs and that lead underwriters “lean
on” unaffiliated analysts to provide favorable coverage. Analysts affiliated
with lead underwriters were also found to put out positive recommendations
(“booster shots,” p. 1023) following the poor stock market performance of
recent book-built IPOs.
Clearly, IPO valuations are more subject to qualitative or behavioral ele-
ments than are shares of already listed firms. Data on the fundamentals of
an IPO business, such as cash flow, the balance sheet, and profitability, are
often unreliable and/or not audited. Such factors as demand or the “narrative”
(or marketability of the business), including assumptions about the company’s
future growth projections, play a large role.
Chemmanur and Yan (2017) document how advertising by firms going
public affects both the valuation and price revisions of the IPO as well as
long-run post-IPO stock returns. In their sample of US IPOs from 1990 to
2007, they compared IPO firms with high and low advertising intensity in
the years running up to their IPOs. They found that companies going pub-
lic with high advertising intensity prior to their IPOs (1) are valued higher
both in the IPO and in the immediate aftermarket, (2) are associated with
greater upward price revisions from the pre-IPO filing range means, and (3)
have lower long-run post-IPO stock returns. Specifically, Chemmanur and
Yan reported that among their sample firms, the profitability (EBIT/assets) of
low-advertising-intensity firms was, on average, 9.9%, whereas that of high-
advertising-intensity firms was, on average, –6.7%.20
Zörgiebel’s (2016a) study of IPOs with negative earnings identified mar-
keting campaigns by venture capitalists and underwriters as the valuation
drivers in the IPO process. In a study specific to media coverage and startup
valuations, Zörgiebel (2016b) notes that as of October 2015, Crunchbase
counted 153 venture capital–funded startups in the Unicorn Club ($1 billion
or over) with a (post-money) valuation of about $529 billion and total funding
20 Interestingly, in their working paper “Advertising, Investor Recognition, and Stock Returns,” Chemmanur and Yan (2011) found a similar pattern for listed stock returns. They tested Merton’s investor recognition theory by examining the impact of advertising on stock returns and found that a higher level of advertising growth is associated with higher stock returns in the high-advertising year and lower ex post long-run stock returns.
Figure 3.2. Average Transaction Value vs. Media Coverage per Day of 153 Venture
Capital–Funded Startups in the Unicorn Club, 1995–2015
[Dual-axis line chart, 1995–2015: average transaction value (adjusted; left axis, 0–2,000) and media coverage per day (right axis, 0–2.0).]
fueled by media coverage and hype around that firm, uncertainty is higher,
and valuation levels might be pushed upwards.
The impact of uncertainty on investor behavior was studied by Alok
Kumar (2009), a professor of finance at the University of Miami School of
Business Administration. Kumar found that in situations where valuation is
highly uncertain and stocks are difficult to value, people are more likely to
use heuristics (rules of thumb), thereby creating stronger behavioral biases
and bigger investment mistakes. Kumar writes, “Both stock-level and market-
wide uncertainty adversely influence [investors’] decisions” (p. 1377). These
remarks pertain equally to IPOs and private equity discussed below.
The role of psychology was mentioned as a leading factor in IPO invest-
ment decision making by Sébastien Lleo, a finance professor at NEOMA
Business School (France). Citing work by Hersh Shefrin suggesting that
behavioral biases such as framing and conflicts of interest explain a part of the
long-term underperformance of IPOs, Lleo said,
The effect of these biases is precisely to disconnect decision makers from the
type of coherent and articulated mental framework provided by DCF meth-
ods. With IPOs, as with mergers and acquisitions, the risk in the decision
process of both managers and investors is that the exciting story (“writing a
new page in corporate history,” “grabbing some financing when the market
is hot,” and so on) will eclipse down-to-earth valuation. When this occurs,
you may observe long-term underperformance for both the company and its
stock.
Cornell and Damodaran (2014) studied the role of market sentiment
in the case of Tesla Motors. Following DeLong, Shleifer, Summers, and
Waldmann (1990), Cornell and Damodaran describe market sentiment as
a “belief about future cash flows and investment risks not justified by facts
at hand” (p. 139). Tesla went public in June 2010 (just one month after
Toyota Motor Corporation announced that it was taking a 2.5% stake in the
California company), with shares priced at $17. Cornell and Damodaran stud-
ied the stock’s rise from $36.62 on 22 March 2013 to $253.00 on 26 February
2014. To value Tesla’s shares, they constructed a DCF model and ran it at
three points: before the start of the run-up in price, during the run-up, and
at the end of the run-up. They found, “The valuations all yield value estimates
that are well below the market price (at the time of the valuation), with the
price at more than two and one-half times an aggressively optimistic value
estimate,” concluding that “investor sentiment played an important role in the
run-up” (p. 150).
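Generically, a DCF of the kind Cornell and Damodaran ran projects free cash flows over an explicit horizon, adds a Gordon-growth terminal value, and discounts everything at the cost of capital. The cash flows and rates below are placeholders, not their actual Tesla inputs:

```python
def dcf_value(cash_flows, terminal_growth, discount_rate):
    """Present value of explicit-period FCFs plus a Gordon terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv

# Placeholder projections: FCF in $M over five years, then 2% growth forever.
print(round(dcf_value([100, 150, 220, 300, 380], 0.02, 0.10), 1))
```

Running the same model at several dates with updated inputs, as they did, lets the analyst compare each market price with a contemporaneous value estimate rather than a stale one.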
21 See http://people.stern.nyu.edu/adamodar/pdfiles/ovhds/inv2E/PvtFirm.pdf.
determined by the net operating profit after tax (NOPAT) margin, the cost
of capital (typically the firm’s weighted average cost of capital or WACC),
and expected growth. The multiple varies directly with the margin and
growth and inversely with the cost of capital. Even if the private firm being
valued has a margin and growth expectation equivalent to that of the public
firm, the cost of capital for the private firm will always be higher, if for no
other reason than lack of liquidity. Hence, using an unadjusted public-firm
multiple to value a private firm will likely result in overvaluation of the pri-
vate firm.
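The relationship described above can be written as a growing-perpetuity "value driver" formula. Under the simplifying assumptions of a constant NOPAT margin and growth rate, and ignoring reinvestment, the value-to-sales multiple is margin × (1 + g) / (WACC − g). The sketch below shows how a higher private-firm cost of capital compresses the multiple (all inputs illustrative):

```python
def value_to_sales(nopat_margin, wacc, growth):
    """Growing-perpetuity value driver (reinvestment ignored for simplicity)."""
    assert wacc > growth, "perpetuity formula requires WACC > growth"
    return nopat_margin * (1 + growth) / (wacc - growth)

# Same margin and growth; only the cost of capital differs.
public_multiple = value_to_sales(0.15, wacc=0.08, growth=0.03)    # 3.09x sales
private_multiple = value_to_sales(0.15, wacc=0.11, growth=0.03)   # 1.93x sales
print(round(public_multiple, 2), round(private_multiple, 2))
```

With identical margin and growth, raising the WACC from 8% to 11% cuts the implied multiple from about 3.1 to about 1.9 times sales, which is why applying an unadjusted public multiple overvalues the private firm.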
According to Feldman, the DCF and market multiples should be used
only for mature private equity firms or private equity firms that are already
in the commercial stage of development. His criticism of the use of DCF to
value early-stage firms (firms that have not reached commercial development
or are in the very early stages of commercial development) is as follows:
In these cases, the analyst might use a DCF and apply a very high discount
rate to reflect the firm’s uncertain future, but this simply takes away the
upside potential, and while this may be a low-probability event, the value
achieved may be very large. So, even when multiplied by a low probability,
the resulting value at the valuation date may be substantive.
Feldman believes that a Monte Carlo or option technique is preferable,
though it is not generally used.
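Feldman's preference can be illustrated with a small Monte Carlo sketch: instead of discounting a single expected cash flow at a punitive rate, simulate discrete outcomes so that the low-probability, high-payoff branch survives in the average. Every probability, payoff, and rate below is invented:

```python
import random

def monte_carlo_value(n_paths, p_success, payoff_success, payoff_failure,
                      discount_rate, years, seed=42):
    """Average discounted payoff across simulated binary outcomes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        payoff = payoff_success if rng.random() < p_success else payoff_failure
        total += payoff / (1 + discount_rate) ** years
    return total / n_paths

# A 10% chance of a $500M exit in 7 years, else a near-zero outcome, at a 12% rate.
value = monte_carlo_value(100_000, 0.10, 500e6, 5e6, 0.12, 7)
print(f"${value / 1e6:.0f}M")
```

The same structure extends naturally to staged milestones (seed, development, commercialization), each with its own success probability, which is where simulation earns its keep over a single high-discount-rate DCF.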
Returning to the Grant Thornton (“Private Equity Valuations” 2015)
survey on methods frequently used to value privately held firms, participants
cited more rule-of-thumb methods than simply DCF and multiples. These
methods include the use of the price of recent transactions, such as merger
and acquisition transactions (70.5%), and recent transactions involving assets
that are the same or similar to the asset in question (69.2%). Feldman notes
that if the transactions are private, they reflect a lack of liquidity (the methods
cited above produce pre-liquidity-adjusted values). In addition, using recent
transactions of comparable companies with similar attributes (e.g., industry
group, recent timing, business offerings, and capital structure) is question-
able, because transactions are rarely directly comparable; value might be tied
to metrics other than revenue. However, in some industrial sectors where
normal profitability does not vary much, there might be an industry valuation
benchmark. Examples include price per subscriber in cable television or price
per bed for nursing home operators.
A source of transaction data for North America is GF Data Resources’
searchable database of business transactions in the mid-market ($10 million to
$250 million range). Drawing from a pool of 206 private equity firms, mezza-
nine groups, and other financial sponsors, the firm’s database has information
22 NAICS, the North American Industry Classification System, was developed under the auspices of the Office of Management and Budget. It was developed jointly by the US Economic Classification Policy Committee, Statistics Canada, and Mexico’s Instituto Nacional de Estadistica y Geografia and was adopted in 1997 to replace the Standard Industrial Classification codes. Other classification systems exist: At the international level, Standard & Poor’s and Morgan Stanley Capital International jointly developed the Global Industry Classification Standard system widely used by financial practitioners. Other systems include the International Standard Industrial Classification of All Economic Activities system developed by the United Nations.
23 The Financial Accounting Standards Board is United States specific; the international standards are under the International Accounting Standards Board.
risk, allows a better valuation of firms when optionalities exist because of, for
example, flexibility in the corporate strategy, as is typical with venture capital
at seed or in the startup phase. Damodaran writes, “The equity in a firm is a
residual claim, that is, equity holders lay claim to all cash flows left over after
other financial claimholders (debt, preferred stock, etc.) have been satisfied,”
concluding that “equity can thus be viewed as a call option [on the value of]
the firm, where exercising the option requires that the firm be liquidated and
the face value of the debt (which corresponds to the exercise price) paid off”
(p. 57). Application of real option theory is based on this observation (i.e.,
that the stock price is an option on the value of the firm).
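Under Damodaran's simplifying assumptions (a single zero-coupon debt issue and lognormal firm value), equity maps onto the Black-Scholes call formula with firm value as the underlying and the face value of debt as the strike. A sketch with hypothetical inputs:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def equity_as_call(firm_value, debt_face, maturity, rate, volatility):
    """Black-Scholes call on firm value; strike = face value of debt."""
    d1 = (log(firm_value / debt_face) + (rate + 0.5 * volatility ** 2) * maturity) \
         / (volatility * sqrt(maturity))
    d2 = d1 - volatility * sqrt(maturity)
    return firm_value * norm_cdf(d1) - debt_face * exp(-rate * maturity) * norm_cdf(d2)

# Firm worth $100M, $80M zero-coupon debt due in 5 years, 3% rate, 40% volatility.
print(round(equity_as_call(100e6, 80e6, 5.0, 0.03, 0.40) / 1e6, 1))
```

Note that equity retains value here even when firm value is close to the face value of debt; that option-like upside is precisely what matters for distressed or early-stage firms.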
Damodaran also observes, however, that applying real option theory to
the valuation of firms is challenging. It requires a number of inputs that are
not easy to estimate. For example, option valuation models typically make
the simplifying assumption that the debt of the firm is a single zero-coupon
bond. However, firms do not have only one zero-coupon bond outstand-
ing. Estimating the volatility of the underlying value adds another layer of
difficulty.
As with valuation in general, the use of more than one valuation meth-
odology is typically recommended, because doing so allows the analyst to use
one method to cross-check another. And as with IPOs, investment banks
and, more generally, equity analysts play an important role in valuing pri-
vately held firms.
Summing up the valuation of private firms—an asset class estimated to
be worth almost $2.5 trillion globally—staff at Investopedia (“Valuing” 2016)
wrote that the process “is full of assumptions, best guess estimates, and indus-
try averages.” In an industry discussion organized by Financier Worldwide
Magazine (“Q&A: Valuations” 2014), Hilco Enterprise Services’ head of
enterprise valuation and corporate finance, Jason Frank, said,
Since all parties use essentially the same and well-known valuation meth-
odologies, inputs become the most important part of the equation. If the
assumptions are accurate and are properly applied within the chosen valu-
ation methods, much of the subjectivity is eliminated and the valuation
becomes more of a science. … Regardless of which method is utilized, it is
very important to scrutinise the underlying assumptions. The quality, rel-
evancy, and accuracy of inputs separate the valuation from a “science” to an
“art.” A formula and its inputs can be manipulated to provide any result that
is desired. A true valuation will not only follow the guideline methodolo-
gies but will be realistic in its findings.
In comparing the valuation process of private equity with that of IPOs,
our interviewees noted some similarities and some differences. Lleo remarked
that, as with IPOs but less consistently, the valuation gap observed with pri-
vate equity and leveraged buyouts (LBOs)
may reflect behavioral pitfalls such as overconfidence, excessive optimism,
confirmation bias, and framing. When a bidding war erupts, as was the
case in the 1987 LBO of RJR Nabisco, or more recently when Marriott and
AnBang locked horns over Starwood, we may also observe the effect of the
winner’s curse.
Fordham University’s Yan remarked,
In valuing a privately held firm, fundamental values and operational effi-
ciency are more important than they are in valuing an IPO. On the finan-
cial side of leverage, you need to be careful. If you go by sentiment, it might
be hard to exit a deal. On the other hand, when you talk about the second
step—that is, taking a private firm public—you encounter the same prob-
lems as in the IPO space.
As with IPOs, problems in evaluating private equity include not only the
disclosure of financial information but also the importance of such factors
as supply and demand. Gompers and Lerner (2000) studied 4,000 venture
investments between 1987 and 1995 and found a “strong positive relation
between the valuation of venture capital investments and capital inflows”
(p. 283). They rejected the possibility that changes in the valuation of the firms in their sample were related to the ultimate success of the firms. Specifically,
Gompers and Lerner found that a doubling of inflows into venture funds led
to a 7%–21% increase in the value of private equity transactions. They also
found a marginal impact of a doubling in public market values, which added a
15%–35% increase in the valuation of private equity transactions.
The strong positive relationship Gompers and Lerner found between the
valuation of venture capital investments and capital inflows in their 2000
study was reported by Investopedia journalist Ryan Downie (2016) in “Is the
Private Equity Bubble Still Expanding?” Downie remarks that during the
dot-com bubble of the late 1990s and 2000, global technology-sector valua-
tions reached an average P/E 80%–300% higher than the average for equities
in other sectors; for the 2010–15 period, the average P/E for public technol-
ogy companies was 20—only 10% higher than the market average as a whole.
Commenting on the divergence of valuation between publicly and privately
held technology firms, Downie writes,
The growth in capital earmarked for private equity was not met with a simi-
lar expansion in suitable investment opportunities. This imbalance drove
valuations higher. However, publicly traded technology firms did not expe-
rience valuation expansion of the same magnitude.
The problem of valuations in public versus private markets was also the
subject of an article by Investopedia’s Trevir Nath (2016). Writing at the
beginning of 2016, Nath remarked, “The mind-boggling valuations [of pri-
vate technology companies] over the past five years are more indicative of the
markets than of the true value of the company itself.” According to Nath,
seeing private tech companies valued at 100 times revenues just before going
public is not uncommon, but, he noted, apart from Facebook, almost no com-
pany has achieved a forward revenue multiple higher than 10.
Valuations appear to be high in some other sectors also. For example, in
the midmarket, Pennsylvania-based GF Data Resources found that valua-
tions for 52 completed US middle-market private equity transactions for the
year 2016 averaged 6.9 times trailing 12-month adjusted EBITDA, a record
high in their dataset that goes back to 2003 (see GF Data Resources 2017).
Can the industry do better in valuing private equity?
Frank remarked,
It is extremely difficult to arrive at an accurate valuation of a privately held
company in today’s market. Not only has the recession wreaked havoc on a
business’s operational, financial, and strategic initiatives, but there are many
other factors that contribute to the complexity of valuing the company.
Companies are faced with severe liquidity concerns, unpredictable con-
sumer demand, and challenges to supplier relationships. These factors are
creating significant uncertainty and unpredictability regarding current per-
formance, as well as a lack of visibility for future projections, which makes
accurately valuing a private entity an extremely challenging exercise.
Privately held companies are generally plagued by a lesser quality and quan-
tity of information that can be used in an analysis. Also, a private company’s
capital structure could be more complex, with various classes of equity
and debt securities. Lastly, the final value of a closely held, private busi-
ness may differ from the value calculated using the established methods
of appraisal—the income, market, and cost approaches—because various
types of discounts or premiums to the basic valuation methodology must be
considered. (“Q&A: Valuations” 2014)
Nevertheless, Feldman made several suggestions for improving the valua-
tion process in private equity:
Best practice in valuation should reflect both theoretical developments in
finance and economics as well as peer-reviewed research. Market practice
is important, but it should conform to academic discipline where possible.
Often, this is not the case. There are many examples, but one in particular
relates to using public company multiples to determine perpetuity values in
the DCF. There are a number of things wrong with this, but chief among
them is that factors that determine the multiples at the valuation date are
not likely to be the same when the firm reaches its steady state sometime in
the future.
Specific to the use of DCF models, Feldman pointed to the need to test
whether projections of revenue and EBIT are consistent with market expecta-
tions. This testing includes the following:
1. Making sure that changes in working and net fixed capital are at appro-
priate levels: Values that are too high unnecessarily burden the cash flows
and reduce the value of the firm, while values that are too low do the
reverse.
2. Considering depreciation a real expense: The acceleration adjustment
should be added to cash flows so the full impact of the acceleration shows
up in increased value.
3. Checking the perpetuity growth rate: It should be no greater than the
expected nominal growth of the overall economy.
Feldman remarked that the same observations apply to the use of multi-
ples, adding that public company multiples cannot be applied directly to value
private firms.
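Feldman's caution about perpetuity values can be made concrete with a short sketch. The figures below are hypothetical; the point is that a Gordon-growth terminal value requires a perpetuity growth rate below the discount rate and, per Feldman's guideline, no greater than expected nominal economic growth:

```python
# Illustrative terminal-value calculation; all inputs are hypothetical.

def terminal_value(fcf_next: float, wacc: float, g: float) -> float:
    """Gordon-growth perpetuity value one period before fcf_next is received."""
    if g >= wacc:
        raise ValueError("perpetuity growth must be below the discount rate")
    return fcf_next / (wacc - g)

# Cap g at an assumed 4% nominal GDP growth, per Feldman's guideline:
nominal_gdp_growth = 0.04
g = min(0.06, nominal_gdp_growth)   # an analyst's 6% forecast gets capped at 4%
tv = terminal_value(fcf_next=10.0, wacc=0.09, g=g)
print(round(tv, 1))  # 10 / (0.09 - 0.04) = 200.0
```

An uncapped 6% growth rate would instead give 10 / 0.03 ≈ 333, showing how sensitive the terminal value is to this single assumption.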
Feldman also suggested the following:
1. The need to measure lack of liquidity: He suggests using a put-option
pricing framework (its strength: the inputs are market metrics; its draw-
back: the analyst needs to determine the derivative’s life).
2. The need to consider different classes of stock—preferred versus
common—to reflect the preferences of the former over the latter.
3. The need to make adjustments to financials, primarily the officer’s com-
pensation adjustment.
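Feldman's first suggestion, a put-option framework for measuring lack of liquidity, is commonly implemented by pricing an at-the-money European put over the assumed restriction period; the put premium as a fraction of the share price proxies the marketability discount. The sketch below uses a Black–Scholes put with hypothetical inputs; as Feldman notes, the derivative's life (here, two years) is the analyst's assumption:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(s: float, k: float, r: float, sigma: float, t: float) -> float:
    """Black-Scholes value of a European put (no dividends)."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return k * exp(-r * t) * norm_cdf(-d2) - s * norm_cdf(-d1)

# At-the-money put over an assumed 2-year restriction, hypothetical inputs:
s = 100.0
discount = bs_put(s, k=s, r=0.02, sigma=0.40, t=2.0) / s
print(f"implied marketability discount: {discount:.1%}")
```

The strength Feldman cites is visible here: every input except the option's life is a market metric (rate, volatility, price).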
A brief look at returns on private equity investments may be worthwhile.
Specific to buyout-related private equity returns, Bloomberg and asset man-
ager Hamilton Lane Advisors analyzed private equity returns of 20 firms
valued at $10 billion or more and subject to buyouts in the 2005–07 period.
Their conclusion was reported by Carey and Banerjee (2016): In more than
half the deals, investors would have fared better by placing their money in
an index fund. A study of the performance of private equity buyout funds by
L’Her, Stoyanova, Shaw, Scott, and Lai (2016) found that, using an appro-
priate risk-adjusted benchmark, buyout investment funds had no significant
24 In the McKinsey report, total investments in private equity at the end of 2016 were divided as follows: $1,474 billion in buyouts, $524 billion in venture capital, $315 billion in growth companies, and $151 billion in other.
25 See NEPC's Q3 2015 and Q3 2016 survey reports and its press release about the Q3 2016 survey results (http://www.nepc.com/insights/nepcs-q3-endowments-foundations-poll-results).
an IPO on the New York Stock Exchange in November 2015 with an initial
valuation of $2.9 billion—less than half its private valuation of 13 months
earlier.26 (We note that in October 2017, Square’s market capitalization was
$12.6 billion.) In general, according to Downie (2016), the estimate is that
more than 40% of the billion-dollar technology IPOs issued between 2011
and 2015 were trading at or below their last private-round valuations as of
May 2016, despite the rise in public equity indexes over much of that period.
Ernst & Young’s 2015 private equity survey found that investors are
asking for more detailed information about the key assumptions and inputs
driving valuations. According to the survey, 69% of the participating private
equity investors said transparency in financial reporting is a major concern,
and 86% believe that the involvement of third-party valuation specialists adds
a level of consistency to the valuation process. The following year’s survey of
private equity funds and investors (Ernst & Young 2016) found that for 45%
of the participating investors, reporting is now the most important require-
ment when selecting a private equity firm; that percentage is up from 11% just
one year earlier.
With investors, consultants, academics, and regulators noticing a decline
in private equity returns and calling for more transparency in valuations,
Feldman concluded,
When the market for private firms is ebullient, like now, there is less time
spent on ensuring that valuation methods applied are appropriate or gener-
ally consistent with best practice. Alternatively, the greater the oversight,
the more likely that valuation methods applied will meet best practice
guidelines. Although purchase price accounting is typically done post trans-
action, the analysis and methods used can inform the value of a private firm.
For example, the value of the firm should line up with the value of its asset
base. Purchase price allocation tests the veracity of a firm’s value, and this is
especially true when valuing a private firm. I would add that the forecast
trajectory of a firm’s cash flows should be subject to far greater uncertainty
and that simulation techniques should be more commonly used.
26 See Leena Rao and Dan Primack, "Square Prices IPO at Just $9 per Share, Valued at
Central bank policies and corporate buybacks are the two major phenomena
about which equity analysts and investors now worry because of their potential
to distort market valuations relative to theoretical valuations. (Central bank
policies include low interest rates and quantitative easing.) Together, these
two factors are helping sustain a market rally in the United States where, for
example, the S&P 500 Index has more than tripled since its March 2009 low.
In Europe and Asia, where central bank policies are similar but the buyback
phenomenon is (somewhat) less present, the overall picture is not dissimilar:
The S&P Europe 350 Index is at about 2.5 times its 2009 low, and the Nikkei
225 Index is 2.7 times its 2009 low.
27 Whether Buffett was serious is difficult to determine. A more conventional analysis would say that with a 3% equity risk premium over a riskless asset yielding zero, the stock market should sell at 33 times earnings.
Since the 2008 financial crisis, central banks have added liquidity to mar-
kets: $4.5 trillion in bond buying by the US Federal Reserve and another
€1.7 trillion by the European Central Bank. In Japan, in addition to making
aggressive bond purchases, the central bank has embarked on a $58-billion-
a-year stock purchase program, making the Bank of Japan (BOJ) one of the
country’s biggest stock market investors. Kitanaka and Hasegawa (2016) esti-
mated that by the end of 2017, the BOJ could become the largest shareholder
of 55 companies in the country’s Nikkei 225.
Commenting on the problem the BOJ’s moves present for other investors,
Kitanaka and Hasegawa cite a Goldman Sachs report:
While the exact amount of a company’s freely-traded shares is often dif-
ficult to pin down, Goldman Sachs Group Inc. estimated in an August 10
[2016] report that BOJ purchases could soak up the remaining free float at
firms including Comsys Holdings Corp., Nissan Chemical Industries Ltd,
and Tokyo Electron Ltd. Over the next year, if the free float in some stocks
keeps shrinking, it could become more difficult for fund managers to find
the shares they need to track benchmark indexes.
Note that the difficulty in tracking is a problem only if the index the
manager is trying to track is not free float adjusted, but most are.
Kenneth Little, managing director of the investments group at value
manager Brandes Investment Partners, said,
We do consider many market valuations to be fairly high currently. While
we are much more focused on individual companies as opposed to trying to
forecast (or explain) the valuations of markets in aggregate, we do believe
that record low interest rates have been a significant factor in pushing up
valuation levels. The lack of alternatives for individuals and institutions
seeking any sort of yield has forced investors out on the risk spectrum in
order to attempt to meet their return requirements. Equities have been a
primary beneficiary of this quest for yield and return, and this demand has
been a key factor in driving up equity market valuations.
Matteo Bonaventura, a buy-side analyst at Banor SIM, agrees that quan-
titative easing (QE) and loose monetary policy have played a key role in driv-
ing equity valuations up. He noted, “Central banks have flooded the market
with money. For this reason, investors needed to allocate money in the mar-
ket. Given the artificial low returns of government and other bonds, equities
became the target of market investors.”
Jason Hsu, co-founder and vice chairman at Research Affiliates, agrees.
He told IPE Magazine journalist Charlotte Moore (2016), “This wall of
money created a spike in asset prices,” with some segments of the markets
becoming more and more expensive, thereby fueling a momentum market.
28 In his book Risk Society: Toward a New Modernity, Ulrich Beck characterized Western societies as risk societies in which the environmental impact of production and distribution becomes increasingly central to social organization and conflict. Risk Society was first published in German in 1986; the first English edition was printed in 1992 by SAGE Publications, London.
29 The US Federal Reserve defines M2 as that measure of the stock of money that consists of a broad set of financial assets held principally by households and includes bank and saving deposits, small-denomination time deposits, and balances in retail money-market mutual funds.
Figure 4.1. Growth of M2 Money Stock Compared with GNP in the United States, 1957–2015
[Line chart, indexed to 1 Jan 1957 = 0, comparing the growth of GNP and of the M2 money stock from January 1957 through 2015.]
Source: Constructed by the authors using data obtained from the Federal Reserve Bank of St. Louis.
Price Index (CPI), has remained very low some seven years after the intro-
duction of QE.
Where did the money go?
Richard Werner, director of the Centre for Banking, Finance and
Sustainable Development at the University of Southampton School of
Management, proposed a possible explanation. Werner (2012) argues that
the newly created money did not reach the economy uniformly but that a
growing fraction of the money reached financial markets, contributing to
asset inflation without contributing to GDP or GNP growth or to inflation
as measured by the CPI. He thus suggested the need to divide money into
two streams:
• money used for GDP transactions—that is, used for the "real economy" or real circulation; "real money" (MR); and
• money used for non-GDP transactions—that is, "financial circulation"; financial money (MF).
Therefore, M = MR + MF.
Note, in particular, that QE, as it is being implemented, is by nature
money that reaches primarily financial markets (MF) because QE consists of
central banks buying assets from nonbank institutions. The main effect is to
stimulate asset demand and therefore a rise in asset prices.
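A stylized calculation can illustrate Werner's decomposition. All figures below are hypothetical; the point is that when a growing share of money circulates as MF, the money stock can expand without a matching rise in nominal GDP:

```python
# Stylized illustration of Werner's disaggregation (all figures hypothetical):
# only money in "real circulation" (MR) supports GDP transactions, while
# money in "financial circulation" (MF) bids up asset prices instead.
M = 13_000.0   # total money stock, $ billions
MR = 9_000.0   # money used for GDP transactions
MF = M - MR    # money used for non-GDP (financial) transactions

velocity_real = 1.5            # assumed turnover of MR against nominal GDP
nominal_gdp = MR * velocity_real

print(f"MF share of money stock: {MF / M:.0%}")
print(f"nominal GDP supported by MR: ${nominal_gdp:,.0f}B")
```

Under these assumptions, growth in M that flows entirely into MF leaves nominal GDP, and hence CPI inflation, untouched—consistent with Figure 4.1's divergence between M2 and GNP.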
30 We thank Birinyi's Chris Costelleo for providing us with this information in an Excel spreadsheet.
that 5 million shares were bought back by STOXX Europe 600 firms in
2015 (this number compares with more than 60 million shares bought back
by S&P 500 companies in the same period). In Japan, the phenomenon has
been growing in importance since 2014, which witnessed a spike in buybacks.
Indeed, in 2016 corporate buybacks were the biggest source of equity demand
in Japan, according to Leo Lewis (2016). Lewis cited Goldman Sachs esti-
mates that put buybacks for the fiscal year ending 31 March 2017 at a record
¥6.5 trillion (approximately $60 billion). Nevertheless, buybacks in Japan,
compared with those in the United States, still represent a small proportion
of market capitalization.
An example of the effect of corporate buybacks on stock prices comes
from the Nikkei Asian Review (“Fewer Japanese Companies” 2017). In
January 2017, when NTT DOCOMO reported nine-month results without
renewing its share repurchasing program, its stock price fell by 4% within a
few days. In contrast, companies that announce share buybacks see the price
of their shares increase. When Asahi Glass Company announced buybacks
of up to ¥10 billion several months later, the company’s shares went up some
9% to a six-year high. After Aoyama Trading Company announced its sixth
straight year of buybacks, shares went up 3%.
Indeed, some equity analysts consider corporate buybacks the sole factor
driving demand for equities in today’s market. In March 2016, Bloomberg
TV reported that if the pace of withdrawals from US mutual funds and
exchange-traded funds were to continue through the month, outflows would
hit $60 billion (Wang 2016a). The result would be the biggest annual gap
between outflows and corporate buybacks ($225 billion) since the “mini”
stock market crash of 27 October 1997. At the Amsterdam-based Robeco
Institutional Asset Management (2016), buybacks are considered largely
responsible for the growth in market multiples seen in recent years.
Buybacks, which in theory are a way of sharing profits with shareholders,
increase demand for and reduce the supply of a company's shares, and by
the law of supply and demand they push up the price. They also distort
valuations in the immediate term by raising earnings per share (EPS),
even when total net income is
flat. Interestingly, for fear of market manipulation, companies were largely
prohibited from buying their own shares until 1982, when the Ronald Reagan
administration began to deregulate financial markets.
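The EPS effect is simple arithmetic. With hypothetical figures, a buyback of 5% of shares outstanding raises EPS by more than 5% even though net income is unchanged:

```python
# Hypothetical numbers: net income flat, share count shrinks via buyback.
net_income = 1_000_000_000        # $1.0 billion, unchanged year over year
shares_before = 400_000_000
buyback_shares = 20_000_000       # 5% of shares outstanding repurchased

eps_before = net_income / shares_before
eps_after = net_income / (shares_before - buyback_shares)

print(f"EPS before: {eps_before:.2f}")   # 2.50
print(f"EPS after:  {eps_after:.2f}")    # 2.63
print(f"EPS growth with flat income: {eps_after / eps_before - 1:.1%}")
```

This is the mechanism behind the Humana example that follows: a large enough repurchase can move reported EPS past a target by a penny without any change in operating results.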
In a special Reuters report on buybacks, Brettell, Gaffen, and Rohde
(2015a) cited the effect of the health insurer Humana’s $500 million share
repurchase in November 2014, which allowed the firm to surpass its $7.50
EPS target by a penny. Commenting on such buybacks, Heitor Almeida,
professor of finance at the College of Business at the University of Illinois
Are today’s stock markets—in the United States, the S&P 500 is now
roughly 70% above the historical earnings multiple—overvalued? And if so,
is the overvaluation the result (at least in part) of central bank policies and
buybacks?
PGGM Investments’ head of equities Felix Lanters said, “While I don’t
like to use the term ‘overvalued,’ there is a gap between market values and
valuation in the models. We are having more and more trouble finding com-
panies that are undervalued; their numbers are fading away.”
Bradford Cornell, a professor of financial economics at the California
Institute of Technology, agrees that prices are high but commented, “I do
note that they have been this high or higher in the past when the factors to
which you refer [central banks’ policies and stock buybacks] were not present.”
Niels C. Jensen, partner and chief investment officer at the
United Kingdom–based Absolute Return Partners, identified two additional
factors to explain market values—in particular, US market values. To low
rates and corporate buybacks, Jensen (2015) added (1) the all-time high
share of capital in US national income, now 42%–43% against a historical
average of 35%, and (2) demographics: the biggest equity buyers are
middle-aged, and the US great equity bull market has coincided with Baby Boomers'
middle years. Jensen contrasted the situation in the United States with that in
Japan (see Table 4.1).
Christian Kjaer, head of global equities and volatility at the Danish fund
ATP, believes that the supply–demand balance is at work. He remarked,
“Supply and demand has caused ‘risk-free’ yields as well as risk premiums in
31 In the United States, the Federal Reserve controls monetary policy using three tools: open market operations, setting the discount rate, and setting reserve requirements. The first of these, open market operations, is the responsibility of the FOMC.
32 The forward P/E is calculated by taking into consideration analysts' future projections for
According to FactSet analyst John Butters (2017), in mid-2017 the S&P 500's
12-month forward price-to-earnings ratio was 17.6, its highest level in 13 years.
In addition to forward-looking P/E and backward-looking measures
(the trailing 12-month P/E is widely used in the industry on single assets),
other measures frequently used to understand whether markets, overall, are
cheap or dear include the historical growth of capital market earnings and
the growth of per capita GDP, or a country’s stock market capitalization as a
percentage of its GDP.
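The distinction between the trailing and forward P/E measures is straightforward arithmetic; the figures below are hypothetical:

```python
# Hypothetical figures for a single index or stock.
price = 2440.0
trailing_eps = 118.0        # realized earnings over the past 12 months
forward_eps_est = 138.6     # analysts' consensus estimate for the next 12 months

trailing_pe = price / trailing_eps     # backward-looking measure
forward_pe = price / forward_eps_est   # forward-looking measure

print(round(trailing_pe, 1), round(forward_pe, 1))  # 20.7 17.6
```

When earnings are expected to grow, the forward P/E sits below the trailing P/E, which is why the two measures can tell different stories about how "dear" a market is.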
Straehl and Ibbotson (2015) essentially confirmed earlier findings by
Ibbotson and Chen (2003) that US long-run stock returns “participate” in the
real economy. Ibbotson and Chen decomposed 1926–2000 historical equity
returns using factors commonly thought to describe the aggregate equity
market and overall economic productivity (i.e., inflation, EPS, dividends per
share, the price-to-earnings ratio, the dividend payout ratio, book value per share,
return on equity, and per capita GDP) and found that for the 1926–2000
period, the majority of historical returns can be attributed to the supply of
these components. Straehl and Ibbotson (2015) extended the period to 1871–
2014—a 143-year period—and found that total payout (dividends and buy-
backs) per share and per capita GDP grew approximately at the same rate,
albeit with large fluctuations from time to time (p. 20).
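The supply-side logic of these studies can be sketched as a decomposition of nominal equity returns into inflation, real growth in per-share payout, and the payout yield, with any repricing of multiples assumed away in the long run. The sketch below is in the spirit of Ibbotson and Chen's approach, not a reproduction of it, and all inputs are hypothetical:

```python
# Stylized supply-side return build-up (hypothetical inputs).
inflation = 0.03
real_payout_growth = 0.02   # real growth of dividends + buybacks per share
payout_yield = 0.04         # cash returned to shareholders / price
repricing = 0.0             # long run: no change in valuation multiples

# Geometric linking of the supply components.
supply_return = ((1 + inflation) * (1 + real_payout_growth)
                 * (1 + payout_yield) - 1 + repricing)
print(f"implied long-run nominal return: {supply_return:.2%}")
```

Holding the repricing term at zero embodies the supply-side claim: over long horizons, returns come from cash flows and their growth, not from ever-expanding multiples.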
Cornell (2010) explored the link between equity returns and economic
growth in a study that took into account both theoretical models and empiri-
cal results from growth theory. He postulates that
unless corporate profits rise as a percentage of GDP, which cannot continue
indefinitely, earnings growth is constrained by GDP growth. This dynamic
means that the same factors that determine the rate of economic growth
also place bounds on earnings growth and, thereby, the performance of
equity investments. (p. 54; italics added)
Cornell concluded that the long-run performance of equity investments
is indeed fundamentally linked to growth in earnings, which in turn depends
on growth in real GDP.
The positive relationship between the growth of total payout per share
and GDP growth found by Straehl and Ibbotson (2015) and others contrasts
with the findings of Jay R. Ritter at the University of Florida’s Warrington
College of Business. In his cross-country study using data from the World
Development Indicators for the 1900–2002 period, Ritter (2005) found a
negative relationship between real stock returns and per capita GDP.
Following these and other studies, researchers at Barra/MSCI (2010)
asked whether investors, assuming a relationship (or not) between economic
growth and stock returns, should assign a higher weight to countries experi-
encing strong GDP growth. They note,
This question is not new; “supply-side” models have been developed to
explain and forecast market returns based on macroeconomic performance.
These models are based on the theory that equity returns have their roots in
the productivity of the underlying real economy and long-term returns can-
not exceed or fall short of the growth rate of the underlying economy. (p. 1)
Using long-term Morgan Stanley Capital International equity index data
(i.e., the MSCI All Country World Index and the MSCI World Index, both
of which pertain only to large-cap and mid-cap companies) and the GDP
growth of countries included in the same indexes, the researchers empirically
tested the link between economic growth and subsequent stock returns. They
observed, “Long-term trends in the link between real GDP and equity prices
are more similar for global equities than for most individual [country] mar-
kets” (Barra/MSCI 2010, p. 5). In other words, the link is stronger between
variations in global GDP and the return of the global index across time than
it is for variations from one country to another. They offered several explana-
tions for this finding: (1) given economic integration, the link appears over
global, not national, markets; (2) a large part of economic growth comes from
new, not existing, companies, which dilutes GDP growth before it reaches
shareholders; and (3) expected economic growth might already be factored
into share prices, thereby reducing future realized returns.
O’Neill, Stupnytska, and Wrisdale (2011) at Goldman Sachs Asset
Management found significant methodological issues in previous studies that
failed to establish a link between GDP and returns (e.g., problems with data,
too long a time horizon, during which many changes occur) and posed a ques-
tion: “Why, in a world of forward-looking investors, [would we] expect to find
a contemporaneous relationship between growth and returns? In fact, there
exists extensive evidence that equity price changes tend to lead GDP growth in
a number of countries” (p. 3). They believe that an existing (though not straight-
forward) link between GDP and equity returns places a renewed emphasis on
valuation. Indeed, establishing such a link lends support to present value models.
Finally, London Business School’s Dimson, Marsh, and Staunton (2014)
revisited this question: If a link between per capita GDP and equity returns
exists, why has making money by buying stocks of countries that are improv-
ing their economic position been so difficult? For example, looking at 21 coun-
tries for the 1972–2013 period and assuming that investors put their money
in the equity markets of the fastest-growing countries, the approach would
have delivered an annual return of 14.5%; had the investors put their money
in the slowest growing countries, they would have realized an annual return of
24.6%. The authors concluded, however, that although capturing returns is not
easy, stronger (aggregate) GDP growth is “generally good for investors” (p. 29).
Another measure used to understand whether markets in the aggregate
are overvalued is the ratio between a country’s total market capitalization
(TMC) and its GDP (the TMC-to-GDP ratio). This ratio is often referred to
as the Buffett ratio because following the dot-com bubble of the late 1990s,
Warren Buffett embraced the ratio of the Wilshire 5000 Full Cap Price Index
to US GDP as “probably the best single measure of where valuations stand at
any given moment” (Loomis 2001). As of mid-2017, this ratio for US markets
stands at just above 130%. According to data from the World Bank, during
the 1975–85 period, the average for the United States was under 50%, and at
the end of the dot-com bubble in early 2000, it had reached a peak of 153%.
Among developed countries, the TMC-to-GDP ratio has historically been
highest in the United Kingdom and the United States. Table 4.1 provides a
comparison of the TMC-to-GDP ratio for major economies and aggregate
OECD member states for the 40-year period 1975–2015.
In referring to lessons to draw from the TMC-to-GDP ratio following
the dot-com stock crash, Buffett told Fortune journalist Carol Loomis (2001),
“For me, the message … is this: If the percentage relationship falls to the
70% or 80% area, buying stocks is likely to work very well for you. If the ratio
approaches 200%—as it did in 1999 and a part of 2000—you are playing
with fire.” The long-term average for the United States is 79%.
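The ratio itself is a one-line calculation; with hypothetical inputs on roughly the mid-2017 US scale, it lands in the zone between Buffett's two thresholds:

```python
# Hypothetical inputs: total market capitalization and nominal GDP, $ trillions.
def buffett_ratio(total_market_cap: float, gdp: float) -> float:
    return total_market_cap / gdp

ratio = buffett_ratio(total_market_cap=25.1, gdp=19.3)
print(f"TMC/GDP: {ratio:.0%}")

# Buffett's rough bands, per the Loomis (2001) interview:
if ratio <= 0.80:
    zone = "buying stocks is likely to work very well"
elif ratio >= 2.00:
    zone = "playing with fire"
else:
    zone = "in between"
print(zone)
```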
The intuition behind Buffett’s advice is that the stock of capital and
the actual economic output should move together. Actually, the question is
complex. Economies are not homogeneous. An economy’s growth rate is a
simplification that does not take into account the inequalities inside the econ-
omy. Different sectors in the same economy might grow at different rates for
prolonged periods. But as with the relationship between the growth of total
returns and the growth of per capita GDP, many value managers do not use
the TMC-to-GDP ratio.
Edward Yardeni of Yardeni Research has criticized the ratio for several
reasons, among them the fact that the TMC-to-GDP ratio does not take into
account structural changes in profit margins caused by, for example, changing
tax rates, lower interest rates, or technological innovation (Siblis Research,
n.d.). It also does not take into account institutional differences between
countries; for example, a great deal of the equity in German companies is
family owned and thus not listed on exchanges. But again, the critical fact is
that economies are complex systems with complex output. To average growth
over a whole market and a whole economy is basically impossible.
“It’s now an era in which analysts capable of analyzing will need to do so thor-
oughly.” These are the words of Shinichi Tamura, a Tokyo-based strategist at
Matsui Securities Company, as reported by Allan and Ito (2016). But what
is meant by “thoroughly” in an era of big data and high-performance com-
puting, advanced analytics, predictive reasoning, machine learning/artificial
intelligence, natural-language processing, fast algorithms, and automation?
During the past decade or two, many business processes in the finan-
cial services sector have been radically changed or fully automated. Consider
algorithmic (or automated) trading. In the European Union and the United
States, automated trades are estimated to constitute 50%–80% of all equity
trades, up from about 33% in 2006. As for investment management, some
automation has already taken place. Anthony Ledford, chief scientist at the
London-based quantitative investment manager Man AHL, remarked,
The balance sheets of publicly traded companies are reported on a time-
table that is known well in advance, and they largely comprise structured
summary information in a standardized format. This makes it relatively
straightforward to monitor the balance sheet changes of any particular
company through time. Modern production systems for data capture, stor-
age, and retrieval can easily scale this process across the tens of thousands
of publicly traded companies found in global portfolios. This is not machine
learning, nor is it big data—it’s simply automation.
Much progress has been made recently in data management thanks to
the structuring or standardization of data brought about by adoption of the
eXtensible Business Reporting Language (XBRL) required by regulators
for financial reporting purposes. XBRL uses a taxonomy or list of fields that
allows one to “tag” data. Tagged data are generally referred to as metadata.
The objective of XBRL is to deliver financial data from companies directly
in a computer-readable format, thereby making financial statements and data
easy to search and comparable over time and among companies. In their
study on data and technology in the financial services industry, Singh and
Peters (2016) note among the benefits brought about by the implementation
of XBRL the availability of more (and more granular) data. They cite more
than 51 million discrete facts tagged with XBRL in the US SEC’s EDGAR
database of more than 89,000 filings by some 10,000 firms.33
33 EDGAR stands for Electronic Data Gathering, Analysis, and Retrieval system.
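Because XBRL instance documents are XML, tagged facts can be read with standard tooling. The fragment below is a toy example (the element, context, and unit names are illustrative, not drawn from an actual filing):

```python
import xml.etree.ElementTree as ET

# A toy XBRL-style instance fragment; the fact value and context are invented.
instance = """<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:us-gaap="http://fasb.org/us-gaap/2017-01-31">
  <us-gaap:Revenues contextRef="FY2016" unitRef="USD" decimals="-6">
    9700000000
  </us-gaap:Revenues>
</xbrl>"""

root = ET.fromstring(instance)
ns = {"us-gaap": "http://fasb.org/us-gaap/2017-01-31"}
fact = root.find("us-gaap:Revenues", ns)  # look up the tagged revenue fact
print(fact.attrib["contextRef"], int(fact.text.strip()))
```

The tag's taxonomy element, reporting context, and unit travel with the number, which is what makes filings machine-searchable and comparable across companies and periods.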
What is new today is that with artificial intelligence, firms are trying to
bridge the gap between real-time information and information with longer
periodicity. The early 2000s saw the implementation of strategies and algo-
rithms for trading, which were all similar as all were based on using market
data. With the AI tools now available, firms are looking beyond traditional
market data, which everyone is using, trying to build (trading) strategies
with subcorrelations. As firms use a greater variety of data sources, we have
seen a greater differentiation, a greater diversification of strategies.
These differences, Pierron remarked, “reduce herd behavior and lead to
greater market stability.” Opimas has recently published reports on the use of
AI (Pierron 2017) and alternative data (Marenzi 2017) in asset management.
Much of this new big data is commercially available, however, which will
eventually reduce its value. PanAgora president and chief executive Eric H.
Sorensen commented,
First, “big” alone is not a sufficient condition for added value—“big” but
not “smart” (or causal) is potentially spurious. Second, “big” brings perverse
results if everyone uses it. Smart data may be sufficient. Smart data, among
other things, is not commercially ubiquitous, is given to reasonable invest-
ment horizons, and is rich in fundamental intuition.
Sorensen referenced his recent investment insight note (2017) relating
PanAgora’s first encounter with smart data back in the early 1970s:
A major industry of the Pacific Northwest was forest products—
Weyerhaeuser and Georgia-Pacific, to name a few. One innovative analyst
determined that during periods of rising interest rates (building slump) ver-
sus falling rates (building boom), a small (versus large) lumber inventory
separated the winners from the losers. Consequently, he hired a helicopter
pilot to routinely fly him directly over the lumber yards adjacent to lumber
mills, as well as active logging sites, to assess the potential inventory levels.
The helicopter data was proprietary and intuitively causal. Most importantly,
it worked, providing valuable company insights.
Sorensen continued,
Forty-five years later the contemporary version of helicopter data is satellite
imaging of shoppers’ parked cars. Today’s modern space technology version
is big, and it seems causal. However, it may fail if everyone has access to it.
Ubiquitous is neither proprietary nor innovative, which means it fails to be
“smart.” (p. 3)
To illustrate this fact, Sorensen cited an experiment by his colleagues at
PanAgora on parking lot data for the 2010–13 period. The back test pro-
duced a 30% cumulative long–short return. But after 2013, the return became
Lleo agreed:
To some extent, finance is ready for a data-centric evolution. Data—
whether financial statements, economic releases, or market prices and
yields—are already at the heart of finance. Market data, in particular, fit the
“high-volume, high-velocity” bill, and economic data are getting ready for
an upgrade, thanks to a growing interest in nowcasting.34 Since the 1970s,
banks have been constantly upgrading their computing power to run ever
more demanding pricing and risk management algorithms.
Lleo added,
Most of this evolution has taken place in trading rooms, where the emphasis
was placed on objective pricing (as in Black–Scholes-style derivative pric-
ing) rather than subjective valuation (as in DCF [discounted cash flow]-
style equity valuation). The mathematical tools and algorithms are different,
because the objective is different: Pricing is about hedging; valuation is
about predicting. Only recently have banks started to focus on predictions
on a large scale—to predict the default risk of their individual clients. Now,
machine learning–based techniques are being used in investment manage-
ment to design smart-beta strategies, as well as “robo-advisers.” We can well
envision that machine learning will soon be used in the valuation process as
well, to help analysts and portfolio managers process higher velocity data
and higher variety data—including news coverage, CEO interviews, tweets,
supplier and client data, supply chain information, etc.—or to explore the
industries from a network perspective.
34 Nowcasting, a contraction of now and forecasting, is the prediction of the present, the near future, and the very recent past in economics, or the creation of accurate forecasts of undisclosed target values based on publicly observable proxy values. The term has been used for a long time in meteorology. It has recently become popular in economics because standard measures used to assess the state of an economy (e.g., GDP) are determined only after a long delay and are subject to subsequent revisions. Nowcasting models have been applied in central banks, and the technique is used routinely to monitor the state of the economy in real time.
35 R packages are collections of functions, data, and compiled code in a well-defined format. More than 4,500 packages are available. A Bayesian classifier is a statistical technique for predicting class membership probabilities. It can be used to estimate the probability that a given data point is a member of a particular class. See https://monkeylearn.com/blog/practical-explanation-naive-bayes-classifier/.
36 Nowcast Inc. was formed in 2015 at the University of Tokyo and was subsequently pur-
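The Bayesian classifier described in note 35 can be illustrated with a toy naive Bayes text classifier; the headlines, labels, and the bullish/bearish framing below are all invented for illustration:

```python
from collections import Counter

# Toy naive Bayes classifier: estimate the probability that a headline
# belongs to the "bullish" or "bearish" class from word frequencies.
# Training headlines and labels are invented for illustration.
train = [
    ("profits surge on record demand", "bullish"),
    ("shares rally after strong earnings", "bullish"),
    ("profit warning sends shares lower", "bearish"),
    ("weak demand drives losses", "bearish"),
]

classes = {label for _, label in train}
word_counts = {c: Counter() for c in classes}
doc_counts = Counter(label for _, label in train)
for text, label in train:
    word_counts[label].update(text.split())

vocab = {w for c in classes for w in word_counts[c]}

def posterior(text):
    # P(class | words) is proportional to P(class) times the product of
    # P(word | class); Laplace smoothing handles unseen words.
    scores = {}
    for c in classes:
        total = sum(word_counts[c].values())
        p = doc_counts[c] / len(train)
        for w in text.split():
            p *= (word_counts[c][w] + 1) / (total + len(vocab))
        scores[c] = p
    norm = sum(scores.values())
    return {c: s / norm for c, s in scores.items()}

print(posterior("strong demand lifts profits"))  # favors "bullish"
```

With Laplace smoothing, words unseen in training (such as "lifts" here) shrink a class's probability without zeroing it out.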
The traditional active manager with 20–50 stocks will know each one
extremely well and will probably have more information than we can hope
to have. But we can analyze 5,000 stocks on a consistent basis, whereas a
traditional manager using a screen to narrow down his universe will inevi-
tably lose a lot of information.
J.R. Lowry (2016), EMEA head of State Street Global Exchange, believes
that as processes continue to be automated, as more information is moved and
created online, and as advances in analytic tools allow the accumulation
and assessment of the new data, "Some of this data is going to
have investable value, and investment managers need to be able to ingest and
mine it successfully. However, identifying the right monetisable insights—
separating the signal from the noise, so to speak—is a significant challenge.”
An example of the challenge comes from what is called the “Hathaway
effect.” Reportedly, when the American actress Anne Hathaway is in the
news, Berkshire Hathaway’s stock price receives a boost. In 2015, students at
the University of Kansas School of Business (“Finance Students” 2015) were
tasked with analyzing whether trading algorithms could be misled by irrel-
evant information. Focusing on news coverage of the actress and the returns
of Berkshire Hathaway stock, the students found no correlation between the
news coverage and the stock's price or volatility, but they did find a positive
correlation with trading volume.
Marcos López de Prado, a senior managing director for Guggenheim
Partners and a research fellow at Lawrence Berkeley National Laboratory,
cautioned, “Big data, machine learning, and supercomputing require new
skills. A blind application of machine learning to large financial datasets will
surely result in false discoveries.”
Also at the forefront of applying big data and machine learning in invest-
ment management, Man AHL’s Ledford also cautioned about expectations,
saying, “There is no free lunch here: adequate identification of a machine
learning model may require extremely large datasets that are, in practice,
larger than history allows.”
This raises the question: What are the true capabilities and limits of
these technologies? Clearly, analysts can now interact with machines in
natural language and search large databases of text with questions formu-
lated in natural language. Computers equipped with algorithms now seem
capable of replacing humans in almost every job, learning from data and dis-
covering relationships better than humans can.
But computers and their algorithms implement step-by-step procedures
whereas human mental activities, such as making connections or drawing
conclusions, are not done consciously following a step-by-step procedure.
With much larger datasets, researchers may need to place even stronger constraints
on models so that patterns discovered are significant, not spurious, or else
take a different approach.
The level of complexity in the financial services sector presents another
problem. RavenPack’s Hafez observed,
Today, the various artificial intelligence methodologies are mostly used as
part of the data preprocessing stage to produce various analytics or insights
from unstructured content rather than in the actual investment process
itself. One of the main reasons for this is that finance is a more complex
space than most other sectors. It can generally be characterized as involv-
ing incomplete information, non-stationarity, and having fuzzy objectives.
Today, machine learning can hardly distinguish cats from dogs in a pic-
ture if one adds just a bit of “noise.” Financial markets are magnitudes more
complex.
Macquarie’s Brar commented on some of the downsides of using big data
and advanced analytics, saying,
One example: If we only focus on the outputs and don’t understand the
inputs (i.e., the data), the quality of data and how it is being applied, etc., we
will fall into the trap of GIGO [garbage in, garbage out]. Moreover, if we
don’t build a hypothesis and only interpret the outputs, then again we’re at
risk. We’ve always had (big) data, but the key is having an inquisitive mind
and knowing what questions to ask.
Another consideration comes from Ledford: “Whilst the machine-learn-
ing tools and the inferred knowledge are quantitatively expressed, as for any
model, they may just be inputs within a wider qualitative framework.”
That framework is the process of fundamental valuation and valuation
models. According to Lleo, although we are still at an early stage in under-
standing how to use these new tools, machine learning will
dramatically increase the speed at which this [valuation] analysis is per-
formed, broaden the range of information used to forecast cash flows and
estimate the cost of capital, increase the breadth and depth of scenarios
considered, and upgrade the visualization of the results, thanks to interac-
tive drill-down reports.
Among other effects these tools will have on the valuation and portfo-
lio construction process, Lleo mentioned that machine learning will do the
following:
•• Bring greater coherence to the valuation of firms within an industry,
country, and internationally, with more powerful learning algorithms to
limit discrepancies in valuation between firms and countries. This benefit
ago, it would have been unthinkable for such a small group of people, with
no background in finance, to manage this amount of money. Machine learn-
ing and supercomputing are changing finance. In 10 years’ time, finance
will have more to do with computer science than economics.
As for the analyst, Lleo remarked, “All of these changes will have an
impact on the skill set of financial analysts. The way in which CFA charter-
holders work in 10–15 years may be very different from the way they work
now.”
All these changes will also likely reduce the number of analysts (and other
personnel) in asset management. In a recent report on the use of data and AI
in the financial services industry, Pierron (2017) estimated that these new
tools will result in a loss of more than 110,000 jobs in asset and private wealth
management between now and 2025. However, analysts also have a market-
ing function at asset management firms: They help establish the credibility of
the firm regarding its claim to be able to generate above-market returns.
Macquarie’s Brar commented on the impact of these technologies on
investment strategies:
If these signals help gain competitive advantage—extract alpha or execu-
tion—then these innovations will be a disruptive force. The impact will be
greater in less-developed markets, but one cannot ignore sophisticated mar-
kets like the United States, in which gaining a basis point matters to overall
fund performance.
As for the impact of these technologies on the industry as a whole, Brar
added,
Investors who build good platforms and systems will gain a competi-
tive edge, whilst others will have to try to enhance their investment pro-
cess across other dimensions to maintain their competitive edge. It is
too early to say whether the cost/benefit ratio is or will be positive in the
future. However, it will be very difficult for an active manager not to be in
this space.
RavenPack’s Hafez believes that the emergence of alternative data will
affect all investment styles, active and passive; all investment strategies,
including value and momentum, among others; and all investable universes.
He remarked,
These new technologies can be expected to support both the smart-beta
strategies as well as help active investors keep their competitive edge. On
the active side, more data creates more opportunities to find new alpha
sources, and the future winners in the active (systematic) space will be those
who manage to process the most data, most efficiently. The idea of creating
Asset managers put time and money into their promise to deliver above-
market returns to their clients. They use equity analysts to undertake funda-
mental analysis and develop valuation models to identify mispricings. Their
investment decisions are based on the belief that under- and overpriced assets
will revert to some mean or fair value within a given period of time. The
expectation is that the manager will then deliver on its promise.
But does the manager deliver? Ever since Burton G. Malkiel, then an
economics professor (now emeritus professor) at Princeton University, penned
A Random Walk Down Wall Street in 1973, the argument has been raging as to
whether equity analysts and active management add value.
A frequently used way to address the question is to look at mutual fund
returns relative to benchmark funds. However, a caveat is in order. Mutual
fund performance available to researchers is limited to returns based on his-
torical price and dividend data. A fund’s beta and return volatility are com-
puted from historical returns. Given historical returns, computed return
volatility, the estimated beta, and the benchmark, a fund manager’s relative
performance is determined. The problem is that a fund’s return is determined
by several activities performed by the fund’s management team. Typically, for
internal evaluation purposes, a fund does not naively look at only the return
and some risk measure to assess the skills of the members of its management
team. Rather, some type of performance attribution analysis is used for inter-
nal purposes by the financial manager and the trustees.
The most basic equity performance attribution model considers the con-
tribution of three activities: security selection, sector allocation, and market
timing (i.e., adjusting beta on the basis of a market view). Assuming that the
estimated beta is correct and adjusted for comparing performance against a
benchmark, the market-adjusted relative performance would be attributable
to either security selection or sector allocation. Thus, equity valuation models
may lead to superior selection of stocks within a sector, but an inferior alloca-
tion among sectors would result in overall underperformance.
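The attribution split described above can be sketched with a minimal Brinson-style calculation; the two-sector weights and returns below are invented, and the market-timing term is omitted:

```python
# Brinson-style attribution over two sectors; all numbers are invented.
# w = portfolio weight, b = benchmark weight, r = portfolio sector return,
# q = benchmark sector return (returns in decimal form).
sectors = {
    "Tech":      {"w": 0.60, "b": 0.50, "r": 0.12, "q": 0.10},
    "Utilities": {"w": 0.40, "b": 0.50, "r": 0.02, "q": 0.04},
}

bench_total = sum(s["b"] * s["q"] for s in sectors.values())

# Allocation: effect of over/underweighting sectors relative to benchmark.
allocation = sum((s["w"] - s["b"]) * (s["q"] - bench_total)
                 for s in sectors.values())
# Selection: effect of picking stocks that beat their sector benchmark.
selection = sum(s["b"] * (s["r"] - s["q"]) for s in sectors.values())
# Interaction: cross term of the two decisions.
interaction = sum((s["w"] - s["b"]) * (s["r"] - s["q"])
                  for s in sectors.values())

active = sum(s["w"] * s["r"] for s in sectors.values()) - bench_total
print(f"allocation {allocation:+.4f}, selection {selection:+.4f}, "
      f"interaction {interaction:+.4f}, total active {active:+.4f}")
```

By construction, allocation plus selection plus interaction equals the total active return, so each activity's contribution can be evaluated separately, as the text describes.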
With that caveat in mind, consider the S&P Indices Versus Active
(SPIVA) Scorecards, starting with actively managed large-cap funds. As
Table 6.1 illustrates, active large-cap funds in Europe, Japan, and the
United States consistently underperform the S&P large-cap indexes in their
respective markets, whether the results are looked at over a one-year, three-
year, or five-year period.
[Table 6.1 fragment preserved from source: Japan, S&P TOPIX 150: 64%, 36%, 60%, 40%, 74%, 26%]
Active underperformance relative to S&P indexes is even greater in mid-
and small-cap markets, where companies typically benefit from less (sell-side)
research than do large-cap firms. Table 6.2 provides data from the United
States as an example.
Although active managers as a group fail to outperform large-, mid-, and
small-cap indexes in developed countries, can they do better in emerging-
market equities? Some, including Lukas Daadler, chief of investment solu-
tions at the Dutch asset manager Robeco Institutional Asset Management,
believe they can. Daadler told IPE Magazine journalist Christopher O’Dea
(2016) that using market multiples, including price-to-book ratios and P/Es,
Robeco estimates that emerging-market stocks are undervalued, perhaps by
as much as 25%–30% compared with the MSCI World Index. This gap may
provide an opportunity for equity analysts and active managers to add value.
Source: Constructed by the authors using data obtained from SPIVA (http://us.spindices.com/
SPIVA/#/).
Still, the dialectic between cost and performance has shaped the investment
industry, driving the explosion in the number of hedge funds in the 2000s,
the boom in purely passive ETFs [exchange-traded funds], and now the rise
in smart-beta ETFs and alternative-beta funds. Each of these successive
designs provides a very different set of answers to the two original questions
of the value and cost/benefit of active management. Right now, smart beta
and alternative beta have anchored the debate in the “low-cost” camp, with-
out giving up on the dream that a strategy (or small set of strategies) can
systematically beat the market on a risk-adjusted basis over the long run.
The role of behavioral biases in explaining (under)performance was
underlined by Dean Mcintyre, director of performance strategy at FactSet.
Mcintyre (2016) identified two widely recognized biases with an impact on
performance: the instinct to follow the herd and the tendency to stick to pre-
vious price targets.
Another way of understanding how much value analysts and active man-
agement add is to look at persistence in performance. Soe and Poirier (2016)
write that according to the S&P Persistence Scorecard for the United States,
one of the key measurements of successful active management lies in the
ability of a manager or a strategy to deliver above-average returns consis-
tently over multiple periods. Demonstrating the ability to outperform peers
repeatedly is the only proven way to differentiate a manager’s luck from
skill. According to the S&P Persistence Scorecard, relatively few funds can
consistently stay at the top. Out of 631 domestic equity funds that were
in the top quartile as of September 2014, only 2.85% managed to stay in
the top quartile at the end of September 2016. Furthermore, 2.46% of the
large-cap funds, 2.20% of the mid-cap funds, and 3.36% of the small-cap
funds remained in the top quartile. For the three-year period that ended in
September 2016, persistence figures for funds in the top half were equally
unfavorable. Over three consecutive 12-month periods, 18.07% of large-cap
funds, 22.95% of mid-cap funds, and 20.88% of small-cap funds main-
tained a top-half ranking.
An inverse relationship generally exists between the measurement time
horizon and the ability of top-performing funds to maintain their status.
It is worth noting that less than 1% of large-cap funds and no mid-cap or
small-cap funds managed to remain in the top quartile at the end of the
five-year measurement period. This figure paints a negative picture regard-
ing the lack of long-term persistence in mutual fund returns. Similarly, only
4.47% of large-cap funds, 3.68% of mid-cap funds, and 9.27% of small-
cap funds maintained top-half performance over five consecutive 12-month
periods. Random expectations would suggest a repeat rate of 6.25%.
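The 6.25% baseline follows from treating each top-half finish as an independent coin flip: conditional on a top-half start, a fund must repeat in each of the next four 12-month periods.

```python
# Under the null hypothesis of no skill, a fund lands in the top half of
# its peer group with probability 1/2 each period, independent of the past.
# Conditional on a top-half start, repeating over the next four 12-month
# periods therefore has probability (1/2)^4.
p_repeat = 0.5 ** 4
print(f"{p_repeat:.2%}")  # 6.25%
```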
We can also look at the equilibrium between the cost and the gains of
active management by considering how much information equity analysts add.
At the macro level, one might well ask: Have markets been kept (quasi)
efficient, thanks to the combined action of active managers? Up until just
10 years ago, 84% of US mutual fund and ETF assets were actively invested,
though the figure had fallen to 66% by the end of 2016.37 Is evidence available
of a change of efficiency during this period?
“No,” said Alfred Slager, professor of pension fund management at TIAS
School for Business and Society and trustee at the Dutch pension fund for
general practitioners SPH:
The argument is that with less money in active funds, there are fewer trans-
actions, more information asymmetries, and thus more active opportunities.
But we observe fewer active managers, more transaction volume, and simi-
larly disappointing active results. So, this argument does not hold. We need
to adapt our view of efficiency and financial markets.
I would suggest the application of the Market Segmentation Theory from
the fixed-income realm.38 Long-term investors are typically buy-and-hold
investors who have a different supply-and-demand schedule than the ETF,
HFT [high-frequency trading], or insurance company sectors. Depending
on the distribution of the security in the different segments, one could
hypothesize whether it has become more liquid or not and whether that
indicator of efficiency has any meaning.
Concerning how much information equity analysts add—and its value
to investors—a recent article in the Economist (“Breaking Up Is Hard to Do”
2017) had some unkind words for the sell-side research bundled in banks’
services to asset managers:
At present, banks blast their clients’ inboxes with thousands of reports, only
a fraction of which are read. The problem is that most research is not very
useful—it is hard to come up with original insights about big companies
when dozens of other researchers are trying to do the same.
37 Morningstar via the Wall Street Journal (18 October 2016).
38 According to Investopedia: "Market segmentation theory is a fundamental theory regarding interest rates and yield curves, expressing the idea that there is no inherent relationship
between the levels of short-term and long-term rates. According to market segmentation
theory, the prevailing interest rates for short-term, intermediate-term and long-term bonds
should be viewed separately, as items in different markets for debt securities. The major con-
clusion drawn from market segmentation theory and applied to investing is that yield curves
are determined by supply and demand forces within each separate market, or category, of
debt security maturities, and that the yields for one category of maturities cannot be used to
predict the yields for securities within a different maturity market” (www.investopedia.com/
terms/m/market-segmentation-theory.asp#ixzz4iTuVfVej).
Cornell suggested that investors who believe they have the skill to
identify mispriced securities would like to know whether the current movement
toward indexing has led to increased market inefficiency. He explained,
Ideally, there would be an index of market efficiency that investors could use
to judge how likely it would be to find mispriced securities. Unfortunately,
there is no such index and there is not likely to be one in the foreseeable
future. Asset prices are so volatile and market conditions are so variable that
a reasonable index of “inefficiency” cannot be constructed. That is why, 50
years after Eugene Fama introduced the idea of market efficiency, scholars
are still arguing about how efficient the market is. There is no evidence that
the debate is subsiding. While conceptually it follows that the move toward
passive investing will lead to greater inefficiency, whether there has been
any material change in market efficiency thus far is unknown.
Some have suggested that if active managers accounted for as little as
10% of the market, efficiency would still be assured. If so, active investors as a
group will have performance problems for some time.
Keeping the market price aligned with a firm’s intrinsic value is widely
considered a key economic role of the fundamental analyst. So, fundamental
analysts play an important role in enhancing a firm’s access to capital and its
ability to invest. Derrien and Kecskés (2013), expanding on previous stud-
ies, provided empirical evidence that a decrease in analyst coverage increases
information asymmetry. They found that companies that lose an analyst
decrease their investment and financing by up to 2% of total assets compared
with similar companies that do not lose an analyst. Perhaps not surprisingly,
they found that results were stronger for small companies and those with less
analyst coverage.
4,331, or 10%, were in the United States; China (3,052) and India (5,820)
together accounted for 20% of all listed firms worldwide. The number of
listed firms for Germany in 2016 was 555 and for Japan 3,504.39
As discussed in Chapter 5, new technologies are accelerating the analysis
and trading of stocks worldwide. Not only has the number of stocks greatly
increased because of emerging markets and globalization, but the invest-
able universe has also significantly expanded to include products other than
equities. Investors looking for returns have many more options than they did
only two or three decades ago. This development raises the question of how
returns will be produced in such a changed scenario. Does fundamental anal-
ysis retain its central role?
Slager answered "yes and no." He explained,
Expanding the investment universe increases the need to analyze overall
market valuations and the differences and commonalities between them.
So, on an aggregate level, fundamental analysis helps. On an individual
level, I would not be interested in whether the best security would be picked;
“fit-for-purpose” would be fine too. By fit-for-purpose, my idea would be—
besides the fact that the company is financially viable—that governance is
in order, shareholder rights are protected, and especially ESG [environmen-
tal, social, and governance] factors have been taken into consideration. All
are crucial for long-term risk management. I sort of suspect that ESG is a
form of DNA or fingerprint of the organization and in that sense, might
have more predictive value than financials.
Kenneth Little, managing director of the investments group at Brandes
Investment Partners, said,
We believe fundamental analysis can and should retain its central role in
investment management, despite the wave of new investment vehicles and
strategies over the past few decades. A key function of capital markets is
providing for the efficient allocation of capital throughout the economy.
We believe fundamental analysis is required to determine which compa-
nies deserve (or do not deserve) capital, and their share prices should adjust
accordingly.
He added, “While the growth in popularity of index investing (and
ETFs that track the indexes) has led to tremendous growth in assets in these
vehicles, it has done little to improve the efficient allocation of capital within
the respective markets.”
39 For more information, see World Bank data on listed companies worldwide (http://data.worldbank.org/indicator/CM.MKT.LDOM.NO?locations=IN-CN-DE-JP-US).
difference it [active equity investing] can really make is, for many trustees,
not worth the time needed to monitor those things carefully.”
Jaap van Dam, principal director of investment strategy at PGGM
Investments, the €200 billion Dutch pension fund for health and social work-
ers, voiced a similar opinion. He said, “If you look at aggregated results of
pension funds globally, there is a small contribution from active manage-
ment to the total return, and only if it is well controlled for cost.” PGGM
is reviewing its bottom-up investment decision-making processes. “We now
want more discussion on understanding where and how value is created and
transformed into profits,” Van Dam said. “We believe in creating value by
looking at the fundamentals of value creation over the long term.”
The role of active management has not disappeared; its focus has just shifted
over time. The choice of factors and weighting schemes and the search for
value through asset allocation are all active decisions that investors must
now focus on. For asset managers, the challenge is therefore to provide a
combination of a wide range of passive vehicles and of selected active exper-
tise with proven alpha, with a strong capacity to accompany clients in their
asset allocation decisions and in the efficient execution of the latter.
In a similar vein, Slager had several suggestions as to how active manag-
ers might increase their value to (institutional) investors:40
From a pension fund perspective, I see that in Europe, classic active strate-
gies are being eschewed, and passive is on the rise. At the same time, in
discussions with trustees, the picture emerges that active strategies have an
added value, just not in the somewhat antiquated “let’s beat the benchmark”
form. Some ideas that have emerged from recent discussions include using
active managers to work on developing a new set of metrics. What sort of
strategy would best aid the pension funds’ goals? Which risk factors would
one add, compared to the total portfolio?
40 See also Koedijk and Slager (2011).
truly relevant business cases for their active management strategies. Active
managers could innovate on testing strategy introduction: How could we
emulate prototyping, testing, and the introduction of a new strategy in such
a way that it filters out the mediocre strategies from the start? Moving from
the classical database backtesting to live simulation raises the hurdle but
increases the chances of designing durable investment strategies, and of a long-
term partnership. Trustees would no longer be drawn into overly technical
asset pricing discussions but could focus on what matters—will active man-
agement work?
Aggarwal, Rajesh, Sanjai Bhagat, and Srinivasan Rangan. 2009. “The Impact
of Fundamentals on IPO Valuation.” Financial Management, vol. 38, no. 2
(Summer): 253–284.
Allan, Gareth, and Komaki Ito. 2016. “Fintech Venture Targets Hedge Funds
with Big-Data Research.” Bloomberg (12 September; updated 13 September):
www.bloomberg.com/news/articles/2016-09-13/fintech-startup-dives-into-
big-data-for-japanese-stock-research.
Almeida, Robert M., Jr. 2016. “Decision Drivers: Stock Prices versus
GDP.” MFS White Paper Series (October): www.mfs.com/content/dam/
mfs-enterprise/pdfs/thought-leadership/us/mfse_gdp_wp.pdf.
Ang, Andrew, and Geert Bekaert. 2007. “Stock Return Predictability: Is It
There?” Review of Financial Studies, vol. 20, no. 3 (May): 651–707.
Appelbaum, Eileen, and Rosemary Batt. 2016 (updated March 2017).
“Are Lower Private Equity Returns the New Normal?” Center for
Economic and Policy Research (June): http://cepr.net/publications/reports/
are-lower-private-equity-returns-the-new-normal.
Bajo, Emanuel, Thomas J. Chemmanur, Karen Simonyan, and Hassan
Tehranian. 2016. “Underwriter Networks, Investor Attention, and Initial
Public Offerings.” Journal of Financial Economics, vol. 122, no. 2 (November):
376–408.
Barra/MSCI. 2010. “Is There a Link between GDP Growth and Equity
Returns?” (May): www.msci.com/documents/10199/a134c5d5-dca0-420d-
875d-06adb948f578.
Beck, Ulrich. 1992. Risk Society: Toward a New Modernity, English ed.
London: SAGE Publications.
Ben-Ami, Daniel. 2016. “Active Management: The Active-Passive Debate.”
IPE Magazine (January): www.ipe.com/reports/active-managemet/special-
report-active-management-the-active-passive-debate/10011319.article.
Bindseil, Ulrich, and Philipp J. König. 2013. “Basil J. Moore’s Horizontalists
and Verticalists: An Appraisal 25 Years Later.” Review of Keynesian Economics,
vol. 1, no. 4 (Winter): 383–390.
Butters, John. 2017. “Highest Forward 12-Month P/E Ratio for S&P
since 2004.” FactSet Insight (17 February): https://insight.factset.com/
earningsinsight_02.17.17.
Campbell, John Y., and Robert J. Shiller. 1988. "Stock Prices, Earnings and
Expected Dividends." Journal of Finance, vol. 43, no. 3 (July): 661–676.
Carey, David, and Devin Banerjee. 2016. “Private Equity’s Golden Age
Wasn’t So Golden after All.” Bloomberg (21 January).
Chee, Seungmin, Richard G. Sloan, and Aydin Uysal. 2013. “A Framework
for Value Investing.” Australian Journal of Management, vol. 38, no. 3
(December): 599–633.
Chemmanur, Thomas, and An Yan. 2011. “Advertising, Investor Recognition,
and Stock Returns.” Working paper (April): http://econ.shufe.edu.cn/upload/
htmleditor/Image/74319_1105160822451.pdf.
———. 2017. “Product Market Advertising, Heterogeneous Beliefs, and
the Long-Run Performance of Initial Public Offerings.” Journal of Corporate
Finance, vol. 46 (October): 1–24 (www.sciencedirect.com/science/article/pii/
S0929119917303899).
Citi Research. 2017. “Searching for Alpha: Big Data: Navigating New
Alternative Datasets” (10 March).
Cochrane, John H. 2001 (rev. ed., 2005). Asset Pricing. Princeton, NJ:
Princeton University Press.
Cornell, Bradford. 2010. “Economic Growth and Equity Investing.” Financial
Analysts Journal, vol. 66, no. 1 (January/February): 54–64.
———. 2014. “Dividend–Price Ratios and Stock Returns: International
Evidence.” Journal of Portfolio Management, vol. 40, no. 2 (Winter): 122–127.
———. 2016. “Market Efficiency and the Impact of Passive Investing.”
Brad Cornell’s Economics Blog (7 November): http://wbcornell.blogspot.
com/2016/11/market-efficiency-and-impact-of-passive.html.
Cornell, Bradford, and Aswath Damodaran. 2014. “Tesla: Anatomy of a
Run-Up.” Journal of Portfolio Management, vol. 41, no. 1 (Fall): 139–151.
Cornell, Bradford, and Rajiv Gokhale. 2016. “An ‘Enhanced Multiple’
Corporation Valuation Model: Theory and Empirical Tests.” Business
Valuation Review, vol. 35, no. 2 (Summer): 52–61.
Dimson, Elroy, Paul Marsh, and Mike Staunton. 2014. “The Growth Puzzle.”
In Credit Suisse Global Investment Returns Yearbook 2014 (February): 17–29
(http://doc.xueqiu.com/14cdbae48e74653fe7546fe0.pdf).
Doidge, Craig, G. Andrew Karolyi, and René M. Stulz. 2015. “The U.S.
Listing Gap.” NBER Working Paper 21181 (May): www.nber.org/papers/
w21181.
Downie, Ryan. 2016. “Is the Private Equity Bubble Still Expanding?”
Investopedia (15 June): www.investopedia.com/articles/markets/061516/
private-equity-bubble-still-expanding-gs.asp#ixzz4O5wFX3QU.
Ernst & Young. 2015. “Positioning to Win: 2015 Global Private Equity Survey”
(www.ey.com/gl/en/industries/financial-services/fso-insights-global-private-
equity-survey-2015).
———. 2016. “Disruption Causes Seismic Shift for Private Equity:
2016 Global Private Equity Fund and Investor Survey” (www.ey.com/
gl/en/industries/private-equity/ey-2016-global-private-equity-fund-and-
investor-survey).
Fabozzi, Frank J., K.C. Chen, K.C. Ma, and Jessica West. 2015. “In Search
of Cash-Flow Pricing.” Journal of Financial Research, vol. 38, no. 4 (Winter):
511–527.
Fama, Eugene F. 1970. “Efficient Capital Markets: A Review of Theory and
Empirical Work.” Journal of Finance, vol. 25, no. 2 (May): 383–417.
———. 1976. “Reply.” Journal of Finance, vol. 31, no. 1 (March): 143–145.
Feldstein, Martin. 2016. “What Could Go Wrong in America?” Project Syndicate
(26 October): www.project-syndicate.org/commentary/asset-price-risk-in-
america-by-martin-feldstein-2016-10.
Fernandez, Pablo. 2002 (rev. 2007). “Company Valuation Methods: The Most
Common Errors in Valuation.” Working Paper 449, IESE Business School,
University of Navarra.
———. 2015. “119 Common Errors in Company Valuations.” Working Paper
714, IESE Business School, University of Navarra.
“Fewer Japanese Companies Planning Stock Buybacks: Declining Trend
Could Affect Supply–Demand Balance.” 2017. Nikkei Asian Review (9
February): http://asia.nikkei.com/Markets/Tokyo-Market/Fewer-Japanese-
companies-planning-stock-buybacks.
———. 2014b. “How Are You Really Feeling?” Quantamentals (12 November).
———. 2015. “I Just Called to Say I’m Bullish.” Quantamentals (20 April).
Malkiel, Burton G. 1973 (10th ed., 2012). A Random Walk Down Wall Street:
The Time-Tested Strategy for Successful Investing. New York: Norton.
Man Group. 2014. “Is Momentum Behavioural?” AHL/MSS Academic
Advisory Board group discussion (March): www.man.com/is-momentum-
behavioural.
Marenzi, Octavio. 2017. “Alternative Data—The New Frontier in Asset
Management.” Opimas (31 March): www.opimas.com/research/217/detail/.
Mariathasan, Joseph. 2016. “Non-Traditional Investment: Quant versus
Traditional.” IPE Magazine (January): www.ipe.com/reports/special-reports/
active-management/non-traditional-investment-quant-versus-traditional/
10011326.article.
McIntyre, Dean. 2016. “Management Performance: Good Behavior or
Good Luck?” FactSet Insight (22 November): https://insight.factset.com/
manager-performance-good-behavior-or-good-luck.
McKinsey Private Equity and Principal Investors Practice. 2017.
“McKinsey Global Private Markets Review: A Routinely Exceptional Year.”
McKinsey & Company (February): www.mckinsey.com/~/media/McKinsey/
Industries/Private%20Equity%20and%20Principal%20Investors/Our%20Insights/
A%20routinely%20exceptional%20year%20for%20private%20equity/
McKinsey-Global-Private-Markets-Review-February-2017.ashx.
McLeay, Michael, Amar Radia, and Ryland Thomas. 2014a. “Money in the
Modern Economy: An Introduction.” Quarterly Bulletin, Monetary Analysis
Directorate of the Bank of England (Q1 2014): www.bankofengland.co.uk/
publications/Documents/quarterlybulletin/2014/qb14q101.pdf.
———. 2014b. “Money Creation in the Modern Economy.” Quarterly
Bulletin, Monetary Analysis Directorate of the Bank of England (Q1 2014):
www.bankofengland.co.uk/publications/Documents/quarterlybulletin/2014/
qb14q102.pdf.
Merton, Robert C. 1974. “On the Pricing of Corporate Debt: The Risk
Structure of Interest Rates.” Journal of Finance, vol. 29, no. 2 (May): 449–470.
Miller, Edward M. 1977. “Risk, Uncertainty, and Divergence of Opinion.”
Journal of Finance, vol. 32, no. 4 (September): 1151–1168.
Montier, James, and Philip Pilkington. 2016. “The Stock Market as Monetary
Policy Junkie: Quantifying the Fed’s Impact on the S&P 500.” GMO white
paper (March).
Moore, Basil. 1988. Horizontalists and Verticalists: The Macroeconomics of Credit
Money. Cambridge, UK: Cambridge University Press.
Moore, Charlotte. 2016. “Time to Become More Active?” IPE Magazine
(January): www.ipe.com/reports/special-reports/active-management/time-to-
become-more-active/10011325.article.
Moreolo, Carlo Svaluto. 2016. “Pension Funds: What Role for Active
Management?” IPE Magazine (January): www.ipe.com/reports/special-reports/
active-management/pension-funds-what-role-for-active-management/
10011322.article.
Morningstar Manager Research. 2017. “Morningstar Direct Asset Flows
Commentary: United States.” Morningstar (11 January): https://corporate.
morningstar.com/US/documents/AssetFlows/AssetFlowsJan2017.pdf.
Nath, Trevir. 2016. “Public vs. Private Tech Valuations: What’s Driving
the Divide?” Investopedia (5 February): www.investopedia.com/articles/
investing/020516/public-vs-private-tech-valuations-whats-driving-divide.asp#
ixzz4O5wesPJO.
Nesbitt, Stephen L. 2016. “An Examination of State Pension Performance: 2006
to 2015” (6 September): www.cliffwater.com/Cliffwater%20Research%20-%
20An%20Examination%20of%20State%20Pension%20Performance%20
2006-2015.pdf.
Nolen Foushee, Susan, Tim Koller, and Anand Mehta. 2012. “Why Bad
Multiples Happen to Good Companies.” McKinsey Quarterly (May): www.
mckinsey.it/idee/why-bad-multiples-happen-to-good-companies.
O’Dea, Christopher. 2016. “Valuations: Emerging from the Wings.” IPE
Magazine (October): www.ipe.com/investment/investing-in/global-equities/
valuations-emerging-from-the-wings/10015442.article.
O’Neill, Jim, Anna Stupnytska, and James Wrisdale. 2011. “Linking GDP
Growth and Equity Returns.” Monthly Insights, Goldman Sachs Asset
Management (May): http://s3.amazonaws.com/zanran_storage/www2.
goldmansachs.com/ContentPages/2509459477.pdf.
Sorensen, Eric H. 2017. “Investment Insight: Smart Data, Big Beta and the
Evolving Land of Quant.” PanAgora (May): https://publishing.dealogic.com/
Nomura/PanAgora.pdf.
Sorensen, Eric, and David Williamson. 1985. “Some Evidence on the Value
of Dividend Discount Models.” Financial Analysts Journal, vol. 41, no. 6
(November/December): 60–69.
S&P. 2017. “S&P 500 Buybacks Total $135.3 Billion for Q4 2016, Decline
for Full-Year 2016.” Standard & Poor’s (22 March): http://us.spindices.com/
documents/index-news-and-announcements/20170322-sp-500-buybacks-
q4-2016-pr.pdf.
Straehl, Philip U., and Roger G. Ibbotson. 2015. “The Supply of Stock
Returns: Adding Back Buybacks.” Morningstar working paper (17
December): http://corporate1.morningstar.com/WorkArea/DownloadAsset.
aspx?id=13346.
Tergesen, Anne, and Jason Zweig. 2016. “The Dying Business of
Picking Stocks.” Wall Street Journal (17 October): www.wsj.com/articles/
the-dying-business-of-picking-stocks-1476714749.
“Valuing Private Companies.” 2016. Investopedia (16 November): www.
investopedia.com/articles/fundamental-analysis/11/valuing-private-
companies.asp#ixzz4c2kLLaTK.
Wang, Lu. 2016a. “There’s Only One Buyer Keeping S&P 500’s Bull
Market Alive.” Bloomberg (14 March): www.bloomberg.com/news/
articles/2016-03-14/there-s-only-one-buyer-keeping-the-s-p-500-s-bull-
market-alive.
———. 2016b. “Bull Market Losing Big Ally as Buybacks Fall Most since
2009.” Bloomberg (16 May): www.bloomberg.com/news/articles/2016-05-16/
bull-market-losing-biggest-ally-as-buybacks-fall-most-since-2009.
Werner, Richard. 2012. “The Quantity Theory of Credit and Some of Its
Applications.” Working paper, Robinson College, University of Cambridge.
Wicksell, Knut. 1898. Geldzins und Güterpreise (first English ed.: Interest and
Prices, 1936). London: Macmillan (available as a PDF file or ebook from the
Ludwig von Mises Institute, https://mises.org/library/interest-and-prices).
Williams, John Burr. 1938. The Theory of Investment Value. Cambridge, MA:
Harvard University Press.