Session B3: Financial Econometrics

Presentations
The Value of Software
London Business School, United Kingdom

Software companies have steadily become key pillars of the digital economy, representing upwards of 12 percent of U.S. market capitalization. A simple buy-and-hold strategy of pure-play software companies over the past three decades produced annual alphas of over 7.1 percent. We document that these firms are growing at 13.9 percent annually and that both management and analysts systematically underestimate over a third of this growth. We show that these expectation errors appear to largely explain this outperformance and that management, analysts, short sellers, and other market participants ignore key performance indicators that describe these pure-play software firms and signal future growth. Together, these findings underscore the value of software to the economy and show that its economic impact has been significantly under-appreciated for the past two decades.

A Skeptical Appraisal of Robust Asset Pricing Tests
1Karlsruhe Institute of Technology, Germany; 2University of Neuchâtel, Switzerland

We analyze the size and power of a large number of "robust" asset pricing tests of the hypothesis that the price of risk of a candidate factor is equal to zero. Unlike earlier studies, our approach puts all tests on an equal footing and focuses on sample sizes comparable to standard applications in asset pricing research. Thus, our paper guides researchers on which method to use. A simple test based on bootstrapped confidence intervals stands out, as it does not over-reject useless factors and is powerful in detecting useful factors.

Non-Standard Errors
1VU Amsterdam, The Netherlands; 2University of Innsbruck, Austria; 3Stockholm School of Economics, Sweden; 4No affiliation

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.