Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: B3: Financial Econometrics
Time: Friday, 31 March 2023, 2:00pm - 3:45pm

Session Chair: Olivier David Zerbib, EDHEC
Location: Room "Link"


Presentations

The Value of Software

Roberto Gomez Cram, Alastair Lawrence, Collin Dursteler

London Business School, United Kingdom

Discussant: Karamfil Todorov (Bank for International Settlements, BIS)

Software companies have steadily become key pillars of the digital economy, representing upwards of 12 percent of U.S. market capitalization. A simple buy-and-hold strategy of pure-play software companies over the past three decades produced annual alphas of over 7.1 percent. We document that these firms are growing at 13.9 percent annually and that both management and analysts systematically underestimate over a third of this growth. We show that these expectation errors appear to largely explain the aforementioned outperformance of software companies, and that management, analysts, short sellers, and other market participants ignore key performance indicators that describe these pure-play software firms and signal future growth. Together, these findings underscore the value of software to the economy and show how its economic impact has been significantly under-appreciated over the past two decades.
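
As background on the headline number: an annual "alpha" of this kind is conventionally the intercept of a time-series regression of the strategy's excess returns on pricing factors, annualized. Below is a minimal sketch on simulated data; the three-factor setup, the loadings, and every figure in it are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 360  # 30 years of monthly observations

# Simulated monthly factor returns (e.g., market, size, value) and a
# hypothetical software-portfolio excess return with a built-in alpha.
# All numbers here are invented for illustration.
factors = rng.normal(0.005, 0.04, size=(T, 3))
true_monthly_alpha = 0.006
strategy = (true_monthly_alpha
            + factors @ np.array([1.1, 0.2, -0.3])
            + rng.normal(0, 0.03, size=T))

# OLS of strategy excess returns on the factors; the intercept is alpha.
model = sm.OLS(strategy, sm.add_constant(factors)).fit()
monthly_alpha = model.params[0]
print(f"annualized alpha: {12 * monthly_alpha:.2%}")  # ~7% by construction
```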



A Skeptical Appraisal of Robust Asset Pricing Tests

Julian Thimme1, Tim Kroencke2

1Karlsruhe Institute of Technology, Germany; 2University of Neuchâtel, Switzerland

Discussant: Tobias Sichert (Stockholm School of Economics)

We analyze the size and power of a large number of "robust" asset pricing tests of the hypothesis that the price of risk of a candidate factor is equal to zero. Unlike earlier studies, our approach puts all tests on an equal footing and focuses on sample sizes comparable to standard applications in asset pricing research. Our paper thus guides researchers on which method to use. A simple test based on bootstrapped confidence intervals stands out: it does not over-reject useless factors and is powerful in detecting useful factors.
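
To make the highlighted test concrete, one generic version works as follows: estimate the factor's price of risk with a two-pass (Fama-MacBeth-style) procedure, resample time periods with replacement, re-estimate on each bootstrap draw, and reject the zero-price hypothesis only if zero lies outside the percentile interval. The sketch below uses simulated data; the `price_of_risk` helper, the no-intercept second pass, and the 2,000 draws are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 25, 240          # test assets, monthly periods
factor = rng.normal(0.005, 0.04, T)
betas_true = rng.uniform(0.5, 1.5, N)
lam_true = 0.005        # true price of risk (set to 0 to check test size)
returns = (betas_true * lam_true
           + np.outer(factor - factor.mean(), betas_true)
           + rng.normal(0, 0.03, (T, N)))

def price_of_risk(R, f):
    """Two-pass estimate: time-series betas per asset, then a
    cross-sectional regression of average returns on those betas."""
    X = np.column_stack([np.ones_like(f), f])
    betas = np.linalg.lstsq(X, R, rcond=None)[0][1]   # slope per asset
    return betas @ R.mean(axis=0) / (betas @ betas)

lam_hat = price_of_risk(returns, factor)

# Bootstrap: resample time periods with replacement, jointly for
# returns and the factor, and re-run the two-pass estimator.
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, T, T)
    boot[b] = price_of_risk(returns[idx], factor[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"lambda_hat = {lam_hat:.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
print("reject lambda = 0" if not (lo <= 0.0 <= hi) else "fail to reject")
```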



Non-Standard Errors

Albert Menkveld1, Anna Dreber3, Felix Holzmeister2, Juergen Huber2, Magnus Johannesson3, Michael Kirchler2, Sebastian Neusuess4, Michael Razen2, Utz Weitzel1

1VU Amsterdam, The Netherlands; 2University of Innsbruck, Austria; 3Stockholm School of Economics, Sweden; 4No affiliation

Discussant: Maziar Mahdavi Kazemi (Arizona State University)

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
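
To make the abstract's central quantity concrete: a non-standard error can be measured as the dispersion of point estimates across teams analyzing the same data, set against the conventional standard error each team reports. A minimal sketch follows; the effect size, noise levels, and reported standard errors are all invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_teams = 164  # number of research teams, as in the study

# Hypothetical setup: each team reports a point estimate of the same
# effect plus a conventional standard error. Team-to-team variation in
# design choices (the EGP) is simulated as extra noise around the truth.
true_effect = 0.10
estimates = true_effect + rng.normal(0, 0.05, n_teams)   # EGP variation
reported_se = np.full(n_teams, 0.02)                     # DGP uncertainty

# Non-standard error: dispersion of point estimates *across* teams.
nse = estimates.std(ddof=1)
print(f"median reported standard error: {np.median(reported_se):.3f}")
print(f"non-standard error (across-team s.d.): {nse:.3f}")
```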