Conference Agenda
Overview and details of the sessions of this conference.
Poster 2
Presentations
The Predictive Power of Fedspeak
1Bloomberg, United States; 2Bloomberg, Germany

We show that embeddings derived from Bloomberg News headlines about the Federal Reserve contain meaningful real-time signals for key U.S. macroeconomic variables. Principal components of these embeddings, when incorporated into Bayesian VARs and quantile-regression frameworks, improve point and density forecasts for inflation, unemployment, and Treasury yields relative to standard models, at times even outperforming professional forecasters. Our approach also captures shifts in risks that align with policy narratives. The results demonstrate the value of high-frequency central bank communication data for forecasting and enhance our understanding of how monetary policy communication is received by the public.

Are there asymmetries in euro area monetary policy?
1WU Vienna; 2Oesterreichische Nationalbank, Austria

We assess asymmetries, nonlinearities and state dependencies in dynamic responses of the euro area to monetary policy shocks. The dataset includes macroeconomic, financial, and survey-based variables measuring credit conditions and bank lending transmission channels. These data are observed at different frequencies. We propose a multivariate nonparametric mixed-frequency model and discuss how to compute dynamic causal effects in a nonlinear context. The results suggest limited effects of expansionary policy shocks, whereas contractionary shocks yield responses in line with theory. There is little variation over the business cycle and in distinct periods such as at the effective lower bound.

Building Climate Indices of Maximal Macroeconomic Relevance via the Assemblage VAR
1University Bocconi, Italy; 2Université du Québec à Montréal; 3ETH Zurich

Evaluating the macroeconomic relevance of new indicators typically follows a two-step procedure: construct an index—by rule, unsupervised learning, or by adopting an existing measure—and plug it into a VAR to assess its impact.
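The conventional two-step routine can be made concrete with a minimal sketch on simulated data; the PCA-based index construction and the VAR(1) layout below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: construct an index from disaggregated constituents
# (here: first principal component of a simulated T x N panel).
T, N = 200, 12
panel = rng.standard_normal((T, N))
panel -= panel.mean(axis=0)
_, _, Vt = np.linalg.svd(panel, full_matrices=False)
index = panel @ Vt[0]                        # first PC scores as the "index"

# Step 2: plug the index into a VAR alongside macro variables and
# estimate the reduced form by OLS (a VAR(1) for brevity).
macro = rng.standard_normal((T, 2))          # e.g. output, unemployment
Y = np.column_stack([index, macro])          # T x 3 system
X = np.column_stack([np.ones(T - 1), Y[:-1]])  # intercept + one lag
B = np.linalg.lstsq(X, Y[1:], rcond=None)[0]

print(B.shape)  # (4, 3): intercept plus one lag per equation
```

The Assemblage VAR replaces the fixed first step with aggregation weights estimated jointly with the VAR, as the abstract goes on to describe.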
Yet many economically meaningful objects are inherently ambiguous. Climate risk is a prime example: if one were to build a single climate index to assess its macroeconomic impact, what should it contain? We introduce the Assemblage VAR, a joint estimation framework that aggregates disaggregated constituents within the VAR itself. Rather than fixing weights ex ante, we estimate the VAR while simultaneously choosing nonnegative aggregation weights to maximize multivariate likelihood and system-wide coherence. We consider two variants: a component-space approach that reweights named subcomponents directly, and a rank-space approach that emphasizes or downweights specific regions of the cross-sectional distribution. We apply the method to U.S. data, assembling a climate benchmark from the U.S. Actuaries Climate Index and its national, regional, and NOAA-based monthly components. The resulting synthetic index generates an economically meaningful contractionary response of industrial production, unemployment, and housing following a climate shock, whereas fixed-weight indices yield weak responses. The component-space estimator places substantial weight on high-wind extremes, while the rank-space estimator concentrates on tail realizations, consistent with threshold-type damages. The framework generalizes beyond climate. Assembling synthetic inflation and industrial production measures within a standard U.S. VAR sharpens monetary transmission and mitigates the price puzzle under recursive identification.

Per capita Income Convergence in Central Europe: Does the 2004 Accession to the European Union matter?
University of Warsaw, Poland

The main objective of this paper is to study regional income convergence in Central Europe from 1993 to 2024 using an innovative approach proposed by Phillips and Sul (2007). We divide the study period into two sub-periods, 1993-2004 and 2005-2024, which allows us to compare income convergence before and after accession to the European Union.
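The Phillips and Sul (2007) procedure rests on the log-t regression; a minimal sketch on a simulated converging panel (the one-sided HAC t-statistic used in the actual test is omitted here):

```python
import numpy as np

def log_t_test(y, r=0.3):
    """Sketch of the Phillips-Sul (2007) log-t regression.

    y: T x N panel of positive income levels. Returns the OLS slope;
    the actual test uses a one-sided HAC t-statistic (omitted here),
    rejecting convergence when it falls below -1.65.
    """
    T, N = y.shape
    h = N * y / y.sum(axis=1, keepdims=True)   # relative transition paths
    H = ((h - 1.0) ** 2).mean(axis=1)          # cross-sectional dispersion
    t0 = int(np.floor(r * T))                  # discard the first fraction r
    t = np.arange(t0 + 1, T + 1)               # periods t0+1, ..., T
    lhs = np.log(H[0] / H[t0:]) - 2.0 * np.log(np.log(t))
    X = np.column_stack([np.ones(t.size), np.log(t)])
    slope = np.linalg.lstsq(X, lhs, rcond=None)[0][1]
    return slope

# Simulated converging panel: cross-sectional dispersion decays over
# time, so the log-t slope should come out positive.
rng = np.random.default_rng(1)
T, N = 120, 20
decay = np.exp(-0.03 * np.arange(T))[:, None]
y = 10.0 + decay * rng.standard_normal((T, N))
print(log_t_test(y) > 0)
```

Convergence-club identification then applies this test recursively to data-sorted subgroups of regions.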
Our study uses data at different NUTS levels ranging from NUTS1 to NUTS3. Our results show that the hypothesis of income convergence between all regions of Central Europe is rejected at each NUTS level for both the pre- and post-accession periods. At the same time, our results indicate the existence of different convergence clubs between regions of Central Europe. In particular, the number of convergence clubs depends on both the NUTS level used and the period of the study. The number of convergence clubs increases with the level of spatial data aggregation. Moreover, our results show that after joining the European Union, the number of identified convergence clubs at each NUTS level is lower than before. Therefore, our results suggest growing per capita income cohesion within Central Europe and the effectiveness of EU cohesion policy.

Forecast Combination for Tail Risk with Regulator-Aware Decision Trees
1Erasmus University Rotterdam; 2University of Birmingham, United Kingdom

In this paper, we address the challenge of combining forecasts for Value-at-Risk and Expected Shortfall by employing nonparametric weights which are functions of macro-financial state variables. We develop a tailor-made, tree-based procedure in which optimal combination weights remain constant within economic regimes but vary across them; the regimes themselves are identified in a fully data-driven manner using a modified decision-tree algorithm. To ensure consistency with supervisory practice, we align the forecast combination process with liquidity-horizon-specific risk factors defined by the Basel Committee on Banking Supervision. We further reduce the influence of correlated covariates by proposing a methodology of cluster-wise random variable draws when building each tree. In an application to daily returns of 24 large-cap U.S.
stocks from 2000–2023, the proposed market-driven forecast combination delivers significantly lower out-of-sample VaR–ES losses than the standard relative-score benchmark for up to 22 out of 24 assets across multiple liquidity horizons, while also improving coverage properties and economic interpretability.

Monthly GDP estimates for the euro area and its countries
1European Central Bank, Germany; 2Deutsche Bundesbank; 3University of Leicester

We propose a computationally efficient Bayesian mixed-frequency vector autoregressive (MF-VAR) model to estimate latent monthly real GDP growth for the euro area aggregate and its 19 member countries. Official euro area GDP is observed at a quarterly frequency and released with substantial publication lags, limiting its usefulness for real-time inference and historical business cycle analysis. Our framework addresses both the temporal aggregation problem and the computational burden typically associated with large mixed-frequency systems. We thus offer a unified estimation framework of paramount interest to central bankers and practitioners alike, providing monthly GDP estimates and forecasts that are methodologically consistent and easily comparable across member countries. Methodologically, we build on the MF-VAR literature but depart from standard equation-by-equation estimation strategies that rely on recursive or triangular structures. In many existing approaches, contemporaneous endogenous variables or transformed residuals are included as regressors in sequential estimations. When structural shocks are correlated across equations, such procedures introduce regressors that are correlated with the error term, generating endogeneity and classical bias. As the dimension of the VAR increases, this bias becomes more pronounced.
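The endogeneity problem described above can be illustrated with a small simulation on a hypothetical bivariate system (not the paper's model): when structural shocks are correlated, conditioning on a contemporaneous endogenous variable pulls the OLS lag coefficient away from its true value, while the own-lag regression stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bivariate system with correlated structural shocks:
#   y1_t = 0.5 * y1_{t-1} + e1_t
#   y2_t = 0.5 * y2_{t-1} + e2_t,   corr(e1, e2) = 0.8
T = 100_000
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
e = rng.multivariate_normal([0.0, 0.0], cov, size=T)
y1 = np.zeros(T)
y2 = np.zeros(T)
for t in range(1, T):
    y1[t] = 0.5 * y1[t - 1] + e[t, 0]
    y2[t] = 0.5 * y2[t - 1] + e[t, 1]

# Sequential estimation with the contemporaneous y1 as a regressor:
# y1_t is correlated with e2_t, so the lag coefficient is biased.
X = np.column_stack([y2[:-1], y1[1:]])
b_seq = np.linalg.lstsq(X, y2[1:], rcond=None)[0]

# Equation-by-equation without contemporaneous regressors: unbiased.
b_own = np.linalg.lstsq(y2[:-1, None], y2[1:], rcond=None)[0]

print(round(b_own[0], 1))  # close to the true 0.5
print(b_seq[0] < 0.45)     # lag coefficient distorted away from 0.5
```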
Sequential conditioning effectively propagates cross-equation error correlations through the likelihood; ignoring this dependence distorts posterior means in a way analogous to omitted-variable bias in OLS. Our approach avoids this problem by conditioning on draws of the latent monthly GDP growth rates and estimating the country-level VARs separately, while including lagged monthly variables for the euro area and all 19 member countries. Conditional on the latent states, each country-specific system is estimated without incorporating contemporaneous endogenous regressors. In the classical interpretation, coefficient estimates are unbiased but not fully efficient due to the omission of cross-equation error covariance terms. In the Bayesian framework, posterior means remain unbiased under correct specification, while posterior credible sets may be modestly wider because the likelihood does not exploit cross-equation correlations. This delivers substantial computational gains and preserves consistency without relying on arbitrary variable orderings.

A second major contribution concerns the state-space representation. Rather than stacking observed and unobserved variables in a single large system and updating latent monthly GDP only in non-quarter-ending months, we treat monthly GDP growth as latent in all months. The measurement equation combines (i) temporal aggregation constraints linking monthly and quarterly national accounts data and (ii) time-varying cross-sectional restrictions that enforce consistency between country-level GDP and the euro area aggregate. These restrictions are embedded directly into the state-space system, allowing full-sample updating of the latent states via the standard Kalman filter. By imposing both temporal and cross-sectional coherence at each iteration, the model improves identification of the latent monthly series and enhances numerical stability.
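The two sets of measurement restrictions can be sketched numerically. The temporal aggregation weights below follow the commonly used Mariano-Murasawa-style approximation linking quarterly growth to five monthly growth rates; the country GDP weights and growth numbers are made up for illustration and are not the paper's values:

```python
import numpy as np

# Temporal aggregation: quarterly growth as a weighted sum of the
# current and four lagged monthly growth rates (common approximation).
agg = np.array([1 / 3, 2 / 3, 1.0, 2 / 3, 1 / 3])

# Hypothetical latent monthly growth for 3 countries over 5 months.
m_growth = np.array([
    [0.1, 0.2, 0.1, 0.0, 0.2],   # country A
    [0.3, 0.1, 0.2, 0.1, 0.0],   # country B
    [0.2, 0.2, 0.2, 0.2, 0.2],   # country C
])

# Measurement rows: one quarterly observation per country.
q_growth = m_growth @ agg

# Cross-sectional restriction: the aggregate equals a GDP-weighted
# sum of country growth rates (illustrative weights).
w = np.array([0.5, 0.3, 0.2])
aggregate_monthly = w @ m_growth
aggregate_quarterly = aggregate_monthly @ agg

# Coherence: aggregating across countries and then over time gives
# the same number as doing it in the opposite order.
print(np.isclose(aggregate_quarterly, w @ q_growth))
```

Embedding both linear restrictions in the measurement equation is what lets a standard Kalman filter update the latent monthly states in every month.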
The full system integrates country-level indicators observed at monthly and quarterly frequencies within a unified Bayesian framework. Joint estimation of latent monthly GDP growth across member states exploits cross-sectional dependence structures while maintaining computational tractability through conditional decoupling. The resulting high-frequency euro area GDP series supports more accurate real-time nowcasting, sharper inference on turning points, and deeper analysis of business cycle synchronization, asymmetric shocks, and monetary policy transmission within the currency union. Importantly, the algorithm remains computationally fast even in high-dimensional settings.

Comparison in Dynamic Forecasting: A Bayesian LASSO State-Space and Bayesian Factor VAR Analysis
University College Dublin, Ireland

This study develops a Bayesian LASSO state-space model to forecast key macroeconomic variables—including the Federal Funds Rate (FEDFUNDS), Industrial Production Index (INDPRO), and Consumer Price Index (CPI)—using the macro-finance dataset of McCracken and Ng from the Research Division at the Federal Reserve Bank of St. Louis. The model employs Kalman filtering for latent state updates and Bayesian LASSO regularization via Gibbs sampling, forming a dynamic mechanism that stabilizes forecasts and improves accuracy under uncertainty from various financial factors. Empirical analysis demonstrates that the Bayesian LASSO state-space model outperforms a benchmark moving average model in out-of-sample forecasts within financial datasets; in particular, forecasts of FEDFUNDS and INDPRO exhibit significantly improved accuracy in both point and density forecasting. These results indicate that the model effectively captures the dynamic patterns in financial time series, demonstrating sensitivity to the dynamic selection of informative priors and the degree of shrinkage.
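The Bayesian LASSO Gibbs step can be sketched following the Park-Casella (2008) hierarchy; this simplified version holds the error variance and penalty fixed and uses a static sparse regression rather than the state-space model above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated sparse regression: only 2 of 10 predictors matter.
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [2.0, -1.5]
y = X @ beta_true + rng.standard_normal(n)

lam, sigma2 = 1.0, 1.0        # penalty and error variance held fixed
tau2 = np.ones(p)             # latent local scales
draws = np.zeros((500, p))

for s in range(500):
    # beta | tau2: Gaussian with ridge-like precision (Park-Casella).
    A = X.T @ X + np.diag(1.0 / tau2)
    A_inv = np.linalg.inv(A)
    A_inv = (A_inv + A_inv.T) / 2          # enforce numerical symmetry
    mean = A_inv @ X.T @ y
    beta = rng.multivariate_normal(mean, sigma2 * A_inv)
    # 1/tau2_j | beta: inverse-Gaussian, in scipy's (mu, scale) form.
    m = np.sqrt(lam**2 * sigma2 / beta**2)
    inv_tau2 = stats.invgauss.rvs(m / lam**2, scale=lam**2, random_state=rng)
    tau2 = 1.0 / inv_tau2
    draws[s] = beta

post_mean = draws[100:].mean(axis=0)       # discard burn-in
print(np.abs(post_mean[:2] - beta_true[:2]).max() < 0.3)
```

In the full model this step would alternate with a Kalman filter-smoother draw of the latent states.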
Additionally, robustness checks on macroeconomic and combined datasets confirm the adaptability of this model to various macroeconomic dynamics. To compare machine-learning and traditional econometric forecasting within a Bayesian setting, a Bayesian Factor-Augmented Vector Autoregression (BFAVAR) model with pandemic priors is additionally introduced to examine exogenous shocks. The BFAVAR employs rolling-window out-of-sample forecasts, smoothly updating parameters over time; robustness tests indicate stable performance across both financial and macroeconomic data, outperforming a static BVAR in pandemic periods. By integrating data-driven Bayesian methods within frameworks of statistical machine learning and theory-driven structural econometrics, our proposed approach balances statistical rigor with economic intuition, offering robust theoretical and empirical support for dynamic forecasting and parameter estimation in financial and macroeconomic research.

Exploiting common volatility dynamics in high-dimensional portfolio selection
University of Regensburg, Germany

We develop a framework for forecasting high-dimensional covariance matrices for portfolio selection that exploits commonality in monthly realized volatilities. Motivated by pronounced comovement and persistence of stock volatilities, we study factor models and pooled ARMA$(p, p)$ models that constrain parameters to be equal within pre-specified groups of stocks. We establish a formal link between the two modeling approaches by showing that the pooled ARMA admits a dynamic factor representation and can be understood as a static factor model with group-specific factors. Volatility forecasts are combined with (Realized-)DCC estimates to obtain conditional covariance matrices, which are evaluated in a minimum variance portfolio framework.
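The evaluation step maps each forecast covariance matrix into global minimum variance weights; a minimal sketch with an illustrative 3-asset covariance matrix:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum variance weights: w = Sigma^-1 1 / (1' Sigma^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Toy conditional covariance for 3 assets (values are illustrative).
cov = np.array([
    [0.040, 0.010, 0.008],
    [0.010, 0.090, 0.012],
    [0.008, 0.012, 0.160],
])
w = min_variance_weights(cov)
print(np.isclose(w.sum(), 1.0))           # fully invested
print(w @ cov @ w <= np.diag(cov).min())  # no riskier than the safest asset
```

Out-of-sample, competing covariance forecasts are then ranked by the realized standard deviation of the resulting portfolios.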
For portfolios constructed from S\&P 500 constituents, our best approach is based on parameter pooling and outperforms the unconditional (co)variance estimate obtained via nonlinear shrinkage by 20.2\% in terms of annualized portfolio standard deviation. This strong performance is further supported by comparisons with competing estimators of conditional (co)variances.

Behind the Curve: How the Fed Missed Inflation Risks Using a High-Dimensional Distributional VAR (HiDVAR)
1Institute for Monetary and Financial Stability, Germany; 2Goethe University Frankfurt, Germany

Standard policy risk frameworks often impose tight parametric restrictions when converting conditional quantiles into predictive densities, which can understate one-sided inflation risks in turbulent episodes. This paper develops a real-time, data-rich alternative, a High-Dimensional Distributional VAR (HiDVAR), to forecast the joint distribution of GDP growth, inflation, and unemployment without imposing a parametric density shape. It does so by using many binary logit cuts to construct an approximate CDF. To this end, HiDVAR combines binary local projections with elastic-net shrinkage following Chi et al. (2025) and factor-based information extraction from an augmented real-time FRED-QD panel. Coherent multivariate scenarios are generated via a triangular factorization across variables and a copula structure across horizons, producing path-consistent predictive distributions that support operational risk measures (e.g., inflation tail probabilities, recession, and stagflation risk). In real-time out-of-sample evaluation (Feb 2013–Oct 2025), HiDVAR improves probabilistic accuracy relative to the FRBNY Outlook-at-Risk densities and standard benchmarks. Particularly large gains stem from inflation and overall risk assessment (e.g., a 28.5\% CRPS reduction for CPI and a 20\% reduction in the multivariate Energy Score) and better-calibrated upper-tail probabilities during the post-pandemic inflation surge.
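The binary-cut construction of an approximate CDF can be sketched as follows. Simulated data and a plain Newton logistic fit stand in for the paper's penalized local projections, and sort-based monotonization is one simple rearrangement choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logit(X, z, iters=30):
    """Plain Newton-Raphson logistic regression (no penalty)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p) + 1e-10
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (z - p))
    return b

# Simulated data: the outcome's location shifts with one predictor.
n = 2000
x = rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# One binary logit per cut c models P(y <= c | x).
cuts = np.quantile(y, np.linspace(0.1, 0.9, 17))
coefs = [fit_logit(X, (y <= c).astype(float)) for c in cuts]

def approx_cdf(x0):
    """Approximate conditional CDF at x0; sorting enforces monotonicity."""
    p = np.array([1.0 / (1.0 + np.exp(-(b[0] + b[1] * x0))) for b in coefs])
    return np.sort(p)

k = np.searchsorted(cuts, 0.0)   # first cut at or above zero
cdf_low, cdf_high = approx_cdf(-2.0), approx_cdf(2.0)
print(cdf_low[k] > 0.9 and cdf_high[k] < 0.1)
```

The fitted cut probabilities shift with the predictor as expected: conditioning on a low draw of x puts almost all mass below zero, and a high draw almost none.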
A policy application translates the forecast distributions into a macro-risk dashboard and rule-implied policy-rate prescriptions, illustrating how improved tail-risk assessment would have altered real-time signals about inflation risks and policy stance.

Nelson–Siegel Autoencoders for Global Yield Curve Forecasting
1Erasmus School of Economics, The Netherlands; 2Neoma Business School, France

We study global yield curve dynamics using an economics-informed autoencoder framework that captures the hierarchical structure of international bond markets. We propose a Global Nelson–Siegel Autoencoder anchored in the Nelson–Siegel basis that decomposes a panel of yield curves into global and country-specific components. For forecasting, we introduce a Dynamic Global Nelson–Siegel Autoencoder that jointly learns latent factors and their linear time-series dynamics. Using monthly government bond yields for the United States, Germany, the United Kingdom, and Japan from 1995 to 2023, we show that global factors explain a large and time-varying share of yield variation. Global models deliver robust out-of-sample forecasting gains for the United States and the United Kingdom, particularly during periods of heightened global uncertainty. Forecasting improvements increase with the horizon, consistent with global factors being especially informative for longer-term risk compensation.

Signal-Selected Sparse Equal-Weight Portfolios
1University of Hagen, Germany; 2Aalborg University Business School, Denmark

We develop a high-dimensional portfolio-optimization framework tailored to long-horizon investors who evaluate performance in terms of geometric growth. Our decision problem is therefore anchored in a Kelly (log-utility) objective. In high dimensions, however, the plug-in Kelly rule is empirically brittle: even small errors in conditional expected returns and covariances can induce large and unstable changes in continuous portfolio weights, leading to poor out-of-sample growth.
This fragility is especially problematic when predictive signals are weak and noisy. Our key idea is to reverse the usual regularization paradigm. Instead of mapping estimated conditional moments directly into continuous weights and then shrinking them, we start from a robust and implementable baseline: a regularly rebalanced equal-weight portfolio over a fixed number of assets k. Predictive information is injected only through subset selection. Formally, at each rebalancing date we approximate the Kelly objective by a second-order Taylor expansion (equivalently, a quadratic Kelly/mean--variance surrogate), which depends only on the conditional mean vector and covariance matrix. We then choose a subset of size k by solving an L0-constrained quadratic optimization problem and allocate equal weights within the selected subset. This construction yields a portfolio rule that is piecewise constant in the predictive signals and exhibits controlled sensitivity to covariance perturbations, thereby translating weak cross-sectional information into stable investment decisions while avoiding the amplification of estimation error through continuous weight magnitudes. We establish formal properties of the resulting ``subset-selection + equal-weighting'' operator, including finite-sample bounds on the incremental growth improvement attainable by deviating from equal weighting via optimal subset choice. These results connect to ``best averaging'' phenomena in forecast combinations: when signals are predominantly noise, improvements over equal weighting are limited, whereas ranking information can be exploited without reintroducing weight instability. The framework naturally accommodates additional linear constraints relevant in practice, such as factor-exposure controls, sector/country limits, leverage constraints, and turnover budgets. In a realistically calibrated simulation study and an empirical application to U.S. 
equities, our approach consistently improves out-of-sample long-run growth relative to (i) equal-weight portfolios formed from random subsets of the same size and (ii) standard regularized continuous optimizers, indicating that robustness arises from the specific combination of subset selection and equal weighting rather than from sparsity or signals alone.
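The ``subset-selection + equal-weighting'' rule can be sketched with a greedy stand-in for the exact L0-constrained solver; the toy moments below, and the greedy search itself, are illustrative assumptions rather than the paper's algorithm:

```python
import numpy as np

def greedy_equal_weight_subset(mu, Sigma, k):
    """Greedy stand-in for the L0-constrained quadratic Kelly surrogate:
    pick k assets, equal-weighted, maximizing w'mu - 0.5 * w'Sigma w."""
    n = len(mu)
    selected = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            trial = selected + [i]
            w = np.zeros(n)
            w[trial] = 1.0 / len(trial)        # equal weights on the subset
            val = w @ mu - 0.5 * w @ Sigma @ w  # quadratic Kelly surrogate
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return sorted(selected)

# Toy cross-section: assets 1 and 4 have higher conditional means.
rng = np.random.default_rng(0)
n = 8
mu = np.full(n, 0.02)
mu[[1, 4]] = 0.10
A = rng.standard_normal((n, n)) * 0.05
Sigma = A @ A.T + 0.01 * np.eye(n)
sel = greedy_equal_weight_subset(mu, Sigma, 3)
print(sel)  # the two high-mean assets should appear in the selection
```

The portfolio rule is piecewise constant in the signals: small perturbations of mu or Sigma leave the selected subset, and hence the weights, unchanged.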