Conference Agenda
Session: Statistics for Stochastic Processes

Presentations
A nonparametric statistic for rank changes of volatility functions of Itô semimartingales
Christian-Albrechts-Universität, Germany

The change of the rank of the volatility function of an Itô semimartingale poses a complicated signal-detection problem. In a 2013 paper, Jacod and Podolskij derived a statistic to detect whether the rank of the volatility function is constant over the observation period. Building on their results, we develop a statistic, based on random perturbations of high-frequency observations of an Itô semimartingale, that detects local jumps in the rank. This statistic can be used to estimate the time points at which the rank jumps occur. We illustrate our results with simulated data.

Nonparametric density estimation for the small jumps of Lévy processes
Université Versailles Saint Quentin, France

We consider the problem of estimating the density of the process associated with the small jumps of a pure-jump Lévy process, possibly of infinite variation, from discrete observations of one trajectory. The interest of this question lies in the observation that, even when the Lévy measure is known, the density of the increments of the small jumps of the process cannot be computed in closed form. We discuss results for both low- and high-frequency observations. In a low-frequency setting, assuming the Lévy density associated with the jumps larger than $\varepsilon \in (0,1)$ in absolute value is known, a spectral estimator relying on the convolution structure of the problem achieves a parametric rate of convergence with respect to the integrated $L_2$ loss, up to a logarithmic factor. In a high-frequency setting, we remove the assumption that the Lévy measure of the large jumps is known and show that the rate of convergence depends both on the sampling scheme and on the behavior of the Lévy measure in a neighborhood of zero. We show that the rate we find is minimax optimal up to a logarithmic factor.
An adaptive penalized procedure is studied to select the cutoff parameter. These results extend to the case where a Brownian component is present in the Lévy process. Furthermore, we numerically illustrate the performance of our procedures.

Fractional interacting particle system: drift parameter estimation via Malliavin calculus
Universitat Pompeu Fabra, Spain

We address the problem of estimating the drift parameter in a system of $N$ interacting particles driven by additive fractional Brownian motion of Hurst index $H \geq 1/2$. Considering continuous observation of the interacting particles over a fixed interval $[0, T]$, we examine the asymptotic regime as $N \to \infty$. Our main tool is a random variable reminiscent of the least-squares estimator but unobservable due to its reliance on the Skorohod integral. We show that this object is consistent and asymptotically normal by establishing a quantitative propagation of chaos for Malliavin derivatives, which holds for any $H \in (0,1)$. Leveraging a connection between the divergence integral and the Young integral, we construct computable estimators of the drift parameter. These estimators are shown to be consistent and asymptotically Gaussian. Finally, a numerical study highlights the strong performance of the proposed estimators.

Adaptive denoising diffusion modelling via random time reversal
Kiel University, Germany; Heidelberg University, Germany; University of Stuttgart, Germany

We introduce a new class of generative diffusion models that, unlike conventional denoising diffusion models, achieve a time-homogeneous structure for both the noising and denoising processes, allowing the number of steps to adjust adaptively to the noise level. This is accomplished by conditioning the forward process via Doob's h-transform, which terminates the process at a suitable sampling distribution at a random time.
The model is particularly well suited to generating data with lower intrinsic dimension, as the termination criterion simplifies to a first-hitting rule. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model. We highlight this point by demonstrating how our generative model may be used as an unsupervised learning algorithm: in high dimensions the model outputs, with high probability, the metric projection of a noisy observation $y$ of some latent data point $x$ onto the lower-dimensional support of the data, which we do not assume to be analytically accessible but only to be represented by the unlabeled training data set of the generative model.
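The spectral estimator in the small-jumps abstract above rests on Fourier inversion of an empirical characteristic function with a spectral cutoff. A minimal sketch of that generic deconvolution idea, not of the paper's estimator itself: i.i.d. Laplace draws stand in for the increments, and a fixed cutoff `M` replaces the adaptive penalized choice of the talk; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

# Toy spectral density estimator: recover an unknown density f from samples by
# truncated Fourier inversion of the empirical characteristic function.
rng = np.random.default_rng(0)
samples = rng.laplace(size=5000)   # illustrative stand-in for observed increments

M = 10.0                           # spectral cutoff (chosen adaptively in the talk)
u = np.linspace(-M, M, 401)        # frequency grid
du = u[1] - u[0]

# Empirical characteristic function: (1/n) sum_k exp(i u X_k).
ecf = np.exp(1j * np.outer(u, samples)).mean(axis=1)

# f_hat(x) = (1/2pi) \int_{-M}^{M} e^{-iux} ecf(u) du, via a Riemann sum.
x = np.linspace(-3, 3, 121)
f_hat = (np.exp(-1j * np.outer(x, u)) @ ecf).real * du / (2 * np.pi)
```

The cutoff `M` trades truncation bias against stochastic error; automating that trade-off is exactly the role of the adaptive penalized procedure mentioned in the abstract.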
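The least-squares idea behind the interacting-particle drift estimation abstract can be illustrated in the simplest observable case $H = 1/2$, where the driving noise is standard Brownian motion and the Skorohod integral coincides with the Itô integral, so the estimator is directly computable from the data. A toy sketch with a mean-field drift $\theta(\bar X - X_i)$; the model, parameter values, and names are illustrative assumptions, not the setting of the paper.

```python
import numpy as np

# Toy drift estimation for an interacting particle system with H = 1/2:
#   dX_i = theta * (Xbar - X_i) dt + dW_i,  i = 1..N,
# estimated by the least-squares ratio  theta_hat = sum_i ∫ b_i dX_i / sum_i ∫ b_i^2 dt
# with b_i = Xbar - X_i, discretized by an Euler scheme.
rng = np.random.default_rng(1)
theta = 1.0                    # true drift parameter
N, T, dt = 200, 5.0, 0.01
steps = int(T / dt)

X = np.zeros(N)
num = 0.0                      # accumulates sum_i ∫ b_i dX_i (an Ito integral here)
den = 0.0                      # accumulates sum_i ∫ b_i^2 dt
for _ in range(steps):
    b = X.mean() - X                            # mean-field interaction term
    dW = rng.normal(scale=np.sqrt(dt), size=N)  # Brownian increments
    dX = theta * b * dt + dW                    # Euler step of the particle system
    num += np.dot(b, dX)
    den += np.dot(b, b) * dt
    X += dX

theta_hat = num / den
```

For $H > 1/2$ the numerator is no longer an Itô integral computable from observations, which is the gap the Young-integral construction in the talk is designed to close.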