Conference Agenda
Session: Statistics in natural sciences and technology

Presentations
Time-varying degree-corrected stochastic block models
ISBA/LIDAM, UC Louvain, Belgium

Recent interest has emerged in community detection for dynamic networks observed at a sequence of time points. In this talk, we present a time-varying degree-corrected stochastic block model for fitting a dynamic network, which allows the heterogeneity in the degrees of nodes within a community to evolve over time. To account for the influence of the time window on how network information is aggregated across time points, we propose a smoothing-based estimation method that recovers the time-varying degree parameters and the communities. In particular, we establish consistency rates for the smoothed estimators of the degree parameters and communities via a time-localised profile-likelihood approach. We illustrate the method with comparative simulation studies and an application to a real data set.

Learning population and individual structure in dynamic networks with degree heterogeneity
UCLouvain, Belgium

Dynamic networks provide a powerful framework for characterizing time-varying functional connectivity in neuroimaging studies. In practice, such networks are typically collected from multiple subjects across time and exhibit both temporal dynamics and subject-specific heterogeneity. Brain functional connectivity networks also contain hub nodes, defined as highly connected regions that play critical roles in understanding brain function. In this talk, we propose a mixed-effect dynamic stochastic block model with degree heterogeneity, which simultaneously disentangles the population connectivity structure from individual variability and recovers the trajectories of hub regions through time-varying degree parameters. We develop an efficient local approximate estimation procedure and evaluate its performance through extensive simulations and a case study of dynamic functional connectivity from the Human Connectome Project.
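Both talks build on the degree-corrected stochastic block model, in which an edge between nodes i and j at time t occurs with probability theta_i(t) * theta_j(t) * B[z_i, z_j]. The following is a minimal Python sketch of that structure together with a kernel-smoothed degree summary, illustrating the idea of aggregating network information over a local time window; all names, dimensions, and values are illustrative, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n nodes, K communities, T network snapshots.
n, K, T = 60, 2, 20
z = rng.integers(0, K, size=n)                 # fixed community labels
B = np.array([[0.9, 0.2], [0.2, 0.8]])         # block connectivity matrix
t_grid = np.linspace(0.0, 1.0, T)

# Smoothly evolving degree parameters theta_i(t), rescaled into [0.25, 1].
theta = 0.5 + 0.5 * np.sin(2 * np.pi * (t_grid[None, :] + rng.uniform(size=(n, 1))))
theta = 0.25 + 0.75 * (theta - theta.min()) / (theta.max() - theta.min())

def edge_probs(t_idx):
    """P(A_ij = 1 at time t) = theta_i(t) * theta_j(t) * B[z_i, z_j]."""
    th = theta[:, t_idx]
    return np.clip(np.outer(th, th) * B[np.ix_(z, z)], 0.0, 1.0)

# Sample one adjacency matrix per time point (no self-loops).
A = np.array([(rng.uniform(size=(n, n)) < edge_probs(t)) for t in range(T)], dtype=float)
for t in range(T):
    np.fill_diagonal(A[t], 0.0)

def smoothed_degree(t0, bandwidth=0.15):
    """Gaussian-kernel-weighted mean degree around time t0, normalised to [0, 1].

    This mimics the role of the time window: snapshots near t0 contribute
    most to the local degree estimate.
    """
    w = np.exp(-0.5 * ((t_grid - t_grid[t0]) / bandwidth) ** 2)
    w /= w.sum()
    return np.tensordot(w, A, axes=1).sum(axis=1) / (n - 1)

d_hat = smoothed_degree(T // 2)
```

The bandwidth controls the trade-off the first abstract alludes to: a wider window pools more snapshots (lower variance) but blurs genuinely time-varying degree behaviour (higher bias).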
How to build your latent Markov model — the role of time and space
Bielefeld University, Germany

Statistical models that involve latent Markovian state processes have become immensely popular tools for analysing time series and other sequential data. However, the plethora of model formulations, the inconsistent use of terminology, and the various inferential approaches and software packages can be overwhelming to practitioners, especially when they are new to this area. Here we aim to provide guidance for both statisticians and practitioners working with latent Markov models by offering a unifying view on what are otherwise often considered separate model classes, from hidden Markov models through state-space models to Markov-modulated Poisson processes. In particular, we provide a roadmap for identifying a suitable latent Markov model formulation given the data to be analysed. Furthermore, we emphasise that understanding how recursive techniques exploiting the models' dependence structure can be used for inference is key to applied work with any of these model classes. The R package LaMa adopts this unified view and provides an easy-to-use framework for fast numerical maximum likelihood estimation, allowing users to flexibly tailor a latent Markov model to their data using a Lego-type approach. Real-data examples from ecology, medicine and finance will be used to illustrate the modelling workflow.

A Simple and Robust Multi-Fidelity Data Fusion Method for Effective Modelling of Citizen-Science Air Pollution Data
University of Glasgow, United Kingdom; ETH Zürich

We propose a robust multi-fidelity Gaussian process for integrating sparse, high-quality reference monitors with dense but noisy citizen-science sensors.
The approach replaces the Gaussian log-likelihood in the high-fidelity channel with a global Huber loss applied to precision-weighted residuals, yielding bounded influence on all parameters, including the cross-fidelity coupling, while retaining the flexibility of co-kriging. We establish attenuation and unbounded influence of the Gaussian maximum likelihood estimator under low-fidelity contamination and derive explicit finite bounds for the proposed estimator that clarify how whitening and mean-shift sensitivity determine robustness. Monte Carlo experiments with controlled contamination show that the robust estimator maintains stable MAE and RMSE as anomaly magnitude and frequency increase, whereas the Gaussian MLE deteriorates rapidly. In an empirical study of PM2.5 concentrations in Hamburg, combining UBA monitors with openSenseMap data, the method consistently improves cross-validated predictive accuracy and yields coherent uncertainty maps without relying on auxiliary covariates. The framework remains computationally scalable through diagonal or low-rank whitening and is fully reproducible with publicly available code.
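The core mechanism of the last abstract, a Huber loss on whitened (precision-weighted) residuals whose influence function is bounded, can be sketched in a few lines of Python. This is a generic illustration of why a single gross anomaly cannot dominate the fit, not the authors' implementation; the tuning constant, data, and variable names are all assumptions.

```python
import numpy as np

def huber(r, c=1.345):
    """Huber rho: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

def huber_psi(r, c=1.345):
    """Influence function (derivative of rho): the identity clipped to [-c, c]."""
    return np.clip(r, -c, c)

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=200)
y[:5] += 25.0                       # a few gross, low-fidelity-style anomalies

# Precision-weighted (whitened) residuals r = L^{-1}(y - mu) with Sigma = L L^T;
# for a diagonal Sigma this reduces to simple sigma-scaling.
sigma = np.ones_like(y)
r = (y - np.median(y)) / sigma

# The Gaussian (quadratic) loss grows without bound in the anomalies, so they
# pull hard on every parameter; the Huber loss grows only linearly, capping
# each observation's contribution to the estimating equations at +/- c.
gauss_contrib = 0.5 * r[:5] ** 2
huber_contrib = huber(r[:5])
assert np.all(huber_contrib < gauss_contrib)
assert np.all(np.abs(huber_psi(r)) <= 1.345)
```

The whitening step matters: clipping raw residuals would treat a 1-unit error the same whether the sensor is precise or noisy, whereas clipping precision-weighted residuals bounds influence on the scale the likelihood actually uses, which is the sense in which the abstract says whitening determines robustness.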