Conference Agenda
Overview and details of the sessions of this conference.
Poster and Demo Session
Presentations
Walk the Sanctum: Live 360° Tour + Evaluation Workflow for Chemmanthatta Mahadeva Temple Murals
Indian Institute of Technology, Jodhpur, India

Virtual heritage workflows for sacred sites often lack transparency in capture, authoring, ethical considerations, and user evaluation. This poster presents a reproducible pipeline for creating and assessing 360° virtual tours of Kerala temple murals, demonstrated through the Chemmanthatta Mahadeva Temple, Thrissur. With permission from the Archaeological Survey of India (ASI), panoramas were captured across mural panels, authored into an interactive tour using Storyline 360, and augmented with an embedded survey on aura, sacredness, and authenticity. Attendees can experience the live prototype, exploring the temple's spatial logic from entrance to sanctum while testing the evaluation instrument, and gain a sense of what it feels like to be inside the temple viewing the murals.

Literature Context
This work draws on Walter Benjamin's writing on the reproducibility of artworks. Benjamin's critique of aura frames digital reproduction's detachment from ritual context, extended by virtual heritage scholars examining 3D reconstructions' capacity for "virtual presence" (Benjamin, 2001; Bakker, 2018; Galeazzi et al., 2018). VR heritage studies emphasize workflow transparency for reproducibility, particularly the ethical capture and documentation of sacred content (Kenderdine & Yip, 2018). Indian virtual darśan research reveals hybrid devotion via immersive tours, though empirical evaluation remains sparse (Eck, 1996). This work addresses these gaps by operationalizing aura/authenticity measurement within a reusable pipeline.

Workflow Pipeline (5 documented stages):
Live Demo Features:
Reproducibility Package:
This beta tool addresses DARIAH's reproducibility mandate by surfacing tacit decisions: ASI negotiation protocols, darśan-sensitive UX, and sacred-content consent. The workflow is interoperable and adaptable for sacred heritage globally. Future iterations include VR headset testing and multi-temple expansion.
Poster Visuals: workflow diagram, temple panorama screenshots, survey interface, results charts, QR code to live demo.

VIZWP Visualising Wikipedia
1SUPSI University of Applied Sciences and Arts of Southern Switzerland, Switzerland; 2Wikimedia Italia

A new tool to visualise groups of Wikipedia articles, analyse and monitor them, support the work of volunteers, researchers and GLAMs, and create knowledge landscapes. Wikipedia is a mainstream source of information accessible in 342 languages, with 65 million articles and over 25 billion views per month: it is the largest existing open collaborative peer-production platform, with 124 million registered users and over 600,000 contributors actively involved in producing and disseminating open knowledge. Supporting those active communities of volunteers, researchers and GLAMs in evaluating and producing quality knowledge responds to a societal need and triggers research in design, boosting active citizenship and democratic processes. The prototype we present will be available for Wikipedia articles in English, Spanish, French, and Italian. It will focus on climate change and sustainability articles to assess current coverage and test interventions. Once developed, the tool will be adaptable to any topic, starting from Wikidata and Wikipedia categories. Through the tool:
This participatory and visual analytics project and free and open software tool is developed in the framework of the international research project "Visual Analytics for Sustainability and Climate Change: Assessing online open content and supporting community engagement. The case of Wikipedia" (2025-2029), led by the University of Applied Sciences and Arts of Southern Switzerland (SUPSI), in collaboration with Wiki Education Foundation, Wikimedistas de Uruguay, Wiki in Africa and Open Climate Campaign, with the endorsement of Wikimedia Italia, the support of the Swiss National Science Foundation (SNSF, grant n. 10.003.183) and the engagement of many Wikipedia and Wikidata volunteers.
* Project description: https://meta.wikimedia.org/wiki/Visual_Analytics_for_Sustainability_and_Climate_Change

Corpusense: A Lightweight Infrastructure for Producing Structured Data from Digitised Heritage Collections
1LASTIG/ENSG/IGN, France; 2Bibliothèque nationale de France, France; 3Laboratoire de recherche de l'EPITA, Le Kremlin-Bicêtre, France; 4CRH/EHESS, France; 5Centre-Jean-Mabillon/ENC, Paris

Mezanno is a program jointly led by the Bibliothèque nationale de France (BnF), IGN, EHESS, and EPITA, which aims to facilitate the production of structured data from digitized serial historical sources (such as directories, registers, and administrative records).
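The next paragraph details the Corpusense workflow, whose first step is importing sources via IIIF. As a rough, hedged illustration of what such an import involves, here is a minimal sketch; it assumes a IIIF Presentation API 2.x manifest, and the manifest URL is a placeholder, not Corpusense's actual code:

```python
import requests

# Hedged sketch, not Corpusense code: list the image URLs referenced by a
# IIIF Presentation API 2.x manifest. The manifest URL is a placeholder.
MANIFEST_URL = "https://example.org/iiif/directory-1900/manifest.json"

manifest = requests.get(MANIFEST_URL, timeout=30).json()

# IIIF 2.x structure: sequences -> canvases -> images -> resource["@id"]
for canvas in manifest["sequences"][0]["canvases"]:
    resource = canvas["images"][0]["resource"]
    print(canvas.get("label", "?"), "->", resource["@id"])
```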
In this demonstration, we will present Corpusense (https://mezanno.xyz/corpusense), the web interface of the Mezanno project, designed to enable humanities and social sciences (HSS) teams and heritage institutions to manage, without heavy infrastructure, a complete workflow from images to structured data: importing sources via IIIF, building working collections, running automated processes (layout analysis/segmentation, OCR, and structuring), editing and correcting outputs within the interface, and exporting data in standard formats (CSV/JSON/Excel). Corpusense is a lightweight platform (deployed as static files, with projects stored locally) that lowers technical and financial barriers while fostering the autonomy of research communities. Its architecture implements a clear separation of responsibilities and expertise: heritage institutions ensure sustainable access to content (notably via IIIF); technical experts deploy remote services accessible through APIs (transcription, structuring, enrichment); and researchers and heritage practitioners retain control over source selection, modeling choices, and the critical validation of results. This approach is situated with respect to existing platforms (collaborative transcription environments and widely used transcription tools, as well as open environments such as eScriptorium) while focusing specifically on producing reusable tabular structured data directly usable in downstream HSS workflows. The audience will follow a guided and reproducible workflow, illustrated using a complex heritage corpus. We will demonstrate: (1) the selection and import of images from a IIIF manifest; (2) the creation of a working corpus and its organization into collections; (3) the configuration of an extraction task (definition of fields, formats, constraints, and normalization rules); (4) the execution of automated processes and the display of results within the interface; (5) the review and editing of results by the user prior to export, supported by simple indicators of errors and cases requiring verification; and (6) the options for exporting and sharing the resulting data. We will also discuss two issues that are central to responsible, usable infrastructures for digital humanities. First, trust in automated results: how to support rigorous evaluation campaigns despite their cost and protocol complexity, how to define metrics that make sense for end users, and how to address alignment problems between predictions and reference data. Second, robustness and interoperability: the modular integration of evolving OCR/HTR components and the stabilization of APIs to support future extensions and richer outputs.

The AIncient Tutor
University of Zurich, Switzerland

The AIncient Tutor is an AI-powered language-learning app that transforms how learners translate ancient languages such as Latin, Greek, Middle Egyptian, and Coptic. Unlike existing tools, it provides interactive learning environments for any text, including unannotated sources. The app guides learners step by step through the translation process, automatically identifies sentence structures and grammatical phenomena, and promotes comprehension through personalized tasks. Teachers can closely monitor learning progress and offer targeted support. Gamification elements motivate learners at different levels, from secondary education and university to hobbyists. In this way, the AIncient Tutor enables an individualised, efficient, and sustainable engagement with ancient languages.

Need a MANO? Learning by Doing in Digital Manuscript Studies
Johannes Gutenberg-Universität Mainz, Germany

Context
This demo introduces MANO - Manuscripts Online, a pedagogically oriented platform designed for teaching and learning in digital manuscript studies.[1] Developed as a fully open, browser-based, and serverless environment, MANO enables students, educators, and cultural heritage practitioners to create, share, and explore manuscript metadata, transcriptions, and learning materials without installation, authentication, or dependence on institutional infrastructure. MANO operates as a low-threshold entry point into digital manuscript scholarship by emphasising hands-on TEI XML work with immediately visible results, an approach shown to support effective TEI learning (Faghihi et al., 2022).

Approach
MANO is designed around four interrelated principles (accessibility, collaboration, sustainability, and user-centred design) which shape usability and user experience (UX) decisions to support learner engagement. Accessibility is operationalised through interface and workflow choices that minimise technical prerequisites and foreground open scholarly standards, enabling users to interact directly with TEI-based metadata and transcriptions. This principle is embodied in core components such as the Metadata Editor and Transcription Viewer, whose interfaces, sample-loading functions, and XML previews make underlying data structures intelligible to beginners while preserving methodological transparency. Collaboration is supported through shared spaces for learning and exploration, via the Resources and Metadata Collection sections, which enable instructors and learners to consult and reuse community-contributed materials and records. Sustainability is pursued through a lightweight architecture based on GitHub repositories, avoiding servers and ensuring long-term maintainability with minimal upkeep.[2] Across all components, MANO implements user-centred design practices grounded in research on usability in digital humanities (DH) tool development. These include consistent terminology, real examples, error prevention, clear status notifications, technical and procedural documentation, and the use of familiar graphical conventions (Bulatovic et al., 2016; Gibbs and Owens, 2012). This approach builds on established usability heuristics (Nielsen, 1994) and on critical analyses of DH infrastructures, which demonstrate that, despite increased attention to user-centred methods, many DH applications remain difficult to use or insufficiently self-explanatory (Thoden et al., 2017). Together, these principles position MANO not simply as a functional platform, but as an environment in which design choices actively reduce cognitive barriers and facilitate engagement in digital manuscript studies.

Contribution
The demo presents MANO through concrete examples and sample data. It shows how its different interface components support learning, collaboration, and confidence-building, and how UX choices influence interaction with digital tools. Through selected use cases, the demo illustrates how manuscript descriptions and transcriptions move from structured TEI XML to readable, shareable online representations, making visible the often opaque transition from encoding to interpretation. MANO is presented not as a substitute for professional research infrastructures, but as a complementary environment that supports entry and participation in digital manuscript studies.
[1] MANO is accessible at: https://mano-project.github.io/.
[2] The MANO project repositories are available via the project's GitHub organisation: https://github.com/orgs/mano-project/repositories.

RIS3D (Referenced Information System in 3D): Toward a Multiformat and Multiparadigm Annotation System
1UMR 6034 Archéosciences Bordeaux/Univ. Bordeaux, Bordeaux (France); 2UMR 6034 Archéosciences Bordeaux/CNRS, Bordeaux (France)

RIS3D tackles the problem of merging information from multiple domains around a natural 3D representation of the studied object. Such a gathering of multidisciplinary information is required in the Cultural Heritage Sciences. For this purpose, the data managed by RIS3D encompasses an ever-wider range of formats, including textual annotations, numerical measurements, images, files, dates, two-dimensional series such as spectra or sensor outputs, data cubes like those generated by X-ray fluorescence (XRF), and voxel grids such as those produced by ground-penetrating radar. Given the information system's capacity to manage extensive datasets, it has to provide a user-friendly query system accessible through a nodal interface. Users are also able to populate the underlying database efficiently by uploading CSV files in bulk, rather than entering each record manually, and benefit from streamlined processes for inserting two-dimensional series. We are working to make the anchors that position information in 3D as generic as possible. We have demonstrated that a single system can combine concepts such as simple 3D points, 2D surfaces overlaid on 3D meshes, or individual mesh elements (particularly when the 3D model consists of multiple meshes, as is often the case in architectural applications). To ease the concomitant visualization of different information, we use markers that can be customized according to the data they represent (for instance, color may indicate a specific category, a brief text label may appear above each marker, and marker size may vary in proportion to a numerical value). Our RIS3D implementation comprises a cloud-hosted server and a client application designed for personal computers. The server infrastructure integrates a PostgreSQL database with a NodeJS web server, while the client is developed in Unity and establishes a connection to the server to visualize and manipulate three-dimensional objects alongside their associated data. All data is stored in JSON format, allowing users to define and organize their content within the hierarchical structure permitted by JSON. This data can be edited and visualized in three-dimensional space, anchored to meshes or point clouds. On the server side, project administrators can create user accounts with tailored access permissions, establish user groups, and perform database import and export operations. The proposed demonstration showcases several archaeological sites and artifacts, illustrating how complex datasets can be visualized in three dimensions and queried by users. This demonstration will highlight the system's ability to handle large volumes of data simultaneously, present them as customizable markers, and facilitate the consolidation of diverse information for researchers and conservators, thereby enabling more effective analysis and interpretation. Already in use in multiple archaeological projects, the software remains under development to incorporate the requirements that emerge from these projects.
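To make the JSON anchoring described above concrete, here is a minimal, hedged sketch of what an anchored annotation record might look like. Every field name is an illustrative assumption; RIS3D's actual schema is not published in this abstract:

```python
import json

# Illustrative sketch only: RIS3D's real schema is not documented here,
# so every field name below is an assumption, not the project's format.
annotation = {
    "anchor": {
        "type": "point3d",               # could also be "surface2d" or "mesh_element"
        "mesh": "church_nave.obj",       # hypothetical mesh identifier
        "position": [12.40, 3.15, -7.82],
    },
    "marker": {
        "color": "#c0392b",              # e.g. category encoded as colour
        "label": "XRF sample 17",        # brief text shown above the marker
        "scale": 1.8,                    # e.g. proportional to a measured value
    },
    "payload": {
        "kind": "xrf_spectrum",
        "unit": "keV",
        "values": [0.12, 0.34, 0.55],    # truncated two-dimensional series
    },
}

print(json.dumps(annotation, indent=2))
```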
The source code is intended to be released as open source in the future, with comprehensive documentation and improved server-side security.

Spirit of Radio: Demonstration of a Nostalgic Interface as a Bridge for Age-Friendly Artificial Intelligence Engagement
1TU Dublin, Ireland; 2ADAPT Centre, Dublin City University

Turn the dial on a vintage 1950s radio and encounter something unexpected: AI-generated music on one band and traditional human-curated content on another. This interactive demo presents a bespoke, interactive "AI Radio" designed as a tangible and familiar interface to engage audiences (e.g. older adults) in discussions about the social and ethical impacts of Artificial Intelligence (AI). Rather than abstract discussions about algorithms, the nostalgic radio interface provides a hands-on way to explore questions about AI quality, ethics and accessibility in everyday media. Research [1] indicates that adopting nostalgic elements can help older adults engage with new technologies in a more intuitive and less intimidating manner. The radio was developed through a Participatory Design [2] methodology within an Action Research [3] framework. Its design was directly informed by six public engagement events where older adults were asked to reimagine legacy technology in an AI context. This process transformed the radio from a simple playback device into a co-designed tool that reflects the lived experiences and accessibility needs of the older demographic. Technical development (Figure 1) focused on Open-Source Hardware (OSHW), using modular Python-based software running on an internal Raspberry Pi, to ensure the platform remains adaptable for various engagement scenarios. The modular design of the chassis (Figure 2) and internals provides an engaging mechanism for exploring the use of accessible maker technologies like 3D printing and laser cutting, and facilitates replication and modification of the platform by maker communities to support the co-evolution of software and hardware as new interactions emerge, for example, to develop radio designs reflective of different decades. Participants at the demo interact with the radio's three primary "bands," which prompt reflection and discussion:
Through these interactions, participants are invited to "reimagine" the radio's functionality, providing feedback on the impact of AI-generated content, how AI-curated recommender systems should behave, and what ethical guardrails are necessary. This demo aligns with the DARIAH 2026 goal of "Building Infrastructures of Engagement" by providing a case study in which physical objects facilitate digital literacy. It serves as a mechanism to:
Support National Literacy: The platform is intended for deployment in public engagement programs, such as Age-Friendly AI: Ireland's National Artificial Intelligence Literacy Initiative for Older Adults [4], to democratise conversations about an increasingly AI-driven world.

New Kids on the Block: Additional SSHOMP Community Use Cases: Training Materials for the Humanities and Archaeology/Conservation Communities
1Göttingen State and University Library, Germany; 2LEIZA - Leibniz-Zentrum für Archäologie, Germany; 3Berlin-Brandenburgische Akademie der Wissenschaften, Germany

Community Curation
The Social Sciences and Humanities Open Marketplace (SSHOMP) serves as a discovery portal for the Social Sciences and Humanities (SSH) research communities. It aggregates and contextualises solutions across the research data life cycle, facilitating the discovery of resources essential for sharing and re-using workflows. Currently hosting approximately 6,000 items from over 15 trusted sources, SSHOMP relies on community curation to keep the catalogue current. To maintain high (meta)data quality, the platform employs curation routines that merge automated processes with manual tasks. In this contribution to the DARIAH Annual Event 2026, we present SSHOMP through the lens of two specific community use cases, NFDI4Objects and Text+, to demonstrate the platform's inclusivity and technical uptake.

Expanding Community Access: NFDI4Objects
The first use case highlights the platform's openness towards the archaeological and conservation communities via NFDI4Objects. A recent survey (Fischer/Witt 2025) revealed a high demand for accessible training materials on research data management (RDM) within the conservation community, where documentation practices remain heterogeneous and weakly standardised. As a sustainable, participatory platform, SSHOMP serves as an ideal hub for these materials, supporting self-study and formal training for early-career researchers. We demonstrate the seamless integration of training materials, significantly lowering the barrier for new user groups. Crucially, contextualisation ensures these resources are not isolated; they are linked alongside existing tools in the catalogue. This approach underscores SSHOMP's commitment to inclusion, ensuring that discipline-specific resources are discoverable within the broader SSH ecosystem.

Infrastructural Agility and Technical Integration: Text+
The second use case focuses on Text+, the NFDI consortium for language- and text-based research data. Text+ has used SSHOMP to disseminate services since 2022 and expanded to training materials in 2025. This integration highlights the spectrum of technical embedding options offered by the platform:
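One such embedding option is programmatic access through the Marketplace's public REST API. The sketch below is a hedged illustration only; the endpoint path, parameters, and response fields are assumptions based on the publicly documented SSH Open Marketplace API and should be verified against it:

```python
import requests

# Hedged sketch: endpoint path, parameters, and response fields are
# assumptions to be checked against the SSH Open Marketplace API docs.
API = "https://marketplace-api.sshopencloud.eu/api/item-search"

resp = requests.get(
    API,
    params={"q": "research data management", "categories": "training-material"},
    timeout=30,
)
resp.raise_for_status()

# Print label and access URL(s) for each hit, if present in the response.
for item in resp.json().get("items", []):
    print(item.get("label"), "->", item.get("accessibleAt"))
```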
Bibliography
Serving the German Archival Community and Beyond: the nestor Working Group "Archivstandards"
University of Hamburg, Germany

nestor - the Network of Expertise in Long-term Storage of Digital Resources in Germany - is a non-profit organisation led by the German National Library (Deutsche Nationalbibliothek). nestor has four strategic pillars: lobbying, content development, qualification, and offering services. It issues the nestor Seal for Trustworthy Digital Archives, which assesses compliance with the German standard DIN 31644 "Kriterien für vertrauenswürdige digitale Langzeitarchive" (Criteria for Trustworthy Long-term Digital Archives), and it publishes free-access guidelines and technical specifications. nestor has nine working groups on topics such as personal digital archiving and community surveys. The nestor working group on archival standards (https://www.langzeitarchivierung.de/Webs/nestor/DE/Arbeitsgruppen/AG_Archivstandard/ag_archivstandard_node.html, hereafter: WG) was founded in 2019. It allows archivists to create and maintain standards for the archival community, and it is led by the Landesarchiv Baden-Württemberg. Since the University of Hamburg Archives joined the WG in 2022, the "nestor-Archivstandard Archivierung von Studierendendaten aus Fachverfahren" (nestor Archival Standard for Archiving Students' Data from E-Government Applications) was issued in 2023 and two more standards have been drafted: "Standardisierte Aussonderung aus DMS" (Standardised Disposition from Records-Management Systems) and "Prozesse und Kriterien zur Bewertung von Forschungsdaten" (Processes and Criteria for Appraising Research Data). Both standards are to be published in 2026. The WG holds a special position: on the one hand, as the only nestor WG representing archives' and archivists' needs, it aims to represent the archival community within Germany's leading organisation for digital preservation and long-term archiving; on the other hand, it makes communication between nestor (and its members) and archival institutions more direct and effective. Because suggestions for new standards may be proposed bottom-up by any archivist or archival institution, and since the WG is representative of the German archival landscape (in terms of typology, size and geographical distribution), the WG is, in the author's view, a good example of a governance framework for community participation and participatory co-creation.

eKultura: Creative Digital Practices for Engaging Audiences with Croatian Heritage
Ministry of Culture and Media of the Republic of Croatia, Croatia

This poster examines the role of digital storytelling within the eKultura framework, showcasing how Croatian heritage institutions are rethinking the way audiences experience cultural heritage. Through the eKultura portal, institutions can present their collections and create virtual exhibitions that offer immersive, thematic experiences. These digital exhibitions allow users to engage with historical artifacts in meaningful ways, connecting past and present. By contextualizing objects through technology, institutions provide audiences with opportunities to explore complex cultural narratives, making heritage more accessible and engaging for diverse audience groups. In addition to the portal, the eKultura Instagram profile demonstrates the potential of social media to extend the reach and impact of digital cultural heritage.
The profile uses reels, stories, curated posts, and behind-the-scenes content to showcase Croatian heritage in dynamic and interactive ways. Digital tools and AI are employed to create engaging content that is relatable and visually appealing, transforming heritage from static collections into interactive experiences. This approach is particularly effective in reaching younger audiences, fostering a sense of connection and participation, and encouraging users to explore cultural narratives beyond the walls of institutions. Social media thus serves as a complementary platform to digital exhibitions, enhancing their educational and participatory potential. Also worth mentioning are the digital storybooks and interactive coloring books shared on social media, created using digitized collections from the eKultura portal. These books invite users, especially children and young learners, to explore stories, motifs, and traditions in an imaginative, hands-on way. The eKultura framework also includes an augmented reality (AR) exhibition, which allows visitors to interact with museum objects through a mobile application. By scanning posters, users can explore objects in AR, reposition them on their screens, take photos, and access enriched contextual information. This interactive layer transforms traditional exhibition formats into multisensory experiences, supporting personalized learning and playful engagement. The AR exhibition illustrates how creative technological interventions can revitalize heritage presentation, making it both educational and entertaining. Together, the eKultura portal, social media presence, and AR exhibition showcase the transformative power of digital storytelling, interactive technologies, and innovative communication in the field of cultural heritage. These initiatives not only make heritage more accessible and engaging but also invite audiences to actively explore and connect with cultural narratives. By combining digital tools, AI, and creative content strategies, eKultura ensures that Croatia's cultural heritage resonates with contemporary audiences, fostering awareness, appreciation, and thoughtful reflection on its rich historical legacy.

From Hidden Practice to Public Dialogue: Connecting Conservation Science to Society through Standardised Data
Leibniz-Zentrum für Archäologie (LEIZA), Germany

Conservation of cultural heritage objects is an important component of preserving material heritage, yet it often remains largely invisible to researchers and the wider public. Conservation reports have long been an essential part of professional practice, documenting an object's condition and damage as well as describing all conservation interventions in detail (see DIN EN 16095). However, such records are often heterogeneous, difficult to compare, and rarely openly accessible (Fischer/Witt 2025). As a result, essential knowledge about an object's material history or the decision-making processes and efforts behind conservation work remains inaccessible to both scholarly research and public understanding. This contribution presents an overview of an approach within NFDI4Objects that will provide conservators with a foundation for preparing their data in accordance with good research data management (RDM) practices. In this context, 'infrastructure' is understood not primarily as software or a platform, but rather as conceptual and methodological infrastructure.
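To illustrate what such methodological infrastructure can yield in practice, a minimal, hedged sketch of a structured condition record follows. All field names are illustrative assumptions, not the NFDI4Objects schema itself (which, as described below, is LIDO- and CIDOC CRM-aligned):

```python
import json

# Minimal sketch of a structured conservation record. All field names are
# illustrative assumptions; the actual NFDI4Objects schema is LIDO- and
# CIDOC CRM-aligned (see Fischer/Mempel-Länger 2025b).
record = {
    "object_id": "museum-inv-1234",              # hypothetical identifier
    "condition_assessment": {
        "date": "2025-03-14",
        "condition_term": "corrosion, active",   # ideally from a controlled vocabulary
        "vocabulary_uri": "https://example.org/vocab/corrosion-active",  # placeholder
    },
    "intervention": {
        "type": "surface cleaning",
        "materials": ["ethanol", "cotton swab"],
        "conservator": "anonymised",
        "report": "DIN EN 16095-style condition report (PDF)",
    },
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```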
The approach comprises the development of interconnected components such as domain-specific controlled vocabularies (Fischer/Mempel-Länger 2025a), a LIDO- and CIDOC CRM-aligned metadata schema for structured conservation documentation (Fischer/Mempel-Länger 2025b; Schwenk/Fischer 2025), as well as practical RDM guidelines and training materials. Together, these components enable the creation of standardised conservation data in accordance with the FAIR principles, designed to be widely applicable and adaptable for a diverse range of stakeholders (e.g. museums, universities, research institutions). This promotes the connectivity of conservation science within the broader humanities ecosystem, ensuring that conservation data can be meaningfully integrated with related infrastructures and collaborative research networks. The societal potential becomes evident when standardised conservation data are made openly accessible. Only with consistently structured and unambiguous data does it become possible to visualise the "condition biography" of an object across time. Historic alterations, degradation, different conservation interventions, and so on can then be presented, for example, as interactive timelines or visualisations in online museum portals. In this way, conservation, usually undertaken behind closed doors, becomes visible and experientially accessible. It shows conservation as an openly communicated scientific practice, allowing researchers, museum visitors and the wider public to understand the value, rationale and impact of conservation work. The approach presented here demonstrates how an 'infrastructure of engagement' can take the form of methodological foundations, enabling sustainable research practices and making cultural heritage tangible for society.

Bibliography
DIN EN 16095 (2012). Erhaltung des kulturellen Erbes - Zustandsaufnahme an beweglichem Kulturerbe.
Kristina Fischer, Lasse Mempel-Länger (2025a). Restaurierungswissen digital vernetzen - Von textlichen Dokumentationen zu maschinenlesbaren Begriffen. FORGE 2025 - Daten neu denken (FORGE 2025), Rostock. Zenodo. https://doi.org/10.5281/zenodo.17202284
Kristina Fischer, Lasse Mempel-Länger (2025b). Aufbau eines Minimalmetadatensatzes für die Konservierung-Restaurierung. SODa Forum am 16.10.2025. Zenodo. https://doi.org/10.5281/zenodo.17367214
Gudrun Schwenk, Kristina Fischer (2025). Der Nutzen von Ontologien für die Konservierungs- und Restaurierungsdokumentation: CIDOC CRM als Brücke zwischen Kulturerbe und digitaler Welt. FORGE 2025 - Daten neu denken (FORGE 2025), Rostock. Zenodo. https://doi.org/10.5281/zenodo.17178210
Kristina Fischer, Nathaly Witt (2025). Zusammenfassung des Status Quo im Forschungsdatenmanagement für den Bereich der Konservierung-Restaurierung (Version v1). Zenodo. https://doi.org/10.5281/zenodo.17475354

Community Outreach through Scientific Blogging
1Göttingen State and University Library, Germany; 2Max Weber Foundation, Germany

Introduction
Blogs are established components of scholarly communication. Characterized by open "engagement and participation," they serve as a low-threshold medium for academic exchange and Citizen Science. Unlike traditional publications, blog posts are compact and address a broader audience, contributing significantly to knowledge transfer and social engagement. The Text+ Blog (ISSN 2941-2250, Seltmann et al. 2023) demonstrates the advantages of the platform Hypotheses.org. With its focus on text- and language-based research data in the humanities, it connects the European DARIAH community.
Since 2022, the blog has developed into an acknowledged platform for topics ranging from digital editions and lexical resources to artificial intelligence.

The Text+ Blog as a Bridge between Infrastructure and Community
As a medium for project activities, the Text+ Blog bridges the gap between potential users and the team developing demand-oriented infrastructure. It addresses "closed-shop development," where services often reach users only near the end of a project term or fail to find community uptake. The resonance of this approach is evidenced by recent usage data: in 2024, the blog recorded 5,472 unique visitors, the majority of them (4,457) from Germany. On average, the platform attracts over 450 visitors per month. Analysis of access pathways highlights a strong core readership, with 53% of users accessing the blog directly, while 28% arrive via search engines, 15% via referrals from other websites, and 3% through social media channels. The blog serves multiple functions:
The Text+ Blog as Lived European Research Infrastructure Cooperation
Technically, the Text+ Blog is hosted on the European platform Hypotheses.org and supported by the Max Weber Foundation as part of its commitment to DARIAH and OPERAS. This platform enables free scientific blogging without technical barriers by handling maintenance and preservation. Operated via OpenEdition, it hosts thousands of blogs in the Humanities and Social Sciences. Hypotheses.org ensures academic citability by assigning DOIs to every post, offering a unique guarantee of long-term technical provision (Ochsner et al. 2025).

Blog Articles as the Foundation of a Simple Multichannel Strategy
Communication follows a jointly developed strategy that defines target groups, goals, channels, and standardized modules. Blog posts serve as an excellent repository for consistent communication, provided they meet high academic standards. Once established, noteworthy posts are easily promoted via platforms like LinkedIn, X, or Mastodon using visual snippets. Simultaneously, the latest posts are teased on the project website as informational resources. Within the blog itself, tagging and keywords enable targeted browsing of the content.

A Digital Humanities Infrastructure and Network for Austria: How CLARIAH-AT Fosters a Community of Engagement
1Austrian Academy of Sciences (OeAW), Austria; 2University of Graz, Austria

CLARIAH-AT is the consortium of Austrian universities, research and GLAM institutions that coordinates and drives Austrian activities in the European ESFRI research infrastructures CLARIN-ERIC (Common Language Resources and Technology Infrastructure) and DARIAH-EU (Digital Research Infrastructure for the Arts and Humanities). The consortium brings together 11 Austrian institutions with relevant expertise in research, development and teaching in the field of Digital Humanities, as well as in the establishment and sustainable operation of research (data) infrastructures, which play an active role in the Austrian development of Digital Humanities and the establishment and expansion of technical and social infrastructures for the humanities in general. With our poster we showcase Austrian efforts to establish a community of practice and engagement along the following central strategic framework: i) the Digital Humanities Austria (DHA) Strategy; ii) project funding; iii) funding opportunities for early career researchers; iv) events:
v) other activities. All activities feed into the national DH strategy mentioned at the beginning.

FedOSC Belgium: Building Alignment Across Belgian FSIs Through GLAM Engagement
1Institute of Natural Sciences, Belgium; 2Royal Library of Belgium; 3Royal Observatory of Belgium; 4Belnet

Belgium's Federal Scientific Institutions (FSIs) manage extensive scientific and cultural heritage collections, ranging from biodiversity and environmental data to digitized archives, artworks, and museum objects. Despite a shared federal policy on Open Data and Open Access, local practices and levels of data literacy and maturity differ significantly among institutions. In response to these disparities, the Federal Open Science Cloud (FedOSC) Belgium was launched in 2024 to support interoperability, contribute to shared governance, and improve services that benefit research across scientific and cultural heritage domains. GLAM institutions (Galleries, Libraries, Archives, Museums) highlight the diversity and complexity of this landscape particularly well. A FedOSC survey identified limited capacity, time constraints, and preservation infrastructure as major data management challenges. Arts-oriented FSIs additionally reported strong needs related to metadata quality and standardization, while documentation-related FSIs mentioned a wider range of needs: Open Access publishing, data sharing, and reusability. These findings resonate with broader assessments of GLAM data showing heterogeneous metadata, legacy digitization outputs, unclear rights information, and inconsistent or incomplete documentation of data origins and transformations, all of which can impede reuse and computational analysis. At a structural level, these issues reflect the interpretive and context-dependent character of heritage metadata, which differs from the more standardized data models often used in other scientific domains. This poster presents FedOSC's direct engagement with the GLAM sector during its current two-year phase. It focuses on engagement as an enabling infrastructure in its own right, guided by the principles of the UNESCO Recommendation on Open Science and based on coordinated exchange, shared problem framing, and practical collaboration on transversal initiatives. GLAM representatives contribute insights into data, workflows, and metadata practices, test tools, and participate in working groups. For example, FedOSC supports the revision of federal Open Science policies together with the Belgian Science Policy Office (BELSPO), reflecting evolving research habits and issues such as artificial intelligence, data security, and new publication models. FedOSC also aims to assist FSIs in developing or refining FAIR and Open Science roadmaps aligned with the EOSC vision and European data spaces. The poster also reflects on recurring challenges and the opportunities they open. Integrating GLAM data and workflows into a shared Open Science landscape raises questions of semantic heterogeneity across domain standards, and requires balancing openness with conservation ethics, copyright restrictions, and cultural sensitivity. Limited budget, time and staff resources dedicated to data management continue to slow implementation. At the same time, this work encourages reflection on what counts as research data, and on how contextual, interpretive, and historical information should be preserved to enable reuse.
FedOSC aims to reconcile Open Science and FAIR data requirements with equitable, ethical and inclusive access to knowledge and cultural memory, in particular by ensuring GLAM-specific representation in infrastructure discussions. By documenting needs, constraints, and forms of collaboration in a federal context, this poster contributes to defining engagement as a powerful enabler of meaningful infrastructure design and discusses how cross-domain Open Science can include, rather than sideline, the humanities and cultural heritage sector.

Beyond Archives: Designing a Segmentation and ATR Workflow for Mid-Twentieth-Century Typescripts in a Contested-Memory Case Study
University of Naples "L'Orientale", Italy

This poster presents results from Beyond Archives, focusing on layout segmentation and Automatic Text Recognition (ATR) for mid-twentieth-century Italian typescripts. Beyond Archives bridges archival science and digital humanities to examine how interfaces shape visibility and reuse in contested-memory settings (Hedstrom 2002; Gilliland 2017). Using the post-war Julian-Dalmatian exodus to Naples as a research laboratory, the project is working toward a transferable workflow for comparable collections, serving memory institutions and affected communities (McKemmish and Piggott 2013). FAIR-oriented, standards-based methods record technical decisions, so models and error profiles remain inspectable and reusable (Tomasi and Buzzetti 2012; Michetti 2023). The case study examines a polarized memorial landscape where administrative documents, testimonies, and memories intersect (Lazzarich 2021). The infrastructure must keep these divergences readable to support research, archival stewardship, and public dialogue (Hedstrom 2010). Two corpora are processed in parallel: Prefecture of Naples records (1946-1958), typescript administrative files marked by stamps and custodial handling; and semi-structured interviews with former refugees, collected under ethical protocols and transcribed with speech-to-text tools. Keeping both streams in view provides a shared basis for exploration and dialogue across institutions and communities (Tomasi 2022). The workflow follows three stages: digitization, formalization, and publication. Digitization produces high-quality images with IIIF references. Publication is planned in a TEI- and IIIF-based environment, set to be implemented with TEI Publisher, that preserves contextual layers and provenance traces. This poster focuses on formalization for the written corpus, where segmentation and ATR generate PAGE/ALTO outputs mapped into TEI-XML for structured access, with transformations documented for scrutiny (Chiffoleau 2025). Segmentation is performed in eScriptorium with the Kraken engine (Kiessling 2019; Kiessling et al. 2019). A SegmOnto-aligned vocabulary distinguishes core text from paratextual zones, including stamps, signatures, and notes (Gabay et al. 2021). Ground truth is expanded iteratively to cover heterogeneous layouts, treating these zones as evidence of custodial intervention. A second model trained on a broader pool improves line stability and the separation of stamps and signatures across internal and external test sets. ATR fine-tuning starts from CATMuS-Print (Gabay and Clérice 2024) and proceeds in staged runs on curated Prefecture pages. Metrics are interpreted as fitness-for-purpose indicators, mindful of small ground-truth samples (Bubula et al. 2025).
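For readers unfamiliar with the metrics reported next: CER and WER are conventionally computed as Levenshtein edit distance normalised by reference length, at character and word level respectively. A minimal sketch (not the project's evaluation code):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edits divided by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def wer(reference, hypothesis):
    """Word error rate: same idea, computed over word tokens."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / max(len(ref_words), 1)

# One missing character out of twenty -> CER of 0.05
print(cer("prefettura di napoli", "prefetura di napoli"))
```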
On official test splits, CER drops from 9-11% to 3-5% and WER from 32-37% to 13-18%, reaching 96-97% character accuracy. In practice, this shifts effort from full transcription to targeted correction, enabling earlier TEI structuring and entity-oriented access while remaining explicit about residual noise. For reuse and accountability, Beyond Archives treats ground truth and paradata as research outputs. Its purpose is not preservation alone, but to make complexity visible and open to critical negotiation across institutional records and lived testimony. Once benchmarks stabilize, trained segmentation and ATR models, together with curated training artifacts, will be deposited on Zenodo. The release is designed to serve a dual aim: to share trained models that can be applied and fine-tuned on comparable corpora, and to provide a documented workflow and three-stage pipeline that others can adapt to similar contested-memory collections while keeping their tensions legible over time.

Benefits of Research Data Repositories: Showcasing the Parallel Bible Corpus and the TextGrid Repository
1Georg-August-University Göttingen, SUB Göttingen; 2Information Commissioner's Office

This poster shows the benefits of integrating resources previously available in GitHub repositories into research data repositories. More specifically, we show the different ways in which the quality of the data of the Multilingual Parallel Bible Corpus could be improved through its integration into the TextGrid Repository (TGR). The corpus contains biblical texts spanning over 100 languages, originally encoded as XML files in the Corpus Encoding Standard (CES) (Christodouloupoulos & Steedman 2015). The project aimed to create a corpus aligned at the verse level for comparing methods in highly multilingual contexts, including those involving low-resource languages. In 2025, the resource was integrated into the TGR as part of the portfolio of Text+, the consortium for text- and language-based research data within the German National Research Data Infrastructure (NFDI). The TGR aims to integrate existing resources, improving the quality of the data in line with the FAIR principles and providing long-term archiving. As a TEI-specific repository, the TGR has been equipped with new features, such as project-specific options, the use of library classification and authority file systems (Calvo Tello et al. 2023), a Python library for accessing data (Hynek et al. 2024), and a new publication workflow (Veentjer et al. 2025). Other projects with XML-TEI data can benefit similarly from these features when publishing in the TGR. During its integration, several aspects of the data were updated following the FAIR principles (Wilkinson et al. 2016); this proposal highlights only a few. Regarding the text, the original CES files have been re-encoded in TEI. Regarding the metadata, some information has been enriched:
Thus, this corpus is now one of those that adhere most closely to the FAIR principles. The Bible offers a unique opportunity for analysis across languages. To demonstrate the previously mentioned characteristics of the resource and the TGR, we are currently creating Jupyter Notebooks applying Named Entity Recognition algorithms in various languages. The poster will highlight different aspects: first, it will provide a quantitative overview of the corpus, presenting data on measures such as documents, words, languages, works, and authors. Secondly, a visual representation of the connections across different levels will demonstrate the various types of resources to which the corpus is currently linked. Thirdly, the benefits of hosting the corpus in the TGR will be displayed, with several QR codes linking to different functionalities. Finally, the poster will give access to and show the main results of the above-mentioned Jupyter Notebooks. Furthermore, this contribution serves as an invitation for other projects with TEI files to import their data into the TGR and enjoy similar advantages.

CultIS: Sustainable and Inclusive Platform for Intelligent Management and Presentation of Large Datasets in Culture, Humanities and Social Sciences
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria

CultIS (https://cultis.math.bas.bg/en) is a web-based open-source platform for intelligent management of digital content, conceived as an integrated environment for data storage, retrieval, and curation in culture, humanities and social sciences research. CultIS implements a modular, service-oriented architecture that supports heterogeneous digital cultural assets through a unified data model. It provides advanced mechanisms for persistent and sustainable storage, dynamic management of structured metadata, and semantic organization of digital objects and collections. The platform is developed by the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, as an infrastructure component of CLaDA-BG, the Bulgarian National Interdisciplinary Research e-Infrastructure for Resources and Technologies in favour of the Bulgarian Language and Cultural Heritage, part of the EU infrastructures CLARIN and DARIAH (2018-2027). The mission of CLaDA-BG is to create a national infrastructure of resources and technologies for linguistic and cultural heritage. The platform provides a set of functionalities for interactive virtual presentation, multidimensional and full-text search, filtering, cross-sectional dynamic grouping, and hierarchical and multilayer content indexing based on controlled vocabularies and dictionary-driven classification schemes. CultIS supports the ingestion, management, and long-term preservation of diverse digitized cultural heritage resources (incl. textual, graphical, audiovisual, three-dimensional, and other complex media objects) together with their associated descriptive, structural, and administrative metadata. The innovative strategy of CultIS focuses on achieving a high degree of configurability, extensibility, and inclusiveness of the technological solutions through sustainable development. The platform allows the building of multilingual and application-specific user interfaces and visualizations, as well as the integration of additional functional modules according to domain and user requirements. Interoperability is supported through adaptable interfaces to different external systems using standard formats.
CultIS is implemented using established open-source software technologies, ensuring scalability, maintainability, and adaptability to evolving research and application contexts. The platform has been deployed, or is in the process of being deployed, in a number of Bulgarian cultural and national institutions. CultIS serves three of the largest libraries in Bulgaria (Public Library "Ivan Vazov" - Plovdiv, Sofia University Library, and the Central Library of the Bulgarian Academy of Sciences), providing open access to over 200,000 digital cultural objects. The wide range of applications of CultIS, and their significant social impact, is confirmed by the implementation and maintenance of digital archives for state institutions such as the National Statistical Institute of the Republic of Bulgaria and the Supreme Court of Cassation.

CLARIAH-CM Madrid-Node: A Regional Laboratory for Digital Humanities and Open Science
Universidad Complutense de Madrid, Spain; Universidad Rey Juan Carlos, Spain

CLARIAH-CM Madrid-Node is a research infrastructure formed by the six public universities of the Community of Madrid (Spain): Universidad de Alcalá, Universidad Autónoma, Universidad Carlos III, Universidad Complutense, Universidad Politécnica, and Universidad Rey Juan Carlos. Universidad Complutense coordinates the Madrid-Node within CLARIAH-ES, which represents Spain in the European research infrastructure consortia CLARIN-ERIC and DARIAH-EU. In the spirit of the DARIAH Annual Event 2026, this poster presents the CLARIAH-CM Madrid-Node as a model of sustained collaboration across academia, memory institutions, and society. Our main contributions include:
1.- A network of coordinators representing each participating university of the Madrid-Node, sharing ideas, supporting projects, developing services, and aligning regional initiatives with the European infrastructures (weekly newsletters).
2.- A five-year Community of Madrid funding programme that secures staff, activities, and long-term sustainability.
3.- Seven shared strategic objectives: 1) advising researchers on European infrastructures and DH; 2) supporting scientific and technological infrastructures; 3) providing training in DH, NLP, and AI; 4) developing computational methods for analysis; 5) promoting multilingualism, interoperability, and Open Science practices; 6) bridging fragmentation among research communities; 7) supporting researchers in applying to funding calls.
4.- Academic collaboration through training and outreach, with a rich and diverse portfolio including:
4.a. General training materials on data workflows for five-star Open Data and on using open infrastructures within the SSHOM as a virtual research environment.
4.b. Specific materials with infographics and videos on DH, DARIAH resources, and LOD.
4.c. Training materials on digital tools.
4.d. Structured training cycles on DH, future challenges, digital methodologies, and historical press analysis (forthcoming).
4.e. Hands-on workshops on DH, LT, AI and OS.
4.f. A dedicated Zenodo community for dissemination and reuse.
5.- Collaboration with libraries as core memory institutions.
Projects and initiatives have been presented at the Biblioteca Regional Joaquín Leguina de Madrid (International Women's Day); the Biblioteca Nacional de España (DARIAH Day 2023 and 2024); and the Libraries Lab at Instituto Cervantes (Data Open Access, 2025), demonstrating the Node's commitment to Open Access and the FAIR principles.
6.- Engagement with society:
6.1. Participation in Madrid Science Week, with public-oriented activities in 2024 on "What are Digital Humanities? Are they for me?" and in 2025 on "Open up to Europe: learn how to use Zenodo".
6.2. Summer Courses in El Escorial on "Social Sciences and Digital Humanities in Open Science: The Power of Managing Your Data" (2024) and "New Approaches to Global Communication in the Age of AI: Translation, Creativity, and Specialised Languages" (2025, forthcoming).
The CLARIAH-CM Madrid-Node combines stable funding, coordinated leadership, and a strong culture of collaboration with scholars, libraries and citizens. Having actively supported previous DARIAH Annual Events in Lisbon (2024) and Göttingen (2025), participation in the 2026 event in Rome would represent an opportunity for collective reflection.
Links:
0 https://www.ucm.es/clariah-cm/
1 https://www.ucm.es/clariah-cm/textos/546263
2 https://www.comunidad.madrid/notas-prensa/2023/09/01/comunidad-madrid-invierte-red-transformacion-digital-investigacion-universitaria-humanidades-ciencias-sociales
3 https://www.ucm.es/clariah-cm/textos/546264
4.a https://sshopencloud.eu/ ; https://www.ucm.es/clariah-cm/materiales-generales ; https://www.ucm.es/clariah-cm/cursos-de-doctorado
4.b https://www.ucm.es/clariah-cm/materiales_propios ; https://www.ucm.es/clariah-cm/infografias-1
4.c https://www.ucm.es/clariah-cm/i-ciclo-clariah
4.d https://www.ucm.es/clariah-cm/i-ciclo-uc3m-de-humanidades-digitales ; https://www.ucm.es/clariah-cm/i-ciclo-clariah-cm-de-formacion
4.e https://www.ucm.es/clariah-cm/i-workshop-clariah-cm-ucm
4.f https://zenodo.org/communities/nodoclariahcm/records?q=&l=list&p=1&s=10&sort=newest
5 https://www.youtube.com/playlist?list=PLW0WD9bd0tTQLEQIgGEWElDPbN2uu1cWN ; https://www.bne.es/es/agenda/bibliotecas-datos-inteligencia-artificial-nuevas-rutas-conocimiento ; https://cervantes.org/es/bibliotecas/actividades/lab-bibliotecas
6.a https://www.ucm.es/clariah-cm/noticias/72530 ; https://www.ucm.es/clariah-cm/noticias/77869
6.b https://www.ucm.es/clariah-cm/noticias/74600
7 https://www.ucm.es/clariah-cm/noticias/72525

The DNB Knowledge Graph as a Bridge into the National Research Data Infrastructure
1German National Library; 2Göttingen State and University Library, Germany

The German National Library (DNB) operates the DNB SPARQL Service, powered by the QLever SPARQL engine, providing standardized RDF-based semantic queries over national bibliographic and authority data. Fully compliant with the SPARQL 1.1 standard, the service supports both automated and manual queries through its API, with results viewable in a user interface or exportable in multiple formats. At its core, this Knowledge Graph offers a uniquely consistent representation of national bibliographic entities, dating back to 1913, and their connections to authority records in the Integrated Authority File (GND). It is openly accessible: users, from researchers to the interested public, can explore curated sets of publications linked to people, places, or keywords within selected timeframes, among many other possibilities.
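As an illustration of such an exploration, the following hedged sketch sends a standard SPARQL 1.1 protocol request. The endpoint URL and the exact properties used for titles and creators are assumptions to be checked against the DNB SPARQL Service documentation; the GND ontology namespace, however, is the published one:

```python
import requests

# Hedged sketch of a SPARQL 1.1 protocol request. The endpoint URL is a
# placeholder, and the bibliographic properties (dcterms:*) are assumptions;
# verify both against the DNB SPARQL Service documentation.
ENDPOINT = "https://sparql.dnb.de/sparql"  # placeholder endpoint

QUERY = """
PREFIX gndo:    <https://d-nb.info/standards/elementset/gnd#>
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT ?title ?issued WHERE {
  ?pub dcterms:creator ?person ;
       dcterms:title   ?title ;
       dcterms:issued  ?issued .
  ?person gndo:preferredNameForThePerson "Kafka, Franz" .
} LIMIT 10
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
resp.raise_for_status()

# Standard SPARQL JSON results layout: results -> bindings -> variable -> value
for row in resp.json()["results"]["bindings"]:
    print(row["issued"]["value"], row["title"]["value"])
```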
This openness and quality enable the DNB SPARQL Service to be integrated into KGI4NFDI (Knowledge Graph Infrastructure for the NFDI), one of the Base4NFDI basic services. KGI4NFDI supports the creation, reuse, and registry of knowledge graphs across the NFDI, providing shared documentation, guidance, and software. Its mission is to improve data integration, promote standardized ontological practices, and operationalize the FAIR principles (Wilkinson et al. 2016) across disciplines. Federation is a key feature: SPARQL endpoints can be connected so that queries span multiple KGs simultaneously, retrieving distributed data in a unified manner, without sacrificing autonomy or structure. Such infrastructures of engagement (open, inclusive, collaborative, and sustainable) ensure that valuable curated datasets do not remain isolated but become interconnected assets in a shared research ecosystem. Within the NFDI, the DNB contributes to the Text+ consortium, which aggregates, curates, and disseminates language- and text-based research data for the humanities. Here, the DNB brings in its bibliographic and authority data expertise and actively transfers knowledge related to KG technologies. Text+ also hosts a dedicated GND agency (Buddenbohm et al. 2023), run collaboratively by the DNB and Göttingen State and University Library, to support the use and expansion of authority data across disciplines. While not yet directly linked to the DNB Knowledge Graph, this initiative strengthens interoperability and offers future opportunities for integration. By linking the DNB SPARQL Service to KGI4NFDI, Text+ itself becomes connected to this NFDI-wide federated KG infrastructure, opening further opportunities for cross-domain data enrichment, interoperability, and collaboration. These interconnections also create fertile ground for co-creation, citizen science, public and participatory humanities, and community-driven scholarship, engaging not only academic institutions but also a wider public in the stewardship and creative reuse of cultural and scholarly data. This poster will illustrate how knowledge graph technology, combined with open architectural concepts, enables effective knowledge transfer across networks, enriching authoritative data for multiple domains. Live demonstrations will showcase the strengths of the DNB SPARQL Service and highlight its role as a bridge between the public, the humanities research community, and the wider NFDI infrastructure.

From Heritage to Digital Innovation: A Framework for Applying Emerging Technologies in Humanities Education
Comillas Pontifical University, Spain

This poster presents an innovative proposal for the development of a digital application for learning in the humanities, based on teaching experiences with first-year students at Universidad Pontificia Comillas. The project centers on the creation of a virtual reality (VR) environment that reconstructs the seventeenth-century Imperial College Library and Chapel in Madrid, offering students an immersive and interactive learning experience that bridges historical knowledge and technological innovation through gamification strategies and microcredential-based content. The development process begins with content design informed by the learning profiles of digital-native students, who favor visual and experiential approaches. Historical and literary sources are analyzed by humanities scholars, while engineering faculty apply advanced modeling tools to ensure technical accuracy.
VR models are developed using SketchUp, TinkerCAD, and Fusion, and disseminated via platforms such as SketchUp Viewer and Sketchfab. Complementary learning resources include video lectures produced with OBS Studio and iMovie, interactive questionnaires created with Microsoft Forms, and gamified activities designed in Genially, including escape rooms and puzzles based on the artistic features of the chapel’s dome. The application integrates problem-based learning (PBL) and challenge-based learning (CBL) methodologies, encouraging students to address real-world challenges such as improving accessibility to cultural heritage. Co-teaching between humanities and engineering faculty fosters interdisciplinary collaboration, ensuring a balance between cultural interpretation and technical precision. The design is grounded in the Jesuit Pedagogical Paradigm, promoting active learning, critical thinking, and ethical engagement. From a technical standpoint, the application complies with the UNE 71362 standard for digital educational materials, ensuring quality and alignment with defined learning objectives. The associated microcredential is delivered through the Moodle platform at Universidad Pontificia Comillas, which provides learning analytics to monitor student engagement and progress. Gamification elements—such as QR-linked puzzles and interactive VR tours—enhance motivation and deepen understanding of cultural heritage. The project aligns with European Commission and OECD guidelines for microcredentials, emphasizing transparency, relevance, and portability. These short, flexible learning units complement traditional courses and support lifelong learning, making them particularly suitable for Generation Alpha learners. By integrating digital humanities tools with immersive technologies, the application democratizes access to cultural heritage and supports the development of skills relevant to the evolving labor market. Initial pilot results indicate significant improvements in student engagement and content comprehension, although first-year students may require additional scaffolding to support autonomous learning. Future iterations will focus on refining content sequencing, expanding accessibility, and adapting the model to other educational contexts. In conclusion, this project presents a validated strategy for designing digital applications in the humanities that merge historical scholarship with advanced technologies. Through the integration of VR, gamification, and microcredentials, it offers a scalable, student-centered model that enhances learning, fosters creativity, and bridges tradition and innovation. NAIF: Responsible Research Assessment and Metadata Quality for Interoperable Research Infrastructures in Switzerland ETH Zurich, Switzerland This poster presents ongoing work within the NAIF project (National Approach for Interoperable Repositories and Findable Research Results), a swissuniversities-funded collaboration between eight Swiss higher-education institutions. NAIF addresses a key infrastructural challenge in contemporary research ecosystems: ensuring that repository metadata are sufficiently structured, interoperable, and trustworthy to support both the discovery of research outputs and the responsible use of quantitative indicators in research assessment. Institutional repositories increasingly serve as central nodes within national and international research infrastructures. 
They provide access to publications, datasets, and other scholarly outputs while simultaneously feeding metadata into discovery services, open-science infrastructures, and research-information systems. At the same time, these metadata increasingly underpin monitoring and evaluation practices in universities and funding organisations. This dual role raises important questions: how can repository metadata be curated so that they remain interoperable and reusable across infrastructures, while also providing a reliable foundation for responsible and transparent research assessment? The work presented here approaches this challenge from two complementary directions. The first concerns the responsible use of quantitative indicators in research evaluation. Building on workshops with stakeholders from Swiss higher-education institutions and the broader scientometrics community, the project investigates how principles associated with initiatives such as DORA and CoARA can be translated into operational practices within institutional infrastructures. Preliminary results highlight that indicators must be interpreted in context, that transparency about data sources and methods is essential, and that evaluation practices should prioritise aggregated monitoring over individual-level benchmarking whenever possible. Future work expands this effort through a qualitative survey of actors involved in research assessment and repository management. The survey aims to map institutional practices, expectations, and tensions surrounding the use of indicators, and to identify how repository infrastructures and metadata practices can better support responsible evaluation. The second focus concerns the enrichment and standardisation of key categories of academic metadata that connect research outputs with the broader scholarly ecosystem. Particular attention is given to four interrelated data families: organisational identifiers that disambiguate institutions; persistent researcher identifiers; structured funding information linking outputs to funding programmes; and metadata describing open-access status and publication pathways. These elements form the contextual backbone that allows research outputs to be linked reliably to people, organisations, funding streams, and dissemination models across infrastructures. Strengthening these metadata families improves the interoperability of institutional repositories and enables more reliable data exchange with national and international infrastructures. At the same time, improved metadata quality enhances the transparency and interpretability of research indicators derived from these systems. By bringing together questions of metadata stewardship and responsible metrics, the poster argues that evaluation practices and repository infrastructures should not be treated as separate domains. Instead, they must be developed together as parts of a shared sociotechnical infrastructure that supports trustworthy knowledge production, equitable research evaluation, and the long-term visibility of scholarly outputs.
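To make the four metadata families described above concrete, here is a minimal Python sketch of what an enriched repository record might look like. All field names and identifier values are hypothetical illustrations, not NAIF's actual data model.

```python
# Minimal sketch of a repository record enriched with the four interrelated
# data families discussed above. Field names and values are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EnrichedRecord:
    title: str
    ror_id: str               # organisational identifier (ROR) disambiguating the institution
    orcid_ids: list[str]      # persistent researcher identifiers (ORCID)
    funder_awards: dict[str, str] = field(default_factory=dict)  # funder -> grant number
    oa_status: str = "unknown"        # open-access status (e.g. gold, green, closed)
    oa_route: str | None = None       # publication pathway, if known

record = EnrichedRecord(
    title="Example dataset on Swiss repository interoperability",
    ror_id="https://ror.org/00xxxxx00",                 # hypothetical ROR ID
    orcid_ids=["https://orcid.org/0000-0000-0000-0000"],
    funder_awards={"swissuniversities": "GRANT-0000"},  # hypothetical award
    oa_status="gold",
    oa_route="diamond journal",
)
print(record)
```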
Collaborative Human-Centred Design of the GoTriple platform: connecting and engaging users for multidisciplinary research discovery system design 1Net7, Italy; 2Abertay University, Scotland Digital research platforms are increasingly expected to operate as open, inclusive, and sustainable infrastructures that enable meaningful engagement across disciplines and communities. Within this context, our work focuses on improving GoTriple [1], a multilingual discovery platform for Social Sciences and Humanities run by the OPERAS RI [2], by integrating new services co-designed with its research communities through the EU-funded LUMEN [3] and GRAPHIA [4] projects. These projects aim to extend GoTriple’s capacity for collaborative knowledge exploration by developing novel tools for visualisation, annotation, and cross-domain discovery in four scientific domains—Mathematics, Earth Systems, Molecular Dynamics, and Social Sciences & Humanities. To achieve this, we adopt a participatory, human-centred approach that positions researchers not as passive users but as partners in shaping the next generation of a multidisciplinary open research infrastructure. Understanding user needs in such a complex environment requires more than standard usability testing; it demands trust, long-term engagement, and methods that capture how diverse communities make sense of design choices. In a distributed European consortium, we therefore used hybrid online and in-person co-design workshops where participants from different disciplines shared needs and collectively envisioned future GoTriple services. Our methodology draws on human-centred design and participatory research [5] to rebalance the roles of researchers, designers, and platform users. Within the LUMEN and GRAPHIA projects, this approach supports the development of services that foster discovery, interdisciplinary exchange, and responsible AI integration. Early community involvement helps ensure that the resulting infrastructure is socially responsive, transparent, and aligned with real research practices. We operationalise this approach through a structured process composed of recruitment, communication, and translation of design artefacts. Recruitment is co-designed with partners to ensure openness and inclusivity. Snowball sampling through professional networks, flexible participation formats, and accessible sign-up workflows allow us to reach researchers across domains, career stages, and linguistic backgrounds. The GoTriple platform design is continuously updated to support transparency and evolving recruitment needs. Our communication strategy builds on insights from earlier semi-structured interviews and is iteratively refined. Research questions, personas, scenarios, workshop scripts, and design tasks are co-reviewed across all partner organisations. Pilot testing with external researchers ensures clarity, accessibility, and comfort—addressing elements such as duration of tasks, audio participation, help-seeking options, and accessibility requirements related to colour, fonts, and interface layout. Translation is essential in a project spanning diverse epistemic cultures and digital practices. Co-design workshops create a shared space to identify common challenges, explore perspectives through familiar metaphors, and generate ideas that can be integrated into GoTriple. Using human-centred design tools and a “How might we…?” approach, these workshops catalyse exploration of functionality, interfaces, and interactions, supporting sustainable platform evolution. Finally, through sharing and operationalising, we apply Thematic Analysis to produce cross-cutting insights representing the voices of participating communities [6].
These themes feed directly into prototyping activities carried out with User Experience (UX) and Graphical User Interface (GUI) specialists in LUMEN and GRAPHIA, ensuring that all new GoTriple services are conceived with inclusivity, openness, and long-term sustainability at their core. From Notation to Motion: Preserving Baroque Dance as Intangible Cultural Heritage through Computer Vision and 3D Reconstruction Sorbonne University, France Historical Baroque dance, a cornerstone of European court culture primarily spanning the late 17th and early 18th centuries, represents a unique challenge for cultural heritage preservation. Unlike static arts, dance is ephemeral. While the era left behind rich resources—specifically handwritten manuals and the complex Beauchamp-Feuillet notation—interpreting these sources requires highly specialized knowledge. The difficulty of deciphering historical handwriting combined with abstract choreographic symbols creates a significant accessibility threshold, isolating this art form from the public and endangering its transmission as Intangible Cultural Heritage (ICH). This project proposes a comprehensive Digital Humanities pipeline utilizing AI to bridge the gap between archival sources and modern visual understanding. The framework begins with Optical Character Recognition (OCR) tailored for historical handwriting, allowing for the digitization and interpretation of original dance manuals. This textual and symbolic data is then fed, together with videos, into a large model designed to detect and generate Baroque dance sequences in 3D, rendering the movements accessible for educational and archival purposes. Technically, we employ a two-stage approach for 3D human pose estimation. While end-to-end methods offer efficiency, our research prioritizes the higher accuracy inherent in two-stage approaches, which first detect 2D keypoints and subsequently lift them to 3D coordinates (see the sketch after this abstract). To mitigate the scarcity of historical dancers and motion-capture resources, we focus initially on solo dance sequences. We address the risk of overfitting through rigorous data augmentation and transfer learning techniques. A critical contribution of this research is the creation of a domain-specific dataset. Current state-of-the-art datasets for human pose estimation—such as Human3.6M, MPI-INF-3DHP, and EMDB—focus primarily on daily human actions. While specific dance databases like AIST and AIST++ exist, they lack the specific stylistic nuances, distinct postures, and ornamental gestures unique to the Baroque period. To fill this lacuna, we are collaborating with professional Baroque dancers to capture and annotate a dedicated dataset, establishing a ground truth that links historical notation to biomechanical reality. By converting static, handwritten historical records into dynamic 3D visualizations, this pipeline offers an all-encompassing view of Baroque performance. This not only lowers the barrier for public appreciation and amateur learning but also provides a scalable digital framework applicable to other forms of historical dance. Ultimately, this project demonstrates how computational methods—from OCR to 3D reconstruction—can serve as robust tools for protecting, preserving, and democratizing access to our shared, intangible cultural history.
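The following minimal PyTorch sketch illustrates the two-stage idea described above: 2D keypoints, here random stand-ins for a detector's output, are lifted to 3D by a small multilayer perceptron. It is an illustration of the general technique under simplified assumptions, not the project's actual model; the joint count and layer sizes are arbitrary.

```python
# Minimal sketch of the second stage of a two-stage 3D pose pipeline:
# lifting 2D keypoints to 3D with a small MLP. Random tensors stand in
# for a 2D detector's output; joint count and layer sizes are arbitrary.
import torch
import torch.nn as nn

class LiftingMLP(nn.Module):
    def __init__(self, num_joints: int = 17, hidden: int = 256):
        super().__init__()
        self.num_joints = num_joints
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden),   # flattened (x, y) per joint
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_joints * 3),   # (x, y, z) per joint
        )

    def forward(self, keypoints_2d: torch.Tensor) -> torch.Tensor:
        batch = keypoints_2d.shape[0]
        lifted = self.net(keypoints_2d.reshape(batch, -1))
        return lifted.reshape(batch, self.num_joints, 3)

# Stage one (a 2D keypoint detector) is represented here by random data.
keypoints_2d = torch.rand(8, 17, 2)   # batch of 8 frames, 17 joints
poses_3d = LiftingMLP()(keypoints_2d)
print(poses_3d.shape)                 # torch.Size([8, 17, 3])
```

In practice, lifting models of this kind are trained on paired 2D/3D data and often exploit temporal context across frames; the augmentation and transfer-learning strategies mentioned above would address the limited size of a Baroque-specific dataset.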
From Capture to Engagement: A Reproducible Workflow for 360° Virtual Tours and Audience Evaluation of Sacred Kerala Temple Murals Indian Institute of Technology, Jodhpur, India Workflows help in understanding how a particular project came about; they also allow similar projects to replicate and adapt it. However, Digital Humanities projects seldom publish their workflows openly, particularly in the field of digital heritage. The implicit choices in image capture, authoring, contextual modeling, and user evaluation, if recorded, can aid others invested in similar work. This paper documents, for future use, the workflow for developing a virtual tour of a cultural heritage object, namely Kerala mural paintings. Kerala mural paintings are artworks from Kerala, the southernmost state of India. For the scope of this study, a virtual tour of one temple in central Kerala, the Chemmanthatta Mahadeva Temple in Thrissur, is created, and a survey is embedded within it to learn about audience conceptions of aura, authenticity, and sacredness. The workflow pipeline comprises five documented stages: (1) ethical capture: permission was first sought from the temple's governing bodies; since the temple is under local governance and also maintained by the Archaeological Survey of India (ASI), Thrissur circle, permissions were required from both. With these in place, 360° images were captured, strictly adhering to ASI guidelines as well as temple decorum rules. (2) Contextual authoring: the captured images were enriched with contextual information and stitched together into the 360° tour. Affinity Photo 2 was used for minor image edits; the images were then imported into Articulate Storyline 360, which assembled the visuals into a virtual tour containing contextual information on the temple architecture as well as annotations for mural iconography. The temple's spatial logic was carefully retained. (3) Evaluation instrumentation: audience perceptions of aura, authenticity, and sacredness in the virtual environment were assessed by integrating a standardized survey (Likert scales for aura/presence, sacredness/reverence, and authenticity/fidelity, plus open-ended comparison prompts) directly into the end of the tour. (4) Deployment and data collection: the tour was hosted on the web with analytics tracking dwell time and paths, collecting n=100 anonymized responses. (5) Analysis and archiving: responses were exported as a CSV file for descriptive statistics, and qualitative coding was carried out in NVivo; a minimal analysis sketch follows this abstract. This workflow addresses DARIAH's call for reproducible research practices by making explicit the "hidden costs" of sacred heritage digitization: balancing visual fidelity with ritual context, navigating community consent for devotional content, and designing evaluation instruments sensitive to cultural concepts like darśan. Methodological contributions include a reusable survey template for aura/authenticity and an open Storyline 360 template. Ethical documentation covers ASI protocols, participant consent for sacred viewing, and data minimization for sensitive responses. For DH practitioners, this demonstrates "small data, rich workflow" reproducibility: a modest temple case yields generalizable methods for evaluating immersive heritage.
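As a companion to stage 5, here is a minimal Python sketch of how descriptive statistics might be computed from the exported survey CSV. The column names and the 1-5 Likert range are hypothetical placeholders, not the project's actual export format.

```python
# Minimal sketch: descriptive statistics for Likert-scale survey exports.
# Column names and the 1-5 Likert range are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-in for the exported CSV; in practice this would be
# responses = pd.read_csv("survey_responses.csv")  (hypothetical file name).
responses = pd.DataFrame(
    rng.integers(1, 6, size=(100, 3)),  # n=100 anonymized responses
    columns=["aura_presence", "sacredness_reverence", "authenticity_fidelity"],
)

# Per-item count, mean, median, and standard deviation.
summary = responses.agg(["count", "mean", "median", "std"]).T
print(summary)

# Distribution of answers per item, for reporting alongside the means.
for column in responses.columns:
    print(column, responses[column].value_counts().sort_index().to_dict())
```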
Who Decides the Digital Past: Participatory Appraisal in Digital Archives National Archives of the Czech Republic, Czech Republic (Czechia) Digitisation, artificial intelligence and born-digital records are inconspicuously transforming traditional archival practices. Classical archival theory places strong emphasis on expert appraisal of records, grounded in the analysis of their creator, provenance, and the context of acquisition. These principles remain fundamental, but the digital environment brings both new challenges and opportunities in terms of scale and technological approaches. At the same time, it raises critical questions concerning the transparency and societal relevance of appraisal and selection processes. Might new avenues emerge for broad user communities to participate in decisions about what is preserved for the future? The National Digital Archive of the Czech Republic acts as the guarantor of archival appraisal and selection in the Czech Republic, providing both methodological guidance and technical tools that support the long-term preservation of digital, digitised, and digitally managed records of many kinds. These practices also offer a platform for examining how established archival principles are adapted within contemporary digital infrastructures. The paper will focus on the appraisal and selection of archival materials in the digital age as an important, yet often insufficiently examined, part of (not only) digital archival infrastructures, including by archivists themselves. It will argue that decisions about what should be preserved, how selection criteria are formulated, and how appraisal outcomes are communicated are a fundamental form of historical, cultural and social mediation rather than a purely technical or administrative procedure. Selection criteria and processes actively shape historical narratives and influence what is represented in the digital archival record. How, then, can archives ensure the transparency of the process, and what measures could and should be taken? Drawing on a case study of a digital archive and related institutional archival practices, the paper analyses traditional appraisal principles and criteria for long-term preservation in the digital environment. Particular attention is paid to mechanisms through which the expectations and needs of user communities are identified and taken into account. The paper further explores possibilities for involving the public and researchers in appraisal and selection processes, for example through participatory proposals, curated or thematic digital collections, and forms of mediated co-creation that respect professional archival standards. The paper will analyze archival appraisal and selection as a possible “infrastructure of engagement” that connects archival expertise with research and public expectations. Digital archives are not only repositories of information but also spaces of trust and shared responsibility for historical and cultural memory. The archive is thus reaffirmed as a trusted institution providing reliable and “safe” sources, while simultaneously adapting to expectations.
The aim of the paper is to contribute to debates on how digital archives can develop transparent, participatory, and sustainable models of appraisal and selection, thereby demonstrating the long-term relevance of digital archival infrastructures. From invisible to accessible: creating a database of censored literature University of Gothenburg, Sweden The Dawit Isaak Database of Censorship (DIDOC) is a collaborative digital humanities initiative designed to document and analyse censorship of literary and journalistic works. Developed by the Dawit Isaak Library (DIL) in Malmö together with the Gothenburg Research Infrastructure for Digital Humanities (GRIDH), Swedish PEN, and researchers at Lund University, the project brings together cultural heritage expertise, freedom-of-expression advocacy, and digital infrastructure to create a foundational resource for comparative censorship research. The project originated in 2023 through a collaboration between DIL and literary scholars at Lund University, later joined by Swedish PEN, whose involvement in Swedish Banned Books Week and advocacy for persecuted writers brought essential political and legal expertise. In 2024, DIL also approached GRIDH due to its experience developing Queerlit, a bibliographic database based on linked data. Together, the partners initiated a pilot to explore how censorship could be systematically documented, ethically represented, and made publicly accessible. DIDOC is developed in Omeka-S, an open-source platform well suited for semantically structured digital collections and linked open data (LOD). The choice of Omeka-S was driven by the need for a robust, extensible system that could be self-hosted, support multilingual metadata, and allow fine-grained access control given the sensitivity of censorship data. GRIDH funded and implemented the pilot as part of its strategic work on “Responsible DH,” enabling rapid prototyping through Omeka-S’s modular architecture and intuitive data curation interface. The DIDOC data model is built around six entity types—work, event, concept, person, organization, place—allowing bans to be modelled as events in the life of a work. Each event records the type of censorship, stated reason, place, date, source, and certainty, enabling the annotation of statements such as “This book was removed from schools in 2021 due to political content.” By using established LOD vocabularies such as Dublin Core, FOAF, and SKOS, DIDOC ensures semantic interoperability and supports future integration with external datasets, including the now-defunct Norwegian Beacon for Freedom of Expression, which documents nearly 50,000 censored works. The pilot interface, released in October 2024 during Banned Books Week, included 160 curated titles from DIL’s collection of 1,200 censored works (now expanded to 271). It offers faceted exploration of censorship events, contextual editorial material, and a unified search across metadata and narratives of suppression. Over its first 11 months, the site received more than 3,000 visitors, indicating public and scholarly interest in the material. A central challenge of DIDOC is ethical navigation. Not all censored authors can be safely documented, and censorship data is often incomplete, contested, or politically sensitive. Through collaboration with Swedish PEN, the project has established workflows for reviewing and withholding records when documentation may endanger authors or reproduce harm. These tensions—between transparency and protection, documentation and silence—shape the design and interpretation of the database. DIDOC demonstrates how digital humanities infrastructures can illuminate patterns of repression while foregrounding the ethical responsibilities of documenting censorship. By combining linked data technologies, interdisciplinary collaboration, and careful metadata practices, the project lays the groundwork for a comprehensive, multilingual, and globally relevant research resource.
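As an illustration of the event-centred model described above, the following minimal Python sketch builds one censorship event as linked data with rdflib. The "didoc" namespace, the chosen properties, and all identifiers are hypothetical illustrations, not DIDOC's actual vocabulary mapping.

```python
# Minimal sketch: modelling a ban as an event in the life of a work, using
# rdflib with Dublin Core and SKOS terms. The "didoc" namespace, property
# names, and identifiers below are hypothetical illustrations.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, SKOS, XSD

DIDOC = Namespace("https://example.org/didoc/")  # hypothetical namespace

g = Graph()
g.bind("didoc", DIDOC)
g.bind("dcterms", DCTERMS)
g.bind("skos", SKOS)

work = DIDOC["work/example-novel"]
event = DIDOC["event/ban-2021-001"]
reason = DIDOC["concept/political-content"]

g.add((work, RDF.type, DIDOC.Work))
g.add((work, DCTERMS.title, Literal("An Example Novel")))

g.add((reason, RDF.type, SKOS.Concept))
g.add((reason, SKOS.prefLabel, Literal("political content", lang="en")))

g.add((event, RDF.type, DIDOC.CensorshipEvent))
g.add((event, DIDOC.affectsWork, work))
g.add((event, DIDOC.censorshipType, Literal("removal from schools")))
g.add((event, DIDOC.statedReason, reason))
g.add((event, DCTERMS.date, Literal("2021", datatype=XSD.gYear)))
g.add((event, DIDOC.certainty, Literal("reported")))  # source-certainty note

print(g.serialize(format="turtle"))
```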
AVOBMAT: Accessible Advanced Text and Metadata Analysis at Scale for Collaborative Research and Digital Humanities Teaching University of Szeged, Hungary The aim of this paper is to introduce the rationale, history, and challenges behind the design and creation of AVOBMAT (Analysis and Visualization of Bibliographic Metadata and Texts), a multilingual text-mining tool developed at the University of Szeged and hosted on the infrastructure of the Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG). AVOBMAT is available at avobmat.hu and has been accessed from 40 countries since its launch in October 2025. AVOBMAT is a multilingual text-mining service created in close collaboration with researchers, including members of the DARIAH Bibliographical Data Working Group. It empowers scholars, educators, and students to explore large collections of textual and bibliographic data without requiring programming skills or costly hardware. Built on an extensible, scalable, modular cloud-based infrastructure, AVOBMAT supports a transparent and reproducible research process through a wide range of analytical tools, enabling socially responsive scholarship grounded in verifiable results. It enables researchers to identify hidden connections, enrich texts and metadata, and collaborate by sharing private databases. With support for 25 languages and customizable features, AVOBMAT makes advanced text analysis accessible and inclusive, allowing researchers to focus on critical interpretation and discovery. AVOBMAT’s public, participatory infrastructure offers an inclusive model of collaboration among students, researchers, and GLAM institutions by enabling them to publish digital collections and, where appropriate, to make private databases public after the conclusion of research projects. AVOBMAT also supports pedagogies of engagement in digital humanities education: teachers and students can use a browser-based platform with comprehensive documentation (interface overview, step-by-step workflows, configuration settings, glossary, appendices) to support seminar-based teaching. It further supports DH teaching by providing more than 30 ready-to-use literary and historical corpora in 16 languages. By lowering technical barriers, AVOBMAT broadens participation in critical text and metadata analysis: non-coding users can explore databases interactively using diverse analytical tools and customizable features. Research groups and labs can coordinate shared databases and methods across projects to improve reproducibility. Libraries can publish curated collections for in-depth exploration. Data Papers, Data Stories, and Co. as ‘Participatory Science’ 1University of Applied Sciences Potsdam, Germany; 2King’s College London; 3National Research Council of Italy (CNR); 4Digital Repository of Ireland; 5DARIAH Ireland; 6University of Hamburg Archives The DARIAH WG RDM is a place where researchers, RDM and GLAM professionals have come together since 2020.
On the basis of this interdisciplinary and international collaboration, we reflect in the panel on the role, formats and workflows of data publications in fostering participatory science. We will integrate moments of exchange with the audience into the panel to keep the session as interactive and engaging as the available time and structure allow. After the following four short papers, the session will open into an extended discussion phase: Andrea Farina: Data Papers as Catalysts for Participatory Humanities Research. Insights from the Journal of Open Humanities Data This contribution examines how data papers can act as a bridge between academic and non-academic communities, drawing on a quantitative analysis of authorship and submission patterns within the Journal of Open Humanities Data (JOHD). Building on JOHD’s metadata, I will investigate trends in author backgrounds to assess how different communities participate in Humanities data publication. Particular attention will be given to thematic clusters and special collections, allowing us to explore correlations between topic areas, author provenance, and authorship roles (e.g., single or multi-authorship). Building on these empirical findings, I will reflect on how JOHD’s workflows – structured templates, reuse-oriented metadata, and low-threshold submission pathways – can foster a “virtuous circle” of participation. Rather than merely documenting datasets, data papers can help cultivate communities of practice and support contributors outside academia in publishing, preserving, and reusing Humanities data. Ulrike Wuttke: Innovations in Science Communication related to data publications to invite and accommodate participation beyond academia The development of new publication formats related to research data, such as the data stories pioneered within the German National Research Data Infrastructure (NFDI), offers promising pathways to foster participatory science. These new formats lower barriers to understanding and engagement, enabling citizens, practitioners and policymakers to contribute insights, validate findings, and co-create knowledge. They thus facilitate transparency, inclusivity, and collaboration, important aspects of participatory science’s creed of extending the scientific process to diverse societal actors. Such innovative publication formats might enable participatory science by lowering entry barriers for non-academic actors. These innovations in science communication offer communicative interfaces instead of static records (datasets, PDFs), transforming publications from endpoints into participatory platforms. Using concrete examples of innovations in data publications, from data papers such as those of the ZfDG to data stories, incentives (democratization of knowledge, visibility) and obstacles (limited data literacy, the academic reward system) will be examined from the perspective of the SeDOA Innovation Lab, with a view to strengthening and diversifying Diamond Open Access publishing. Alessia Spadi: Editorial workflows for open, accessible, and community-driven research sharing The present discussion highlights two intertwined aspects of scientific communication: how the academic community can share its work in more accessible formats for broader audiences, and how citizen scientists can, in turn, disseminate their findings within a moderated, scientifically oriented environment.
This contribution introduces OpenMethods as an environment to foster the reuse of research data, offering new pathways for participation beyond traditional scholarly publishing. OpenMethods is a metablog dedicated to Digital Humanities methods and tools that highlights already published open access content to make it more discoverable for both scholars and non-specialists. Its mission is not to host original research, but to circulate methodological knowledge and support the community in identifying valuable resources scattered across diverse platforms, languages, and formats. A core feature of OpenMethods is its collaborative editorial workflow, which relies on volunteer contributors and an expert Editorial Team. Beth Knazook and Joan Murphy: Engagement for Sustainability. The Research Sustainability Workflow for the Humanities The Research Sustainability Workflow for the Humanities considers engagement a critical factor in the sustainability of data and other research outputs. It emphasises the importance of developing partnerships to support research dissemination and engagement, highlighting the value of repositories, publicly funded institutions and communities as stewards of data with continuing cultural and social value. This contribution situates participatory science as an enabler of long-term preservation and data reuse and aims to help Humanities researchers think through stewardship in terms of engagement timelines and audience needs. By thinking of repositories as infrastructures of engagement rather than simply storage, researchers improve the likelihood of their data remaining online, active, and reusable over time. Alongside practical recommendations, this presentation will also help researchers understand and apply appraisal techniques and archival thinking to their data projects, ensuring that diverse outputs can be sustained for as long as necessary. Digital Humanities in Hungary: From Early Experiments to Transnational Collaboration 1Eötvös Loránd University, Hungary; 2ELTE Research Centre for the Humanities, Hungary; 3University of Debrecen, Hungary; 4Université Côte d'Azur, France Digital Humanities has rapidly evolved in Hungary from early experiments to a robust field supported by universities and national research institutions. This panel brings together four scholars to discuss the emergence, institutionalization, and community building of DH in Hungary, highlighting historical foundations alongside recent developments. Their combined perspectives reflect on how digital scholarship in Hungary has built infrastructures not only for academic innovation, but also for public engagement, collaborative knowledge creation, and wider societal relevance. Institutionalizing DH at ELTE (Eötvös Loránd University): As Hungary’s oldest continuously operating university, ELTE has played a key role in pioneering DH locally. A Centre for Digital Humanities was established at ELTE’s Faculty of Humanities in 2017, and by 2020 a dedicated Department of Digital Humanities was in place under the leadership of Dr. Gábor Palkó. Dr. Palkó – a literary scholar and digital humanist – now heads this department and directs the National Laboratory for Digital Heritage (DH-LAB). His contribution will detail how ELTE integrated DH into its curriculum and research, and how the inter-institutional DH-LAB initiative is advancing new methodologies for processing and preserving cultural heritage. Dr. Palkó also leads the research network initiating the Hungarian DARIAH membership.
These efforts exemplify how national infrastructures can serve as platforms for public-facing humanities research, linking memory institutions, libraries, and universities in shared stewardship of cultural data. In his presentation, he will discuss opportunities for community building made possible by Hungary's future DARIAH membership, and how transnational collaboration and digital networks can support open and socially responsive forms of scholarship. DH in the Research Centre for the Humanities (ELTE): Dr. Zsófia Fellegi will discuss DH efforts within ELTE-RCH’s Institute of Literary Studies, where she leads digital philology projects. The institute’s DigiPhil group has been at the forefront of developing born-digital critical editions and semantic text annotations, ensuring that literary works and archives are accessible in digital form. These efforts align with international open science practices, implementing FAIR data principles and collaborative virtual research environments. They also demonstrate how digital editorial work can act as a form of public engagement and cultural memory work. Notably, ELTE-RCH researchers have partnered in DH-LAB and earned recognition for innovative projects – for example, an AI-based system for processing manuscripts of 19th-century poet János Arany, which was awarded a national Social Innovation Prize. Dr. Fellegi’s contribution highlights how Hungary’s research infrastructure is supporting humanities scholars with tools for textual analysis, long-term preservation, and public access to cultural knowledge. Origins of DH at the University of Debrecen: Dr. István Szekrényes will provide a historical account of how digital humanities took root at the University of Debrecen, one of Hungary’s major regional universities. He will highlight foundational efforts led by Dr. László Hunyadi, including early projects in computational linguistics, digital text editing, and humanities computing. These initiatives not only catalyzed interdisciplinary collaborations among technologists, philologists, and librarians, but also laid the groundwork for a model of socially connected scholarship that remains relevant today. Debrecen was home to the first center for the Digital Humanities in Hungary, focusing on computational linguistics and digital editing, and also hosted the country’s first international conference on computational linguistics. These developments culminated in the formal integration of DH into the university’s academic structures. Dr. Szekrényes’s talk will shed light on how these local innovations helped establish DH as a recognized discipline and contributed to the national evolution of the field. Iván Horváth’s Legacy and DH in Szeged and Budapest: Finally, Dr. Levente Seláf’s contribution will connect the past to the present by focusing on the works and influence of Iván Horváth and their relevance to Digital Humanities. Iván Horváth was one of Hungary’s earliest adopters of computing in the humanities. He was among the first to recognize and apply the possibilities of informatics and the internet in literary scholarship, playing a defining role in digital literary studies from as early as the 1970s. Horváth’s pioneering projects included creating online critical editions and computational analyses of Hungarian texts, long before “digital humanities” became a common label. His work anticipated a participatory approach to scholarship, where digital editions were not just research outputs but tools for public engagement, reuse, and education. Dr.
Seláf will discuss how Horváth’s work – such as his data-driven approaches to studying poetry and historical literature – laid the groundwork for DH endeavors in Szeged and beyond, particularly those envisioning cultural data as a shared resource and fostering inclusive engagement with literary heritage. Signals from the Field: A Survey of Digital Practices and Needs in Sweden Linnaeus University, Sweden This poster presents findings from a nationwide survey mapping digital practices and needs among researchers in the humanities and social sciences, as well as professionals in cultural heritage institutions in Sweden. The survey was conducted in 2025 by the Swedish national research infrastructure Huminfra, in coordination with DARIAH-SE, to inform Huminfra’s strategic development and to contribute to the broader European discussion on digital research practices and needs. The survey builds on 208 valid responses from 26 Swedish higher education institutions and a range of museums, archives, and libraries. The study addresses three core questions: (1) how digital tools and resources are currently used; (2) what training needs researchers and professionals identify; and (3) how digital tools, resources, and training opportunities are discovered, including awareness of infrastructures such as Huminfra and DARIAH-SE. The survey design aligns with the Digital Methods and Practices Observatory (DiMPO) survey developed by DARIAH-EU, enabling international comparison. Findings show that digital tools and resources are widely embedded in everyday research practice. A majority of respondents use digital tools beyond basic applications, most commonly for data analysis, data collection, and data processing. Textual data dominate, though many respondents also work with images, metadata, numerical, and audiovisual data. Despite this widespread engagement, digital competence is uneven: approximately 40% of respondents rate their proficiency with digital tools as below adequate, with particularly high proportions among humanities researchers and early-career scholars. Training emerges as a critical unmet need. Most respondents report having received no formal training in either digital tools or digital resources, while expressing strong interest in hands-on training. The survey also highlights a pronounced visibility gap. Awareness of tools, resources, and training opportunities provided through Huminfra and DARIAH-SE is very low, despite strong alignment between respondents’ needs and the infrastructures’ services. Researchers primarily discover digital tools and methods through informal channels, such as colleagues and conferences, rather than through institutional or infrastructural communication. Overall, the findings depict a research community in which digital work is normalized but unevenly supported. The results point to integration, not adoption, as the central challenge for digital research infrastructures: integrating competence development into research careers, providing clearer guidance in a fragmented tool landscape, and embedding infrastructures more firmly into everyday research practices.