Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
Paper session 4C: Alternation - "Technological applications"
Time:
Thursday, 04/Sept/2025:
10:25am - 11:55am

Location: Room 2420

2nd Floor (lake side)
Session Topics:
The potential of the different forms of alternation in VET/PET

Presentations

From 2D to 3D: Augmented and virtual reality to enhance technical drawing learning

Alberto CATTANEO1, Vito Candido1, Chiara Locatelli2

1Swiss Federal University for Vocational Education and Training, Switzerland; 2Scuola Universitaria Professionale della Svizzera Italiana

Introduction

Interpreting a technical drawing is a common task in many professions. However, it does not always prove to be an easy one for apprentices: it requires transposing a two-dimensional drawing into a three-dimensional representation, which calls for good visual-spatial skills. Facilitating this transposition is one affordance offered by immersive technologies such as virtual reality (VR; e.g., Renganayagalu et al., 2021) and augmented reality (AR; Candido et al., 2023; Papakostas et al., 2021; Rocha et al., 2024). Nevertheless, prolonged interventions in initial vocational training are still lacking. The same holds true for interventions combining 2D and 3D visualisations rather than exploiting only one of the two options (Piri & Cagiltay, 2024), and for studies directly comparing the two technologies with each other, although some aggregate data is now available (e.g., Di & Zheng, 2022).

We developed an application, available for both AR and VR, that allows learners to practice 2D-to-3D transposition. We aimed to test (a) the extent to which these technologies support the learning of technical drawing and, more generally, the development of mental rotation skills, and (b) the extent to which AR and VR differ in supporting this learning.

Methods

The participants are N = 29 apprentice heating and plumbing installers (mean age = 18.0, SD = 3.15), who often have to build pipes on construction sites. They were recruited in their first year of training to minimize prior knowledge of technical drawing.

The application allows students to position three types of pipes (T-joint, 90° elbow, and straight pipe) using their hands, to replicate a given technical drawing. Two sets of four views (top, front, side, and isometric) are displayed: blue-bordered views show the target composition, while corresponding, green-bordered views update in real time to reflect the student's construction. The app adjusts support by modifying the number of displayed views and limiting validation attempts to encourage independent problem-solving.

Each participant was randomly assigned to one of two conditions: in the AR condition (N=15) they used a Microsoft HoloLens 2 device, in the VR condition (N=14) a Meta Quest 3. For four weeks, during one lesson per week, the participants performed four different tasks of increasing difficulty, each comprising three exercises. For each exercise, the trainees had five minutes. One week before starting the AR/VR experience and one week after finishing it, a mental rotation test and a learning test on technical drawing—the latter constructed in cooperation with the vocational school teachers—were administered. Each exercise was also scored using a system agreed with the teachers. In addition, a set of psychological variables was assessed with validated questionnaires at the end of each exercise; their relationship with performance will be presented at the conference.

As the design is a 2 (group) × 4 (time) design with the latter factor repeated within participants, we analysed the data using linear mixed models and the GAMLj module in Jamovi.
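The abstract names GAMLj in Jamovi as the analysis tool; as a minimal sketch of an equivalent random-intercept mixed model, here is a Python version using statsmodels. All data, variable names, and effect sizes below are invented for illustration, not the study's dataset.

```python
# Sketch of a 2 (group) x 4 (time) linear mixed model with a random
# intercept per participant, analogous to the GAMLj/Jamovi analysis.
# Simulated data: scores improve over time in both groups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_group, n_times = 15, 4
rows = []
for pid in range(2 * n_per_group):
    group = "AR" if pid < n_per_group else "VR"
    subj_intercept = rng.normal(0, 1)  # participant-level random intercept
    for t in range(n_times):
        score = 10 + 1.5 * t + subj_intercept + rng.normal(0, 1)
        rows.append({"id": pid, "group": group, "time": t, "score": score})
df = pd.DataFrame(rows)

# Fixed effects for group, time, and their interaction;
# random intercept grouped by participant id.
model = smf.mixedlm("score ~ group * time", df, groups=df["id"]).fit()
print(model.summary())
```

With this simulated improvement over time, the fixed effect of `time` comes out clearly positive, mirroring the kind of time effect the study reports.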

Results

In all three measures (mental rotation test, learning test, four-session exercises) the linear model was significant (always p < .001) and explained a substantial portion of variance (R² respectively of .685, .745, and .652); a significant effect of time was found for both conditions (always p < .001). Neither the main effect of condition—although AR showed slightly higher mean scores than VR in the first test—nor the interaction effect was significant.

The study contributes to the advancement of theory and practice, giving indications both on the effectiveness of immersive technologies for fostering visual-spatial skills in vocational education and on the feasibility of integrating these technologies into the curriculum, during classroom activities.



Evaluating the Modality and Redundancy Principles in AR for VET: From a Simple Procedure to Real-World Applications

Vito CANDIDO, Alberto CATTANEO

Scuola Universitaria Federale per la Formazione Professionale (SUFFP Lugano), Switzerland

Introduction

Using augmented reality (AR) can enhance learning outcomes (Cao & Yu, 2023; Yu, 2023). Two theoretical frameworks guide how to optimize multimedia learning materials: The Cognitive Load Theory (Sweller et al., 2019) explains how intrinsic and extraneous load arise from working memory constraints, while the Cognitive Theory of Multimedia Learning (CTML; Mayer, 2021) offers design principles such as modality and redundancy. The modality principle states that narration enhances learning more than written text, while the redundancy principle suggests that combining spoken and written information does not necessarily impair learning. However, although these principles improve learning in non-immersive media, studies on immersive virtual reality (IVR) show contradictory or reversed effects (Albus & Seufert, 2023; Baceviciute et al., 2022), and evidence when using Head‐mounted display‐based AR (HMD‐AR)—an immersive yet reality-anchored technology solution—is missing. Moreover, the effect of AR on cognitive load (CL) remains unclear (Buchner et al., 2022).

Therefore, we examined whether the modality and redundancy CTML principles apply to HMD‐AR and what their effects on CL are. In particular, we tested the following hypotheses:

H1: We expect no differences between audio-only and text-and-audio conditions, consistent with evidence that redundancy does not improve learning when auditory information is already provided (Adesope & Nesbit, 2012).

H2: Audio-based instruction (audio-only or text-and-audio) outperforms text-only by lowering intrinsic cognitive load (ICL) and improving performance (Mayer, 2021).

Method

A between‐subjects experiment included 104 participants (mean age = 17.2, SD = 2.58) randomly assigned to one of three conditions: (1) audio‐only, (2) text‐only, or (3) text-and‐audio. They learned a T-shirt folding procedure via an HMD‐AR application employing CTML principles (segmenting, signaling, spatial/temporal contiguity, voice), with the sole variation in verbal delivery. None had prior knowledge of this procedure. After the guided learning phase, participants completed a CL questionnaire (Krieglstein et al., 2023), followed by a retention test and three transfer tasks without AR. Performance measures included success/failure (guided, retention, transfer), number of attempts, and folding accuracy.

Results

To test the redundancy hypothesis, we compared audio-only with text-and-audio using a Bayesian approach, which allows quantifying evidence for the absence of a difference (Table 1). The results suggest that adding on-screen text to audio does not yield measurable effects in HMD-AR.

Table 1. Audio-only vs. Text-and-audio

Measure                    BF₀₁    Audio     Text-and-Audio
ICL (median)               2.84    1.80      2.20
Guided success rate        3.06    91.43%    91.43%
Retention success rate     1.11    81.25%    93.75%
Transfer 1 success rate    2.15    78.13%    84.38%
Transfer 2 success rate    2.30    75.00%    75.00%
Transfer 3 success rate    1.13    53.13%    65.63%

For the modality hypothesis, we tested whether audio-based conditions (audio-only, text-and-audio) led to lower ICL and better performance than text-only. Since audio-only and text-and-audio did not differ, we combined them into an audio-based group for PLS-SEM analyses in ADANCO (v.2.4.1). Model fit was excellent (SRMR = .041, d-ULS = .094, d-G = .076). Membership in the audio-based group predicted lower ICL (β = –.21, p = .029) relative to text-only, while higher ICL predicted more attempts (β = .37, p < .001) and reduced success (β = –.50, p < .001). These findings support the modality principle.

Discussion

Results confirm no differences between the audio-only and text-and-audio conditions, in line with Adesope & Nesbit (2012). Moreover, the audio-based groups showed lower ICL and better performance than the text-only group, aligning with CTML (Mayer, 2021). Unlike some IVR findings, HMD-AR appears to follow these principles similarly to traditional media. Although this is a single study, its alignment with theory suggests that applying CTML in HMD-AR can effectively support procedural learning, providing operational guidelines for the development of HMD-AR learning materials. Future research should explore these principles in other vocational education contexts and with more difficult procedures.



Using an immersive 360° Video to Learn a Medical-Technical Procedure: A Case Study of Operating Room Technicians

Francesca AMENDUNI, Alice TELA, Alberto CATTANEO

SFUVET, Switzerland

Operating room technician trainees acquire numerous complex procedures during their three-year professional training (VPET), including the sterile dressing procedure. This procedure involves two roles: the “instrument technician”, who wears the sterile gown and gloves in preparation for surgery, and the “assistant”, who supports the former during the dressing procedure. VPET teachers report that trainees struggle with avoiding contact with non-sterile areas in the operating room, likely due to limited awareness of body movements and spatial relations.

Immersive 360-degree video technology (360°VR) offers affordances that may address these challenges. Embodiment could enhance trainees’ awareness of body movements and their relationship to sterile and non-sterile areas (Makransky & Petersen, 2021), while the panoramic view may improve visuospatial orientation (Li et al., 2024). Prior research indicates that 360°VR supports procedural skill acquisition, particularly in healthcare contexts (Blair et al., 2021) and emphasizes the importance of signaling to improve learning (Albus et al., 2021). Moreover, 360°VR can include interactive features that actively engage the learners (Violante et al., 2019).

This study investigates how a 360°VR can support trainees in acquiring sterile dressing competence. We hypothesize that, prior to viewing the 360°VR, there will be significant differences in theoretical and procedural knowledge between novices and experienced trainees. However, after exposure to the 360°VR, this gap will diminish.

Participants: Thirty trainees are divided into two groups: experienced (N=15), with over one year of clinical experience, and novices (N=15) with no prior experience.

Materials: A 360°VR demonstrating the sterile dressing procedure is enhanced with signaling to highlight sterile and non-sterile areas and spatial organization. Interactive buttons allow trainees to recall theoretical concepts and switch perspectives between the instrument technician’s first-person perspective and an external third-person perspective. A tutorial precedes the video to explain the interactive features.

Procedure: This quasi-experimental pre-post study consists of four phases:

1. Pre-test: Participants complete a quiz and an assessment test on sterile and non-sterile area recognition using immersive pictures. Novices also participate in a sterile dressing simulation.

2. Learning: Participants watch the 360°VR hypervideo twice—first passively, then with interactive features.

3. Post-test: The same assessment test is repeated.

4. Debriefing: Feedback is collected through focus group discussions.

We adopted a mixed quantitative-qualitative approach, collecting the following data:

1. Theoretical Knowledge: A 20-point quiz evaluates understanding before and after the intervention.

2. Sterile Area Recognition: Participants identify sterile and non-sterile zones in immersive pictures (maximum score: 30).

3. Sterile Dressing Performance (novices only): Performance during pre- and post-tests is scored by two expert trainers (maximum score: 10).

4. Video Experience: Focus group discussions are recorded, transcribed, and analyzed qualitatively through content analysis.

Repeated measures ANOVA will evaluate differences in theoretical knowledge and sterile area recognition between novices and experienced trainees across pre- and post-tests. A t-test will assess novice skill improvement.
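The planned novice pre/post comparison can be sketched with a paired t-test; a minimal scipy example, assuming illustrative scores rather than the study's data:

```python
# Sketch of the planned paired (pre/post) t-test for novice skill
# improvement. Scores below are invented (max 10 per the scoring scheme).
from scipy import stats

pre  = [4, 5, 3, 6, 5, 4, 5, 3, 4, 6, 5, 4, 3, 5, 4]   # pre-test scores
post = [7, 8, 6, 8, 7, 6, 8, 5, 7, 9, 8, 6, 5, 8, 7]   # post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

A positive t statistic here indicates post-test scores exceeding pre-test scores across the paired novices.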

Preliminary Results

Analyses are still ongoing. Partial data from experienced trainees (N=7) show near-perfect pre-test scores in theoretical knowledge (mean: 19.92/20) and sterile area recognition (mean: 27/30), with minimal post-intervention changes. These findings align with our expectations as well as with the trainees' advanced proficiency.

Focus group discussions revealed that participants valued the 360°VR panoramic view for improving spatial awareness. The immersive experience was praised for its realism, signaling features, and interactive elements. Repeated viewings helped trainees notice missed details, with no reports of motion sickness or fatigue. Participants recommended structuring training sequentially (starting from theory and concluding with hands-on simulation) and incorporating demonstrations of common errors in the 360°VR to consolidate learning.

So far, preliminary results suggest that 360°VR may support learning of the sterile dressing procedure. Further analysis will assess its impact on novices. Complete findings will be presented at the congress.



Using Immersive 360° Video with Office Clerk Apprentices to Elicit Physiological Stress Response When Training Customer Consultancy

Rita Cosoli1,2, Ulrike Rimmele2, Alice Tela1, Alberto Cattaneo1

1Swiss Federal University for Vocational Education and Training (SFUVET), Switzerland; 2University of Geneva

Simulation-based learning is a core component of vocational education in Switzerland, offering apprentices opportunities to develop practice-oriented competences. Apprentice office clerks engage in peer-to-peer consulting simulations; however, these exercises have limitations. For instance, simulating high-emotional impact scenarios – as in the case of coping with an angry client – is challenging. Immersive technology offers a unique opportunity to create realistic situations while maintaining a safe and controlled environment that can be repeated as needed, ensuring ecological validity (Parsons, 2015). Immersive 360° video technology in particular offers the advantage of eliciting emotional responses comparable to real-world experiences (Schöne et al., 2023).

This study investigates the potential of an immersive 360° video as a tool to elicit acute stress responses in apprentice office clerks, assessing its effectiveness through physiological and self-reported measures. Specifically, it aims to answer the following research question: Can a 360° video featuring an angry client induce higher stress levels compared to a neutral client consultation scenario? It is hypothesized that participants exposed to the stressful scenario will exhibit higher levels of stress compared to those in the neutral condition. Additionally, negative emotions are expected to be more pronounced in the stressful scenario, whereas the neutral scenario will elicit more positive emotions. Moreover, it is expected that a higher sense of presence will correlate negatively with cybersickness, consistent with prior research (Weech et al., 2019).

The study builds upon research on immersive learning environments and stress induction methods. Established paradigms for eliciting acute stress, such as the Trier Social Stress Test (TSST), have been extensively used to induce physiological stress responses in laboratory settings. The TSST, typically involving public speaking and arithmetic challenges, has also been adapted to virtual reality (Helminen et al., 2021). However, no established stress induction paradigms specifically address professional scenarios such as client consulting, which are also relevant for many other professions. This study addresses this gap by developing a domain-specific stress induction tool in collaboration with professional trainers to ensure authenticity in workplace interactions.

The study employs a randomized between-subjects design with a sample of ninety Swiss apprentice office clerks (mean age = 16.6 years, SD = 1.39). Participants are assigned to either an experimental group – which views a 360° immersive video featuring an interaction with an angry client played by a professional actor – or a control group – which watches a neutral version of the same scenario. The video simulates a client consulting situation from a first-person perspective, allowing participants to respond verbally. Using audio detection, the video dynamically adapts to the participant’s responses, ensuring coherence and enhancing realism.

Physiological stress is assessed through salivary cortisol and the POLAR H10 heart rate monitor. Self-reported measures include the Positive and Negative Affect Schedule (Terracciano et al., 2003), the Subjective Units of Distress Scale, the Visual Analogue Scale for Anxiety (Abend et al., 2014), the eXtended Reality Presence Scale (Gandolfi et al., 2021) and the cybersickness subscale from the ITC-Sense of Presence Inventory (Lessiter et al., 2001). Ethical considerations, including informed consent and debriefing, were strictly followed to safeguard participants’ psychological well-being. Data collection was conducted in January and February 2025 in collaboration with the professional association and implemented at the branch course centre for commercial professions. Data analysis is currently in progress and all the results will be presented at the congress.

This study contributes to VET research by validating an immersive tool for stress induction that could be integrated into future training programs, helping apprentices develop emotional regulation strategies for customer interactions. Moreover, by focusing on the importance of emotions, it addresses the gap in research on emotional experiences in VET (Sauli et al., 2022).



Conference: VET Congress 2025
Conference Software: ConfTool Pro 2.8.106
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany