Conference Agenda
Overview and details of the sessions of this conference.
Please note that all times are shown in the time zone of the conference (CEST).
D442: MACHINE LEARNING FOR GENERATIVE DESIGN AND DESIGN SPACE EXPLORATION
Presentations
A deep reinforcement learning approach for the multi-objective, segment-based generative design of sheet metal components
Karlsruhe Institute of Technology, Germany
Current approaches for the generative design of sheet metal parts take only singular optimization goals into account. This paper presents a concept for a deep reinforcement learning approach that trains an agent to generate sheet metal parts by combining segments from a predefined library. Through a weighted reward function, agents can be trained for different or combined optimization goals, such as weight, cost, or sustainability. The resulting agents enable the creation of a Pareto front of optimal solutions, supporting efficient exploration of the design space for diverse design objectives.

Generating vehicle designs using probabilistic programs and reinforcement learning
1: Computer Science Laboratory, SRI International, United States of America; 2: Department of Aeronautics and Astronautics, Stanford University, United States of America; 3: University of Florida, United States of America
We present FORGE (Framework for Optimization and Reinforcement-driven Generative Engineering), a probabilistic programming framework for generative design that unifies declarative, symbolic modeling and reinforcement learning (RL). FORGE can learn and refine a design generator through RL based on simulator-derived rewards. We demonstrate FORGE across several vehicle domains. FORGE creates an extensible, interpretable foundation for generative engineering: it can act both as a data generator for machine learning and as a design optimizer, offering a practical alternative to purely neural methods.

Reinforcement learning for the design of mechanisms using available bars and pins
Engineering Design and Computing Laboratory, Department of Mechanical and Process Engineering, ETH Zurich, Switzerland
This work explores reinforcement learning (RL) for the circular design of planar truss linkages using available bars and pins. A bipartite graph representation and an elementary action formulation enable agents to assemble mechanisms in a physics-based environment. Results for a force-inverter design problem show a 98.5% success rate for fixed-stock training and 66.0% for shuffled stocks. The method demonstrates RL's potential for inventory-constrained mechanism synthesis, with future work targeting scalable, indexing-invariant architectures and more flexible connection actions.

Comparison of evolutionary, reinforcement and active learning for simulation-based design space exploration
1: RPTU University Kaiserslautern-Landau, Germany; 2: University of Mannheim, Germany; 3: EIGNER engineering consult, Germany
Trade-off studies often use the design-of-experiments approach, while simulation models enable data-based product optimization by AI. This paper compares evolutionary algorithms, reinforcement learning, and active learning for design space exploration. Based on a real-world case study and hypervolume analysis, the performance of selected algorithms is assessed. The results highlight the algorithms' ability to identify Pareto fronts and provide insights that deepen the understanding of AI-driven design space exploration.
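The comparison abstract above assesses algorithms by hypervolume analysis. As a rough illustrative sketch (not the authors' implementation), the 2D hypervolume indicator for two minimized objectives can be computed by sweeping a sorted Pareto front against a reference point; all function and variable names here are hypothetical:

```python
def pareto_front(points):
    """Filter to nondominated points (both objectives minimized)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def hypervolume_2d(front, ref):
    """Area of objective space dominated by `front` and bounded by the
    reference point `ref`; both objectives are minimized, and every
    front point is assumed to dominate `ref`."""
    hv, prev_f2 = 0.0, ref[1]
    # Sorting by f1 ascending makes f2 descend along a true front,
    # so each point contributes one rectangle of dominated area.
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Example: the front {(1,3), (2,2), (3,1)} with reference (4,4)
# dominates an area of 6.0; a larger hypervolume means an algorithm's
# front covers more of the objective space.
hv = hypervolume_2d(pareto_front([(1, 3), (2, 2), (3, 1), (3, 3)]), (4, 4))
```

Comparing this single scalar across the evolutionary, reinforcement, and active-learning runs is one common way to rank multi-objective optimizers on the same problem.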

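The sheet-metal abstract trains agents through a weighted reward function over goals such as weight, cost, or sustainability. A minimal sketch of such a scalarization, with entirely hypothetical metric names and weight values (the paper's actual reward design is not specified here):

```python
def weighted_reward(metrics, weights):
    """Scalarize several per-design cost metrics into one RL reward.
    All metrics are treated as costs to minimize, so the reward is the
    negated weighted sum; the weight vector encodes the design goal."""
    return -sum(weights[k] * metrics[k] for k in weights)

# Hypothetical metrics for one generated sheet metal part.
part = {"weight": 2.0, "cost": 5.0, "co2": 1.5}

# Different weight vectors train agents for different trade-offs;
# the best designs across a sweep of weights approximate a Pareto front.
w_light = {"weight": 1.0, "cost": 0.0, "co2": 0.0}   # weight-only goal
w_mixed = {"weight": 0.5, "cost": 0.3, "co2": 0.2}   # combined goal
```

With `w_light`, the agent is rewarded purely for lighter parts; with `w_mixed`, it balances all three goals in one scalar signal.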
