Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

 
 
Session Overview
Session
D431: ADVANCING DESIGN WITH GENERATIVE AI APPLICATIONS
Time:
Thursday, 23/May/2024:
3:15pm - 5:15pm

Session Chair: Tomislav Martinec, University of Zagreb FSB, Croatia
Location: Congress Hall Ragusa


Presentations

Generative large language models in engineering design: opportunities and challenges

Filippo Chiarello1, Simone Barandoni1, Marija Majda Škec2, Gualtiero Fantoni1

1University of Pisa, Italy; 2University of Zagreb Faculty of Mechanical Engineering and Naval Architecture, Croatia

Despite the rapid advancement of generative Large Language Models (LLMs), there is still limited understanding of their potential impacts on engineering design (ED). This study fills this gap by collecting the tasks LLMs can perform within ED, using a Natural Language Processing analysis of 15,355 ED research papers. The results lead to a framework of LLM tasks in design, classifying them for different functions of LLMs and ED phases. Our findings illuminate the opportunities and risks of using LLMs for design, offering a foundation for future research and application in this domain.



Inspiration or indication? Evaluating the qualities of design inspiration boards created using text to image generative AI

Charlie Ranscombe1, Linus Tan1, Mark Goudswaard2, Chris Snider2

1Swinburne University of Technology, Australia; 2University of Bristol, United Kingdom

This study explores the application of image generative AI to support the design process by creating inspiration boards. Through an evaluative study, we compare the diversity, quantity, fidelity, and ambiguity of boards generated by image generative AI and by traditional methods. The results highlight that while generative AI produces a large quantity of images, it exhibits limited diversity compared to traditional methods. This suggests a tendency to support interpolation rather than extrapolation of ideas, in turn providing insights on best practice and on the optimal stage for its application.



Integrating large language models for improved failure mode and effects analysis (FMEA): a framework and case study

Ibtissam El Hassani1,2, Tawfik Masrour1,2, Nouhan Kourouma1, Damien Motte3, Jože Tavčar3

1Moulay Ismail University, Morocco; 2University of Quebec at Rimouski, Canada; 3Lund University, Sweden

The manual execution of failure mode and effects analysis (FMEA) is time-consuming and error-prone. This article presents an approach in which large language models (LLMs) are integrated into FMEA. LLMs improve and accelerate FMEA with a human in the loop. The discussion looks at software tools for FMEA and emphasizes that the tools must be tailored to the needs of the company. Our framework combines data collection, pre-processing, and reliability assessment to automate FMEA. A case study validates this framework and demonstrates its efficiency and accuracy compared to manual FMEA.



Towards an automatic contradiction detection in requirements engineering

Alexander Elenga Gärtner, Dietmar Göhlich

Technische Universität Berlin, Germany

This paper presents a novel method for automatic contradiction detection in requirements engineering using a hybrid approach combining formal logic with Large Language Models (LLMs), specifically GPT-3. Our three-phase process detects contradictions by identifying conditionals and pseudo-grammatical elements, and employing LLMs for nuanced contradiction detection. Tested extensively, including on a real-world electric bus project, our method achieved 99% accuracy and 60% recall. This approach significantly reduces manual effort, enhances quality, and is scalable for future advancements.



Sketch2Prototype: rapid conceptual design exploration and prototyping with generative AI

Kristen M. Edwards, Brandon Man, Faez Ahmed

Massachusetts Institute of Technology, United States of America

Sketch2Prototype is an AI-based framework that transforms a hand-drawn sketch into a diverse set of 2D images and 3D prototypes through sketch-to-text, text-to-image, and image-to-3D stages. This framework, demonstrated across various sketches, rapidly generates text, image, and 3D modalities for enhanced early-stage design exploration. We show that using text as an intermediate modality outperforms direct sketch-to-3D baselines for generating diverse and manufacturable 3D models. We find limitations in current image-to-3D techniques, while noting the value of the text modality for user feedback.



Towards the extraction of semantic relations in design with natural language processing

Vito Giordano1,2, Marco Consoloni1,2, Filippo Chiarello1,2, Gualtiero Fantoni1,2

1University of Pisa, Italy; 2Business Engineering for Data Science Lab (B4DS), Italy

Natural Language Processing (NLP) has been extensively applied in design, particularly for analyzing technical documents like patents and scientific papers to identify entities such as functions, technical features, and problems. However, there has been less focus on understanding semantic relations within the literature, and a comprehensive definition of what constitutes a relation is still lacking. In this paper, we define relations in the context of design and the fundamental concepts linked to them. Subsequently, we introduce a framework for employing NLP to extract relations relevant to design.



 
Conference: DESIGN 2024
Conference Software: ConfTool Pro 2.8.101
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany