XaiverZ committed
Commit f22106d · 1 parent: 2c85abc
This view is limited to 50 files because the commit contains too many changes; see the raw diff for the remainder.
Files changed (50)
  1. abs_29K_G/test_abstract_long_2405.00716v1.json +0 -0
  2. abs_29K_G/test_abstract_long_2405.00718v1.json +0 -0
  3. abs_29K_G/test_abstract_long_2405.00722v1.json +0 -0
  4. abs_29K_G/test_abstract_long_2405.00738v1.json +0 -0
  5. abs_29K_G/test_abstract_long_2405.00739v1.json +0 -0
  6. abs_29K_G/test_abstract_long_2405.00747v1.json +0 -0
  7. abs_29K_G/test_abstract_long_2405.00772v1.json +0 -0
  8. abs_29K_G/test_abstract_long_2405.00791v1.json +0 -0
  9. abs_29K_G/test_abstract_long_2405.00801v1.json +0 -0
  10. abs_29K_G/test_abstract_long_2405.00824v1.json +0 -0
  11. abs_29K_G/test_abstract_long_2405.00843v1.json +0 -0
  12. abs_29K_G/test_abstract_long_2405.00853v1.json +0 -0
  13. abs_29K_G/test_abstract_long_2405.00864v1.json +0 -0
  14. abs_29K_G/test_abstract_long_2405.00899v1.json +0 -0
  15. abs_29K_G/test_abstract_long_2405.00902v1.json +0 -0
  16. abs_29K_G/test_abstract_long_2405.00954v1.json +0 -0
  17. abs_29K_G/test_abstract_long_2405.00957v1.json +0 -0
  18. abs_29K_G/test_abstract_long_2405.00958v1.json +0 -0
  19. abs_29K_G/test_abstract_long_2405.00966v1.json +0 -0
  20. abs_29K_G/test_abstract_long_2405.00970v1.json +0 -0
  21. abs_29K_G/test_abstract_long_2405.00972v1.json +0 -0
  22. abs_29K_G/test_abstract_long_2405.00977v1.json +0 -0
  23. abs_29K_G/test_abstract_long_2405.00978v1.json +0 -0
  24. abs_29K_G/test_abstract_long_2405.00981v1.json +0 -0
  25. abs_29K_G/test_abstract_long_2405.00982v1.json +0 -0
  26. abs_29K_G/test_abstract_long_2405.00987v1.json +0 -0
  27. abs_29K_G/test_abstract_long_2405.00988v1.json +0 -0
  28. abs_29K_G/test_abstract_long_2405.01008v2.json +0 -0
  29. abs_29K_G/test_abstract_long_2405.01029v2.json +0 -0
  30. abs_29K_G/test_abstract_long_2405.01035v1.json +0 -0
  31. abs_29K_G/test_abstract_long_2405.01051v1.json +59 -0
  32. abs_29K_G/test_abstract_long_2405.01063v1.json +0 -0
  33. abs_29K_G/test_abstract_long_2405.01097v1.json +0 -0
  34. abs_29K_G/test_abstract_long_2405.01102v1.json +0 -0
  35. abs_29K_G/test_abstract_long_2405.01103v1.json +0 -0
  36. abs_29K_G/test_abstract_long_2405.01116v1.json +0 -0
  37. abs_29K_G/test_abstract_long_2405.01130v1.json +0 -0
  38. abs_29K_G/test_abstract_long_2405.01143v1.json +0 -0
  39. abs_29K_G/test_abstract_long_2405.01159v1.json +0 -0
  40. abs_29K_G/test_abstract_long_2405.01175v1.json +0 -0
  41. abs_29K_G/test_abstract_long_2405.01217v1.json +0 -0
  42. abs_29K_G/test_abstract_long_2405.01229v1.json +0 -0
  43. abs_29K_G/test_abstract_long_2405.01248v1.json +0 -0
  44. abs_29K_G/test_abstract_long_2405.01266v1.json +0 -0
  45. abs_29K_G/test_abstract_long_2405.01270v1.json +0 -0
  46. abs_29K_G/test_abstract_long_2405.01280v1.json +0 -0
  47. abs_29K_G/test_abstract_long_2405.01345v1.json +0 -0
  48. abs_29K_G/test_abstract_long_2405.01350v1.json +0 -0
  49. abs_29K_G/test_abstract_long_2405.01359v1.json +0 -0
  50. abs_29K_G/test_abstract_long_2405.01373v1.json +0 -0
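Each entry above is a standalone JSON record added under `abs_29K_G/`. As a minimal sketch (assuming the repository is checked out locally with the layout listed, and using the field names shown in the rendered record later in this diff), the records can be iterated like this:

```python
import glob
import json


def iter_titles(root: str = "abs_29K_G"):
    """Yield the "title" field of every per-paper JSON record under root.

    The directory name and file-name pattern mirror the files listed in
    this commit; the local checkout location is an assumption.
    """
    for path in sorted(glob.glob(f"{root}/test_abstract_long_*.json")):
        with open(path, encoding="utf-8") as f:
            record = json.load(f)
        yield record.get("title")
```

Sorting the glob results keeps iteration order stable across runs, since `glob` itself makes no ordering guarantee.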
abs_29K_G/test_abstract_long_2405.00716v1.json ADDED
The diff for this file is too large to render. See raw diff
 
abs_29K_G/test_abstract_long_2405.01051v1.json ADDED
@@ -0,0 +1,59 @@
+ {
+ "url": "http://arxiv.org/abs/2405.01051v1",
+ "title": "Generating User Experience Based on Personas with AI Assistants",
+ "abstract": "Traditional UX development methodologies focus on developing ``one size fits\nall\" solutions and lack the flexibility to cater to diverse user needs. In\nresponse, a growing interest has arisen in developing more dynamic UX\nframeworks. However, existing approaches often cannot personalise user\nexperiences and adapt to user feedback in real-time. Therefore, my research\nintroduces a novel approach of combining Large Language Models and personas, to\naddress these limitations. The research is structured around three areas: (1) a\ncritical review of existing adaptive UX practices and the potential for their\nautomation; (2) an investigation into the role and effectiveness of personas in\nenhancing UX adaptability; and (3) the proposal of a theoretical framework that\nleverages LLM capabilities to create more dynamic and responsive UX designs and\nguidelines.",
+ "authors": "Yutan Huang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.HC"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Traditional UX development methodologies focus on developing ``one size fits\nall\" solutions and lack the flexibility to cater to diverse user needs. In\nresponse, a growing interest has arisen in developing more dynamic UX\nframeworks. However, existing approaches often cannot personalise user\nexperiences and adapt to user feedback in real-time. Therefore, my research\nintroduces a novel approach of combining Large Language Models and personas, to\naddress these limitations. The research is structured around three areas: (1) a\ncritical review of existing adaptive UX practices and the potential for their\nautomation; (2) an investigation into the role and effectiveness of personas in\nenhancing UX adaptability; and (3) the proposal of a theoretical framework that\nleverages LLM capabilities to create more dynamic and responsive UX designs and\nguidelines.",
+ "main_content": "Introduction User Interface (UI) and User Experience (UX) are integral components in software engineering (SE) that serve to bridge the gap between human requirements and system functionalities. UI and UX aim to optimise the interaction between the computer and the human via the interface to ensure ease of use and intuitiveness. A well-implemented UI/UX not only diminishes the cognitive load on the user but also reduces the time and effort required for users to understand and navigate through a system [8]. Hence, properly designed UI/UX significantly affects system efficiency, user satisfaction, and overall performance [20]. In the rapidly advancing technological landscape, users\u2019 desire for customised options and personalised experiences has surged, emphasising the importance of customisable and adaptive UX [23]. In addition, there is a growing recognition of the necessity for human-centric requirements that cater to individuals with specific needs, such as those with disabilities or diverse backgrounds [5]. Customizable UX allows users to control and tailor the design based on their preferences. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). ICSE-Companion \u201924, April 14\u201320, 2024, Lisbon, Portugal \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0502-1/24/04. https://doi.org/10.1145/3639478.3639810 It represents an important step toward user-centric interfaces but often fails to deliver a truly personalised experience [6, 12]. Adaptive UX goes beyond customisation, employing the ability to understand user behaviours, preferences and context [14]. Consequently, the system proactively alters the elements of UI to serve users better, e.g., visual appearance, typography, colour schemes, iconography and interactive elements like buttons, forms, and navigation menus [9]. While the idea of a truly adaptive system seems appealing, its practical implementation is challenging due to the diverse needs of users. Additionally, manually designing such a system is laborious, compounded by the need to maintain consistency due to business requirements, e.g., branding and aesthetics. Personas are often used in the field of UX as archetypical user profiles to inform designers about specific user behaviours, needs and goals from the system [18]. Their strength lies in providing a clear, focused understanding of end-users, especially when direct access to human beneficiaries is limited, enabling designers to make informed decisions. The recent advances in artificial intelligence (AI) techniques offer great potential for adaptive UI and addressing the challenges mentioned above via automation. Large Language Models (LLMs) are the recent successors in the area of AI techniques that have shown considerable promise in automating different SE tasks, e.g., code generation [10], requirements management [2], test generation [16], and persona generation [25, 26]. LLMs, trained on vast amounts of data, are excellent candidates for generating adaptive designs due to their ability to understand context, infer user intentions, and generate coherent responses [4]. This PhD research intends to explore the potential of LLMs combined with rich personas, which are more comprehensive and detailed than standard personas, to develop adaptive UX for diverse users. Specifically, I aim to create an adaptive UX framework that tailors user interfaces according to individual preferences and needs, focusing on the design, adapting and leveraging personas (and user requirements). Next, I discuss the related work on adaptive UX and the use of personas (Section 2), and the research plan with research questions (RQs) (Section 3). This PhD project is in the early stages; hence, in Section 4, I discuss the proposed approach and research directions. 2 Related work Adaptive UI/UX design uses a model-based approach as well as an AI-based approach [19, 21]. The model-based approach involves the creation of adaptive designs using architectural models. These models consist of one or multiple layers of architecture that process multimodal data to generate adaptive UXs [11]. This approach primarily focuses on enhancing UX features such as layout, content, and modality; however, while it achieves diversification by leveraging different models, it often lacks the invaluable input of user feedback and iterative refinement derived from legacy systems [1]. Additionally, the methodology for runtime feature selection is often underdeveloped in this approach, which limits its ability to adapt to changing user needs and preferences [7]. This model-based approach seeks to create variations in UX but may fall short in addressing real-time user interactions and feedback [7]. In contrast, the AI-based approach has gained prominence in recent years, capitalizing on the capabilities of AI to generate both text and graphics. Researchers have employed AI tools such as Sketch2Code, MetaMorph, and ChatGPT to dynamically generate UIs based on user interactions and requirements [17, 22]. The use of AI in adaptive UX design introduces a range of possibilities. Yang et al. identified four key channels through which AI augments the value of adaptive UX: self-inferences, world inferences, optimal inferences, and utility inferences. These channels represent AI\u2019s ability to provide users with self-understanding, contextual understanding, optimal solutions, and utility-based responses, significantly enriching the user experience [24]. These four channels serve as foundational concepts for adaptive UX generation with AI and are essential for guiding designers to create more personalized and user-centric interfaces [3]. Despite the potential of AI-based approaches, it\u2019s becoming increasingly evident that solutions utilizing Large Language Models (LLMs) are at the forefront of this technology\u2019s application. These LLMs, which are now among the most commonly implemented forms of AI, heavily rely on the quality of prompts provided to them [15]. In the context of user experience (UX) design, these prompts\u2019 precision and relevance directly impact the outcomes\u2019 quality, as demonstrated in recent studies [13]. Effective prompt engineering is a critical aspect of AI-driven adaptive UX requirements, and it is an area that requires careful consideration and refinement [2]. The model-based and AI-based approaches in adaptive UX design have illustrated diverse possibilities. However, it\u2019s important to note that these approaches commonly lack rigorous evaluation and iterative feedback from users and designers, forming a significant gap in the existing research landscape. This review provides the context for understanding the need for our research, which aims to address these limitations and enhance the field of adaptive UX design by constructing an intelligent user interface that uses ML techniques with a framework to guide experts through the process of creating adaptive UI with user experience. 3 Research Plan The main research aim of this PhD research is to develop a framework for generating adaptive UX using LLMs and personas, structured in the following steps (guided by the research questions mentioned under each step). Foundational Understanding: How is adaptive UX defined and understood in the current literature? Which UX fragments can be adapted and generated automatically? Role of Personas in Adaptive UX: What are the critical elements within personas that lend themselves to the creation of adaptive UX? Are there gaps or limitations in current persona models that could hinder the development of adaptive UX designs? Role of LLMs in Adaptive UX: To what degree can LLMs contribute to the development of adaptive UX? How do LLMs interpret and utilise persona information to generate UX designs? Which prompting techniques in LLMs yield the best adaptive UX results? Framework Development and Evaluation: Do users and practitioners find the adaptive UX generated by our framework useful? What are the challenges when leveraging LLMs for adaptive UX? 4 Solution Approach Foundational Understanding Systematic Literature Review and UX experiment: My foundational understanding begins with a systematic literature review on adaptive UI/UX, exploring definitions, methods, and applications in academic and professional contexts to identify aspects of UX that have been automated previously. Concurrently, I will conduct experiments to create UI automatically using LLMs, with insights from the literature, to validate my findings and identify potential UI fragments that can be adapted easily (e.g., interface designs, colours, buttons). This will establish a foundation for developing an informed adaptive UI/UX framework. Role of Personas in Adaptive UI/UX Expert Insight and Model Comparison: To figure out the important parts of personas that help create adaptive UI and find any shortcomings in current persona representations, I will find key persona elements related to adaptive UI in practice by interviewing experienced UX designers. I will then compare different representations of persona contents and prioritise what is important to include in a persona for adaptive UX generation. The comparative analysis and interviews in parallel will help refine persona representations and triangulate our findings. Role of LLMs in Adaptive UI/UX Exploring LLM\u2019s Capability in Adaptive UI Creation: I plan to carry out a set of experiments revolving around prompt engineering; an example would be using GPT-model-based LLMs and feeding them user preference and background information with personas. These experiments can examine the effectiveness of LLMs in generating user-tailored designs. Framework Development and Evaluation Evolving UI/UX Framework through User and Practitioner Feedback: I aim to develop a UX framework based on LLMs to guide adaptive UX creation. This framework will be dynamic, evolving through iterative enhancements for robustness and effective adaptive UX design. Leveraging LLM capabilities, I seek to establish a foundational, adaptable tool for UX development. Assessment and Refinement of the UI/UX Framework through User-Centric Feedback: The evaluation of the adaptive UI design and UI/UX framework will involve engaging users and experts to interact with and test the developed UIs as part of a daily routine and to complete assigned tasks. Their feedback will inform the integration of prompt engineering into our framework and enable a smooth transition from design-time to run-time approaches.",
+ "additional_graph_info": {
+ "graph": [
+ [
+ "Yutan Huang",
+ "Chetan Arora"
+ ]
+ ],
+ "node_feat": {
+ "Yutan Huang": [
+ {
+ "url": "http://arxiv.org/abs/2405.01051v1",
+ "title": "Generating User Experience Based on Personas with AI Assistants",
+ "abstract": "Traditional UX development methodologies focus on developing ``one size fits\nall\" solutions and lack the flexibility to cater to diverse user needs. In\nresponse, a growing interest has arisen in developing more dynamic UX\nframeworks. However, existing approaches often cannot personalise user\nexperiences and adapt to user feedback in real-time. Therefore, my research\nintroduces a novel approach of combining Large Language Models and personas, to\naddress these limitations. The research is structured around three areas: (1) a\ncritical review of existing adaptive UX practices and the potential for their\nautomation; (2) an investigation into the role and effectiveness of personas in\nenhancing UX adaptability; and (3) the proposal of a theoretical framework that\nleverages LLM capabilities to create more dynamic and responsive UX designs and\nguidelines.",
+ "authors": "Yutan Huang",
+ "published": "2024-05-02",
+ "updated": "2024-05-02",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE",
+ "cs.HC"
+ ],
+ "main_content": "Introduction User Interface (UI) and User Experience (UX) are integral components in software engineering (SE) that serve to bridge the gap between human requirements and system functionalities. UI and UX aim to optimise the interaction between the computer and the human via the interface to ensure ease of use and intuitiveness. A well-implemented UI/UX not only diminishes the cognitive load on the user but also reduces the time and effort required for users to understand and navigate through a system [8]. Hence, properly designed UI/UX significantly affects system efficiency, user satisfaction, and overall performance [20]. In the rapidly advancing technological landscape, users\u2019 desire for customised options and personalised experiences has surged, emphasising the importance of customisable and adaptive UX [23]. In addition, there is a growing recognition of the necessity for human-centric requirements that cater to individuals with specific needs, such as those with disabilities or diverse backgrounds [5]. Customizable UX allows users to control and tailor the design based on their preferences. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). ICSE-Companion \u201924, April 14\u201320, 2024, Lisbon, Portugal \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0502-1/24/04. https://doi.org/10.1145/3639478.3639810 It represents an important step toward user-centric interfaces but often fails to deliver a truly personalised experience [6, 12]. Adaptive UX goes beyond customisation, employing the ability to understand user behaviours, preferences and context [14]. Consequently, the system proactively alters the elements of UI to serve users better, e.g., visual appearance, typography, colour schemes, iconography and interactive elements like buttons, forms, and navigation menus [9]. While the idea of a truly adaptive system seems appealing, its practical implementation is challenging due to the diverse needs of users. Additionally, manually designing such a system is laborious, compounded by the need to maintain consistency due to business requirements, e.g., branding and aesthetics. Personas are often used in the field of UX as archetypical user profiles to inform designers about specific user behaviours, needs and goals from the system [18]. Their strength lies in providing a clear, focused understanding of end-users, especially when direct access to human beneficiaries is limited, enabling designers to make informed decisions. The recent advances in artificial intelligence (AI) techniques offer great potential for adaptive UI and addressing the challenges mentioned above via automation. Large Language Models (LLMs) are the recent successors in the area of AI techniques that have shown considerable promise in automating different SE tasks, e.g., code generation [10], requirements management [2], test generation [16], and persona generation [25, 26]. LLMs, trained on vast amounts of data, are excellent candidates for generating adaptive designs due to their ability to understand context, infer user intentions, and generate coherent responses [4]. This PhD research intends to explore the potential of LLMs combined with rich personas, which are more comprehensive and detailed than standard personas, to develop adaptive UX for diverse users. Specifically, I aim to create an adaptive UX framework that tailors user interfaces according to individual preferences and needs, focusing on the design, adapting and leveraging personas (and user requirements). Next, I discuss the related work on adaptive UX and the use of personas (Section 2), and the research plan with research questions (RQs) (Section 3). This PhD project is in the early stages; hence, in Section 4, I discuss the proposed approach and research directions. 2 Related work Adaptive UI/UX design uses a model-based approach as well as an AI-based approach [19, 21]. The model-based approach involves the creation of adaptive designs using architectural models. These models consist of one or multiple layers of architecture that process multimodal data to generate adaptive UXs [11]. This approach primarily focuses on enhancing UX features such as layout, content, and modality; however, while it achieves diversification by leveraging different models, it often lacks the invaluable input of user feedback and iterative refinement derived from legacy systems [1]. Additionally, the methodology for runtime feature selection is often underdeveloped in this approach, which limits its ability to adapt to changing user needs and preferences [7]. This model-based approach seeks to create variations in UX but may fall short in addressing real-time user interactions and feedback [7]. In contrast, the AI-based approach has gained prominence in recent years, capitalizing on the capabilities of AI to generate both text and graphics. Researchers have employed AI tools such as Sketch2Code, MetaMorph, and ChatGPT to dynamically generate UIs based on user interactions and requirements [17, 22]. The use of AI in adaptive UX design introduces a range of possibilities. Yang et al. identified four key channels through which AI augments the value of adaptive UX: self-inferences, world inferences, optimal inferences, and utility inferences. These channels represent AI\u2019s ability to provide users with self-understanding, contextual understanding, optimal solutions, and utility-based responses, significantly enriching the user experience [24]. These four channels serve as foundational concepts for adaptive UX generation with AI and are essential for guiding designers to create more personalized and user-centric interfaces [3]. Despite the potential of AI-based approaches, it\u2019s becoming increasingly evident that solutions utilizing Large Language Models (LLMs) are at the forefront of this technology\u2019s application. These LLMs, which are now among the most commonly implemented forms of AI, heavily rely on the quality of prompts provided to them [15]. In the context of user experience (UX) design, these prompts\u2019 precision and relevance directly impact the outcomes\u2019 quality, as demonstrated in recent studies [13]. Effective prompt engineering is a critical aspect of AI-driven adaptive UX requirements, and it is an area that requires careful consideration and refinement [2]. The model-based and AI-based approaches in adaptive UX design have illustrated diverse possibilities. However, it\u2019s important to note that these approaches commonly lack rigorous evaluation and iterative feedback from users and designers, forming a significant gap in the existing research landscape. This review provides the context for understanding the need for our research, which aims to address these limitations and enhance the field of adaptive UX design by constructing an intelligent user interface that uses ML techniques with a framework to guide experts through the process of creating adaptive UI with user experience. 3 Research Plan The main research aim of this PhD research is to develop a framework for generating adaptive UX using LLMs and personas, structured in the following steps (guided by the research questions mentioned under each step). Foundational Understanding: How is adaptive UX defined and understood in the current literature? Which UX fragments can be adapted and generated automatically? Role of Personas in Adaptive UX: What are the critical elements within personas that lend themselves to the creation of adaptive UX? Are there gaps or limitations in current persona models that could hinder the development of adaptive UX designs? Role of LLMs in Adaptive UX: To what degree can LLMs contribute to the development of adaptive UX? How do LLMs interpret and utilise persona information to generate UX designs? Which prompting techniques in LLMs yield the best adaptive UX results? Framework Development and Evaluation: Do users and practitioners find the adaptive UX generated by our framework useful? What are the challenges when leveraging LLMs for adaptive UX? 4 Solution Approach Foundational Understanding Systematic Literature Review and UX experiment: My foundational understanding begins with a systematic literature review on adaptive UI/UX, exploring definitions, methods, and applications in academic and professional contexts to identify aspects of UX that have been automated previously. Concurrently, I will conduct experiments to create UI automatically using LLMs, with insights from the literature, to validate my findings and identify potential UI fragments that can be adapted easily (e.g., interface designs, colours, buttons). This will establish a foundation for developing an informed adaptive UI/UX framework. Role of Personas in Adaptive UI/UX Expert Insight and Model Comparison: To figure out the important parts of personas that help create adaptive UI and find any shortcomings in current persona representations, I will find key persona elements related to adaptive UI in practice by interviewing experienced UX designers. I will then compare different representations of persona contents and prioritise what is important to include in a persona for adaptive UX generation. The comparative analysis and interviews in parallel will help refine persona representations and triangulate our findings. Role of LLMs in Adaptive UI/UX Exploring LLM\u2019s Capability in Adaptive UI Creation: I plan to carry out a set of experiments revolving around prompt engineering; an example would be using GPT-model-based LLMs and feeding them user preference and background information with personas. These experiments can examine the effectiveness of LLMs in generating user-tailored designs. Framework Development and Evaluation Evolving UI/UX Framework through User and Practitioner Feedback: I aim to develop a UX framework based on LLMs to guide adaptive UX creation. This framework will be dynamic, evolving through iterative enhancements for robustness and effective adaptive UX design. Leveraging LLM capabilities, I seek to establish a foundational, adaptable tool for UX development. Assessment and Refinement of the UI/UX Framework through User-Centric Feedback: The evaluation of the adaptive UI design and UI/UX framework will involve engaging users and experts to interact with and test the developed UIs as part of a daily routine and to complete assigned tasks. Their feedback will inform the integration of prompt engineering into our framework and enable a smooth transition from design-time to run-time approaches."
+ }
+ ],
+ "Chetan Arora": [
+ {
+ "url": "http://arxiv.org/abs/2310.13976v2",
+ "title": "Advancing Requirements Engineering through Generative AI: Assessing the Role of LLMs",
+ "abstract": "Requirements Engineering (RE) is a critical phase in software development\nincluding the elicitation, analysis, specification, and validation of software\nrequirements. Despite the importance of RE, it remains a challenging process\ndue to the complexities of communication, uncertainty in the early stages and\ninadequate automation support. In recent years, large-language models (LLMs)\nhave shown significant promise in diverse domains, including natural language\nprocessing, code generation, and program understanding. This chapter explores\nthe potential of LLMs in driving RE processes, aiming to improve the efficiency\nand accuracy of requirements-related tasks. We propose key directions and SWOT\nanalysis for research and development in using LLMs for RE, focusing on the\npotential for requirements elicitation, analysis, specification, and\nvalidation. We further present the results from a preliminary evaluation, in\nthis context.",
+ "authors": "Chetan Arora, John Grundy, Mohamed Abdelrazek",
+ "published": "2023-10-21",
+ "updated": "2023-11-01",
+ "primary_cat": "cs.SE",
+ "cats": [
+ "cs.SE"
+ ],
+ "main_content": "Introduction Requirements Engineering (RE) is arguably the most critical task in the software development process, where the needs and constraints of a system are identified, analyzed, and documented to create a well-defined set of requirements [20]. Organizations and project teams often overlook or do not understand the significance of RE and its impact on project success [14]. Some underlying reasons for the lack of effort and resources spent in RE include (i) time, budget and resource constraints, (ii) inadequate training and skills, (iii) uncertainty and ambiguity in early stages, which teams consider as challenging, causing them to cut corners in the RE process; (iv) inadequate tools and automation support [5], and (v) emphasis on an implementation-first approach instead [15]. These lead to significant challenges in the later stages of development as issues related to inconsistent, incomplete and incorrect requirements become increasingly difficult to resolve, resulting in increased development costs, delays, and lower-quality software systems [20]. In this chapter, we contend that the recent advances in Large-Language Models (LLMs) [13] might be revolutionary in addressing many of these RE-related challenges noted above, though with some caveats. LLMs are advanced AI models designed to process and generate human language by learning patterns and structures from vast amounts of text data. These models have made significant strides in natural language processing (NLP) tasks and are particularly adept at handling complex language-based challenges. 
LLMs, including OpenAI\u2019s Generative Pre-trained Transformer (GPT) series and Google\u2019s Bidirectional Encoder Representations from Transformers (BERT) [8] and LaMDA [24], learn to comprehend and generate human language by predicting the most probable next word in a given sequence, capturing the probability distribution of word sequences in natural language (NL). OpenAI\u2019s ChatGPT and Google\u2019s Bard, built on these LLM advancements, are examples of chatbot platforms designed to facilitate interactive and dynamic text-based conversations. When a user provides input to ChatGPT or Bard, the model processes the text and generates a contextually appropriate response based on the patterns learned during the training process. A large majority of requirements are specified in NL. LLMs thus have the potential to be a \u2018game-changer\u2019 in the field of RE. This could be achieved by automating and streamlining several crucial tasks and helping to address many of the RE challenges mentioned earlier. With the focus on automated code generation using LLMs, delivering concise and consistently unambiguous specifications to these models (as prompts) becomes paramount. This underscores the ever-growing significance of RE in this new era of generative AI-driven software engineering. This chapter explores the potential of LLMs to transform the RE processes. We present a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis for applying LLMs in all key RE stages, including requirements elicitation, analysis, and specification. We also discuss examples from a preliminary evaluation as motivation for using LLMs in all RE stages. Preliminary Evaluation Context. We performed a preliminary evaluation on a real-world app (pseudonym ActApp), encouraging patients with type-2 diabetes (T2D) to remain active. 
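The next-word-prediction idea described above can be illustrated at toy scale with a bigram counter; this is a drastic simplification of how GPT-style models are trained, and the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count word bigrams: a toy stand-in for learning the probability
    distribution of word sequences."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def most_probable_next(model: dict, word: str) -> str:
    """Return the highest-count continuation of `word`."""
    return model[word.lower()].most_common(1)[0][0]

corpus = ("the system shall send a reminder . "
          "the system shall encrypt data . "
          "the user can dismiss a reminder .")
model = train_bigram(corpus)
nxt = most_probable_next(model, "system")  # "shall" follows "system" twice
```

Real LLMs replace the count table with a neural network over subword tokens, but the training objective is the same in spirit.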
To ensure that the app is effective, engaging, and personalized, the ActApp team implemented a machine learning (ML) model in the background to learn from user behaviour and preferences and suggest appropriate reminders and activities. The team has a mix of experienced engineers and an ML scientist (with little understanding of RE). Our preliminary evaluation and the examples in the chapter are done using ChatGPT (GPT-3.5). [Fig. 1: LLMs-driven RE Process Overview \u2013 the four RE stages supported by LLMs-based RE agents and prompt engineering.] Structure. Section 2 provides an overview of our vision of the role of LLMs in the RE process. Sections 3, 4, 5 and 6 cover the four major RE stages, i.e., elicitation, specification, analysis and validation, respectively. Section 7 presents our preliminary evaluation results. Section 8 covers the lessons learned, and Section 9 concludes the chapter. 2 LLMs-driven RE Process Fig. 1 provides an overview of our vision of an LLMs-driven RE process (an adaptation of the RE process by Van Lamsweerde [14]). The RE process can be broadly divided into four stages: requirements elicitation (domain understanding and elicitation), specification (specification and documentation), analysis (evaluation and negotiation), and validation (quality assurance). We note that the exact instantiation and contextualization of LLMs in RE will depend on the problem domain and the project. For instance, implementing the LARRE framework for ActApp might be different from a safety-critical system. 
In this book chapter, we provide a broad perspective on the role of LLMs in RE, which should be applicable to a wide range of projects, as the RE stages discussed are common and can be generalized across domains and systems, with finer refinements required in some cases. LLMs can be employed differently for automating RE tasks, e.g., as they have been successfully applied for ambiguity management [9]. In this chapter, we specifically focus on prompting by requirements analysts or other stakeholders directly on generative AI agents, e.g., ChatGPT or fine-tuned LLM-based RE agents built on top of these agents. One would generate multiple agents based on LLMs for interaction (via prompting) with the stakeholders (e.g., domain experts, engineering teams, clients, requirements engineers and end users) and potentially with each other for eliciting, specifying, negotiating, analysing, validating requirements, and generating other artefacts for quality assurance. Prompting is a technique to perform generative tasks using LLMs [11]. Prompts are short text inputs to the LLM that provide information about the task the LLM is being asked to perform. Prompt engineering is designing and testing prompts to improve the performance of LLMs and get the desired output quality. Prompt engineers use their knowledge of the language, the task at hand, and the capabilities of LLMs to create prompts that are effective at getting the LLM to generate the desired output [26]. Prompt engineering involves selecting appropriate prompt patterns and prompting techniques [26]. Prompt patterns refer to different templates targeted at specific goals, e.g., the Output Customization pattern focuses on tailoring the format or structure of the LLM output. Other generic templates include formatting prompts consistently in a \u201cContext, Task and Expected Output\u201d format. 
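The "Context, Task and Expected Output" template just mentioned is trivial to mechanise; a sketch, with illustrative function and argument names and an invented ActApp-flavoured example:

```python
def build_prompt(context: str, task: str, expected_output: str) -> str:
    """Assemble a prompt in the Context / Task / Expected Output format."""
    return (f"Context: {context}\n"
            f"Task: {task}\n"
            f"Expected Output: {expected_output}")

p = build_prompt(
    "ActApp reminds T2D patients to stay active.",
    "Elicit three requirements from a retired user's perspective.",
    "Numbered requirements, each with a unique id and a rationale.",
)
```

Keeping the three sections explicit makes prompts easier to vary systematically across the experiments discussed in this chapter.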
For instance, one can use a persona for output customization, wherein the agent plays a certain role when generating the output, e.g., the patient in ActApp. Prompting technique refers to a specific strategy employed to get the best output from the LLM agents. Some of the well-known prompting techniques include zero-shot prompting [19], few-shot prompting [17], chain-of-thought prompting [25] and tree-of-thought prompting [27]. In this context, prompt engineering combinations must be empirically evaluated in RE for different systems and domains. In each section, we explore the role of LLMs in each RE stage with a SWOT analysis. The insights for the SWOT analysis were systematically derived from a combination of our direct experiences with LLMs, feedback gathered from practitioner interactions, and our preliminary evaluation. 3 Requirements Elicitation 3.1 Elicitation Tasks Requirements Elicitation encompasses pre-elicitation groundwork (as-is analysis and stakeholder analysis) and core elicitation activities with stakeholders (interviews and observations) [20]. The main objective is to identify and document the project information, system needs, expectations, and constraints of the solution under development. The key tasks in elicitation include domain analysis, as-is analysis, stakeholders analysis, feasibility analysis, and conducting elicitation sessions with the identified stakeholders using techniques such as interviews and observations. While the elicitation process is methodical, it is inherently dynamic, often necessitating iterative sessions as requirements evolve and new insights emerge from stakeholders. Requirements elicitation is also intensely collaborative, demanding constant feedback and validation from diverse stakeholders to ensure clarity and alignment. 
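The few-shot prompting technique mentioned above can be sketched as a simple prompt assembler; the example pairs, classification task, and names are invented for illustration.

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: task description, worked (input, output)
    examples, then the new input for the model to complete."""
    lines = [task, ""]
    for i, (inp, out) in enumerate(examples, 1):
        lines += [f"Example {i}:", f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify each requirement as functional (F) or non-functional (NF).",
    [("The app shall send a daily reminder.", "F"),
     ("All data shall be encrypted at rest.", "NF")],
    "The app shall support screen readers.",
)
```

Zero-shot prompting is the degenerate case with an empty example list; chain-of-thought variants additionally include worked reasoning in each example output.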
Some prevalent challenges associated with requirements elicitation involve the lack of domain understanding [22], unknowns (i.e., known and unknown unknowns) [23], communication issues due to language barriers or technical jargon [6], and lack of a clear understanding of what needs to be built in early stages [10]. In addition, the current elicitation techniques fall short in human-centric software development, i.e., ensuring adequate representation from all potential user groups based on their human-centric factors, such as age, gender, culture, language, emotions, preferences, accessibility and capabilities [12]. External influences, such as evolving legal stipulations and legal compliance, also play a pivotal role in shaping the elicitation process. Furthermore, with the rapidly advancing technological landscape, the existing elicitation processes often fail to capture the system requirements precisely, e.g., in the case of AI systems, bias, ethical considerations, integration of non-deterministic AI components in larger software systems [2]. 3.2 Role of LLMs LLMs can address numerous key challenges in the elicitation phase, including domain analysis. LLMs can rapidly absorb vast amounts of domain-specific literature, providing foundational structure and acting as a proxy for a domain knowledge source [16]. They can assist in drawing connections, identifying gaps, offering insights based on the existing literature, and automating tasks such as as-is analysis, domain analysis and regulatory compliance. In addition to stakeholder communication, leveraging LLMs would require other inputs such as existing domain or project-specific documentation (e.g., fine-tuning LLMs) and regulations (e.g., GDPR). While LLMs have access to the domain knowledge, it is difficult to replace domain specialists\u2019 intuition, experience, and expertise. 
For example, in ActApp the nuanced understanding of how specific exercises influence a patient\u2019s glucose or hormonal levels rests with medical professionals such as endocrinologists, who are irreplaceable in RE. LLMs help identify unknowns by analyzing existing documentation and highlighting areas of ambiguity or uncertainty. LLMs can help complete partial requirements or suggest alternative ideas that the requirements analysts might have otherwise missed, drawing on their large corpus of training data and connections. LLMs can assist with translating complex technical jargon into plain language and aiding stakeholders from different linguistic backgrounds, e.g., translating medical terminology in ActApp for requirements analysts or translating domain information from one language to another. LLMs play a vital role in human-centric RE. They can analyze diverse user feedback, like app reviews, ensuring all user needs are addressed. LLMs can also simulate user journeys considering human-centric factors, but this necessitates resources such as app reviews, persona-based use cases, and accessibility guidelines. For emerging technologies, LLMs need regular updates, a challenging task since automated solutions might be affected by these updates. The use of LLMs in requirements elicitation also warrants ethical scrutiny. LLMs may introduce or perpetuate biases as they are trained on vast internet data. Ensuring the ethical use of LLMs means avoiding biases and guaranteeing that the stakeholders\u2019 inputs are managed according to data privacy and security guidelines. LLM output should be viewed as complementary to human efforts. Requirements analysts bring domain expertise, cultural awareness, nuanced understanding, and empathetic interactions to the table, ensuring that software requirements cater to the diverse and evolving needs of end-users. This synergy of humans and generative AI is crucial in human-centric software development. 
Example Prompt for requirements generation. I am developing an app called ActApp. ActApp is a real-time application for T2D patients to ensure an active lifestyle. The app gives timely reminders for working out, health & disease management. Act and respond as an ActApp user with the persona provided below in JSON format. The main aim is to elicit the requirements from your perspective. The generated requirements should each be associated with a unique id and rationale. {\u201cpersona\u201d:{ \u201cname\u201d: \u201cJane Doe\u201d, \u201cage\u201d: \u201c65\u201d, \u201cgender\u201d: \u201cFemale\u201d, \u201clocation\u201d: \u201cCanada\u201d, \u201coccupation\u201d: \u201cRetired\u201d, \u201cmedical info\u201d: . . ., \u201clifestyle\u201d: . . ., \u201cgoals\u201d: . . . \u201cwork\u201d: \u201csedentary\u201d, \u201cchallenges\u201d: . . . } } Example. For the ActApp, LLMs are used to gather information from various stakeholders, including patients and carers. The agent can conduct virtual interviews with the stakeholders (for a given persona, as exemplified above), asking targeted questions to identify their needs, preferences, and concerns. For instance, the agent may ask users about desired features and data privacy concerns. Additionally, LLMs can analyze and synthesize information from online forums, social media, reviews from similar apps, and research articles on disease management to extract insights into common challenges patients face and best practices for care. This information can generate preliminary requirements (e.g., R1 and R2 below), which can be refined in later stages. ActApp Example Information and Early Requirements. Key stakeholders (identified based on initial app ideas): Patients, carers, app developers, ML scientists, and healthcare professionals, e.g., endocrinologists. R1. The patients should receive a notification to stand up and move around if they have been sitting for long. R2. 
The patients should not receive notifications when busy. SWOT Analysis: LLMs for Requirements Elicitation [Strengths] \u2022 Interactive Assistance: Can actively assist in elicitation, asking probing questions and generating diverse potential requirements based on initial inputs \u2013 leading to uncovering unknowns. \u2022 Efficient Data Processing: Facilitate round-the-clock elicitation, rapidly processing large volumes of elicitation data in varied formats. \u2022 Domain Knowledge: Can rapidly absorb and understand domain-specific literature and automate tasks based on the absorbed literature. \u2022 Assisting Multilingual and Multicultural Stakeholders: Can accurately translate complex technical jargon into plain language and aid stakeholders\u2019 communication even with diverse backgrounds. [Weaknesses] \u2022 Lack of Empathy and Nuance: Do not possess human empathy and might miss out on emotional cues or implicit meanings. \u2022 Lack of Domain Expertise: While LLMs understand domain knowledge, they cannot replace the intuition and experience of domain experts. \u2022 Misinterpretation Risks: The potential for misinterpreting context or over-relying on existing training data without considering unique project nuances. [Opportunities] \u2022 Real-time Documentation and Processing: Can document requirements and analyze feedback in real time, ensuring thoroughness and accuracy. \u2022 Human-centric Elicitation: By analyzing diverse user feedback, LLMs can ensure all user needs are considered, promoting a holistic approach to elicitation. [Threats] \u2022 Over-reliance and Trust Issues: Excessive dependence might lead to missing human-centric insights, and some stakeholders might hesitate to engage with AI. \u2022 Data Security and Privacy Concerns: Eliciting requirements via LLMs could raise data confidentiality issues, especially with sensitive information (e.g., in public LLMs-based agents like ChatGPT and Bard). 
\u2022 Potential Biases: May inadvertently introduce or perpetuate biases in the elicitation process if trained on biased data or past flawed projects. \u2022 Regular Updates and Compatibility: Given the stochastic nature of LLMs, regular updates might lead to technical issues and inconsistency in project requirements. On the other hand, outdated LLMs are suboptimal for RE. 4 Requirements Specification 4.1 Specification Tasks Requirements Specification translates the raw, elicited requirements information into structured and detailed documentation, serving as the system design and implementation blueprint. LLMs can contribute to this process by helping to generate well-structured requirements documents that adhere to established templates and guidelines, e.g., the \u2018shall\u2019 style requirements, user story formats, the EARS template [18], or specific document templates, e.g., VOLERE [21]. Given a project\u2019s context, the informal NL requirements need to be converted into structured specifications \u2013 both what the system should do (functional requirements) and the quality attributes or constraints the system should possess (non-functional requirements). Requirements analysts must maintain consistency in terminology and style throughout the document to enhance readability and clarity. In this stage, requirements can be prioritized considering stakeholder needs, project constraints, and strategic objectives. This phase is exacting, as ambiguities or errors can lead to significant project delays and escalated costs in later stages. Moreover, it is essential to balance the level of detail (neither too granular nor too abstract) and ensure that non-functional requirements like security and usability are adequately addressed and not sidelined. 
Additional tasks, such as generating a requirements glossary, examples and rationale, and developing user personas to ensure that the human-centric aspects are duly covered, are often performed during or immediately after requirements specification. 4.2 Role of LLMs LLMs can streamline the specification process. The unstructured requirements from the elicitation stage can be automatically formatted into structured templates like EARS or user stories (see the example prompt below for EARS and the example for user stories). They can further assist in categorizing requirements into functional and non-functional and classifying NFRs like performance, ethical requirements, and usability. LLMs can automate other tasks during specification, e.g., generating a glossary, rationale and examples, and developing personas [29]. Another advantage of LLMs is their ability to cross-check requirements against existing standards, regulatory guidelines, or best practices. For a health-focused app like ActApp, LLMs can ensure alignment with health data privacy standards and medical device directives. LLMs can also suggest requirements prioritization by analyzing technical dependencies, project goals, and historical data. However, generating requirements prioritization requires input from several SE roles and deep-seated expertise. Hence, the results produced by LLMs might be inaccurate. Along similar lines, while LLMs can enhance the speed and consistency of specification, there is a risk of \u2018over-automation\u2019, i.e., overlooking some crucial aspects or over-trusting the requirements produced by LLMs. For instance, determining the criticality of specific NFRs\u2014like how secure a system needs to be or how scalable\u2014often requires human expertise. LLMs can aid the process, but decisions should be validated by domain experts familiar with the project context. Similarly, for compliance issues, it is essential to have domain experts validate the results. Example Prompt. 
Using the EARS template defined by the BNF grammar below, generate the <requirement> from the unformatted requirement \u201cThe patients should not receive notifications when busy.\u201d <requirement> ::= <ubiquitous> | <event-driven> | <state-driven> | <optional> | <unwanted> <ubiquitous> ::= \u201cThe system shall <action>.\u201d <event-driven> ::= \u201cWhen <event>, the system shall <action>.\u201d <state-driven> ::= \u201cWhile <state>, the system shall <action>.\u201d <optional> ::= \u201cWhere <feature>, the system shall <action>.\u201d <unwanted> ::= \u201cThe system shall <preventive-action> to <unwanted-outcome>.\u201d <action> ::= <verb-phrase> <event> ::= <noun-phrase> <state> ::= <noun-phrase> <feature> ::= <noun-phrase> <preventive-action> ::= <verb-phrase> <unwanted-outcome> ::= <noun-phrase> <verb-phrase> ::= \u201ca verb phrase\u201d <noun-phrase> ::= \u201ca noun phrase\u201d Example Output: \u201cWhen patient is driving, ActApp shall not send notifications.\u201d Example. In ActApp, the LLMs can generate refined requirements as user stories (as desired by ActApp team members). The requirements document may include sections such as an introduction, a description of ActApp stakeholders, a list of functional and non-functional requirements, a list of ActApp features with priorities, and any constraints or assumptions related to the development process. For non-functional requirements, such as data privacy for patients\u2019 health information, LLMs can cross-reference with regulations, e.g., HIPAA or GDPR, to ensure compliance [1]. ActApp Example (as user stories for functional requirements). R1.1. As a user, I want to receive a notification to move if I have been sitting for 60 minutes, so that I will be active. R1.2. As a carer, I want ActApp to notify me if the patient continues to remain inactive after receiving a notification to move, so that I can intervene. NFR1.1: The app shall encrypt all data during transmission and storage to ensure patient privacy and comply with GDPR guidelines. 
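Conversely, checking which EARS pattern a generated requirement follows can be approximated with a keyword check. The sketch below keys off the standard EARS leading keywords (When/While/Where/If); the function is an illustrative assumption and far cruder than a full grammar, and it uses the standard "If" form for unwanted behaviour rather than the chapter's adapted production.

```python
def classify_ears(requirement: str) -> str:
    """Guess the EARS pattern of a requirement from its leading keyword.
    Deliberately simplified: real EARS templates also constrain the
    clause structure, not just the first word."""
    req = requirement.strip()
    if req.startswith("When "):
        return "event-driven"
    if req.startswith("While "):
        return "state-driven"
    if req.startswith("Where "):
        return "optional"
    if req.startswith("If "):
        return "unwanted"
    if " shall " in req:
        return "ubiquitous"
    return "unstructured"

label = classify_ears("When patient is driving, ActApp shall not send notifications.")
```

A check like this could serve as a cheap post-hoc filter on LLM output before human review.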
We note that for SWOT analysis in subsequent phases, we attempt to reduce overlap (to the best extent possible). For instance, almost all threats from elicitation are also applicable for specification. SWOT Analysis: LLMs for Requirements Specification [Strengths] \u2022 Automation: Can streamline converting raw requirements into structured formats, such as EARS or user stories. Can generate additional artefacts, e.g., glossaries and personas, from converted requirements and domain information. \u2022 Compliance Check: Can cross-reference requirements against standards and regulatory guidelines, ensuring initial compliance. \u2022 Requirement Classification: Can categorize requirements into functional and non-functional, further classifying them. \u2022 Initial Prioritization: Can suggest requirement prioritization based on dependencies, project goals, and historical data. [Weaknesses] \u2022 Depth of Domain Expertise: While LLMs have vast knowledge, they might not fully capture the nuances of specialized domains. \u2022 Over-Automation Risk: Sole reliance on LLMs might lead to overlooking crucial requirements or business constraints. \u2022 Ambiguity Handling: May sometimes struggle with ambiguous or conflicting requirements, necessitating human intervention. [Opportunities] \u2022 Continuous Feedback: Can aid in real-time documentation and specification updates as requirements evolve. \u2022 Human-Centric Focus: Can help maintain a human-centric outlook in the specification stage by generating alternate requirements for different user groups. [Threats] \u2022 Ambiguities in Structured Requirements: Can generate requirements in specific formats. However, the generated requirements can have unintentional ambiguities (if the model is not fine-tuned adequately) or other quality issues (e.g., inconsistencies due to the limited \u2018memory\u2019 of LLMs). 
\u2022 Over-specification: Known to be verbose [31], which can easily lead to over-defined requirements and, consequently, a rigid system design. \u2022 Missing Non-functional Requirements: Non-functional (unlike functional) requirements rely on a deeper understanding of the system\u2019s context, which LLMs might miss or inadequately address. 5 Requirements Analysis 5.1 Analysis Tasks Requirements analysis focuses on understanding, evaluating, and refining the gathered requirements to ensure they are of high quality, i.e., coherent, comprehensive, and attainable, before moving to the design and implementation stages. An integral component of this phase is the automated evaluation of requirements quality. This includes resolving ambiguities, ensuring consistency, and guaranteeing completeness. Deficiencies in this phase can affect subsequent artefacts, leading to project delays, budget overruns, and systems misaligned with stakeholder expectations. The main challenges of NL requirements are ambiguity, incompleteness, inconsistency and incorrectness, which lead to misinterpretations, untestable requirements, requirements untraced to their origin, no consensus among stakeholders on what needs to be built, and conflicting requirements. Constantly evolving requirements further exacerbate all these issues. At times, documented requirements or underlying assumptions might inadvertently overlook potential risks or dependencies. In such instances, it becomes crucial to identify these risks and introduce new requirements as countermeasures. Analysis of requirements, for instance, reaching agreement on conflicting requirements, requires negotiation. Negotiation is the key to resolving such conflicts and helping the stakeholders converge on a unified set of requirements. From a human-centric RE perspective, the analysis stage must prioritize users\u2019 emotional, cultural and accessibility needs. 
This entails scrutinizing user feedback for inclusivity, vetting ethics and bias concerns\u2014especially in AI-based software systems [3]\u2014and analyzing requirements against prevailing accessibility guidelines. 5.2 Role of LLMs LLMs come into play as powerful tools to automate the quality evaluation process: 1. Automated Evaluation for Quality Assurance: LLMs can automatically assess the quality of requirements, flagging any ambiguities, vague terms, inconsistencies, or incompleteness, and highlighting gaps or overlaps. 2. Risk Identification and Countermeasure Proposal: LLMs, when equipped with domain knowledge, can identify potential risks associated with requirements or their underlying assumptions. Drawing from historical data or known risk patterns, LLMs can suggest new requirements that act as countermeasures to mitigate those risks, ensuring system design and operation robustness. 3. Conflict Resolution and Negotiation: By identifying areas of contention, LLMs can facilitate the negotiation process. Multiple LLM agents can be employed to negotiate the requirements, suggest compromises, and simulate various scenarios, helping stakeholders converge on a unified set of requirements. 4. Human-centric Requirements Enhancement: LLMs can evaluate requirements to ensure they cater to diverse user needs, accessibility standards, and user experience guidelines. LLMs can also suggest requirements that enhance the software\u2019s usability or accessibility based on user personas or feedback. Moreover, they can evaluate requirements for biases or potential ethical concerns, ensuring that the software solution is inclusive and ethically sound. 5. Change Impact Analysis: LLMs offer real-time feedback in requirements refinement, enhancing the efficiency of the iterative analysis and maintaining stakeholder alignment. The change impact analysis process implemented as a continuous feedback cycle via LLMs ensures consistency. 
LLMs can further proactively predict requirements changes, improving the quality of requirements. Example Prompt. Context: For the ActApp system, we need to negotiate and prioritize requirements (FR1-FR7 and NFR1-NFR5) that ensure the system caters to the patient\u2019s health needs while maintaining usability and data privacy. Task: Create two agents: Agent1 (A1) represents the primary user (a T2D patient). Agent2 (A2) represents the system\u2019s software architect. A1 and A2 will negotiate and discuss FR1-FR7 to determine a priority list. During this negotiation, A1 will focus on the user experience, health benefits, and practical needs, while A2 will consider technical feasibility, integration with existing systems, and the architectural perspective. The agents can sometimes have differing opinions, leading to a more nuanced and realistic discussion. No decisions should violate NFR1-NFR5. Expected Output Format: FRs in decreasing order of priority, with the rationale for the priority order based on the negotiation outcomes between A1 and A2. Example. In the context of ActApp, LLMs can (i) identify and resolve ambiguities or inconsistencies in the requirements, such as conflicting preferences of patients or unclear feature descriptions; (ii) highlight any dependencies and requisites, e.g., a secure data storage system to support medical data storage; and (iii) generate missed ethical and regulatory concerns related to data storage. ActApp Analysis Examples. Identify the missing information from R1.2 and NFR1.1 in Section 4, wherein in R1.2 the information on how long after the initial notification the system should wait before notifying the carer is missing, and in NFR1.1, no information about data retention and deletion was specified with regard to GDPR. 
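The vague-term flagging described in the quality-assurance point above can be approximated without an LLM at all; a minimal keyword-based sketch, with an invented watchlist far smaller than curated ambiguity lexicons:

```python
# Illustrative watchlist; real quality checkers use larger curated lists.
VAGUE_TERMS = {"appropriate", "fast", "user-friendly", "flexible",
               "robust", "sufficient", "as needed"}

def flag_vague_terms(requirement: str) -> list[str]:
    """Return the vague terms occurring in a requirement sentence."""
    text = " ".join(
        requirement.lower().replace(",", " ").replace(".", " ").split()
    )
    words = set(text.split())
    return sorted(t for t in VAGUE_TERMS
                  if (t in words) or (" " in t and t in text))

hits = flag_vague_terms(
    "The app shall respond fast and provide appropriate reminders as needed."
)
```

An LLM-based checker would replace the watchlist with model judgement and context, but a deterministic pass like this is a useful baseline and sanity check.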
SWOT Analysis: LLMs for Requirements Analysis [Strengths] \u2022 Automation Support: Can automatically and efficiently assess and enhance the quality of requirements, addressing ambiguities, inconsistencies, incompleteness, potential risks and countermeasures, and conflicts. \u2022 Consistency: Unlike human analysts who might have varying interpretations or might overlook certain aspects due to fatigue or bias, LLMs provide consistent analysis, ensuring uniformity in the analysis process. \u2022 Historical Data Analysis: Can draw insights from historical project data, identifying patterns, common pitfalls, or frequently occurring issues and provide proactive analysis based on past experiences. \u2022 Support Evolution and Continuous Learning: Provide real-time feedback during iterative requirements analysis, predicting possible changes and ensuring consistency. As LLMs are exposed to more data, they can continuously learn and improve, ensuring their analysis is refined. [Weaknesses] \u2022 Lack of Nuanced Domain Understanding: Can process vast amounts of information but might miss or get confused on subtle nuances or domain context that a human analyst would catch, leading to potential oversights. \u2022 Difficulty with Ambiguities: Struggle with inherently ambiguous or conflicting requirements, potentially leading to misinterpretations for all analysis tasks. \u2022 Limited Context/Memory: Have a limited \u201cwindow\u201d of context they can consider at any given time. This means that when analyzing large requirements documents as a whole, they might lose context on earlier parts of the document, leading to potential inconsistencies or oversights. They don\u2019t inherently \u201cremember\u201d or \u201cunderstand\u201d the broader context beyond this window, which can be challenging when ensuring coherence and consistency across the document. 
[Opportunities]
• Continuous Refinement: As requirements evolve, LLMs can provide real-time feedback on the quality and consistency of these requirements.
• Integration with Development Tools: Can be integrated with software development environments, offering real-time requirement quality checks during the software development lifecycle.
• Collaborative Platforms: Can facilitate better stakeholder collaboration by providing a unified platform for requirements analysis, negotiation, and refinement.

[Threats]
• Over-automation: Risk of sidelining human expertise in favor of automated checks, potentially leading to overlooked requirements defects.
• Regulatory Issues: Certain industries, domains, or certification bodies might have regulatory or compliance concerns related to using LLMs for critical RE tasks.

6 Requirements Validation

6.1 Validation Tasks

Requirements validation ensures that the documented requirements accurately represent the stakeholders' needs and are ready for the subsequent design and implementation stages. Validating requirements often involves intricate tasks such as reviewing them with stakeholders, inspecting them for defects, ensuring their traceability to their origins (or other artefacts), and defining clear acceptance criteria and test scenarios. The primary challenge in the validation phase revolves around ensuring the requirements have no gaps with respect to stakeholders' 'real' expectations and tacit assumptions. Requirements might be interpreted differently by different stakeholders, leading to potential misalignments. The dynamic nature of projects means that requirements evolve, further complicating the validation process. Occasionally, requirements or their underlying assumptions might inadvertently miss certain constraints or dependencies, creating further issues for validation tasks. In such cases, it is imperative to identify these gaps and refine the requirements accordingly.
6.2 Role of LLMs

LLMs can assist in the validation phase in several nuanced ways. As highlighted in the Analysis phase, LLMs can aid manual reviews and inspections by flagging potential ambiguities, inconsistencies, or violations based on pre-defined validation heuristics. LLMs can be utilized to simulate stakeholder perspectives, enabling analysts to anticipate potential misinterpretations or misalignments. For instance, by analyzing historical stakeholder feedback, LLMs can predict areas where clarification might be sought from the perspective of a given stakeholder. With their ability to process vast amounts of data quickly, LLMs can assist in tracing requirements to other artefacts, e.g., design documents and regulatory codes. LLMs can further assist in formulating clear and precise acceptance criteria based on the documented requirements. They can also propose test scenarios, ensuring a comprehensive validation suite. Furthermore, LLMs can scan the requirements to identify and flag any overlooked human-centric aspects, constraints, or dependencies, ensuring more comprehensive validation. While LLMs can facilitate most validation tasks, as noted above, a major weakness of LLMs in this context is that validation tasks often require an overall picture of the project, domain, and stakeholders' viewpoints – it is extremely difficult for LLMs to work at that level of abstraction, which typically requires manual effort from numerous stakeholders.

SWOT Analysis: LLMs for Requirements Validation

[Strengths]
• Alternate Perspectives: Can simulate multiple stakeholder perspectives and ensure that all requirements are vetted from different viewpoints.
• Proactive Feedback: Can provide real-time feedback during validation sessions, enhancing stakeholder engagement.
[Weaknesses]
• Depth of Context Understanding: While adept at processing text, LLMs cannot access the tacit knowledge in RE, the domain, and the business context.

[Opportunities]
• Interactive Validation Workshops: Can be integrated into workshops to provide instant feedback, enhancing the validation process.
• Gap Analysis Enhancements: Can assist in refining requirements by highlighting overlooked aspects or potential improvements.
• (Semi-)Automated Acceptance and Testing Artefact Generation: Can yield substantial effort savings in V&V activities and, concomitantly, higher-quality software products by generating acceptance criteria and test scenarios.

[Threats]
• Excessive False Positives: Likely to generate too many potential issues, leading to unnecessary overhead in addressing false positives and slowing down the validation process – rendering the added value of automation moot.
• Stakeholder Misrepresentation: Might not accurately capture the unique concerns or priorities of specific stakeholders (when simulating stakeholder perspectives), leading to a skewed validation process.

Example Prompt. Context: For the ActApp system, we need to validate all the requirements specified for the system (FR1–FR50 and NFR1–NFR28). The goal is to identify gaps in all the requirements from three different stakeholders' perspectives: the software developer, the ML scientist, and the product owner. Task: Imagine all three stakeholders are answering this question. For each requirement, each stakeholder will write down the gaps in the requirement based on their role, and then share it with the group. All stakeholders will then review the inputs in the group and move to the next step. If any expert has no gap or concern identified, they can skip the discussion on that requirement.
Expected Output Format: For all gaps agreed upon by all stakeholders, export the issue with the requirement id.

Example. In ActApp, LLMs can generate acceptance criteria. LLMs can also uncover gaps: in our preliminary evaluation, the ActApp team discovered the system needed to comply with Australia's Therapeutic Goods Act (TGA) regulations.

Example Acceptance Criteria.
R1.1-AC1 Accurately detect when the user has been sitting for 60 continuous minutes.
R1.1-AC2 Notifications can be toggled on or off by the user.
R2-AC1 Accurately identify when the user is driving.

7 Preliminary Evaluation

We conducted a preliminary evaluation of LLMs in RE on a real-world system (ActApp). We note that the purpose of this evaluation was not to conduct a comprehensive assessment of LLMs in RE. Instead, we focused primarily on the feasibility of integrating LLMs into requirements elicitation. The rationale is that the applicability of LLMs to the remaining RE stages is relatively intuitive, thanks in part to the extensive history and well-established methodologies of applying NLP techniques to these stages [28, 30]. We thus deemed exploring LLMs in requirements elicitation the essential first step.

Data Collection Procedure. The main goal of our data collection procedure was to establish the user requirements in ActApp and analyze the performance of ChatGPT for requirements elicitation. Our team had access to three ActApp experts: the project manager, an ML scientist, and a software engineer. These experts met with a researcher, Katelyn (pseudonym), to articulate the project's focus. The meetings were part of a broader effort to understand the RE processes. ChatGPT was not mentioned to the experts to avoid bias. Katelyn engaged in four two-hour meetings with the experts, where they presented an overview of the project, system users, user requirements, and software features.
We used ChatGPT to simulate the initial stages of requirements elicitation, wherein requirements engineers acquire project knowledge from stakeholders, review existing documentation, and formulate user requirements and core functionalities. The process involved four participants: Jorah and Jon, both seasoned software/requirements engineers, and Arya and Aegon, both early-stage RE and NLP research students. They were given a project overview from Katelyn and asked to start a ChatGPT session, introducing themselves as developers of the ActApp project. Guided by the project brief, they interacted with ChatGPT to elicit user-story-style requirements over a 45-minute session. Subsequently, Katelyn examined the requirements the participants generated using ChatGPT against the actual project requirements.

Table 1: Evaluation Results

Participant  Elicited  Full Match  Partial Match  Potentially Relevant  Superfluous/Redundant  Precision  Recall
Jorah        14        11          1              0                     2                      82%        58%
Jon          17        7           4              2                     4                      53%        45%
Arya         14        3           2              1                     8                      29%        20%
Aegon        27        2           4              1                     20                     15%        20%

Results. Overall, 20 key user requirements were identified in ActApp by Katelyn with the experts. Katelyn mapped the requirements Jorah, Jon, Arya, and Aegon elicited against these 20 requirements. Each requirement in the elicited set was categorized as a full match, partial match, or no match. It should be noted that a 'full match' did not imply an exact syntactic duplication of the original requirement but rather that its essence was captured effectively. Likewise, a 'partial match' indicated that only part of the original requirement's essence was captured. In our calculation of precision and recall, each full match is weighted as 1 true positive (TP) and each partial match as 0.5 TP. Katelyn further classified all 'no match' requirements as superfluous or potentially relevant (for further expert vetting if required). Table 1 shows the overall results for the four participants.
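The weighted scoring described above can be sketched in a few lines: each full match counts as 1 true positive and each partial match as 0.5; precision divides by the number of elicited requirements and recall by the 20 reference requirements. The function name `score` is illustrative.

```python
# Sketch of the Table 1 scoring: full match = 1 TP, partial match = 0.5 TP.
# Precision divides by the participant's elicited count; recall divides by
# the 20 expert-identified reference requirements.

GOLD_REQUIREMENTS = 20  # key user requirements identified with the experts

def score(elicited: int, full: int, partial: int) -> tuple[int, int]:
    """Return (precision, recall) as rounded whole percentages."""
    tp = full + 0.5 * partial
    precision = round(tp * 100 / elicited)
    recall = round(tp * 100 / GOLD_REQUIREMENTS)
    return precision, recall

# Jorah's row from Table 1: 14 elicited, 11 full matches, 1 partial match
p, r = score(14, 11, 1)
print(f"precision={p}% recall={r}%")  # prints "precision=82% recall=58%"
```

Running this over all four rows reproduces the precision and recall columns of Table 1.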
The results clearly show the significance of experience when using ChatGPT in this preliminary evaluation. While none of the participants could elicit most of the requirements, it is important to note that, with only a project brief and one interaction session, the experienced participants captured almost half of the relevant requirements, emphasising the feasibility of LLMs for RE.

8 Lessons Learned

Our preliminary evaluation provided the insights and highlighted the challenges noted below.

Role of Prompts and Contextual Information. LLMs depend heavily on comprehensive prompts and the availability of contextual information to generate meaningful output. Slightly different prompts can produce very different outputs. A thorough empirical evaluation of prompt engineering is necessary for employing LLM agents.

Experience Matters. Experienced requirements engineers were more successful in formulating prompts, interpreting responses, and obtaining quality output, despite the project background being uniform across participants. This highlights the importance of experience and training in RE teams.

LLM Capabilities. Our preliminary evaluation underlined the capability of LLMs to discover 'unknown' requirements, addressing a significant challenge in RE. We found four 'potentially relevant' requirements for future stages of ActApp, which were not part of the original set, from three participants. We surmise that LLMs may assist in interpreting and generating text for varied stakeholders, which can be key to reducing the communication barriers inherent in diverse project teams. However, the many 'false positive' candidate requirements will require careful management to ensure engineers are not overloaded with irrelevant or partially inaccurate requirements.

LLM Problems.
LLMs have some inherent issues, such as systematic inaccuracies or stereotypes in the output (influenced by the training data [7]) and limited context length; e.g., ChatGPT has a limit of 32K tokens, which, although often sufficient, can still make it difficult to process large documents or maintain task context in a session. All participants reported issues with maintaining the context of the ActApp system during the evaluation session and noticed inaccuracies.

Domain Understanding. RE requires an excellent understanding of the underlying domain for eliciting and specifying correct and complete requirements. An LLM's training on specific domain knowledge may be limited, requiring domain knowledge to be incorporated via experts, other sources, or fine-tuned LLMs. Access to large amounts of training data to fine-tune a custom LLM may itself be a challenge.

Automation Bias. Humans often display unfounded trust in AI [4], e.g., in the LLM-generated requirements in our case. For example, upon completing the session, Arya and Aegon displayed a remarkable degree of confidence in their elicited requirements.

Security, Privacy and Ethical Issues. Requirements are by their very nature mission-critical for software engineering and incorporate much sensitive information. Disclosure via public LLMs may result in IP loss, security breaches in deployed systems, organisational and personal privacy loss, and other concerns. Who 'owns' requirements generated by LLMs from training data from unknown sources?
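The context-length limit discussed above can be guarded against before submitting a requirements document to a model. The sketch below uses the common 4-characters-per-token rule of thumb; it is a heuristic only, and a real system should count tokens with the model's own tokenizer. The names `estimate_tokens` and `split_for_context` are illustrative.

```python
# Sketch: guard a requirements document against the context-length limit.
# The 4-chars-per-token ratio is a rough heuristic, not an exact tokenizer.

CONTEXT_LIMIT_TOKENS = 32_000  # e.g., the ChatGPT limit cited above

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token."""
    return len(text) // 4

def split_for_context(text: str, limit: int = CONTEXT_LIMIT_TOKENS) -> list[str]:
    """Split a document into chunks that each fit within the token limit."""
    max_chars = limit * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

document = "FR1: The system shall notify the user. " * 5_000
chunks = split_for_context(document)
print(len(chunks))  # number of sessions/prompts needed for the full document
```

Chunking trades one problem for another: as noted above, the model loses cross-chunk context, so summaries of earlier chunks typically need to be carried forward into each new prompt.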