diff --git "a/abs_29K_G/test_abstract_long_2405.00864v1.json" "b/abs_29K_G/test_abstract_long_2405.00864v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00864v1.json" @@ -0,0 +1,64 @@ +{ + "url": "http://arxiv.org/abs/2405.00864v1", + "title": "Math Multiple Choice Question Generation via Human-Large Language Model Collaboration", + "abstract": "Multiple choice questions (MCQs) are a popular method for evaluating\nstudents' knowledge due to their efficiency in administration and grading.\nCrafting high-quality math MCQs is a labor-intensive process that requires\neducators to formulate precise stems and plausible distractors. Recent advances\nin large language models (LLMs) have sparked interest in automating MCQ\ncreation, but challenges persist in ensuring mathematical accuracy and\naddressing student errors. This paper introduces a prototype tool designed to\nfacilitate collaboration between LLMs and educators for streamlining the math\nMCQ generation process. We conduct a pilot study involving math educators to\ninvestigate how the tool can help them simplify the process of crafting\nhigh-quality math MCQs. We found that while LLMs can generate well-formulated\nquestion stems, their ability to generate distractors that capture common\nstudent errors and misconceptions is limited. Nevertheless, a human-AI\ncollaboration has the potential to enhance the efficiency and effectiveness of\nMCQ generation.", + "authors": "Jaewook Lee, Digory Smith, Simon Woodhead, Andrew Lan", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Multiple choice questions (MCQs) are a popular method for evaluating\nstudents' knowledge due to their efficiency in administration and grading.\nCrafting high-quality math MCQs is a labor-intensive process that requires\neducators to formulate precise stems and plausible distractors. 
Recent advances\nin large language models (LLMs) have sparked interest in automating MCQ\ncreation, but challenges persist in ensuring mathematical accuracy and\naddressing student errors. This paper introduces a prototype tool designed to\nfacilitate collaboration between LLMs and educators for streamlining the math\nMCQ generation process. We conduct a pilot study involving math educators to\ninvestigate how the tool can help them simplify the process of crafting\nhigh-quality math MCQs. We found that while LLMs can generate well-formulated\nquestion stems, their ability to generate distractors that capture common\nstudent errors and misconceptions is limited. Nevertheless, a human-AI\ncollaboration has the potential to enhance the efficiency and effectiveness of\nMCQ generation.", + "main_content": "INTRODUCTION Multiple choice questions (MCQs) are widely used to evaluate students\u2019 knowledge since they enable quick and accurate administration and grading [2, 6, 9]. MCQs are constructed in a specific format. The stem refers to the statement on the problem setup and context, followed by a question that needs to be answered. Among the options, the correct one can be referred to as the key, while incorrect ones can be referred to as distractors. As the name implies, distractors in MCQs are typically formulated to align with common errors among students. These distractors are chosen because students either i) lack the necessary comprehension of the knowledge components (KCs) or concepts/skills tested in the question to accurately identify the key as the correct answer or ii) exhibit misconceptions that make them think a specific distractor is correct. While MCQs offer many advantages in student knowledge assessment, manually crafting high-quality MCQs, especially in math-related domains, is a demanding and labor-intensive process [5]. 
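The MCQ anatomy described above (stem, key, distractors, and per-distractor feedback) can be captured in a small record type. This is an illustrative sketch only; the field names and the sample question are ours, not the paper's schema:

```python
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str                # problem setup plus the question to answer
    key: str                 # the single correct option
    distractors: list[str]   # incorrect options tied to common student errors
    feedback: list[str]      # per-distractor hints that lead toward the key

# Hypothetical example: each distractor encodes one anticipated error.
q = MCQ(
    stem="What is 7 + 5?",
    key="12",
    distractors=["2", "35", "11"],  # subtracted, multiplied, off by one
    feedback=[
        "You subtracted instead of adding.",
        "You multiplied instead of adding.",
        "Check your counting; 7 + 5 is one more than 11.",
    ],
)
```

Keeping distractors and feedback as parallel lists mirrors the one-misconception-per-distractor pairing the paper relies on.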
There are three main tasks in this process: First, educators need to formulate a question stem that effectively encapsulates the KCs they aim to test. Second, educators need to anticipate common errors and/or misconceptions among students and create corresponding distractors. Third, educators need to provide feedback to students who select distractors that can help them identify their errors and lead them to the correct answer, to expedite their learning process. The emergence of large language models (LLMs) has raised hopes for making MCQ creation more scalable by automating the process. Specifically, few-shot, in-context learning is promising for generating math MCQs since LLMs can follow instructions based on contextual information conveyed by a few examples. While automated question generation for open-ended questions has shown notable success, generating plausible distractors within MCQs presents a different challenge: distractors should be based on anticipated student errors/misconceptions [12], whereas LLMs have not necessarily learned this information during training. Moreover, math MCQs are challenging since they require mathematical reasoning, which means that distractors cannot be generated using a knowledge graph [13] or a paraphrasing tool [8]. Consequently, math educators need to play an important role in guiding LLMs in math MCQ generation: LLMs are responsible for scaling up the process while humans use their expertise efficiently. Therefore, we raise the following two core research questions (RQs) that help identify opportunities to generate math MCQs through collaboration between LLMs and human educators: 1) RQ1: Can LLMs generate valid MCQs, especially distractors and feedback corresponding to common student errors/misconceptions? 2) RQ2: What are the key design elements in a system where human math educators and LLMs collaborate on MCQ generation?
1.1 Contributions In this paper, we introduce a prototype tool called the Human Enhanced Distractor Generation Engine (HEDGE) for math MCQ creation, which leverages the expertise of educators by asking them to edit LLM-generated MCQs in a two-step process. In the first step, we prompt the LLM to generate the stem, key, and explanation of an MCQ, and ask educators to evaluate and edit the output to make sure it is mathematically correct and relevant to the intended KC. In the second step, we prompt the LLM to generate a set of possible errors/misconceptions and the corresponding distractors and feedback, and ask educators to evaluate and edit the output to make sure they correspond to valid distractors for the generated question stem. In a pilot study, we recruit four former/current math teachers to evaluate our tool on generating math MCQs related to five pre-defined KCs. Results show that educators considered 70% of the stem, key, and explanation sets generated by GPT-4 as valid. However, they only considered 37% of the generated misconception, distractor, and feedback sets valid, which reveals significant limitations of LLMs in capturing anticipated common errors/misconceptions among real students. This observation underscores the necessity of involving humans in the process of generating math MCQs and leveraging real math educators\u2019 expertise on common errors among students. 2. HUMAN ENHANCED DISTRACTOR GENERATION ENGINE 2.1 Overview Figure 1: HEDGE Overview: the human-AI collaboration setting for generating math MCQs for a given KC. Strikethrough text represents edits made to LLM-generated content, while boldface text indicates misconceptions that correspond to distractors. HEDGE is our prototype for math MCQ generation that generates a math MCQ for a given mathematical KC, as illustrated in Figure 1. These KCs are categorized into three levels of granularity: coarse, medium, and fine-grained.
For instance, KCs can cover either a broad topic such as \u201cbasic arithmetic\u201d or a specific topic like \u201cIdentify that a problem needs to be solved using addition.\u201d HEDGE is designed to utilize OpenAI LLMs. The provided example is generated using ChatGPT. We take a two-step approach for MCQ generation: 1) generate the question stem, answer key, and an explanation, and 2) generate a list of possible misconceptions, corresponding distractors, and feedback messages. We implement both steps by prompting LLMs with an in-context example of these tasks. The in-context example shows the KC converting ratios to fractions, employing a real-life scenario in which Kate and Isaac share yogurt in a 2 : 5 ratio. The objective is to calculate the fraction representing Kate\u2019s share, 2/7. In this context, we list three common misconceptions. First, a student mistakenly thinks that the ratio 2 : 5 could be directly converted into the fraction 2/5. Second, a student mistakenly calculates the difference between Kate\u2019s and Isaac\u2019s share. Third, a student mistakenly thinks the goal is to calculate Isaac\u2019s share. Table 1: The in-context example used for prompting LLMs for math MCQ generation. KC: Coarse Ratio, Medium Writing ratios, Fine Convert ratios to fractions. Stem: Kate and Isaac share yogurt in a 2 : 5 ratio. Kate has \u25a1 of the total. Identify the fraction. Key: 2/7. Explanation: The total ratio is 7 parts. Kate\u2019s share of 2/7 is derived by dividing her 2 parts by the total. Misconceptions: 1. Misinterpreting the ratio as a fraction. 2. Confusing the difference in ratio parts as relevant. 3. Calculating Isaac\u2019s share instead of Kate\u2019s. Distractors: 1. 2/5 2. 3/7 3. 5/7. Feedback: 1. The ratio 2 : 5 means 7 parts total, not 2/5. 2. The ratio splits the total, not the difference between parts. 3. Ensure you are calculating Kate\u2019s share, not Isaac\u2019s.
These misconceptions, along with the corresponding feedback on how to resolve them, are included as part of the in-context example. Now, we explore a scenario where an educator creates MCQs using our tool based on the concept of basic arithmetic, specifically focusing on mental addition. In the first step, given the target KC, along with an in-context example consisting of the concept, stem, key, and explanation, the LLM generates the following stem: \u201cSally has 5 apples. She gives 2 apples to her friend. How many apples does Sally have left?\u201d However, this stem mistakenly embodies the KC of subtraction rather than addition. Therefore, the educator edits the generated results to align it with the intended KC of addition. In the second step, using the adjusted stem, key, and explanation, as well as incorporating in-context examples with distractors, misconceptions, and feedback, the LLM generates distractors along with corresponding misconceptions and feedback. Figure 1 illustrates option B, which contains a misconception related to subtraction instead of addition, accompanied by feedback designed to correct this error. Additionally, the educator has the option to edit option D to address any misconceptions associated with multiplication. 2.2 User Interface We develop HEDGE interface, as illustrated in Figure 2. This interface is built using React and employs Firestore as its database for data storage. The interface comprises three components: a Sidebar, a Preview, and a Generation. The educator generates MCQs using the Generation component as discussed in Section 2.1. Here, after prompting LLMs using the edited stem, key, and explanation, we add a rating step to assess the overall quality of misconceptions, distractors, and feedback that the educator rates based on a 5-point Likert scale. Once the educator completes the distractor editing process, \fthe Preview component displays a fully structured MCQ, with the answer options randomized. 
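The two-step flow of Section 2.1 can be sketched as two prompt-construction helpers. The wording and function names below are illustrative assumptions, not HEDGE's actual prompts; an LLM call (e.g., GPT-4 with temperature = 0.7 and top-p = 0.9, as in the pilot study) would consume these strings:

```python
# Illustrative sketch of HEDGE's two-step prompting flow (assumed wording).

IN_CONTEXT_EXAMPLE = """\
KC: Convert ratios to fractions
Stem: Kate and Isaac share yogurt in a 2:5 ratio. Kate has [blank] of the total. Identify the fraction.
Key: 2/7
Explanation: The total ratio is 7 parts; Kate's 2 parts out of 7 give 2/7."""

def step1_prompt(target_kc: str) -> str:
    """Step 1: ask for a stem, key, and explanation for the target KC.
    The educator then checks mathematical correctness and KC relevance."""
    return (
        f"{IN_CONTEXT_EXAMPLE}\n\n"
        f"KC: {target_kc}\n"
        "Write a stem, key, and explanation for this KC in the same format."
    )

def step2_prompt(stem: str, key: str, explanation: str) -> str:
    """Step 2: given the educator-edited stem/key/explanation, ask for
    misconceptions, matching distractors, and feedback messages."""
    return (
        f"Stem: {stem}\nKey: {key}\nExplanation: {explanation}\n"
        "List three common student misconceptions, one distractor per "
        "misconception, and feedback that resolves each misconception."
    )
```

The educator edits the step-1 output before it is fed into `step2_prompt`, mirroring the subtraction-to-addition correction in the walkthrough above.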
We store any metadata that isn\u2019t visually represented within the image. Following the completion of distractor editing, the Sidebar component is refreshed. The educator can click on the stem to view the generated image along with the answer sheet or create a new MCQ. Figure 2: HEDGE Interface: what human participants use to generate an MCQ by editing LLM output. 3. PILOT STUDY 3.1 Experimental Setup We perform a pilot study to assess the usability of HEDGE in generating MCQs. In this study, we select pre-defined KCs and instruct participants to utilize these KCs to simulate a scenario where an educator is crafting MCQs. We select the KCs and the in-context example from a large education company\u2019s content repository, categorized under the label \u201cNumber,\u201d which encompasses various subtopics, such as \u201cBasic Arithmetic,\u201d \u201cFractions,\u201d and \u201cRounding and Estimating.\u201d We choose five KCs, as shown in Table 2, from the KCs that incorporate mathematical expressions, such as fractions, powers, and surds. We utilize GPT-4 as the LLM for the study and set the parameters to temperature = 0.7 and top-p = 0.9 to balance creativity and consistency of the generated MCQs. After completing the study, participants are asked to complete an exit survey. The survey includes open-ended questions and ratings on their satisfaction with the quality of LLM-generated responses and the usability of the tool using a 5-point Likert scale. 3.2 Participants We recruit four participants for the study, comprising one male and three females, all recruited through Upwork [14]. Among them, two currently work as middle/high school math teachers, while the other two currently work as tutors, with prior experience as math teachers. All participants are selected based on their qualifications and expertise in mathematics education. Each participant was tasked with creating five MCQs using HEDGE, employing the five KCs specified in Table 2. 4.
RESULTS 4.1 Stem, Key, and Explanation Table 3 shows the stems produced by participants utilizing HEDGE. In the \u201cFine-grained KC\u201d column, the original stem is indicated in italics, while the stems modified by each participant are denoted as a, b, c, and d, respectively. In what follows, we label each MCQ in the format of 1a, where 1 denotes the index of the fine-grained KC and a denotes the index of the participant. Out of 20 sets of stem, key, and explanation generated by the LLM, participants deemed 14 sets valid. Among these valid sets, participants added more details to the explanations in two cases, while the remaining sets were adopted without any edits. For example, italicized details were added in the explanation for 2c: \u201cThe fraction 3/9 simplifies to 1/3 because both the numerator and the denominator can be divided by a common factor of 3. 3 divided by 3 is 1, and 9 divided by 3 is 3. Hence, 1/3 is an equivalent fraction to 3/9.\u201d The other case was to make the question setting more realistic: in 4d, the educator edited the initial price of the car from $5000 to $35000. This adjustment reveals the limitations of LLMs in accurately representing real-life problem scenarios. We now analyze the cases that participants deemed invalid. Grammar error. In 2a, the educator corrected the grammar error \u201cshe have\u201d to \u201cshe has.\u201d No other grammar errors occurred in the study besides this one, underscoring the capability of LLMs to consistently produce grammatically correct sentences. Not mastering KC. Regarding the 5th KC, GPT-4 shows a lack of knowledge on the distinction between simplified and non-simplified surds. The following are invalid stems generated by GPT-4: 1) 5a. If \u221a20 is a simplified surd, what is its non-simplified form? 2) 5c. Express the simplified surd \u221a45 in a non-simplified form. 3) 5d. A simplified surd is \u221a8. How can it be represented in non-simplified form?
This invalid stem misled a participant into editing the stem to convey the KC of simplifying surds, which is the opposite of the intended KC (5c). Calculation error. In 4c, GPT-4 generated a key of $4750, erroneously calculating the car price after one year instead of two years. However, in the other three cases within the same KC, GPT-4 calculated correctly, showing its math problem-solving skills. Table 2: Pre-defined math KCs used in the pilot study. Coarse-grained / Medium-grained / Fine-grained: 1) Factors, Multiples and Primes / Factors and Highest Common Factor / Identify factors of a number; 2) Fractions / Equivalent Fractions / Identify equivalent fractions when presented as numbers; 3) Indices, Powers and Roots / Squares, Cubes, etc. / Understand the notation for powers; 4) Percentages / Repeated Percentages and Compound Interest / Understand the elements of the formula for compound percentage decrease; 5) Surds / Simplifying Surds / Write a simplified surd in a non-simplified form. Table 3: Question stems generated using HEDGE and the corresponding KCs. 1. Identify factors of a number: Which of these numbers is not a factor of 9? a. What are all the factors of the number 12? b. What are the factors of 18? c. Which of the following is a factor of 18? d. Which of the following numbers is a factor of 36? 2. Identify equivalent fractions when presented as numbers: Which fraction is equivalent to 9/13? a. Sue has a fraction of 4/8. What fraction is equivalent to the fraction she has? b. The fraction 6/18 is equivalent to which of the following fractions? c. Which of the following fractions is equivalent to 3/9? d. Which of the following fractions is equivalent to 2/4? 3. Understand the notation for powers: To calculate 53^2 you need to do ... a. The number 3^2 is equal to \u25a1. What number completes the sentence? b. The number 3^4 represents \u25a1. What number completes the sentence? c. If a^3 is read as \u201ca cubed\u201d, how is a^4 read? d. What is the value of 2^3? 4. Understand the elements of the formula for compound percentage decrease: A car depreciates in value by 10% each year. If a car was bought for $4500, what calculation would find the value of the car after 3 years? a. A car that costs $5000 loses 12% of its value each year. After one year, the car is worth \u25a1. What completes the sentence? b. A new car loses 20% of its value each year. If the car was originally priced at $15,000, what will be its value after 2 years? c. The price of a car is reduced by 5% each year. If the car was originally priced at $5000, what will be the price of the car after two years? d. A car depreciates in value by 10% each year. If the car is initially worth $35000, what is the formula to calculate the car\u2019s value after n years? 5. Write a simplified surd in a non-simplified form: 5\u221a13 = \u221an. What is the value of n? a. If 2\u221a5 is a simplified surd, what is its non-simplified form? b. The square root of 18 is written in simplified surd form as 3\u221a2. How can it be rewritten in a non-simplified form? c. Simplify the surd \u221a45. d. A non-simplified surd is \u221a8. How can it be represented in simplified form? 4.2 Distractor, Misconception, and Feedback Table 4 shows a breakdown of 60 distractors (comprising three distractors for each of 20 stems), categorized based on the validity of misconceptions, distractors, and feedback. Adopt All Responses (Case 1, 37%). Among 60 distractors, educators identified 22 responses as valid, including two cases that are actually invalid. Edit Feedback Only (Case 2, 8%). These cases have a valid misconception and distractor, and educators made adjustments to the feedback to enhance its clarity. For example, one of the distractors for 2d is 2/3. The feedback generated by GPT-4 is as follows: \u201cYou seem to have compared only the numerators of the fractions. However, when checking for equivalent fractions, both the numerator and denominator need to be considered.
The fraction 2/3 is not equivalent to 2/4.\u201d The educator removed the redundant final sentence and introduced \u201cRemember, equivalent fractions require both the numerator and denominator to be proportional,\u201d which helps students better understand the importance of considering both the numerator and denominator when comparing fractions for equivalence. This adjustment emphasizes that the equivalence between fractions relies on maintaining proportionality between the numerator and denominator. While GPT-4 provides valid explanations, it sometimes fails to include critical insights that are necessary for students\u2019 improvement. Table 4: Breakdown of the 60 generated distractors and their quality ratings (\u2713: valid, \u2717: invalid), listed as Case: Misconception, Distractor, Feedback / Ratio / Rating. Case 1: \u2713 \u2713 \u2713 / 37% / 4.8; Case 2: \u2713 \u2713 \u2717 / 8% / 2.8; Case 3: \u2713 \u2717 \u2713 / 0%; Case 4: \u2713 \u2717 \u2717 / 18% / 2.1; Case 5: \u2717 \u2713 \u2713 / 12% / 3.4; Case 6: \u2717 \u2713 \u2717 / 5% / 3.0; Case 7: \u2717 \u2717 \u2713 / 0%; Case 8: \u2717 \u2717 \u2717 / 20% / 2.3. Adopt Misconception Only (Case 4, 18%). These cases are often due to a mismatch between the misconception and the distractor. In 4c, the misconception \u201cThe student mistakenly believed that the car depreciates by a constant amount each year, not a percentage\u201d did not match the distractor 35000 \u2212 0.10n. Additionally, there are cases where, even if the distractor is valid, it may not effectively encapsulate student misconceptions. In 1a, the educator updated the distractor from 1, 2, 3, 4, 6, 12, 24 to 12, 24, 36, 48, 60, making it a more attractive distractor for those who confuse factors with multiples. Edit Misconception Only (Case 5, 12%). As in Case 4, invalid cases are often due to a mismatch between the misconception and the distractor.
In 5d, the misconception \u201cThe student may believe that all square roots are in their simplest form\u201d did not match the distractor \u201c\u221a2.\u201d The educator updated the misconception to \u201cThe student may have confused square roots with cube roots,\u201d providing a more accurate misconception for the distractor. Additionally, there are cases where, even if the misconception is valid, it may not be the likely reason why a student selects the distractor. In 1c, the educator updated the misconception of distractor \u201c4\u201d from \u201cThe student might think that only the numbers less than 18 can be the factors of 18\u201d to \u201cThe student might think that any even number can be a factor of an even number,\u201d making it more accurate for addressing the student\u2019s misconception. Adopt Distractor Only (Case 6, 5%). These are cases in which educators adopted the distractor but edited incorrect misconceptions and feedback. For example, in the case of 5a, \u221a10 is a valid distractor, as the student could simply multiply 2 and 5. However, the misconception and feedback generated by GPT-4 did not align with the distractor; therefore, the educator had to edit them accordingly. In Cases 4, 5, and 6, LLMs revealed inconsistent mathematical reasoning when analyzing misconceptions, distractors, and feedback for a given stem. The inconsistency underscores the necessity for human educators to manually align distractors with their underlying misconceptions and corresponding feedback in many cases. Reject All Responses (Case 8, 20%). These are cases in which misconceptions were of poor quality or wrong, resulting in inadequate distractors and feedback. Two of the distractors generated for 2b by GPT-4 show both poor quality and wrong misconceptions.
While the misconception in the first distractor is valid, stating that \u201cThe student may not divide both the numerator and denominator by the same number,\u201d the distractor itself, represented by 3/9, and its associated feedback lack coherence and fail to align with this misconception. Meanwhile, the misconception in the second distractor (8/24) lacks coherence, as expressed in the following manner: \u201cThe student may confuse the concept of equivalent fractions with simplifying fractions.\u201d These results reveal that LLMs often fail to anticipate valid misconceptions and errors that are common among students, making human educators\u2019 involvement crucial in the creation of math MCQs. 4.3 Takeaways from the Survey After the study, participants were asked to fill out a survey about their experience using HEDGE. We categorize the results into two categories: Quality of LLM-generated responses and Tool Usability. 4.3.1 Quality of LLM-generated responses. Stem, Key, and Explanation. On a 5-point Likert scale, the participants gave an average rating of 4. This rating aligns with the open-ended responses, which deemed most of the generated stems, keys, and explanations valid. However, two participants noted the tool\u2019s limitation in terms of the level of question difficulty. One participant points out that the questions appear to be at a low Bloom\u2019s Taxonomy level. For example, \u201cIf a^3 is read as \u2018a cubed\u2019, how is a^4 read?\u201d While it\u2019s important for students to grasp the verbal representation of these terms, educators often place greater emphasis on whether students understand the equivalent expressions and concepts associated with them. The other participant points out that the Depth of Knowledge (DOK) levels predominantly focused on Level 1 (Recall) and Level 2 (Skill or Concept). We can prompt LLMs to generate questions at various Bloom\u2019s or DOK levels to enhance the question difficulty and promote deeper understanding [3].
Moreover, we can invite educators to craft in-context examples with higher Bloom\u2019s or DOK levels. Distractor, Misconception, and Feedback. On a 5-point Likert scale, the participants gave an average rating of 2.5. This rating aligns with the open-ended responses regarding most of the generated misconceptions, distractors, and feedback that do not reflect what students typically make in the classroom based on the participant\u2019s teaching experience. The responses again point to the observation that LLMs do not understand errors that student are likely to make. One participant suggest providing a \u201cbank\u201d of misconceptions that educators could refer to. We can prompt LLMs to generate multiple misconceptions and engage educators in ranking these misconceptions based on their alignment with actual student errors. 4.3.2 Tool Usability User Interface. On a 5-point Likert scale, the participants gave an average rating of 4 for comfort level with generating MCQs using HEDGE while giving an average rating of 3.25 for the effectiveness of generating high-quality MCQs. Participants are enthusiastic about the tool\u2019s potential for simplifying the process of generating MCQs but are nevertheless skeptical about LLMs\u2019 capability to generate valid distractors. We will need to enhance the tool by making improvements in the quality of generated distractors to align more closely with educators\u2019 expectations. 5.", + "additional_graph_info": { + "graph": [], + "node_feat": { + "Jaewook Lee": [ + { + "url": "http://arxiv.org/abs/2405.00864v1", + "title": "Math Multiple Choice Question Generation via Human-Large Language Model Collaboration", + "abstract": "Multiple choice questions (MCQs) are a popular method for evaluating\nstudents' knowledge due to their efficiency in administration and grading.\nCrafting high-quality math MCQs is a labor-intensive process that requires\neducators to formulate precise stems and plausible distractors. 
Recent advances\nin large language models (LLMs) have sparked interest in automating MCQ\ncreation, but challenges persist in ensuring mathematical accuracy and\naddressing student errors. This paper introduces a prototype tool designed to\nfacilitate collaboration between LLMs and educators for streamlining the math\nMCQ generation process. We conduct a pilot study involving math educators to\ninvestigate how the tool can help them simplify the process of crafting\nhigh-quality math MCQs. We found that while LLMs can generate well-formulated\nquestion stems, their ability to generate distractors that capture common\nstudent errors and misconceptions is limited. Nevertheless, a human-AI\ncollaboration has the potential to enhance the efficiency and effectiveness of\nMCQ generation.", + "authors": "Jaewook Lee, Digory Smith, Simon Woodhead, Andrew Lan", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "INTRODUCTION Multiple choice questions (MCQs) are widely used to evaluate students\u2019 knowledge since they enable quick and accurate administration and grading [2, 6, 9]. MCQs are constructed in a specific format. The stem refers to the statement on the problem setup and context, followed by a question that needs to be answered. Among the options, the correct one can be referred to as the key, while incorrect ones can be referred to as distractors. As the name implies, distractors in MCQs are typically formulated to align with common errors among students. These distractors are chosen because students either i) lack the necessary comprehension of the knowledge components (KCs) or concepts/skills tested in the question to accurately identify the key as the correct answer or ii) exhibit misconceptions that make them think a specific distractor is correct. 
While MCQs offer many advantages in student knowledge assessment, manually crafting high-quality MCQs, especially in math-related domains, is a demanding and labor-intensive process [5]. There are three main tasks in this process: First, educators need to formulate a question stem that effectively encapsulates the KCs they aim to test. Second, educators need to anticipate common errors and/or misconceptions among students and create corresponding distractors. Third, educators need to provide feedback to students who select distractors that can help them identify their errors and lead them to the correct answer, to expedite their learning process. The emergence of large language models (LLMs) has raised hopes for making MCQ creation more scalable by automating the process. Specifically, few-shot, in-context learning is promising for generating math MCQs since LLMs can follow instructions based on contextual information conveyed by a few examples. While automated question generation for open-ended questions has shown notable success, generating plausible distractors within MCQs presents a different challenge: distractors should be based on anticipated student errors/misconceptions [12], whereas LLMs have not necessarily learned this information during training. Moreover, math MCQs are challenging since they require mathematical reasoning, which means that distractors cannot be generated using a knowledge graph [13] or paraphrasing tool [8]. Consequently, math educators need to take an important role in guiding LLMs in math MCQ generation: LLMs are responsible for scaling up the process while humans use their expertise efficiently. Therefore, we raise following are two core research questions (RQs) that help identify opportunities to generate math MCQs through collaboration between LLMs and human educators: 1) RQ1: Can LLMs generate valid MCQs, especially distractors and feedback corresponding to common student errors/misconceptions? 
2) RQ2: What are the key design elements in a system where human math educators and LLMs collaborate on MCQ generation? 1.1 Contributions In this paper, we introduce a prototype tool called the Human Enhanced Distractor Generation Engine(HEDGE) for math MCQ creation, which leverages the expertise of educators by asking them to edit LLM-generated MCQs in a two-step arXiv:2405.00864v1 [cs.CL] 1 May 2024 \fprocess. In the first step, we prompt the LLM to generate stem, key, and explanation in an MCQ, and ask educators to evaluate and edit the output to make sure it is mathematically correct and relevant to the intended KC. In the second step, we prompt the LLM to generate a set of possible errors/misconceptions and the corresponding distractors and feedback, and ask educators to evaluate and edit the output to make sure they correspond to valid distractors to the generated question stem. In a pilot study, we recruit four former/current math teachers to evaluate our tool on generating math MCQs related to five pre-defined KCs. Results show that educators considered 70% of the generated stem, key, and explanation generated by GPT-4 as valid. However, they only considered 37% of the generated misconception, distractor, and feedback valid, which reveals significant limitations of LLMs in capturing anticipated common errors/misconceptions among real students. This observation underscores the necessity of involving humans in the process of generating math MCQs and leveraging real math educators\u2019 expertise on common errors among students. 2. HUMAN ENHANCED DISTRACTOR GENERATION ENGINE 2.1 Overview Figure 1: HEDGE Overview: the human-AI collaboration setting for generating math MCQs for a given KC. Strikethrough text represents edits made to LLM-generated content while boldface text indicates misconceptions that correspond to distractors. HEDGE is our prototype for math MCQ generation that generates math MCQ for a given mathematical KC, as illustrated in Figure 1. 
These KCs are categorized into three levels of granularity: coarse, medium, and fine-grained. For instance, KCs can cover either a broad topic such as \u201cbasic arithmetic\u201d or a specific topic like \u201cIdentify that a problem needs to be solved using addition.\u201d HEDGE is designed to utilize LLMs within OpenAI. The provided example is generated using ChatGPT. We take a two-step approach for MCQ generation: 1) generate the question stem, answer key, and an explanation, and 2) generate a list of possible misconceptions, corresponding distractors, and feedback messages. We implement both steps by prompting LLMs with an in-context example of these tasks. The in-context example shows the KC converting ratios to fractions, employing a real-life scenario in which Kate and Isaac share yogurt in a 2 : 5 ratio. The objective is to calculate the fraction representing Kate\u2019s share, 2/7. The full example is shown in Table 1. Table 1: The in-context example used for prompting LLMs for math MCQ generation. KC: Coarse = Ratio; Medium = Writing ratios; Fine = Convert ratios to fractions. Stem: Kate and Isaac share yogurt in a 2 : 5 ratio. Kate has \u25a1 of the total. Identify the fraction. Key: 2/7. Explanation: The total ratio is 7 parts. Kate\u2019s share of 2/7 is derived by dividing her 2 parts by the total. Misconceptions: 1. Misinterpreting the ratio as a fraction. 2. Confusing the difference in ratio parts as relevant. 3. Calculating Isaac\u2019s share instead of Kate\u2019s. Distractors: 1. 2/5 2. 3/7 3. 5/7 Feedback: 1. The ratio 2 : 5 means 7 parts total, not 2/5. 2. The ratio splits the total, not the difference between parts. 3. Ensure you are calculating Kate\u2019s share, not Isaac\u2019s. In this context, we list three common misconceptions. First, a student mistakenly thinks that the ratio 2 : 5 could be directly converted into the fraction 2/5. Second, a student mistakenly calculates the difference between Kate\u2019s and Isaac\u2019s shares. 
Third, a student mistakenly thinks the goal is to calculate Isaac\u2019s share. These misconceptions, along with the corresponding feedback on how to resolve them, are included as part of the in-context example. Now, we explore a scenario where an educator creates MCQs using our tool based on the concept of basic arithmetic, specifically focusing on mental addition. In the first step, given the target KC, along with an in-context example consisting of the concept, stem, key, and explanation, the LLM generates the following stem: \u201cSally has 5 apples. She gives 2 apples to her friend. How many apples does Sally have left?\u201d However, this stem mistakenly embodies the KC of subtraction rather than addition. Therefore, the educator edits the generated results to align them with the intended KC of addition. In the second step, using the adjusted stem, key, and explanation, as well as incorporating in-context examples with distractors, misconceptions, and feedback, the LLM generates distractors along with corresponding misconceptions and feedback. Figure 1 illustrates option B, which contains a misconception related to subtraction instead of addition, accompanied by feedback designed to correct this error. Additionally, the educator has the option to edit option D to address any misconceptions associated with multiplication. 2.2 User Interface We develop the HEDGE interface, as illustrated in Figure 2. This interface is built using React and employs Firestore as its database for data storage. The interface comprises three components: a Sidebar, a Preview, and a Generation component. The educator generates MCQs using the Generation component as discussed in Section 2.1. Here, after prompting LLMs using the edited stem, key, and explanation, we add a rating step to assess the overall quality of misconceptions, distractors, and feedback, which the educator rates on a 5-point Likert scale. 
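The two-step, human-in-the-loop flow described above can be sketched as a small pipeline. This is a minimal illustration, not the authors' implementation: the `MCQ` data model, the prompt wording, and the `call_llm` interface are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MCQ:
    kc: str                      # target knowledge component
    stem: str = ""
    key: str = ""
    explanation: str = ""
    # parallel lists: one misconception/distractor/feedback triple per option
    misconceptions: list = field(default_factory=list)
    distractors: list = field(default_factory=list)
    feedback: list = field(default_factory=list)

def step1(call_llm, kc, example):
    """Step 1: generate stem, key, and explanation from the KC and an in-context example."""
    out = call_llm(f"Example: {example}\nKC: {kc}\nWrite a stem, key, and explanation.")
    return MCQ(kc=kc, stem=out["stem"], key=out["key"], explanation=out["explanation"])

def step2(call_llm, mcq):
    """Step 2: generate misconception/distractor/feedback triples for the (edited) stem."""
    out = call_llm(f"Stem: {mcq.stem}\nKey: {mcq.key}\nList misconceptions, distractors, feedback.")
    mcq.misconceptions = out["misconceptions"]
    mcq.distractors = out["distractors"]
    mcq.feedback = out["feedback"]
    return mcq

def generate_mcq(call_llm, kc, example, edit_step1, edit_step2):
    """Two-step generation with educator edits (the edit_* callbacks) after each step."""
    mcq = edit_step1(step1(call_llm, kc, example))   # educator validates stem/key/explanation
    return edit_step2(step2(call_llm, mcq))          # educator validates distractor triples
```

The key design point mirrored here is that the educator edit happens between the two LLM calls, so step 2 is conditioned on the corrected stem rather than the raw LLM output.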
Once the educator completes the distractor editing process, the Preview component displays a fully structured MCQ, with the answer options randomized. We store any metadata that isn\u2019t visually represented within the image. Following the completion of distractor editing, the Sidebar component is refreshed. The educator can click on the stem to view the generated image along with the answer sheet or create a new MCQ. Figure 2: HEDGE Interface: what human participants use to generate an MCQ by editing LLM output. 3. PILOT STUDY 3.1 Experimental Setup We perform a pilot study to assess the usability of HEDGE in generating MCQs. In this study, we select pre-defined KCs and instruct participants to utilize these KCs to simulate a scenario where an educator is crafting MCQs. We select the KCs and the in-context example from a large education company\u2019s content repository, categorized under the label \u201cNumber,\u201d which encompasses various subtopics, such as \u201cBasic Arithmetic,\u201d \u201cFractions,\u201d and \u201cRounding and Estimating.\u201d We choose five KCs, as shown in Table 2, from the KCs that incorporate mathematical expressions, such as fractions, powers, and surds. We utilize GPT-4 as the LLM for the study and set the parameters to temperature = 0.7 and top-p = 0.9 to balance creativity and consistency of the generated MCQs. After completing the study, participants are asked to complete an exit survey. The survey includes open-ended questions and ratings on their satisfaction with the quality of LLM-generated responses and the usability of the tool using a 5-point Likert scale. 3.2 Participants We recruit four participants for the study, comprising one male and three females, all recruited through Upwork [14]. Among them, two currently work as middle/high school math teachers, while the other two currently work as tutors, both with prior experience as math teachers. 
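The sampling settings reported for the study (GPT-4, temperature 0.7, top-p 0.9) can be encoded in a standard chat-completion request payload. This is a hedged sketch: the message wording is assumed, and only the model name and sampling parameters come from the paper; the dict follows the shape of the OpenAI chat-completions API.

```python
def build_request(kc: str, in_context_example: str) -> dict:
    """Assemble a chat-completion payload using the study's reported sampling settings."""
    return {
        "model": "gpt-4",
        "temperature": 0.7,  # balances creativity ...
        "top_p": 0.9,        # ... against consistency of the generated MCQs
        "messages": [
            {"role": "system",
             "content": "You write math multiple choice questions."},
            {"role": "user",
             "content": (f"Example:\n{in_context_example}\n\n"
                         f"KC: {kc}\nGenerate a stem, key, and explanation.")},
        ],
    }
```

Keeping the payload construction in a pure function like this makes the prompt and parameters easy to test independently of any network call.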
All participants are selected based on their qualifications and expertise in mathematics education. Each participant was tasked with creating five MCQs using HEDGE, employing the five KCs specified in Table 2. 4. RESULTS 4.1 Stem, Key, and Explanation Table 3 shows the stems produced by participants utilizing HEDGE. In the \u201cFine-grained KC\u201d column, the original stem is indicated in italics, while the stems modified by each participant are denoted as a, b, c, and d, respectively. In what follows, we label each MCQ in the format of 1a, where 1 denotes the index of the fine-grained KC and a denotes the index of the participant. Out of 20 sets of stem, key, and explanation generated by the LLM, participants deemed 14 sets valid. Among these valid sets, in two of them participants added more details to the explanations, while the remaining sets were adopted without any need for edits. For example, italicized details were added in the explanation for 2c: \u201cThe fraction 3/9 simplifies to 1/3 because both the numerator and the denominator can be divided by a common factor of 3. 3 divided by 3 is 1, and 9 divided by 3 is 3. Hence, 1/3 is an equivalent fraction to 3/9.\u201d The other case was to make the question setting more realistic: In 4d, the educator edited the initial price of the car from $5000 to $35000. This adjustment reveals the limitations of LLMs in accurately representing real-life problem scenarios. We now analyze the cases that participants deemed invalid. Grammar error. In 2a, the educator corrected the grammar error of \u201cshe have\u201d to \u201cshe has.\u201d No other grammar errors occurred in the study besides this one, underscoring the capability of LLMs to consistently produce grammatically correct sentences. Not mastering KC. Regarding the 5th KC, GPT-4 shows a lack of knowledge on the distinction between simplified and non-simplified surds. The following are invalid stems generated by GPT-4: 1) 5a. 
If \u221a20 is a simplified surd, what is its non-simplified form? 2) 5c. Express the simplified surd \u221a45 in a non-simplified form. 3) 5d. A simplified surd is \u221a8. How can it be represented in non-simplified form? These invalid stems misled one participant into editing a stem to convey the KC of simplifying a surd, which is the opposite of writing a surd in non-simplified form (5c). Calculation error. In 4c, GPT-4 generated a key of $4750, erroneously calculating the car price after one year instead of two years. However, in the other three cases within the same KC, GPT-4 calculated correctly, showing its math problem-solving skills. 4.2 Distractor, Misconception, and Feedback Table 4 shows a breakdown of 60 distractors (comprising three distractors for 20 stems), categorized based on the validity of misconceptions, distractors, and feedback. Adopt All Responses (Case 1, 37%). Among 60 distractors, educators identified 22 responses as valid, including two cases that are actually invalid. Edit Feedback Only (Case 2, 8%). These cases have a valid misconception and distractor, and educators made adjustments to the feedback to enhance its clarity. For example, one of the distractors for 2d is 2/3. The feedback Table 2: Pre-defined math KCs used in the pilot study. (Coarse-grained / Medium-grained / Fine-grained) 1. Factors, Multiples and Primes / Factors and Highest Common Factor / Identify factors of a number. 2. Fractions / Equivalent Fractions / Identify equivalent fractions when presented as numbers. 3. Indices, Powers and Roots / Squares, Cubes, etc. / Understand the notation for powers. 4. Percentages / Repeated Percentages and Compound Interest / Understand the elements of the formula for compound percentage decrease. 5. Surds / Simplifying Surds / Write a simplified surd in a non-simplified form. Table 3: Question stems generated using HEDGE and the corresponding KCs. Fine-grained KC Stem 1. Identify factors of a number Which of these numbers is not a factor of 9? a. What are all the factors of the number 12? b. 
What are the factors of 18? c. Which of the following is a factor of 18? d. Which of the following numbers is a factor of 36? 2. Identify equivalent fractions when presented as numbers Which fraction is equivalent to 9 13? a. Sue has a fraction of 4 8. What fraction is equivalent to the fraction she has? b. The fraction 6 18 is equivalent to which of the following fractions? c. Which of the following fractions is equivalent to 3 9? d. Which of the following fractions is equivalent to 2 4? 3. Understand the notation for powers To calculate 532 you need to do ... a. The number 32 is equal to \u25a1. What number completes the sentence? b. The number 34 represents \u25a1. What number completes the sentence? c. If a3 is read as \u201da cubed\u201d, how is a4 read? d. What is the value of 23? 4. Understand the elements of the formula for compound percentage decrease A car depreciates in value by 10% each year. If a car was bought for $4500, what calculation would find the value of the car after 3 years? a. A car that costs $5000 loses 12% of its value each year. After one year, the car is worth \u25a1. What completes the sentence? b. A new car loses 20% of its value each year. If the car was originally priced at $15,000, what will be its value after 2 years? c. The price of a car is reduced by 5% each year. If the car was originally priced at $5000, what will be the price of the car after two years? d. A car depreciates in value by 10% each year. If the car is initially worth $35000, what is the formula to calculate the car\u2019s value after n years? 5. Write a simplified surd in a non-simplified form 5 \u221a 13 = \u221an What is the value of n? a. If 2 \u221a 5 is a simplified surd, what is its non-simplified form? b. The square root of 18 is written in simplified surd form as 3 \u221a 2. How can it be rewritten in a non-simplified form? c. Simplify the surd \u221a 45. d. A non-simplified surd is \u221a 8. How can it be represented in simplified form? 
generated by GPT-4 is as follows: \u201cYou seem to have compared only the numerators of the fractions. However, when checking for equivalent fractions, both the numerator and denominator need to be considered. The fraction 2/3 is not equivalent to 2/4.\u201d The educator removed the redundant final sentence and introduced \u201cRemember, equivalent fractions require both the numerator and denominator to be proportional,\u201d which helps students better understand the importance of considering both the numerator and denominator when comparing fractions for equivalence. This adjustment emphasizes that the equivalence between fractions relies on maintaining proportionality between the numerator and denominator. While GPT-4 provides valid explanations, it sometimes fails to include critical insights that are necessary for students\u2019 improvement. Table 4: Breakdown of the 60 generated distractors and their quality ratings. (\u2713: valid, \u2717: invalid; columns: Misconception, Distractor, Feedback.) Case 1: \u2713 \u2713 \u2713, ratio 37%, rating 4.8. Cases 2\u20133: \u2713 \u2713 \u2717 and \u2713 \u2717 \u2713, ratio 8%, rating 2.8. Case 4: \u2713 \u2717 \u2717, ratio 18%, rating 2.1. Case 5: \u2717 \u2713 \u2713, ratio 12%, rating 3.4. Cases 6\u20137: \u2717 \u2713 \u2717 and \u2717 \u2717 \u2713, ratio 5%, rating 3.0. Case 8: \u2717 \u2717 \u2717, ratio 20%, rating 2.3. Adopt Misconception Only (Case 4, 18%). These cases are often due to a mismatch between the misconception and the distractor. In 4c, the misconception \u201cThe student mistakenly believed that the car depreciates by a constant amount each year, not a percentage.\u201d did not match the distractor 35000 \u2212 0.10n. Additionally, there are cases when, even if the distractor is valid, it may not effectively encapsulate student misconceptions. In 1a, the educator updated the distractor from 1, 2, 3, 4, 6, 12, 24 to 12, 24, 36, 48, 60, making it a more attractive distractor for those who confuse factors with multiples. Edit Misconception Only (Case 5, 12%). As in Case 4, invalid cases are often due to a mismatch between the misconception and the distractor. 
In 5d, the misconception \u201cThe student may believe that all square roots are in their simplest form.\u201d did not match the distractor\u201c \u221a 2.\u201d The educator updated the misconception as \u201cThe student may have confused square roots with cube roots.\u201d providing a more accurate misconception for the distractor. Additionally, there are cases when, even if the misconception is valid, it may not likely be the misconception why the student selects the distractor. In 1c, the educator updated the misconception of distractor \u201c4\u201d from \u201cThe student might think that only the numbers less than 18 can be the factors of 18.\u201d to \u201cThe student might think that any even number can be a factor of an even number.\u201d, making it more accurate for addressing the student\u2019s misconception. Adopt Distractor Only (Case 6, 5%). These cases were when educators adopted distractors and edited wrong misconceptions and feedback. For example, in the case of 5a, \u221a 10 is a valid distractor as the student could simply multiply 2 and 5. However, the misconception and feedback generated by GPT-4 did not align with the distractor; therefore the educator had to edit it accordingly. In Cases 4, 5, and 6, LLMs revealed inconsistent mathematical reasoning when analyzing misconceptions, distractors, and feedback for a given stem. The inconsistency under\fscores a necessity for human educators to manually align distractors and their underlying misconceptions and corresponding feedback in many cases. Reject All Responses (Case 8, 20%). These cases were when misconceptions had poor quality or were wrong, resulting in inadequate distractors and feedback. Two of the distractors generated for 2b by GPT-4 shows both poor quality and wrong misconceptions. 
While the misconception in the first distractor is valid, stating that \u201cThe student may not divide both the numerator and denominator by the same number,\u201d the distractor itself, represented by 3/9, and its associated feedback lack coherence and fail to align with this misconception. Meanwhile, the misconception in the second distractor (8/24) lacks coherence, as expressed in the following manner: \u201cThe student may confuse the concept of equivalent fractions with simplifying fractions.\u201d These results reveal that LLMs often fail to anticipate valid misconceptions and errors that are common among students, making human educators\u2019 involvement crucial in the creation of math MCQs. 4.3 Takeaways from the Survey After the study, participants were asked to fill out a survey about their experience using HEDGE. We categorize the results into two areas: Quality of LLM-generated responses and Tool Usability. 4.3.1 Quality of LLM-generated responses. Stem, Key, and Explanation. On a 5-point Likert scale, the participants gave an average rating of 4. This rating aligns with the open-ended responses, in which participants regarded most of the generated stems, keys, and explanations as valid. However, two participants noted the tool\u2019s limitation in terms of the level of question difficulty. One participant points out that the questions appear to be at a low Bloom\u2019s Taxonomy level. For example, \u201cIf a^3 is read as \u2018a cubed\u2019, how is a^4 read?\u201d While it\u2019s important for students to grasp the verbal representation of these terms, educators often place greater emphasis on whether students understand the equivalent expressions and concepts associated with them. The other participant points out that the Depth of Knowledge (DOK) levels predominantly focused on Level 1 (Recall) and Level 2 (Skill or Concept). We can prompt LLMs to generate questions at various Bloom\u2019s or DOK levels to enhance the question difficulty and promote deeper understanding [3]. 
Moreover, we can invite educators to craft in-context examples with higher Bloom\u2019s or DOK levels. Distractor, Misconception, and Feedback. On a 5-point Likert scale, the participants gave an average rating of 2.5. This rating aligns with the open-ended responses, in which participants noted that most of the generated misconceptions, distractors, and feedback do not reflect the errors students typically make in the classroom, based on the participants\u2019 teaching experience. The responses again point to the observation that LLMs do not understand errors that students are likely to make. One participant suggests providing a \u201cbank\u201d of misconceptions that educators could refer to. We can prompt LLMs to generate multiple misconceptions and engage educators in ranking these misconceptions based on their alignment with actual student errors. 4.3.2 Tool Usability User Interface. On a 5-point Likert scale, the participants gave an average rating of 4 for comfort level with generating MCQs using HEDGE while giving an average rating of 3.25 for the effectiveness of generating high-quality MCQs. Participants are enthusiastic about the tool\u2019s potential for simplifying the process of generating MCQs but are nevertheless skeptical about LLMs\u2019 capability to generate valid distractors. We will need to enhance the tool by making improvements in the quality of generated distractors to align more closely with educators\u2019 expectations. 5." + }, + { + "url": "http://arxiv.org/abs/2402.10475v1", + "title": "Fundamental Benefit of Alternating Updates in Minimax Optimization", + "abstract": "The Gradient Descent-Ascent (GDA) algorithm, designed to solve minimax\noptimization problems, takes the descent and ascent steps either simultaneously\n(Sim-GDA) or alternately\n(Alt-GDA). While Alt-GDA is commonly observed to\nconverge faster, the performance gap between the two is not yet well understood\ntheoretically, especially in terms of global convergence rates. 
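The case breakdown reported in Section 4.2 (percentages of the 60 distractors per case group, from Table 4) can be converted back into approximate raw counts as a sanity check. The case-group names below are paraphrases of the paper's labels; the percentages are the reported values.

```python
# Reported ratios per case group from Table 4, as a percentage of 60 distractors.
case_ratios = {
    "adopt_all": 37,            # Case 1: everything valid
    "edit_feedback": 8,         # Cases 2-3
    "adopt_misconception": 18,  # Case 4
    "edit_misconception": 12,   # Case 5
    "adopt_distractor": 5,      # Cases 6-7
    "reject_all": 20,           # Case 8
}

def approximate_counts(ratios: dict, total: int = 60) -> dict:
    """Convert reported percentages back into rounded raw counts."""
    return {case: round(pct * total / 100) for case, pct in ratios.items()}
```

The rounded counts sum back to 60, and the "adopt all" group recovers the 22 valid responses mentioned in the text.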
To address this\ntheory-practice gap, we present fine-grained convergence analyses of both\nalgorithms for strongly-convex-strongly-concave and Lipschitz-gradient\nobjectives. Our new iteration complexity upper bound of Alt-GDA is strictly\nsmaller than the lower bound of Sim-GDA; i.e., Alt-GDA is provably faster.\nMoreover, we propose Alternating-Extrapolation GDA (Alex-GDA), a general\nalgorithmic framework that subsumes Sim-GDA and Alt-GDA, for which the main\nidea is to alternately take gradients from extrapolations of the iterates. We\nshow that Alex-GDA satisfies a smaller iteration complexity bound, identical to\nthat of the Extra-gradient method, while requiring less gradient computations.\nWe also prove that Alex-GDA enjoys linear convergence for bilinear problems,\nfor which both Sim-GDA and Alt-GDA fail to converge at all.", + "authors": "Jaewook Lee, Hanseul Cho, Chulhee Yun", + "published": "2024-02-16", + "updated": "2024-02-16", + "primary_cat": "math.OC", + "cats": [ + "math.OC", + "cs.LG" + ], + "main_content": "Introduction The minimax problem aims to solve: min x\u2208Rdx max y\u2208Rdy f(x, y). (1) This has been popularized since the work by von Neumann (1928) and is widely studied in mathematics, economics, computer science, and machine learning. Particularly, in modern machine learning, many important problem settings fall within the problem (1), including but not limited to generative adversarial networks (GANs) (Arjovsky et al., 2017; Goodfellow et al., 2020; Heusel et al., 2017), adversarial training and robust optimization (Latorre et al., 2023; Madry et al., 2018; Sinha et al., 2018; Yu et al., 2022), reinforcement learning (Li et al., 2019), and area-under-curve (AUC) maximization (Liu et al., 2020; Ying et al., 2016; Yuan et al., 2021). 
The simplest baseline algorithm for solving minimax problems is gradient descent-ascent (GDA) (Dem\u2019yanov and Pevnyi, 1972), which naturally generalizes the idea of gradient descent for minimization problems. The GDA algorithm updates x in the direction of decreasing the objective function f while updating y in the direction of increasing f, either simultaneously (Sim-GDA) or alternately (Alt-GDA). Unfortunately, it is not easy for both algorithms to converge to an optimal point even in a convex-concave minimax problem: in an unconstrained bilinear problem minx maxy xy, for example, Sim-GDA diverges all the way out while Alt-GDA generates bounded but non-convergent iterates (Bailey et al., 2020; Gidel et al., 2019a,b; Zhang et al., 2022). To tackle the issues of vanilla GDA(s), numerous algorithms have been introduced and analyzed for smooth minimax problems, including Extra-gradient (EG) (Korpelevich, 1976), Optimistic Gradient Descent (OGD) (Popov, 1980), negative momentum (Gidel et al., 2019b), and many more (Lee and Kim, 2021; Park and Ryu, 2022; Yoon and Ryu, 2021, 2022). Although these algorithms enjoy accelerated convergence rates compared to vanilla GDA, the majority of these works focus on simultaneous updates of x and y, mainly because of the simplicity of analysis. However, in minimax problems applied in practical machine learning, it is more natural \u2217Authors contributed equally to this paper and are listed alphabetically. 1 arXiv:2402.10475v1 [math.OC] 16 Feb 2024 \fFundamental Benefit of Alternating Updates in Minimax Optimization 0 200 400 600 800 1000 #(gradient computation) 10 46 10 39 10 32 10 25 10 18 10 11 10 4 103 distance to z Sim-GDA Alt-GDA EG OGD Alex-GDA ( = 2.7, = 1.1) = 1e-50 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 x[0] 0.8 0.6 0.4 0.2 0.0 0.2 0.4 0.6 y[0] initial point Nash Equilibrium SCSC Quadratic Game f(x, y) = 1 2x Ax + x By 1 2y Cy Figure 1: Experiments on a strongly-convex-strongly-concave (SCSC) quadratic game. 
See Appendix G.1 for more details. (Left) Comparing the convergence speeds of algorithms: Sim-GDA, Alt-GDA, EG, OGD and Alex-GDA. (Right) Trajectory of the algorithms. This is a partial visualization\u2014originally, the trajectory is 6-dimensional since we set dx = dy = 3. for the training procedure to work in an alternating sense. In training GANs, for instance, the discriminator should update its weight based on the outcome of the generator, and vice versa. Moreover, there exist substantial amounts of empirical evidence of Alt-GDA exhibiting faster convergence (Goodfellow et al., 2020; Mescheder et al., 2017), as we demonstrate in Figure 1. In contrast, we still lack a theoretical understanding of why and how much Alt-GDA is faster, especially compared to Sim-GDA. To fill this gap between theory and practice, it is a timely and important subject to study which one is a winner between simultaneous and alternating updates. An existing work by Zhang et al. (2022) comes up with a theoretical explanation involving local convergence guarantees for \u00b5-strongly-convex-strongly-concave (SCSC), L-Lipschitz gradient functions. Their results constructively explain that Alt-GDA (of iteration complexity \u02dc O(\u03ba)) has a faster convergence rate than Sim-GDA ( \u02dc O(\u03ba2)), where \u03ba = L/\u00b5 is the condition number of the problem. However, their results are confined to guaranteeing local convergence rates, inevitably requiring a near-optimum initial point condition which could be highly impractical. Overall, this raises the following question: For minimax problems (1), are alternating updates strictly better than simultaneous updates, even in terms of global convergence? (2) 1.1 Summary of Contributions Our contributions are largely twofold. First, we eliminate the limitations of prior work by providing global convergence guarantees that elucidate the fundamental strength of Alt-GDA over Sim-GDA. 
Second, we propose a novel algorithm called Alternating-Extrapolation GDA (Alex-GDA) that achieves an identical rate to the Extra-gradient (EG) method with the same number of gradient computations per iteration as Sim-GDA and Alt-GDA. For the following results, we assume (\u00b5x, \u00b5y)-strongly-convex-strongly-concave (SCSC), (Lx, Ly, Lxy)-Lipschitz gradient objectives with condition numbers \u03bax = Lx/\u00b5x, \u03bay = Ly/\u00b5y, and \u03baxy = Lxy/\u221a\u00b5x\u00b5y.1 In particular, we 1For the definitions of SCSC functions having Lipschitz gradients, please refer to Definitions 2.1 and 2.2. For the definition of condition numbers \u03bax, \u03bay, and \u03baxy, please refer to Definition 2.3. 2 \fFundamental Benefit of Alternating Updates in Minimax Optimization study the upper and lower bounds on the iteration complexity K to achieve \u2225(xK, yK) \u2212(x\u22c6, y\u22c6)\u22252 \u2264\u03f5, where (x\u22c6, y\u22c6) is the Nash equilibrium.2 \u2022 In Section 3, we prove that Sim-GDA satisfies an iteration complexity rate of \u0398 \u0000(\u03bax + \u03bay + \u03ba2 xy) \u00b7 log(1/\u03f5) \u0001 by showing tightly matching upper and lower bounds. Our fine-grained convergence rate highlights the fact that the term \u03ba2 xy is the main cause of slow convergence, which previously known results do not capture. \u2022 In Section 4, we prove that Alt-GDA satisfies an iteration complexity rate upper bound of O \u0000\u0000\u03bax + \u03bay + \u03baxy(\u221a\u03bax + \u221a\u03bay) \u0001 \u00b7 log(1/\u03f5) \u0001 , which, compared to the results in Section 3, concludes that Alt-GDA is provably faster than Sim-GDA. \u2022 In Section 5, we propose a new algorithm, Alternating-Extrapolation GDA (Alex-GDA), and prove a smaller iteration complexity rate of \u0398 ((\u03bax + \u03bay + \u03baxy) \u00b7 log(1/\u03f5)) by showing tightly matching upper and lower bounds. 
We also show that EG\u2014which requires twice the number of gradient computations per iteration\u2014yields the same rate by showing an identical lower bound. Next, we turn to bilinear objectives f(x, y) = x\u22a4By, for which both Sim-GDA and Alt-GDA fail to converge. \u2022 In Section 6, we show that Alex-GDA enjoys linear convergence with an iteration complexity upper bound O \u0010 (Lxy/\u00b5xy)2 \u00b7 log(1/\u03f5) \u0011 , where \u00b5xy, Lxy are the smallest, largest nonzero singular values of the coupling matrix B, respectively. Long story short, our results altogether answer the ground-setting question (2) in the positive. For the optimization community\u2014we believe that our fundamental comparison between simultaneous and alternating updates could provide fruitful insights for future investigations to unveil new rate-optimal algorithms by using alternating updates. 2 Preliminaries Notation. We study unconstrained minimax problems with objective function f : Rdx \u00d7 Rdy \u2192R, where x \u2208Rdx and y \u2208Rdy are the variables. In some cases we use z = (x, y) \u2208Rdx \u00d7 Rdy and d = dx + dy for notational simplicity. We denote by \u2225\u00b7 \u2225the Euclidean \u21132-norm for vectors and the spectral norm (i.e., maximum singular value) for matrices. We denote by \u27e8\u00b7, \u00b7\u27e9the usual inner product between vectors in Euclidean space of the same dimension. The spectral radius (i.e., maximum absolute eigenvalue) of a matrix M is denoted by \u03c1(M). The letters O, \u2126, \u03c9, and \u0398 are for the conventional asymptotic notations, while the tilde notation (e.g., \u02dc O and \u02dc \u2126) hides polylogarithmic factors. 2.1 Function Class We first introduce the definitions we need to characterize the function class we will mainly focus on. Definition 2.1 (Strong-convex-strong-concavity). 
For given constants \u00b5x, \u00b5y > 0, we say that a differentiable function f : Rdx \u00d7 Rdy \u2192R is (\u00b5x, \u00b5y)-strong-convex-strong-concave (or (\u00b5x, \u00b5y)-SCSC) if f(x\u2032, y) \u2265f(x, y) + \u27e8\u2207xf(x, y), x\u2032 \u2212x\u27e9+ \u00b5x 2 \u2225x\u2032 \u2212x\u22252 f(x, y\u2032) \u2264f(x, y) \u2212\u27e8\u2207yf(x, y), y\u2032 \u2212y\u27e9\u2212\u00b5y 2 \u2225y\u2032 \u2212y\u22252 for all x, x\u2032 \u2208Rdx and y, y\u2032 \u2208Rdy. If \u00b5x = \u00b5y = 0, we say that f is convex-concave. 2For the definition of Nash equilibrium, please refer to Definition 2.5. 3 \fFundamental Benefit of Alternating Updates in Minimax Optimization Definition 2.2 (Lipschitz gradients). For given constants Lx, Ly \u22650 and Lxy \u22650, we say that a differentiable function f : Rdx \u00d7 Rdy \u2192R has (Lx, Ly, Lxy)-Lipschitz gradients3 if \u2225\u2207xf(x\u2032, y) \u2212\u2207xf(x, y)\u2225\u2264Lx\u2225x\u2032 \u2212x\u2225, \u2225\u2207xf(x, y\u2032) \u2212\u2207xf(x, y)\u2225\u2264Lxy\u2225y\u2032 \u2212y\u2225 \u2225\u2207yf(x, y\u2032) \u2212\u2207yf(x, y)\u2225\u2264Ly\u2225y\u2032 \u2212y\u2225, \u2225\u2207yf(x\u2032, y) \u2212\u2207yf(x, y)\u2225\u2264Lxy\u2225x\u2032 \u2212x\u2225 for all x, x\u2032 \u2208Rdx and y, y\u2032 \u2208Rdy. For SCSC and Lipschitz-gradient objective functions, the convergence rates of algorithms usually depend on the ratio between the parameters \u00b5x, \u00b5y and Lx, Ly, Lxy, which we often refer to as the condition number. Definition 2.3 (Condition numbers). For given constants 0 < \u00b5x \u2264Lx, 0 < \u00b5y \u2264Ly, and Lxy \u22650, we define the condition numbers as \u03bax := Lx/\u00b5x, \u03bay := Ly/\u00b5y, and \u03baxy := Lxy/\u221a\u00b5x\u00b5y. The definitions of \u03bax and \u03bay are completely analogous to the definition widely used in convex optimization literature, and we have \u03bax, \u03bay \u22651 since \u00b5x \u2264Lx, \u00b5y \u2264Ly. 
The number \u03baxy \u22650 additionally takes into account how the coupling between the two variables can affect the speed of convergence. Definition 2.4 (Function class). For 0 < \u00b5x \u2264Lx, 0 < \u00b5y \u2264Ly, and Lxy \u22650, we define F(\u00b5x, \u00b5y, Lx, Ly, Lxy) as the function class containing all f : Rdx \u00d7 Rdy \u2192R that are (i) twice-differentiable, (ii) (\u00b5x, \u00b5y)-SCSC, and (iii) has (Lx, Ly, Lxy)-Lipschitz gradients. Considering the minimax problem as in (1), the optimal solution is characterized as in Definition 2.5. Definition 2.5. A Nash equilibrium of a function f : Rdx \u00d7 Rdy \u2192R is defined as a point (x\u22c6, y\u22c6) \u2208Rdx \u00d7 Rdy which satisfies for all x \u2208Rdx and y \u2208Rdy: f(x\u22c6, y) \u2264f(x\u22c6, y\u22c6) \u2264f(x, y\u22c6). It is well known that if f \u2208F(\u00b5x, \u00b5y, Lx, Ly, Lxy), then the Nash equilibrium (x\u22c6, y\u22c6) of f uniquely exists (see, e.g., Zhang et al. (2019)). 2.2 Algorithms We focus on GDA algorithms with constant step sizes \u03b1, \u03b2 > 0. In Sections 3 and 4, we provide convergence analyses for Sim-GDA and Alt-GDA, shown in Algorithm 1. In Sections 5 and 6, we construct a new algorithm called Alternating-Extrapolation GDA (Alex-GDA), shown in Algorithm 2, which we formally define later. 2.3 Lyapunov Function Originally designed for stability analysis of dynamical systems (Kalman and Bertram, 1960), the Lyapunov function defined as in Definition 2.6 is widely used as a strategy to obtain convergence guarantees in optimization studies (Taylor et al., 2018). Algorithm 1 Sim-GDA and Alt-GDA Input: Number of epochs K, step sizes \u03b1, \u03b2 > 0 Initialize: (x0, y0) \u2208Rdx \u00d7 Rdy for k = 0, . . . , K \u22121 do xk+1 = xk \u2212\u03b1\u2207xf(xk, yk) if Sim-GDA then yk+1 = yk + \u03b2\u2207yf(xk, yk) else if Alt-GDA then yk+1 = yk + \u03b2\u2207yf(xk+1, yk) end if end for Output: (xK, yK) \u2208Rdx \u00d7 Rdy Definition 2.6 (Lyapunov function). 
Suppose that we have a function f : R^d → R with optimal point z⋆ ∈ R^d, an initialization point z0 ∈ R^d, and an algorithm that outputs zk ∈ R^d at the k-th iteration. A Lyapunov function is defined as a continuous function Ψ : R^d → R such that: • (nonnegative) Ψ(z) ≥ 0 for all z ∈ R^d, • (zero at optimum) Ψ(z) = 0 if and only if z = z⋆, • (radially unbounded) Ψ(z) → ∞ as ∥z∥ → ∞, • (non-increasing) Ψ(zk+1) ≤ Ψ(zk) for all k ≥ 0. 3Some papers refer to this class of functions as Lipschitz smooth functions. For an algorithm that outputs {zk}k≥0 and a Lyapunov function Ψ, we define {Ψk}k≥0 as Ψk := Ψ(zk), which we refer to, with a slight abuse of notation, as the Lyapunov function throughout the paper. Definition 2.7. We say that a Lyapunov function {Ψk}k≥0 is valid if it satisfies Ψk ≥ A∥zk − z⋆∥² (3) for all k and some constant A > 0. If we find a valid Lyapunov function with contraction factor r ∈ (0, 1), that is, Ψk+1 ≤ rΨk for all k ≥ 0, then we can deduce that K = O(1/(1 − r) · log(Ψ0/(Aϵ))) (4) iterations are sufficient to ensure ∥zK − z⋆∥² ≤ ϵ. We refer to K as the iteration complexity, and to the rate on the right-hand side of (4) as the iteration complexity upper bound. 3 Convergence Analysis of Sim-GDA. Given an objective function f ∈ F(µx, µy, Lx, Ly, Lxy), for which the Nash equilibrium is unique, we define the scaled distance to the Nash equilibrium V(x, y) as V(x, y) = (1/α)∥x − x⋆∥² + (1/β)∥y − y⋆∥². (5) For Sim-GDA, we focus on the convergence rate in terms of the Lyapunov function ΨSim_k = V(xk, yk).
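The step from a contraction factor to the iteration-complexity bound (4) is a one-line unrolling; for completeness:

```latex
A\,\lVert z_K - z^\star\rVert^2 \;\le\; \Psi_K \;\le\; r^K \Psi_0 \;\le\; A\epsilon
\quad\text{whenever}\quad
K \;\ge\; \frac{\log\!\big(\Psi_0/(A\epsilon)\big)}{\log(1/r)},
\qquad\text{and since } \log(1/r) \ge 1-r \text{ for } r\in(0,1),\quad
K \;=\; O\!\left(\frac{1}{1-r}\,\log\frac{\Psi_0}{A\epsilon}\right)
\text{ iterations suffice.}
```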
Note that ΨSim_k is always nonnegative, and it is valid since ASim∥zk − z⋆∥² ≤ ΨSim_k for ASim = min{1/α, 1/β}. 3.1 Convergence Upper Bound. Theorem 3.1 yields a contraction result for Sim-GDA. Theorem 3.1. Suppose that f ∈ F(µx, µy, Lx, Ly, Lxy). Then there exist step sizes α, β with αµx = βµy = Θ(1/(κx + κy + κ²xy)) such that Sim-GDA satisfies ΨSim_{k+1} ≤ rΨSim_k with r = (((κxy + √(max{κx, κy} + κ²xy))² − 1)/((κxy + √(max{κx, κy} + κ²xy))² + 1))². (6) While we defer the proof of Theorem 3.1 to Appendix B.1, by (4) we can restate the convergence rate upper bound in terms of the iteration complexity as follows. Corollary 3.2. For α, β given as in Theorem 3.1, Sim-GDA converges linearly with iteration complexity O((κx + κy + κ²xy) · log(ΨSim_0/(ASim ϵ))), where ASim = min{1/α, 1/β}. We defer the proof of Corollary 3.2 to Appendix B.2. Comparison with Previous Work. The previously known iteration complexity upper bound of Sim-GDA was Õ(κ²) (Azizian et al., 2020; Mescheder et al., 2017; Zhang et al., 2022), where the condition number is defined as κ = max{Lx, Ly, Lxy}/min{µx, µy}. However, using a single condition number may oversimplify the problem and lead to loose results; for instance, if the condition numbers satisfy κx, κy = Θ(t²) and κxy = Θ(t) for some t, then previous results can only guarantee a rate of Õ(t⁴), while Corollary 3.2 yields the better rate Õ(t²). This shows that separating the condition numbers helps capture how κxy, i.e., the coupling between x and y, affects the convergence speed.
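The contraction behind Theorem 3.1 can be sanity-checked numerically. In the sketch below (our own toy setup) we use f(x, y) = 0.5x² + xy − 0.5y², so κx = κy = κxy = 1, and we take the hidden Θ(·) constant to be one, i.e., α = β = 1/3; for this linear map the per-step ratio Ψ_{k+1}/Ψ_k is a constant below one, consistent with the theorem up to its hidden constants.

```python
# Empirical contraction of Sim-GDA's Lyapunov function (5) on
# f(x, y) = 0.5 x^2 + x y - 0.5 y^2 (kappa_x = kappa_y = kappa_xy = 1).
# Theorem 3.1 prescribes alpha*mu_x = beta*mu_y = Theta(1/(kx + ky + kxy^2));
# taking the hidden constant to be 1 (alpha = beta = 1/3) is our own choice.
alpha = beta = 1.0 / 3.0

def psi(x, y):                      # V(x, y) from (5), with (x*, y*) = (0, 0)
    return x * x / alpha + y * y / beta

x, y = 1.0, 1.0
prev = psi(x, y)
ratios = []
for _ in range(50):
    x, y = x - alpha * (x + y), y + beta * (x - y)   # simultaneous update
    cur = psi(x, y)
    ratios.append(cur / prev)
    prev = cur
print(max(ratios))  # a constant contraction factor of 5/9 < 1 for this map
```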
Meanwhile, a recent work by Zamani et al. (2022) proposes an iteration complexity upper bound for Sim-GDA of \u02dc O(\u03ba + \u03ba2 xy) for \u03ba = max{Lx,Ly} min{\u00b5x,\u00b5y} , but the proof heavily relies on a computer-assisted method known as the Performance Estimation Problem (PEP) (Drori and Teboulle, 2014). Our fine-grained analysis subsumes all of these previous results, and\u2014to the best of our knowledge\u2014is the first to clarify the exact convergence rate of Sim-GDA in terms of individual condition numbers \u03bax, \u03bay, and \u03baxy. 3.2 Convergence Lower Bound Theorem 3.3 provides a convergence lower bound of the iteration complexity of Sim-GDA which holds for all possible step sizes \u03b1, \u03b2 > 0. Theorem 3.3. There exists a 6-dimensional function f \u2208F(\u00b5x, \u00b5y, Lx, Ly, Lxy) with dx = dy = 3 such that for any constant step sizes \u03b1, \u03b2 > 0, the convergence of Sim-GDA requires an iteration complexity of rate at least \u2126 \u0012\u0000\u03bax + \u03bay + \u03ba2 xy \u0001 \u00b7 log 1 \u03f5 \u0013 in order to have \u2225zK \u2212z\u22c6\u22252 \u2264\u03f5. The iteration complexity rate in Theorem 3.3 exactly matches the upper bound in Corollary 3.2, ensuring that our analysis on Sim-GDA is indeed tight (ignoring log factors). We defer the proof of Theorem 3.3 to Appendix B.3. 4 Convergence Analysis of Alt-GDA For Alt-GDA, the half-step iterates alternating between x and y updates make theoretical analysis much harder than when dealing with simultaneous updates. We address this by focusing on the convergence rate in terms of the following Lyapunov function (instead of just V (xk, yk)): \u03a8Alt k = V Alt(xk, yk) + V Alt(xk+1, yk) \u2212\u03b1(1 \u2212\u03b1Lx)\u2225\u2207xf(xk, yk)\u22252, where V Alt(x, y) is defined as \u0012 1 \u03b1 \u2212\u00b5x \u0013 \u2225x \u2212x\u22c6\u22252 + \u0012 1 \u03b2 \u2212\u00b5y \u0013 \u2225y \u2212y\u22c6\u22252. 
Note that we capture the two-step-alternating nature of the algorithm by considering two adjacent iterates at a time, which turns out to be the key idea in the proofs. 4.1 Convergence Upper Bound Theorem 4.1 yields a contraction result for Alt-GDA. Theorem 4.1. Suppose f \u2208F(\u00b5x, \u00b5y, Lx, Ly, Lxy) and we run Alt-GDA with step sizes \u03b1, \u03b2 that satisfy \u03b1 \u22641 2 \u00b7 min \u001a 1 Lx , \u221a\u00b5y Lxy \u221aLx \u001b , \u03b2 \u22641 2 \u00b7 min ( 1 Ly , \u221a\u00b5x Lxy p Ly ) . Then \u03a8Alt k is valid, and satisfies \u03a8Alt k+1 \u2264r\u03a8Alt k with r = max ( 1 \u03b1 \u2212\u00b5x 1 \u03b1 \u22122\u03b22LyL2 xy , 1 \u03b2 \u2212\u00b5y 1 \u03b2 \u2212\u03b12LxL2 xy , 1 \u03b1 \u2212\u00b5x 1 \u03b1 ) < 1. 6 \fFundamental Benefit of Alternating Updates in Minimax Optimization While we defer the proof of Theorem 4.1 to Appendix C.1, by (4) we can restate the convergence rate upper bound in terms of the iteration complexity as follows. Corollary 4.2. For \u03b1, \u03b2 given by the maximum possible values in Theorem 4.1, Alt-GDA linearly converges with iteration complexity O \u0012\u0000\u03bax + \u03bay + \u03baxy(\u221a\u03bax + \u221a\u03bay) \u0001 \u00b7 log \u03a8Alt 0 AAlt\u03f5 \u0013 , where AAlt = min n 1 2\u03b1 \u2212\u00b5x, 2 \u0010 3 4\u03b2 \u2212\u00b5y \u0011o > 0. We defer the proof of Corollary 4.2 to Appendix C.2. Recall that for Sim-GDA we have an upper bound of \u02dc O \u0000\u03bax + \u03bay + \u03ba2 xy \u0001 , and a lower bound which shows that this rate cannot be improved. Comparing this with Corollary 4.2, we can conclude that the convergence rate of Alt-GDA is faster than Sim-GDA. Comparison with Local Analysis. Zhang et al. (2022) show that the local convergence rates of Sim-GDA and Alt-GDA are \u02dc O(\u03ba2) and \u02dc O(\u03ba), respectively, where \u03ba = max{Lx,Ly,Lxy} min{\u00b5x,\u00b5y} . 
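The gap between the two upper bounds is easy to quantify: by AM-GM, κxy√κx ≤ (κ²xy + κx)/2, so the Alt-GDA bound κx + κy + κxy(√κx + √κy) never exceeds 1.5 times the Sim-GDA bound κx + κy + κ²xy, while it can be smaller by a factor of order κxy when the coupling dominates. A quick numeric check (our own, with arbitrary sampling ranges):

```python
# Compare the two iteration-complexity upper bounds over random condition numbers.
import math
import random

random.seed(1)
for _ in range(1000):
    kx, ky = random.uniform(1, 1e4), random.uniform(1, 1e4)
    kxy = random.uniform(0, 1e4)
    sim_bound = kx + ky + kxy ** 2
    alt_bound = kx + ky + kxy * (math.sqrt(kx) + math.sqrt(ky))
    assert alt_bound <= 1.5 * sim_bound + 1e-9   # AM-GM consequence

# When the coupling dominates, the gap is large:
kx = ky = 1.0
kxy = 1e4
print((kx + ky + kxy ** 2) / (kx + ky + kxy * (math.sqrt(kx) + math.sqrt(ky))))  # roughly kxy / 2
```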
Such local convergence rates of operators, including GDA iterates, rely on (the spectral radius of) the Jacobian matrix of the operator at the optimum (Bertsekas, 1999), and they require either that the iterates stay in a small neighborhood of the optimum or, for gradient methods, that the objective function is quadratic, so that the Jacobian is constant and the same spectral arguments hold everywhere in the domain. In contrast, Corollaries 3.2 and 4.2 both show global convergence rates for all initializations and all SCSC objectives without such assumptions. While Corollary 3.2 naturally subsumes the local convergence rate Õ(κ²), it turns out that Corollary 4.2 is analogous to Õ(κ^(3/2)), which has a gap of √κ to the local convergence rate Õ(κ) of Zhang et al. (2022). Viewing the local convergence result as a global convergence bound for the smaller class of quadratic SCSC functions, we believe that there may exist a non-quadratic function for which Alt-GDA requires an iteration complexity of ω̃(κ); we leave the proof for future work. 5 Alternating-Extrapolation GDA. A natural way of unifying the baseline algorithms Sim-GDA and Alt-GDA is to take a linear combination of the two. That is, we can write xk+1 = xk − α∇xf(xk, yk), x̃k+1 = (1 − γ)xk + γxk+1, yk+1 = yk + β∇yf(x̃k+1, yk). (7) Note that this formulation interpolates between Sim-GDA (γ = 0) and Alt-GDA (γ = 1). In the previous sections, we demonstrated a provable gap in iteration complexity between the two endpoints γ = 0 and γ = 1; this motivates us to consider an extrapolation to γ > 1 and see whether we can achieve a further speed-up. However, if we extrapolate the x side alone, the update equations for x and y will no longer be of the same form.
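The interpolation (7) is a one-parameter family; a minimal sketch (same toy quadratic as before, our own choice) that recovers both endpoints:

```python
# One step of the interpolated scheme (7) on f(x, y) = 0.5 x^2 + x y - 0.5 y^2.
def step(x, y, gamma, alpha=0.1, beta=0.1):
    x_new = x - alpha * (x + y)                 # grad_x f = x + y
    x_tilde = (1 - gamma) * x + gamma * x_new   # interpolated point
    y_new = y + beta * (x_tilde - y)            # grad_y f = x - y, taken at x_tilde
    return x_new, y_new

sim = alt = (1.0, 1.0)
for _ in range(10):
    sim = step(*sim, gamma=0.0)   # reduces to Sim-GDA
    alt = step(*alt, gamma=1.0)   # reduces to Alt-GDA
print(sim, alt)
```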
By symmetrizing the x and y sides, we now obtain the following general framework: xk+1 = xk \u2212\u03b1\u2207xf(xk, \u02dc yk), \u02dc xk+1 = (1 \u2212\u03b3)xk + \u03b3xk+1, yk+1 = yk + \u03b2\u2207yf(\u02dc xk+1, yk), \u02dc yk+1 = (1 \u2212\u03b4)yk + \u03b4yk+1, where \u02dc xk+1 and \u02dc yk+1 are the points where we compute the gradients, and \u03b3, \u03b4 \u22650 are hyperparameters. Notice that choosing (\u03b3, \u03b4) = (0, 1) recovers Sim-GDA and (\u03b3, \u03b4) = (1, 1) corresponds to Alt-GDA. We can rewrite our updates in terms of gradient updates (Algorithm 2). We name our algorithm AlternatingExtrapolation GDA (Alex-GDA), after the fact that our analysis mainly focuses on the case \u03b3, \u03b4 > 1 in which we compute gradients using extrapolated iterates, and we make alternating updates between x and y. 7 \fFundamental Benefit of Alternating Updates in Minimax Optimization Algorithm 2 Alternating-Extrapolation GDA (Alex-GDA) Input: Number of epochs K, step sizes \u03b1, \u03b2 > 0, hyperparameters \u03b3, \u03b4 \u22650 Initialize: (x0, y0) \u2208Rdx \u00d7 Rdy and \u02dc y0 = y0 \u2208Rdy for k = 0, . . . , K \u22121 do xk+1 = xk \u2212\u03b1\u2207xf(xk, \u02dc yk) \u02dc xk+1 = xk \u2212\u03b3\u03b1\u2207xf(xk, \u02dc yk) yk+1 = yk + \u03b2\u2207yf(\u02dc xk+1, yk) \u02dc yk+1 = yk + \u03b4\u03b2\u2207yf(\u02dc xk+1, yk) end for Output: (xK, yK) \u2208Rdx \u00d7 Rdy Initialization. Some careful readers might notice that the first step of Alex-GDA is a bit different from the rest of the iterations; for k = 0 we set \u02dc y0 = y0, whereas we use \u02dc yk = yk + (\u03b4 \u22121)\u03b2\u2207yf(\u02dc xk, yk\u22121) for all subsequent steps (k \u22651). 
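Algorithm 2 translates directly into code. The following is a minimal sketch on the same scalar SCSC quadratic; the objective, the step sizes, and the choice γ = δ = 1.5 are our own illustrative assumptions. Setting γ = δ = 1 would recover Alt-GDA.

```python
# Alex-GDA (Algorithm 2) on f(x, y) = 0.5 x^2 + x y - 0.5 y^2;
# grad_x f = x + y, grad_y f = x - y, Nash equilibrium at (0, 0).
def alex_gda(gamma, delta, alpha=0.1, beta=0.1, K=400):
    x, y = 1.0, 1.0
    y_t = y                                 # tilde-y_0 = y_0
    for _ in range(K):
        gx = x + y_t                        # grad_x f(x_k, tilde-y_k)
        x_new = x - alpha * gx
        x_t = x - gamma * alpha * gx        # tilde-x_{k+1}
        gy = x_t - y                        # grad_y f(tilde-x_{k+1}, y_k)
        y_new = y + beta * gy
        y_t = y + delta * beta * gy         # tilde-y_{k+1}
        x, y = x_new, y_new
    return x, y

x, y = alex_gda(gamma=1.5, delta=1.5)
print(abs(x) + abs(y))  # close to 0
```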
This requires a bit more careful analysis, as in how we define the Lyapunov function for Alex-GDA: \u03a8Alex k = V (xk, yk) + V (xk+1, yk) \u2212\u03b1\u2225\u2207xf(xk, \u02dc yk)\u22252 + (\u03b4 \u22121)\u03b2\u2225\u2207xf(\u02dc xk, yk\u22121)\u22252 + (\u03b3 \u22121)(\u03b4 \u22121)\u03b1\u03b2 1 \u2212\u03b1\u00b5x \u00b7 Lxy r\u00b5y \u00b5x \u00b7 \u2225\u2207xf(xk\u22121, \u02dc yk\u22121)\u22252 for k \u22651, and \u03a8Alex 0 = V (x0, y0) + V (x1, y0) \u2212\u03b1\u2225\u2207xf(x0, \u02dc y0)\u22252 + (\u03b3 \u22121)(\u03b4 \u22121)\u03b1\u03b2 (1 \u2212\u03b1\u00b5x)(1 \u2212\u03b2\u00b5y) \u00b7 Lxy r\u00b5y \u00b5x \u00b7 \u2225\u2207xf(x0, \u02dc y0)\u22252 for k = 0, where V is defined in Equation (5). 5.1 Convergence Upper Bound Theorem 5.1 yields a contraction result for Alex-GDA. Theorem 5.1. Suppose that f \u2208F(\u00b5x, \u00b5y, Lx, Ly, Lxy) and we run Alex-GDA with \u03b3, \u03b4 > 1 and step sizes \u03b1, \u03b2 > 0 that satisfy \u03b1 \u2264C \u00b7 min \u001a 1 Lx , \u221a\u00b5y Lxy\u221a\u00b5x \u001b , \u03b2 \u2264C \u00b7 min \u001a 1 Ly , \u221a\u00b5x Lxy\u221a\u00b5y \u001b . for some constant C > 0 (which only depends on \u03b3 and \u03b4). Then \u03a8Alex k is valid, and satisfies \u03a8Alex k+1 \u2264r\u03a8Alex k with r = max {1 \u2212\u03b1\u00b5x, 1 \u2212\u03b2\u00b5y} . While we defer the proof of Theorem 5.1 to Appendix D.1, by (4) we can restate the convergence rate upper bound in terms of the iteration complexity as follows. Corollary 5.2. For \u03b1, \u03b2 given by the maximum possible values in Theorem 5.1, Alex-GDA linearly converges with iteration complexity O \u0012 (\u03bax + \u03bay + \u03baxy) \u00b7 log \u03a8Alex 0 AAlex\u03f5 \u0013 , where AAlex = min n 1 2\u03b1, 1 \u03b2 o > 0. While we defer the proof of Corollary 5.2 to Appendix D.2, we can observe that Corollary 5.2 provides a stronger iteration complexity upper bound than Corollary 4.2. 
8 \fFundamental Benefit of Alternating Updates in Minimax Optimization 5.2 Convergence Lower Bound Theorem 5.3 provides a convergence lower bound of the iteration complexity of Alex-GDA which holds for all possible step sizes \u03b1, \u03b2 > 0. Theorem 5.3. There exists a 6-dimensional function f \u2208F(\u00b5x, \u00b5y, Lx, Ly, Lxy) with dx = dy = 3 such that for any constant step sizes \u03b1, \u03b2 > 0, the convergence of Alex-GDA with \u03b3, \u03b4 > 1 requires an iteration complexity of \u2126 \u0012 (\u03bax + \u03bay + \u03baxy) \u00b7 log 1 \u03f5 \u0013 in order to have \u2225zK \u2212z\u22c6\u22252 \u2264\u03f5. The iteration complexity rate in Theorem 5.3 exactly matches the upper bound in Corollary 5.2, which ensures that our analysis on Alex-GDA is tight (ignoring log factors). We defer the proof of Theorem 5.3 to Appendix D.3. 5.3 Comparison with EG Here we compare Alex-GDA to the Extra-gradient (EG) method (Korpelevich, 1976), an algorithm based on simultaneous updates of the form: xk+ 1 2 = xk \u2212\u03b1\u2207xf(xk, yk), yk+ 1 2 = yk + \u03b2\u2207yf(xk, yk), xk+1 = xk \u2212\u03b1\u2207xf(xk+ 1 2 , yk+ 1 2 ), yk+1 = yk + \u03b2\u2207yf(xk+ 1 2 , yk+ 1 2 ). It is known by Mokhtari et al. (2019) that EG converges with iteration complexity \u02dc O(\u03ba), where \u03ba = max{Lx,Ly,Lxy} min{\u00b5x,\u00b5y} . While EG is famous for its simplicity and fast convergence, we can show that EG must satisfy the same lower bound with Alex-GDA via the following proposition. Proposition 5.4. There exists a 6-dimensional function f \u2208F(\u00b5x, \u00b5y, Lx, Ly, Lxy) with dx = dy = 3 such that for any constant step sizes \u03b1, \u03b2 > 0, the convergence of EG requires an iteration complexity of rate at least \u2126 \u0012 (\u03bax + \u03bay + \u03baxy) \u00b7 log 1 \u03f5 \u0013 in order to have \u2225zK \u2212z\u22c6\u22252 \u2264\u03f5. We defer the proof of Proposition 5.4 to Appendix D.4. 
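For reference, the EG update used in this comparison can be sketched as follows (same toy quadratic as before; the step sizes are our own choice). Note the two gradient evaluations per variable per iteration, twice Alex-GDA's cost:

```python
# Extra-gradient (EG) on f(x, y) = 0.5 x^2 + x y - 0.5 y^2.
def eg(alpha=0.1, beta=0.1, K=400):
    x, y = 1.0, 1.0
    for _ in range(K):
        xh = x - alpha * (x + y)       # half step at (x_k, y_k)
        yh = y + beta * (x - y)
        # full step, with gradients taken at the half point
        x, y = x - alpha * (xh + yh), y + beta * (xh - yh)
    return x, y

x, y = eg()
print(abs(x) + abs(y))  # close to 0
```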
By comparing Proposition 5.4 with the upper (and lower) bound for Alex-GDA, it is clear that EG cannot be strictly faster than Alex-GDA in terms of iteration complexity rates. Moreover, Alex-GDA requires only two gradient evaluations per iteration (one each for x and y), while EG must perform exactly twice as many (two each for x and y). Nevertheless, Alex-GDA is provably as fast as EG and, in fact, it shows faster empirical convergence than EG, as shown in Figure 1. In Appendix A, we also compare Alex-GDA with another well-known baseline algorithm, Optimistic Gradient Descent (OGD) (Popov, 1980). 6 Alex-GDA Converges on Bilinear Problems. One drawback shared by Sim-GDA and Alt-GDA is that both algorithms fail to converge for simple unconstrained bilinear problems of the form min_x max_y f(x, y) = x⊤By (Gidel et al., 2019b), an important special case of a convex-concave but non-SCSC problem with Lipschitz gradients. Surprisingly, we show that Alex-GDA, on the other hand, does converge on bilinear problems. To present the result, we define µxy as the smallest nonzero singular value of B. Note that it is natural to assume the existence of nonzero singular values: otherwise B = 0 and the objective is constantly zero. Similarly to previous definitions, we choose Lxy as the largest singular value of B. We first characterize the exact condition for convergent step sizes of Alex-GDA on bilinear problems. Interestingly, it allows a larger range of the parameters γ and δ: we no longer require γ > 1 and δ > 1. Theorem 6.1. With a proper choice of step sizes α and β, Alex-GDA converges linearly to a Nash equilibrium of a bilinear problem if and only if γ + δ > 2.
In this case, the exact conditions on the convergent step sizes α and β are αβ < 4/((2γ − 1)(2δ − 1)L²xy) if 4γδ − 3(γ + δ) + 2 ≥ 0, and αβ < (γ + δ − 2)/(−(γ − 1)(δ − 1)(γ + δ − 1)L²xy) if 4γδ − 3(γ + δ) + 2 < 0. We defer the proof of Theorem 6.1 to Appendix E.1. Furthermore, with a proper choice of step sizes, we can obtain the iteration complexity of Alex-GDA on bilinear problems. Theorem 6.2. For γ ≥ 1 and δ ≥ 1 such that γ + δ > 2, if we choose the step sizes α and β so that αβ = 1/(Cγ,δ L²xy), where Cγ,δ > 0 is a constant that only depends on γ and δ, then an iteration complexity upper bound of Alex-GDA is O(Cγ,δ/(γ + δ − 2) · (Lxy/µxy)² · log(∥w0∥²/ϵ)), where ∥w0∥² = ∥x0 − x⋆∥² + 2∥y0 − y⋆∥² and (x⋆, y⋆) is the uniquely determined Nash equilibrium given z0. If δ = 1, the optimal rate exponent of Alex-GDA is lim_{k→∞} ∥zk − z⋆∥/∥zk−1 − z⋆∥ = √((L²xy − µ²xy)/(L²xy + µ²xy)), where z⋆ = (x⋆, y⋆), and the optimal choice of parameters satisfies αβ = (2µ²xy/L²xy)/(L²xy + µ²xy) and γ = 1 + L²xy/µ²xy. While we defer the proof of Theorem 6.2 to Appendix E.2, we remark that the convergence speed depends on a new type of condition number, namely Lxy/µxy, which is distinct from our κxy. 6.1 Comparison with EG. A work by Zhang and Yu (2020) analyzes the optimal convergence rates of EG and several other minimax optimization algorithms on bilinear problems.
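The dichotomy of Theorem 6.1 can be observed directly on the scalar bilinear problem f(x, y) = xy (B = 1, so Lxy = µxy = 1). The sketch below uses our own step sizes α = β = 0.5, which satisfy αβ < 4/((2γ − 1)(2δ − 1)L²xy) for γ = δ = 1.5, and compares Sim-GDA (γ = 0, δ = 1), Alt-GDA (γ = δ = 1), and Alex-GDA (γ = δ = 1.5):

```python
# Bilinear problem f(x, y) = x*y: grad_x f = y, grad_y f = x.
# All three methods are instances of the Alex-GDA updates (Algorithm 2).
def run(gamma, delta, alpha=0.5, beta=0.5, K=200):
    x, y = 1.0, 1.0
    y_t = y                                  # tilde-y_0 = y_0
    for _ in range(K):
        x_new = x - alpha * y_t
        x_t = x - gamma * alpha * y_t
        y_new, y_t = y + beta * x_t, y + delta * beta * x_t
        x, y = x_new, y_new
    return abs(x) + abs(y)

d_sim = run(0.0, 1.0)    # Sim-GDA: diverges
d_alt = run(1.0, 1.0)    # Alt-GDA: stuck on a bounded limit cycle
d_alex = run(1.5, 1.5)   # gamma + delta > 2: converges
print(d_sim, d_alt, d_alex)
```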
They prove that the optimal rate exponent of EG is (L²xy − µ²xy)/(L²xy + µ²xy), which boils down to an iteration complexity of Õ((Lxy/µxy)²); this matches the iteration complexity of Alex-GDA up to a constant factor. At first glance, the optimal rate exponent of EG appears quadratically better than that of Alex-GDA with δ = 1. However, since EG performs twice as many gradient computations per iteration as Alex-GDA, the optimal gradient computation complexities of EG and of Alex-GDA with δ = 1 are exactly identical. Still, we believe that there is room for further improvement in the convergence rate of Alex-GDA by choosing δ other than 1, but we leave this for future work. We also compare Alex-GDA with OGD in Appendix A. 7 Experiments. The details of the experiments are given in Appendix G. SCSC Quadratic Game. We run experiments on a simple (3 + 3)-dimensional SCSC quadratic game to compare the convergence rates of the algorithms. For each algorithm, we choose appropriate step sizes by grid search over the convergent step sizes, measured by the number of iterations needed to arrive at a point within distance ϵ of the Nash equilibrium. As shown in Figure 1, and as already observed in the work by Zhang et al. (2022), Alt-GDA outperforms Sim-GDA in terms of convergence rate by a large margin. We additionally observe that the convergence rate of Alt-GDA is comparable to the rates of EG and OGD. Furthermore, with moderately tuned parameters γ and δ, our Alex-GDA achieves a convergence rate that is even faster than EG and OGD. Bilinear Game. We also run experiments on a simple (3 + 3)-dimensional bilinear game. As showcased in Figure 2, the iterates of Sim-GDA diverge to infinity because of the unbounded domain, while the iterates of Alt-GDA do not escape from a limit cycle.
On the contrary, Alex-GDA, EG, and OGD converge to a Nash equilibrium exponentially fast. For the bilinear game, we choose optimal parameters for EG and for Alex-GDA with δ = 1. As a result, the convergence rates (in terms of gradient computations) of EG and Alex-GDA are the same, as mentioned in Section 6.1. For the difference in convergence rate between Alex-GDA and OGD, refer to the discussion in Appendix A. [Figure: distance to z⋆ versus number of gradient computations for Sim-GDA, Alt-GDA, EG, OGD, and Alex-GDA (γ = 26.0, δ = 1), and iterate trajectories with initial point and Nash equilibrium for the bilinear game f(x, y) = x⊤By.] Figure 2: Same experiments as in Figure 1 but on a bilinear game. 8" + }, + { + "url": "http://arxiv.org/abs/2302.04998v1", + "title": "Neural Networks vs. Splines: Advances in Numerical Extruder Design", + "abstract": "We present a novel application of neural networks to design improved mixing\nelements for single-screw extruders. Specifically, we propose to use neural\nnetworks in numerical shape optimization to parameterize geometries. Geometry\nparameterization is crucial in enabling efficient shape optimization as it\nallows for optimizing complex shapes using only a few design variables. Recent\napproaches often utilize CAD data in conjunction with spline-based methods\nwhere the spline's control points serve as design variables. Consequently,\nthese approaches rely on the same design variables as specified by the human\ndesigner. While this choice is convenient, it either restricts the design to\nsmall modifications of given, initial design features - effectively prohibiting\ntopological changes - or yields undesirably many design variables. In this\nwork, we step away from CAD and spline-based approaches and construct an\nartificial, feature-dense yet low-dimensional optimization space using a\ngenerative neural network.
Using the neural network for the geometry\nparameterization extends state-of-the-art methods in that the resulting design\nspace is not restricted to user-prescribed modifications of certain basis\nshapes. Instead, within the same optimization space, we can interpolate between\nand explore seemingly unrelated designs. To show the performance of this new\napproach, we integrate the developed shape parameterization into our numerical\ndesign framework for dynamic mixing elements in plastics extrusion. Finally, we\nchallenge the novel method in a competitive setting against current free-form\ndeformation-based approaches and demonstrate the method's performance even at\nthis early stage.", + "authors": "Jaewook Lee, Sebastian Hube, Stefanie Elgeti", + "published": "2023-02-10", + "updated": "2023-02-10", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE" + ], + "main_content": "Introduction Modern numerical design is boosted by high-performance computers and the advent of neural networks. While neural networks are well-established in \ufb01elds such as image recognition, their power to further polymer processing is yet to be fully discovered. This work attempts to contribute towards this goal. We combine deep neural networks with established shape-optimization methods to enhance mixing in single-screw extruders via a novel numerical design. In many polymer processing steps, screw-based machines play a crucial role. Screws are, e.g., used as plasticators to prepare polymer melts for injection molding or in extruders in pro\ufb01le extrusion. For simplicity, we will, in the remainder, summarize all such screw-based machines as extruders. Single-screw extruders (SSEs) are especially widespread among the many variants of extruders for their economic advantages and simple operation. Economics also drives current attempts to further increase the throughput. This increase is achieved using fast-rotating extruders. 
However, the current SSE\u2019s poor mixing ability has limited the advances and, therefore, improving the mixing ability is a topic of research [1, 2, 3, 4, 5, 6]. Special focus is put on improved mixing elements that alleviate this limitation. Approaches to improve mixing elements have been proposed based on analytical derivations, experimental, and simulation-based works. In the following, we review recent developments in these three areas. Subsequently, we outline relevant developments in the \ufb01eld of neural networks and, \ufb01nally, motivate the use of neural nets in the numerical design of mixing elements. \u2217Corresponding author Email addresses: jaewook.lee@tuwien.ac.at (Jaewook Lee), hube@cats.rwth-aachen.de (Sebastian Hube), stefanie.elgeti@tuwien.ac.at (Stefanie Elgeti) Preprint submitted to Engineering With Computers arXiv:2302.04998v1 [cs.CE] 10 Feb 2023 \fDue to the high pressures and temperatures, analyzing the \ufb02ow inside extruders is a di\ufb03cult task. Early studies thus focus on analytical models and geometrically simpler screw sections, e.g., the metering section [7]. Experiments complement these theoretical derivations and allow extending the analysis to more complex screw sections. As reported by Gale, typical con\ufb01gurations rely on photomicrographs of the solidi\ufb01ed melt [2] that allow either investigating cross sections of the \ufb02ow channel or the extrudate. One example of such \ufb02ow channel photomicrographs is Kim and Kwon\u2019s pioneering work on barrier screws via cold-screw extrusion [8]. Apart from investigating solidi\ufb01ed melt streams, attempts to analyze the melt \ufb02ow during the actual operation of extruders are occasionally reported, e.g., by Wong et al. [9]. Despite the great success of such experiments, a standard limitation is their focus on a single operating condition. 
In contrast, numerical analysis allows studying di\ufb00erent designs and operating points at signi\ufb01cantly reduced costs and, therefore, proliferates. In the following, we give an overview of such numerical analyses. One early example is Kim and Kwon\u2019s quasi-three-dimensional \ufb01nite-element (FE) simulation of the striation formation, studying the in\ufb02uence of the barrier \ufb02ight [10]. Another example is the work by Domingues et al., who obtain global mixing indices for dispersive and distributive mixing in both liquid-liquid and solid-liquid systems [11]. Utilizing a two-dimensional simpli\ufb01cation, their simulation domain extends from the hopper to the metering section, and their framework even allows for design optimization. While these early works typically neglect mixing sections, studying the in\ufb02uence of mixers has recently become a vital research topic. Celik et al. use three-dimensional \ufb02ow simulation coupled with a particle-tracking approach to determine the degree of mixing based on a deformation-based index [1]. Another example is Marschik et al.\u2019s study comparing di\ufb00erent Block-Head mixing screws in distributive and dispersive mixing [6]. A comparable study \u2013 focused on the mixing capabilities of di\ufb00erent pineapple mixers \u2013 is reported by Roland et al. [3]. Both works rely on three-dimensional non-Newtonian \ufb02ow simulations. Besides such works towards the numerical assessment of given screw designs, numerical design is also reported, however, partially in other \ufb01elds of polymer processing. For example, Elgeti et al. aim for balanced dies and reduced die swell by applying shape optimization [12, 13]. Design by optimization is also reported by Gaspar-Cunha and Covas, who alter the length of the feed and compression zones, the internal screw diameters of the feed and metering zone, the screw pitch, and the \ufb02ight clearance [14]. 
Potente and Többen report another recent study devoted to mixing elements that develops empirical models for shearing sections' pressure-throughput behavior and power consumption for numerical design [15]. Finally, a first approach combining the shape-optimization methods inspired by [12] with a mixing-quantifying objective function to design mixing sections is reported in [16]. However, the shape optimizations above share one commonality: they essentially only modify predefined geometry features. This is acceptable in many cases, such as die or mold design, where the final product's shape is close to the initial one (i.e., the shape variation is small). However, topologically flexible shape parameterizations offer far greater optimization gains for mixing-element design, because the optimal geometry might differ significantly from the initial shape. The achievable improvements motivate research on geometry parameterization. Established shape-parameterization approaches include radial basis functions (RBF) [17], surface parameterizations using Bezier surfaces [18], and surface splines [19]. All these methods may be understood as filters that parameterize a geometry by a few variables at the price of a lack of local control. The use of surface splines in shape optimization can also be found in [12, 13]. A concept similar to surface splines is free-form deformation (FFD) [20], which encapsulates the body to deform in a volumetric spline and thereby allows tailoring the spline further towards an efficient optimization. An alternative approach that does not parameterize the geometry as a filter uses the computational grid's mesh nodes as shape parameters [21]. Fortunately, with the advent of neural networks, novel means of shape parameterization offering outstanding flexibility have emerged. Finalizing the introduction, we summarize the most relevant works in this field.
Many neural networks are essentially classi\ufb01ers. These neural networks are non-linear algorithms that are optimized, (i.e., trained), to determine \u2013 possibly counterintuitive \u2013 similarities and dissimilarities to discriminate between objects. One typical use case is image recognition using red-green-blue (RGB) pixel data. Neural networks can, however, be trained to classify features far beyond RGB-pixel values. One example is style transfer or texture synthesis [22]: Instead of aiming at reproducing pixel data, output images are generated in combination with perceptual data. This allows image transformations, where one image\u2019s style is transferred to the motive of another. An extension of these ideas to three-dimensional shapes is \ufb01rst reported by Friedrich et al. [23]. Comparing di\ufb00erent shape representations, the authors \ufb01nd that style transfer is applicable to shapes as well. Our work is especially inspired by Liu et al. [24], who utilize a so-called Variational Shape Learner, that learns 2 \fa voxel representation of three-dimensional shapes. Learning here refers to creating a so-called latent space, a lowdimensional, feature-rich embedding space to represent and morph between various shapes. Even beyond simple shape interpolation, it is shown that \u2013 using the latent representation \u2013 geometry features can be transferred from one to another shape. Successful learning of voxel-based shapes can also be found in [25, 26]. In terms of shape representations, pointcloud-based approaches [27, 28, 29], which utilize coordinates of three-dimensional point sets, as well as polygonal mesh-based approaches with either template meshes [30, 31] or multiple mesh planes [32] are widely adopted. 
While previously mentioned representations show that learning an embedding space of three-dimensional shapes is possible, each work lacks at least one of the following properties: water-tight surfaces, \ufb02exible output resolution, and smooth and continuous surface details. Recent works satisfy the aforementioned properties by learning shapes represented by continuous implicit functions such as signed-distance functions (SDFs) [33] and binary occupancies [34, 35], from which the shapes are extracted as isosurfaces. We exploit the feature richness of this latent space as an aid to reduce the optimization space\u2019s dimension for the given mixing-element shape optimization. The important novelty compared to recent spline-based \ufb01lters is that the neural network \ufb01nds \u2013 possibly counterintuitive \u2013 ways to commonly parameterize a set of signi\ufb01cantly di\ufb00erent shapes irrespective of user-de\ufb01ned design features. This abstraction from the human designer yields low-dimensional yet far more \ufb02exible shape parameterizations, which sets the motivation for the work presented here. This paper is structured as follows: We start in Sec. 2 by summarizing numerical shape optimization and splines, which leads to the concept of geometric \ufb01lters. Based on that, we explain in Sec. 3 how neural networks can be utilized to create suitable geometry parameterizations for shape optimization. In Sec. 4, we review the utilized software components, summarize the proposed framework\u2019s building blocks, and detail the speci\ufb01c di\ufb00erences to spline-based shape optimization setups. The results obtained from the new approach are presented in Sec. 5, including comparisons to current spline-based designs. Finally, we discuss the results and outline further developments in Sec. 6. 2. 
Geometric filters as a component of shape optimization frameworks The following section discusses shape parameterizations as one building block of numerical shape optimization frameworks. We first introduce the general shape optimization problem and then recall spline-based shape parameterizations. Based on this general introduction, we will continue by discussing the specific changes needed to adopt neural nets in Sec. 3. 2.1. Building blocks of numerical shape optimization frameworks The general optimization problem is formulated as the minimization of a cost function $J$ that relates the design variables $\sigma$ to some output; here, the degree of mixing obtained with a specific mixing element (i.e., a particular design). In shape optimization, this minimization problem is typically solved subject to two sets of constraints: (1) inequality, equality, and bound constraints on the design variables, and (2) partial differential equations (PDEs) that need to be fulfilled by each design to qualify as a feasible solution. This results in the following formulation:

$$J : \mathbb{R}^{n_\sigma} \to \mathbb{R}, \tag{1a}$$
$$\arg\min_{\sigma \in \Sigma \subset \mathbb{R}^{n_\sigma}} J(\sigma) \tag{1b}$$
$$\text{s.t.} \quad F(\sigma) = 0 \;\; \text{in} \;\; \Omega(\sigma), \tag{1c}$$
$$\sigma_i \ge \sigma_{\min,i}, \quad i = 1, \dots, n_\sigma, \tag{1d}$$
$$\sigma_i \le \sigma_{\max,i}, \quad i = 1, \dots, n_\sigma. \tag{1e}$$

Here, (1d) and (1e) describe bound constraints on the optimization variables $\sigma$, whereas (1c) denotes the set of governing PDEs. One approach to numerically solve such a PDE-constrained design problem is to alternately compute (1) shape updates and (2) the cost function value. For the studied use case of mixing element design, this results in the computational steps depicted in Fig. 1. First, we update the shape (i.e., the simulation domain covering the mixing element).
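The bound-constrained minimization (1b)-(1e) can be illustrated with a minimal derivative-free sketch. The toy cost function below is a hypothetical stand-in for the expensive flow simulation; in the actual framework, evaluating $J$ requires solving the PDE constraint (1c), which the sketch omits.

```python
import numpy as np

def toy_cost(sigma):
    # Hypothetical stand-in for the flow simulation + mixing objective;
    # the true J requires solving the PDE constraint (1c) for each design.
    return np.sum((sigma - 0.3) ** 2)

def random_search(cost, lo, hi, n_iter=2000, seed=0):
    """Derivative-free minimization respecting the bound constraints (1d)-(1e)."""
    rng = np.random.default_rng(seed)
    best_s, best_j = None, np.inf
    for _ in range(n_iter):
        sigma = rng.uniform(lo, hi)  # sample only within the bounds
        j = cost(sigma)
        if j < best_j:
            best_s, best_j = sigma, j
    return best_s, best_j

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
sigma_opt, j_opt = random_search(toy_cost, lo, hi)
```

Derivative-free sampling of this kind is also the working principle of the DIRECT and genetic algorithms employed later, which replace blind random sampling with systematic partitioning and evolutionary recombination.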
We use this modified computational domain to compute the flow field, from which we afterwards infer the objective (i.e., the cost function).

Figure 1: Building blocks of a shape optimization framework. The shape is updated by a geometry kernel such as FFD. Subsequently, the flow field is computed using this updated shape and given as input to the objective calculator. Based on the current design variables and the design's objective value, the optimization algorithm computes optimized shape parameters and restarts the design loop until at least one termination criterion is met.

The design loop is closed by feeding the cost function value back to the optimization algorithm, which then computes an updated shape. This loop continues until a termination criterion, such as a minimal objective decrease, a maximum number of iterations, or another condition, is met. 2.2. Spline-based shape parameterizations In classical shape-optimization frameworks, the actual shape parameterization, or geometry filtering, is often achieved using splines. The following paragraphs therefore first summarize splines and illustrate how the filtering is achieved. For a detailed description of B-splines, we refer the reader to the book of Piegl and Tiller [19]. After that, we elaborate on boundary splines and FFD as two particular use cases of spline parameterizations. Splines belong to the group of parametric shape representations: each coordinate in the parametric space is mapped to one point in physical space.
This mapping is best understood using a simple B-spline surface, written as:

$$S(\xi, \eta) = \sum_{j=1}^{m} \sum_{i=1}^{n} N_{i,r}(\xi)\, N_{j,p}(\eta)\, B_{i,j}, \tag{2}$$

where $\xi$ and $\eta$ denote the parametric coordinates (two for a surface), $N_{i,r}$ denote the interpolation or basis functions of order $r$ in the first parametric direction, $N_{j,p}$ denote the basis functions of order $p$ in the second parametric direction, and $B_{i,j}$ denotes the support or control points. Figure 2 illustrates the concept and visualizes how single control points affect the geometry.

Figure 2: B-spline representation (blue) obtained from control points (red) for a bi-quadratic B-spline. The upper four control points are rotated, illustrating a possible deformation.

The control grid (i.e., the polygon spanned by the control points) aligns with the $\xi$ and $\eta$ directions, and any parametric coordinate (within the spline's parametric bounds) maps to one point of the blue shape. Consequently, the spline mapping allows controlling an arbitrary number of parametric points by a constant, typically low, number of control points. Being able to control many points with few control points is the basic idea of filtering using splines. One can obtain geometry parameterizations from splines in multiple ways. As shown in Fig. 2, one way uses B-splines as a boundary representation. Such spline-based boundary representations are common in CAD. Using these CAD representations, their control points (i.e., the red points in Fig. 2) can be directly used as design variables in shape optimization. However, this use of the CAD geometry parameterization limits the design process because a given spline may not be able to represent shapes substantially different from the initial design.
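Eq. (2) can be sketched directly via the standard Cox-de Boor recursion. The block below is a minimal, unoptimized reference evaluation (a real implementation would vectorize and cache basis values); the clamped knot vectors and control grid in the usage example are illustrative choices, not values from the paper.

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:  # guard against zero-length knot spans
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * \
               bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * \
                bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def surface_point(xi, eta, ctrl, kx, ke, p, q):
    """Evaluate Eq. (2): S(xi, eta) = sum_j sum_i N_i(xi) N_j(eta) B_ij."""
    n, m, _ = ctrl.shape
    s = np.zeros(3)
    for j in range(m):
        nj = bspline_basis(j, q, eta, ke)
        for i in range(n):
            s += bspline_basis(i, p, xi, kx) * nj * ctrl[i, j]
    return s

# Bi-quadratic patch with clamped knots; all control points identical, so the
# partition-of-unity property makes every surface point equal that control point.
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
ctrl = np.tile(np.array([1.0, 2.0, 3.0]), (3, 3, 1))
pt = surface_point(0.3, 0.7, ctrl, knots, knots, 2, 2)
```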
Consequently, if modifications of the spline's parameterization, such as inserting additional control point lines, are to be avoided, this limitation restricts the use of the CAD spline to use cases that deal with small shape updates, such as die or mold design [12]. An alternative to using boundary B-splines is FFD [20]. In FFD, first, an (often volumetric) spline is constructed around the body to be deformed. Second, this volumetric spline is deformed, and finally, the resulting deformation field is imposed on the enclosed body. Fig. 3 visualizes this process.

Figure 3: Free-form deformation using a volumetric spline (light blue) applied to a mixing element (pink). The control points are omitted in this figure. The embedded shape deforms correspondingly to the embedding, simple, volumetric spline.

The advantage of FFD is that the spline is constructed irrespective of the enclosed shape, which gives complete freedom in choosing degree and resolution. This freedom allows tailoring the spline to the designer's needs (rather than using a given parameterization optimized for CAD usage). FFD is therefore widely applied, one example being the recent work by Lassila and Rozza combining FFD and reduced-order modeling [36]. The novel neural-network-based shape parameterization will be compared against a combination of both methods, boundary B-splines and FFD, in which FFD serves as a generic interface to modify any given CAD spline, which in turn is used to update the boundary of the simulation domain [16]. 3. Shape parametrization using neural networks As explained in Sec. 2, the prime objective of this work is to investigate how neural networks can be used to encode different shapes in a single set of a few continuous variables. To train the network, and thereby determine such a condensed representation, it has to be provided with suitable data.
Suitable here means that the input data (i.e., shapes) are provided in such a way that the network can learn from them. In addition, using the same data format, we need to be able to produce high-quality computational meshes from the neural network's output. In the following, we first introduce deep generative models and then describe a shape representation meeting these two requirements. Finally, we discuss the training data generation and the utilization of neural networks as shape generators. 3.1. Deep generative models With the advent of generative models, an alternative approach to shape parameterization emerged. In this subsection, we review two of the most common generative-model approaches, explain their basic concepts and use, and detail how they can be employed for geometric filtering. Generative models are an application of neural networks and, thus, in essence, classification algorithms. Classification here means the ability to determine whether a certain object is, in some measure, close to a specified input. Conversely to just classifying input, such models can also be used to generate an output that resembles an input. "Resemble," however, needs to be explained: in most applications, the user is not interested in reproducing a given input exactly. Instead, the output should only be like the input (i.e., the output should feature a slight variation). Generative models attempt to achieve this goal via statistical modeling. An excellent guide to generative models, with special focus on the Variational Autoencoder (VAE), is found in [37]. The VAE, like the traditional autoencoder, consists of an encoder and a decoder and aims to reproduce any given data while passing the input through a bottleneck. However, its probabilistic formulation using the so-called "reparametrization trick" provides an exceptional advantage over the traditional autoencoder in practice [38].
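The reparametrization trick can be sketched in a few lines. The toy encoder below is a hypothetical fixed projection standing in for a learned network; the point is only the sampling step $z = \mu + \sigma \cdot \epsilon$, which moves the randomness into $\epsilon$ so that gradients can flow through $\mu$ and $\sigma$ during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Hypothetical encoder: maps an input to mean and log-variance of q(z|x).
    # A real VAE learns these mappings; fixed projections serve as a stand-in.
    mu = 0.5 * x[:2]
    log_var = -np.abs(x[2:4])
    return mu, log_var

def reparametrize(mu, log_var, eps=None):
    """z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, sigma."""
    if eps is None:
        eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = np.array([1.0, -2.0, 0.3, 0.8])
mu, log_var = encode(x)
z = reparametrize(mu, log_var)                              # stochastic latent code
z_det = reparametrize(mu, log_var, eps=np.zeros_like(mu))   # eps = 0 -> z equals mu
```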
The roles of the encoder and the decoder can be interpreted as two separate processes. The encoder learns relations in the given data and encodes them in so-called latent variables, z. Given these latent variables, the decoder, in turn, learns to produce data that is likely to match the input. Once trained, the user can omit the encoder and generate new data directly by sampling the latent space. For details, we refer to [37, 38], and for applications, we refer to [24, 39, 40]. The difference between the spline-based approach and generative models is the choice of latent variables. When the human designer creates a spline parameterization that allows modifying the geometry in the desired way, the optimization variables are the control points, which are intuitively placed in R3 by the designer. Generative models, in contrast, learn a latent space and explicitly assume that the individual latent variables have no intuitive interpretation. As a result, data is compressed from a high-dimensional, intuitive design space, in our case χ ⊂ R3×n, onto a hardly interpretable, feature-dense, low-dimensional latent space Z. In short, generative models use the computational power of neural networks to find a dense classification space that one can sample to produce new data. For the VAE, this process is depicted in Fig. 4a.

Figure 4: Two main concepts of deep generative networks. (a) Variational Autoencoder: provides an input-to-output mapping while passing data through a bottleneck, i.e., the low-dimensional latent representation. (b) Generative Adversarial Network: learns a latent space by inferring representations that enable generating output indistinguishable from real-world data.

A competing concept to VAEs is the Generative Adversarial Network (GAN). Its basic structure is shown in Fig. 4b.
GANs, first introduced by Goodfellow et al. [41], follow a different concept and train two adversarial nets: the generator and the discriminator. The generator is trained to create data that mimics real-world data, while the discriminator tries to determine whether or not a dataset was artificially created. In a minimax fashion, the generator's learning goal is to maximize the probability of the discriminator making a wrong decision. GANs have proven to be an excellent tool for shape modeling. Wu et al., for example, apply a GAN to 3D shape generation and demonstrate its superior performance compared to three-dimensional VAEs. They even use a GAN to reconstruct three-dimensional models from two-dimensional images, based on a VAE output that is used to infer a latent representation for these images [42]. As in [39], Wu et al. also demonstrate the ability to apply shape interpolation and shape arithmetic to the learned latent representation. More recently, Ramasinghe et al. [43] utilize a GAN to model high-resolution three-dimensional shapes using point clouds. 3.2. Implicit shape representation The neural network learns a mapping between the low-dimensional latent space and a three-dimensional body. To construct such a mapping, we first need to define how to represent our shapes (i.e., define what data the neural network actually has to learn). Before presenting the approach chosen in this work, we review standard methods and their limitations. Three ways of shape representation are common in machine learning: (1) voxels, (2) point clouds, and (3) meshes [33]. The problem with meshes is that the mesh topology also prescribes the possible shape topologies. Point clouds, in contrast, can represent arbitrary topologies but prescribe a fixed resolution. Finally, voxels can represent arbitrary topologies and vary in resolution, but, unfortunately, their memory consumption scales cubically with the resolution.
Because of these drawbacks, the network utilized in this work learns SDFs, following a network configuration originally proposed by Park et al. [33]. An SDF provides, for every point in space, the distance to the closest point on the to-be-encoded surface; the sign encodes whether the point lies inside or outside the surface. Using such continuous SDF data, a shape is then extracted, at an arbitrary resolution suitable for meshing, as the zero-valued isosurface. 3.3. Training set generation As mentioned in Sec. 3.1, training a neural network requires a set of source shapes. However, to the authors' knowledge, no shape library exists for mixing elements in single-screw extruders. Thus, we explain an approach to building custom training sets. To generate a suitable training set, we first select the mixer types that should be considered, pin and pineapple mixers in our case. From this choice, we derive a total of four basis shapes (triangle, square, hexagon, and cylinder; cf. Fig. 5), clearly too few for successful and meaningful training. The basis shapes are therefore varied using (a combination of) FFDs to gather an appropriate number of training shapes. Examples of applied deformations are given in Fig. 5. In total, 2659 training shapes are generated.

Figure 5: Examples of basis shapes and applied deformations: (a) square base, (b) cylinder base, (c) shrink along x, (d) translate top x, (e) translate top y, (f) expand middle, (g) expand top, (h) twist top. In total, a triangle, a square, a cylinder, and a hexagon are used as basis shapes.

To obtain SDF training data from these shapes, we follow the approach by Park et al. [33] and first normalize each shape to fit into a unit sphere. Then, we sample 500,000 spatial points and their SDF value pairs using the trimesh library [44].
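The sampling of point/SDF pairs can be sketched as follows. The paper samples real mixer meshes via trimesh; here, an analytic sphere SDF serves as a stand-in primitive so the sketch stays self-contained, and the sample count is reduced for illustration.

```python
import numpy as np

def sdf_sphere(points, center=np.zeros(3), radius=0.5):
    """Analytic signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Mimic the training-data generation: draw spatial points within the unit cube
# and pair each with its SDF value (stand-in for trimesh-based mesh sampling).
rng = np.random.default_rng(42)
pts = rng.uniform(-1.0, 1.0, size=(500, 3))
samples = np.column_stack([pts, sdf_sphere(pts)])  # rows of (x, y, z, sdf)
```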
3.4. Shape generator As explained, the shape generator's task is to provide a mixing element given a set of optimization variables. The shape generator in this work is thus built around the neural network presented in the following. The utilized neural network is based on the DeepSDF auto-decoder [33]: a feed-forward network with ten fully connected layers, where each of the eight hidden (i.e., internal) layers has 256 neurons and ReLU activation functions. In contrast to auto-encoders, the auto-decoder trains only the decoder, simultaneously optimizing the network parameters and the latent code during training. We investigate four, eight, and sixteen as latent dimensions, l. The input layer consists of these l neurons concatenated with a three-dimensional query location. The output layer has only one neuron with a tanh activation function. For details on the chosen SDF network, we again refer to [33]. To train the network, we use the ADAM optimization algorithm [45]. We follow a progressive learning-rate schedule with initial rates $\varepsilon_0 = 5 \times 10^{-4}$ for $\theta$ and $\varepsilon_0 = 1 \times 10^{-3}$ for $z$, and a decay

$$\varepsilon = \varepsilon_0 \cdot 0.5^{\lfloor e/500 \rfloor}, \tag{3}$$

where $e$ denotes the current training iteration (i.e., epoch) and $\lfloor \cdot \rfloor$ denotes integer division. The network's training can be seen as the parametrization of the shapes. To extract isosurfaces from the trained network's SDF output (i.e., to generate new mixing elements), we sample a discrete SDF field and apply a marching cubes algorithm [46] in the implementation of [47]. Finally, we apply automated meshing using TetGen [48] to obtain a simulation domain, as depicted in Fig. 6, including the new mixing element. 4. The developed shape optimization framework In general, our framework consists of three building blocks: (1) shape generator, (2) flow solver, and (3) optimizer, described in the following.
Starting with an initial set of optimization variables, σ0, the shape generator creates a new mixing element Ω(σ0). The flow solver then computes the flow field around this mixing element, which the optimizer evaluates to determine the flow's degree of mixing. Based on the obtained mixing value and a comparison to previous iterations, an optimization algorithm determines a new set of optimization variables. This sequence is re-run iteratively until either a maximum number of iterations is reached or another termination criterion, typically a sufficiently good objective value or an insignificant objective decrease, is met. 4.1. Flow solver and simulation model The flow solver and simulation model are identical to those introduced in [16] and are therefore only summarized in the following. The flow field induced by the various mixing elements is obtained by solving the steady, incompressible, non-isothermal Navier-Stokes equations using a Carreau viscosity model and WLF temperature correction. The governing equations are discretized with linear stabilized finite elements and solved using a Newton linearization and a GMRES iterative solver. Subsequently, we solve a set of advection equations using the identical configuration to mimic particle tracking, which we use as input to our objective function. All methods are implemented in an in-house flow solver. We make two simplifications to our simulation model (i.e., the single-screw-extruder flow channel): First, we simulate the flow around only a single mixing element instead of simulating the entire mixing section. Second, we assume barrel rotation in an unwound flow channel section. Both assumptions significantly reduce computational costs while still allowing a qualitative assessment of mixing improvement.
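The generate-solve-evaluate loop described above can be sketched as a skeleton. All four components are hypothetical stand-ins (the real generator is the DeepSDF network, the real solver a finite-element code); only the control flow mirrors the framework.

```python
import numpy as np

def generate_shape(sigma):
    # Stand-in for the neural-network shape generator (latent code -> mesh).
    return {"latent": np.asarray(sigma, dtype=float)}

def solve_flow(shape):
    # Stand-in for the finite-element flow solution around the mixing element.
    return {"field": float(np.sum(shape["latent"] ** 2))}

def evaluate_mixing(flow):
    # Stand-in for the particle-tracking-based objective (lower = better).
    return (flow["field"] - 1.0) ** 2

def optimize(sigma0, n_iter=200, step=0.1, seed=0):
    """Derivative-free design loop: generate shape, solve, evaluate, perturb."""
    rng = np.random.default_rng(seed)
    sigma, best = np.asarray(sigma0, dtype=float), np.inf
    for _ in range(n_iter):
        cand = sigma + step * rng.standard_normal(sigma.shape)
        j = evaluate_mixing(solve_flow(generate_shape(cand)))
        if j < best:  # hill climbing accepts only improvements
            sigma, best = cand, j
    return sigma, best

sigma_opt, best_obj = optimize([0.5, 0.5])
```

In the actual framework, the perturbation step is replaced by DIRECT or SOGA updates provided by Dakota, and each objective evaluation involves meshing and a full flow simulation.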
To assess mixing, we mimic particle tracking by solving a series of advection equations, yielding an inflow-outflow mapping for particles advected by the melt flow. We process this advection information by subdividing a portion of the inflow domain into smaller rectangular subdomains. In each of these rectangles, we select a set of particles such that the particle set's bounding box coincides with the rectangular subdomain. Then, we follow each particle as it is conveyed through the domain, store each particle's position at the outflow, and finally construct a convex hull at the outflow around the same sets of points. Averaging the convex hulls' length increments between inflow and outflow yields a simple yet robust objective function inspired by interfacial area measurements. Using this objective function, we found that such a simulation model provides a good balance between accuracy and computational efficiency [16]. Fig. 6 depicts the chosen simulation domain.

Figure 6: Simulation domain (0.0075 m x 0.0315 m x 0.02405 m) with a single mixing element, resembling the flow around a single mixing element in the unwound screw channel. Flow conditions are shown in blue using a barrel rotation setup. For a detailed description of the objective function and governing equations, we refer the reader to [16].

4.2. Optimizer We utilize the open-source optimization library Dakota [49] to drive the design process. Two different algorithms are selected and described in the following. The first is the DIviding RECTangles (DIRECT) algorithm, first introduced in [50]. DIRECT belongs to the category of branch-and-bound methods and uses n-dimensional trisection to iteratively partition the design space. To find minima, it follows the approach of Lipschitzian optimization, which identifies the design-space partition to be sampled further by evaluating a lower bound on the objective value in each partition; the partition with the lowest lower bound is chosen and sampled further. DIRECT modifies that concept and computes multiple lower bounds that weight the current sampling value (i.e., the objective value in the partition center) against the partition size. This promotes further sampling of partitions with good objective values while still effectively sampling large areas of unexplored design space. Thereby, DIRECT identifies multiple potentially optimal partitions and allows for global convergence. The second algorithm utilized in this work is the single-objective genetic algorithm (SOGA) introduced (as its multi-objective variant) in the JEGA package [51]. As a genetic algorithm, it solves optimization problems by recreating biological evolution: each optimization run consists of numerous samples referred to as the population, and members of the population are paired and recombined in ways that successively improve the fitness (i.e., the objective value). Noteworthy for this work, the recreation of evolution includes a mutation step, which randomly modifies or re-initializes design variables. The added randomness allows the algorithm to escape locally convex regions of the design space. Such evolutionary approaches generally converge more slowly, yielding higher computational costs, but are often able to find better results than non-evolutionary algorithms. For both DIRECT and SOGA, we rely on the default convergence criterion and a maximum of 1000 iterations as a termination criterion. The complete computational framework is depicted in Fig. 7.
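The hull-based mixing measure of Sec. 4.1 can be sketched in 2D. The sketch uses Andrew's monotone-chain convex hull and scores a particle set by the growth of its hull perimeter between inflow and outflow; this is a simplified stand-in for the paper's averaged hull-length increments, and the sign convention (more stretching gives a lower, i.e., better, value) matches the minimization setting.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return np.array(lower[:-1] + upper[:-1])

def hull_length(points):
    """Perimeter of the convex hull of a 2D point set."""
    h = convex_hull(points)
    return float(np.sum(np.linalg.norm(np.roll(h, -1, axis=0) - h, axis=1)))

def mixing_objective(inflow_sets, outflow_sets):
    """Negative mean hull-perimeter growth: more stretching -> lower value."""
    growth = [hull_length(b) / hull_length(a)
              for a, b in zip(inflow_sets, outflow_sets)]
    return -float(np.mean(growth))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
stretched = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], dtype=float)
obj = mixing_objective([square], [stretched])
```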
Figure 7: Pipeline with the building blocks of the proposed computational framework (BS-FFD, ADAM/DeepSDF, FEM, OBJ, OPT). The process is split into two parts: a one-time, computationally intensive training part (offline) and the actual optimization, including the quick filter evaluation (online). To create a training set, FFD is applied to a set of basis shapes. Subsequently, we train the network using the ADAM optimizer, which concludes the offline phase. During optimization (i.e., the online phase), first, a new shape is created from the neural net. Then, a new computational mesh is created around this shape, and based on FEM simulations, the new design's mixing is assessed. Depending on the objective value, the optimization loop is re-initiated using altered latent variables. Building blocks that are modified compared to the general, geometry-kernel-based approach (cf. Fig. 1) are highlighted in blue.

5. Numerical results This section presents the results obtained using shape parameterizations from neural networks. Sec. 5.1 focuses on the results of the offline phase, i.e., the training of the shape-representing neural network. In particular, we discuss the differences in the constructed latent space depending on its dimension, using the widely used data-reduction technique t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize the learned, n-dimensional shape parameterization. In Sec. 5.2, we then present the mixing shapes obtained using our shape optimization approach. 5.1. Latent space dimension One of the most important choices is the target dimension of the embedding space, l.
In all established filtering mechanisms, such as radial basis functions, free-form deformation, CAD-based approaches, and even mesh-based methods, the practitioner has to balance improved flexibility against computational demand. Despite the potentially more compact and dense embedding achievable with neural networks, this trade-off remains relevant and manifests itself in the dimension of the chosen latent space. Previous works utilized only a very small number of optimization variables: Elgeti et al. vary between only one and two parameters [12], while other works by the authors showed that good results are also obtained with six design variables [16]. To obtain a competitively small number of optimization variables, we investigate embedding spaces of dimension four, eight, and sixteen, respectively, and compare against a free-form-deformation approach using nine variables. Even though the obtained latent space, as discussed in Sec. 3.1, in general lacks an intuitive interpretation, we are still interested in evaluating the quality of the learned embedding. We do so in three different ways, presented in the following: (1) we apply a data-reduction technique that allows us to visually investigate the latent space; (2) we interpolate between the latent representations of two training shapes and compare with the expected result; (3) we apply shape arithmetic, i.e., we isolate a specific modification of a basis shape and impose it onto another basis shape to inspect whether such features are also recognized by the latent space. (1) Visualizing the high-dimensional latent space requires a dimension-reduction technique. An intuitive choice might be principal component analysis (PCA), but PCA primarily tries to preserve global structure: data points that are far apart in the high-dimensional data will also be drawn far apart in the 2D plot. Conversely, the correlation between similar points is often lost.
This loss of correlation between similar data points is problematic since we aim to investigate whether, from a human's perspective, similar shapes are represented by similar latent codes. The problem is, however, alleviated by t-SNE [52]. Using t-SNE, we plot each training shape's latent code, and, due to the preservation of local similarities, similar latent codes form clusters in the scatter plot. These clusters can then be sampled to verify that the latent-code clusters resemble similar shapes. t-SNE plots for all three latent dimensions, four, eight, and sixteen, are shown in Fig. 8.

Figure 8: t-SNE plots obtained using different latent space dimensions: (a) training set encoded in 4D latent code, (b) in 8D latent code, (c) in 16D latent code. An increased latent dimension results in increased classification performance of the neural net. Each color corresponds to one basis training shape: green corresponds to the triangular base, dark blue to the cube, red to the hexahedron, and light blue to a tessellated version of the cylinder.

Fig. 8 shows how an increased latent dimension leads to increased classification performance of the neural net. Specifically, the four chosen basis shapes are clustered with their respective modifications more and more densely as the latent dimension increases. This improved classification performance indicates that the neural net was able to learn the similarities between similar shapes properly for the cases of eight and sixteen dimensions. (2) In addition to comparing clusters of similar shapes in physical and latent space, we also investigate how well the latent space is suited to represent shapes that were not included in the training set. We do so by interpolating between two shapes. Fig. 9 shows the obtained results for all three latent spaces.
Figure 9: Shape interpolation using different latent dimensions: (a) 4D shape interpolation, revealing artifacts in the reconstructed shapes, i.e., bad quality of the latent representation; (b) 8D shape interpolation with satisfactory results; (c) 16D shape interpolation, which brings only slight improvement in shape representation compared to the eight-dimensional latent representation. An interpolated shape is obtained using $z_{\text{interp}} = z_a + \frac{z_b - z_a}{N+1}\, n$, with $z_a$ and $z_b$ denoting the latent codes of shapes $a$ (here the undeformed cube) and $b$ (here the twisted cube). With $N = 20$, the shown examples represent $n \in \{1, 3, 7, 13, 17, 20\}$.

Consistent with the observed lack of classification ability of the four-dimensional latent space, Fig. 9a shows that interpolation between shapes yields unsatisfactory results; in particular, shape defects are observed. This might result from the fact that the twisted cube is not at all well represented in the latent space, as seen in the rightmost figure. However, both the eight- and the sixteen-dimensional latent spaces show a visually smooth transition between the regular and the twisted cube shape. (3) The above two analyses investigated the overall classification ability of the neural net and the suitability to represent intermediate shapes. A final test is given by shape arithmetic. Using arithmetic operations applied to the latent code, we extract an exemplary feature, here a stretching along the center plane, by taking the component-wise difference of a stretched and a regular cube. This difference represents center-plane expansion and can then be applied to any other basis shape, here the undeformed hexahedron. Fig. 10 shows the resulting shapes. Again, the four-dimensional latent space performs significantly worse since the basis shapes are not represented in detail.
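Both latent-space probes, interpolation and shape arithmetic, are plain vector operations on the latent codes and can be sketched directly. The random 8D codes below are stand-ins for trained DeepSDF codes; in the actual pipeline, each resulting code would be decoded into an SDF and extracted as an isosurface.

```python
import numpy as np

def interpolate_latent(z_a, z_b, N=20):
    """Linear latent interpolation: z_n = z_a + n * (z_b - z_a) / (N + 1)."""
    return [z_a + n * (z_b - z_a) / (N + 1) for n in range(1, N + 1)]

def transfer_feature(z_modified, z_base, z_target):
    """Shape arithmetic: impose the feature (z_modified - z_base) onto z_target."""
    return z_modified - z_base + z_target

# Toy 8D latent codes standing in for trained shape encodings.
rng = np.random.default_rng(1)
z_cube, z_twisted, z_hex = rng.standard_normal((3, 8))
path = interpolate_latent(z_cube, z_twisted)          # 20 intermediate codes
z_thick_hex = transfer_feature(z_twisted, z_cube, z_hex)
```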
Contrary to the interpolation case, the sixteen-dimensional latent space now shows better results than the eight-dimensional case. All three investigations – t-SNE plots, interpolation, and arithmetic – indicate that the four-dimensional latent space fails to produce a suitable latent representation. It should be noted, though, that in view of the doubled number of optimization variables, the attainable gains of using sixteen latent variables compared to eight appear unattractively small.

Figure 10: Shape arithmetic for different latent dimensions: (a) 4D shape arithmetic with significant representation errors, especially for the hexahedron and the final shape; (b) 8D shape arithmetic with improved representation compared to the 4D latent code but still yielding slightly imprecise results; (c) 16D shape arithmetic showing perfect resemblance of all training shapes and also a clean resulting shape. A linear thickening in the center plane is imposed on a hexagonal base body by evaluating the latent code z_E4thick − z_E4 + z_E6, where z_E4thick, z_E4, and z_E6 denote the latent codes of the thickened cube, the regular cube, and the regular hexahedron, respectively.

5.2. Optimization results

To study the effects of the novel shape parameterization technique, we compare configurations that vary in latent space dimensions and optimization algorithms as shown in Tab. 1. Furthermore, we require all generated shapes to have the exact same volume as the undeformed rhombic mixing element utilized in the spline-based optimization (cf. Sec. 4.1). We choose such scaling to avoid convergence towards merely enlarged shapes that yield good objective values but do not deliver helpful insights. Tab. 1 lists the obtained results, and Tab. 2 gives insights into the corresponding computational effort. The obtained best shapes are shown in Fig. 11.
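The feature transfer from the Fig. 10 caption is simple vector arithmetic on latent codes. A minimal sketch with toy 8-D vectors (the real codes would come from the trained encoder):

```python
import numpy as np

# Hypothetical 8-D latent codes; in the paper these come from the encoder.
rng = np.random.default_rng(1)
z_cube       = rng.normal(size=8)                                  # regular cube (E4)
z_cube_thick = z_cube + np.array([0.5, 0, 0, 0, 0, 0, 0, 0])       # thickened cube
z_hexahedron = rng.normal(size=8)                                  # regular hexahedron (E6)

# Extract the center-plane thickening feature and transfer it, as in Fig. 10:
# z = z_E4thick - z_E4 + z_E6
z_hex_thick = z_cube_thick - z_cube + z_hexahedron

# The transferred feature is exactly the extracted difference vector.
print(np.allclose(z_hex_thick - z_hexahedron, z_cube_thick - z_cube))  # True
```

Decoding `z_hex_thick` would then yield a hexahedron carrying the thickening feature, provided the latent space is smooth enough, which is what Fig. 10 probes for the three latent dimensions.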
Table 1: Different optimization algorithms and latent space dimensions compared by best objective value, contrasted to a nine-dimensional FFD. Smaller values correspond to better results using the aforementioned objective formulation 1b.

                4          8          16         FFD
    SOGA      -0.0726    -0.0710    -0.0750       –
    DIRECT    -0.0645    -0.0738    -0.0769    -0.0422

Table 2: Different optimization algorithms and latent space dimensions compared by the iteration at which the best objective value was found (Optimal) and the number of total iterations; contrasted to a nine-dimensional FFD.

                    4                 8                 16               FFD
              Optimal  Total    Optimal  Total    Optimal  Total    Optimal  Total
    SOGA        768     1000      752     1000      534     1000       –       –
    DIRECT       96      113      129      143      138      149       16      67

Comparing the optimized geometries shows interesting results from a plastics processing point of view. On the one hand, the triangular shape and a mixing element that widens towards the top appear advantageous. One should note, however, that these deformations do not correspond to a general optimum for plastics engineering but are merely the best possible deformations within the range permitted by the training set. Choosing an even more diverse training set is expected to yield further improved shapes. More relevant for this study (with its focus on neural nets as shape parameterizations) is the comparison of convergence, the achieved mixing, and the differences and similarities in the results. Tab. 1 shows that for the chosen shape optimization problem, the DIRECT algorithm has no disadvantages compared to SOGA and converges reliably. Simultaneously, the shape parameterization's dimensionality appears to influence the optimization because the four- and eight-dimensional neural networks lead to optimized triangles.
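For readers who want to work with the Tab. 1 numbers programmatically, they can be expressed as a small lookup; the values below are copied verbatim from Tab. 1, and the key layout is my own choice for illustration:

```python
# Objective values from Tab. 1 (smaller is better). Keys are
# (algorithm, parameterization); 4/8/16 are latent dimensions.
results = {
    ("SOGA", 4): -0.0726, ("SOGA", 8): -0.0710, ("SOGA", 16): -0.0750,
    ("DIRECT", 4): -0.0645, ("DIRECT", 8): -0.0738, ("DIRECT", 16): -0.0769,
    ("DIRECT", "FFD"): -0.0422,
}

# Best configuration = smallest (most negative) objective value.
best = min(results, key=results.get)
print(best, results[best])  # ('DIRECT', 16) -0.0769
```

This reproduces the paper's observation that DIRECT with the sixteen-dimensional latent parameterization achieves the best objective value, well ahead of the FFD baseline.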
In contrast, the sixteen-dimensional case renders the top-expanded quadrilateral optimal. Common to all results is a skewed and slightly twisted geometry.

Figure 11: Optimization results obtained for all different latent codes and optimization algorithms compared to an existing FFD-based shape optimization: (a) FFD-DIRECT, (b) 4D-DIRECT, (c) 4D-SOGA, (d) 8D-DIRECT, (e) 8D-SOGA, (f) 16D-DIRECT, (g) 16D-SOGA.

A noticeable difference between the spline-based and the neural-net-based shape optimization is that the neural-net-based shape parameterization encodes several shapes, of which multiple may mix the melt equally well. Because of this, from the practitioner's point of view, it makes sense not only to look at the best result but rather to compare numerous equally optimal designs and derive design rules from that comparison. Fig. 12 shows such a comparison and reveals one advantage of evolutionary algorithms. While the DIRECT algorithm converges locally and, therefore, its ten best designs are geometrically similar, the generative nature of SOGA allows the practitioner to identify possibly equally well-working designs (cf. Figs. 12f and g) amongst which the most economical option may be chosen. Such a choice allows one to account for further restrictions regarding screw cleaning, manufacturability, and others.

Figure 12: Ten best shapes obtained from the 16D SOGA optimization, with objective values (a) J = −0.0750, (b) J = −0.0712, (c) J = −0.0656, (d) J = −0.0633, (e) J = −0.0602, (f) J = −0.0601, (g) J = −0.0595, (h) J = −0.0592, (i) J = −0.0562, (j) J = −0.0535. Except for the 6th-best shape (h), all shapes feature an expanded top, a similar orientation, and appear widened in the y direction (i.e., perpendicular to the main flow direction).

6. Discussion and outlook

In this work, we studied the applicability of generative models as shape parameterizations. We chose numerical shape optimization of dynamic mixing elements as a use case.
The developed shape parameterization's fundamental principle is to exploit neural nets' ability to construct a dimension reduction onto a feature-dense, low-dimensional latent space. First, the nature of this low-dimensional space is studied using t-SNE plots. These plots give visual evidence that the generative models create smooth shape parameterizations that enable the use of classical, heuristic optimization algorithms. Comparing genetic to such heuristic algorithms, Tab. 2 reveals that the SOGA algorithm required significantly more iterations (i.e., simulations). Additionally, Tab. 1 shows that in the studied examples, this additional computational effort is not reflected proportionally in improved mixing. One might expect the SOGA algorithm's random nature to be better suited to explore the hardly interpretable latent space. However, the results suggest a smoothness of the learned parameterization that renders deterministic methods like DIRECT equally well suited for optimization in the latent space. In addition to the general applicability of generative models, we study the influence of different latent dimensions. While the actual optimization results appear pleasing, Figs. 9 and 10 suggest that very compressed (i.e., four-dimensional) latent spaces should not be used for optimization purposes. Analogously, no direct preference between the eight- and sixteen-dimensional results can be drawn from the optimization results. However, Fig. 10 indicates that higher-dimensional latent spaces yield more precise shape encoding, which seems generally preferable. Since the overall number of iterations until convergence of the optimization problem is comparable, the sixteen-dimensional parameterization might be chosen over the eight-dimensional variant. As intended, a fundamental improvement over established low-dimensional shape parameterizations is that the new approach covers a much broader design area in a single optimization.
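The point that a smooth latent parameterization makes deterministic and evolutionary optimizers comparably effective can be illustrated on a toy smooth objective. The SciPy routines below are stand-ins for the paper's SOGA and DIRECT implementations, and the quadratic objective with its minimum at z = 0.3 is entirely hypothetical:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Smooth stand-in objective over an 8-D "latent space"; minimum at z = 0.3.
def objective(z):
    return float(np.sum((np.asarray(z) - 0.3) ** 2))

bounds = [(-1.0, 1.0)] * 8

# Evolutionary optimizer (stand-in for SOGA).
evo = differential_evolution(objective, bounds, seed=0, maxiter=50, tol=1e-8)

# Deterministic optimizer (stand-in for DIRECT), started from the box center.
det = minimize(objective, x0=np.zeros(8), bounds=bounds, method="L-BFGS-B")

print(evo.fun, det.fun)  # both essentially zero on this smooth objective
```

On such a smooth landscape both approaches reach the same optimum; the evolutionary method simply spends far more function evaluations, mirroring the iteration counts in Tab. 2.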
Since its fundamental concept is to encode diverse shapes, optimizations lead to numerous, nearly equally optimal shapes. Consequently, this novel approach extends existing methods in that it allows the practitioner to derive the design features that enhance mixing most, and to do so for a wide range of basis shapes. Therefore, rather than creating complex shape parameterizations, the crucial step towards an optimal design reduces to the creative definition of a training set. Finally, a significant challenge in using neural-net-based shape parameterization is proper size control of the output shapes. This work implements a volume constraint to avoid simple size maximization of the mixing elements. However, a reformulated objective, such as penalizing pressure loss, may circumvent such adverse designs. Alternatively, a scale factor may be added as an additional optimization variable. Both size control and efficient training set generation may be topics of further studies. Given the presented results, utilizing the feature-rich latent representations and their immense generalization power has significant potential to improve established industrial designs.

7. Acknowledgements

The German Research Foundation (DFG) funding under the DFG grant "Automated design and optimization of dynamic mixing and shear elements for single-screw extruders" and priority program 2231 "Efficient cooling, lubrication and transportation — coupled mechanical and fluid-dynamical simulation methods for efficient production processes (FLUSIMPRO)", project number 439919057, is gratefully acknowledged. Implementation was done on the HPC cluster provided by the IT Center at RWTH Aachen. Simulations were performed with computing resources granted by RWTH Aachen University under projects jara0185 and thes0735." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file