diff --git "a/abs_29K_G/test_abstract_long_2405.01103v1.json" "b/abs_29K_G/test_abstract_long_2405.01103v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01103v1.json" @@ -0,0 +1,97 @@ +{ + "url": "http://arxiv.org/abs/2405.01103v1", + "title": "LLM Security Guard for Code", + "abstract": "Many developers rely on Large Language Models (LLMs) to facilitate software\ndevelopment. Nevertheless, these models have exhibited limited capabilities in\nthe security domain. We introduce LLMSecGuard, an open-source framework that\noffers enhanced code security through the synergy between static code analyzers\nand LLMs. LLMSecGuard aims to equip practitioners with code solutions that are\nmore secure than the code initially generated by LLMs. It also benchmarks LLMs,\nproviding valuable insights into the evolving security properties of these\nmodels.", + "authors": "Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.CR" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Many developers rely on Large Language Models (LLMs) to facilitate software\ndevelopment. Nevertheless, these models have exhibited limited capabilities in\nthe security domain. We introduce LLMSecGuard, an open-source framework that\noffers enhanced code security through the synergy between static code analyzers\nand LLMs. LLMSecGuard aims to equip practitioners with code solutions that are\nmore secure than the code initially generated by LLMs. It also benchmarks LLMs,\nproviding valuable insights into the evolving security properties of these\nmodels.", + "main_content": "INTRODUCTION The use of Large Language Models (LLMs) such as ChatGPT and Copilot has become popular for software development activities such as coding, design, comprehension, etc. [7, 17]. Nevertheless, hallucination, i.e., \u201cpresenting incorrect information as if it is correct\u201d, poses serious challenges for LLM users [18]. This issue is more prevalent in domains where reliable training content is scarce, such as in the realm of code security. Indeed, recent studies have shown that code models are widely adopted for code generation [12], but they have limited capabilities in the software security domain [2, 25]. Therefore, a vulnerable code piece that an LLM mistakenly recommends as a secure solution could compromise the entire system\u2019s security if it is used without enough scrutiny. We introduce LLMSecGuard, a framework designed to extensively examine the security properties of LLMs and leverage their analytical capabilities for secure code development. It applies static security analysis on LLM-generated code to uncover potential security issues and guides LLMs in resolving such issues in the code. EASE 2024, June 18\u201321, 2024, Salerno, Italy 2024. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/xxx Moreover, LLMSecGuard assesses the security properties of LLMs and benchmark them across different CWEs. In summary, LLMSecGuard contributes to more secure software development. Unlike existing code assistants, developers can integrate unlimited LLMs and code analysis engines into this framework through REST APIs. LLMSecGuard is open-source and publicly available on GitHub.1 The remainder of this paper is structured as follows. In Section 2, we motivate this work. 
In Section 3, we introduce LLMSecGuard, and in Section 4, we explain its two use cases. In Section 5, we present related work. In Section 6, we outline our plans, and in Section 7, we conclude this paper. 2 MOTIVATION Security issues are pervasive in multiple domains [4], from mobile applications [9, 11] and web servers [10], to critical software systems [27]. There are program analysis tools designed to uncover security issues, but studies indicate their limitations [5, 6], as well as their lack of popularity among developers [14]. Unfortunately, the security landscape may not improve as we witness the popularity of language models for code generation [12]. It is concerning that developers place undue trust in these models, which are known to generate insecure code examples [2, 25]. In the following, we explain how we could iteratively ask ChatGPT 3.5 to generate secure code examples for symmetric encryption. Encryption plays a crucial role in our digital world, but its correct adoption has proven to be challenging for developers [15]. For instance, the analysis of 489 open-source Java projects showed that 85% of cryptography APIs are misused [16]. Importantly, existing tools that identify and fix such issues do not fulfill developers\u2019 expectations [1, 28]. Consider the StackOverflow post ID 3451670, which suffers from security issues such as \u201cECB encryption mode\u201d, \u201cstatic or constant key\u201d, \u201cstatic salt for key derivation\u201d, and \u201chardcoded password\u201d. When we presented this inquiry to ChatGPT, requesting a \u201csecure Java code example\u201d, the resulting response still contained identical security issues. We then explicitly instructed ChatGPT that \u201cI expect a secure encryption algorithm, a secure encryption mode, and a secure Initialization Vector (IV). Do not use hard-coded secrets, suggest best practices, and make sure that your example provides confidentiality and integrity\u201d. Listing 1 presents the response.2 1https://github.com/aryakvnust/LLMSecGuard 2https://chat.openai.com/share/8ce9eb6e-88b7-44dc-807a-51d04e7d446c arXiv:2405.01103v1 [cs.SE] 2 May 2024 \fPreprint Version EASE 2024, June 18\u201321, 2024, Salerno, Italy Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, and Mohammad Ghafari 1 public class SecureEncryptionExample { 2 public static void main(String[] args) throws Exception { 3 String username = \"bob@google.org\"; 4 String password = \"Password1\"; 5 String secretID = \"BlahBlahBlah\"; 6 String SALT2 = \"deliciously salty\"; 7 // Generate a secure random IV (Initialization Vector) 8 SecureRandom secureRandom = new SecureRandom(); 9 byte[] iv = new byte[12]; 10 secureRandom.nextBytes(iv); 11 // Derive a key from the password using a secure method 12 byte[] key = deriveKey(SALT2, username, password); 13 // Create a SecretKey object using the derived key 14 SecretKey secretKey = new SecretKeySpec(key, \"AES\"); 15 // Instantiate the cipher with AES in GCM mode 16 Cipher cipher = Cipher.getInstance(\"AES/GCM/NoPadding\"); 17 ... 
18 } 19 private static byte[] deriveKey(String salt, String username, String password) throws Exception { 20 // Concatenate salt, username, and password 21 String combined = salt + username + password; 22 // Use a secure hash function (SHA-256) to derive a key 23 MessageDigest md = MessageDigest.getInstance(\"SHA-256\"); 24 byte[] hash = md.digest(combined.getBytes()); 25 return Arrays.copyOf(hash, 16); 26 } 27 } Listing 1: ChatGPT\u2019s improved response to Post ID 3451670 ChatGPT identified the constant key, and it correctly suggested the secure \u201cGCM\u201d encryption mode as a best practice. However, violations such as \u201cstatic salt for key generation\u201d and \u201chardcoded password\u201d were in the response without any proper warning. The suggested code, at line 23, included the \u201cSHA-256\u201d hash function for key generation, which is not a best practice. Anyway, when we asked about the security risk at line 6 (i.e., static salt), ChatGPT responded: \u201cUsing a hard-coded string as a salt is generally not considered a good practice for security. Ideally, salt should be unique for each user and should be randomly generated for each password. Using a constant salt for all users could make the system more vulnerable to certain types of attacks, such as rainbow table attacks\u201d. It then provided the code example in Listing 2 as a replacement. 1 // Generate a secure random salt 2 SecureRandom secureRandom = new SecureRandom(); 3 byte[] salt = new byte[16]; 4 secureRandom.nextBytes(salt); 5 // Convert the salt to a Base64-encoded string for storage 6 String SALT2 = Base64.getEncoder().encodeToString(salt); Listing 2: ChatGPT\u2019s suggestion to replace a static salt Subsequently, we asked, \u201cAre there any other security risks in your provided code?\u201d. It failed to detect the hardcoded password, but at this stage, it correctly suggested a key derivation function (KDF) instead of the hash function for generating a password-based key. Upon a direct inquiry about the line where a hardcoded password existed, it offered a secure suggestion. Through proper interactions with ChatGPT, specifically by listing potential security issues and their locations within the code, it is feasible to harness ChatGPT\u2019s power to generate secure code examples. However, novice developers may not possess the necessary expertise to provide such inputs. Hence, we introduce LLMSecGuard, a framework that links LLMs and static analysis tools to overcome this limitation. 3 LLMSecGuard We introduce LLMSecGuard, an open-source framework that offers enhanced code security through the synergy between code security analyzers and LLMs. The primary objective of LLMSecGuard is (i) to enhance the security of LLM-generated code, and (ii) to benchmark the security properties of LLMs. It adopts a RESTful architecture, implemented in Python using Django and Flask, and uses SQLite for data persistence. LLMSecGuard is publicly available on GitHub.3 LLMSecGuard Prompt Agent Prompt Security Agent Benchmark Agent Potential Vulnerabilities Report Benchmark Prompts Benchmark uses Analysis Engine LLM Figure 1: LLMSecGuard\u2019s components Figure 1 illustrates the schematic architecture of LLMSecGuard, comprising three main components: \u201cPrompt Agent\u201d, \u201cSecurity Agent\u201d, and \u201cBenchmark Agent\u201d. Prompt Agent. This component is responsible for receiving a prompt and providing other components with the code model\u2019s response. 
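For illustration only, the sketch below captures the essence of this responsibility: accept a prompt, forward it to a configured LLM REST endpoint, and return the model's raw response to the caller. It is a minimal sketch written in Java for readability; LLMSecGuard itself is implemented in Python, and the endpoint URL, payload shape, and class name here are assumptions rather than the framework's actual API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical, simplified stand-in for the "Prompt Agent" component.
public class PromptAgentSketch {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String llmEndpoint;   // configured per LLM (assumed)
    private final String apiKey;        // configured per LLM (assumed)

    public PromptAgentSketch(String llmEndpoint, String apiKey) {
        this.llmEndpoint = llmEndpoint;
        this.apiKey = apiKey;
    }

    // Forwards a (possibly reformulated) prompt to the LLM and returns the raw response body,
    // which would then be handed on to the Security Agent or Benchmark Agent.
    public String forward(String prompt) throws Exception {
        String payload = "{\"prompt\": " + quote(prompt) + "}";    // illustrative payload shape
        HttpRequest request = HttpRequest.newBuilder(URI.create(llmEndpoint))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    private static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }
}
```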
Upon receipt of a prompt, \u201cPrompt Agent\u201d undertakes the task of engineering a response. Particularly, it can reformulate a prompt, pass the prompt to code models, collect the response, and forward the result to other components. Security Agent. This component has a pivotal role in uncovering security issues in LLM-generated code. Precisely, it is entrusted to pass the code to static code analysis engines (such as Semgrep and Weggli), and to collect potential security vulnerabilities. Benchmark Agent. This component puts different LLMs to security test. Particularly, it evaluates the security properties of LLMs based on a set of challenges, structured in JSON format. Each challenge includes a prompt and the expected outcome, and \u201cBenchmark Agent\u201d is responsible for flagging LLMs that pass the challenge. 3https://github.com/aryakvnust/LLMSecGuard \fPreprint Version LLM Security Guard for Code EASE 2024, June 18\u201321, 2024, Salerno, Italy 4 USAGE SCENARIOS We describe the LLMSecGuard\u2019s main configurations including its connection with external entities such as LLMs and code analyzers. We then explain the interplay between different components through two main usage scenarios, namely \u201cbenchmarking\u201d and \u201ccode generation\u201d. 4.1 Setup There are several key configurations for running LLMSecGuard that users can adjust according to their preferences. LLMSecGuard requires a minimum of one LLM and one code analysis engine to operate effectively. This can be easily set up by providing the API information (e.g., API endpoint and API key) associated with each entity. There is no limitation, and users can add as many entities as they wish. Currently, we have instantiated LLMSecGuard with ANYSCALE Llama2 [19], and Weggli and Semgrep static code security analysis tools [22, 26]. Weggli is a robust and efficient semantic code analysis for C and C++ programs, and Semgrep is designed for programs written in languages such as C#, Java, Java, JavaScript, Python, PHP, Ruby, Scala, etc. LLMSecGuard relies on CyberSecEval, a benchmark specifically designed to evaluate the cybersecurity aspects of LLMs functioning as coding assistants [3]. It comprises exhaustive test cases to assess the potential of LLMs in producing insecure code and facilitating cyberattacks. Nonetheless, users have the flexibility to update this benchmark or replace it with one of their preferences. LLMSecGuard includes a few execution parameters as well. The primary one is the \u201ctermination condition\u201d designed to prevent the system from looping indefinitely. This condition determines how many iterations a code piece should undergo improvement and analysis if potential vulnerabilities persist. The \u201cbenchmark interval\u201d parameter determines how often the benchmark process is executed, with options such as biweekly or monthly intervals. Given that LLMs evolve and their security properties may change, regular updates are necessary to ensure that results remain current. Lastly, users can choose the LLMs of interest for benchmarking, select specific code analysis engines, and specify challenges that should be considered for measurement. 4.2 Security Benchmarking In this scenario, a set of existing prompts, referred to as benchmark prompts, undergo evaluation. In particular, the \u201cBenchmark Agent\u201d sends each prompt in the benchmark to the \u201cPrompt Agent\u201d. 
This component dispatches the prompt to every LLM considered for benchmarking, collects responses for each prompt, and forwards them to the \u201cBenchmark Agent\u201d. Each prompt presents a challenge, and \u201cBenchmark Agent\u201d determines the extent to which an LLM succeeds in resolving it by comparing the response to the expected outcome. For instance, in terms of vulnerable code, it measures the quantity and severity of potential CWEs. Each LLM is assigned a performance score for each challenge, and ultimately, LLMs are ranked. At present, LLMSecGuard relies on the CyberSecEval benchmark, which incorporates extensive prompts and regular expressions to uncover potential issues. 4.3 Secure Code Generation Figure 2 illustrates the workflow in this scenario. It begins with the user providing a prompt of interest, anticipating the output to be a piece of code. The \u201cPrompt Agent\u201d assumes control and forwards the user\u2019s prompt to the top LLM determined in the benchmarking scenario (unless the user chooses a different LLM). Upon receiving the LLM\u2019s response, this component then transfers the produced code to the \u201cSecurity Agent\u201d for security inspection. The \u201cSecurity Agent\u201d collaborates with external analysis engines (configured earlier) to uncover potential vulnerabilities and respective lines in the code. If the termination condition is met (i.e., there is no vulnerability or the maximum analysis iterations is reached), the code along with its vulnerability information, is immediately reported back to the user. In cases where the termination condition is not met, this information is relayed to the \u201cPrompt Agent\u201d. This component formulates a new prompt based on the collected vulnerability information and queries the LLM once again. No Yes Terminate Prompt Analysis Prompt Security Analysis Code Potential Vulnerabilities Code and\u00a0Analysis Outcome LLM Analysis Engine Figure 2: LLMSecGuard\u2019s secure code generation workflow 5 RELATE WORK The popularity of AI-generated code has attracted the attention of security researchers. Pearce et al. assessed Copilot\u2019s performance in suggesting code across 89 scenarios aligned with MITRE\u2019s \u201cTop 25\u201d CWEs, revealing that approximately 40% of the generated programs contained vulnerabilities [21]. Nonetheless, in a later study, Asare et al. came to a different conclusion [2]. Precisely, Copilot did not consistently reproduce past vulnerabilities introduced by humans. In about 25% of cases, Copilot even proposed fixes for these vulnerabilities, suggesting a potentially lower susceptibility to security risks compared to human programmers. Fu et al. analyzed 435 code snippets generated by Copilot in public GitHub projects and found that 35% of the snippets exhibit CWEs [8]. Mousavi et al. designed 48 programming tasks for five common security APIs in Java and evaluated the performance of ChatGPT in developing these tasks. They uncovered a concerning level of security, specifically, they found an average misuse rate of 70% in these tasks [20]. \fPreprint Version EASE 2024, June 18\u201321, 2024, Salerno, Italy Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, and Mohammad Ghafari Researchers have also developed benchmarks for assessing the security of LLM-generated code. Bhatt et al. developed CYBERSECEVAL, a benchmark tailored to evaluate the cybersecurity risks posed by LLMs [3]. Hajipour et al. 
examined code models for generating high-risk security weaknesses and built a collection of diverse nonsecure prompts for various vulnerability scenarios, which can be used as a benchmark to compare security weaknesses in LLMs [13]. 6 FUTURE WORK We plan to investigate whether LLMSecGuard will effectively support developers in writing secure code in real-world scenarios. Two groups of developers, both utilizing LLMs for coding, will be recruited for our study. We will assign the same programming tasks to each group, instructing one to utilize LLMs freely and the other to exclusively use LLMSecGuard during the coding process. We will measure the time taken to complete each task, the number, and the severity of vulnerabilities. We will compare the results across two groups as well as based on participants\u2019 experience. Pending positive evaluations, our ultimate goal is to integrate LLMSecGuard into at least one popular IDE (Integrated Development Environment), as deemed essential by developers for a seamless user experience [24]. Furthermore, this integration would allow LLMSecGuard to gather development context, thereby enabling the formulation of better prompts for code generation. Future work could also investigate code changes in each iteration between \u201cPrompt Agent\u201d and \u201cSecurity Agent\u201d, as well as examine how the engineering of prompts and vulnerability information affects the performance of LLMs. Finally, it is noteworthy that although a code snippet may not inherently contain vulnerabilities, its integration into a program or execution within a specific environment could still pose security risks and therefore warrants investigation [23]. 7", + "additional_graph_info": { + "graph": [ + [ + "Mohammad Ghafari", + "Davide Fucci" + ] + ], + "node_feat": { + "Mohammad Ghafari": [ + { + "url": "http://arxiv.org/abs/2208.00264v1", + "title": "Mining unit test cases to synthesize API usage examples", + "abstract": "Software developers study and reuse existing source code to understand how to\nproperly use application programming interfaces (APIs). However, manually\nfinding sufficient and adequate code examples for a given API is a difficult\nand a time-consuming activity. Existing approaches to find or generate examples\nassume availability of a reasonable set of client code that uses the API. This\nassumption does not hold for newly released API libraries, non-widely used\nAPIs, nor private ones. In this work we reuse the important information that is\nnaturally present in test code to circumvent the lack of usage examples for an\nAPI when other sources of client code are not available. We propose an approach\nfor automatically identifying the most representative API uses within each unit\ntest case. We then develop an approach to synthesize API usage examples by\nextracting relevant statements representing the usage of such APIs. We compare\nthe output of a prototype implementation of our approach to both human-written\nexamples and to a state-of-the-art approach. 
The obtained results are\nencouraging; the examples automatically generated with our approach are\nsuperior to the state-of-the-art approach and highly similar to the manually\nconstructed examples.", + "authors": "Mohammad Ghafari, Konstantin Rubinov, Mohammad Mehdi Pourhashem K", + "published": "2022-07-30", + "updated": "2022-07-30", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "main_content": "INTRODUCTION Software reuse is a practice of using existing software, or software knowledge, to construct new software. Reusing existing software can decrease development effort while increasing the quality of production, if mature previously tested assets are used. Developers cope with the complexity of modern software systems and speed up the development process by increasingly relying on the functionalities provided by off-the-shelf components and frameworks. However, an effective integration of external third-party libraries requires an extensive knowledge of the application programming interfaces (APIs) and a detailed understanding of the required interaction protocols. Unfortunately, acquiring these competences is difficult and time-consuming. Moreover, real-world software libraries and frameworks may be underspecified or poorly documented. In addition, if documentation exists, it may include erroneous or out-of-date content.1 Hence, software developers resort to studying code examples to learn APIs, and pragmatically reuse parts of a code corpus that exercise these APIs to reduce the development effort.2 Given these premises, code examples play a crucial role in modern software development complementing the existing documentation and facilitating the learning curve for developers. However, generating meaningful code examples for a given API and finding the relevant ones is a difficult and time-consuming activity. Robillard et al. show that, in practice, insufficient or inadequate examples represent the greatest obstacle to effectively learn an API.2 This aspect becomes even morerelevantduringsoftwaremaintenancewheremaintainersusually have to deal with multiple different APIs that they may have exercised neither recently nor frequently. In response to the increasing need of meaningful code examples, research has been focusing on techniques to automatically extract or generate them to assist developers.3,4 T o find or generate the examples, existing approaches typically rely on external sources such as the source code of existing local or remote projects.5\u20137 More recently, good coverage of API usage examples in Q&A websites motivated researchers to use these crowd sources as well, for both documentation8 and recommendation purposes.9,10 Regardless of whether developers explore these resources manually or use a recommender system, there are still several problems that hamper the applicabilityofexistingapproachesinpractice.11 Amongthem,untrustworthiness, low quality, and inaccessibility of appropriate code examples are the most important ones. Also, maintaining these resources is challengingpartlybecausetheyarenotexplicitlylinkedtotheAPIs,and changes in the APIs are not reflected in the resources.1 Furthermore, J Softw Evol Proc. 2017;29:e1841. wileyonlinelibrary.com/journal/smr Copyright \u00a9 2017 John Wiley & Sons, Ltd. 
1 of 19 https://doi.org/10.1002/smr.1841 Special Issue on Source Code Analysis and Manipulation (SCAM 2015) Journal of Software: Evolution and Process \f2 of 19 GHAFARI ET AL although existing approaches excel at locating examples of frequent APIs, they all suffer from the \u201ccold-start\u201d problem, i.e., examples can only be extracted when a reasonable sample of client systems that use theAPIexist.12 However,thisisabigassumption.Theseapproachesfall short at proposing examples when client program is not available (e.g., in case of private APIs) or clients themselves do not exist yet (e.g., for newly released API libraries, or nonwidely used ones). Agile development is founded on a seamless integration between continuous code changes and unit testing. Unit test cases are small fragments of code in charge of testing a unit of work in the system by checking a single assumption about the behavior of that unit of work. Once developers have written the test cases, they are executed every time the production code changes to support regression testing. This process requires unit test cases to be always up-to-date and makes them an important source of system documentation, especially for guiding software maintenance tasks. Unit test cases usually cover at least the critical functionalities provided by an API. Studying unit test cases of an API conveys significant information on (1) how to correctly instantiate classes, (2) how to construct arguments for method calls, and (3) the expected system state after the invocations of methods. Furthermore, unit test cases are not only helpful for API users but also advantageous to API developers. In software development, not all theAPIsaredesignedtobeexposedtoendusers:someprivateAPIsare intended to help the API developers during development process. Such APIs are internal, and therefore, there is no usage example for them in external resources. Also, they often lack usage documentation because of constraints in the development schedule. Hence, unit test cases of private APIs could be a good resource of such API uses. The amount of code that exists in test suites is limited in size, and at the same time, it is highly relevant with respect to the tested API. Unit test cases are usually executable, simple, and concise snippets that are conceived to run in isolation. In the light of these considerations, test cases seem to be a significant source of API uses in the absence of client systems.11,13 However, our experiments with state-of-the-art tools on several open-source libraries showed that even if test cases are part of the repository of these tools, they cannot be directly used to provide effective results. In fact, the different structure of test code with respect to general code suggests adoption of a dedicated mining approach to obtain examples from test cases. This paper presents an approach for automatically extracting API usage examples from test cases written in JUnit. The proposed approach intends to complement approaches that extract API usage examples from source code of client systems and is intended to be used when these client systems are not (yet) available and the only source of such examples is the test code. The approach is general; we detail each step of the approach with some reference examples taken from different systems. The approach consists of 2 phases: in the first phase it identifies traceability links between test methods and focal methods under test (F-MUT s). 
Our previous work has shown that focal methods represent the core of a test scenario inside a unit test case, and each test case represents more useful information about the F-MUT s rather than other (non-focal) methods.14 In the second phase we bring our work one step further, extend it by developing a relevant use case\u2014synthesizing API usage examples from test code\u2014and practically show that the F-MUT information is advantageous to accomplish this task. The approach traces object state changes verified in the oracle part of a test case to identify F-MUT s. T o construct examples, the approach examines the type of dependency relation between F-MUT s to determine if each of them should contribute to a different example. We adapted a graph-based object usage model that superimposes control dependency graph and data dependency graph to illustrate how each focal method interacts with other code elements within a test. The model takes into account the test semantics and the role of different methods within a test case. The approach uses this model to identify necessarystatementsforrepresentingtheusageofeachF-MUTwithin a test. The proposed approach has been implemented in a prototype, and both phases of the approach have been experimentally evaluated on different open-source systems. We have compared the examples generated with our approach to both human-written examples and examples generated by UsET eC\u2014a state-of-the-art tool of Zhu et al.4 The results show that our approach is superior to the state-of-the-art approach and generates examples that are highly similar to the manually constructed examples. In summary, this work makes the following contributions: \u2022 a formal representation of an approach to recover test-to-code traceability links on the method level precision by identifying the most representative APIs (focal methods) in unit test cases; \u2022 anapproachtoautomaticallysynthesizemeaningfulusageexamples of the focal methods from unit test cases; \u2022 a prototype implementation of the example synthesis approach in Java,andtheevaluationofitseffectivenesson4differentreal-world software systems. The remainder of this paper is organized as follows. Section 2 presents our automated approach to identifying F-MUT s, and Section 3 synthesizes meaningful API uses w.r.t. each focal method within a unit test case. Section 4 evaluates the prototype implementation of the proposed approach on real-life software systems and discusses the results. Section 5 overviews related work, and Section 6 concludes the paper. 2 IDENTIFYING TRACEABILITY LINKS BETWEEN SOURCE AND TEST CODE T o enable automated extraction of relevant API usage information from test code, we must first establish the relationship between the source and test code to be analyzed. Previous research has mostly focused on deriving this relationship only between test cases and classes under test.15 Although the knowledge of the class under test (CUT) is useful for test case comprehension and analysis, CUT information is insufficient to analyze test cases with a method-level precision. Our previous research has shown that a unit test case represents more useful usage information of the focal method rather than other (non-focal) methods within that test case, which led to the development of a solution for their automatic identification.14 In the rest of this section we introduce the most relevant concepts and background in identifying F-MUT s. 
\fGHAFARI ET AL 3 of 19 FIGURE 1 Unit test case for the Model class FIGURE 2 Unit test case for the ArrayTable class 2.1 T est case structure The core ideas behind our approach are formed from a few observations of a large number of real-life open-source projects. We observed test cases of different types and granularity use method invocations as the atoms to construct test cases from the work of Pezze et al.16 Our analysis indicates that test cases invoke on average 6 methods. We have manually investigated the role of each method within test cases with more than 1 method invocation. The result shows most of these methods are ancillary to few that are intended to be the actual (or focal) methods under test. Focal methods represent the core of a test scenario inside a test case, and therefore, each test case represents moreusefulinformationabouttheF-MUTratherthanother(non-focal) methods. Consequently, identifying F-MUT s is a preparatory step for synthesizing useful API usage examples from test code. Consider an example unit test case from PureMVC* in Figure 1. In the example, IModel is the CUT . Three methods in the test case belong to the CUT IModel any of which can be the method under test. However, the real intent in this test case is to check the removeProxy() method. An expert engineer can identify this with the aid of comments, test method names, and assertions. Without this knowledge or additional analysis, one might mistakenly conclude that the goal of the test is to check registerProxy() or retrieveProxy() methods. Although method registerProxy() might be viewed as a relevant method to the test case, its role is ancillary; it brings the modelobjecttoanappropriatestateinwhichitispossibletoinvokethe removeProxy() method. The retrieveProxy() invocation is used to inspect the state of the CUT while it is the method removeProxy() that causes a side effect on the current object and is the focal method under test. *http://puremvc.org Unit test cases are commonly structured in 3 logical parts: setup, execution, and oracle. The setup part instantiates the CUT and includes all dependencies on other objects that the unit under test will use. This partcontainsinitialmethodinvocationsthatbringtheobjectundertest intoastaterequiredfortesting.Theexecutionpartstimulatestheobject under test via a method invocation, i.e., the focal method in the test case. This action is then checked with a series of inspector methods and assert statements in the oracle part that query the side effects of the focal invocation to determine whether the expected outcome is obtained. In the context of object-oriented systems, unit test cases often test a single method.17 Nevertheless, occasionally, test cases aggregate and test several methods in a test scenario. In this case a complete test scenario comprises several sub-scenarios, where a sub-scenario contains a set of non-assert statements (setup and execution) followed by inspector and assert statements. That is, each sub-scenario may have a different focal method, and therefore, a test case can have more than 1 focal method. A focal method belongs to the execution part of a test case, and method invocations used in the oracle part often only inspect the side effect of the F-MUT . Despite the clear logical differentiation of test parts, each having its own purpose, in practice the parts are hardly discernible either manually or automatically. This hinders identifying F-MUT s without expert knowledge of the system. 
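Because Figure 1 is not reproduced here, the following JUnit-style sketch reconstructs the described test from the prose. The exact PureMVC signatures and the proxy name are assumptions; the comments mark the setup, execution, and oracle parts and the focal method as discussed above.

```java
import org.junit.Assert;
import org.junit.Test;

// Reconstruction (from the description of Figure 1) of a unit test for the PureMVC Model class;
// PureMVC types and signatures are assumed.
public class ModelTestSketch {

    @Test
    public void testRemoveProxy() {
        // Setup: bring the object under test into an appropriate state for testing.
        IModel model = Model.getInstance();              // assumed factory method
        model.registerProxy(new Proxy("colors"));        // ancillary invocation, not the method under test

        // Execution: the focal method under test -- it causes a side effect on the model's state.
        model.removeProxy("colors");

        // Oracle: an inspector invocation plus an assertion check the side effect of the focal method.
        Assert.assertNull(model.retrieveProxy("colors"));
    }
}
```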
It is difficult to establish whether a method invocation belongs to the setup or the execution parts of a test. Even the oracle part associated with assert statements may contain method invocations that may be confused with the execution part of the test case. Inpractice,dataflowanalysisisrequiredtodistinguishamongdifferent types of method invocations in test cases. We need to distinguish F-MUT s from inspector methods serving the oracle part. For example, \f4 of 19 GHAFARI ET AL FIGURE 3 The main steps of the proposed approach for identifying focal methods under test (F-MUT s) the test case in Figure 2 belongs to the ArrayTableTest class in Guavalibrary.\u2020 Thereare6methodinvocationswithinthistestcase.The first invocation create() belongs to the setup part of the test case. This invocation is a helper method that initializes the table object and puts this object in an appropriate state for testing. The invocation at line 4, table.eraseAll() is the F-MUT and belongs to the execution part of the test case. In fact, the method eraseAll() causes a state change of the table object, whose effects are later inspected using 4 other invocations, namely, table.size(), table.get(), table.containsRow(), and table.containsValue(). Invocations at lines 5 to 8 are inspector methods and contribute to the oracle part of the test by inspecting the state of the object under test affected by the focal method eraseAll(), while preserving values of the class fields. 2.2 Identifying focal methods under test Having observed that identification of F-MUT s is not trivial and requires custom analysis, we have developed an approach on the basis of a novel heuristic to support the developer in the identification of the F-MUT s in a unit test case.14 By investigating a large number of open-source programs, we observed that dependencies between F-MUT s, the CUT and assertions manifest themselves through object state changes verified in the oracle part of a test case. This observation forms the underlying intuition behind our approach. Furthermore, our empirical observations led to the following heuristic: The last method invocation entailing an object state change whose effect is inspected in the oracle part of a test case is a focal method under test (F-MUT). Ourapproachleveragesdataflowanalysistocaptureessentialinformation in test cases and source code. It is general and applies to object-oriented systems. In this work we instantiate the approach for projects in Java and test cases in JUNIT format, the de-facto standard for unit testing Java applications.\u2021 Our static analysis works on the \u2020http://code.google.com/p/guava-libraries \u2021http://junit.org abstractsyntaxtreerepresentationofthesourcecode.Thestepsofthe approach and the input/output for each step are shown in Figure 3. In the following we provide an overview of these steps, but the interested reader may consult our previous work for a thorough discussion.14 T o explain our heuristic approach, we introduce a simple set-theoretic model in T able 1 that captures the key elements of the system. We also present the formal definitions of the essential notions used in the approach in T able 2. The approach takes as input a Java project and extracts test cases from the code being tested. It then analyzes test cases to establish the scope of the analysis, i.e., which system classes are involved in testing. 
It analyzes the identified classes of the system C (source code) to extract system dependencies and construct a system call graph CallGraph \u22862M\u00d7M that represents the calling relationship between the methods of all the classes involved in testing (Constructing call graph in Figure 3). A CallGraph is a directed graph whose nodes are methods in M, and each node mi is connected to mj, iff mi invokes mj. CallChain:M\ue0b6\u21922M is a function that returns all the methods invoked directly/indirectly by the given method. In fact, CallChain of a given method is the set of methods reachable from that method in the CallGraph. CallChain(m1) ={m2|(m1, m2) \u2208CallGraphor\u2203m\u2032 s.t. m\u2032 \u2208CallChain(m1), (m\u2032, m2) \u2208CallGraph} The approach applies inter-procedural forward reachability analysis to detectmutatorandinspectormethodswithintheclassesofinterest.An inspector is a side-effect free method that returns information about the state of an object, whereas a method causing object state change is a mutator method. T o determine whether a method is mutator or inspector, our approach analyzes the object fields in which a method accesses in its method body and also keeps track of changes to method parameters (tracing field access information in Figure 3). We suppose an action is the most fine-grained operation on an object including its instantiation, invocation, or field access. Accordingly, Action represents the set of all actions in the system. MActions:M\ue0b6\u21922Action is a function that returns the set of actions within the body of a given method,andMActions \u2217\u2236M \ue0b6\u21922Action isafunctionthatbesidesuchactions (MActions) also returns all the actions in the body of all other methods \fGHAFARI ET AL 5 of 19 T ABLE 1 The core model of the system Formal Element Description C The set of all the classes. M The set of all the methods. V The set of all the variables. Action The set of all actions in the system. Assertion The set of all the assertion statements. Acc = {r,w} The set of read/write access types on variables. Act \u2286Action \u00d7 V \u00d7 Acc The set of actions\u2019 activities on the variables. CM:C\ue0b6\u21922M Receives a class and returns a set of methods that belong to the given class. CV:C\ue0b6\u21922V Receives a class and returns a set of variables that belong to the given class. MV:M\ue0b6\u21922V Receives a method and returns its arguments. MActions:M\ue0b6\u21922Action Receives a method and returns its actions. MActions*:M\ue0b6\u21922Action Receives a method and returns its actions as well as the actions belonging to its CallChain. T ABLE 2 The formal notions Notion Definition CallGraph A directed graph whose nodes are methods in M and each node mi is connected to mj, iff mi invokes mj. CallChain A function that returns all the methods invoked directly or indirectly by a given method. CallChain (m1) = {m2|(m1, m2) \u2208 CallGraph or \u2203m\u2032 s.t. m\u2032 \u2208 CallChain(m1), (m\u2032, m2) \u2208CallGraph} Mut The set of mutators that comprises pairs of amethod and amutated variable, (m,v). Mut = {(m, v)|\u2203ai \u2208MActions\u2217(m), c \u2208C s.t.m \u2208CM(c), v \u2208(CV(c) \u222aMV(m)), (ai,v,w) \u2208Act} Ins The set of inspectors that comprises pairs of a method and an inspected variable, (m,v). 
Ins = {(m, v)|\u2203ai \u2208 MActions\u2217(m), c \u2208 C s.t.m \u2208 CM(c), v \u2208 (CV(c) \u222a MV(m)), (ai,v,w) \u2208Act and (m,v) \u2209Mut} TM A test method is a set of sub-scenarios (ss), each of which is a sequence of actions followed by a sequence of assertion statements. TM = {ai \u2026 antj \u2026 tm|{ai, \u2026 , an} \u2286Action and{tj, \u2026 , tm} \u2286Assertion, n \u2a7ei, m \u2a7ej and i, j > 0} FMss Focal Method of a sub-scenario is the last mutator having side effect on the actual asserted expression. FMss(ss) = {m1|\u2203v \u2208V,ai \u2208Action, tk s.t. m1 is invovled in ai and v is asserted in tk, (m1, v) \u2208Mut and\u2204j > is.t.m2 \u2208Method, m2 is involved inaj \u2208 Action and (m2,v) \u2208Mut} FMtm A function that returns the focal methods of a given test method. FMtm(tm) = \u222an i=1FMss(ssi) reachable from the given method. This function, called method chain actions, is defined as follows: MActions \u2217(m) = {a|a \u2208MActions(m) or \u2203m\u2032 s.t.a \u2208MActions(m\u2032) and m\u2032 \u2208CallChain(m)} Action a is a member of the returned set of MActions \u2217(m), iff a belongs either to m or to one of the methods in its CallChain. Mut \u2286M \u00d7 V is the set of mutators that comprises pairs of a method and a mutated variable, (m,v). Method m is a mutator of variable v, iff there exists an action in the method chain actions of m, such that it has a write access on v, which is either a parameter of m or a member of m\u2019s class. Mut = {(m, v)|\u2203ai \u2208MActions \u2217(m), c \u2208C s.t. m \u2208CM(c), v \u2208(CV(c) \u222aMV(m)), (ai, v, w) \u2208Act} Where CM and CV are the sets of methods and variables of the given class, respectively; and MV is the set of arguments of the given method. Ins \u2286M \u00d7 V is the set of inspectors that comprises pairs of a method and an inspected variable, (m, v). Method m is an inspector of variable v, iff there exists an action in the method chain actions of m, such that it has only read access on v that is either a parameter of m or a member of m\u2019s class. Ins = {(m, v)|\u2203ai \u2208MActions \u2217(m), c \u2208Cs.t.m \u2208CM(c), v \u2208(CV(c) \u222aMV(m)), (ai, v, w) \u2208Act and (m, v) \u2209Mut} Afterwards, the approach follows the test naming convention in JUnit 3 and annotations in JUnit 4 to distinguish test methods TM from helper ones (identifying test methods in Figure 3). It partitions each test method into sub-scenarios (ss), each of which is a sequence of actions followed by a sequence of assertion statements (slicing to sub-scenarios in Figure 3). \f6 of 19 GHAFARI ET AL TM = {ai \u2026 antj \u2026 tm|{ai, \u2026 , an} \u2286Action and{tj, \u2026 , tm} \u2286Assertion, n \u2a7ei, m \u2a7ejandi, j > 0} We consider all the common overloaded variants of the assert statements in JUnit format. If an asserted expression is a single variable, we find the method invocation from which the variable is assigned in that unit test case. If the declaring class of this variable does not originate in the project source code, we mark it as a helper\u00a7 (identifying expression(s) under assertion in Figure 3). The F-MUT s affect the state of the classes of the system under test, and assertions may check these state changes indirectly by accessing helper variables, rather than directly by accessing the classes of the system under test. In this case, we search, within a test method, for an expression from which the helper class is instantiated. 
The search continues recursively until we find an invocationorafieldaccessonaclassofthesystemundertestthatinstantiates thehelper.Weregisterthisinvocationastheactualassertedexpression (finding relevant (asserted) system expression in Figure 3). Having a set of actual asserted expressions for each sub-scenario ss, and the knowledge of mutator and inspector methods that our analysis discovered from the system source code, the approach reports a focal method FMss: TM \ue0b6\u21922M that is a last mutator having a side effect on the actual asserted expression in that sub-scenario (identifying F-MUT in Figure 3). FMss(ss) = {m1|\u2203v \u2208V, ai \u2208Action, tk s.t. m1 is invovled in ai and v is asserted in tk, (m1, v) \u2208Mut and \u2204j > i s.t. m2 \u2208Method, m2 is involved in aj \u2208Action and (m2, v) \u2208Mut} Finally, FMtm: 2TM \ue0b6\u21922M is a function that returns the focal methods of the given TM, that is, essentially, FMtm(tm) = \u222an i=1FMss(ssi), where tm = {ss1, ss2 \u2026 , ssn} The approach suffers from several limitations inherent to static analysis approaches that are generally unsound. For instance, the call graph construction bears limitations of the approach on type resolution for interfacecallsandpolymorphicmethodcalls.Moreover,weapplyasimplified intra-procedural alias analysis to identify references to a class fieldfromalocalvariable.Thatis,whentracingfieldaccessinformation, possible state manipulations may escape when a method parameter referencesaclassfield.Wehavethoroughlydiscussedtheselimitations in our dedicated work to F-MUT identification and have shown the significanteffectivenessoftheapproachtoidentifyF-MUT sforreal-world software systems, despite the identified limitations.14 3 SYNTHESIZING API USES In the first phase of the approach (Section 2) we identified F-MUT s that are the most representative uses of an API within a unit test case. This \u00a7Commonly these instances belong to classes external to the system, e.g., libraries, mock objects and stubs. section presents the second phase of the approach that automatically synthesizes a set of examples representing the usage of such APIs. Not all sequences of API calls in test code represent meaningful usage examples. In fact, a test case can have more than 1 focal method, and each focal method may focus on a different aspect of the unit under test that represents a separate API use. Figure 4 shows a real test case from the PureMVC project. The scenario in the test case is as follows. First, a controller object is created. Then, controller\u2019s registerCommand is called to register a particular ICommand class, ControllerTestCommand, as the handler for a particular INotification, named hasCommandTest. Next, the result of controller.hasCommand is checked through a JUnit method Assert.assertTrue to determine if the command is successfully registered for the specified notification. In the second sub-scenario, invocation of the focal method controller.removeCommand removes the previously registered command checked with the Assert.assertFalse JUnit method. The sequence of API uses registerCommand and then immediately removeCommand only serves a testing purpose; it is not a usage scenario that solves a practical programming problem. More specifically, each of these focal methods represents a separate example. 
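Figure 4 is likewise not reproduced here; the JUnit-style sketch below reconstructs the described test from the prose. The exact PureMVC signatures and the notification name are assumptions, while the two sub-scenarios and their focal methods (registerCommand and removeCommand) follow the description.

```java
import org.junit.Assert;
import org.junit.Test;

// Reconstruction (from the description of Figure 4) of a test whose two sub-scenarios exercise
// two focal methods that do NOT form one meaningful usage example; PureMVC signatures are assumed.
public class ControllerTestSketch {

    @Test
    public void testRegisterAndRemoveCommand() {
        // Shared setup: obtain the controller object under test.
        IController controller = Controller.getInstance();   // assumed factory method

        // Sub-scenario 1 -- focal method: registerCommand.
        controller.registerCommand("hasCommandTest", new ControllerTestCommand());
        Assert.assertTrue(controller.hasCommand("hasCommandTest"));    // oracle of sub-scenario 1

        // Sub-scenario 2 -- focal method: removeCommand.
        // Registering and then immediately removing only serves the testing purpose, so the
        // approach synthesizes a separate usage example for each of the two focal methods.
        controller.removeCommand("hasCommandTest");
        Assert.assertFalse(controller.hasCommand("hasCommandTest"));   // oracle of sub-scenario 2
    }
}
```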
Figure 5 shows another unit test case from the PureMVC project comprising 2 sub-scenarios for exercising registerCommand and executeCommandmethods.Thefirstsub-scenarioissimilartotheone explained for the unit test case in Figure 4. In the second sub-scenario, a helper object named vo is created and used as an optional body in the creation of the note object in line 7. Afterwards, invocation of the executeCommand on the controller object executes the ICommand previously registered as the handler for the given notification at line 3. This call is expected to change the value of the result attribute in the vo object. This is determined at the last line. In contrast to the previoustestcasewhereeachfocalmethodrepresentedaseparateAPI usage example, the focal methods in the current test are relevant and together represent a useful example. Therefore, the temporal sequence of API uses (focal methods) in test code is not always similar to the intended API uses as they appear in client code. Determining whether a sequence of 2 focal methods should be separated or not is not possible without realizing the relation between these methods. Moreover, to ease developer\u2019s understanding, usage examples should be concise and free from extraneous statements. That is, the mined examples must be processed before being presented to the user, to eliminate statements that are not necessary, like of those only belonging to the oracle part, e.g., inspector methods and assertion statements. In the rest of this paper we use the term meaningful examplein referring to a code snippet that represents such an API use. Finally, extracting meaningful examples from a given test may not be possible with traditional slicing proposals.18 These techniques fall short of precisely identifying statements related to a particular focal method, as focal methods are typically involved with the same program entities. For instance, focal methods are usually invoked on the same object, i.e., the unit under test. Moreover, different F-MUT s within a unit test may use the same setup part. Due to this intrinsic shared data among focal methods in a unit test case, existing slicing techniques may intermingle different uses together, rather than constructing \fGHAFARI ET AL 7 of 19 FIGURE 4 Two sub-scenarios with separate API uses in 1 unit test case FIGURE 5 Two sub-scenarios with relevant API uses in 1 unit test case FIGURE 6 Unit test case for the BasePartial class a slice, which focuses attention on the API of interest (excluding the code irrelevant to the F-MUT execution). For instance, consider the unit test case from JodaTime\u00b6 project in Figure 6. Three focal methods, each of which represents a different API use, comprise the test scenario in this unit test case, where the 3 API uses are highlighted with rectangles in the code. Suppose we are interested to extract the statements relevant to the method mock.set at line 11. If we construct a backward slice on the mock at this line, the slice will contain all statementsaffectingthisobject too(lines2-11).Indeed,sincethemock object is shared in different sub-scenarios, the slice becomes larger than expected rather than only focusing on the statements affecting on the mock.set method. 3.1 Identifying meaningful sequence of API uses Unit test cases generally exercise sequences of method calls that create and mutate objects. 
They check the return value and the state of the object on which the method was invoked to determine if F-MUT s \u00b6http://www.joda.org/joda-time behave as expected. According to our observations, a dependency relation exists in 2 different granularity levels between the statements within sub-scenarios in a test case. As indicated by the rectangles in the code, the second sub-scenario at line 8 uses the File object reportFile constructed in the first sub-scenario at line 3. Thus, a successful construction of the URL object in the second sub-scenario depends on the File object in the first scenario. Consequently, an example corresponding to the F-MUT in second sub-scenario, i.e., URL.openStream(), should comprise relevant statements from the first sub-scenario too. Nevertheless, when F-MUT s operate on the same object (e.g., see Figure 6), often a dependency relation with a finer granularity than the object level exists among API methods. Indeed, in unit testing, methods that access (read or write) the same object are usually tested together,19,20 although these methods may not necessarily be logically dependent or used together. That is, the sequence of such methods in a \u2016http://ant.apache.org Figure 7 illustrates a real unit test case from the Ant\u2016 project. \f8 of 19 GHAFARI ET AL FIGURE 7 Two explicitly data-dependent sub-scenarios in a unit test case test may not be practical when used in the client code and thus hardly represents a meaningful usage example. Accordingly, we examine the type of dependency relation between the methods not only at object level on which they operate but also transitively at the level of object attributes to investigate whether a sequence of 2 F-MUT s (in short, sequence) is meaningful. Sequences can be of 3 kinds: 1. Read-Read. When 2 methods m1 and m2 both only read the same field f, each method can be executed separately without requiring another one. These methods are usually tested together as they need the same setup part. 2. Write-Read. If method m1 writes field f and method m2 reads the value written to field f by m1, the execution of m2 depends on the value produced by m1. 3. Write-Write. When 2 methods m1 and m2 both write the same field f,andthereisnoWrite-Readrelationinbetween,theyareindependent and can be executed separately as one execution overwrites the other. Such sequences are very common in unit testing. For instance, many test scenarios involve testing the same method with different inputs. The only sequence representing a meaningful usage scenario is Write-Read. Otherwise, each scenario can execute separately and represents a usage example. 3.2 Computing relevant statements to an API The F-MUT s within a test are syntactically dependent on one another and extracting the statements relevant to a focal method can be challenging. T o realize how each focal method interacts with other code elements within a test, we adapt a graph-based object usage model that is more compact and specialized for object usage representation than program dependence graph and control flow graph.21 We model the usage of each object including its instantiation, invocation, or field access\u2014collectively called actions. The execution flow, data, and control dependencies among all of such actions form the entire unit test model. In our model, multiple object usages in a test scenario can be represented with a directed acyclic graph. 
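As a concrete, deliberately simplified illustration of such a model, the sketch below encodes action nodes labeled with a class and a member together with typed edges for usage order, control dependency, and data dependency; this encoding is an assumption made for illustration, not the actual data structures used in the approach.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified encoding of the object usage model: action nodes labeled "Class.member" and
// typed edges for usage order, control dependency, and data dependency.
public class UsageModelSketch {

    enum EdgeKind { ORDER, CONTROL_DEPENDENCY, DATA_DEPENDENCY }

    record ActionNode(String className, String member) {
        @Override public String toString() { return className + "." + member; }
    }

    record Edge(ActionNode from, ActionNode to, EdgeKind kind) { }

    public static void main(String[] args) {
        ActionNode getInstance = new ActionNode("Controller", "getInstance");
        ActionNode register    = new ActionNode("IController", "registerCommand");
        ActionNode execute     = new ActionNode("IController", "executeCommand");

        List<Edge> edges = new ArrayList<>();
        // The controller object produced by getInstance is used by registerCommand.
        edges.add(new Edge(getInstance, register, EdgeKind.DATA_DEPENDENCY));
        // registerCommand writes state that executeCommand later reads (inter-procedural Write-Read).
        edges.add(new Edge(register, execute, EdgeKind.DATA_DEPENDENCY));
        // Temporal order of the two focal methods in the test scenario.
        edges.add(new Edge(register, execute, EdgeKind.ORDER));

        edges.forEach(e -> System.out.println(e.from() + " -[" + e.kind() + "]-> " + e.to()));
    }
}
```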
For instance, Figure 8 illustrates multipleobjectsparticipatingintheunittestcaseinFigure5.Nodesare labeled C.m, in which C is the class name of the object and m is the name of a method, constructor, or a field. The directed edges between actions represent the usage order, control or data dependency among them. We use different edge types to distinguish between the edges (see the FIGURE 8 The representation of different object usages in the unit test case shown in Figure 5 legend in Figure 8). A directed edge from an action node i to an action node j represents the temporal order of actions w.r.t. the execution flow and indicates that in the test scenario, i is used before j, and j is used after i, respectively. Action aj is control dependent on action ai (control dependency CD between ai and aj evaluates to true) iff the execution result of ai determines whether aj will execute or not. Control flow CF: Action \u00d7 Action\ue0b6\u2192{true, false} between ai and aj is the sufficient condition (ai.cj) for the former action to be followed by the latter action in a program execution: \u2200ai \u2208Action if Succ(ai) = {aj, \u2026 , an}, then CF(ai, aj) = ai.cj, \u2026 ,CF(ai, an) = ai.cn, and (\u2228n l=jai.cl) = true. In other words, (1) CF(ai, aj) = false if aj is not a successor of ai , (2) CF(ai, aj) = true if aj is the sole successor of ai, and (3) if ai has more than one successor (there is a branch), then the disjunction of all CF from ai to its successors is true. For example, assume if flag is true, a1 is followed by a2, otherwise, it is followed by a3. In such a branch, CF(a1, a2) = a1.c2 = flag, \fGHAFARI ET AL 9 of 19 CF(a1, a3) = a1.c3 = \u00acflag, and the disjunction of the 2 control flows is true so that one path will be chosen during the execution. Algorithm 1 presents the control dependency computation on the basis of the conditions ai.Reachability on which each action ai is reachable from the root action aroot in the control flow graph. The algorithm initializes the WorkList with the successors of the root (line 3). For each ai in the WorkList that is not yet processed (line 4), ai.Reachability expresses that ai is reachable, i.e., at least one of its predecessors ap is both reachable, and there is a control flow from ap to ai. Once the algorithm computes the reachability conditions for an action (line 5), it performs the same computation for that action\u2019s successors; such computations continue until no unprocessed actions are left. In practice, we model control dependencies w.r.t. control statements such as if or while statements. We use control nodes to represent how actions are used within a control statement. T o conform to the use of edges for representing temporal orders, we place control nodes at the branching node, where the program chooses an execution path, instead of placing them at the starting points of the corresponding statements. For example, the control node labeled CONTROL(IF) in Figure 8 represents the if statement in the test code in Figure 5, and the edge from the node IController.hasCommand to CONTROL(IF) indicates that the execution of hasCommand occurs before the branching point of the if control statement. All actions inside the body of a control structure are control dependent on the control predicate. The directed edge from the node IController.hasCommand to Assert.fail indicates a control dependency between these 2 actions in the test. 
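The sketch below mirrors the reachability computation that Algorithm 1 describes, written here as a memoized recursion over an acyclic control flow graph rather than an explicit worklist; the condition strings, map-based graph encoding, and example values are simplifications introduced for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringJoiner;

// Simplified illustration of the reachability-condition computation over an acyclic control
// flow graph: Reachability(a) = OR over predecessors p of ( Reachability(p) AND CF(p, a) ).
public class ReachabilitySketch {

    // For each action, its predecessors together with the control-flow condition on that edge.
    private final Map<String, Map<String, String>> predecessors;
    private final Map<String, String> memo = new HashMap<>();

    ReachabilitySketch(Map<String, Map<String, String>> predecessors) {
        this.predecessors = predecessors;
    }

    String reachability(String action) {
        if (!predecessors.containsKey(action)) return "true";      // the root action is always reachable
        String cached = memo.get(action);
        if (cached != null) return cached;
        StringJoiner disjunction = new StringJoiner(" OR ");
        for (Map.Entry<String, String> edge : predecessors.get(action).entrySet()) {
            disjunction.add("(" + reachability(edge.getKey()) + " AND " + edge.getValue() + ")");
        }
        String result = disjunction.toString();
        memo.put(action, result);
        return result;
    }

    public static void main(String[] args) {
        // hasCommand always executes; Assert.fail only executes when the if-condition fails.
        Map<String, Map<String, String>> preds = new HashMap<>();
        preds.put("IController.hasCommand", Map.of("Controller.getInstance", "true"));
        preds.put("Assert.fail", Map.of("IController.hasCommand", "!hasCommandResult"));
        System.out.println(new ReachabilitySketch(preds).reachability("Assert.fail"));
    }
}
```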
Action $a_j$ is data dependent on action $a_i$ if $a_i$ assigns to a memory location that $a_j$ will read on some program path. In this work, we identify two kinds of data dependencies. 1. Intra-procedural. The object instance on which a particular method is invoked is data dependent on the object creation statement. For instance, there is a data dependency between Controller.getInstance and IController.registerCommand, as the latter involves the object controller produced by the former action. Moreover, when an action is used as an argument of another action, e.g., ControllerTestCommand() and IController.registerCommand, the latter action is data dependent on the former. 2. Inter-procedural. In contrast to the previous kind of data dependency, which can be spotted by looking at each individual test in isolation, there may also be an inter-procedural dependency between two method calls that is not explicit from within a test. More specifically, to enable slicing test code w.r.t. a focal method, the precision of the dependency analysis should not be limited to the object level; we also trace transitive accesses to an object's attributes to determine whether two API methods are dependent. The dashed edge between IController.registerCommand and IController.executeCommand in Figure 8 indicates such a data dependency between these two F-MUTs. In general, the data relation $DR\colon Action \times Action \times V \to \{RR, RW, WR, WW, null\}$ between $a_i$ and $a_j$, where $i < j$, can be of four kinds: $DR(a_i, a_j, v) = \begin{cases} RR & \text{if } \{(a_i, v, r), (a_j, v, r)\} \subseteq Act \\ RW & \text{if } \{(a_i, v, r), (a_j, v, w)\} \subseteq Act \\ WR & \text{if } \{(a_i, v, w), (a_j, v, r)\} \subseteq Act \\ WW & \text{if } \{(a_i, v, w), (a_j, v, w)\} \subseteq Act \\ null & \text{if there is no shared variable.} \end{cases}$ Accordingly, the data dependency $DD\colon Action \times Action \to \{true, false\}$ of actions $a_i$ and $a_j$, where $i < j$, is true if the former action defines (writes) a variable that is used (read) by the latter action without any intermediate definition: $DD(a_i, a_j) = \bigl(\exists v \in V \text{ s.t. } DR(a_i, a_j, v) = WR \ \wedge\ \nexists k,\ i < k < j, \text{ s.t. } DR(a_k, a_j, v) = WR\bigr)$. Synthesizing a meaningful example is not possible unless each action participating in an example is accompanied by all other actions on which that action is control/data dependent during its execution. We introduce $CDSet\colon Action \to 2^{Action}$, a function that returns the set of actions on which a given action is control dependent, and $DDSet\colon Action \to 2^{Action}$, a function that returns the set of actions on which a given action is data dependent. Algorithm 2 synthesizes usage examples of the most representative APIs within a given test method TM. For each focal method f, it computes the actions on which the method depends for its execution and stores them in the Result set. It does the same for all the actions included in the set whose dependencies have not yet been checked (Result $\setminus$ Processed). This process continues until the example set Result is self-contained, i.e., none of the actions in this set depend on actions outside the set. Finally, it returns a set API-U, which comprises pairs of a focal method and a complete usage example of that method synthesized from TM.
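The following minimal Java sketch mirrors the fixed-point expansion just described for Algorithm 2. It is our reconstruction, not the authors' implementation; actions are represented as plain strings, and cdSet/ddSet are assumed to be precomputed maps from each action to the actions it is control or data dependent on.

import java.util.*;

final class ExampleSynthesizer {
    // Returns, for each focal method, the self-contained set of actions forming its usage example.
    static Map<String, Set<String>> synthesize(List<String> focalMethods,
                                               Map<String, Set<String>> cdSet,
                                               Map<String, Set<String>> ddSet) {
        Map<String, Set<String>> apiUses = new LinkedHashMap<>();
        for (String f : focalMethods) {
            Set<String> result = new LinkedHashSet<>();
            result.add(f);
            Set<String> processed = new HashSet<>();
            // keep expanding until every action in the slice has had its dependencies added
            while (!processed.containsAll(result)) {
                for (String action : new ArrayList<>(result)) {
                    if (!processed.add(action)) continue;              // skip already checked actions
                    result.addAll(cdSet.getOrDefault(action, Set.of())); // control dependencies
                    result.addAll(ddSet.getOrDefault(action, Set.of())); // data dependencies
                }
            }
            apiUses.put(f, result);   // pair (focal method, complete usage example)
        }
        return apiUses;
    }
}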
Given a Java project, the plugin distinguishes test code from source code and excludes test cases that may not represent correct API usages. In practice, some tests are in charge of checking a negative behavior, such as an exceptional condition. We assume that if a method checks a negative behavior, developers indicate this in JUnit using the expected parameter of the @Test annotation, or they explicitly call a JUnit fail method right after an exceptional call in the test case. To discern the role of the different methods within a test case, we adapt a graph-based object usage model, Groum, that provides the necessary control and (intra-procedural) data relations between the statements within a test case [21]. We enrich this partial object usage model with field access information that is computed using the previously developed technique described in Section 2. For each method m, a set MOD(m) of class fields that may be modified by m is computed, as well as a set USE(m) of class fields that may be used by this method. In both cases, the effects of methods transitively called by m are taken into account. We find internally dependent methods on the basis of their common state, i.e., the fields they read or write, and we construct a complete model for each unit test by connecting such methods together. Given an F-MUT f in a test method and a complete object usage model for that test, a slice for f constructs an example of that F-MUT's use. In particular, we realized the slice computation by implementing Algorithm 2, which, for each F-MUT in a test, traverses the model starting from the node corresponding to the F-MUT of interest and includes in the program slice every node that can reach the F-MUT via data and/or control edges in the model. 3.4 A complete working example. In this section, we present the detailed operation of our approach through a working example. Figure 10 shows a simple unit test case for the Email class in Commons-Email, an API built on top of the Java Mail API for sending email. The test case sets some fields of an email message and the outgoing mail server, and checks whether header folding works correctly. Figure 11 illustrates two examples constructed from this unit test case. We present the results of the different phases, summarized in four steps, when the approach is applied to this unit test case. [Figure 9: The example synthesis work flow. Figure 10: A unit test case with dependent sub-scenarios from the Commons-Email project. Figure 11: Synthesized examples from the unit test case in Figure 10] Step 1. Identifying the analysis scope. In the first step, we identify the system classes that are involved in testing. Each system class is analyzed to determine whether it uses any other classes of the system. A call graph is built to represent the calling relationships between the methods of all classes transitively involved in the test. The approach finds the Email class as the only system class in the unit test case. Other classes such as String and MimeMessage are external classes whose methods have an auxiliary role in the test. Step 2. Identifying mutator and inspector methods. This step tracks the flow of data involving the class fields, starting from the leaf methods in the call graph and moving toward the caller methods used in the test case. It recursively maps field access sets from the callees to the callers in the call graph and categorizes the methods used in the test case into mutators or inspectors, according to whether or not they cause an object's state to change, respectively.
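A minimal Java sketch of this classification step follows. It is our reconstruction rather than the plugin's code: the MOD sets below are illustrative values that such a field-access analysis might produce for the Email class.

import java.util.*;

final class MethodClassifier {
    // A method is a mutator if its MOD set (fields it may write, transitively through
    // callees) is non-empty; otherwise it is an inspector.
    static String classify(String method, Map<String, Set<String>> mod) {
        return mod.getOrDefault(method, Set.of()).isEmpty() ? "inspector" : "mutator";
    }

    public static void main(String[] args) {
        Map<String, Set<String>> mod = Map.of(
                "Email.addHeader", Set.of("headers"),   // may write Email.headers
                "Email.getHeaders", Set.of());          // writes no field
        System.out.println(classify("Email.addHeader", mod));  // mutator
        System.out.println(classify("Email.getHeaders", mod)); // inspector
    }
}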
Accordingly, the setHostName, setSmtpPort, setFrom, setTo, setSubject, addHeader, and buildMimeMessage methods are classified as mutators, and the getHeaders and getMimeMessage methods are classified as inspectors. Step 3. Identifying F-MUTs. In each sub-scenario, we identify the F-MUT as the last mutator whose effect is inspected in the following assertion expressions (a small illustrative sketch of this rule is given below). The approach correctly identifies email.addHeader as the F-MUT in the first sub-scenario. This method modifies the headers field of the Email class, and email.getHeaders, which is asserted in the first sub-scenario, reads the same field. Identifying the F-MUT in the second sub-scenario is more interesting. The asserted expression is an array that is initialized by invoking the getHeader method on an object of type MimeMessage. This class is not a system class and therefore cannot be a CUT. In fact, this external object is assigned the result of a getMimeMessage call on an object of type Email, which belongs to the system. Thus, what is asserted in the second sub-scenario is obtained from Email.getMimeMessage, which inspects the message field of the Email class and constitutes the actual asserted expression. The last mutator modifying this field is buildMimeMessage, which is the F-MUT. Step 4. Synthesizing examples. Our approach relies on the data and control dependencies among all expressions in a test to determine which statements are actually necessary to execute each F-MUT. The approach correctly identifies the declaration of headerValue at line 7 as the only statement in the unit test case on which the focal method email.addHeader at line 8 is data dependent. Nonetheless, identifying the statements concerning the execution of the second focal method, email.buildMimeMessage at line 12, requires an analysis finer than the granularity of objects. The approach leverages the field access information, collected in the second step, to perceive the data relationships among the methods. It precisely finds that the execution of buildMimeMessage depends on all of its preceding mutator methods. Finally, an example is constructed from the set of expression statements on which a given F-MUT is transitively control/data dependent, although in this particular case the data dependency information alone is sufficient to construct the example. Figure 11 shows the resulting API usage examples synthesized from the unit test case. 4 EVALUATION To evaluate this work, we selected four open-source Java projects as the context of our study. Table 3 presents the key characteristics of the subject programs. These are mature programs from different application domains with at least one major release, and they are equipped with substantial test suites whose test cases are in JUnit format. To evaluate our approach, we formulated the following research questions: RQ1: How effective is the proposed approach in identifying the relationships between unit tests and source code in the form of F-MUTs? RQ2: How well can the proposed approach be used to generate meaningful examples from unit test cases?
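Before turning to the evaluation, the following minimal Java sketch makes the Step-3 selection rule concrete. It is our reconstruction, not the authors' code; the method names and the MOD/USE sets in main are illustrative only.

import java.util.*;

final class FocalMethodFinder {
    // Within a sub-scenario, the F-MUT is the last mutator that writes a field
    // which the asserted inspector reads.
    static Optional<String> findFmut(List<String> mutatorsInOrder,
                                     String assertedInspector,
                                     Map<String, Set<String>> mod,     // method -> fields it may write
                                     Map<String, Set<String>> use) {   // method -> fields it may read
        Set<String> readFields = use.getOrDefault(assertedInspector, Set.of());
        String fmut = null;
        for (String mutator : mutatorsInOrder) {
            Set<String> written = mod.getOrDefault(mutator, Set.of());
            if (!Collections.disjoint(written, readFields)) {
                fmut = mutator;                 // keep the *last* matching mutator
            }
        }
        return Optional.ofNullable(fmut);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> mod = Map.of(
                "Email.setFrom", Set.of("fromAddress"),
                "Email.buildMimeMessage", Set.of("message"));
        Map<String, Set<String>> use = Map.of("Email.getMimeMessage", Set.of("message"));
        System.out.println(findFmut(
                List.of("Email.setFrom", "Email.buildMimeMessage"),
                "Email.getMimeMessage", mod, use));   // Optional[Email.buildMimeMessage]
    }
}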
In the following, we first replicate, on a new machine, an empirical study that we performed in our previous research to assess our approach for identifying F-MUTs (RQ1) [14]. We then evaluate whether our example synthesis approach can benefit from the F-MUT information to effectively and automatically synthesize meaningful API uses from unit test cases; we compare, on the basis of three different metrics, the output of our research prototype with both human-written examples and a state-of-the-art approach (RQ2).

Table 3: Key characteristics of the subject programs in this study
Subject Program | Version | KLoC | Test Methods
Commons Email   | 1.3.3   | 8.78  | 130
JGAP            | 3.4.4   | 73.96 | 1390
PureMVC         | 1.0.8   | 19.46 | 43
XStream         | 1.4.4   | 54.93 | 968

The experiments were conducted on a machine with an Intel 2.8 GHz Core i7 CPU, 16 GB of RAM, and Mac OS X 10.11. 4.1 Identifying F-MUTs. Design. To select test cases for the analysis, we first generated a raw dataset by randomly sampling 100 test cases from each of the subject systems. For the PureMVC project, which has fewer test cases, we included all of its 43 test cases in the dataset. Then, we filtered out test cases that do not satisfy the applicability criteria. In particular, we excluded test cases that form an inheritance class hierarchy, do not use standard JUnit assertions, or contain assertion statements in private methods, helper classes, or inherited methods. Most test cases (above 87%) conform to the applicability criteria and the test case structure supported by our prototype implementation. Following this preselection and filtering, we obtained a dataset of 300 test cases. These test cases form a manually inspected reference dataset (ref-dataset1) that we use as an oracle in this study. For each test case, we detected the F-MUTs manually after a thorough analysis of the system specification, Javadocs, API usage manuals, and code comments, which gave us the necessary knowledge about each system under test. To align our understanding of the concepts, we first met and jointly identified the sub-scenarios and F-MUTs for 5% of this set. Once we were confident that we agreed on how to extract the expected information, we (the first two authors of the paper) inspected the remaining test cases independently, providing a short rationale for why each focal method was selected. This rationale is used for internal validation purposes. For all the studied test cases, we exchanged our results to detect potential conflicts. After we agreed on the expected results, we finalized them in ref-dataset1. To evaluate the effectiveness of the approach, we applied our prototype to all the subject programs and the selected test cases in ref-dataset1, and compared the results of the prototype with the manually identified F-MUTs. Results. The first part of this study assesses the effectiveness of the proposed approach in identifying F-MUTs; we calculate the precision and recall of our approach with respect to the results in our reference dataset. Precision is the fraction of correctly identified F-MUTs among all results returned by our prototype. Recall is the proportion of the results in ref-dataset1 identified by the prototype. To take both into account, we assess the overall effectiveness of our approach using the harmonic mean ($F_{\beta=1}$) of precision and recall. Table 4 summarizes the results, which confirm our previous findings.
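For reference, these metrics follow the standard definitions (our notation; TP, FP, and FN denote correctly identified, spuriously reported, and missed F-MUTs, respectively):

\[
\mathit{Precision} = \frac{TP}{TP + FP}, \qquad
\mathit{Recall} = \frac{TP}{TP + FN}, \qquad
F_{\beta=1} = \frac{2 \cdot \mathit{Precision} \cdot \mathit{Recall}}{\mathit{Precision} + \mathit{Recall}}.
\]

For instance, the reported precision of 0.94 and recall of 0.69 for Commons Email give $2 \cdot 0.94 \cdot 0.69 / (0.94 + 0.69) \approx 0.80$, which matches the 0.79 in Table 4 up to rounding.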
Our approach identifies F-MUTs automatically, in a minute in total, and achieves a high precision and a good recall. The harmonic mean over the four subject programs is also promising (66%-87%), which indicates that we can establish the traceability links between unit tests and source code in the form of F-MUTs in an automated manner with high effectiveness (RQ1). The interested reader may consult our previous work for a detailed discussion of corner cases, as well as of cases where the approach is particularly advantageous [14].

Table 4: Accuracy of the proposed approach on the subject programs
Subject Program | Precision | Recall | H-mean
Commons Email   | 0.94      | 0.69   | 0.79
JGAP            | 0.85      | 0.73   | 0.78
PureMVC         | 0.97      | 0.79   | 0.87
XStream         | 0.90      | 0.53   | 0.66

4.2 Synthesizing API uses. Design. For synthesizing API uses, we target the same open-source projects that we used to evaluate the identification of F-MUTs (see Table 3). These projects have an established code base and are equipped with documentation. Importantly, we acquired a good knowledge of their APIs during the earlier study that we conducted to assess our approach for identifying F-MUTs. For evaluating API use synthesis from test cases, we generated a raw dataset comprising 140 test cases by randomly sampling 35 different test cases from each of the subject systems. We left out simple test cases that only exercise inspector methods within single assertion statements. To build a reference dataset of API usage examples to be used as an oracle in this study, we recruited five Java programmers. All of the programmers had a minimum of four years of general programming experience and, on average, two years of expertise in software development with the Java programming language. We assigned each participant a random selection of test methods from the raw dataset and asked the participants to inspect each test method manually and extract from it any number of meaningful usage examples that it may represent. Each test method was inspected by at least two participants; when the participants extracted different usage scenarios from a unit test case, they had to discuss and converge on a set of agreed usage examples. The final results establish ref-dataset2, which consists of 159 human-written examples of API uses. To investigate how well the proposed approach and our prototype implementation can be used to generate meaningful examples from unit test cases (RQ2), we applied our research prototype and a state-of-the-art tool (UsETeC [4]) to all the subject programs and the selected test cases, and compared the outputs to the manually built examples in ref-dataset2. We evaluate the effectiveness of our approach in comparison with UsETeC, which extracts representative API uses from test code [4]. UsETeC compares the textual similarity between a test method name and the APIs invoked within the test to find representative APIs. It then uses a slicing technique based on four predefined code patterns to extract code snippets representing the usage of these APIs. The code snippets are then clustered to exclude redundant examples. [Figure 12: Comparison of the characterization of examples generated by our tool and UsETeC with respect to manually built usage examples] Results. Figure 12 presents the evaluation results and compares the generated examples with the human-written examples in ref-dataset2. The comparison criteria are fitness, completeness, and conciseness, which we define below.
Fitness. Each test case uses several APIs, including those under test and others that are used as helpers for initialization or to specify test oracles. We have observed that the tested APIs are the most representative ones within a test and that, consequently, a test illustrates a more meaningful usage of these APIs (focal uses) than of the other, non-focal ones. We thus define fitness as the proportion of examples correctly mapped to focal uses. The examples produced by our tool are linked to APIs on the basis of F-MUTs; 71% of these examples correspond to the focal uses manually picked by a human. UsETeC constructs such links correctly in 56% of the cases. UsETeC relies on name matching to identify the focal uses in each test method, which is a brittle technique that only applies to test cases that strictly follow the naming convention [22-24]. In particular, this strategy falls short if the test name does not contain the name of the unit under test or does not entail a known type. We illustrate this shortcoming of UsETeC on the test case in Figure 13. This test case belongs to the Commons-Email library and demonstrates how to construct a MIME-style email message to send. It sets various email fields and invokes buildMimeMessage on the email object to actually build the MimeMessage. To examine whether the message is built successfully, the unit test invokes getMimeMessage at line 10 to obtain the internal MimeMessage of the email object and assigns it to a new MimeMessage object. Invoking saveChanges on this object updates the appropriate header fields of the message to be consistent with the message content. Finally, the test invokes getContentType to retrieve the header field of this message and evaluates, via an assertion statement, whether it equals the expected value. Among the several API uses in this unit test case, buildMimeMessage is the focal use. In fact, msg.getContentType(), the asserted expression in the unit test case, is invoked on the msg object, which belongs to the MimeMessage class, a standard Java library class. The test initializes this helper object at line 10 by assigning the result of email.getMimeMessage() to it. This invocation is an inspector method that returns the field Email.message. Therefore, this method invocation is the actual asserted expression involving a class of the system under test. The method email.buildMimeMessage() is the last mutator that modifies the Email.message field accessed in the assertion statement; it is the F-MUT and also the focal use in this unit test case. Nonetheless, an approach based on name matching cannot identify this focal use correctly from the test name "testDefaultCharsetAppliesToTextContent", as the test name does not entail a known API. In 71% of the cases our approach picks the correct focal uses. To further investigate the reasons for missing the remaining 29% of the correct APIs, we manually inspected the test cases corresponding to the missed APIs. We realized that most of these exceptions are due to the implementation problems reported in Section 2.2, not to limitations of the approach. Indeed, we could detect the focal uses in most of these test cases correctly by applying the F-MUT identification approach to each test case manually. Consider the test case in Figure 14. An external call, iter.remove() at line 8, affects the oracle part of this test case. The invocation of map.put() at line 4 is the last mutator, but the test case focuses on the deletion of an item through iteration.
In fact, the iter object at line 5 holds a reference to the entries in XmlMap. That is, the invocation of the remove() method on the iter object at line 8 also removes an item from XmlMap. Our prototype does not consider such an implicit dependency between test code and external classes and incorrectly reports XmlMap.put() as the focal use in this test case. [Figure 13: Unit test case for the Email class. Figure 14: A test method with a side effect from an external object] Completeness. Usage examples should be self-contained and include all statements that are necessary for exercising the focal uses. Indeed, an example is complete if it includes the appropriate API and all parameters used in the API are well explained (i.e., they are syntactically correct and there is no missing statement) in the code example. We define completeness as the proportion of examples that are complete. Our tool is able to generate about 75% complete examples, while UsETeC generates 67% complete examples. The reason that neither tool is as good as a human at constructing complete examples lies in certain dependencies in test methods. For instance, a common practice to decrease code duplication and maintenance costs in testing is to encapsulate the setup part of a test in a helper method that can be called by a group of test methods that need the same test preparation. In human-written examples, such test dependencies are often consolidated into the examples. Our tool detects these dependencies but does not put the related code directly into the synthesized examples. This is the main reason why 25% of the examples that our tool generates are marked as incomplete and do not compile. UsETeC produces even more incomplete examples (33%). This tool suffers not only from the same limitation regarding test dependencies but also from its slicing heuristic, which is based on predefined code patterns and, in 8% of the cases, was not able to identify all statements relevant to a focal use. We speculate that the slight improvement achieved by our tool is mainly due to the more precise implementation of intra-procedural analysis in our extension of Groum [21]. Conciseness. To improve readability and enhance developer understanding, usage examples should be concise and free from superfluous statements. In particular, an example is concise if it is complete and does not have more statements than its corresponding human-written example. We define conciseness as the proportion of complete examples that are concise. Automatically generated examples are less concise than human-written examples. Apart from statements that are substituted for test dependencies in human-written examples, the conciseness of the results is 91% for our tool and 73% for UsETeC (both computed over the complete examples). Human-written examples are more concise mainly because of code transformations applied during example construction. For instance, the unit test case in Figure 15 checks whether a URL can properly be assigned to an EmailAttachment object. The code snippet in Figure 16 illustrates a human-written example corresponding to this test case. It correctly summarizes the intended focal use and also removes unnecessary for statements to demonstrate this usage. In contrast, the slice computation we apply to test code merely removes the parts of a test that can be shown to have no effect on the semantics of interest, without any code transformation. Unfortunately, not all examples constructed in this way are sufficiently clear.
For example, consider a simple unit test case from the JGap project in Figure 17, and the example corresponding to its third sub-scenario that our tool generates, shown in Figure 18. During slice computation, the variable declaration statement in the first sub-scenario is found to be relevant because the chrom object is further used in the third sub-scenario. However, the object definition at line 11 kills the earlier definition at line 2. That is, the object initialization at the first line of the example is useless and only obscures the focal use; a code transformation that consolidates these statements would make the example far clearer. In 18% of the cases, UsETeC generates longer examples than our tool. For instance, Figure 19 shows an excerpt, namely the first sub-scenario, of the unit test case from Commons-Email shown in Figure 10, which we use for a quick discussion. Intuitively, email.addHeader is the focal use in this unit test case, and an example representing this API use should only include the headerValue declaration statement. However, UsETeC applied to this unit test case generates a completely different example. Indeed, the focal use addHeader in this test relies on two variables, email and headerValue. UsETeC slices the test code on the basis of email, the variable shared between the focal use and other statements, and includes in the same slice all other preceding invocations on this variable, which are irrelevant to illustrating the correct use of addHeader. UsETeC does not eliminate statements that may be irrelevant in an example, e.g., external APIs and inspector methods. It is confined to intra-procedural relations and roughly assumes that two statements are dependent and should participate in an example if they involve the same object(s), which is not true in general, as discussed in Section 3. We attribute our 18% improvement to more precise test analysis, i.e., capturing inter-procedural data relations (13%) and control dependencies (5%) among statements. [Figure 15: A simple unit test case. Figure 16: A human-written example corresponding to the unit test case in Figure 15. Figure 17: Unit test case for the Chromosome class. Figure 18: An example representing the use of the Chromosome class in the third sub-scenario in Figure 17. Figure 19: The first sub-scenario of the unit test case shown in Figure 10] 4.3 Threats to validity. We note several limitations and threats to the validity of the assessment of our approach. We mitigated the risks to external validity and to the generalization of the results by selecting real-world systems from different application domains with manually written test cases. The study involved a randomly selected sample of test cases from the different systems. Threats to internal validity might arise from the process used in our empirical study. We used statistical methods to evaluate the results of the experiments, where the results could have been affected by randomness in the test case selection. The accuracy of the reference results used to evaluate our approach affects the outcomes achieved. We did not have access to the original program developers to indicate the focal methods in each test case; for this reason, we familiarized ourselves with the project documentation and with the details of the source and test code, and cross-checked the results. We acknowledge that validating the approach against a manual analysis performed by two of the authors is a threat to validity because of potential bias.
Furthermore, to mitigate construct validity risks and avoid mono-method bias in our experiments, we used three complementary metrics (fitness, completeness, and conciseness) to characterize the quality and meaningfulness of the generated API usage examples. Nonetheless, these metrics may not reflect all the properties of the generated usage examples. Our approach shares inherent limitations with other static analysis techniques, which are generally not sound. According to our findings in this study, many of the false negatives are due to the implementation problems reported in Section 2.2, not to limitations of the approach. Nevertheless, static analysis is preferable for analyzing large code bases: it reduces the cost of applying our approach by minimizing the time required to set up execution environments, working machines, etc., needed to run the programs for dynamic analyses. Using a better points-to analysis for type resolution and a more precise call graph construction, for instance using Spark [25], could improve the approach. Although the aim of this research is to provide developers with API usage examples mined from unit test cases when other sources of client code are inaccessible, we did not evaluate the effectiveness of extracting usage examples from test cases in comparison with those extracted from other sources, such as client systems. Such a comparison would be important to decide on the best source of usage examples when different sources are available; we leave this experiment to future work. 5 RELATED WORK This section overviews the related work concerning the two necessary phases of the approach, i.e., test-to-code traceability link recovery and example extraction. 5.1 Linking test code and source code. The practical automated realization of test-to-code traceability has received little attention. Van Rompaey and Demeyer compare several traceability resolution strategies to link test cases and the units under test [23]. For example, in the naming convention strategy, they match production code and test code by removing the string "test" from the name of the test case. This strategy falls short if the test name does not contain the name of the unit under test or does not entail a known type. In another strategy, they analyze the call behavior before assertion statements and presume that a test case calls a method on the unit under test right before the assertion statement; they exploit the static call graph to identify the last class called before an assert statement. This strategy fails when, right before the assert statement, there is a call to a class other than the tested class [24]. A strategy based on lexical analysis builds on the assumption that developers use similar vocabulary to write the source code of a test case and of the corresponding unit under test. Latent semantic indexing, an information retrieval technique, is used to calculate this similarity. However, their study shows that a significant amount of the vocabulary in a test case does not recur in the unit under test. Finally, a version log mining strategy builds on the assumption that test cases and their corresponding units under test co-evolve over time. This strategy bears the risk of wrongly identifying production code that changes frequently as the unit under test. Qusef et al. propose using data flow analysis to circumvent the limitations of these strategies [24]. They apply reachability analysis and exploit data dependence to identify the set of classes that affect the result of the last assertion statement in each unit test.
This analysis, however, does not consider inter-procedural flow, inheritance, and aliasing. SCOTCH is an improvement over this work: a technique based on dynamic slicing to recover test case traceability links [26]. The set of classes identified by dynamic slicing is an overestimate of the set of classes actually tested in a test case; in fact, a slice will also contain all the helper classes used in a test case. In more recent work, the same authors use an additional filtering strategy based on name similarity to enhance the accuracy of their earlier approach [27]. These approaches rely on a "stop-class list" to hold the names of the classes to be treated as helper classes in the analysis; however, these classes have to be identified manually prior to the analysis. While these works recover test-to-source-code traceability links at the class level, Gälli et al. provide initial evidence that a single method is most often the unit under test in object-oriented programs [17]. Nevertheless, there is scant work on automatically identifying methods under test. Marschall exploits the naming convention of a test method to establish the relation between tests and methods under test [22]. This requires developers to strictly follow naming conventions, which is not regular practice [24]. Additionally, a test case may have different sub-scenarios and, accordingly, multiple methods under test, yet the test name may not convey information about all of these methods. To improve this approach, Marschall also suggests a method as a tested method if it creates an argument for an assertion statement. This produces many false positives, as it reports all the inspector methods whose results are asserted in a test case. Ying et al. propose a call graph filtering approach to detect methods that are probably irrelevant during program investigation [28]. According to their findings, methods closer to the leaves of a call graph, as well as those with a small number of callees, are unlikely to contribute to the understanding of the application logic. They use this approach to eliminate irrelevant methods from the set of methods that can be invoked, transitively, from a JUnit test case. This heuristic highlights the setup parts of a test but misses a tested method that is called right before an assertion. In addition, it fails to retrieve the relevant invocations in a test with multiple sub-scenarios. In recent work, Ghafari et al. propose an approach to automatically establish the relationship between source and test code at the method level [14]. They use classic analysis techniques to dissect the structure of unit test cases and determine the role of each method within a unit test. As a result, they precisely detect the focal methods that represent the core of a test scenario inside a unit test case. This precursor work has inspired and enabled the approach presented in this paper. 5.2 Example extraction. Extensive research has been performed to mine API usages from a local repository. Mandelin et al. observe that a programmer usually knows what type of object she needs, but does not know how to write the code to obtain it [29]. To enable code reuse, they develop techniques for automatically synthesizing code fragments given a simple query that describes the desired code in terms of input and output types. Zhong et al.
propose one of the first approaches that mine common API usage patterns to suggest relevant code snippets to developers [3]. The lack of code examples in API documentation motivates Kim et al. to propose a code example recommendation system that augments the standard API documentation with code examples organized locally [30]. Buse et al. present a technique to automatically synthesize human-readable API usage examples from a given software corpus to enrich documentation [5]. This approach is the first to leverage type information and statement ordering; the generated examples are free from superfluous context and contain only the program statements needed to show typical use of a target data type. Likewise, Montandon et al. instrument API documentation in JavaDoc format with concrete usage examples [12]. Wang et al. propose a technique to improve the quality of usage patterns [31]; this work proposes a two-step clustering algorithm to produce succinct and high-covering examples. While the aforementioned approaches usually generate small examples, which are helpful in the initial stage of API learning, Moritz et al. propose a visualization-based approach for finding more detailed API usage examples that are helpful in later stages [32]. Galenson et al. propose an approach for code synthesis given only partial specifications of the desired results [33]. In contrast to most approaches, which rely on static information, this approach is dynamic and also allows users to incrementally provide more information to refine the candidate code fragments. Nguyen et al. propose an API recommendation approach premised on statistical learning from low-level code changes and the context of such changes [34]. It suggests a desirable API call in the top five positions 77% of the time, which is a significant improvement over state-of-the-art approaches. The huge amount of open-source code available online has motivated researchers to focus on internet-scale code search, enabling developers to reap the benefits of these billions of lines of source code. However, ranking high-quality code examples at the top of the result set is challenging. Chatterjee et al. provide a code search technique that locates a small set of relevant code snippets to perform a desired task specified with a free-form query [35]. To improve search results, this technique relies on the API documentation to annotate undocumented code with the plain-English meaning of each API. Although this approach provides better results than existing code search engines, its runtime complexity is high, which decreases its usability for real applications with a large-scale corpus. To address this issue, Keivanloo et al. propose an approach that answers a free-form query within hundreds of milliseconds on a corpus covering millions of code snippets [6]. This approach enables working code examples to be spotted with a time complexity similar to that of internet-scale code search engines. Existing code search engines use ranking algorithms that only compute the relevance between the query and the results. Consequently, they may produce results that all share the same characteristics, e.g., code snippets that originate from the same project with the same implementation. Hence, other, different results with lower relevance scores with respect to the query are hidden, even though they may actually be closer to what a programmer is looking for. Martie et al.
propose four ranking algorithms that take relevance, diversity, and conciseness into account when ranking code search results [36]. Two of these algorithms, which leverage the social and technical information of the code results, produce top-10 results that programmers much prefer. Recent research also relies on communities where developers post questions and receive answers regarding their programming issues to extract useful development hints. These crowd sources often become an alternative to official API documentation where such documentation is either sparse or unavailable. De Souza et al. present an approach that leverages Stack Overflow knowledge (questions and answers) to recommend information that is useful for developers [37]. They present this information in a web browser and rank the results on the basis of the textual similarity between the query and the question-answer pairs, as well as the quality of the pairs. However, this approach comes with a number of problems. Every time developers need to look for information, they interrupt their work flow and switch from the IDE to a web browser to perform and refine searches. Besides, query formulation is not easy to accomplish manually: developers may not know what to search for and how to formulate their needs in a query appropriately. Ponzanelli et al. propose an approach to retrieve Stack Overflow discussions pertinent to the code context in the IDE [9]. For each retrieved discussion, they calculate the textual and structural similarity between the code context and the discussion, as well as the rating of that discussion on Stack Overflow, to rank possible solutions. While all the aforementioned approaches rely on client code, Ghafari et al. propose the idea of extracting API usage examples from unit test cases [11]. Zhu et al. present the UsETeC tool to extract such examples from test code [4]. However, to find representative APIs within a test case, they rely on a test method naming convention that has been proven to be inapplicable for this purpose [23, 24]. Conversely, we identify F-MUTs that represent the actual intents of a unit test case [14]. In addition, UsETeC applies a heuristic code slicing technique that is restricted to predefined code patterns without determining the exact relationships among code elements and their roles within a test, whereas we perform a more precise test analysis to perceive the role of different API calls within a test and synthesize examples that are highly similar to human-written ones. 6" + }, + { + "url": "http://arxiv.org/abs/2007.09863v1", + "title": "Why Research on Test-Driven Development is Inconclusive?", + "abstract": "[Background] Recent investigations into the effects of Test-Driven\nDevelopment (TDD) have been contradictory and inconclusive. This hinders\ndevelopment teams to use research results as the basis for deciding whether and\nhow to apply TDD. [Aim] To support researchers when designing a new study and\nto increase the applicability of TDD research in the decision-making process in\nthe industrial context, we aim at identifying the reasons behind the\ninconclusive research results in TDD. [Method] We studied the state of the art\nin TDD research published in top venues in the past decade, and analyzed the\nway these studies were set up. [Results] We identified five categories of\nfactors that directly impact the outcome of studies on TDD.
[Conclusions] This\nwork can help researchers to conduct more reliable studies, and inform\npractitioners of risks they need to consider when consulting research on TDD.", + "authors": "Mohammad Ghafari, Timm Gross, Davide Fucci, Michael Felderer", + "published": "2020-07-20", + "updated": "2020-07-20", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "main_content": "INTRODUCTION Test-driven development (TDD) is a development technique, initially proposed twenty years ago [2], in which failing tests are written before any code is added or changed. The technique emphasizes small iterations and interleaved refactoring [27]. In the scientific literature, experts usually emphasize the positive effects of TDD [7, 38, 39], and the technique has become an integral part of the software engineering curriculum in universities [24]. When looking at the discourse around TDD in the grey literature, such as practitioners' blog posts or discussions, it becomes apparent that TDD has attracted great attention from practitioners; for instance, the "TDD" tag on Stack Overflow has 4.7k watchers. The motivation for this work is to provide software companies with a road map for introducing TDD in their policies based on the current state of research. However, before that can happen, practitioners need to be made aware of the TDD research results, which are often inconclusive and oftentimes contradictory [22]. Although it is often claimed that TDD improves code quality (e.g., results in fewer bugs and defects), one of the largest systematic studies in this domain [30] shows that the improvement in some studies is not significant, and that the claimed code quality gains are much more pronounced in "low-rigor" and "low-relevance" studies [20]. Research has also studied the impact of TDD on the productivity of software developers, e.g., in terms of the generation of new code and the effort required to fix bugs. Some studies, for example Kollanus [25], claim that quality is increased at the price of degraded productivity, whereas others, such as Bissi et al. [5], argue that existing studies are inconclusive because, for example, experiments in an academic context differ from those in an industrial context. These contradictions make it impossible to categorically provide evidence on the usefulness and effectiveness of TDD. Therefore, in this paper, we focus on identifying the major factors that render the findings in this field inconclusive and that hinder the applicability of TDD research in the decision-making process in an industrial context. Consequently, we answer the following research question: "What factors can contribute to inconclusive research results on TDD?" To answer our research question, we studied, through the lens of a practitioner, the state of the art in TDD research. We investigated contradictory results in this domain by studying secondary studies that organize the large body of research in the field. We then focused on primary studies published in top journals and conferences in the past decade.
We compared several studies that investigated similar phenomena (e.g., internal or external code quality) to identify factors that may contribute to inconclusive results in TDD research. We identified five categories of factors, concerning how studies are set up, that contribute to this problem. These categories are TDD definition, participants, task, type of project, and comparison. We found that the exact definition of TDD that a study follows is not always clear; the participants of the studies are often newcomers to the technique; experiments mainly focus on code generation in greenfield projects, and the opportunity to adopt TDD in an existing code base is not investigated; the baseline practice against which TDD is compared should be agile; and, finally, the exploration of the long-term benefits and drawbacks of TDD has not received enough attention in the literature. In summary, this paper is the first to survey factors related to inconclusive results in TDD research. We believe it has important implications for both researchers and practitioners: it paves the way for researchers to conduct more reliable studies on TDD, and it alerts practitioners to important factors that they should consider when seeking advice from research in this area. The rest of this paper is structured as follows. In Section 2, we explain the methodology we followed to conduct this study. In Section 3, we present our findings. We discuss the implications of this research for practitioners and researchers in Section 4. In Section 5, we discuss the threats to validity of this work, and we conclude the paper in Section 6. 2 METHODOLOGY We conducted a literature study to compile a list of factors that are responsible for diverging research results and that hinder the applicability of TDD research in practice. We were interested in threats that have an explicit impact on TDD and excluded those that, for instance, are inherent to the type of study, such as hypothesis guessing or evaluation apprehension in controlled experiments. We followed three main steps. Firstly, we studied literature reviews that concern TDD to acquaint ourselves with the state of research in this area and to build an overview of the diverging results; we followed backward snowballing to obtain, from these literature reviews, a list of primary studies published from 2009 to 2017. Secondly, we analyzed these primary studies to identify reasons for inconclusive research into TDD. Thirdly, we went through the proceedings of several top journals and conferences and collected papers published after the latest review study (i.e., from 2018 to April 2020) to capture the most recent work in the field. In the following, we discuss these steps in detail, as shown in Figure 1. In the first step, we looked at secondary studies on TDD. We mainly based our work on nine secondary studies reported in a recent meta literature study [22]. We used these secondary studies (see Table 1) to get an overview of the state of research on TDD and to acquaint ourselves with the diverging results discussed in previous work. From these literature reviews we followed backward snowballing to identify potential primary studies to include in this analysis. We did not select studies published earlier than 2009; the decision to focus on publications from the past decade was mainly due to our limited resources, which we prioritized on the more recent body of knowledge in the field.
We then started with the second step, the iterative identification and refinement of the factors that contribute to diverging outcomes in research on TDD. To achieve this, we had to reason about explicit and implicit threats to the validity of TDD studies; however, the way each study was reported varied. We, the first two authors of this paper, read each study thoroughly, filled in a data extraction form, and resolved any conflict by discussion. We picked one primary study and analyzed its goals, setup, execution, findings, and threats to validity. We compared studies that investigated similar goals, for instance, assessing the impact of TDD on internal or external code quality. We then used the results of our analysis, firstly, to refine our list of categories of factors, either by adding a new category or by sharpening an existing one, and, secondly, to provide examples of the existing categories. Next, we picked another primary study and repeated this process. The selection of the next paper to be analyzed was based on two criteria: first, we preferred studies that were cited multiple times and whose abstract sounded relevant (e.g., it describes a comparative study or measures the impact of TDD); second, we tried to keep a balance between different types of studies, such as experiments, case studies, and surveys. To determine when to stop the iteration, we used a saturation criterion, i.e., we stopped adding new primary studies once the inclusion of a new one revealed neither a new threat nor additional information regarding one of the identified categories of factors. Table 2 lists ten carefully selected examples of primary studies that we analyzed in this step. In the third step, we reflected on recent studies in the field. We browsed the proceedings of top-tier conferences and the issues of journals from 2018 to April 2020 to include papers published after the latest TDD review study.1 We searched for the terms "TDD", "test driven", "test-driven", "test first", and "test-first" in several top-tier journals and conferences. In particular, we looked at six journals (IEEE Transactions on Software Engineering; Empirical Software Engineering; Software Testing, Verification and Reliability; Journal of Systems and Software; Information and Software Technology; and Journal of Software: Evolution and Process); the proceedings of eight software engineering conferences (International Conference on Software Engineering, International Conference on Automated Software Engineering, Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, International Conference on Software Analysis, Evolution and Reengineering, International Conference on Software Maintenance and Evolution, International Symposium on Empirical Software Engineering and Measurement, International Conference on Evaluation and Assessment in Software Engineering, and International Conference on Mining Software Repositories); two top testing conferences (International Conference on Software Testing, Verification and Validation, and International Symposium on Software Testing and Analysis); and three software process conferences (International Conference on Agile Software Development, International Conference on Software and Systems Process, and International Conference on Product-Focused Software Process Improvement). This process resulted in only ten new papers, listed in Table 3.
We studied each paper in depth, similarly to the primary studies in the previous step, to check whether it provided new insights. [Footnote 1: We mainly selected the top relevant journals from the ISI-listed journals, and consulted the CORE conference ranking to identify relevant venues with at least an A ranking.] [Figure 1: The methodology of our literature review.]

Table 1: The secondary studies we analyzed in the first step
Karac and Turhan [22]: What Do We (Really) Know about Test-Driven Development?
Bissi et al. [5]: The effects of test driven development on internal quality, external quality and productivity: A systematic review
Munir et al. [30]: Considering rigor and relevance when evaluating test driven development: A systematic review
Rafique and Misic [33]: The Effects of Test-Driven Development on External Quality and Productivity: A Meta-analysis
Causevic et al. [8]: Factors Limiting Industrial Adoption of Test Driven Development: A Systematic Review
Shull et al. [39]: What Do We Know about Test-Driven Development?
Turhan et al. [47]: How Effective is Test-Driven Development?
Kollanus [25]: Test-Driven Development Still a Promising Approach?
Siniaalto [40]: Test driven development: empirical body of evidence

3 RESULTS There have been many investigations into the outcomes of TDD in software development. Nevertheless, the understanding of these different outcomes is still inconclusive, for several reasons lying in the way previous studies were set up. In this section, we discuss these outcomes and the factors responsible for the contradictory understanding, as summarized in Figure 2. 3.1 Outcomes. In general, TDD promises to improve developer productivity and three dimensions of code quality, namely internal and external code quality as well as test quality [3]. External code quality is usually relevant for users and is measured in terms of how well the code covers and implements the requirements or user stories. Internal code quality is only relevant for developers and describes how well the code is structured, how complex it is to understand, or how maintainable it is. There are several ways to measure internal (code and test) quality (see Table 4). For instance, Shull et al. [39] reviewed studies that measured code quality in terms of metrics such as coupling and cohesion, complexity, and density. They reported mixed results, with some papers measuring better and others measuring worse internal code quality. In terms of test quality, research has explored the quality of tests by measuring mutation scores (i.e., the bug detection ability of the tests) and code coverage (i.e., the degree to which the source code of a program is executed when a test suite runs). For example, Tosun et al. [45] conducted an experiment with 24 professionals and found that unit-test cases developed in TDD have a higher mutation score and branch coverage, but lower method coverage, than those developed in ITL. Their findings contradict earlier findings that were mostly obtained with students [28].
Table 2: Examples of the primary studies collected in the second step
Pančur and Ciglaric [32]: Impact of test-driven development on productivity, code and tests: A controlled experiment
Fucci et al. [14]: A Dissection of the Test-Driven Development Process: Does It Really Matter to Test-First or to Test-Last?
Dogša and Batic [10]: The effectiveness of test-driven development: an industrial case study
Fucci and Turhan [16]: A Replicated Experiment on the Effectiveness of Test-first Development
Thomson et al. [44]: What Makes Testing Work: Nine Case Studies of Software Development Teams
Romano et al. [35]: Findings from a multi-method study on test-driven development
Buchan et al. [7]: Causal Factors, Benefits and Challenges of Test-Driven Development: Practitioner Perceptions
Scanniello et al. [38]: Students' and Professionals' Perceptions of Test-driven Development: A Focus Group Study
Beller et al. [4]: Developer Testing in The IDE: Patterns, Beliefs, And Behavior
Bannerman and Martin [1]: A multiple comparative study of test-with development product changes and their effects on team speed and product quality

Table 3: The primary studies collected in the third step
Karac et al. [23]: A Controlled Experiment with Novice Developers on the Impact of Task Description Granularity on Software Quality in Test-Driven Development
Tosun et al. [46]: Investigating the Impact of Development Task on External Quality in Test-Driven Development: An Industry Experiment
Borle et al. [6]: Analyzing the effects of test driven development in GitHub
Fucci et al. [15]: A longitudinal cohort study on the retainment of test-driven development
Kazerouni et al. [24]: Assessing Incremental Testing Practices and Their Impact on Project Outcomes
Santos et al. [37]: Improving Development Practices through Experimentation: an Industrial TDD Case
Tosun et al. [45]: On the Effectiveness of Unit Tests in Test-driven Development
Santos et al. [36]: Does the Performance of TDD Hold Across Software Companies and Premises? A Group of Industrial Experiments on TDD
Romano et al. [34]: An Empirical Assessment on Affective Reactions of Novice Developers When Applying Test-Driven Development
Sundelin et al. [43]: Test-Driving FinTech Product Development: An Experience Report

Table 4: Measurement of internal code and test quality
Complexity: Pančur and Ciglaric [32], Dogša and Batic [10], Bannerman and Martin [1], Tosun et al. [46]
Code coverage: Tosun et al. [45], Pančur and Ciglaric [32], Kazerouni et al. [24], Thomson et al. [44], Borle et al. [6], Bannerman and Martin [1]
Mutation score: Tosun et al. [45], Pančur and Ciglaric [32]
None: Fucci et al. [14], Fucci et al. [15], Fucci and Turhan [16], Santos et al. [37], Beller et al. [4], Karac et al. [23]

In terms of external quality and developer productivity, previous research has mostly investigated new code generation (e.g., accepted user stories and the time to implement them). For instance, Marchenko et al. [29] interviewed eight participants who had used TDD at Nokia-Siemens Networks for three years. The participants stated that the team's confidence in the code base improved, which is associated with improved productivity. Fucci et al. [15] conducted an experiment with students over a period of five months and showed that the adoption of TDD only results in writing more tests; otherwise, it has no statistically significant effect either on the external quality of software products or on the developers' productivity.
We noted that TDD research has looked at bugs and code maintainability as static indicators for external and internal quality, respectively. However, in practice, their costs would be manifested to the full extent only once software is in use. Especially, we rarely found studies on the maintainability of tests and their co-evolution with production code. One reason might be that many people do not consider TDD as a testing technique per se, but as a design technique [3]. However, Sundelin et al. [43] studied a financial software under development for eight years, and found that the size of tests grows much faster than of production code. Therefore, it is necessary to clean, refactor, and prioritize tests to manage this grows. \fWhy Research on Test-Driven Development is Inconclusive? ESEM \u201920, October 8\u20139, 2020, Bari, Italy Participants Task Type of Project Comparison Code Quality Factors Test Productivity Outcomes Internal TDD Definition Individual Team TDD Study External Figure 2: Factors contributing to the inconclusive outcomes in research on TDD. Research often deals with short-term impact of TDD rather than its long-term benefits and drawbacks, which manifest themselves once the software is in use. This is especially the case for quality of test suites. 3.2 Factors We identified five categories of factors, namely TDD definition, participants, task, type of project, and comparison that influence the outcome of TDD research. In the following, we present these categories in detail. 3.2.1 TDD definition. The steps defining TDD and how strictly they are followed is very important for a study. There are two common TDD styles: one is classical TDD, where there is almost no design upfront and developers just drive the entire implementation from the tests; and the other one is where developers know the design before developing [21]. In effect, developers often adopt a combination of these styles depending on the problem domain. However, we noted that a commonly shared definition of TDD is missing. What TDD means is mostly boiled down to writing tests prior to production code, and its other characteristics have not received similar attention. For example, some studies measure refactoring explicitly and even use it to assess how much participants adhere to TDD, while others are not concerned with refactoring, even though it is supposed to be a key part of TDD [3]. There are a few recent studies that investigated how testing is actually done \u201cin the wild\u201d. Beller et al. [4] observed the work of 2,443 software developers over 2.5 years and discovered that developers who claim to do TDD, neither follow it strictly nor for all their modifications. They found that only 2.2% of sessions with test executions contain strict TDD patterns. Borle et al. [6] showed that TDD is practiced in only 0.8% of the 256,572 investigated public GitHub projects which contain test files. There is a variety of TDD definitions. Its exact meaning, the underlying assumptions, and how strictly one follows it are not well-explained in previous studies. 3.2.2 Participants selection. Studies who recruit their participants from companies tend to have fewer participants than studies done with students. One can see that from Table 5, which shows the numbers of participants in industrial and academic studies. In particular, studies with professionals usually have a maximum of 20 participants, whereas studies with students have in several cases 40+ participants. 
We observed that experiments are mostly conducted as part of exercises in a one-semester course with students, whereas in industry they are often part of an intensive course with professional participants lasting a couple of days (see Table 6). Nevertheless, anecdotal [39] as well as empirical evidence [38] suggest that when introducing TDD to developers, the benefits manifest themselves only after an initial investment and a ramp-up time. We noted that studies with participants who are proficient in TDD prior to the start of experiments, for example [7], are in the minority. We even observed studies, for example [46], where participants were asked to follow TDD right after only a short introduction. The fact that both practitioners and students have quite similar TDD experience (i.e., they have undergone very little training in TDD) does not necessarily imply that when practicing TDD the outcomes of the two subject groups are also similar. Professionals\u2019 competencies, for instance to develop tests and design software, may influence their performance when practicing TDD. For instance, Santos et al. [36] conducted four industrial experiments in two different companies, and reported that the larger the experience with unit testing and testing tools, the better developers perform in terms of external quality in ITL than in TDD. Latorre [26] found that in unit test-driven development, junior developers are not able to discover the best design, and this translates into a performance penalty since they need to revise their design choices more frequently than skilled developers. Romano et al. [34] investigated the affective reactions of novice developers to the development approach and reported that novices seem to like a non-TDD development approach more than TDD, and that the testing phase makes developers using TDD less happy. Suleman et al. [42] conducted an early pilot study with students who experienced TDD in an introductory programming course. They found that students do not necessarily experience the immediate benefits of TDD, and that this TDD is perceived to be more of a hindrance than a help to them. Studies participants (i.e., students and professionals) have little prior TDD experience, ranging generally from a couple of days to a couple of months. 3.2.3 Task selection. The number as well as the types of performed tasks are important. Tasks that are synthetic are easily comparable, for example, in terms of complexity. Nevertheless, they do not resemble tasks assigned during the course of a real-world project. We observed that most studies were concerned with one and up to four synthetic tasks, such as coding katas. Table 7 shows which studies used what kind of tasks. Surprisingly, synthetic tasks are dominant in experiments conducted in industrial settings. \fESEM \u201920, October 8\u20139, 2020, Bari, Italy Ghafari, et al. Table 5: Population of participants in studies with students and professionals Less than 20 participants 21-40 participants More than 40 participants Industrial Romano et al. [35], Buchan et al. [7], Scanniello et al. [38], Santos et al. [37], Tosun et al. [46] Tosun et al. [45], Dog\u0161a and Batic [10], Fucci et al. [14] Academic Romano et al. [35], Scanniello et al. [38] Thomson et al. [44] Pan\u010dur and Ciglaric [32], Kazerouni et al. [24], Fucci and Turhan [16], Karac et al. [23] Table 6: TDD experience <1 week Tosun et al. [45], Fucci et al. [14], Thomson et al. [44], Santos et al. [37], Tosun et al. [46] 1 week 0.5 years Fucci et al. 
[15], Kazerouni et al. [24], Romano et al. [35], Scanniello et al. [38], Dog\u0161a and Batic [10], Fucci and Turhan [16], Karac et al. [23] 0.5 years 1 year Pan\u010dur and Ciglaric [32] more Buchan et al. [7] Table 7: Synthetic tasks vs. real-world tasks Synthetic task Romano et al. [35], Fucci and Turhan [16], Tosun et al. [45], Pan\u010dur and Ciglaric [32], Karac et al. [23], Tosun et al. [46], Fucci et al. [14], Santos et al. [37], Fucci et al. [15], Kazerouni et al. [24] Real task Thomson et al. [44], Dog\u0161a and Batic [10] The granularity as well as the complexity of a task, e.g., whether it is related to other parts of a software and whether developers are familiar with the task, may impact the TDD outcomes. For instance, [23] investigated the effect of task description granularity on the quality (functional correctness and completeness) of software developed in TDD by novice developers (precisely graduate students), and reported that more granular task descriptions significantly improve quality. Latorre [26] showed that experienced developers who practice TDD for a short while become as effective in performing \u201csmall programming tasks\u201d as compared to more traditional test-last development techniques. However, many consider TDD as a design technique [3], but how much design is involved in a small task is debatable. Moreover, the suitability of TDD may differ not only for different tasks, but also for different parts in a software\u2014i.e., one might apply TDD to implement features in more critical parts of the code base and do not apply it for less critical parts. Finally, previous literature is mostly concerned with code generation, and exploring how TDD performs during bug-fixing or large-scale refactoring has not received enough attention. For instance, Marchenko et al. [29] interviewed a team of eight developers who adopted TDD at Nokia-Siemens Network for three years. The team reported that TDD was not suitable for bug fixing, especially for bugs that are difficult to reproduce or for quick \u201chacks\u201d due to the testing overhead. Synthetic, non-real world tasks are dominant. Research does not cover the variety of tasks to which TDD can be applied. 3.2.4 Type of Project. In agile software development, developers are often involved in changing existing code, either during bug fixing or to implement changing requirements. Therefore, whether the studies are concerned with projects developed from scratch (i.e., greenfield), or with existing projects (i.e., brownfield) plays a role.2 Brownfield projects are arguably closer to the daily work of a developer, and generalizing the results gathered from greenfield projects to brownfield projects may not be valid. Nevertheless, brownfield projects are under-represented in existing research (see Table 8). We believe that the application of TDD in an existing codebase depends on the availability of a rich test suite and the testability of a software \u2014 i.e., how difficult it is to develop and run tests [17]. In legacy systems that lack unit test cases, TDD may not be applicable as developers are deprived of the quick feedback from tests on changes. However, understanding how TDD performs in brownfield projects that comprise regression test suites is a research opportunity that needs to be explored. Research mostly focuses on greenfield projects rather than brownfield projects. Accordingly, the opportunity to apply TDD in an existing codebase is unclear. 
2Creating a new functionality in an existing project that is largely unrelated to the rest of the project is still a greenfield project. \fWhy Research on Test-Driven Development is Inconclusive? ESEM \u201920, October 8\u20139, 2020, Bari, Italy Table 8: Greenvs. brownfield projects Greenfield Tosun et al. [45], Pan\u010dur and Ciglaric [32], Fucci et al. [14], Fucci et al. [15], Kazerouni et al. [24], Romano et al. [35], Thomson et al. [44], Dog\u0161a and Batic [10], Fucci and Turhan [16], Santos et al. [37], Karac et al. [23], Tosun et al. [46] Brownfield Buchan et al. [7], Scanniello et al. [38] 3.2.5 Comparisons. Factors that are actually responsible for the benefits of TDD vary. For instance, research has shown that, when measuring quality, the degree of iteration of the process is more important than the order in which the test cases are written [14]. In a recent study, Karac et al. [23] suggest that the success of TDD is correlated with the sub-division of a requirement into smaller tasks, leading to an increase in iterations. Previous research has shown that a lot of the superiority of TDD in existing studies is the result of a comparison with a coarsegrained waterfall process[32]. Nevertheless, TDD is an agile technique and should be compared with fine-grained iterative techniques, such as iterative test last (ITL), that share similar characteristics. This means not only we do not know what exactly is responsible for the observed benefits of TDD, but also that the benefits we measure depend on what we compare TDD against. Table 9 shows examples of what the analyzed studies compare TDD to. \u201cTest last\u201d (TL) describes that the tests are written after the production code without specifying when exactly. \u201cIterative test last\u201d (ITL) is similar in that the tests are written after the production code is implemented, but it is supposed to have the same iterativeness as TDD. This means in ITL a small code change is written and the tests are written immediately afterwards. The category \u201cYour way\u201d means that there is no guideline and developers should decide, if ever, when and how they write tests. Finally, the category \u201cTDD\u201d compares TDD to itself in different settings. For instance, the performance impact the granularity of task description has on TDD [23]. There may be more factors at play when comparing two techniques. For instance, a recent work has shown that testing phase makes novice developers using TDD less happy [34]. In the same vein, students perceive TDD more of an obstacle than a help [42]. The affective reactions of developers may not have an immediate impact on the outcome of TDD, but exploring the consequences over the long run is necessary to draw fair conclusions. The benefits of TDD may not be only due to writing tests first and, therefore, it should be compared to other Agile techniques. 4 DISCUSSION The promise of TDD is that it should lead to more testable and easier to modify code [3]. This makes it appealing from an industrial perspective, as developers spend half of their time dealing with technical debt, debugging, and refactoring with an associated opportunity cost of 85$ billion [41]. Nevertheless, the empirical evidence on TDD is contradictory, which hinders the adoption of this technique in practice. Causevic et al. 
[8] explored the reasons behind the limited industrial adoption of TDD, and identified seven factors, namely increased development time, insufficient TDD experience/knowledge, lack of upfront design, domain and tool specific issues, lack of developer skill in writing test cases, insufficient adherence to TDD protocol, and legacy code. Munir et al. [30] investigated how the conclusions of existing research change when taking into account the relevance and rigor of studies in this field. They found that studies with a high rigor and relevance scores show clear results for improvement in external quality at the price of degrading productivity. We have built on previous work by exploring the latest state of the research in this domain. We identified factors that contribute to diverging results when studying TDD, and highlighted research opportunities that improve the applicability of research results for practitioners. In particular, we found that the exact definition of TDD that a study follows is not always clear; the participants of the studies are often newcomers to this technique and experiments with TDD proficient participants are in a minority; experiments mainly focus on code generation in greenfield projects, and the opportunities to adopt TDD in an existing codebase is not investigated; the baseline practice against which TDD is compared should share similar agile characteristics; and exploration of the long-term benefits and drawbacks of TDD, especially how to manage the large body of test cases generated in TDD, has not received enough attention in the literature. This work has implications for both practitioners deciding on the adoption of TDD and researchers studying it. We discuss these implications in the following. Implications for practitioners. We propose a list of known factors for practitioners to take into account when making a decision about TDD. The factors are tuned for practitioners as their interest can be different from the one constituting the phenomena studied in research. For example, although a study may investigate the effect of TDD on maintainability (i.e., an important aspect for a practitioner), it does so in a greenfield project (i.e., irrelevant for the practitioners\u2019 everyday situation). Therefore, the factors can be used as a support for practitioners navigating the (vast) scientific TDD literature and can be used to filter results interesting for their specific cases. In general, industry practitioners are concerned that a low participation of professionals as subjects reduces the impact of software engineering research [11]. For practitioners, it is difficult to make a decision based on a group of students benefiting from TDD. Although CS graduates and entry-level developers are assumed to have similar skills [11], practitioners basing their decision to include TDD in their set of practices using the Participants factor need to be aware that motivations between these two types of participants are different [12]. Practitioners need to be also aware that designing experiments with students is vastly easier compared to professionals (e.g., due to ease of recruitment). Therefore, it is \fESEM \u201920, October 8\u20139, 2020, Bari, Italy Ghafari, et al. Table 9: What TDD is compared to Iterative test last Tosun et al. [45], Pan\u010dur and Ciglaric [32], Kazerouni et al. [24], Fucci et al. [14], Santos et al. [37], Tosun et al. [46] Test last Dog\u0161a and Batic [10], Fucci and Turhan [16], Bannerman and Martin [1], Fucci et al. 
[14] Your way Fucci et al. [15], Thomson et al. [44], Romano et al. [35], Santos et al. [37], Beller et al. [4], Buchan et al. [7], Scanniello et al. [38], Borle et al. [6] TDD Karac et al. [23] unwise to disregard potential insights gained from study with students. Notably, the correct application of TDD requires training and practice [24], but the current investigations are manily based on the observation of practitioners (either professional or not) who often received a short crash course in TDD. Santos et al. [36] have shown that the larger the experience with unit testing and testing tools, the more developers outperform in ITL than in TDD. Implications for researchers. The factors presented in this study can serve as the basis for the development of guidelines on how to design TDD studies that result in converging results. Similarly, researchers wanting to perform TDD studies\u2014independently from their goal\u2014need to prioritize the factors presented in this paper to be relevant for practice. One factor we deem important for scientific investigation of TDD is Comparison\u2014i.e., the baseline practice against which TDD is compared. The IT landscape was different when the Agile methodologies, including TDD, were first proposed [2, 3]. Not only the technologies, such as testing frameworks and automation infrastructure were not as mature as they are today, but also the development paradigms were mostly akin to the waterfall model, often without any explicit testing during development. But now, 20 years later, it is necessary to re-evaluate what factors of TDD we study and what we compare TDD to. We noted that research has mostly focused on short terms benefit (if any) of TDD, while it does not concentrate on how TDD impacts downstream activities in the software development lifecycle\u2014e.g., system testing [31]. Similarly, understanding effects such as the actual maintenance costs that manifest themselves only when the software is in use has not received enough attention in research. Especially, test suites could grow faster than production code in TDD [43], but we have not seen any study that concern managing tests. Final remarks. The major software testing venues do not seem to be interested in TDD\u2014e.g., no papers were published at the past two editions of ICST3, ISSTA4, ICSE5, and FSE6 nor submitted to STVR7 between 2013 and 2020 [31]. We believe that addressing these factors is necessary for a renaissance of TDD in the research community after the initial 15 years of inconclusive evidence. 3International Conference on Software Testing 4International Symposium on Software Testing and Analysis 5International Conference on Software Engineering 6International Conference on the Foundations of Software Engineering 7Software Testing, Verification, and Reliability Journal It is noteworthy that the list of factors we presented in this paper, although grounded in the existing literature, is not exhaustive as several other factors apply specifically to industry. For instance, factors such as Agility of a company [18], testing polices [19], and developers\u2019 work load have not received attention in research on TDD. We believe that conducting general and convincing studies about TDD is hard, however, if TDD research is to be relevant for decision makers, more in-depth research is necessary to provide a fair account of problems in TDD experiments. 5 THREATS TO VALIDITY We relied on several secondary studies to obtain a list of research on TDD which is as exhaustive as possible. 
We then manually browsed top and relevant journals/conferences to include recent papers. However, there is always risk of omitting relevant papers when performing a literature study. We mitigated the risk in two ways. First, we clearly defined and discussed what primary studies fit the scope of our study, and conducted a pilot study to examine our decision criteria on whether or not to include a paper based on an iterative saturation approach. Secondly, a random set of 15 excluded papers were examined independently by a second researcher to minimize the risk of missing important papers. The secondary studies used as a starting point in our process are Systematic Reviews and Meta-analyses which mainly aggregate evidence from quantitative investigations, such as controlled experiments. Conversely, none of the secondary studies presented an aggregation of qualitative investigations, such as thematic or narrative synthesis [9]. Although this can result in a set of primary studies skewed towards one type of investigation, we made sure that each factor is reported in studies following both qualitative and quantitative research methodologies. We sorted primary studies, published until 2017, according to number of citations. We acknowledge that due to such a criterion, we may have failed to include more recent studies as they had less time to be cited. For more recent primary studies that we collected manually, published from 2018 to 2020, we included all the papers. We had to understand, from the lens of practitioners, why research results on TDD are diverging and under which circumstances the results may not be generalizable to real-world context. We treated papers as artifacts to be understood through qualitative literature analysis [13], and tried to truthfully make connections between studies. In order to mitigate the risk of missing or misinterpreting information from a study, we designed a data extraction form and discussed it together to develop a shared understanding. We ran a pilot study with five randomly selected primary studies \fWhy Research on Test-Driven Development is Inconclusive? ESEM \u201920, October 8\u20139, 2020, Bari, Italy to make sure that we all agree on the extracted information. Finally, through constant iterations, we further mitigated the risk of missing information in our analysis and oversimplifying the results. The use of saturation in our analysis made sure that we did not prematurely stop including more entries and that the categories of factors were stable. 6" + } + ], + "Davide Fucci": [ + { + "url": "http://arxiv.org/abs/2206.04462v1", + "title": "When Traceability Goes Awry: an Industrial Experience Report", + "abstract": "The concept of traceability between artifacts is considered an enabler for\nsoftware project success. This concept has received plenty of attention from\nthe research community and is by many perceived to always be available in an\nindustrial setting. In this industry-academia collaborative project, a team of\nresearchers, supported by testing practitioners from a large telecommunication\ncompany, sought to investigate the partner company's issues related to software\nquality. However, it was soon identified that the fundamental traceability\nlinks between requirements and test cases were missing. This lack of\ntraceability impeded the implementation of a solution to help the company deal\nwith its quality issues. 
In this experience report, we discuss lessons learned\nabout the practical value of creating and maintaining traceability links in\ncomplex industrial settings and provide a cautionary tale for researchers.", + "authors": "Davide Fucci, Emil Al\u00e9groth, Thomas Axelsson", + "published": "2022-06-09", + "updated": "2022-06-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "main_content": "Introduction The concept of traceability refers to links between different software artifacts, such as requirements, source code, and test cases. Establishing traceability within requirements helps the organization manage dependencies. In contrast, traceability between requirements and test cases, also referred to as alignment [1], is necessary to measure coverage and ensure that the product ful\ufb01lls customers\u2019 needs with a given degree of quality. Moreover, the traceability between source code and test cases enables the analysis of the impact when changes occur. Preprint submitted to Journal of Systems and Software June 10, 2022 \fIn this study, Blekinge Institute of Technology (BTH) collaborated with a large telecommunication company (hereafter, COMPANY1), with the original intent to study COMPANY\u2019s software quality. Speci\ufb01cally, the researchers collaborated with the organization within COMPANY in charge of the veri\ufb01cation before release (i.e., TestOrg). The initial aim of this collaboration was to investigate the challenges related to the state-of-practice of the COMPANY quality assurance process. TestOrg was a suitable partner in this project since they are responsible for the quality assurance of all COMPANY products and can be considered the \ufb01rst user of the product once it leaves the development phase. BTH and TestOrg organized a series of workshops to elicit pain points regarding software quality. One outcome of such workshops was the perceived mismatch between the organization level of quality assurance activities and the actual observed quality. We termed this difference quality in\ufb02ation. The study of the quality in\ufb02ation phenomenon at COMPANY became the re\ufb01ned objective of our industrial collaboration. The researchers analyzed the artifacts TestOrg uses to perform their activities, such as requirement speci\ufb01cations, issue reports, and test cases. To observe the symptoms and connect them to possible root causes of quality in\ufb02ation, the researchers needed to connect the requirements to other artifacts in the development process. Therefore, the existing traceability links and the possibility of establishing new ones play a signi\ufb01cant role for detecting quality in\ufb02ation. However, we found that such traces were generally lacking and that reverse engineering these traces from existing artifacts was not only time-consuming but also, in most cases, not possible. In this paper, we report our experience in detecting quality in\ufb02ation at COMPANY and how the investigation of this phenomenon failed due to the lack of traceability between artifacts. This experience serves as a case for academia to consider for future research and provide an example for practitioners of the challenges that can arise once traceability between software development artifacts is lacking. The rest of the paper is organized as follows: Section 2 presents an overview of the existing literature on traceability in software projects and its application to different artifacts in the software development lifecycle. 
Section 3 presents the industrial context of our study, and Section 4 shows our experience in trying to establish traceability links in a large-scale organization to investigate a particular aspect of software quality\u2014i.e., quality in\ufb02ation. Section 5 reports a discussion of 1We omitted the COMPANY name due to their wish to stay anonymous. 2 \four experience and takeaways for researchers and practitioners. Finally, Section 6 concludes the paper. 2. Software artifacts Traceability The body of research in traceability is vast, with early works published in the 70s\u2014e.g., Randell [2]. Since then, the concept has been explored in both industry and academia through empirical studies and summarized in systematic literature reviews [3, 4, 5]. Traceability is essential to govern the software development process, to manage complexity, and mitigate costs [6, 7]. However, research has also established that maintaining traceability links of high quality over time can be a costly process [8]. Therefore, research has started focusing on traceability automation [9]. Among the secondary studies, Mustafa and Labiche performed a literature review that identi\ufb01ed a lack of tools for tracing heterogenous artifacts [10]. Conversely, a review by Tufail et al. [11], identi\ufb01ed seven models, ten challenges, and 14 tools for traceability. The review by Javed and Zdun examined the connections between traceability and software architecture [12]. In contrast, Santiago et al. looked at managing traceability in the context of model-driven engineering and the associated complexities [13]. According to a recent mapping study surveying 63 papers between 2000 and 2020, traceability involving testing artifacts and related activities is the least investigated [4]. Moreover, according to the authors, few tools support traceability in software testing activity [4]. This conclusion is drawn, despite the numerous research on the tools that either explicitly focus on, or support, traceability in software development. Examples of such tools include PLM/ALM which, according to a study by Ebert [14], was bene\ufb01cial to de\ufb01ne traceability for testing purposes. The project management tool DOORS is commonly used in the industry and is used as a driver for research into, for example, automated traceability links [15] and rich traceability [16]. Another example is Enterprise architect\u2014a large-scale modelling framework that can be used to model traceability between several aspects of development, including assets, processes, and the organization [17]. Although used successfully in practice, these tools share a human component. Thereby, their success is tied to how rigorously they are used. Tian et al. [4] show that traceability in different software development activities has rarely been evaluated in industrial settings (i.e., 16% of the reviewed primary studies). However, despite several studies commending traceability as a prerequisite for a successful software project (and conversely pointing to a lack of 3 \ftraceability as a factor leading to failure [18]), there is no empirical evidence, to the best of our knowledge, that supports these claims in the industrial context. Traceability is often discussed as a sequential trace from requirements to code, and from code to test cases. Another critical dimension is alignment\u2014i.e., the traceability between requirements and tests. Unterkalmsteiner et al. [1] proposes a taxonomy for requirements and test alignment (REST). 
They show a method for designing contextual taxonomies of alignment, including concepts for how to reason about the establishment of traceability links from requirements speci\ufb01cations to design, development, and testing. Another concept related to traceability is change management [19]\u2014i.e., the idea that when a software artifact is changed, all associated artifacts need to be identi\ufb01ed and updated accordingly. The responsibility for this task is often delegated to the development team, and its complexity is affected by the already available traceability links. In their case study, Borg et al. found that developers prefer \ufb02exible forms of information rather than formal information, including traceability information, when dealing with changes [19]. Both the research on requirements-tests alignment and change management demonstrate an academic idea of the importance of traceability in software development. This suggestion, although reasonable, does not account for the costs of keeping traceability and alignment up-to-date in complex industrial settings. 3. Investigation of Software Quality within COMPANY BTH and COMPANY are collaborating on a research project in the area of software quality, with a focus on automated software testing. This project is enabled by BTH\u2019s approach to Industry-Academia collaboration and technology transfer based on the model proposed by Gorscheck et al. [20]. Since the project is performed in co-production, BTH began by identifying the demands and constraints imposed on TestOrg by the process used to develop and deliver software at COMPANY. TestOrg is in charge of verifying digital business solutions (DBS)\u2014e.g., online charging systems, mobile \ufb01nancial services, and service catalog manager\u2014 for telecommunication operators worldwide. 3.1. Preliminary Workshops and Research Objectives In the Fall of 2019, BTH organized two explorative workshops at COMPANY to understand the organizational challenges in the area of testing and quality assurance related to automation. The \ufb01rst workshop (Workshop 1) involved members of the quality assurance (QA) team and product managers, whereas the second 4 \f(Workshop 2) involved developers, testers, and operations personnel. We assumed that the challenges highlighted within the two groups would differ based on norms and values [21, 22]. The workshops were organized into three parts. First, the participants answered the question \u201cWhat are the challenges your team and organization are currently facing with test automation?\u201d in brief statements on post-it notes (several answers are possible) in a time-boxed exercise (15 minutes). In the second part, each participant pitches their answers to the entire group and engages in a discussion. As the participants read and connect their answers, the researchers cluster the notes according to themes emerging during the discussion. This part of the workshop is not time-boxed to give everyone the possibility to voice their ideas. In the third part, the researchers present the themes that emerged during the discussions (i.e., clusters) and, in real time, validate with the participants the correct interpretation of their answers. We identi\ufb01ed eight themes associated with QA and Test Automation challenges in Workshop 1 and six in Workshop 2. The researchers prioritized the themes that emerged during the workshop in collaboration with the TestOrg leaders and test managers, taking the results from the developers as supporting input. 
The rationale for this decision was that the emerging challenges on the management level were of more general and larger complexity; thereby, they incorporated several of the challenges expressed by the developers. BTH presented the results to the TestOrg team, and during a discussion structured around the identi\ufb01ed challenges, TestOrg provided further input to prioritize them. Finally, BTH and TestOrg set out to study quality in\ufb02ation\u2014a mismatch between the perceived effort in quality assurance activities performed by TestOrg and the quality observed in use. We use the term in\ufb02ation to indicate that the effort in quality assurance activities is arti\ufb01cially increased; hence, creating such mismatch. TestOrg perceived that the motivations for quality in\ufb02ation were related to both technical and human factors. For example, during Workshop 1, participants pointed out the misalignment between the metrics used by management and the ones used by developers to evaluate quality. TestOrg managers felt that the team would intentionally write \u201chappy\u201d test cases and defer more thorough testing activities down the development pipeline. During Workshop 2, developers felt that quality goals are not explicit and communicated with poor rationale. Focusing on the technical causes \ufb01rst, BTH researchers collaborating with TestOrg focused on the following objectives. O1. Evaluate the quality of the test suites associated with different types of Busi5 \fness Requirements (BR). A BR is a high-level requirement for the product, which is later broken down into smaller requirements. During Workshop 1, it emerged that some BRs within the DBS system are perceived to work well since they have few issue reports (IR) associated with them. We aim to locate such BRs, characterize them based on aspects such as their history and the history of the associated IR and test cases, and \ufb01nally compare them to those BRs that are considered troublesome by the development and QA teams. This analysis aimed to show TestOrg possible root causes for quality in\ufb02ation. O2. Correlate the quality of the automated test cases with the quality of the test speci\ufb01cation (e.g., test smells\u2014sub-optimal design of test code [23]). This objective assumes that better-speci\ufb01ed test scenarios lead to better test cases. O3. Evaluate the effects of the triaging process on perceived quality. During Workshop 1, it appeared that the number of open IRs plays a crucial role in establishing the managers\u2019 perception of quality (e.g., when a BR receives many IRs, or IRs for a BR are often reopened, that BR may be perceived as low-quality). However, IRs go through a triaging process to decide if (and to what extent) they will be tested (either with additional tests or during regression testing). We wanted to study such triaging process as it directly impacts the number of open/closed IRs and, in turn, the perceived quality of a BR. O4. Evaluate quality with respect to functional vs. non-functional BR features, such as performance. During Workshop 1, participants agreed that an area in which TestOrg should improve is testing of non-functional requirements. On the other hand, in Workshop 2, developers pointed out that non-functional testing is dif\ufb01cult. Our objective is to understand to what extent non-functional aspects contribute to quality in\ufb02ation. 
When conducting empirical investigations to address O1 and O2, it became apparent that although traceability links between artifacts (i.e., BR, IR, and test cases) were assumed to exist, this was not necessarily the case. 3.2. Development Lifecycle at COMPANY A simpli\ufb01ed version of COMPANY\u2019s development work\ufb02ow is reported in Figure 1. The work\ufb02ow is based on decisions taken together by development and 6 \fStudy Speci\ufb01cation Development Release Figure 1: Simpli\ufb01ed version of the SDLC in use at COMPANY. Orange marks indicate TestOrg main decision points in the process. product management and provides a transparent, common understanding of the development status. COMPANY initiates the Study phase to ful\ufb01ll new market needs, meet their R&D goals, or respond to a customer request. By the end of this phase, the stakeholders agree on the scope and resources available for development. In the Speci\ufb01cation phase, the development organizations within COMPANY and the organization impacted by the development2, work on a document specifying a BR, and divide it into Business Sub-Requirements (BSR). The end of this phase is a synchronization point for the interested organizations, including TestOrg, to align with the BR scope and plan. Once the Development phase starts, there is an agreement on the development plans, dependencies, and test plans to be carried out. Inputs to this phase are the BR, the architecture model, general improvements for the area (e.g., mobile payments) and several guidelines (e.g., coding conventions, UX guidelines). Development ends when TestOrg completes internal veri\ufb01cation consisting of unit testing and integration testing. TestOrg follows checklists containing the activities necessary to get the BR and BSR to a done state. The outputs of this phase are a package \ufb01le containing the implemented solution (e.g., a Jar in case of a Java project), the source code, a release note document, and the updated architecture and risk management models. This phase is iterative and feedbackdriven from internal channels\u2014which perform continuous integration, simulation, and laboratory evaluation\u2014and external ones, including customer laboratory evaluations and restricted launches. Each iteration usually lasts two weeks. The end of the Release signals that a feature is ready and can be commercially released. 3.3. Main Development artifacts at COMPANY Organizations within COMPANY handle requirements at different levels of granularity, from Business Opportunities (a high-level customer-centric de\ufb01nition 2The list of organizations impacted by the development of a new requirement is one of the outcome of the study phase. 7 \fTable 1: BR attributes used in this study extracted from COMPANY\u2019s requirements management system. Attribute Type Description name string name of a requirement description text description of the requirements. Contains purpose, DoD and references to sub-requirements type emun Indicates the granularity of a requirement\u2014i.e., task, user story, epic project enum The project the requirement belongs to version integer Existing revisions of the requirement created date Date when the requirement is created modi\ufb01ed date Date when the requirement is modi\ufb01ed creator string Name of the employee who created the requirement status enum Current status of the requirement (released, tested, etc.) 
release enum The targeted release in which the requirement should be included business goal enum The targeted high level business requirement validation enum Context in which the requirement is validated planned phase X date When the requirement is intended to be moved to the next phase X in the development X \u2208[study, speci\ufb01cation, development, release] moved to phase X date When the requirement is actually moved to the next phase X in the development X \u2208[study, speci\ufb01cation, development, release] issue reports list Issue reports currently associated with the requirement reference link Link to the complete document speci\ufb01cation stored in another system) of a solution to a problem or need) to User Stories, derived from BSR, implemented by development teams in Scrum sprints. TestOrg interacts mainly with BRs\u2014i.e., requirements at intermediate granularity\u2014which can be divided into BSR due to size and complexity. BR and BSR are speci\ufb01ed following a template, and their contents vary in length (on average between 20 and 40 pages). These speci\ufb01cations include: i) General Information (e.g., scope, terminology). ii) Output from the Study phase (including recommendation for further studies). iii) Technical solution description which contains the requirements for the technical use case implementation. iv) A Glossary of terms used in the document. v) References used in the documents. vi) Changelog and revision information. The speci\ufb01cation documents are stored in a repository that allows tracking of changes. A web-based project management tool tracks BR and BSR status, responsible team, and other metadata. Table 1 shows the main BR attributes tracked using the tool. 8 \fTable 2: IR attributes used in this study extracted from COMPANY\u2019s issue tracker system. Attribute Type Description name string Name given to the IR content string Content of the IR priority enum Priority of the IR according to the submitter registered date When the IR is created assigned date When the IR is assigned to a team answered date When a \ufb01x for the IR is proposed completed date When the work to address the IR is completed product string Product experiencing the fault market string Market reference in which the product is experiencing the fault issuing BR string Name of the BR related to this IR characteristic enum Quality characteristic describing the IR (e.g., functional, reliability) hot bool Whether the IR requires immediate attention duplicate bool Whether the IR is a duplicate of an existing one is child bool Whether the IR is in a is-a relationship with another IR parent string Reference to the parent IR (iff is child is True) rejected bool Whether the IR will be addressed or not observation text Free text (e.g., stack traces, steps to reproduce) Another type of artifact involved in the development process at COMPANY is Issue Report (IR). IRs are defects reported before release with varying granularity and can impact several BRs and BSRs. COMPANY uses a taxonomy of eight software characteristics associated with an IR. These range from functional suitability (e.g., functional completeness or correctness) to usability and maintainability. The IR lifecycle is handled using a web-based issue-tracking tool. Table 2 show the main IR attributes tracked using the tool. Testing activities occur at different levels of abstraction. 
Manual or semiautomated testing is usually performed for BRs that impact several organizations responsible for different DBS solutions within COMPANY. Manual tests are managed and reported in a separate web-based application. Automated tests\u2014written in different programming languages and version-controlled in a repository\u2014are also present and implemented to verify (parts of) BRs and BSRs. 4. The Role of Traceability in Investigating Software Quality BTH had access to BRs, source code, test results, and IRs. Given such a rich set of data, we set out to map artifacts (and their quality) developed early in the 9 \fConceptualization Workshops Literature study Design Artefact analysis Results presentation Feedback + Interviews Validation Project stopped before empirical validation Figure 2: Overview of design science approach and activities for each phase. Development phase to later ones. In this section, we present the approach that was initially taken to identify such traces and the challenges that arose during this work. 4.1. Research Methodology During this research collaboration with COMPANY, we followed the design science paradigm [24] as presented in Figure 2. After establishing the research objectives and familiarizing with the existing literature, the researchers at BTH started the solution design activities. Initially, the researchers needed to familiarize themselves with the artifacts, domain-speci\ufb01c language, information systems, and processes at COMPANY. To that end, BTH obtained access to the infrastructure and worked in collaboration with TestOrg3 to clarify uncertainties and be onboarded on the internal processes and systems in use within the organization. Moreover, BTH researchers and TestOrg members had more structured meetings during which the former presented their current understanding of the problem, based on the analysis of the artifact, and proposed alternative ways forward and possible solutions. TestOrg gave feedback on the results pointing out, for example, wrong assumptions the researchers made. The researchers followed up with 3One of the researchers spent approximately 20 hours/month at the company of\ufb01ce in early 2020 before work-from-home was established due to the pandemic. 10 \fTable 3: Metrics used to characterize Troublesome BR. In the table, phase \u2208[study, speci\ufb01cation, implementation, release]. Name De\ufb01nition Monthly IR for BR The number of functional IR associated to a speci\ufb01c BR averaged over a monthly period. Release IR for BR The number of functional IR associated to a speci\ufb01c BR averaged over release. Time BR spent in phase Number of days a BR stayed in a speci\ufb01c phase as indicated by its status. ad hoc interviews about speci\ufb01c topics when necessary. After several iterations between Conceptualization and Design, BTH and TestOrg could not produce a solution\u2014i.e., an intervention to reveal where quality in\ufb02ation was taking place\u2014 which could be Validated due to the lack of traceability between artifacts. 4.2. Provisional Design Solution The initial design goal was a set of guidelines to support TestOrg in identifying issues with quality in\ufb02ation, inform them about test cases that needed improvement, and suggest BRs that need to be better tested or re\ufb01ned (i.e., reworded, simpli\ufb01ed, re-scoped). 
Furthermore, we aimed to create automated support through a recommender system, based on such guidelines, which could be integrated into the TestOrg continuous integration environment and reveal \u201cquality in\ufb02ated\u201d BR and BSR. Based on the initial conceptualization and discussion with TestOrg, we hypothesize that writing automated tests can be more challenging for some BRs than for others (Objective O1). In the \ufb01rst design iteration (left pane of Figure 3), we characterize such Troublesome BRs using several metrics mined from the COMPANY artifact repositories (see Table 3). In particular, we considered the number of IRs associated with a BR over time (e.g., in a release), the amount of time a BR spent in the different phases of the development (e.g., as reported in Figure 4), and the difference between the planned and actual time for a BR to advance to the next phase. When de\ufb01ning the Troublesome BR metric, we used the existing traceability link between IR and BR available in the issue tracking tool (marked with 1 \u20ddin Figure 3). In informal interviews, the researchers presented and discussed their results together with TestOrg and got early feedback that helped better conceptualize the solution design. 11 \f1st iteration 2nd iteration IR BR History Troublesome BR 1 Test suite quality IR-\ufb01xing commit Test cases 2 3 4 Metrics Traceability No traceability Figure 3: Traceability (and lack thereof) between artefacts used for the proposed design. For the second iteration (right pane of Figure 3), we needed to de\ufb01ne the quality of the test cases for a BR to obtain a metric (i.e., Test suite quality) which we could then correlate with Troublesome BR. We discussed several ways of establishing the necessary traceability links between BRs and test cases. The more immediate one\u2014a direct bidirectional link between BR and test cases is unavailable (marked with 2 \u20ddin Figure 3). In their development process, COMPANY does not enforce traceability between high-level requirements, such as BR, and low-level code artifacts, such as test cases. Next, since we already had a traceability link between IR and BR, we investigated downstream (i.e., IR-to-source code) and upstream (i.e., source code-to-IR) traceability links to connect IR and test cases via IR-\ufb01xing commits (marked with 3 \u20ddin Figure 3). From a commit, it is possible to establish a link to the test cases (marked with 4 \u20ddin Figure 3)\u2014e.g., using the approach proposed in [25]. However, the \ufb01eld that explicitly connects IR and commit in the issue-tracking system is barely used (approximately 10% of IRs contains a reference to a commit). We then looked at the upstream traceability between test cases and IRs, using information from the IR-\ufb01xing commit (the other direction of the arrow marked as 3 \u20ddin Figure 3). Development organizations within COMPANY are required to use a structured commit template. The template has a \ufb01eld in which the developer can indicate, among other things, whether the commit is part of a \ufb01x. The developer can do this by including the id of the artifact describing the issue, such as an IR. From the source code version control system, we mined patches and associated 12 \fTable 4: Goal-Question-Metrics for the study of quality in\ufb02ation at COMPANY. Goal (based on O1 and O2) Question Metrics Answered? 
Purpose: Characterize the Issue: spread of Object: quality in\ufb02ation Viewpoint: QA managers at Context: COMPANY How widespread is quality in\ufb02ation? Quality in\ufb02ated TR All TR No When is there a mismatch between QA effort and quality observed for a BR? QA effort for TR Observed Quality for TR > Threshold No What is the QA effort dedicated to a BR? Test smells Test suite effectiveness Test suite time to complete Defect density Defect age No, due to lack of traceability Why are some BR perceived to be more troublesome than others? Monthly IR for BR Release IR for BR Time BR spent in phase Yes discussions taking place during the same timeframe during which BRs were implemented and IRs were addressed (i.e., 2016\u2013mid 2020). We parsed the commit messages looking for \ufb01xes mentioning IRs and their id. However, this automated approach returned a low number of hits. Upon manual inspection of 50 random commit messages, it appeared that the commit template is mostly \ufb01lled in by automatic tools (e.g., static code analyzers) or used for tracing refactoring to code smells (e.g., from SonarQube) which are outside the scope of this study. In summary, Table 4 show how the goal of the study relates to the metrics using the GQM framework [26]. In particular, we de\ufb01ned troublesome BRs but could not establish the QA effort associated with them due to lack of traceability. In turn, we could not answer the remaining questions to ful\ufb01ll our original goal. 0 200 400 600 800 1000 1200 1400 Days study specification implementation release Phase Figure 4: BRs times in the different SDLC phases. 13 \fLack of Traceability Consequences Integration issues Lack of ownership Redundant work Complex defect management Inef\ufb01cient resource allocation Causes Wrong feature scoping Outdated legacy system Tools Interoperability Fragmented information Human Perceived value Team norm Lack of resource Integration of different teams Components integration Figure 5: Causes and consequences of lack of traceability. 5. Discussion In this section, we provide the lessons learned from our failed attempt at studying (and eventually addressing) quality in\ufb02ation at COMPANY due to lack of traceability and discuss the implications of these results for research in this \ufb01eld. We summarize our considerations about the causes and consequences of lack of traceability in Figure 5. The lessons learned and considerations presented in this section are derived from our own experience analyzing the artifacts used within COMPANY and our interactions with practitioners in TestOrg during workshops, feedback, and interview sessions. The takeaways are complemented with references to the literature that can provide further insights. 5.1. Lessons learned We report the lessons learned from practitioner and researcher perspectives. For the latter, we include takeaways that should be considered when studying traceability in complex industry settings. 14 \fFor practitioners. In the context of a company, the traceability links are maintained for a purpose that can be different from the one of a research project. We recommend that establishing and updating traceability links, at least the one that matters for the company, should be treated as a backlog item and tracked like any other items in the development process. Moreover, the explicit lack of traceability needs to be treated as a technical debt item and included in Sprints aimed at paying it back. 
Toledo et al [27] show that the lack of traceability between artifacts leads to architectural technical debt. They show how, in the case of microservice architectures, maintaining data-to-data source traceability is often necessary to ful\ufb01ll regulations and identify services that are not needed anymore. Whereas, also in the area of architecture and design, Charalampidou et al [28] show that it is useful to document traceability links to manage and estimate the cost of paying debt. Similarly, traceability supports managing documentation and requirements debt [29]; conversely, lack thereof is detrimental to maintenance tasks. Considering traceability is important as lack of links, or hard-to-establish links, can be smells for other problems, such as wrong scoping. For example, when a requirement is too large in scope, its implementation is expected to receive several change requests. However, understanding the scope requires a mechanism to trace change requests to requirements. Traceability of artifacts is important from a management perspective. Without it, several overhead costs can be expected due to lack of implementation ownership\u2014i.e., which team is responsible for implementing a functionality (for example, see [30, 31]). This lack of ownership can, in the worst case, lead to the same functionality being implemented several times or for the implementation to be disrupted during integration due to lacking knowledge of the code dependencies [32]. Traceability also helps mitigate failure propagation\u2014i.e., due to defects in the code that endure through the development cycle and potentially reach the customer (for example, see [33, 34]). Aligning tests with requirements to establish coverage metrics is vital, and without this information, it can be unclear if a requirement has been correctly tested or not. Despite defects reaching the customer or not, lingering faults cause additional overhead, delay releases, and result in longer implementation time. Traceability also helps mitigate uncertainties regarding the allocation of resources (e.g., [35]). While changed or added requirements give input to allocate more resources, veri\ufb01cation of said requirements provides grounds for their deallocation. However, without knowing if a development task has been properly 15 \faddressed, such deallocation can be delayed or spent inef\ufb01ciently (for example, see [36, 37, 38]). For researchers. Researchers should validate their assumptions about what is available in terms of traceability when collaborating with a large company. Some activities that are taken for granted in some settings (e.g., open source) are hard to apply in a complex industrial organizational context, such as COMPANY. In such contexts, it is inherently dif\ufb01cult to have an overview of what is available in terms of information, data, and artifacts, and what is not. In the case of TestOrg, although the managers were aware of the traceability between BRs and IRs, the lack of traceability links between the source code and IRs or BRs was not considered since it is outside the scope of their activities. For the researchers, this became clear in discussions with people in more operational roles. When assessing traceability links between different artifacts involve early personnel working day-to-day with such artifacts. For example, when traceability between source code and requirements is needed, developers and business analysts in the company should be involved. 
Takeaway 1 Within the organization, we realised that horizontal traceability (i.e., traceability between artifacts at the same level of abstraction, such as requirement-torequirement) has more value than vertical traceability (e.g., at different levels, such as test cases-to-requirements). This may be the case in large companies, where the different phases of the development cycle\u2014and, therefore, their associated artifacts\u2014are managed by different internal organizations. For the practitioners we interacted with, limited traceability (mostly among BRs, and between BRs and IRs) was enough to perform their daily tasks. To enact a different type of traceability, researchers need a strong use case for the company. The company will have to i) allocate resources to support the researchers in establishing extra traceability links, ii) maintain the traceability links (e.g., for further evaluation by the researchers). 16 \fWe recommend gaining an early understanding of the organizational structure, and being aware that the amount of traceability information available may be in\ufb02uenced by such structure. Takeaway 2 Take-away 1 and Take-away 2 are related to research on Conway\u2019s law (e.g., [39, 40]) and the impact that traceability has on organizational vs. architectural structure. Moreover, research on communication of traceability highlights how organizational structure in\ufb02uences the way tasks are allocated and who within the company possesses the necessary knowledge about artifacts of interest for the researcher [41]. In the context of a large organization, different artifacts are tracked at different levels of detail. In our collaboration with COMPANY, we realized that the system used to track BRs was not populated with much information (e.g., many \ufb01elds were left blank or \ufb01lled with boilerplate values). The organizational norm in case BR details are needed is to refer to the BR speci\ufb01cation document (through a link in the tracking system) stored in a separate repository. This limited the automatic extraction of information from BRs, as the two systems are not designed to communicate autonomously. When establishing traceability, consider that in complex settings different information about the same artifact are likely to be scattered across systems. Regardless of the approach, consolidating such information requires knowledge of the company\u2019s norms. Takeaway 3 Several researchers tried to address the challenge of scattered information [42, 43, 44]. Promising approaches have combined traditional information retrieval with model-driven engineering [45] and (semi)supervised machine learning [46]. TestOrg was aware of the problem with dispersed information. They led, within COMPANY, an initiative to centralize test cases and BR tracking for several purposes, including improving their traceability. By the end of such initiative, COMPANY will use a single tracking system for all these artifacts. 17 \fDepending on the structure and location of the information required to establish traceability, a fully-automated solution may not be feasible. A solution for creating traceability links should not start by considering full-\ufb02edged automation but by accommodating human intervention. Takeaway 4 Tool support is fundamental when establishing, using, and maintaining traceability links. Organizations developing complex systems deal with artifacts at different levels of abstraction, details, and formats, which entail using different artifacttracking tools. 
Moreover, practitioners choose, con\ufb01gure, and use tools according to the level of traceability necessary for their tasks\u2014for them interoperability may not be a decisive criterion for selecting a tool. For example, COMPANY uses an homebrew system for tracking requirements speci\ufb01cation documents and IRs, a third-party commercial solution (which reached end-of-life in early 2019, but it is still maintained for legacy reasons) for tracking BRs, and different open-source systems for source code version control and code reviews. Some offer APIs, but none offered out-of-the-box integration towards any of the other systems. Fragmentation in terms of tools within a company developing complex systems is to be expected. Therefore, effort is required to achieve interoperability between systems when establishing traceability links. Takeaway 5 Addressing tools fragmentation is signi\ufb01cant for safety-critical software development [47] and could be mitigated, for instance, through modelling [48]. In the case of COMPANY, fragmentation (and the consequent lack of traceability) derived from a tradeoff between other system properties deemed more desirable. During our feedback sessions with TestOrg, it became apparent that the homebrew solution was selected to have control over i) the security of the speci\ufb01cations as they contain information pivotal for COMPANY\u2019s competitive advantage, and ii) the redundancy of the storage to avoid costly data loss. 5.2. Considerations on traceability in complex industrial settings The results of this study provide insights into the state-of-practice, lessons learned, and considerations for industrial practitioners and academics to re\ufb02ect 18 \fupon regarding traceability. The experience reported in this paper highlights a dichotomous and puzzling situation that was surprising for the researchers involved in this study. The academic literature on traceability and alignment supports the idea that traceability among software artifacts is needed to manage complexity [18]. Despite these claims, COMPANY is producing systems in the order of millions of lines of code, with thousands of developers, yet traceability links from high-level requirements to source code and tests are not readily available. Paradoxically, traceability within COMPANY may not be achieved because of the system\u2019s complexity, while traceability could mitigate said complexity. What is going on in this case? The results may seem baf\ufb02ing, but in realworld settings\u2014when software grows and the organization along with it\u2014several factors in\ufb02uence the evolution of a product, its development process, and the environment. We did not study the root cause of the current situation, but several hypotheses can be formulated. First, the system is considered a system of systems (SoS), developed in a heterogeneous environment of processes, tools, and third-party components. Without a strong culture to tie this development together with an emphasis on traceability, it is only natural that its amount and consistency will vary. Integration and SoS development are still considered possible, as such is managed at a higher level of abstraction, primarily considering the interfaces of the underlying components, despite the lack of end-to-end traceability. Second, achieving traceability in a large system is resource-intensive, and human commitment also comes into play. In a study by Borg et al. 
about change requests and change management, it was observed that teams are hesitant to even touch upon other teams\u2019 code [19]. In larger silo organizations this situation is exacerbated since silos may assume that other individuals or teams keep traceability up to date, especially when teams suffer from resource constraints. Take, as an example, a scenario in which Team A is adapting a system core component to conform to a BR. As BRs are high level, they likely require changes to surrounding components. The developers in Team A make changes, but due to the lack of insight or knowledge about the Team B artifacts, not all artifacts associated with the modi\ufb01ed components are updated. This causes a slight misalignment between the requirement traces and the system under development. After many such changes, traceability will naturally degrade over time, leading to a situation where traces will have to be maintained or reverse engineered. However, since reverse engineering is expensive, process solutions that enforce localized knowledge\u2014i.e., silo organizations are instead encouraged. Finally, there can be a legacy component to the challenge\u2014i.e., that compo19 \fnents within the SoS are decaying. These components may have had extensive traceability information, but as Agile practices are leaner in terms of documentation, such traces are no longer maintained. Hence, a situation that is perceived as a product of a new way of working with software in which artifacts of long-term value\u2014associated with high cost\u2014are de-prioritised in favour of short-term value gains and other forms of light-weight documentation that ful\ufb01ll similar purposes. Furthermore, we observed that traceability between high-level requirements, source code, and test suites is not maintained at COMPANY. A question remains, how can COMPANY continue producing high-quality, large-scale, and complex software on time while keeping their customers satis\ufb01ed? As discussed in this section, we believe it is due to the evolution of the development organization and its ability to adapt to circumstances in which traceability is not always available. Hence, instead of relying on traceability information, workarounds and alternate processes, coupled with organizational structures and architectural design decisions, provide COMPANY a cohesive understanding of the system. Hence, although the academic literature highlights the need for traceability for understanding of how the system \ufb01ts together [7], the situation we observed at COMPANY indicates that such requirements may have been overstated. 6." + }, + { + "url": "http://arxiv.org/abs/1907.09807v1", + "title": "On Using Machine Learning to Identify Knowledge in API Reference Documentation", + "abstract": "Using API reference documentation like JavaDoc is an integral part of\nsoftware development. Previous research introduced a grounded taxonomy that\norganizes API documentation knowledge in 12 types, including knowledge about\nthe Functionality, Structure, and Quality of an API. We study how well modern\ntext classification approaches can automatically identify documentation\ncontaining specific knowledge types. We compared conventional machine learning\n(k-NN and SVM) and deep learning approaches trained on manually annotated Java\nand .NET API documentation (n = 5,574). When classifying the knowledge types\nindividually (i.e., multiple binary classifiers) the best AUPRC was up to 87%.\nThe deep learning and SVM classifiers seem complementary. 
For four knowledge\ntypes (Concept, Control, Pattern, and Non-Information), SVM clearly outperforms\ndeep learning which, on the other hand, is more accurate for identifying the\nremaining types. When considering multiple knowledge types at once (i.e.,\nmulti-label classification) deep learning outperforms na\\\"ive baselines and\ntraditional machine learning achieving a MacroAUC up to 79%. We also compared\nclassifiers using embeddings pre-trained on generic text corpora and\nStackOverflow but did not observe significant improvements. Finally, to assess\nthe generalizability of the classifiers, we re-tested them on a different,\nunseen Python documentation dataset. Classifiers for Functionality, Concept,\nPurpose, Pattern, and Directive seem to generalize from Java and .NET to Python\ndocumentation. The accuracy related to the remaining types seems API-specific.\nWe discuss our results and how they inform the development of tools for\nsupporting developers sharing and accessing API knowledge. Published article:\nhttps://doi.org/10.1145/3338906.3338943", + "authors": "Davide Fucci, Alireza Mollaalizadehbahnemiri, Walid Maalej", + "published": "2019-07-23", + "updated": "2019-07-23", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "main_content": "Introduction Software developers reuse software libraries and frameworks through Application Programming Interfaces (APIs). They often rely on reference documentation to identify which API elements are relevant for the task at hand, how the API can be instantiated, con\ufb01gured, and combined [1]. Compared to other knowledge sources, such as tutorials and Q&A portals, reference documentation like JavaDoc and PyDoc are considered the of\ufb01cial API technical documentation. They provide detailed and fundamental information about API elements, components, operations, and structures [2, 3]. arXiv:1907.09807v1 [cs.SE] 23 Jul 2019 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT Directive Figure 1: A reference documentation page in the JDK API annotated with the knowledge types it contains. As API documentation can be thousands of pages long [4, 5], accessing relevant knowledge can be tedious and timeconsuming [1]. Moreover, the information necessary to accomplish a task can be scattered across the documentation pages of multiple elements, such as classes, methods, and properties. Thus, developers try to use other sources to ful\ufb01ll their information needs. For example, although the Java Development Kit (JDK) API documentation contains more than 7,000 pages, as of early 2019, there are more than 3 million StackOver\ufb02ow posts tagged as java. Over the last decade, software engineering researchers studied what information developers need when consulting API documentation [3, 6, 7]. One line of research focuses on automatically matching information needs with the types of knowledge available in the documentation. Maalej and Robillard [3] took a \ufb01rst step in this direction by developing an empirically-validated taxonomy of 12 knowledge types found within API reference documentation. A single documentation page can include several knowledge types (Figure 1). Functionality and Directive are particular types of knowledge needed to accomplish a development task, whereas the Non-information type contains only uninformative boilerplate text [3]. 
Maalej and Robillard argue that such knowledge categorization allows for a) understanding and improving the documentation quality and b) satisfying developers\u2019 information needs. [3] The research community has shown interest in studying speci\ufb01c knowledge types contained in API reference documentation. For example, Montperrus et al. [8] and Seid et al. [9] studied Directive to prevent the violation of API usage constraints. Robillard and Chhetri [5] \ufb01ltered Non-information when recommending APIs to developers. However, these automated approaches are based either on linguistic features engineering [5] or on syntactic patterns [8]. This work investigates how well modern text classi\ufb01cation approaches can automatically identify the knowledge types suggested by Maalej and Robillard in API documentation. Based on a dataset of 5,574 labelled Java and .NET documentation, we trained, tested, and compared conventional machine learning approaches\u2014i.e., k-Nearest Neighbors (k-NN) and Support Vector Machines (SVM)\u2014as well as deep learning approaches\u2014i.e., recurrent neural network (RNN) with a Long Short-Term Memory (LSTM) layer. The RNN learns features from a semantic representation of general purpose text (i.e., embeddings). Hence, we studied how our results are impacted by training the network using software development-speci\ufb01c corpora from StackOver\ufb02ow as opposed to a general purpose one. Finally, we studied the generalizability of the classi\ufb01ers to an unseen dataset obtained from the Python standard library. This paper makes three contributions. First, we present a detailed classi\ufb01cation benchmark for API documentation. The settings include different machine learning approaches and con\ufb01gurations, different word embeddings for the RNN, different datasets for different APIs, as well as various evaluation metrics. Researchers and tool vendors can use the benchmark, for example, to select and optimize a speci\ufb01c classi\ufb01er for a speci\ufb01c co\ufb01guration of API and knowledge types. Second, as we share the code and data of this study,1 several top-performing classi\ufb01ers (e.g., AUPRC \u226580%) already have practical relevance. Third, our \ufb01ndings and discussion of related work provide insights to researchers, tool vendors, and practitioners on how machine learning can help better organize, access, and share knowledge about API. The rest of the paper is organized as follows. Section 2 describes our research settings, Section 3 presents the con\ufb01gurations of the classi\ufb01ers, and Section 4 reports their performance We discuss related work in Section 5 and the implication of our results in Section 6. Finally, Section 7 concludes the paper. 1https://zenodo.org/badge/latestdoi/194706952 2 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT 2 Research Settings This section introduces the research questions, method, and data. 2.1 Research Questions and Method Maalej and Robillard [3] proposed an empirically-validated taxonomy of 12 knowledge types based on grounded theory and systematic content analysis (17 experienced coders, 279 person-hours effort). Table 1 reports the identi\ufb01ed knowledge types which represent the basis for this work. Our primary goal is to study how well simple machine learning for text classi\ufb01cation, without additional feature engineering or advanced natural language processing (NLP) techniques, can identify these knowledge types. 
That is, our classi\ufb01ers label a document with one or more knowledge types. Table 1: Twelve knowledge types included in reference documentation (adapted from Maalej and Robillard [3]). Knowledge type Brief description Functionality Describes the capabilities of the API, and what happens when it is used. Concept Explains terms used to describe the API behavior or the API implementation. Directive Describe what the user is allowed (not allowed) to do with the API. Purpose Explains the rationale for providing the API or for a design decision. Quality Describes non-functional attributes of the API, including its implementation. Control Describes how the API manages the control-\ufb02ow and sequence of calls. Structure Describes the internal organization of API elements including their relationships. Pattern Explains how to get speci\ufb01c results using the API. Example Provides examples about the API usage. Environment Describes the API usage environment. Reference Pointers to external documents. Non-information Uninformative, boilerplate text. There are two main text classi\ufb01cation approaches which we study in this paper. Traditional approaches usually learn the classes from the occurrences of certain keywords or phrases in the training set. More computational intensive approaches, often referred to as deep learning, use the semantics of the keywords\u2014i.e., the context of the keyword occurrences [10]. For traditional approaches, we study two algorithms frequently used for text classi\ufb01cation, k-NN and SVM. For deep learning, we used RNN with an LSTM layer, which is particularly effective for text categorization problems [11]. This architecture is recommended over, for example, Convolutional Neural Network (CNN). While the latter is more suited for image recognition [12], RNN with LSTM handles more ef\ufb01ciently the dependencies between features [11]. We also compare these classi\ufb01ers to na\u00a8 \u0131ve baselines. The task tackled in this study is to assign knowledge types to an API document. As the document can contain more than one knowledge type, this task is modelled as a multiple binary classi\ufb01cation problem consisting of independently train one binary classi\ufb01er for each knowledge type. Another approach is to train a multi-label classi\ufb01er\u2014i.e., a classi\ufb01er that outputs a set of knowledge types rather than a single one. We analyze and report the results for both approaches when answering the following research question. RQ1. How well can text-based classi\ufb01ers identify knowledge types in API reference documentation? In particular, can deep learning improve over traditional approaches? For text classi\ufb01cation tasks, the input layer of an RNN usually consists of embeddings trained on large unlabeled textual corpora necessary to capture rich semantic features [13]. Pre-trained embeddings are available and can be easily \u201cplugged\u201d in the network without further effort. However, while these embeddings save computational time and well represent common language tasks, they can miss software engineering or API-speci\ufb01c semantics. This motivates our second research question. 3 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT RQ2. Do software development-speci\ufb01c text embeddings improve classi\ufb01cation results compared to general purpose ones? Finally, a common question for machine learning evaluation is whether a model trained on a certain dataset generalizes to other data. 
The original dataset includes documentation of the standard Java and .NET libraries [3]. Since we aim to assess the generalizability of our approach to API reference documentation written in a different style, we manually annotated a new dataset sampled from the Python standard library documentation. We used this dataset as an additional test set to report our classifiers' performance.

RQ3. Can documentation classification based on knowledge types be generalized across APIs?

We assess models based on 10-fold cross-validation using 10% of the dataset as test set. When comparing individual knowledge type classifiers, we report the Area Under the Precision-Recall Curve (AUPRC). Precision-recall curves are a common metric to evaluate binary classification and are obtained by plotting precision and recall values at different probability thresholds [14]. In particular, they are used to evaluate machine learning models trained on imbalanced datasets [14]. Therefore, AUPRC is a summary measure of performance irrespective of a particular threshold. When comparing classifiers for multiple knowledge types, we report performance according to two types of metrics, item-based and label-based. The item-based metrics are a) Hamming Loss, namely the ratio of wrongly classified labels to the total number of labels (its best value is zero), and b) Subset Accuracy, namely the percentage of exact matches between the predicted and the actual labelset. The label-based metrics are precision, recall, F1-measure (Formula 1), and Area Under the Receiver Operating Characteristic (ROC) Curve (AUC).

F1 = (2 x TruePositives) / (2 x TruePositives + FalsePositives + FalseNegatives)   (1)

The ROC curve is created by plotting recall against the false positive rate (FPR, Equation 2) at different probability thresholds. Accordingly, AUC does not depend on a particular threshold [15]. To calculate the values of True Positives, False Positives, and False Negatives we used 0.5 as probability threshold [10].

FPR = 1 - TrueNegatives / (TrueNegatives + FalsePositives)   (2)

The label-based metrics are macro-averaged. Macro-averaging applies the metric to the binary partition of each predicted label and then averages the results, i.e., labels have equal contribution to the final result. In contrast, micro-averaging first aggregates the individual metric components (i.e., true positives, false positives, true negatives, and false negatives) of each label and then averages them. Therefore, micro-averaging is biased toward the majority classes and should be avoided when evaluating unbalanced datasets [16]. We compare the results of the classifiers to naive baselines, MF1, MF2, and RAND. The first two always assign the first (respectively, one of the first two) most frequent labels to each document, whereas the latter assigns a random label.

Table 2: Overview of the CADO dataset.
.NET: 2,782 documents; max. 2,874 words; mean 89 words; vocabulary size 10,630.
JDK: 2,792 documents; max. 2,099 words; mean 86 words; vocabulary size 10,763.
Total: 5,574 documents; max. 2,874 words; mean 87 words; vocabulary size 17,758.

2.2 Research Data
We use the CADO dataset created by Maalej and Robillard [3] as the result of their content analysis of the JDK 6 and .NET 4.0 API reference documentation. CADO contains 5,574 observations. The columns include the name of the API element (e.g., a class, a method, or a property), its documentation text, and 12 binary values indicating the presence (or absence) of the corresponding knowledge type. Table 2 summarizes the dataset textual properties.
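The item-based and label-based metrics described above map directly onto scikit-learn utilities. The following sketch assumes a binary indicator matrix y_true and a probability matrix y_score for the 12 knowledge types (both hypothetical placeholders) and shows one way to compute them; it is an illustration, not the evaluation code used in the paper, and average_precision_score is used as the usual approximation of AUPRC.

```python
import numpy as np
from sklearn.metrics import (
    hamming_loss, accuracy_score, precision_recall_fscore_support,
    roc_auc_score, average_precision_score,
)

KNOWLEDGE_TYPES = [
    "Functionality", "Concept", "Directive", "Purpose", "Quality", "Control",
    "Structure", "Pattern", "Example", "Environment", "Reference", "Non-information",
]

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    """y_true: (n_docs, 12) binary matrix; y_score: (n_docs, 12) probabilities."""
    y_pred = (y_score >= threshold).astype(int)

    # Item-based metrics.
    print("Hamming loss   :", hamming_loss(y_true, y_pred))
    print("Subset accuracy:", accuracy_score(y_true, y_pred))  # exact labelset match

    # Label-based metrics, macro-averaged so every knowledge type counts equally.
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    print(f"MacroPrecision={p:.2f} MacroRecall={r:.2f} MacroF1={f1:.2f}")
    # Assumes each label occurs at least once in y_true, otherwise AUC is undefined.
    print("MacroAUC:", roc_auc_score(y_true, y_score, average="macro"))

    # Threshold-independent per-label summary (area under the PR curve).
    for i, name in enumerate(KNOWLEDGE_TYPES):
        auprc = average_precision_score(y_true[:, i], y_score[:, i])
        print(f"AUPRC[{name}] = {auprc:.2f}")
```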
4 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT The most frequent knowledge types are Functionality and Non-information, whereas Quality and Environment are the least frequent. We did not merge some of the knowledge types as Maalej and Robillard reported no signi\ufb01cant evidence of their co-occurence [3]. The majority of the documents (90.5%) contains one to \ufb01ve of the 12 knowledge types. We use the SCUMBLE [17] score (\u2208[0, 1]) to report the level of unbalancedness. For a given label, a high SCUMBLE score represents a large difference between the frequencies of all the other co-occurring labels. In general, datasets with high scores are problematic for classi\ufb01cation tasks [18]. However, for datasets characterized by low SCUMBLE score, resampling can reduce unbalancedness [18]. CADO mean SCUMBLE score is 0.11. We applied random underand over-sampling to 90% of the dataset (i.e., the training set). We did not resample the test set (10% of the dataset) to avoid sampling bias. For resampling, we removed 30% of the documents containing Functionality and Non-Information in their labelset and duplicated 50% of the documents containing Environment and Quality. The thresholds were obtained empirically based on the SCUMBLE score. After resampling, the training and test sets contain 3,876 and 430 observations respectively. Figure 2 presents the label frequencies in the dataset we used to train the models after re-sampling. We prepared a new PYTHON dataset consisting of 100 API documentation pages (i.e., modules, types, attributes, and methods) from the Python 2.7 standard library.2 We selected the Python standard library since its code is organized differently than Java or .NET as it makes extensive use of modules in which functions, classes, and variables are de\ufb01ned. The Python programming paradigm is more functional than Java and .NET which instead follow an object-oriented paradigm. Python is dynamically typed, and its reference documentation tends to focus on functions, whereas types documentation is embeddable in the source code (e.g., through Docstrings). Finally, its development and documentation are driven by an open source, non-pro\ufb01t community (the Python Software Foundation) whereas Java and .NET are owned by corporations. We followed the sampling strategy suggested by Maalej and Robillard [3]\u2014i.e., strati\ufb01ed random sampling. We \ufb01rst created strata for each of the base modules and then randomly sampled API documentation from each stratum proportionally to their frequencies. 2https://docs.python.org/2.7/library/ Figure 2: Knowledge types distribution in the CADO dataset after resampling. 5 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT Figure 3: Knowledge types distribution in the PYTHON dataset. Two Ph.D. students in software engineering, accustomed to work with Python, manually labelled the knowledge types in each document. For this task, we provided them the same guidelines from Maalej and Robillard3 with small adaptations, such as providing examples using the Python programming language. The agreement on the label set was 14%\u2014i.e., 14 out of the 100 examples were labeled with the exact same set of knowledge types. The overall agreement was 75%\u2014i.e., of the 1200 labels (100 examples \u00d7 12 labels), 300 were con\ufb02icting. Two of the authors addressed the con\ufb02icts and created the \ufb01nal dataset. 
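The two agreement figures reported above can be reproduced from the raw annotation matrices with a few lines of NumPy. The sketch below assumes each annotator's labels are stored as a 100 x 12 binary matrix, a representation we introduce here purely for illustration.

```python
import numpy as np

def annotation_agreement(ann_a: np.ndarray, ann_b: np.ndarray):
    """ann_a, ann_b: (n_docs, n_labels) binary matrices from two annotators.

    Returns (labelset_agreement, label_agreement): the fraction of documents
    whose full labelset matches exactly, and the fraction of individual
    (document, label) decisions on which the annotators agree.
    """
    labelset_agreement = np.mean(np.all(ann_a == ann_b, axis=1))
    label_agreement = np.mean(ann_a == ann_b)
    return labelset_agreement, label_agreement

# With 100 documents and 12 knowledge types, 14 exact labelset matches and
# 300 conflicting labels correspond to values of 0.14 and 0.75, respectively.
```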
Figure 3 shows the distribution of knowledge types in the PYTHON dataset. Functionality is the majority label. However, as PYTHON represents an additional test set (i.e., no examples from this dataset are used to train the classi\ufb01ers), we did not resample it to avoid biased results. Table 3: Summary of the corpora used to train the GloVe embedding. ID Name Corpus description #docs #vocabulary CC Common Crawl General purpose, high-quality text crawled from Internet pages 2.2M \u223c220.000 CCOTF Common Crawl on-the-\ufb02y Common Crawl where missing words are learned from CADO 2.2M \u223c220.000 SO StackOver\ufb02ow StackOver\ufb02ow questions and answers 20M \u223c400.000 SOAPI StackOver\ufb02ow Java and .NET posts StackOver\ufb02ow questions and answers tagged as java or .net 4M \u223c100.000 For both datasets, we performed several, simple operations to clean and prepare the textual data. We lower-cased, tokenized, and applied stop-words removal to the API documentation text. Then, we transformed terms in an orderpreserving one-hot vectors. For the deep learning classi\ufb01ers in our benchmark, we train GloVe [19] embeddings based on four large corpora, summarized in Table 3. The Common Crawl (CC) is a pre-trained embedding downloaded in March 2018.4 It includes 840B tokens and a vocabulary of 2.2M words. The corpus contains high-quality, general-purpose text crawled from the Internet. However, the CC corpus is missing domain-speci\ufb01c terms present in the CADO dataset. Accordingly, in the CCOTF embeddings, the missing words from the CC corpus are trained on-the-\ufb02y [20]. Finally, we obtained a completely domain-speci\ufb01c representation of the input by training embeddings on two additional corpora, StackOver\ufb02ow (SO) and StackOver\ufb02ow API (SOAPI). The former includes 20 million posts, while the latter includes 4 million posts tagged as java or .net. 3 Classi\ufb01ers Con\ufb01guration This section reports information about the con\ufb01guration of both machine learning and deep learning classi\ufb01ers used in this study. 3.1 Traditional Machine Learning The machine learning approaches we selected for our classi\ufb01cation task are SVM [21] and k-NN [22] as well as their adaptations to multi-label problems, namely One-vs-Rest SVM (OvRSVM) and Multi Label k-NN (ML-kNN). We use unigrams and bigrams extracted from the CADO dataset as their input features as n-gram language models are easy to 3https://cado.informatik.uni-hamburg.de 4https://commoncrawl.org 6 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT compute and use. Moreover, they have been used in studies where machine learning and natural language processing are applied to software engineering contexts [23, 24]. SVM is one of the most investigated approaches for statistical document classi\ufb01cation and it is considered stateof-the-art [25, 21]. Moreover, it showed good results in software engineering-speci\ufb01c text classi\ufb01cation problems (e.g., [26, 27]). SVM \ufb01nds the hyper-plane maximizing the margin between two classes in the feature space, and it can learn and generalize high-dimensional features typical for text classi\ufb01cation tasks [21, 25]. When taking into account multiple knowledge types at once, we trained and reported the results of a SVM model adapted to such problem\u2014i.e., OvRSVM using binary relevance [28]. 
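One possible instantiation of this traditional setup with scikit-learn is sketched below: unigram and bigram features feed a one-vs-rest (binary relevance) SVM, with a grid search over the regularization parameter as described next. The tf-idf weighting and the choice of LinearSVC are our assumptions; the paper only specifies n-gram features, SVM, and grid search.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

# One-vs-Rest ("binary relevance") SVM over unigram/bigram features.
ovr_svm = Pipeline([
    ("ngrams", TfidfVectorizer(ngram_range=(1, 2), lowercase=True,
                               stop_words="english")),
    ("clf", OneVsRestClassifier(LinearSVC())),
])

# Hyper-parameter tuning via grid search over the SVM regularization strength.
param_grid = {"clf__estimator__C": [0.01, 0.1, 1, 10]}
search = GridSearchCV(ovr_svm, param_grid, scoring="f1_macro", cv=10)

# docs: list of API documentation strings; labels: (n_docs, 12) binary matrix.
# search.fit(docs, labels)
# predictions = search.predict(docs_test)
```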
OvRSVM considers additional parameters and constraints necessary to solve the optimization problem with several classes and to handle the separation of several hyper-planes [28]. For the SVM classi\ufb01ers, we report the best model after hyper-parameters tuning using GridSearch [29]. k-NN is a widely-used approach in machine learning [30]. It determines the k nearest neighbors of a document using Euclidean distance. Then it assigns the document a label based on the document neighbors using Bayes decision rules [30]. For the multi-label classi\ufb01cation, we use ML-kNN, which outperforms well-established multi-label classi\ufb01ers [31]. 3.2 RNN with LSTM layer Deep learning has recently brought substantial improvements in the \ufb01eld of machine vision and natural language processing (NLP) [10]. The LSTM layer extends RNN capabilities by utilizing several gates and a memory cell in the recurrent module to alleviate the vanishing gradient problem and to handle more ef\ufb01ciently the long-term dependencies between features [11]. tanh ReLU ReLU sigmoid Input LSTM Dense Dense Output Figure 4: Architecture of the RNN used for classi\ufb01cation of the knowledge types. Figure 4 shows the architecture we used in this work. The network is composed of a single LSTM layer, two dense layers, and an output layer. The number of units in the LSTM layer is proportional to dimensions of the vectors used to represent each word. The dense layers contain 128 and 64 units respectively. The number of units in the output layer is the number of knowledge types (i.e., 12 units). The core component of the LSTM layer is a memory cell which stores the information related to the previous analysis steps within the network. At each step of the training, the network predicts the output based on a) the new input, b) the previous state of the other hidden layers of the RNN, and c) and the current state of the memory cell. Accordingly, the role of the gates is to learn how to modify the memory cell to enhance prediction accuracy (see Figure 5). The forget gate (ft) processes the information from the previous hidden state layer (ht\u22121) and the current input (Xt)\u2014 i.e., a representation of the API documentation text. It then decides what information should be discarded or kept from the previous state of the memory cell (Ct\u22121). The input gate (it) is responsible for selecting new information from the input (Xt) that should be stored in the cell state. The third gate is called output gate (ot) and decides which part of the available information in the memory cell should be used to produce the \ufb01nal output (ht). The role of the forget cell in our network is to optimally discard information related to previous knowledge types when they change in the new input. For instance, when the knowledge type in the new input is Directive, the forget gate removes the pieces of information associated with other knowledge types. Consequently, the forget gate reduces the ambiguity of the memory cell when learning individual types. The features associated with the knowledge type in the current input document are moved into the memory cell. In the memory cell, the input gate decides what information should be stored. For example, when the current input contains Directive, its features will be extracted and stored in the memory cell using the input gate. Finally, the output gate selects the most signi\ufb01cant features associated with the Directive type. 
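The architecture of Figure 4 can be written down compactly in Keras. The sketch below follows the layer sizes given in the text (one LSTM layer, dense layers with 128 and 64 ReLU units, and a 12-unit sigmoid output) and the training parameters reported later in this section (Adam, learning rate 0.001, 100 epochs, batch size 32, sigmoid cross-entropy). The vocabulary size, dropout rate, and LSTM width equal to the embedding dimension are placeholders, not values taken from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20_000      # placeholder: depends on the embedding corpus
EMBEDDING_DIM = 300      # GloVe vector length used in the paper
NUM_LABELS = 12          # one output unit per knowledge type

def build_model(embedding_matrix=None):
    """RNN with one LSTM layer and two dense layers, sigmoid multi-label output."""
    emb_init = ("uniform" if embedding_matrix is None
                else keras.initializers.Constant(embedding_matrix))
    model = keras.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM,
                         embeddings_initializer=emb_init,
                         trainable=embedding_matrix is None),
        layers.LSTM(EMBEDDING_DIM, dropout=0.2, recurrent_dropout=0.2),  # rate assumed
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_LABELS, activation="sigmoid"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy",  # sigmoid cross-entropy per label
                  metrics=[keras.metrics.AUC(multi_label=True)])
    return model

# model = build_model()
# model.fit(x_train, y_train, epochs=100, batch_size=32, validation_split=0.1)
```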
The input layer for the RNN consists of word embedding vectors trained using GloVe [19]. GloVe relies on the global occurrences of a word in a corpus by defining a word-to-word co-occurrence matrix. Each value of the matrix contains the probability P of word j appearing in the context of word i, as reported in Equation 3. In particular, X_ij denotes the number of times word j occurs in the context of word i, and X_i denotes the number of times that any word k appears in the context of word i. Namely, GloVe defines a learning function that estimates the probability ratio of the co-occurrence of two target words, i and j, given a context word k [19].

P_ij = P(j|i) = X_ij / X_i,   where X_i = Σ_k X_ik.   (3)

Figure 5: A single LSTM recurrent module containing input (i_t), output (o_t), and forget gates (f_t).

As input vectors with identical length speed up the process of building the embedding layer of the RNN [32], we considered 300 as the maximum vector length and padded shorter vectors with zeroes. Therefore, our input layer is a 2D matrix where each row is a 300-dimensional unit (artificial neuron). The number of units in the input layer depends on the vocabulary size of the corpus used to train the embeddings. When a document is fed to the RNN, the units associated with the document terms will be activated.

The first hidden layer is an LSTM unit which learns textual features of an input document. It is suited to learning long-term dependencies, such as in the case of large API reference documentation text. To prevent over-fitting, we applied a dropout technique to the weights matrix and to the bias vector [33]. The output of the LSTM layer (i.e., a set of features that can be associated with a knowledge type) goes through two fully-connected dense layers. The dense layers provide deep representations of the features extracted by the LSTM layer and enable the network to learn their hierarchical and compositional characteristics. To alleviate feature loss due to the projection of the features from a high-dimensional space to the low-dimensional space of the output layer, the model smoothly reduces the number of units in the dense layers from 128 to 64 [10]. We use the ReLU activation function for the dense layers to prevent over-fitting [10]. The output layer provides the predicted knowledge types using a sigmoid activation function. Hence, the number of units in the output layer is the number of labels that the model learns. As the output of the sigmoid is a probability value between 0 and 1, each neuron in this layer learns to estimate the probability of observing one of the labels. To binarize the predicted probabilities, we used different thresholds according to the different metrics.

We tuned the following network parameters. Epoch is a complete pass (backward and forward) of every sample through the neural network. As customary, we ran 100 epochs [10]. Batch size is the number of samples passed through the network at once. As customary, we used a batch size of 32 [10]. Optimizer is the optimization method minimizing the prediction error. We use Adam, a state-of-the-art algorithm for training RNNs [34]. Loss Function is the measure of the network prediction error. We use sigmoidal cross-entropy as it is efficient for text classification [35]. Learning Rate is the parameter controlling the adjustments to the weights with respect to the prediction error.
We used a customary learning rate of .001 [10]. 4 Results In this section, we compare the performance of RNN and traditional machine learning approaches for the classi\ufb01cation of API reference documentation. We contrast the performance of RNN classi\ufb01ers trained using different embeddings. Moreover, we assess the classi\ufb01ers generalizability to another test set. 8 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT Table 4: Comparison between Deep learning classi\ufb01ers (trained with embeddings from general purpose and software development corpora), traditional machine learning, and na\u00a8 \u0131ve approaches for classifying individual knowledge types in the CADO dataset. Values report the Area Under Precision-Recall Curve (AUPRC). Na\u00a8 \u0131ve baselines Traditional approaches Deep learning (General Purpose) Deep learning (Software dev.) Knowledge Type MF1 MF2 RAND k-NN SVM RNNCC RNNCCotf RNNSO RNNSOapi Functionality 0.69 0.73 0.72 0.76 0.39 0.86 0.84 0.87 0.87 Concept 0.11 0.14 0.12 0.13 0.57 0.25 0.28 0.28 0.28 Directive 0.26 0.16 0.17 0.22 0.04 0.40 0.41 0.41 0.45 Purpose 0.22 0.21 0.17 0.17 0.09 0.36 0.40 0.40 0.41 Quality 0.04 0.04 0.05 0.12 0.13 0.78 0.69 0.68 0.54 Control 0.08 0.12 0.09 0.08 0.81 0.28 0.32 0.30 0.30 Structure 0.37 0.37 0.35 0.38 0.42 0.61 0.56 0.63 0.60 Pattern 0.14 0.17 0.14 0.21 0.59 0.46 0.46 0.48 0.51 Example 0.24 0.23 0.20 0.25 0.60 0.90 0.85 0.90 0.90 Environment 0.04 0.03 0.06 0.16 0.43 0.68 0.80 0.66 0.51 Reference 0.11 0.14 0.16 0.13 0.15 0.35 0.35 0.41 0.30 Non-information 0.29 0.31 0.28 0.33 0.71 0.57 0.58 0.62 0.55 4.1 Knowledge Types Identi\ufb01cation Individual knowledge types. We trained two RNN-based classi\ufb01ers using a general-purpose corpus to create the embeddings for the input layer\u2014RNNCC and RNNCCotf. Table 4 reports the evaluation of our classi\ufb01ers. RNN and traditional machine learning approaches improve over the na\u00a8 \u0131ve baselines for all the individual knowledge type classi\ufb01cation by up to 74% (41% on average). SVM always performed better than k-NN. Deep learning classify Functionality, Example, and Environment with high precision and high recall at different probability thresholds (AUPRC \u226580%) outperforming machine learning approaches. The RNNs yields subpar results for Directive, Purpose, Reference, Concept, and Control (AUPRC < 50%). However, the best SVM outperforms the best RNN only for the latter two types. The best classi\ufb01ers for Quality, Structure, Patterns, and Non-information yield an AUPRC between 59% and 78%. Also in this case, the best machine learning approach (i.e., SVM) outperforms the best RNNs classi\ufb01ers only for the latter two types. Compared to machine learning, RNNs classify better eight knowledge types. Multiple knowledge types. The second step is to consider our task as a multi-label classi\ufb01cation problem rather than building individual classi\ufb01ers for each knowledge type. We compare the RNN classi\ufb01ers to the multi-label adaptation of the same two machine learning models and two na\u00a8 \u0131ve baselines (see Table 5). ML-kNN and OvRSVM perform worse than the baselines for the item-based metrics, whereas the RNNs shows the best performance. The RNNs outperform ML-kNN, OvRSVM, and the baselines for label-based metrics. There is an 11% improvement regarding the most strict metric (i.e., Subset Accuracy) between the best RNN and machine learning classi\ufb01ers. 
Regarding MacroPrecision, MacroRecall, and MacroF1, there is an improvement between 25% and 28% for the RNN. MF1 performs better than traditional machine learning regarding MacroAUC, which RNNs improves by 17%. Answer to RQ1. One-third of the knowledge types can be automatically identi\ufb01ed with good results (i.e., AUPRC \u226580%). RNN can more accurately (\u00bf 10%) identify eight of the 12 knowledge type compared to traditional machine learning approaches. When considering multi-label classi\ufb01cation, RNN outperforms traditional machine learning approaches for itemand label-based metrics. 4.2 Software Development-speci\ufb01c Corpus Individual knowledge types. One RNN uses freely-available, pre-trained embeddings based on a general purpose textual corpus (i.e., RNNCC), whereas RNNCCotf uses the same corpus but learns missing words on-the-\ufb02y from the CADO dataset. The assumption behind text statistical representations such as GloVe is that the meaning of a document is determined by the meaning of the words that appear in it [36]. Accordingly, RNNSO and RNNSOapi use corpora in a domain closer to the one of API documentation. As shown in Table 4, the best among these RNNs performs similarly to their general domain counterparts (\u2206AUPRC = 4%). For Functionality, Purpose, Control, and Structure the differences are minimal (1-2%). However, for Quality and Environment there is a substantial decrease in performance when using software development-speci\ufb01c embeddings 9 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT Table 5: Comparison between Deep learning classi\ufb01ers (trained with embeddings from general purpose and software development corpora), traditional machine learning, and na\u00a8 \u0131ve approaches for classifying multiple knowledge types in CADO. Na\u00a8 \u0131ve baselines Traditional approaches Deep learning (General Purpose) Deep learning (Software dev.) Metric MF1 MF2 ML-kNN OvRSVM RNNCC RNNCCotf RNNSO RNNSOapi Hamming Loss 0.17 0.20 0.18 0.30 0.16 0.14 0.14 0.14 Subset Accuracy 0.00 0.13 0.11 0.02 0.20 0.22 0.19 0.21 MacroPrecision 0.05 0.08 0.41 0.21 0.56 0.66 0.61 0.63 MacroRecall 0.16 0.16 0.24 0.27 0.55 0.39 0.30 0.33 MacroF1 0.10 0.10 0.27 0.24 0.55 0.44 0.40 0.43 MacroAUC 0.62 0.50 0.55 0.61 0.73 0.74 0.78 0.79 (10% and 14%, respectively). Overall, the improvement is rather limited given the overhead in obtaining the corpus and computing the embeddings. Multiple knowledge types. Table 5 shows that RNNCC and RNNCCotf outperform RNNSO and RNNSOapi when considering label-based metrics (except for MacroAUC) and perform similarly when considering item-based metrics. The RNN trained using Java and .NET StackOver\ufb02ow posts yields the best MacroAUC (79%). Answer to RQ2. RNN using software development-speci\ufb01c embeddings show slight to no improvement over RNN using general purpose embeddings for classi\ufb01cation of individual knowledge types. When considering multi-label learning, except for MacroAUC, using general purpose embeddings yields better results across itemand label-based metrics. 4.3 Classi\ufb01ers Generalizability Individual knowledge types. Table 6 reports the performance of the individual RNN-based classi\ufb01ers on the PYTHON test set. Also in this setting, no na\u00a8 \u0131ve baseline performs better than traditional or deep learning approaches. The RNNs are the best classi\ufb01ers for seven knowledge types, whereas SVM shows the best results for the remaining \ufb01ve. 
Consistently with the CADO setting, SVM is the best classi\ufb01er for Concept, Pattern, and Non-information. Classi\ufb01ers for Functionality, Concept, and Purpose show some improved performance compared to the CADO settings (\u2206AUPRC = 8.3%). There is a large absolute difference (\u2206AUPRC = 33%) between the two settings when considering Directive, Quality, Control, Structure, Example, and Environment, suggesting that these knowledge types are dependent on the settings. On average, the performance on the PYTHON dataset decrease by \u223c16% over the 12 knowledge types. Multiple knowledge types. Table 7 present the results of the multi-label classi\ufb01cation task. Regarding item-based metrics, our classi\ufb01ers perform worse or on par with respect to the na\u00a8 \u0131ve baselines. The classi\ufb01ers show low precision (40% for the best classi\ufb01er, SVM) and recall (26% for the best classi\ufb01ers, RNNCCotf and RNNSOapi). SVM also achieves the best F1 (30%). RNNSOapi shows the best performance for MacroAUC (64%) . Answer to RQ3. Classi\ufb01ers for Functionality, Concept, Purpose, Pattern, and Directive seem to generalize from Java and .NET to Python documentation. The generalization for multiple knowledge types classi\ufb01ers is limited. 5 Related Work To the best of our knowledge, this is the \ufb01rst study, addressing the automated identi\ufb01cation of several knowledge types within API reference documentation. In this section, we report related work investigating some of the knowledge types individually. We present studies comparing traditional machine learning and deep learning approaches for text classi\ufb01cation in software engineering. 5.1 Knowledge Types in API Documentation Identifying a document based on the knowledge types it contains can support documentation quality assessment and improvement. For example, Ding et al. [37] systematic review of 60 primary studies investigates documentation quality attributes. The authors focus on knowledge-based approaches used to address quality issues of API documentation. Although retrievability is reported as an essential quality attribute, the authors show a lack of advanced ways to retrieve 10 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT Table 6: Comparison between Deep learning classi\ufb01ers (trained with embeddings from general purpose and domain speci\ufb01c corpora), traditional machine learning, and na\u00a8 \u0131ve approaches for classifying API documents based on individual knowledge type in the PYTHON dataset. Values report the Area Under Precision-Recall Curve (AUPRC). Na\u00a8 \u0131ve baselines Traditional approaches Deep learning (General Purpose) Deep learning (Software dev.) 
Knowledge Type MF1 MF2 RAND k-NN SVM RNNCC RNNCCotf RNNSO RNNSOapi Functionality 0.89 0.89 0.92 0.85 0.94 0.90 0.89 0.95 0.94 Concept 0.29 0.28 0.31 0.26 0.64 0.40 0.33 0.49 0.41 Directive 0.41 0.41 0.49 0.42 0.71 0.49 0.44 0.55 0.63 Purpose 0.28 0.28 0.25 0.30 0.13 0.46 0.40 0.51 0.39 Quality 0.17 0.17 0.19 0.17 0.27 0.20 0.17 0.20 0.32 Control 0.27 0.27 0.32 0.24 0.33 0.43 0.46 0.39 0.35 Structure 0.24 0.24 0.24 0.32 0.11 0.26 0.24 0.30 0.32 Pattern 0.22 0.22 0.24 0.29 0.61 0.50 0.30 0.41 0.43 Example 0.36 0.36 0.38 0.43 0.44 0.48 0.49 0.51 0.48 Environment 0.16 0.16 0.17 0.16 0.37 0.15 0.15 0.18 0.17 Reference 0.12 0.12 0.17 0.11 0.22 0.16 0.19 0.24 0.25 Non-information 0.23 0.23 0.24 0.27 0.61 0.30 0.39 0.30 0.28 Table 7: Comparison between Deep learning classi\ufb01ers (trained with embeddings from general purpose and software development corpora), traditional machine learning, and na\u00a8 \u0131ve approaches for classifying multiple knowledge types in PYTHON. Na\u00a8 \u0131ve Traditional approaches Deep learning (General Purpose) Deep learning (Software dev.) Metric MF1 MF2 MLkNN OvRSVM RNNCC RNNCCotf RNNSO RNNSOapi Hamming Loss 0.23 0.25 0.28 0.35 0.27 0.30 0.26 0.27 Subset Accuracy 0.05 0.05 0.02 0.01 0.02 0.03 0.04 0.05 MacroPrecision 0.07 0.10 0.33 0.40 0.36 0.31 0.31 0.31 MacroRecall 0.08 0.16 0.24 0.24 0.24 0.26 0.21 0.26 MacroF1 0.07 0.13 0.28 0.30 0.29 0.28 0.25 0.28 MacroAUC 0.50 0.50 0.53 0.54 0.60 0.57 0.62 0.64 speci\ufb01c information from API documentation. On the one hand, our work represents a \ufb01rst step towards developing retrieval mechanisms for documents containing a set of knowledge types from the Java and .NET API reference documentation. On the other hand, the individual classi\ufb01ers showing a performance (e.g., Functionality, Control, Example, and Environment) can be used to retrieve documents containing a speci\ufb01c knowledge type. Moreover, our classi\ufb01ers can be used to retrieve documents containing Functionality from the Python standard library documentation. Previous research tried to automatically retrieve particular knowledge from API documentation. Robillard and Chhetri [5] presented an approach to identify API-related information that developers should not ignore as well as non-critical information. Their approach\u2014based on natural language analysis (i.e., part-of-speech tagging, word patterns)\u2014shows 90% precision and 69% recall when applied to 1000 Java documentation units. However, the authors needed to manually assess, on top of the sensible knowledge items, also obvious, unsurprising, and predictable documentation\u2014i.e., what we consider Non-information [5]. Our SVM classi\ufb01er, trained using simple features, identi\ufb01es Non-information with 71% accuracy. Montperrus et al. [8] studied a particular knowledge type found in API reference documentation, Directive. They analyzed more than 4000 API documentation from open source libraries. To determine the documents containing Directive, they developed a set of syntactic patterns associated with concerns reported in the documentation. Finally, they manually created a taxonomy of 23 directives. Pandita et al. [38] proposed an NLP-based approach to verify the legal usage of API methods against its description extracted automatically from the documentation. Their approach uses features derived from part-of-speech tagging and chunking techniques to semantically analyze text. 
Moreover, using a domain dictionary, the authors extracted methods speci\ufb01cations as \ufb01rst-order logic expressions to verify their legal usage in client code. Conversely, in this work, we attempted a simple approach based only on features which can be automatically extracted from the raw text. Our goal was to create a benchmark which can be improved by including, for example, natural language patterns speci\ufb01c for each knowledge types and domain-speci\ufb01c models. We show that some classi\ufb01ers have already practical relevance. 11 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT 5.2 Deep Learning in Software Engineering Xu et al. [39] use CNN to semantically link together knowledge units from StackOver\ufb02ow. Their approach focuses on predicting several classes of relatedness (e.g., duplicate, related information). The network input is the word2vec representation of 100,000 Java-related posts from StackOver\ufb02ow, whereas the dataset includes 8,000 knowledge units balanced among relatedness types. The CNN outperformed machine learning baselines\u2014i.e., SVM trained using tf-idf and word2vec. However, Fu and Menzies [27] replicated Xu et al. study comparing their results to the same SVM baselines optimized using hyper-parameter tuning. The authors showed improved results for the baselines which perform closely (if not better) to the CNN, although the latter required 84x more time to train. In this work, we also used a deep learning approach with a semantic representation of the input based on StackOver\ufb02ow. We found that, for our task, there are only a few small improvements due to the software development-speci\ufb01c corpus, which may not be worth when considering the extra effort required to obtain and train the embeddings. We compared the deep learning approach to (among others) SVM models trained in line with the suggestions of Fu and Menzies [27]. We showed that the approaches are complementary as their performance depends on the speci\ufb01c knowledge types. Fakhoury et al. [40] applied deep learning and traditional machine learning to the detection of language anti-patterns in software artifacts (e.g., poor naming conventions and documentation) using a dataset of 1,700 elements collected from 13 large Java system. The authors showed that using Bayesian optimization and model selection, traditional machine learning models can outperform deep learning not only in accuracy but also regarding the use of computational resources. They advise researchers and practitioners to explore traditional machine learning models with hyper-parameter tuning before turning to deep learning approaches. Our results for individual knowledge types partly support this conclusion. However, when tackling multi-label problems, our work shows that deep learning performs better than traditional machine learning for all the reported metrics. 6 Discussion In this section, we discuss the implications for practitioners and researchers. Then, we present the limitations of this study. 6.1 Implications Building automated knowledge extraction tools. Classi\ufb01ers showing good performance (AUPRC \u226580%) can already be used in practice to tag documents containing crucial information for developers. Moreover, these classi\ufb01ers are trained using either traditional machine learning algorithms with simple text features or using deep learning but with readily available embeddings. 
A document containing Functionality can answer developers\u2019 information needs regarding what the API does, whereas Control and Example address how to accomplish a task using the API. The classi\ufb01er for Functionality can be applied also to Python documentation. The Environment classi\ufb01ers can be used to get information regarding an API usage context. Classi\ufb01ers for Quality and Non-information showed encouraging results (AUPRC \u226570%). The former is relevant to understand API performance, whereas the latter is useful for suggesting information that a developer can ignore. Moreover, the Non-information classi\ufb01er showed promising results generalizing to the Python documentation. Given its particular use case, we suggest research to focus on maximizing recall to ensure that all uninformative documents can be tagged appropriately. For the other knowledge types, we suggest maximizing precision to guarantee that fundamental information is correctly tagged. The results for other knowledge types can be improved by adding NLP-based features. For example, Structure usually contains references to other API elements that can be identi\ufb01ed using a speci\ufb01c named-entity tagger (e.g., [41]). Concept and Pattern are strongly characterized by explanations of speci\ufb01c terms and sequence of steps. These can be identi\ufb01ed through specialized features based on linguistic inquiry [42], such as drives (e.g., \u201cdo this to achieve that\u201d) and time orientation (e.g., \u201cdo this, then do that\u201d). As these classi\ufb01ers showed similar results when applied to the Python documentation, their improvement can also increase their generalizability. The classi\ufb01ers showed the worst results (AUPRC < 50%) for Directive, Purpose, and Reference knowledge types. The \ufb01rst two can be the subject of further research. In particular, features for a Directive classi\ufb01er can be extracted from Maalej and Robillard work [3] as well as from the speci\ufb01c taxonomy developed by Montperrus et al. [8]. Furthermore, previous work on rationale mining for other software engineering tasks (e.g., [43, 44]) can be adapted to improve the results for the Purpose knowledge type. The Reference classi\ufb01er showed some of the weakest performance but it can be improved with simple syntactical features\u2014e.g., the presence of links. 12 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT There is a variation in performance between the classi\ufb01er con\ufb01gurations (e.g., traditional machine learning vs. deep learning) and between the individual knowledge types. We hypothesize that some knowledge types can be sensitive to speci\ufb01c keywords, such as \u201ccallback,\u201d \u201cevent,\u201d and \u201ctrigger\u201d in the case of Control. On the other hand, knowledge types such as Environment and Example are characterized by a change in the language context. The former tends to interpolate text with numbers (as it includes information such as version and copyright year), while the latter contains sequences that do not occur in natural language (i.e., source code). We postulate that the RNN can capture this change of context. However, the explanations for some classi\ufb01ers results are more subtle. For example, Non-information implies expressing in natural language information already provided by a method signature. This implies a mapping between source code tokens and natural language ones which need to be further investigated. 
Similarly, the Purpose knowledge type contains information\u2014i.e., the answer to a \u201cwhy\u201d question\u2014which can be dif\ufb01cult to identify, from a semantic perspective, using the simple con\ufb01gurations of our classi\ufb01ers. Arguably, the intrinsic dif\ufb01culty to identify a knowledge type, even for a human expert, can explain some of the poor results. For instance, Maalej and Robillard report low agreement for Purpose\u2014a knowledge type showing subpar results (AUPRC = 41%). Another explanation for the different performance between traditional machine learning and deep learning can lay in the parameters used to tune the latter. A suggested improvement is to create 12 binary RNNs (one for each knowledge type) and select different parameters for a) the activation function of the output layer (e.g., SoftMax [45]), b) the loss function (e.g., Categorical Cross-Entropy [46]), and c) the optimizer (e.g., RMSProp or ADA [47]). Using knowledge types when developing software. One of the main applications for the classi\ufb01ers presented in this study is documentation \ufb01ltering. On top of the current option (e.g., by package or class), API websites can offer their users the possibility to search documentation based on speci\ufb01c knowledge types. For example, a developer \ufb01xing a speci\ufb01c performance bug (e.g., related to wireless connectivity) can search the network API documentation containing the Quality knowledge type. In this scenario, the classi\ufb01er can be optimized for precision\u2014i.e., the developer would consult a small number of documents which are likely to contain the information she needs. On the other hand, a developer exploring possible usage of a new set of APIs can \ufb01lter them according to Functionality which describes the their capabilities. In this scenario, the classi\ufb01er can be optimized for recall\u2014i.e., the developer consults a substantial amount of documents which offer her a complete overview of the API functionalities, even if some may be irrelevant. Our benchmark is a starting point for selecting which classi\ufb01er to optimize according to speci\ufb01c scenarios. Classi\ufb01ers with AUPRC \u226580% can already be utilized the scenarios above. Classi\ufb01ers with AUPRC \u226550% need further optimization. The proposed benchmark is also a stepping stone to support software developers \ufb01ltering API documentation based on multiple knowledge types of interest. Given the complexity of such a task, our best classi\ufb01er (RNNSOapi) showed good results (MacroAUC = 79%). However, when disregarding recall\u2014based on the assumption that a developer will not read a large number of documents\u2014the classi\ufb01er with the highest precision (66%) is RNNCCotf. Conversely, when a developer can tolerate noisy yet comprehensive results, we recommend using RNNCC\u2014i.e., the classi\ufb01er with the best recall. In both cases, the classi\ufb01ers rely on \u201caffordable\u201d embeddings. Using knowledge types when authoring documentation. API documentation providers can leverage the results of this work to monitor their product. For example, they can use simple machine learning models (e.g., SVM) to \ufb01nd documents containing Non-information and remove irrelevant text that increase the developers\u2019 cognitive effort (e.g., the repetition of a method signature in textual form). 
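The per-type binary RNNs suggested above could be prototyped along the following lines. This is a hedged tf.keras sketch under assumed vocabulary and layer sizes; it is not the architecture evaluated in this paper, and only the output activation, loss, and optimizer knobs come from the suggestion in the text.
# Sketch (assumed tf.keras API): one small binary RNN per knowledge type,
# with the output activation, loss, and optimizer left as tunable parameters.
import tensorflow as tf

VOCAB_SIZE = 20000  # assumed preprocessing choice

def build_rnn(output_units=1, activation="sigmoid",
              loss="binary_crossentropy", optimizer="rmsprop"):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 128),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(output_units, activation=activation),
    ])
    model.compile(optimizer=optimizer, loss=loss,
                  metrics=[tf.keras.metrics.AUC()])
    return model

KNOWLEDGE_TYPES = ["Functionality", "Concept", "Directive", "Purpose", "Quality",
                   "Control", "Structure", "Pattern", "Example", "Environment",
                   "Reference", "Non-information"]
# One binary classifier per knowledge type; a softmax variant would instead use
# output_units=2, activation="softmax", loss="categorical_crossentropy".
classifiers = {kt: build_rnn() for kt in KNOWLEDGE_TYPES}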
Furthermore, they can monitor the presence of knowledge types containing crucial information for software developers, such as Functionality, Control, and Example. API documentation provider can also monitor the decrease of important knowledge types (e.g., Functionality) or increase of harmful ones (i.e., Non-information) before releasing new version of an API and its documentation. API defects can be diagnosed by identifying (and subsequently improving) documentation containing Directive and Quality. Further research outlets. Researchers investigating documentation quality can bene\ufb01t from the results of our work. For example, quality models can be devised based on the presence (or absence) of speci\ufb01c types. A \ufb01rst step is the identi\ufb01cation of knowledge types in a set of documents. In our benchmark, the RNNSOapi model showed good results (MacroAUC = 0.79). The classi\ufb01er correctly identi\ufb01es documents containing a set of knowledge types with 60% false positives rate when maximizing recall. Researchers should also consider the trade-off between using a pre-trained embedding while losing some performance (5%) in terms of MacroAUC. Given the results obtained on the Python standard library, we recommend researchers to be careful when applying our multi-label models to different API documentation. Researchers have shown interest in studying how the usage of particular elements in a framework is documented (e.g., [5, 9, 8]). This line of research can bene\ufb01t from an approach to automatically retrieve API reference documentation containing the Functionality knowledge type using the RNNSO model, as it showed good performance on different test sets. 13 \fOn Using Machine Learning to Identify Knowledge in API Reference Documentation A PREPRINT 6.2 Threats to Validity The API reference documentation used to train our classi\ufb01ers is based on two libraries, JDK and .NET. While the language paradigms are similar, their documentation styles are different [3]. Moreover, we directly addressed a threat to generalizability by investigating the less structured documentation of the Python programming language API [48]. We acknowledge that our results may not hold for API reference documentation in other domains (e.g., for a speci\ufb01c framework) or for a different programming paradigm (e.g., declarative programming). Although Maalej and Robillard taxonomy is general enough [3], other knowledge types may exist. The labeling of our new test set can introduce a threat to internal validity. To mitigate such threat, two raters independently labeled the documents using validated guidelines [3]. We reconciled the disagreements (approximately 50% were clear mistakes) by discussing borderline cases and reaching consensus among the authors. Our benchmark only includes two traditional machine learning algorithms, one speci\ufb01c deep learning architecture, and four representations for the RNN input layer. Nevertheless, there may be other algorithms, embeddings, and con\ufb01gurations worth of investigation. The results can be biased due to the unbalancedness of the dataset. To reduce this threat, we applied common resampling techniques to the training set and reported the performance according to appropriate metrics. We did not observe a correlation between the classi\ufb01ers performance and the distribution of the labels. 
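The resampling mentioned in the threats discussion could look like the following; the paper does not state which technique was used, so this is only a sketch of one common option (random oversampling of the minority class), with placeholder features and labels.
# Sketch (assumed scikit-learn/numpy APIs): random oversampling of the minority
# class in the training set to mitigate label imbalance.
import numpy as np
from sklearn.utils import resample

X_train = np.array([[0.2], [0.1], [0.4], [0.3], [0.9], [0.8]])  # placeholder features
y_train = np.array([0, 0, 0, 0, 1, 1])                          # imbalanced labels

minority = X_train[y_train == 1]
majority = X_train[y_train == 0]
minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)

X_balanced = np.vstack([majority, minority_up])
y_balanced = np.concatenate([np.zeros(len(majority), dtype=int),
                             np.ones(len(minority_up), dtype=int)])
print(X_balanced.shape, np.bincount(y_balanced))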
7" + }, + { + "url": "http://arxiv.org/abs/1807.02971v1", + "title": "A Longitudinal Cohort Study on the Retainment of Test-Driven Development", + "abstract": "Background: Test-Driven Development (TDD) is an agile software development\npractice, which is claimed to boost both external quality of software products\nand developers' productivity. Aims: We want to study (i) the TDD effects on the\nexternal quality of software products as well as the developers' productivity,\nand (ii) the retainment of TDD over a period of five months. Method: We\nconducted a (quantitative) longitudinal cohort study with 30 third year\nundergraduate students in Computer Science at the University of Bari in Italy.\nResults: The use of TDD has a statistically significant effect neither on the\nexternal quality of software products nor on the developers' productivity.\nHowever, we observed that participants using TDD produced significantly more\ntests than those applying a non-TDD development process and that the retainment\nof TDD is particularly noticeable in the amount of tests written. Conclusions:\nOur results should encourage software companies to adopt TDD because who\npractices TDD tends to write more tests---having more tests can come in handy\nwhen testing software systems or localizing faults---and it seems that novice\ndevelopers retain TDD.", + "authors": "Davide Fucci, Simone Romano, Maria Teresa Baldassarre, Danilo Caivano, Giuseppe Scanniello, Burak Thuran, Natalia Juristo", + "published": "2018-07-09", + "updated": "2018-07-09", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE" + ], + "main_content": "INTRODUCTION Test-Driven Development (TDD) is a software development technique which leverages unit tests to incrementally deliver small pieces of functionality. Peculiar to TDD is the order in which tests and production code are written\u2014the former are specified first and only in the case of a failure the developer is allowed to write production code to make them pass [2]. An important role in this process is played by refactoring which allows changing the code internal representation (e.g., an algorithm) while preserving its external behavior due to the safety net provided by the test suite [1]. TDD promised to support the delivery of high-quality products, both from a functional (e.g., fewer bugs) and technical perspective (e.g., \u201ccleaner\u201d code), while improving developers\u2019 productivity [2]. Consequently, industry has taken an interest in adopting this technique [4] and academia has dedicated large effort to gather empirical evidence to support or disprove its claimed effects. The results, gathered and combined in secondary studies [20, 21, 28], are conflicting and limited conclusions can be drawn. The primary studies, such as controlled experiments and case studies, are often cross-sectional [5] and only capture a \u201csnapshot\u201d of the phenomena at a given time. However, despite being recommended in the literature [10, 19, 26], only few investigations [15, 23] take a longitudinal perspective on the study of TDD. In one of these studies, Latorre [15] followed professional developers of different levels of seniority (but all with no experience in TDD) working on a project for a month while learning to apply the technique. The author studied how the conformance to the TDD process and the participants\u2019 productivity evolved during the investigation. 
The focus was to evaluate how the different subjects\u2019 learning curves affect their performance (e.g., in terms of code quality). A long-term case study at IBM by Sanchez et al. [23] aimed at understanding whether TDD improves over the process previously adopted in the company. The observation arXiv:1807.02971v1 [cs.SE] 9 Jul 2018 \fESEM, October 2018, Oulu, Finland Fucci et al. focused on a team and its sustained use of TDD for a period of five years. However, the investigation was carried out retrospectively\u2014 i.e., using existing data gathered during such period but also before TDD was introduced. Conversely, we present a longitudinal cohort study [5] involving two separate observations of the same variables (i.e., functional quality, productivity, and number of tests written), obtained from the same participants (i.e., 30 novice developers), five months apart. Our cohort is composed of software developers of homogeneous experience who attended the same training regarding Agile software development principles, including TDD. Our goal is to understand how well TDD can be applied after the passage of time, giving an indication of its retention. Thus, the main research question driving our study is: To what extent can novice software developers retain TDD and its effects (if any) over a period of five months? To establish a baseline, we compared the treatment of interest (i.e., TDD) with the non-TDD development process (e.g., iterative test-last, big-bang testing, or no testing at all) that subjects would normally follow. We refer to the latter as Your Way development (i.e., YW). This paper makes the following main contributions: \u2022 an evidence-based discussion of TDD retainment and its implication for research and practice; \u2022 a longitudinal design methodology that can be applied to other software development processes to distinguish between short-term and long-term phenomena; \u2022 a laboratory package1 to foster further replication of the presented longitudinal cohort study. Paper organization. In Section 2, we present background information and related work. In Section 3, we describe the design of our longitudinal cohort study, while the results are presented and discussed in Section 4 and Section 5, respectively. Final remarks conclude the paper in Section 6. 2 BACKGROUND In this section, we summarize the available evidence supporting (or refuting) the effects of TDD on external quality (or functional e.g., number of defects) and developers\u2019 productivity. We also summarize research work on existing longitudinal studies in the context of Software Engineering (SE). 2.1 Types of longitudinal studies in SE Longitudinal research in SE is not so common and appears to be mostly associated with the case study methodology. According to Yin [33], in a longitudinal case study data collection happens over an extended period with the goal of investigating \u201chow certain conditions change over time\u201d [33]. This is the case when the phenomenon under investigation is a process bounded to its context. Therefore, similarly to ethnography [25], longitudinal case studies require the researchers to be co-located with the case in which the phenomena takes place. For example, in the longitudinal case study reported by McLeod et al. [18], researchers spent several hundreds of hours 1www2.unibas.it/sromano/downloads/LabPackageUniba.zip at the case company site. 
They attended meetings, observed and interviewed stakeholders within a period of two years to characterize software development as an emergent process. In a similar fashion, Salo and Abrahamson [22] followed how Software Process Improvement (SPI) techniques were introduced in the workflow of five Agile projects. Their investigation lasted for 18 months during which the researchers constantly recorded the output of retrospective meetings, interviews with the developers, as well as the metrics collected from the SPI tool in use at the company. Longitudinal studies are useful when the observations cover an interesting event\u2014e.g., the introduction of a new practice within a company. Therefore, the researcher is interested in observing the impact of such change while it unfolds. This scenario is similar to interrupted time series in quasi-experimental designs [5]. For example, Li et al. [16] studies the changes brought by replacing a waterfall-like approach with Scrum in a small software company. The authors followed the development of a project for more than three years\u201417 months using waterfall and 20 using Scrum. This approach allowed for a before-after comparison of defects density and productivity. The longtime span was necessary to avoid a biased comparison between the established process and an immature one. A similar approach is reported in Vanhanen et al. [29] in which the impact of introducing pair programming was assessed over a period of two years with data collected through survey with the developers. Other examples of longitudinal studies in SE cover a long period of time in retrospect\u2014e.g., by analyzing archival data. Harter et al. [11] analyzed the type of defects identified over time by the progressive introduction of SPI techniques in a firm and its subsequent CMMI improvements over a period of 20 years. In the health science and medicine, longitudinal studies are sometimes realized in the form of cohort studies. A cohort is a sample of subjects (e.g., who undergo a treatment) sharing a specific characteristic of interest (e.g., age). The cohort is tested in several occasions over time to, for example, check for a drug side-effect before releasing it to the market [5]. 2.2 Effects of TDD The effects of TDD on several outcomes, including the ones of interest for this study\u2014i.e., functional quality and productivity\u2014is the topic of several empirical studies, summarized in Systematic Reviews (SR) and Meta-Analysis (MA). Turhan et al. [28] SR includes 32 primary studies in which TDD was investigated in different settings. Although the results show a positive effect on quality, the ones regarding productivity are inconclusive. Rafique and Misic [21] conducted an MA covering 25 primary studies published between 2000 and 2011. When considering participants from academia, TDD seems to improve quality to the loss of productivity. Finally, Munir et al. [20] SR classifies the primary studies according to relevance and rigor dimensions. The results show, for both student and professional developers, that TDD increases quality but not productivity. The authors recommend increasing relevance by carrying out longterm studies. One example of such investigation is presented in Marchenko et al. [17] which reports a three-year-long case study about the use of TDD at Nokia-Siemens Network. 
The authors observed and interviewed eight participants (one Scrum master, \fA Longitudinal Cohort Study on the Retainment of Test-Driven Development ESEM, October 2018, Oulu, Finland Training sessions YW Experimental session TDD Experimental session TDD Training sessions YW Experimental session TDD Experimental session Period P1 (16/11/2016) Period P2 (07/12/2016) Period P3 (03/05/2017) Period P4 (04/05/2017) Group G1 Groups G1, G2 Group G2 MRA MRA BSK BSK GOL GOL SSH SSH Figure 1: Summary of the study. one Product owner, and six developers) and extracted themes from the data. The participants perceived TDD as an important driver towards the improvement of their code quality, both from a structural and functional perspective. Moreover, the team confidence with the code base improved, which is associated with improved productivity [2]. The examined team reported that TDD was not suitable for bug fixing, especially for bugs that are difficult to reproduce (e.g., needing a specific environment setup) or for quick \u201chacks\u201d due to the testing overhead. The authors also report some concerns regarding the lack of a solid architecture when applying TDD. Latorre [15] studied the capability of 30 professional software developers (junior, intermediate, and experts) to develop a realworld system using TDD. The study targeted the learnability of TDD, as the participants did not know the technique beforehand. The longitudinal one-month study started after giving the developers, proficient in Java and unit testing, a tutorial about TDD. After only a short practice session, the participants were able to correctly apply TDD (e.g., following the prescribed steps). Although they correctly followed the process between 80% and 90% of the time, their ability to initially apply TDD depended on experience\u2014 while seniors needed only few iterations, intermediates and juniors needed more time. Experience had an impact on productivity\u2014only the experts were able to be as productive as they were when applying a traditional development methodology (measured during the initial development of the system). Refactoring and design decision hindered the productivity of intermediates and junior participants. Finally, regarding functional quality, all the participants in the study delivered a correct version of the system regardless of their seniority level. 3 LONGITUDINAL COHORT STUDY In this section, we describe the planning of our longitudinal cohort study. We summarize the most important steps of this study in Figure 1. In particular, the participants in the study (groups G1 and G2) first took part in training sessions (and accomplished homework) where they practiced unit testing, iterative test-last development, and big-bang testing. Then, the participants in the groups G1 and G2 were asked to perform two implementation tasks on two different experimental objects (i.e., BSK2 and MRA,3) in the same period P1\u2014 a period is the time during which a treatment is applied [30]. In P1, the participants could apply only YW because they were not aware of TDD yet. Between the periods P1 and P2, all the participants (G1 and G2) practiced TDD during training sessions (and homework). In the second period (i.e., P2), we asked the participants in G1 and G2 to perform other two tasks, GOL4 and SSH5 respectively. The applied treatment was TDD for both groups. After five months, we asked the same participants in group G1 to implement a new task\u2014 i.e., SSH\u2014during P3 using the YW approach. 
In the same period (i.e., P3), the participants of G2 implemented the GOL task using YW. While, during P4, the participants of G1 and G2 were asked to apply TDD on MRA and BSK, respectively. We considered P3 and P4 to study the effects of TDD and its retainment. We introduced the last periods, P3 and P4, several months apart from the first two to assess whether the initial knowledge of TDD is retained. The planning of our cohort study is reported according to the template by Jedlitschka et al. [13]. When planning and conducting our study, we followed the guidelines by Juristo and Moreno [14] and Wohlin et al. [32]. 2BSK (Bowling ScoreKeeper) is an API for calculating the score of a bowling game. 3MRA (Mars Rover API) is an API for moving a rover on a planet. 4GOL (Game Of Life) is an API for Conway\u2019s game of life. 5SSH (SpreadSHeet) is an API for a spreadsheet. \fESEM, October 2018, Oulu, Finland Fucci et al. 3.1 Research Questions We aimed at investigating the following Research Questions (RQs): RQ1. To what extent do novice software developers retain TDD and how does this affect their performance? RQ2. Are there differences between TDD and YW in the external quality of the implemented solutions, developers\u2019 productivity, and number of tests written? We defined RQ1 to study whether TDD retainment affects the application of YW and whether there are deteriorations (or improvements) in the application of TDD over five months. The considered constructs are, external quality of the implemented solutions, developers\u2019 productivity, and the number of tests developers wrote. Finally, RQ2 aimed at understanding whether the claim that TDD increases both external quality of the software products and developers\u2019 productivity is well-founded as well as whether TDD leads developers to write more tests. 3.2 Experimental Units The participants were third-year undergraduate students in Computer Science. They were sampled by convenience among the students taking the Integration and Testing course at the University of Bari in Italy. The course covered the following topics, software quality, unit testing, integration testing, SOLID principles, refactoring, iterative test-last development, big-bang testing, and TDD. The course included frontal lectures, laboratory sessions, and homework. During the laboratory sessions, the students improved their knowledge about how to develop unit tests in Java by using the Eclipse IDE and JUnit, and the refactoring functionality available in Eclipse. During laboratory sessions and by developing homework, the students practiced unit testing, iterative test-last development, big-bang testing, and TDD. Participation in the cohort study was voluntary. We informed the students that any gathered data would be treated anonymously and used for research purposes only. We also informed them that their performance in the study would not affect their final mark for the Integration and Testing course. To encourage the participation, we rewarded who accepted to take part in the study with a bonus in their final mark. Among the students taking the Integration and Testing course, 30 accepted to participate. The participants had passed the exams for the courses of Procedural Programming, Object Oriented Programming, Software Engineering, and Databases. During these courses, all participants had acquired programming experience in C and Java. 
Between the first two periods and the last two, the participants followed the same university curricula courses in which TDD was not used. However, we did not control whether, within such period, the participants practices TDD outside the academic scope (e.g., in personal projects). 3.3 Experimental Materials The experimental objects were four code katas (i.e., programming exercises used to practice a technique or a programming language). \u2022 BSK. It is an API for calculating the score of a bowling game. This API allows adding a frame to a game, as well as bonus throws; computing the score of a frame; identifying when a frame is spare or strike; and computing the score of a game. \u2022 MRA. It is an API for moving a rover on a planet, which is represented as a grid. The cells of this grid can contain obstacles that the rover cannot pass through. MRA allows the initialization of a planet (i.e., defining the grid with the obstacles) and moving the rover on the planet by parsing a list of commands (i.e., turning left/right and moving forward/backward). \u2022 SSH. It is an API for a spreadsheet. SSH allows evaluating the content of a cell and thus returning the result of this evaluation. Cells can contain integers, strings, references, and formulas (e.g., concatenation of strings or integer addition). \u2022 GOL. It is an API for Conway\u2019s game of life. This game takes place on a square grid of cells. Each cell can assume two states: alive or dead. At each step, the current state of the grid is used to determine the next state. GOL allows initializing the grid and determining the next state of each cell (it depends on the current state of the cell and of its neighbors) and then of the grid. The implementation of the aforementioned APIs did not require any graphical user interfaces. Each experimental object was composed of several user stories6 to be implemented, as well as a template project (of Eclipse) that contained a stub of the expected API signature and an example JUnit class test. To verify that the user stories were correctly implemented, each experimental object was accompanied by acceptance test suites\u2014an acceptance test suite for each user story. It is worth mentioning that the acceptance test suites were not provided to the participants. That is, these suites were only exploited to quantify the quality of the solutions implemented by the participants and their productivity (see Section 3.5). The use of code katas in empirical studies on TDD is quite common (e.g., [6\u20138]). For BSK and MRA, we exploited the materials used in the experiment by Fucci et al. [8]. As for SSH and GOL, we created the experimental materials (e.g., template projects). 3.4 Tasks Each task was coupled to an experimental object (i.e., four tasks, one for each experimental object). A task consisted of implementing a solution for an experimental object (e.g., BSK). To this end, we provided the participants with: (i) the user stories to be implemented for the considered experimental object; and (ii) the template project. Thus, the participants had to use the template project when implementing the user stories for that experimental object. 3.5 Hypotheses and Variables The participants were asked to carry out each task by using either TDD or the approach they preferred (i.e., YW)\u2014of course, in this latter case, they could not use TDD. Therefore, one of the independent variable (also known as factor) is Technique. It is a nominal variable assuming two values, TDD and YW. 
Since our study is longitudinal\u2014i.e., we collected data over time\u2014we took into account another independent variable, which represents the period during which each treatment (i.e., TDD or YW) was applied. We named this variable Period. It is a nominal variable and assumes the following values: P1, P2, P3, and P4. We also considered the independent variable Group, representing the two groups of participants. It is a nominal variable assuming two values: G1 and G2. (Footnote 6: A user story is a description of a feature to be implemented from the perspective of the end user.) The dependent variables considered in our study are QLTY, PROD, and TEST. We chose these dependent variables because they have been previously used in other empirical studies on TDD (e.g., [6\u20138, 27]). The variable QLTY quantifies the external quality of the solution a participant implemented. This variable is defined as follows (e.g., [8]): QLTY = (sum_{i=1}^{#TUS} QLTY_i / #TUS) * 100 (1) where #TUS is the number of user stories a participant tackled, while QLTY_i is the external quality of the i-th user story. To understand if a user story was tackled or not, we checked the asserts in the corresponding acceptance test suite. Namely, if at least one assert in the test suite (for that story) passed, then the story was tackled. #TUS is formally defined as: #TUS = sum_{i=1}^{n} [1 if #ASSERT_i(PASS) > 0, 0 otherwise] (2) On the other hand, the quality of the i-th user story (i.e., QLTY_i) is equal to the number of asserts passed for the i-th story with respect to the total number of asserts for the same story. More formally: QLTY_i = #ASSERT_i(PASS) / #ASSERT_i(ALL) (3) Given Formulas 1, 2, and 3, QLTY assumes values between 0 and 100, where a value close to 0 means that the quality of the solution is low, while a value close to 100 indicates high quality of the solution. The variable PROD estimates the productivity of a participant. It is computed as follows (i.e., [27]): PROD = #ASSERT(PASS) / #ASSERT(ALL) * 100 (4) where #ASSERT(PASS) is the number of asserts passed, by considering all the acceptance test suites, with respect to the total number of the asserts in the acceptance test suites. PROD assumes values between 0 and 100. A value close to 0 indicates low productivity, while a value close to 100 means high productivity. The variable TEST quantifies the number of unit tests a participant wrote. It is defined as the number of asserts in the test suite written by a participant when tackling a task (e.g., [8]). TEST ranges from 0 to \u221e. We formulated the following parameterized null hypotheses: HN1X. There is no significant effect of Period with respect to X (i.e., QLTY, PROD, or TEST). HN2X. There is no significant effect of Technique with respect to X (i.e., QLTY, PROD, or TEST). The alternative hypotheses are two-tailed\u2014i.e., we did not consider the direction of the effect for either independent variable. HN1X was defined to investigate RQ1, while we defined HN2X to investigate RQ2. 3.6 Study Design The design of the cohort study is depicted in Table 1. Table 1: Design Summary. Group G1: P1 (16/11/2016) YW, BSK; P2 (07/12/2016) TDD, GOL; P3 (03/05/2017) YW, SSH; P4 (04/05/2017) TDD, MRA. Group G2: P1 YW, MRA; P2 TDD, SSH; P3 YW, GOL; P4 TDD, BSK. The participants were randomly split into two groups\u2014G1 and G2\u2014having 15 participants each.
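To make the definitions of QLTY, PROD, and TEST above concrete, the following is a minimal Python sketch of how the three measures could be computed from acceptance-test assert outcomes; the data structure and the numbers are illustrative assumptions, not the authors' actual tooling.
# Sketch: computing QLTY, PROD, and TEST (Formulas 1-4) from assert outcomes.
# Each entry maps a user story to (asserts passed, total asserts) in its
# acceptance test suite; the values below are made up for illustration.
acceptance_results = {
    "US1": (4, 5),
    "US2": (0, 6),   # not tackled: no passing assert
    "US3": (3, 3),
}
participant_test_asserts = 9  # asserts in the tests the participant wrote (TEST)

tackled = {us: (p, a) for us, (p, a) in acceptance_results.items() if p > 0}
num_tus = len(tackled)  # Formula (2)

# Formula (3) per story, averaged over tackled stories and scaled (Formula (1)).
qlty = 100 * sum(p / a for p, a in tackled.values()) / num_tus if num_tus else 0.0

# Formula (4): passed asserts over all asserts, across every acceptance suite.
passed = sum(p for p, _ in acceptance_results.values())
total = sum(a for _, a in acceptance_results.values())
prod = 100 * passed / total

print(f"QLTY={qlty:.1f} PROD={prod:.1f} TEST={participant_test_asserts}")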
Whatever the group was, the participants were assigned to each treatment (i.e., TDD or YW) twice. In particular, both groups were assigned to: YW in the first period (i.e., P1), TDD in the second period (i.e., P2), YW in third period (i.e., P3), and TDD in the last period (i.e., P4). Therefore, the design of our study can be classified as a repeated measures, within-subjects design. In each period, the participants in G1 and G2 dealt with different experimental objects. For instance, in P1, the participants in G1 dealt with BSK, while those in G2 with MRA. At the end of the study, every participant had tackled each experimental object only once. 3.7 Procedure Before our study took place, we collected some demographic information on the participants. To this end, the participants filled out an on-line pre-questionnaire (created by means of Google Forms). The Integration and Testing course\u2014in which the cohort study was conducted\u2014started in October, 2016. The first application of the YW treatment (i.e., P1) took place on November 16th, 2016 (see Table 1). Between the beginning of the course and P1, the participants had never dealt with TDD, while they knew unit testing, iterative test-last development, and big-bang testing. On these techniques, the participants had taken part in two training sessions and carried out some homework. TDD was introduced to the participants between P1 and P2. They also had taken part in three training sessions on TDD and completed some homework by using this development practice. Given the previous considerations, we can exclude that the knowledge of TDD affected in anyway the YW treatment in P1. The YW treatment was applied again on May 3rd, 2017 (in P3), while TDD was applied the day after in P4. From P2 to P3 five months passed. Since the participants knew TDD in P3, we cannot exclude that the knowledge of TDD would have affected the treatment YW in P3 in someway. That is, if the TDD retainment had affected the application of YW or not. On the other hand, we assessed the retainment of TDD by asking the participants to use TDD (once again) in P4. \fESEM, October 2018, Oulu, Finland Fucci et al. Table 2: Some descriptive statistics for each dependent variable grouped by Period and by Technique. Variable Statistic Period (Technique) Technique P1 (YW) P2 (TDD) P3 (YW) P4 (TDD) YW TDD QLTY mean 59.3989 63.1002 63.0505 58.535 61.2247 60.8176 median 76.7614 69.7251 71.2867 74.7614 72.9702 71.9962 SD 37.8509 31.989 30.7322 34.5895 34.232 33.1112 PROD mean 34.1145 32.4793 30.991 37.9692 32.5527 35.2243 median 27.5281 29.0698 27.907 42.8571 27.907 34.8837 SD 32.182 29.039 28.9798 29.194 30.403 29.0012 TEST mean 4.9333 7.8333 7.9333 10.1 6.4333 8.9667 median 4 6.5 5 8.5 5 7 SD 4.0508 5.5216 7.5198 7.2462 6.1764 6.4885 The execution of the study tasks took place under controlled conditions in a laboratory at the University of Bari. In each period, the participants in G1 and G2 were randomly assigned to the laboratory PCs. We alternated participants in G1 in G2 to avoid that participants of the same group assigned to the same experimental object were close to each other. This setup limited interactions among the participants. In addition, we monitored them during the execution of tasks. All the PCs in the laboratory were equipped with the same hardware and software. Furthermore, they were set up with all the experiment materials necessary for carrying out the task, i.e., the template project (of Eclipse) corresponding to the assigned experimental object. 
Each subject provided a solution for the assigned task by using the template project. The participants implemented the tasks in Java and used JUnit as testing framework. At the end of each task, the participants uploaded their solution through GitHub and then filled out a post-questionnaire. This questionnaire collected feedback on how the participant perceived the execution of each task. 3.8 Analysis Procedure The gathered experimental data were analyzed according to the following procedure: (1) Descriptive Statistics. We computed descriptive statistics, i.e., mean, median and standard deviation (SD), to summarize the distributions of the dependent variable values. We also used boxplots to graphically summarize these distributions. (2) Inferential Statistics. We used Linear Mixed Model (LMM) analysis methods to test the defined null hypotheses. LMM is a popular method for analyzing data from longitudinal studies [31]. For each dependent variable, we built an LMM that included the following terms: Period, Group, and Period:Group (i.e., the interaction between Period and Group) as fixed effects, while the participant represents the random effect (this is customary in SE experiments [30]). It is worth noting that the periods P1 and P3 correspond to the application of the YW treatment, while TDD was applied in the periods P2 and P4. This means that, if the LMM does not indicate a statistically significant effect of Period, then there is no statistically significant effect of Technique. To build LMMs, we considered Group because, based on the study design, it also represents the sequence (i.e., the order in which the treatments are applied in combination with the experimental objects). In repeated measures designs the effect of sequence on the dependent variables must be analyzed [30]. LMM analysis methods have two assumptions: (i) the residuals of LMM have to be normally distributed and (ii) their mean has to be equal to zero [30]. If these two assumptions are not verified, transforming the data of the dependent variable is an option (e.g., by using log or square-root transformation) [30]. To check if the residuals were normally distributed, we used the Shapiro-Wilk test (Shapiro test, from here onwards) [24]. As it is customary with tests of statistical significance, we accepted a probability of 5% of committing Type-I error (i.e., \u03b1 = 0.05). 4 RESULTS In this section, we first report the results from the descriptive statistics followed by those pertaining the inferential statistics. 4.1 Descriptive Statistics In Table 2, we report the values of mean, median, and SD for each depended variable. These values are grouped by both Period and Technique. We also show the boxplots for the dependent variables in Figure 2. QLTY. As shown in Table 2 and Figure 2.a, there are no noticeable differences in the QLTY values between the periods. In particular, by comparing the boxplots for P1 and P3\u2014i.e., same treatment (YW) but different experimental objects\u2014we can observe that these boxplots overlap and the median level in P1 is higher than that in P3 (see also the median values in Table 2). Similarly, we can observe that the boxplots for P2 and P4\u2014i.e., same TDD treatment but different experimental objects\u2014overlap and the median level is higher in P4. Therefore, such slight differences in the QLTY values seem to be due to the experimental objects rather than the retainment of TDD. 
Namely, when the experimental objects are BSK and MRA (i.e., in P1 and P4), the median levels are higher. When comparing TDD and YW, the results in Table 2 do not suggest differences in QLTY values (e.g., on average, QLTY is equal to 60.8176 for TDD, while it is equal to 61.2247 for YW). Figure 2: Boxplots for QLTY (a), PROD (b), and TEST (c) for each period. This outcome is confirmed when we compare P4 (TDD) with P1 (YW) and P2 (TDD) with P3 (YW)\u2014same experimental objects. For instance, the participants in P4 and P1 achieved, on average, similar values for QLTY (58.535 vs. 59.3989) although, when dealing with the same experimental objects, they applied either TDD or YW. The comparison between P2 and P3 leads to a similar observation. PROD. The boxplots in Figure 2.b do not indicate noticeable differences in the PROD values among the periods. Indeed, when passing from P2 to P4, we can observe that the boxplots overlap, but the median level for P4 is higher than for P2. In particular, the medians of the PROD values are equal to 42.8571 and 29.0698 for P4 and P2, respectively. This improvement in the PROD values might be due to the TDD retainment. As for the comparison between P1 and P3, the boxplots are very similar to each other. Therefore, it seems that the knowledge the participants had of TDD (i.e., its retainment) in P3, with respect to P1, did not affect PROD. The results in Table 2 seem to suggest that there is a slight difference in the PROD values between TDD and YW in favor of TDD (e.g., PROD for TDD is equal to 35.2243 on average, while it is equal to 32.5527 for YW). By comparing pairs of periods in which the same experimental objects are used, we can observe that the PROD values in P4 (TDD) are better than those in P1 (YW). Namely, it seems that the participants who applied TDD on BSK and MRA achieved PROD values better than the participants who applied YW. Table 3: Results (i.e., p-values) from the LMM analysis methods for the dependent variables QLTY, PROD, and TEST. Variable Period Group Period:Group; QLTY 0.8837 0.6108 <0.0001\u22c6; PROD 0.7973 0.8225 <0.0001\u22c6; TEST 0.0002\u22c6 0.0617 0.4632. (\u22c6 Statistically significant effect.) This trend is not observed when comparing P2 (TDD) and P3 (YW). For instance, the boxplot for P2 is very similar to that for P3, suggesting that there is no difference with respect to the dependent variable PROD.
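The descriptive statistics reported in Table 2 (mean, median, and SD per Period and per Technique) can be reproduced from a long-format results table with a few lines of pandas; this hedged sketch assumes a made-up DataFrame layout and placeholder values, not the authors' analysis scripts.
# Sketch (assumed pandas API): mean/median/SD of a dependent variable,
# grouped by Period and by Technique, as in Table 2.
import pandas as pd

df = pd.DataFrame({  # placeholder measurements, one row per participant and period
    "Period":    ["P1", "P2", "P3", "P4"] * 2,
    "Technique": ["YW", "TDD", "YW", "TDD"] * 2,
    "QLTY":      [55.0, 62.0, 60.0, 58.0, 70.0, 65.0, 68.0, 72.0],
})

by_period = df.groupby("Period")["QLTY"].agg(["mean", "median", "std"])
by_technique = df.groupby("Technique")["QLTY"].agg(["mean", "median", "std"])
print(by_period, by_technique, sep="\n")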
The comparison between TDD and YW seems to suggest that the TEST values for TDD are higher than those for YW. For instance, the mean values are equal to 8.9667 and 6.4333 for TDD and YW, respectively. By considering only P4 and P1, we can observe a clear improvement in the TEST values in P4 (see the boxplots). Namely, the participants who applied TDD in P4 seem to achieve values for TEST higher than those who applied YW in P1 on the same experimental objects (e.g., the mean values are 10.1 in P4, while 4.9333 in P1). Interestingly, the comparison between P2 (TDD) and P3 (YW) does not confirm the trend previously observed. Namely, it seems that the distributions of the TEST values for P2 (TDD) and P3 (YW) are quite similar (see both boxplots and descriptive statistics), despite the application of either TDD (in P2) or YW (in P3) on the same experimental objects. This outcome can indicate that the TDD retainment influenced the participants who applied YW in P3. 4.2 Inferential Statistics The results (i.e., p-values) from the LMM analysis methods are reported in Table 3. When a p-values is less than \u03b1, we highlighted it with the \u22c6symbol. QLTY. The assumptions of the LMM analysis method for QLTY were both verified, so we did not perform any data transformation. As shown in Table 3, the LMM analysis method does not allow us to reject HN1QLTY, the p-value for Period is 0.8837, namely the effect of Period is not statistically significant. This means that either there is no deterioration nor improvement in the observed time period (i.e., no effect of time period) with respect to QLTY, or that the test does not have enough statistical power to show differences, would they exists. The built LMM also suggests that the effect of Group is not statistically significant, while the interaction between Group and Period is. This interaction is due to the effect of the experimental objects (e.g., whatever the treatment is, the distributions for BSK are higher than those for GOL). Since LMM analysis method for QLTY does not indicate an effect of Period, the effect of Treatment is not statistically significant either. Therefore, we cannot reject HN2QLTY. PROD. The LMM analysis method for PROD needed data transformation since the method assumptions were not satisfied. In particular, we applied a square-root transformation to meet these assumptions. The results in Table 3 show that the effect of Period is not statistically significant. Therefore, we can neither reject HN1PROD nor HN2PROD, indicating that the participants may retain TDD with respect to PROD. Moreover, applying either TDD or YW seems to not affect the PROD values. The LMM also includes a significant effect, namely Group:Period. Again, this significant interaction is due to the effect of the experimental objects. TEST. To apply the LMM analysis method for TEST, we had to transform the data of the dependent variable. In particular, we performed a log transformation so that the assumptions were verified. The LMM analysis suggests that the effect of Period is statistically significant (the p-value is equal to 0.0002). Therefore, we can reject HN1TEST. There is evidence that a significant effect of Period on the number of tests the participants wrote exists. 
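As a rough illustration of the inferential procedure described in Section 3.8 (one linear mixed model per dependent variable, with Period, Group, and their interaction as fixed effects and the participant as random effect, plus a Shapiro-Wilk check on the residuals), the sketch below assumes statsmodels and scipy; the generated data, column names, and the log transformation shown for TEST are placeholders, not the authors' scripts.
# Sketch (assumed statsmodels/scipy APIs): LMM with Period*Group as fixed
# effects and participant as random effect, followed by a residual check.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({  # placeholder long-format data: 6 participants x 4 periods
    "Participant": np.repeat(np.arange(1, 7), 4),
    "Period":      np.tile(["P1", "P2", "P3", "P4"], 6),
    "Group":       np.repeat(["G1", "G2", "G1", "G2", "G1", "G2"], 4),
    "TEST":        rng.poisson(lam=8, size=24) + 1,
})
df["log_TEST"] = np.log(df["TEST"])  # log transformation, as done for TEST

model = smf.mixedlm("log_TEST ~ Period * Group", data=df, groups=df["Participant"])
result = model.fit()
print(result.summary())

# Assumption check: residuals should be approximately normal with mean zero.
w, p = stats.shapiro(result.resid)
print(f"Shapiro-Wilk: W={w:.3f}, p={p:.3f}, mean residual={np.mean(result.resid):.3f}")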
According to the boxplots in Figure 2.c, the significant difference in Period is not due to a deterioration of TEST values for TDD over time\u2014the worst distribution can be observed in P1\u2014therefore, we can conclude that the ability of writing unit tests is retained by developers using TDD. Since we found a significant effect of Period and in accordance with the results from the descriptive statistics (i.e., there is a clear difference in favor of TDD in P4 with respect to YW in P1 on the same experimental objects), we reject HN2TEST. Therefore, we can conclude that there is a significant effect of Technique on the number of tests the participants wrote. 5 DISCUSSION In this section, we discuss the results obtained according to the RQs and present possible practical implications from our research. Finally, we delineate threats that could have affected the validity of our study. 5.1 Answers to Research Questions The data analysis gives some indication that developers retain TDD. In particular, we observed neither deteriorations in the external quality of the solutions developed by the participants nor in their productivity. Moreover, it seems that, with time, there is an improvement in the number of tests written when using TDD. Our results do not suggest differences between TDD and YW with respect to the quality of the implemented solutions, as well as the developer\u2019s productivity. However, who practices TDD tends to write more tests. 5.2 Implications The participants retention of TDD is particularly noticeable in the amount of unit tests written. This is in line with the findings of a cross-sectional study by Erdogmus et al. [6] which pointed out that \fA Longitudinal Cohort Study on the Retainment of Test-Driven Development ESEM, October 2018, Oulu, Finland the number of tests correlates with the ability of novice developers to follow TDD. Our study extends that notion to a longitudinal perspective\u2014TDD helps retain TEST over a period of five months. The motivations for such effect are to be considered for further studies. However, we believe that TDD raises the participants awareness about the importance of writing several (fine-grained) unit tests. Nevertheless, this does not translate in improved QLTY, nor PROD. The latter result contrasts the ones of Latorre [15] which saw, over a period of a month of constant observation, a steady and significant improvement in performance measure similar to our QLTY.7 We conjecture that this can be the case due to the better experience of Latorre\u2019s study participants (i.e., professional developers), furthering the thesis that TDD alone is not a silver bullet but pre-existing skills play a crucial role [4, 10] Based on our findings, software companies that value unit testing (e.g., for creating a regression for continuous integration) should encourage the use of TDD as developers are likely to produce more tests when using such technique. We showed that a small initial investment in training results in retainment of this particular feature on the long term. Likewise, computer science educators should include TDD early in their curricula to install a long-term unittesting mentality in the students. Finally, \u201cExperience with TDD\u201d is a characteristic that researchers should foster when building a sample for studies requiring (novice) participants familiar with unit testing. 
Likewise, when designing experiments where unit testing is desirable, researchers can avoid or limit time spent on training as, at least in our time-frame, such skill is retained by (novice) participants who already have minimal TDD experience. Our results did not show any improvement of TDD over YW, contributing to the null results in TDD research [8, 9, 21]. However, differently from previous attempts, we showed that no effects are observable also when the same subjects are tested again several months later, under similar conditions. Time did not drastically decrement the novices performance when TDD was applied, hinting at the fact that they soon regain familiarity with technique similarly to what the study of Latorre [15] reports for the junior developers in the sample. Although carrying out cohort longitudinal studies\u2014 in particular, with several observations over a long time span\u2014is difficult in SE (e.g., controlling for maturation or learning effects), we put forward the idea that we might not be looking long enough (rather than hard enough) for the claimed effects of TDD to become apparent. As a starting point towards this direction, we recommend longitudinal studies in academia, which allow to follow the \u201ccareer\u201d of students over several years and can achieve a good amount of control (e.g., based on grades, attendance) 5.3 Threats to Validity We discuss the threats that could have affected the validity of the obtained results according to the guidelines by Wohlinet al. [32]. Accordingly, we ranked our threats from the most sensible for the goal of this study to the least one. In particular, being this the first test for a theory of TDD retainment, we prefer to limit threats to internal validity (i.e., make sure that the cause-effect relationships are correctly identified), rather than being in favor of generalization. 7In Latorre [15], all the subjects completed the task\u2014e.g., achieved PROD of 100%. 5.3.1 Threats to Internal Validity. This kind of threat concerns internal factors of the study that could have affected the results. The effect of letting volunteers take part in the study may influence the results because volunteers are generally more motivated [32] (i.e., selection threat). To prevent participants exchanging information during the tasks (i.e., threat of diffusion or treatments imitations), at least two researchers monitored them. We also prevented the diffusion of experimental materials by gathering it at the end of each task. A threat of resentful demoralization might exist. For instance, a participant, who was given a less desirable treatment or task, might not perform as good as they generally would. This threat to validity might have equally affected both TDD and YW. Finally, control over subject maturation was checked by making sure that the students attended the same courses between the first observation and the last one. 5.3.2 Threats to Construct Validity. These threats concern the relationship between theory and observation. The investigated constructs were quantified by means of one dependent variable each. This might affect the results (i.e., threat of mono-method bias). However, we used well-known and widely adopted dependent variables in TDD experiments (e.g., [8]). Although the participants were not informed about the goals of the study, they might guess them and change their behavior accordingly (i.e., threat of hypotheses guessing). 
To mitigate an evaluation apprehension threat, we told the participants that they would not be evaluated on the basis of their performances in the study. 5.3.3 Threats to Conclusion Validity. This kind of threat concerns the relationship between the dependent and independent variables. To mitigate a threat of random heterogeneity of participants, our sample included students with similar backgrounds\u2014i.e., students taking the same courses in the same university with similar development experience. In empirical studies like this one, a threat of reliability of treatment implementation might also exist. For example, a participant might follow TDD more strictly than another one. We did not control for such effect in this study, however, we explicitly reminded the participants to follow the treatment they were assigned to. The treatments might have impacted other constructs which were not observed (e.g., number of refactoring, code complexity). Nevertheless, we focused on the most salient dependent variables according to the literature. Finally, our sample was limited due to difficulty of recruiting participants available for the period of the entire study. 5.3.4 Threats to External Validity. These threats concern the generalizability of the results. The participants in our longitudinal study were students, thus generalizing the obtained results to the population of professional developers poses a threat of interaction of selection and treatment. However, the use of students as participants also implies a number of advantages [3], such as participants with homogeneous background, the possibility to obtain preliminary evidence. In addition, the tasks to be performed did not require a high level of industrial experience, we believe that the use of students as participants could be considered appropriate, as suggested in the literature [3, 12]. The experimental objects might also affect the external validity of the results (i.e., threat of interaction of setting and treatment) as they are not representative of real-world settings. \fESEM, October 2018, Oulu, Finland Fucci et al. On the other hand, simpler tasks, which can be completed in a single exercise session (approximately three hours), allow a better control over the participants. The latter was our preferred trade-off due to the theory-testing nature of this study. 6" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file