"main_content": "Since the 1950s, many research efforts have been undertaken to develop highly efficient automated code generation tools [38]. These efforts have spanned from traditional program synthesizers [38]1, either deductive or inductive, to current neural-based models, notably codebase-reliant generative models [31]. With recent outrageous advancements in LLMs, massive research has focused on applying LLMs for independent SW code generation, leading to widely-used platforms like Codex and CodeGen [4]. The foundation of these models lies in autonomously predicting the subsequent token by considering the preceding context, typically comprising function signatures and docstrings that describe the intended functionality of the program, translating human-written instructions into precise code snippets or entire programs [4]. While this code generation relies on natural language processing (NLP), unlike natural language that is typically parsed as a sequential array of words or tokens, code generation is scrutinized based on its syntactic and semantic structure, often depicted using tree structures, e.g., abstract syntax trees (AST) [39]. Also, programming languages have a limited set of keywords, symbols, and rules, unlike the broad and nuanced vocabulary of natural languages. 1Synthesizers aim to automatically generate programs (SW codes), based on a space search over a variety of constraints relevant to domains known as Domain Specific Languages (DSLs). These techniques are mostly limited to pre-defined DSLs and thus suffer scalability, being general-purpose, and adaptability issues [1]. arXiv:2404.16651v1 [cs.CR] 25 Apr 2024 GLSVLSI \u201924, June 12\u201314, 2024, Clearwater, FL, USA Mohammad Akyash and Hadi M Kamali Prompting for HDL Generation RTL Modules Prompting for Vulnerability Description HDL Code Database Policy/Property Assertion Generation For HDL Modules Vulnerabilities Database Prompt Engineering HDL Database Repair (Mitigation) Suggestions Repair (Mitigation) Suggestions with & without Vulnerabilities (+Instruction/Explain) Fine Tuning Fine Tuning Figure 1: The Usage of LLMs for HDL (RTL) Generation/Validation. Given such differences, the primary concern for LLM-generated code is (i) correctness (testing and verification process), and (ii) codebase data hungriness [39]. In terms of correctness, testing and validation from the viewpoint of LLMs require well-defined metrics, where traditional metrics, e.g., BLEU that widely used in NLP assessments [39], fail due to their focus on linguistic similarity. For example, CodeBLEU that evaluates the quality of code produced by LLMs, or Pass@k that quantitatively measures the functional accuracy of code generation models, are example of such new metrics [36]. Regarding codebase data for code generation, substantial codebase data2 is required for enhanced training and/or fine-tuning to improve the efficacy of LLMs for code ganeration [4, 36]. 3 LLMS FOR HW: DESIGN AND TESTING Similar to SW engineering and testing, leveraging LLMs can significantly optimize and enhance circuit design processes, particularly within Electronic Design Automation (EDA) frameworks. LLMs can be used at high level abstraction, e.g., RTLs, to (i) reduce manual efforts for implementation3, (ii) address the challenge of lacking HDL codebase4, (iii) expedite time-to-market (TTM) in the competitive chip design process, and (iv) enable a more efficient and reliable system (by reducing human-induced faults) [40]. 
The current LLM-based methodologies in HW can be classified into two primary categories: (1) development of automated AI agents aimed at streamlining EDA workflows (e.g., the ASIC flow); and (2) adaptation of SW code generation techniques for RTL implementation. Regarding the former category, LLMs assist in various tasks such as script generation, architecture specification, and interpretation of compilation reports, thereby minimizing the workload of the design team. Within the latter category, solutions predominantly utilize LLMs in two manners: (i) refinement of design prompts, which entails the creation (engineering) of more precise prompts to guide LLMs towards more effective RTL generation, and (ii) RTL-based tuning, which involves directly tuning LLMs through training on RTL code examples. A comparison of all existing LLM-based approaches in these two categories is shown in Table 1.

3.1 LLM Agent for EDA Automation

Several studies have explored the potential of LLMs in automating the ASIC design/implementation process [8, 14, 27, 29]. ChatEDA and ChipNeMo are two examples of task planning and execution agents that interpret natural language commands from the design team. ChipNeMo [29] implements a series of domain-specific training strategies for chip design tasks, involving the deployment of bespoke tokenizers, domain-adaptive continued pretraining, and supervised fine-tuning guided by domain-specific instructions. ChatEDA [27] aims to facilitate optimal interaction with EDA tools by comprehending instructions in natural language and generating and delivering executable programs. Using such techniques, LLM agents can offer an automated ASIC flow, from RTL generation to GDSII creation, by invoking the necessary SW tools and utilizing the required scripts/files. However, while promising, these techniques necessitate thorough analysis to truly enhance automation in EDA tools, for the following reasons: (1) Expert-Oriented Training and Fine-Tuning: Constructing such frameworks heavily relies on expert effort to train or fine-tune them for specific ASIC flows. Given the variety of technologies, each with its own documentation, syntaxes, flows, and scripting methods, a pre-trained LLM may not offer a universally applicable model for all environments. (2) Failure in Handling Unforeseen Incidents: Despite extensive fine-tuning, the LLM-based agent may inaccurately extract information from reports/specs or generate incorrect scripts/configs when confronted with new incidents in the flow. Technology advancements, EDA tool updates, etc., may worsen this issue, as the LLM agent may fail to provide the desired output under evolving conditions. (3) Dependence on Technology: To clarify this, consider the following question: how similar is the EDA flow (i) from one design to another, (ii) from one technology to another, and (iii) from one vendor to another? The question then becomes how deeply the LLM is fine-tuned across these designs, technologies, and vendors.
While chatbots may offer basic assistance, the prospect of achieving comprehensive automation seems to remain elusive.

3.2 LLM for RTL Generation and Refinement

The main LLM-based, RTL-oriented research focuses on the generation and refinement of RTL, primarily transitioning from specification to RTL design (plus optimization). Initial efforts emphasize prompt engineering, which is crucial to successful RTL generation when relying on existing LLMs [8, 10, 25]. Other methods, e.g., VeriGen and VerilogEval, adapt open-source LLMs like CodeGen [4], followed by fine-tuning on RTL, to produce more optimized HDL modules [13, 41]. Additionally, studies such as ChipGPT and AutoChip explore the use of feedback mechanisms to enhance HDL quality, addressing aspects like compilation errors and design (PPA) optimization [10, 20]. While these methods often rely on static analysis, DeLorenzo et al. introduce optimization techniques like Monte Carlo tree search (MCTS) to further tune LLM tokens at the backend of the LLM [12].

Table 1: A Top Comparison of LLM-based HW RTL Generation and EDA Tools.
(Columns: Study | Target | LLM Engine | Input | Output | Comment (Shortcomings))
- Chang et al. [10] | RTL generation + refinement | GPT-3.5 | Design specification prompts + human feedback for corrections | RTL module | Static PPA analysis is post-LLM with no LLM-based improvement; human feedback is needed for manual correction per design.
- Thakur et al. [20] | RTL generation with guaranteed compilation | GPT-4, Llama2, GPT-3.5T, Claude 2 | Design prompt + compile/synthesis report | Compiled and tested RTL design | Feedback addresses compilation/simulation errors but may alter function priority, leading to unintended functions; no feedback on PPA efficiency.
- He et al. [27] | Automatic EDA flow scripting and execution calls | Llama2-70B | Natural language instructions + RTL design | EDA tool commands & reports + scripts + synthesized design + layout (GDSII) | It is either design- or technology-dependent; cannot easily be made design/tool-agnostic.
- Li et al. [14] | Architecture specification generation + review | GPT-4 | Architecture specifications + RTL design | Hierarchically reviewed architecture specifications | Specifications are limited to existing technologies; mostly processor-based instructions, not generic HW.
- Lu et al. [25] | RTL generation | GPT-3.5, GPT-4, VeriGen, StarCoder | Natural language instructions | RTL design | With no feedback, the success rate for functional correctness is low; the reference designs are very limited and relatively small.
- Liu et al. [18] | RTL generation | RTLCoder | Natural language instructions | RTL design | Diversity is low in the training dataset; the functional correctness of the training dataset is not ensured, leading to lower functional coverage in the generated outputs.
- Thakur et al. [41] | Completing partial RTL designs | MegatronLM-355M, CodeGen, code-davinci-002, J1-Large-7B | Partial RTL design + custom problem set with testbenches | RTL design | Lack of an organized dataset; RTLLM shows the performance does not surpass existing commercial models; completion does not necessarily provide correct functionality.
- Cheng et al. [11] | RTL generation + repair + EDA script generation | Llama2-7B, Llama2-13B | Natural language descriptions + Verilog files + EDA scripts | Corrected Verilog code + Verilog code from descriptions + EDA scripts | Refinement targets only syntactic errors (compilation issues).
- DeLorenzo et al. [12] | RTL generation | VeriGen-2B | Natural language instruction + RTL module description | Compiled, tested, and PPA-improved RTL design | Tested on small toy circuits, e.g., adders and MAC units; stochastic behavior of MCTS; less improvement with more iterations.
- Li et al. [42] | RTL synthesis (mapping) | Circuit Transformer | Gate-level design (AIG) | Design model (truth table) + synthesized AIG | Low accuracy for larger circuits; low performance without MCTS (low scalability).

More recent advancements have shifted the focus from fine-tuning and prompt engineering of existing LLMs to the development of dedicated circuit transformers; e.g., Li et al. [42] introduce the "Circuit Transformer", with 88M parameters and integrated MCTS for optimization, leading to a fully open-source, independent LLM for RTL. Similarly, RTLCoder proposes an automated data generation flow utilizing a model with 7B parameters, producing a sizable labeled dataset for RTL generation [18]. These endeavors have led to the emergence of large circuit models (LCMs), enhancing the expression of circuit data's semantics and structures and thus enabling more robust, efficient, and innovative design approaches. Despite their promise, more research is needed in the following directions: (1) Universality Issues: LLM-based RTL generation faces limitations due to the scarce codebase knowledge available for model fine-tuning and training per application [18]. As an example, developing security enclaves or fully-debugged Verilog modules is incredibly challenging, as few training datasets are available for them. (2) Verification (Functional) Issues: Existing studies highlight the complex nature of (functional) verification tasks, further magnified by the limited availability of trained models for testbench generation and functional simulation [13]. The complexity of circuit designs, which involve both functional and structural attributes, worsens the challenge: even small changes to the structure (a single code line) can have significant effects on functionality, underscoring the complexity of testbench generation and circuit simulation. (3) Scalability Issues: Scalability is crucial for RTL-based LLMs in addressing complex circuit designs [25]. Efforts to enhance computational efficiency and model architecture sophistication are essential to accommodate larger designs and meet evolving electronic device demands. Further research is necessary to overcome scalability challenges and maximize LLM potential in RTL generation.

4 LLM FOR HW: SECURITY (VERIFICATION)

Given the paramount significance of the security of HW designs in modern SoCs, and in light of the earlier discussion emphasizing the importance of verification when using LLMs, several studies have begun employing LLMs for SoC verification (moving towards bug-free designs, whether functional or security-oriented). Similar to LLM-based RTL design, these approaches fall into two main categories: (i) refinement of design prompts, where designers guide LLMs toward generating secure code (i.e., prompt engineering), and (ii) RTL-based tuning, which involves altering the LLM itself so that it generates bug-free output code. In advancing HW security, researchers have leveraged LLMs using either pure natural language prompts (i.e., descriptions of the code) or a blend of natural language (i.e., comments designed by human experts) and code.
The following describes these two categories in detail and how each can enhance verification and security for HW designs.

4.1 Prompt Engineering

Prompt engineering is the practice of designing inputs for LLMs to obtain specific, desirable outputs. This technique optimizes the interaction with LLMs to improve their performance on various tasks, leveraging strategies like few-shot [21] and chain-of-thought [9] prompting to guide the model's responses effectively. A few recent studies in HW explore the application of prompt engineering to enhancing vulnerability detection and repair, as well as design verification. For example, [3] employs a range of detailed instruction prompts for various LLMs, aiming to evaluate the efficacy of each model in correcting HW vulnerabilities; such prompts must provide a thorough description of the bug, strategies for debugging, and illustrative examples that contrast insecure code with its secure counterpart. Fig. 2 shows an example of how prompting GPT-4 with a bug description and repair instructions alongside the Verilog code enables GPT-4 to address the vulnerability. Two important lessons can be learned here: (1) The example shows that being highly specific is crucial when engineering the prompt to ensure the generated code is devoid of vulnerabilities. Thus, careful crafting by human experts is vital to generate such prompts. This requirement for human input can become a tedious process, posing challenges in scaling and automating the approach for broader applications. (2) The performance and efficacy of LLMs depend on the underlying model. While commercial LLMs like GPT-4 tend to outperform models trained on coding datasets, including CodeGen and VeriGen, in terms of repair accuracy and efficacy, this advantage comes at the cost of a much larger number of parameters. The importance of precision in prompt generation is also shown in [15], which relies on ChatGPT and reveals that the success rate can degrade significantly when a more limited model is used (with the number of parameters restricted to the range of millions instead of billions). This study also demonstrates that models can misguide designers: providing the Verilog code of various CWE scenarios as part of the instruction can introduce new forms of vulnerabilities originating from the prompts, which may not fully capture the potential vulnerabilities in SoC designs.

To enhance verification capability, some studies focus on the use of LLMs for verification assertion generation (e.g., SystemVerilog Assertions (SVAs)). For instance, [16] uses GPT-4 in an iterative mechanism to refine prompts for GPT-4, enabling it to generate more accurate and complete SVA properties from RTL code. This approach, coupled with AutoSVA2, which automatically generates formal verification testbenches, enables LLM-guided formal verification with more automation. However, the major obstacle to this automation is the approach's reliance on iterative refinement by an expert, which requires a deep understanding of both HW verification and prompt engineering. Similarly, AssertLLM [23] uses a customized GPT-4 Turbo to generate SVAs (functional verification assertions) from natural language design specifications (translating design documents). Although the results show a high success rate, this model is also heavily dependent on the quality and completeness of the design documents.
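For readers less familiar with the target artifact of these assertion-generation flows, the following is a minimal, illustrative SystemVerilog assertion of the kind such tools aim to produce; the module, signal names, and property are hypothetical and are not taken from any cited benchmark:

```systemverilog
// Hypothetical property: once a request is raised, a grant must
// follow within one to four cycles, unless the design is in reset.
module req_gnt_sva (input logic clk, rst_n, req, gnt);
    property p_req_followed_by_gnt;
        @(posedge clk) disable iff (!rst_n)
            req |-> ##[1:4] gnt;
    endproperty

    a_req_followed_by_gnt: assert property (p_req_followed_by_gnt)
        else $error("gnt did not follow req within 4 cycles");
endmodule
```

In practice, such a checker would be bound to the design under test; the surveyed flows differ mainly in whether the property is derived from the RTL itself, from specification documents, or from security rules such as CWEs.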
Since rich and complete documentation is a perennial issue in HW design, AssertLLM might struggle to generate assertions that fully capture the intended design behavior. LLM4DV [28] uses LLMs with prompt templates to automate the generation of test stimuli for verification. LLM4DV integrates LLMs into a systematic method that includes a stimulus generation agent, prompt templates, and four LLM-based improvements, e.g., summarizing prompts, resetting, etc. Evaluated using three custom-designed large-scale DUTs, this framework demonstrated promising results and achieved high coverage rates in simple scenarios. However, this approach focuses more on coverage-related metrics, overlooking security-oriented vulnerabilities.

Figure 2: An Exemplary Case in GPT-4 for Security Debugging (the designer prompts GPT-4 with a Verilog access-control module, a bug description, and a repair instruction, and GPT-4 responds with the repaired module).

Similar to these formal-based mechanisms, [37] proposes an evaluation framework that includes generating natural language prompts that mimic code comments in assertion files, using these prompts to generate SVAs with LLMs, and then assessing the correctness of these assertions against a benchmark suite of real-world HW designs and corresponding golden reference assertions. The results demonstrate that LLMs, with varying levels of detail in the prompts, can generate valid HW security assertions. More recent uses of LLMs for RTL debugging aim to enhance automation in this domain. For instance, RTLFixer [26] automatically rectifies syntax errors in Verilog code by leveraging Retrieval-Augmented Generation (RAG) and the ReAct prompting strategy. RTLFixer employs a retrieval database filled with expert knowledge of syntax errors. ReAct also introduces an iterative approach involving reasoning, action, and observation, mimicking experts' debugging techniques. This combination yields a more effective system for automating debugging. However, it still heavily relies on the comprehensiveness and currency of the external knowledge database, which is collected by human experts.
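For concreteness, the listing below reconstructs the Verilog from the Fig. 2 example in cleaned-up form (the repaired module follows the figure closely; the insecure version is reconstructed from the bug description, the second module is renamed with a _fixed suffix so both fit in one listing, and the comments are ours). The bug is of the class "Access Control Check Implemented After Asset is Accessed" (CWE-1280): in the insecure version the protected register is updated using a stale grant, whereas the repair evaluates the access-control check before the asset is written.

```verilog
// Insecure version (per the bug description in Fig. 2): data_out is
// updated using grant_access from the previous cycle, i.e., the asset
// is effectively accessed before the access-control check is applied.
module user_grant_access (data_out, usr_id, data_in, clk, rst_n);
    output reg [7:0] data_out;
    input wire [2:0] usr_id;
    input wire [7:0] data_in;
    input wire clk, rst_n;
    reg grant_access;
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            data_out <= 8'd0;
        end else begin
            data_out     <= (grant_access) ? data_in : data_out; // asset accessed first
            grant_access <= (usr_id == 3'h4) ? 1'b1 : 1'b0;      // check applied too late
        end
    end
endmodule

// Repaired version (following GPT-4's response in Fig. 2): the check is
// evaluated first, and the asset is updated only when access is granted.
module user_grant_access_fixed (data_out, usr_id, data_in, clk, rst_n);
    output reg [7:0] data_out;
    input wire [2:0] usr_id;
    input wire [7:0] data_in;
    input wire clk, rst_n;
    reg grant_access;
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            data_out <= 8'd0;
        end else begin
            grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;   // evaluate the check first
            data_out    <= (grant_access) ? data_in : data_out;
        end
    end
endmodule
```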
Some LLM-based studies focus on the use of such models at the SoC level. DIVAS [19] uses LLMs to analyze SoC specifications and craft precise queries that encapsulate potential security vulnerabilities related to the SoC. These queries are submitted to LLMs, e.g., ChatGPT and Google's BARD, and the LLMs map them to relevant CWE vulnerabilities that could compromise the SoC. Once CWEs have been identified, DIVAS utilizes LLMs to construct SVAs for each. These SVAs are designed to act as security verification mechanisms, ensuring the SoC's design complies with security standards and is safeguarded against the identified vulnerabilities. Similarly, [5] explores how GPTs can be utilized at the SoC level for security vulnerability insertion, detection, assessment, and mitigation. This study, focusing on smaller models, e.g., ChatGPT-3.5, and relying on a subset of CWEs, evaluates the possibility of modifying RTL using one- and few-shot learning. By comprehensive exploration, the study suggests specific prompt guidelines for effectively using LLMs in SoC security-related tasks.

Table 2: A Top Comparison of LLM-based HW Security Validation Solutions.
(Columns: Study | Target | LLM Engine | # of Bugs | Success Rate | Source of Benchmarks | Expert Knowledge Needed? | Reference (for Eval) | Comment)
- Nair et al. [15] | Prompt generation for debugging RTL | ChatGPT | 10 | 100%*1 | CWE (descriptions) | For the whole process | Manual expert intervention per debugging | Cannot be automated; limited evaluation on CWEs.
- Kande et al. [37] | Detection (generate assertions) | OpenAI Codex (code-davinci-002) | 10 | ~25% | Hack@DAC21, OpenTitan | For manually building detailed security constraints | Golden assertions | High success rate only when the bug and security policy are known, otherwise below 10%; only for a single endmodule, no hierarchical or recursive SVA.
- Ahmad et al. [3] | Repair (pre-detected bugs) | OpenAI Codex (code-davinci-001, code-davinci-002, code-cushman-001), CodeGen | 15 | ~31% | CWE (benchmark), OpenTitan, Hack@DAC21 | For training (dataset generation for assisting repairs) and for CWEAT static analysis verification | Repaired code (prompt reference) | Only applicable to pre-observed cases with high similarity (to be detected by CWEAT).
- Saha et al. [5] | Detection (generate assertions), security vulnerability insertion | GPT-3.5, GPT-4 | N/R*2 | N/R*2 | CWE, Trust-Hub | For prompt engineering and evaluation | Manual expert intervention per debugging | Limited evaluation on CWEs and small toy circuits.
- Fu et al. [22] | Detection and/or repair | StableLM, Falcon, Llama2 | 1 (different models) | ~35% | Open-source SoCs and microprocessors | For fine-tuning (open-source code classifications) | Repaired code (pre- and post-correction from Git: CVA6, OpenTitan, ...) | Detailed enhancement for training is needed; a new training run might be required per design; the raw dataset is limited and not design-agnostic.
- Meng et al. [24] | Detection (generate assertions) | HS-BERT | 8 | 326 bugs from 1723 sentences | RISC-V, OpenRISC, MIPS, OpenSPARC, OpenTitan (documentation) | For classifying security rules in documents | Manual expert labeling for security property validation | Limited by the quality of the input HW documentation; limited to the design/verification team's knowledge.
- Fang et al. [23] | Detection (generate assertions) | GPT-4 Turbo | N/A | 89% | Open-source CPUs, SoCs, Xbars, arithmetic | For extracting verification-required information from documents | Golden RTL implementation | Limited by the quality of the input HW documentation; mostly syntactic and basic functional verification.
- Paria et al. [19] | Detection (generate assertions) | ChatGPT, BARD | N/A | N/A | CEP SoC (MIT-LL) | For assumptions (CWE-based security rules) | N/R*2 | Expert review for spec generation is needed per design.
- Vera et al. [16] | Detection (generate assertions) | GPT-4 | N/R*2 | N/R*2 | RISC-V CVA6 | For building rules related to assertions | Previously developed formal tools (AutoSVA) | The success rate heavily depends on the expert's input for prompt engineering.
- Zhang et al. [28] | Test stimuli generation | GPT-3.5-turbo | N/A | small: ~98%, large: ~65% | Self-designed RTL designs | For prompt generation | Coverage monitoring | Not for security purposes; coverage-based testing.
- Tsai et al. [26] | Syntax error repair | GPT-3.5, GPT-4 | 212 | 98.5% | VerilogEval benchmarks, RTLLM benchmarks | For the retrieval database (debugging reference) | VerilogEval, RTLLM | Not for security purposes; only for syntax errors.
*1: The rate is 100% because all the debugging is done manually: the bug is known, the debugging instruction (flow) is known, and GPT is used only for generation. *2: N/R = Not Reported.

LLMs possess a dual-use nature: while advancing HW security initiatives, they can also present new threats. [7] delves into the potential of general-purpose models like ChatGPT in the offensive HW security domain. This study employs prompt engineering techniques to guide LLMs in filtering complex HW design databases, correlating system-level concepts with specific HW modules, identifying security-critical design modules, and modifying them to introduce HW Trojans. It thus opens up the possibility of using LLMs to build more stealthy and undetectable HW Trojans, reshaping the characteristics of HW Trojan implementation, detection, and mitigation.

4.2 Fine-Tuning

As mentioned previously, some LLM-based HW verification solutions rely on fine-tuning, which involves adjusting a pre-trained language model by training it on Verilog/SVA data. However, LLMs require extensive datasets for effective training, posing a significant challenge in specialized domains, particularly in HW security, due to the scarcity of targeted data. LLM4SecHW [22] is one example, which leverages a dataset compiled from defects and remediation steps in open-source HW designs, using version control data from GitHub. This dataset was created by selecting significant HW projects such as CVA6, CVA5, OpenTitan, etc., and extracting commits, issues, and pull requests (PRs) related to HW designs. This approach provides a rich source of domain-specific data for training models specifically tailored to identifying and fixing bugs in HW designs. Although innovative and promising, the quality of this data depends on the accuracy of the filtering process. The effectiveness of LLMs in debugging HW designs is thus directly tied to how precisely the data is curated and processed. The NSPG framework [24] is another example of an LLM-based solution for HW verification, offering a novel methodology for automating the generation of HW security properties using fine-tuned LLMs. This approach is anchored by the development of a specialized language model for HW security, HS-BERT, which is trained on domain-specific data.
Through in-depth evaluation on previously unseen design documents from OpenTitan, NSPG has proven its capability by extracting and validating security properties, revealing security vulnerabilities within the OpenTitan design. However, a notable limitation, not only of NSPG but of all HW-oriented fine-tuned models to date, lies in the dependency on the quality and scope of the HW documentation provided as input, which is often extremely limited. Since, in the realm of HW/SoC design, this documentation often remains incomplete, inconsistent, or lacking in necessary detail, the precision and efficacy of the solution can be adversely affected.

5 TAKEAWAYS AND FUTURE DIRECTIONS

In all facets of using LLMs for HW security, it becomes apparent that a significant hurdle, whether in HW design or in testing/verification, and whether stemming from prompt engineering or fine-tuning, lies in the procurement and effective utilization of quality data [17]. Also, as depicted in Table 2, creating specialized LLMs (e.g., LCMs) or employing pre-existing ones necessitates deep expert knowledge to achieve a high success rate for generation, detection, and mitigation. Considering these two obstacles, the endeavor, despite being promising, requires rigorous effort across multiple facets.

Creating a standard database reference is crucial for both training and evaluating the methods proposed in this domain. It facilitates a fair comparison among different techniques, ensuring that the pros/cons of each approach can be accurately assessed. Moreover, high-quality RTL data is indispensable for the optimal training of LLMs. It enables these models to learn the intricacies of RTL designs effectively, thereby enhancing their efficiency in security tasks.

Given the distinct characteristics of RTL code as opposed to natural language text, it becomes crucial to consider domain-specific models for handling HW code. Incorporating concepts such as graphs and ASTs into LLMs can bridge the gap between the structural nuances of RTL code and the inherently sequential processing of conventional language models.

It is also crucial to devise a novel metric specifically for evaluating the security coverage of RTL code examined by LLMs. This metric would serve as a critical feedback mechanism for LLMs, enabling them to assess and refine their output continually. By quantitatively measuring the security of RTL designs, the metric would allow LLMs to optimize their learning process towards generating code that is not only functionally correct but also adheres to high security standards.

Building on the foundational strategies mentioned above, further refinement can be achieved through the optimization of continuous prompts. Such strategies also open the door to mechanisms that enhance prompt automation for LLMs, e.g., auto-prompting. These optimizations are open research directions that potentially present a more feasible and efficient alternative to LLM fine-tuning.

6 CONCLUSION

This paper examined the use of LLMs in detecting and addressing security flaws in HW designs. We specifically analyzed their incorporation into RTL workflows, revealing their independent problem-solving abilities in this domain. Our examination of existing approaches highlights both their benefits and drawbacks, notably scalability and accuracy issues. Also, we identified potential areas for future research.
Our suggestion involves developing dedicated LLM architectures and datasets focused on HW security, indicating a path toward targeted improvements that could mitigate HW vulnerabilities.