{ "url": "http://arxiv.org/abs/2404.16587v1", "title": "Understanding Privacy Risks of Embeddings Induced by Large Language Models", "abstract": "Large language models (LLMs) show early signs of artificial general\nintelligence but struggle with hallucinations. One promising solution to\nmitigate these hallucinations is to store external knowledge as embeddings,\naiding LLMs in retrieval-augmented generation. However, such a solution risks\ncompromising privacy, as recent studies experimentally showed that the original\ntext can be partially reconstructed from text embeddings by pre-trained\nlanguage models. The significant advantage of LLMs over traditional pre-trained\nmodels may exacerbate these concerns. To this end, we investigate the\neffectiveness of reconstructing original knowledge and predicting entity\nattributes from these embeddings when LLMs are employed. Empirical findings\nindicate that LLMs significantly improve the accuracy of two evaluated tasks\nover those from pre-trained models, regardless of whether the texts are\nin-distribution or out-of-distribution. This underscores a heightened potential\nfor LLMs to jeopardize user privacy, highlighting the negative consequences of\ntheir widespread use. We further discuss preliminary strategies to mitigate\nthis risk.", "authors": "Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Large language models [10, 27] have garnered significant attention for their exceptional capabilities across a wide range of tasks like natural language generation [7, 37], question answering [35, 55], and sentiment analysis [5, 52]. Nonetheless, it\u2019s observed that large language models can confi- dently assert non-existent facts during their reasoning process. For example, Bard, Google\u2019s AI chatbot, concocted information in the first demo that the James Webb Space Telescope had taken the first pictures of a planet beyond our solar system [12]. Such a halluci- nation problem [31, 54] of large language models is a significant barrier to artificial general intelligence [22, 44]. A primary strat- egy for tackling the issue of hallucinations is to embed external knowledge in the form of embeddings into a vector database [19, 23], making them accessible for retrieval augmented generation by large language models [6, 13]. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Conference\u201917, July 2017, Washington, DC, USA \u00a9 2024 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn An embedding model [43, 49] encodes the original objects\u2019 broad semantic information by transforming the raw objects (e.g., text, image, user profile) into real-valued vectors of hundreds of dimen- sions. 
The advancement of large language models enhances their ability to capture and represent complex semantics more effectively, such that an increasing number of businesses (e.g., OpenAI [40] and Cohere [1]) have launched their embedding APIs based on large language models. Since embeddings are simply real-valued vectors, it is widely believed that it is challenging to decipher the semantic information they contain. Consequently, embeddings are often viewed as secure and private, as noted by [50], leading data owners to be less concerned about safeguarding the privacy of embeddings compared to raw external knowledge. However, in recent years, multiple studies [30, 38, 50] have highlighted the risk of embeddings compromising privacy. More specifically, pre-trained LSTM (Long Short-Term Memory) networks [25] or other language models can recover parts of the original texts and author information from text embeddings generated by open-source embedding models. Although current studies have exposed security weaknesses in embeddings, the effects of large language models on the privacy of these embeddings have not been fully explored. A pressing issue is whether LLMs\u2019 emergent capabilities enable attackers to more effectively decipher sensitive information from text embeddings. This issue is driven not only by the proliferation of large language models but also by the availability of embedding APIs based on LLMs, which permits attackers to gather numerous text-embedding pairs to build their attack models. To this end, we establish a comprehensive framework that leverages a large language model (LLM) to gauge the potential privacy leakage from text embeddings produced by an open-source embedding model. From a security and privacy perspective, the LLM serves as the attacker and the embedding model acts as the target, while the goal is to employ the attack model to retrieve sensitive and confidential information from the target model. Specifically, our approach begins with fine-tuning attack models to enable text reconstruction from the outputs of the target model. Following this, we assess the privacy risks of embeddings via two types of attack scenarios. On the one hand, we recover the texts from their embeddings in both in-distribution and out-of-distribution scenarios. On the other hand, we identify certain private attributes of various entities in the original text (such as birthdays, nationalities, criminal charges, etc.) and predict these attributes from the text embeddings. This prediction is determined by the attribute that exhibits the highest cosine similarity between the text embedding and the corresponding attribute embedding. Consequently, this method does not necessitate training with supervised data. Should the target embedding model decline to generate embeddings for attributes described by extremely brief texts (1-2 words) out of embedding-stealing concerns, we introduce an external embedding model that acts as a proxy to project the original text and the attribute value into the same embedding space. Specifically, this external model is tasked with embedding the attribute and the text reconstructed by the attack model, the latter being derived from text embeddings produced by the target embedding model.
The evaluation of text reconstruction reveals that 1) a larger attack language model, when fine-tuned with a sufficient amount of training data, is capable of more accurately reconstructing texts from their embeddings in terms of metrics like BLEU [41], regardless of whether the texts are in-distribution or out-of-distribution; 2) in-distributed texts are more readily reconstructed than out-of- distributed texts, with the reconstruction accuracy for in-distributed texts improving as the attack model undergoes training with more data; 3) the attack model can improve the reconstruction accuracy as the expressiveness of the target embedding models increases. The evaluation of attribute prediction demonstrates that 1) at- tributes can be predicted with high accuracy across various domains, including encyclopedias, news, medical, and legislation. This means the attacker is capable of inferring details like a patient\u2019s health condition, a suspect\u2019s criminal charges, and an individual\u2019s birthday from a set of seemingly irrelevant digital vectors, highlighting a significant risk of privacy leakage; 2) generally speaking, enlarg- ing the scale of the external/target embedding model substantially enhances the accuracy of attribute prediction; 3) when the target model denies embedding services for very short texts, the most effective approach using reconstructed text by the attack model can achieve comparable performance to using original text, when the target model and the external embedding model are configured to be the same. From the experiments conducted, we find that knowledge repre- sentations merely through numerical vectors encompass abundant semantic information. The powerful generative capability of large language models can continuously decode this rich semantic infor- mation into natural language. If these numerical vectors contain sensitive private information, large language models are also capa- ble of extracting such information. The development trend of large language models is set to increase these adverse effects, underscor- ing the need for our vigilance. Our research establishes a foundation for future studies focused on protecting the privacy of embeddings. For instance, the finding that accuracy in text reconstruction di- minishes with increasing text length indicates that lengthening texts may offer a degree of privacy protection. Furthermore, the ability of the attack model to reconstruct out-of-distributed texts points towards halting the release of original texts associated with released embeddings as a precaution.", "main_content": "Table 1: Sizes of attack models and embedding models Attack Model GPT2 GPT2-Large GPT2-XL #parameters 355M 744M 1.5B Target Model SimCSE BGE-Large-en E5-Large-v2 #parameters 110M 326M 335M We employ pre-trained GPT2 [45] of varying sizes as the attacking model to decipher private information from embeddings produced by the target embedding models such as SimCSE [21], BGE-Large-en [53], and E5-Large-v2 [51]. We hypothesize that larger embedding models, due to their capacity to capture more information, are more likely to be exposed to a greater privacy risk. Consequently, all these models are designated as the target models, and their numbers of parameters are detailed in Table 1. It\u2019s important to note that we treat the target model as a black box, meaning we do not have access to or knowledge of its network architecture and parameters. Figure 1 showcases the fine-tuning process for the attack model. 
Initially, the example text \u201cDavid is a doctor.\u201d is inputted into the target embedding model to generate its respective embedding. This embedding is then used as the input for the attack model, which aims to reconstruct the original text based solely on this embedding. An EOS (End-of-Sentence) token is appended to the text embedding to signal the end of the embedding input. The attacker\u2019s training goal is to predict the t-th token of the original text based on text embedding and the preceding (t-1) tokens of the original text. In the testing phase, the attacker employs beam search [18] to progressively generate tokens up to the occurrence of the EOS. Once the attack model has been fine-tuned, we evaluate the privacy risks of embeddings through two distinct attack scenarios: text reconstruction and attribute prediction. Target Embedding Model Sentence Embedding [CLS] Attack Model David is a doctor . [EOS] [EOS] David is a doctor . [EOS] David is a doctor . Training Stage Figure 1: The fine-tuning of the foundation attack model. Initially, the attacker queries the target embedding model to convert the collected text into text embeddings. To signify the completion of the embedding input, an EOS (Endof-Sentence) token is appended to the text embedding. Next, the attacker selects the pre-trained GPT2 model as the attack model and uses the collected text and corresponding text embeddings as a dataset to train the attack model. When a text embedding is input, the attack model is trained to sequentially reconstruct the related original text. 2.2 Evaluation of Text Reconstruction Settings For each text in the test set, we reconstruct it using the attack model based on its embedding generated by the target model. To evaluate the reconstruction accuracy, we employ two metrics: BLEU (Bilingual Evaluation Understudy) [41] and ROUGE (Recall-Oriented Understudy for Gisting Evaluation) [33]. Specifically, we utilize BLEU-1 and ROUGE-1, which are based solely on unigrams, as they yield better results compared to BLEU and ROUGE based on other n-grams [4]. These metrics gauge the similarity between the Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA Table 2: Reconstruction attack performance against different embedding models on the wiki dataset. The best results are highlighted in bold. \u2217represents that the advantage of the best-performed attack model over other models is statistically significant (p-value < 0.05). 
Training data wiki-small Wiki-large Wiki-xl Target model Attack model BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 SimCSE GPT2 0.3184\u00b10.0010 0.3212\u00b10.0010\u2217 0.4512\u00b10.0010\u2217 0.4961\u00b10.0011\u2217 0.4699\u00b10.0016 0.5256\u00b10.0010 GPT2_large 0.2996\u00b10.0014 0.2913\u00b10.0012 0.4349\u00b10.0013 0.4678\u00b10.0009 0.5293\u00b10.0010 0.5930\u00b10.0009 GPT2_xl 0.3196\u00b10.0011\u2217 0.3112\u00b10.0013 0.4455\u00b10.0015 0.4833\u00b10.0010 0.5331\u00b10.0011\u2217 0.5987\u00b10.0007\u2217 BGE-Large-en GPT2 0.3327\u00b10.0014\u2217 0.3288\u00b10.0011\u2217 0.4173\u00b10.0016 0.4483\u00b10.0009 0.4853\u00b10.0011 0.5337\u00b10.0016 GPT2_large 0.2935\u00b10.0013 0.2783\u00b10.0011 0.4446\u00b10.0011 0.4788\u00b10.0006 0.5425\u00b10.0012 0.5998\u00b10.0010 GPT2_xl 0.3058\u00b10.0019 0.3043\u00b10.0012 0.4689\u00b10.0011\u2217 0.5057\u00b10.0007\u2217 0.5572\u00b10.0008\u2217 0.6151\u00b10.0007\u2217 E5-Large-v2 GPT2 0.3329\u00b10.0013\u2217 0.3341\u00b10.0012\u2217 0.4838\u00b10.0005 0.5210\u00b10.0008 0.5068\u00b10.0016 0.5522\u00b10.0014 GPT2_large 0.3093\u00b10.0009 0.2875\u00b10.0012 0.4700\u00b10.0011 0.4990\u00b10.0010 0.5679\u00b10.0011 0.6220\u00b10.0011 GPT2_xl 0.3083\u00b10.0013 0.3017\u00b10.0013 0.4993\u00b10.0013\u2217 0.5274\u00b10.0011\u2217 0.5787\u00b10.0007\u2217 0.6378\u00b10.0009\u2217 original and reconstructed texts. Given that the temperature setting influences the variability of the text produced by GPT, a non-zero temperature allows for varied reconstructed texts given the same text embedding. We calculate the reconstruction accuracy across 10 trials to obtain mean and standard error for statistical analysis. Based on these outcomes, we compare the performance of various attack and target configurations using a two-sided unpaired t-test [16]. The evaluation is conducted on seven datasets, including wiki [47], wiki-bio [29], cc-news [24], pile-pubmed [20], triage [32], cjeu-terms [9], and us-crimes [9]. Details and statistics for these datasets are presented in Table 5. Results of In-Distributed Texts We create three subsets from the wiki dataset of varying sizes (i.e., wiki-small, wiki-large, and wiki-xl) and use these subsets to finetune the attack models, resulting in three distinct versions of the attack model. The performance of these attack models is then assessed using held-out texts from the wiki dataset. The experimental results presented in Table 2 illustrate that the size of the training datasets and the models have a considerable influence on the reconstruction accuracy. To elaborate, regardless of the attack model employed, it\u2019s found that larger embedding models, such as BGE-Large-en and E5Large-v2, enable more effective text reconstruction compared to others like SimCSE. This is attributed to the strong expressivity of the large target embedding model, which allows it to retain more semantic information from the original text, proving beneficial for the embedding\u2019s application in subsequent tasks. Moreover, provided that the attack model is adequately fine-tuned and the embedding model is expressive enough, the accuracy of text reconstruction improves as the size of the attack model increases. This improvement is reflected in the table\u2019s last two columns, showing higher accuracy as the attack model progresses from GPT2 to GPT2_large, and finally to GPT2_xl, attributed to the improved generative capabilities of larger models. 
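As a concrete reference for the evaluation protocol described in the settings above (BLEU-1/ROUGE-1 over 10 trials, compared with a two-sided unpaired t-test), the following is a minimal sketch. The brevity penalty of BLEU-1 is omitted, whitespace tokenization stands in for the models' tokenizers, the score arrays are placeholder numbers rather than results from the paper, and scipy is an assumed dependency; the formal metric definitions are given in Section 4.3.

```python
# Sketch: unigram overlap metrics and significance testing over repeated trials.
from collections import Counter
from scipy.stats import ttest_ind

def bleu1(reference: str, candidate: str) -> float:
    """Clipped unigram precision (brevity penalty omitted in this sketch)."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum(min(cand[t], ref[t]) for t in cand)
    return overlap / max(sum(cand.values()), 1)

def rouge1(reference: str, candidate: str) -> float:
    """Unigram recall with respect to the reference (original) text."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum(min(cand[t], ref[t]) for t in ref)
    return overlap / max(sum(ref.values()), 1)

# Mean BLEU-1 of two attack configurations over 10 trials each (placeholder values).
scores_a = [0.531, 0.533, 0.532, 0.534, 0.530, 0.533, 0.531, 0.532, 0.534, 0.533]
scores_b = [0.527, 0.529, 0.528, 0.530, 0.528, 0.529, 0.527, 0.528, 0.530, 0.529]
t_stat, p_value = ttest_ind(scores_a, scores_b)  # two-sided, unpaired by default
print(p_value < 0.05)  # significance at the 0.05 level, as reported in the tables
```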
Additionally, adequately fine-tuning the large language models is a prerequisite for their effectiveness in text reconstruction tasks. When the embeddings are less informative, fine-tuning the attack model demands a larger amount of training data. This is highlighted by the lesser performance of GPT2_xl compared to GPT2 when fine-tuning with wiki-small, which reverses after fine-tuning with Wiki-xl. Moreover, GPT2_xl outperforms GPT2 in reconstructing text from SimCSE\u2019s embeddings as the fine-tuning dataset shifts from \u201cWiki-large\" to \u201cWiki-xl\". To summarize, a larger attack model, when fine-tuned with an increased amount of training data, is capable of more accurately reconstructing texts from the embeddings generated by target embedding models with higher expressivity. Hence, a straightforward approach for safeguarding privacy involves not disclosing the original dataset when publishing its embedding database. Nonetheless, it remains to be investigated whether an attack model, fine-tuned on datasets with varying distributions, would be effective. Results of Out-of-Distributed Texts To address the unresolved question, we assume that the attacker model is trained on the Wiki dataset with more rigorous descriptions of world knowledge, yet the texts used for testing do not originate from this dataset. This implies that the distribution of the texts used for testing differs from that of the texts used for training. We assess the reconstruction capability of this attack model on sample texts from six other datasets: wiki_bio, cc_news, pile_pubmed, triage, us_crimes, and cjeu_terms. The results presented in Table 3 show that the attack model retains the capability to accurately reconstruct texts from the embeddings, even with texts derived from different distributions than its training data. In greater detail, the best reconstruction accuracy of the attack model on texts from 5 out of 6 datasets is equal to or even exceeds that of a model fine-tuned on wiki-small, a relatively small subset of Wikipedia. As a result, if we release embeddings for the wiki_bio, cc_news, pile_pubmed, us_crimes, and cjeu_terms datasets, an attack model fine-tuned on the Wiki-xl dataset can extract semantic information from them with relatively high confidence. This suggests that simply withholding the original raw data does not prevent an attacker from reconstructing the original text information from their released embeddings. To understand which kind of text data is more easily recovered from text embedding based on the attack model fine-tuned with Wiki-xl, we also analyze the similarity between the six evaluation datasets and Wiki based on previous works [28]. The results reported in figure 2 show that texts from evaluation datasets with higher similarity to the training data are reconstructed more accurately. To elaborate, Wiki-bio, which compiles biographies from Wikipedia, shares the same origin as the training dataset. Despite covering different content, the language style is very similar. Consequently, the quality of the attack\u2019s text reconstruction for this Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen Table 3: Reconstruction attack performance on different datasets. The best results are highlighted in bold. \u2217represents that the advantage of the best-performed attack model over other models is statistically significant (p-value < 0.05). 
Test dataset wiki_bio cc_news pile_pubmed Target model Attack model BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 SimCSE GPT2 0.5428\u00b10.0025 0.5596\u00b10.0022 0.3881\u00b10.0012 0.4487\u00b10.0011 0.3623\u00b10.0015 0.3976\u00b10.0014 GPT2_large 0.5859\u00b10.0011 0.6272\u00b10.0015 0.4314\u00b10.0015 0.5020\u00b10.0014 0.4054\u00b10.0009 0.4427\u00b10.0011 GPT2_xl 0.5878\u00b10.0012\u2217 0.6329\u00b10.0016\u2217 0.4355\u00b10.0010\u2217 0.5084\u00b10.0008\u2217 0.4133\u00b10.0013\u2217 0.4505\u00b10.0010\u2217 BGE-Large-en GPT2 0.4773\u00b10.0025 0.5327\u00b10.0020 0.3906\u00b10.0014 0.4314\u00b10.0013 0.3297\u00b10.0013 0.3581\u00b10.0011 GPT2_large 0.5497\u00b10.0018 0.6015\u00b10.0009 0.4339\u00b10.0009 0.4867\u00b10.0012 0.3819\u00b10.0013 0.4074\u00b10.0014 GPT2_xl 0.5652\u00b10.0030\u2217 0.6200\u00b10.0015\u2217 0.4480\u00b10.0008\u2217 0.5038\u00b10.0006\u2217 0.3955\u00b10.0019\u2217 0.4218\u00b10.0017\u2217 E5-Large-v2 GPT2 0.5312\u00b10.0015 0.5532\u00b10.0014 0.4065\u00b10.0009 0.4428\u00b10.0010 0.3673\u00b10.0012 0.3995\u00b10.0009 GPT2_large 0.5695\u00b10.0014 0.6206\u00b10.0017 0.4521\u00b10.0013 0.5006\u00b10.0013 0.4174\u00b10.0006 0.4523\u00b10.0007 GPT2_xl 0.5823\u00b10.0012\u2217 0.6354\u00b10.0017\u2217 0.4645\u00b10.0014\u2217 0.5173\u00b10.0015\u2217 0.4316\u00b10.0009\u2217 0.4683\u00b10.0009\u2217 Test dataset triage us_crimes cjeu_terms Target model Attack model BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 BLEU-1 ROUGE-1 SimCSE GPT2 0.0932\u00b10.0007 0.1555\u00b10.0010 0.3092\u00b10.0006 0.3238\u00b10.0008 0.3485\u00b10.0010 0.3646\u00b10.0009 GPT2_large 0.1188\u00b10.0006 0.1756\u00b10.0006 0.3268\u00b10.0006\u2217 0.3406\u00b10.0003 0.3755\u00b10.0012 0.3954\u00b10.0008\u2217 GPT2_xl 0.1299\u00b10.0006\u2217 0.2004\u00b10.0010\u2217 0.3226\u00b10.0005 0.3429\u00b10.0006\u2217 0.3739\u00b10.0007\u2217 0.3914\u00b10.0006 BGE-Large-en GPT2 0.0828\u00b10.0007 0.1072\u00b10.0008 0.2705\u00b10.0008 0.2834\u00b10.0010 0.3271\u00b10.0008 0.3378\u00b10.0009 GPT2_large 0.1376\u00b10.0008\u2217 0.1659\u00b10.0009\u2217 0.2755\u00b10.0008 0.3155\u00b10.0004 0.3639\u00b10.0016\u2217 0.3785\u00b10.0014\u2217 GPT2_xl 0.1232\u00b10.0008 0.1640\u00b10.0007 0.2775\u00b10.0008\u2217 0.3207\u00b10.0006\u2217 0.3522\u00b10.0012 0.3794\u00b10.0012 E5-Large-v2 GPT2 0.1451\u00b10.0007 0.2265\u00b10.0008 0.2825\u00b10.0009 0.2938\u00b10.0010 0.3418\u00b10.0018 0.3668\u00b10.0016 GPT2_large 0.2272\u00b10.0004\u2217 0.3172\u00b10.0007\u2217 0.3066\u00b10.0007 0.3308\u00b10.0006 0.3621\u00b10.0013 0.3980\u00b10.0012 GPT2_xl 0.2230\u00b10.0008 0.3177\u00b10.0007 0.3164\u00b10.0007\u2217 0.3459\u00b10.0005\u2217 0.3688\u00b10.0009\u2217 0.4101\u00b10.0011\u2217 dataset is the highest. In other words, simply withholding the original text associated with embeddings does not adequately safeguard sensitive information from being extracted, as fine-tuning the attack model with datasets that are similar in terms of language style or content can elevate the risk of privacy breaches. 0.5 0.6 0.7 0.8 Similarity 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 Best BLEU-1 wiki_bio cc_news pile_pubmed triage us_crimes cjeu_terms 0.5 0.6 0.7 0.8 Similarity 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 Best ROUGE-1 wiki_bio cc_news pile_pubmed triage us_crimes cjeu_terms Figure 2: The similarity between the evaluation datasets and the Wiki dataset v.s. the best reconstruction performance. 
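The similarity on the horizontal axis of Figure 2 follows the character 4-gram comparison of [28] described in Section 4.3 (dataset similarity calculation). A minimal sketch is given below; the toy sentences are placeholders, and for brevity the feature set is built from the toy data rather than from the 5,000 most frequent English 4-grams used in the paper.

```python
# Sketch: dataset similarity via character 4-gram frequency vectors + Spearman correlation.
from collections import Counter
from scipy.stats import spearmanr

def char_ngrams(text: str, n: int = 4):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def feature_vector(dataset, feature_set):
    """Count how often each 4-gram of the fixed feature set occurs in the dataset."""
    counts = Counter()
    for sentence in dataset:
        counts.update(char_ngrams(sentence))
    return [counts[f] for f in feature_set]

def dataset_similarity(d1, d2, feature_set):
    corr, _ = spearmanr(feature_vector(d1, feature_set), feature_vector(d2, feature_set))
    return corr  # in [-1, 1]; higher means the two datasets are more similar

# Toy usage with a feature set derived from the toy data itself (placeholder).
wiki_like = ["David Smith is an American physician and politician.",
             "The museum was founded in 1955 in Berlin."]
news_like = ["The city council voted on the new budget on Tuesday.",
             "Officials said the museum will reopen in Berlin next year."]
feature_set = sorted({g for s in wiki_like + news_like for g in char_ngrams(s)})
print(dataset_similarity(wiki_like, news_like, feature_set))
```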
The disclosure of even a small collection of original texts significantly amplifies the risk, as illustrated in figure 3 which presents results from the pile-pubmed dataset where the attack model undergoes further fine-tuning with these original texts. The availability of more original texts linked to the embedding directly correlates with an increased risk of sensitive information leakage. Specifically, the BLEU-1 score for GPT2-xl against the BGE-large-en embedding model sees a 2% increase when the attack model is supplemented with 10k original texts. Notably, even with the use of a limited amount of target data (1K samples), the improvements in BLEU-1 score are considerable. In summary, concealing the datasets of published embeddings does not effectively prevent information leakage from these embeddings. This is because their underlying information can still be extracted by an attack model that has been fine-tuned on datasets of similar style or content. Furthermore, revealing even a small number of samples can significantly improve the extraction accuracy. Figure 3: Impact of disclosed original texts volume on text reconstruction accuracy. Each column represents a different target embedding model (SimCSE, BGE-Large-en, E5-Large-v2). The first and second rows represent the reconstruction performance concerning the BLEU-1 and ROUGE-1 metrics, respectively. Results of Varying Text Lengths Given GPT\u2019s capability to generate outputs of varying lengths with consistent meanings, exploring how text length impacts reconstruction quality becomes pertinent. With the average text length in the Wiki dataset being 21.32, as noted in Table 5, we selected three subsets: Wiki-base, Wiki-medium, and Wiki-long, each with 5,000 samples but average lengths of 20, 40, and 80 words, respectively. Among them, Wiki-medium and Wiki-long are created by extending texts in Wiki-base via the GPT4 API [3] accessible by OpenAI. Figure 4: Influence of the text length for (a) SimCSE, (b) BGE-Large-en, and (c) E5-Large-v2. Error bars represent the mean reconstruction accuracy with 95% confidence intervals obtained from 10 independent trials, and each column corresponds to a different target embedding model. The results of text reconstruction are depicted in figure 4.
It\u2019s evident that embeddings from shorter texts are more susceptible to being decoded, posing a greater risk to privacy. For example, the ROUGE-1 score of GPT2-xl on BGE-Large-en fell by over 43.3% as the test text length increased from Wiki-base to Wiki-long. This decline can be attributed to the fixed length of text embeddings, which remain constant regardless of the original text\u2019s length. Consequently, embeddings of longer texts, which encapsulate more information within the same embedding size, make accurate text recovery more formidable. Therefore, extending the original texts could potentially fortify the security of released embeddings. 2.3 Evaluation of Sensitive Attribute Prediction Target/External Embedding Model doctor teacher chef 0.7 0.2 0.5 cosine similarity original/reconstructed text candidate attribute values Attribute Embedding Text Embedding Similarity Score Attribute Inference Figure 5: The inference framework of sensitive attributes. The attacker employs the same embedding model to convert original text and candidate attributes into embeddings. The attacker then identifies the attribute that exhibits the highest cosine similarity between its embedding and text embedding as sensitive information of the original text. Settings In contrast to text reconstruction, our primary concern is whether the attacker can extract specific sensitive information from the text embeddings. The task of predicting sensitive attributes more clearly illustrates the issue of privacy leakage through embeddings, compared to the task of reconstructing text. Initially, we pinpoint sensitive or crucial data within the datasets: wiki-bio, triage, cjeuterms, and us-crimes. In particular, we examined patient dispositions and blood pressure readings in the triage dataset, and in the wiki-bio dataset, we focused on individuals\u2019 nationality, birthdate, and profession. Additionally, we looked into criminal charges in the us_crimes dataset and considered legal terminology as important information in the cjeu_terms dataset. Given the extensive variety of attributes and the scarcity of labeled data for each, it\u2019s impractical for the privacy attacker to train a dedicated model for each sensitive attribute, unlike the approach taken in previous studies [50]. Therefore, we predicted the sensitive information from text embedding by selecting the attribute that exhibits the highest cosine similarity between text embedding and its embedding. This approach is effective across various attributes and does not necessitate training with supervised data. However, a challenge arises because the text describing the attribute is often very brief (sometimes just a single word), and the target embedding model may refuse to produce embeddings for such short texts due to concerns about embedding theft [17, 34]. As a result, it becomes difficult to represent the original text and the attribute value within the same embedding space. To overcome this, we introduce an external embedding model to serve as an intermediary. This external model is responsible for embedding both the attribute and the reconstructed text, which has been derived from text embeddings by the attack model. Consequently, texts and attributes are embedded within the same space via reconstructing texts from the text embeddings, allowing for accurate similarity measurement. The overall process for inferring attributes is depicted in figure 5. The outcomes of the attribute inference attack using various methods are presented in Table 4. 
The last row of this table, which relies on the similarity between attributes and original texts, lacks randomness in its measurement due to the direct comparison method employed. In contrast, the preceding rows, which are based on reconstructed texts, introduce randomness into the similarity measurement. This variability stems from employing a non-zero temperature setting, allowing for multiple independent text generations to produce diverse outputs. Results The findings presented in Table 4 reveal that sensitive information can be accurately deduced from text embeddings without the necessity for any training data, highlighting a significant risk of privacy leakage through embeddings. Specifically, attributes such as nationality and occupation can be inferred with high precision (an accuracy of 0.94) even when using texts that have been reconstructed. This level of accuracy is attributed to the embeddings\u2019 ability to capture the rich semantic details of texts coupled with the attack model\u2019s strong generative capabilities. Remarkably, the accuracy of predictions made using an external embedding model on reconstructed texts is on par with those made using the target embedding model on original texts in several cases. For an equitable comparison, the Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen Table 4: Attribute inference attack performance (accuracy) when using bge-large-en as the embedding model. The best results are highlighted in bold. \u2217denotes that the advantage of the best-performed attack model over other models is statistically significant (p-value < 0.05). The last row provides experimental results where the attacker has unrestricted access to the target embedding model and compares the embeddings of original text and candidate attributes. Other results are based on the attacker\u2019s usage of an external embedding model to calculate the similarities between reconstructed text and candidate attributes. 
Dataset wiki-bio triage cjeu_terms us_crimes Similarity Model Attack Model nationality birth_date occupation disposition blood_pressure legal_term criminal_charge SimCSE GPT2 0.875\u00b10.005 0.546\u00b10.008 0.869\u00b10.005 0.506\u00b10.001 0.135\u00b10.004 0.395\u00b10.010 0.377\u00b10.003 GPT2_large 0.882\u00b10.003 0.579\u00b10.011 0.878\u00b10.004 0.506\u00b10.001 0.119\u00b10.006 0.405\u00b10.006 0.396\u00b10.004 GPT2_xl 0.886\u00b10.003\u2217 0.596\u00b10.009\u2217 0.878\u00b10.005 0.514\u00b10.002\u2217 0.137\u00b10.003 0.428\u00b10.008\u2217 0.407\u00b10.004\u2217 BGE-Large-en GPT2 0.927\u00b10.005 0.525\u00b10.009 0.913\u00b10.006 0.505\u00b10.002 0.195\u00b10.004 0.496\u00b10.008 0.470\u00b10.005 GPT2_large 0.937\u00b10.005 0.560\u00b10.009 0.919\u00b10.003 0.504\u00b10.001 0.238\u00b10.005 0.538\u00b10.004 0.510\u00b10.005 GPT2_xl 0.941\u00b10.003\u2217 0.581\u00b10.008\u2217 0.922\u00b10.005 0.504\u00b10.001 0.254\u00b10.007\u2217 0.551\u00b10.006\u2217 0.527\u00b10.003\u2217 E5-Large-v2 GPT2 0.932\u00b10.004 0.670\u00b10.008 0.927\u00b10.005 0.514\u00b10.002 0.206\u00b10.005 0.492\u00b10.010 0.465\u00b10.004 GPT2_large 0.940\u00b10.003 0.729\u00b10.008 0.938\u00b10.003 0.521\u00b10.001 0.229\u00b10.003 0.544\u00b10.007 0.506\u00b10.006 GPT2_xl 0.941\u00b10.003 0.756\u00b10.008\u2217 0.940\u00b10.005 0.521\u00b10.002 0.230\u00b10.004 0.545\u00b10.006 0.524\u00b10.006\u2217 BGE-Large-en None 0.953 0.742 0.950 0.538 0.431 0.764 0.716 external embedding model was set to be identical to the target embedding model, although, in practice, the specifics of the target embedding model might not be known. The inference accuracy improves when employing a larger attack model for text reconstruction and a more expressive embedding model. This outcome aligns with observations from text reconstruction tasks. Although the accuracy of attribute prediction with reconstructed text falls short in some instances compared to using original texts, the ongoing advancement in large language models is rapidly closing this gap. Hence, the continuous evolution of these models is likely to escalate the risks associated with privacy breaches, emphasizing the critical need for increased awareness and caution in this domain. 3 DISCUSSIONS AND LIMITATIONS This study delves into the implications of large language models on embedding privacy, focusing on text reconstruction and sensitive information prediction tasks. Our investigation shows that as the capabilities of both the sophisticated attack foundation model and the target embedding model increase, so does the risk of sensitive information leakage through knowledge embeddings. Furthermore, the risk intensifies when the attack model undergoes fine-tuning with data mirroring the distribution of texts linked to the released embeddings. To protect the privacy of knowledge embeddings, we propose several strategies based on our experimental findings: \u2022 Cease the disclosure of original texts tied to released embeddings: Preventing the attack model from being finetuned with similar datasets can be achieved by introducing imperceptible noise into the texts or embeddings. This aims to widen the gap between the dataset of original or reconstructed texts and other analogous datasets. \u2022 Extend the length of short texts before embedding: Enhancing short texts into longer versions while preserving their semantic integrity can be accomplished using GPT4 or other large language models with similar generative capacities. 
\u2022 Innovate new privacy-preserving embedding models: Develop embedding models capable of producing high-quality text embeddings that are challenging to reverse-engineer into the original text. This entails training models to minimize the cloze task loss while maximizing the reconstruction loss. However, our study is not without limitations. Firstly, due to substantial training expenses, we did not employ an attack model exceeding 10 billion parameters, though we anticipate similar outcomes with larger models. Secondly, while we have quantified the impact of various factors such as model size, text length, and training volume on embedding privacy, and outlined necessary guidelines for its protection, we have not formulated a concrete defense mechanism against potential embedding reconstruction attacks. Currently, effective safeguards primarily rely on perturbation techniques and encryption methods. Perturbation strategies, while protective, can compromise the embedding\u2019s utility in subsequent applications, necessitating a balance between security and performance. Encryption methods, though secure, often entail considerable computational demands. Future work will explore additional factors influencing embedding privacy breaches and seek methods for privacy-preserving embeddings without sacrificing their utility or incurring excessive computational costs. 4 METHODS 4.1 Preliminary Prior to delving into the attack strategies, we will commence with the introduction of the language models and evaluation datasets. Language Model A language model is a technique capable of assessing the probability of a sequence of words forming a coherent sentence. Traditional language models, such as statistical language models [8, 46] and grammar rule language models [26, 48], rely on heuristic methods to predict word sequences. While these conventional approaches may achieve high predictive accuracy for limited or straightforward sentences within small corpora, they often struggle to provide precise assessments for the majority of other word combinations. Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA Table 5: Statistics of datasets Dataset Domain #sentences avg. sentence len wiki General 4,010,000 21.32 wiki_bio General 1,480 22.19 cc_news News 5,000 21.39 pile_pubmed Medical 5,000 21.93 triage Medical 4668 54.10 cjeu_terms Legal 2127 118.96 us_crimes Legal 4518 181.28 With increasing demands for the precision of language model predictions, numerous researchers have advocated for neural networkbased language models [15, 36] trained on extensive datasets. The performance of neural network-based language models steadily increases as model parameters are increased and sufficient training data is received. Upon reaching a certain threshold of parameter magnitude, the language model transcends previous paradigms to become a Large Language Model (LLM). The substantial parameter count within LLMs facilitates the acquisition of extensive implicit knowledge from corpora, thereby fostering the emergence of novel, powerful capabilities to handle more complex language tasks, such as arithmetic operations [39] and multi-step reasoning [42]. These capabilities have enabled large language models to comprehend and resolve issues like humans, leading to their rising popularity in many aspects of society. 
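As a concrete illustration of this basic capability, the background sketch below scores the probability of a word sequence with a pre-trained language model; the Hugging Face transformers library and the small gpt2 checkpoint are illustrative choices, not part of the attack setup.

```python
# Sketch: scoring how probable a word sequence is under a pre-trained language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sequence_log_prob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean next-token cross-entropy;
        # multiplying by the number of predicted tokens gives the total log-likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

print(sequence_log_prob("David is a doctor."))
print(sequence_log_prob("Doctor a is David."))  # typically lower (less coherent ordering)
```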
However, the significant inference capacity of large language models may be leveraged by attackers to reconstruct private information from text embeddings, escalating the risk of privacy leakage in embeddings. Datasets for Evaluation In this paper, we assess the risk of embedding privacy leaks on seven datasets, including wiki [47], wiki-bio [29], cc-news [24], pile-pubmed [20], triage [32], cjeu-terms [9], and us-crimes [9]. Wiki collects a large amount of text from the Wikipedia website. Since it has been vetted by the public, the text of the wiki is both trustworthy and high-quality. Wiki-bio contains Wikipedia biographies that include the initial paragraph of the biography as well as the tabular infobox. CC-News (CommonCrawl News dataset) is a dataset containing news articles from international news websites. It contains 708241 English-language news articles published between January 2017 and December 2019. Pile-PubMed is a compilation of published medical literature from Pubmed, a free biomedical literature retrieval system developed by the National Center for Biotechnology Information (NCBI). It has housed over 4000 biomedical journals from more than 70 countries and regions since 1966. Triage records the triage notes, the demographic information, and the documented symptoms of the patients during the triage phase in the emergency center. Cjeu-term and us-crimes are two datasets from the legal domain. Cjeu-term collects some legal terminologies of the European Union, while us-crimes gathers transcripts of criminal cases in the United States. For wiki, cc_news, and pile_pubmed datasets, we randomly sample data from the original sets instead of using the entire dataset because the original datasets are enormous. To collect the sentence texts for reconstruction, we utilize en_core_web_trf [2], an open-source tool, to segment the raw data into sentences. Then, we cleaned the data and filtered out sentences that were too long or too short. The statistical characteristics of the processed datasets are shown in Table 5. Ethics Statement For possible safety hazards, we abstained from conducting attacks on commercial embedding systems, instead employing open-source embedding models. Additionally, the datasets we utilized are all publicly accessible and anonymized, ensuring no user identities are involved. To reduce the possibility of privacy leakage, we opt to recover more general privacy attributes like occupation and nationality in the attribute prediction evaluation rather than attributes that could be connected to a specific individual, such as phone number and address. The intent of our research is to highlight the increased danger of privacy leakage posed by large language models in embeddings, suggest viable routes to safeguard embedding privacy through experimental analysis, and stimulate the community to develop more secure embedding models. 4.2 Threat Model Attack Goal This paper primarily investigates the extraction of private data from text embeddings. Given that such private data often includes extremely sensitive information such as phone numbers, addresses, and occupations, attackers have ample motivation to carry out these attacks. For instance, they might engage in illegal activities such as telecommunications fraud or unauthorized selling of personal data for economic gain. These practices pose significant threats to individual privacy rights, potentially lead to economic losses, increase social instability, and undermine trust mechanisms. 
Attack Knowledge To extract private information from text embeddings, attackers require some understanding of the models responsible for generating these embeddings. Intuitively, the more detailed the attacker\u2019s knowledge of the target embedding model, including its internal parameters, training specifics, etc., the more potent the attack\u2019s efficacy. However, in real-world scenarios, to safeguard their intellectual property and commercial interests, target models often keep their internal information confidential, providing access to users solely through a query interface O. Based on the above considerations, this study assumes that attackers only possess query permissions to the target embedding model, which responds with the corresponding text embedding based on the text inputted by the attacker. This query process does not reveal any internal information about the model. Furthermore, with the increasing popularity of large language models, numerous companies are opting to release their anonymized datasets for academic research purposes. Therefore, this study also assumes that attackers have the capability to gather specific open-source data (e.g., Wikipedia in our evaluation) and leverage interactions with target models to acquire associated text embeddings. Clearly, such low-knowledge attack settings help simulate real attack scenarios and more accurately assess the risks of embedding privacy leaks. Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Ninglu Shao, Defu Lian, Chenwang Wu, Zheng Liu, Yi Yang, Enhong Chen 4.3 Attack Methodology Attack Model Construction Attackers reconstructed original text from text embedding by training an attack model. However, they lack prior knowledge about which architecture should be used for the attack model. When using neural networks as the attack model, it is challenging to decide which neural network architecture should be employed. If the architecture of the attack model is not as expressive and complex as the embedding model, then it is difficult to ensure that it can extract private information. Considering the exceptional performance of large language models across various domains, particularly in text comprehension [11] and information extraction [14], employing them as attack models could be an appropriate choice. Based on these considerations, this study utilizes GPT-2 models of varying sizes as attack models, training them to reconstruct the original text from the embeddings. Training set generation. We start by retrieving the text embeddings from the collected open-source data \ud835\udc37using the query interface O of the target embedding model. \ud835\udc52\ud835\udc64= O(\ud835\udc64) (1) Then, we construct the training dataset \ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5bfor the attack model based on these embeddings and corresponding texts. \ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b= {(\ud835\udc51\ud835\udc64,\ud835\udc64)|\ud835\udc64\u2208\ud835\udc37,\ud835\udc51\ud835\udc64= (\ud835\udc52\ud835\udc64, < \ud835\udc38\ud835\udc42\ud835\udc46>)} (2) where EOS(End-of-Sentence) token is a special token appended to the text embedding to signify the end of the embedding input. 
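A minimal sketch of this training setup is given below. The paper does not specify how the embedding vector is fed into GPT-2, so the sketch assumes a small linear adapter that projects the sentence embedding into GPT-2's input-embedding space and prepends it as a prefix; the EOS marker after the embedding input is simplified away, the training pair is a placeholder, and the objective is the next-token cross-entropy formalized in Eqs. (3) and (4) below.

```python
# Sketch: embedding-conditioned reconstruction training for the attack model.
import torch
from torch import nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
gpt2.train()
adapter = nn.Linear(1024, gpt2.config.n_embd)  # 1024-dim embeddings (e.g., BGE-Large-en); 768 for SimCSE

def reconstruction_loss(sentence_embedding: torch.Tensor, text: str) -> torch.Tensor:
    ids = tokenizer(text + tokenizer.eos_token, return_tensors="pt").input_ids
    token_embeds = gpt2.transformer.wte(ids)                    # (1, L, n_embd)
    prefix = adapter(sentence_embedding).view(1, 1, -1)         # embedding mapped to a prefix "token"
    inputs_embeds = torch.cat([prefix, token_embeds], dim=1)    # prepend the embedding
    labels = torch.cat([torch.full((1, 1), -100), ids], dim=1)  # no loss on the prefix position
    # The prefix predicts token 1, tokens 1..t-1 predict token t: the next-token cross-entropy objective.
    return gpt2(inputs_embeds=inputs_embeds, labels=labels).loss

optimizer = torch.optim.AdamW(list(gpt2.parameters()) + list(adapter.parameters()), lr=1e-5)
# training_pairs: (e_w, w) pairs gathered by querying the target model, as in Eqs. (1)-(2).
training_pairs = [(torch.randn(1024), "David is a doctor.")]    # placeholder data
for e_w, w in training_pairs:
    loss = reconstruction_loss(e_w, w)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```

At test time the same prefix would be fed to a decoding routine (e.g., beam search) instead of the teacher-forced labels used here.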
The attack model performs the opposite operation compared to the embedding model: for each sample (\ud835\udc51\ud835\udc64,\ud835\udc64) \u2208\ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b, the attack model strives to recover the original text \ud835\udc64from the embedding \ud835\udc52\ud835\udc64. Attack model training. For each embedding input \ud835\udc51\ud835\udc64, the attack model is trained to output the first token of the original text. Subsequently, when the attacker provides \ud835\udc51\ud835\udc64along with the first \ud835\udc56\u22121 tokens of the original text, the attack model is tasked with predicting the \ud835\udc56\u2212\ud835\udc61\u210etoken of the original text with utmost accuracy. Formally, for a sample (\ud835\udc51\ud835\udc64,\ud835\udc64), the text reconstruction loss of the attack model is as follows: \ud835\udc3f\ud835\udf03(\ud835\udc51\ud835\udc64,\ud835\udc64) = \u2212 \u2211\ufe01\ud835\udc59 \ud835\udc56=1log \ud835\udc43(\ud835\udc65\ud835\udc56|\ud835\udc51\ud835\udc64,\ud835\udc64<\ud835\udc56,\ud835\udf03) (3) where \ud835\udc64<\ud835\udc56= (\ud835\udc651, ...,\ud835\udc65\ud835\udc56\u22121) represents the first \ud835\udc56\u22121 tokens of the original text \ud835\udc64. \ud835\udc59denotes the length of \ud835\udc64and \ud835\udf03represents the parameter of the attack model. Therefore, the training loss for the attack models is the sum of the text reconstruction loss across all samples in the training dataset: \ud835\udc3f\ud835\udf03= \u2212 \u2211\ufe01 (\ud835\udc51\ud835\udc64,\ud835\udc64)\u2208\ud835\udc37\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5b\ud835\udc3f\ud835\udf03(\ud835\udc51\ud835\udc64,\ud835\udc64) (4) To evaluate the effectiveness of the attack models, we employ two tasks: text reconstruction and attribute prediction. Text Reconstruction Task Reconstructed text generation. In the text reconstruction task, the attack models aim to generate reconstructed text that closely resembles the original text. When generating a reconstructed text of length \ud835\udc59, the attacker aims for the generated text to have the highest likelihood among all candidate texts, which is formalized as follows: \ud835\udc64\u2217= arg max \ud835\udc64={\ud835\udc651,...,\ud835\udc65\ud835\udc59} \ud835\udc43(\ud835\udc651, ...,\ud835\udc65\ud835\udc59|\ud835\udc51\ud835\udc64,\ud835\udf03) (5) However, on the one hand, the number of candidate tokens for \ud835\udc65\ud835\udc56 in the reconstructed text \ud835\udc64is large, often exceeding 10, 000 in our experiments. On the other hand, the number of candidate texts exponentially increases as the text length \ud835\udc59grows. As a result, it becomes infeasible to iterate through all candidate texts of length \ud835\udc59and select the one with the highest likelihood as the output. A viable solution is greedy selection, which involves choosing the candidate token with the highest likelihood while progressively constructing the reconstructed text. \ud835\udc65\u2217 \ud835\udc56= arg max \ud835\udc65\ud835\udc56 \ud835\udc43(\ud835\udc65\ud835\udc56|\ud835\udc51\ud835\udc64,\ud835\udf03,\ud835\udc64\u2217 <\ud835\udc56) (6) \ud835\udc64\u2217= (\ud835\udc65\u2217 1, ...,\ud835\udc65\u2217 \ud835\udc59) (7) However, this approach may easily lead the generation process into local optima. To enhance the quality of the reconstructed text and improve generation efficiency, we employ beam search in the text reconstruction task. 
In beam search, if the number of beams is \ud835\udc58, the algorithm maintains \ud835\udc58candidates with the highest generation probability at each step. Specifically, in the initial state, for a given input text embedding \ud835\udc52\ud835\udc64, the attack model first records \ud835\udc58initial tokens with the highest generation likelihood (ignoring eos token) as candidate texts of length 1 (C\u2217 1 ). C\u2217 1 = arg max C1\u2282X,| C1|=\ud835\udc58 \u2211\ufe01 \ud835\udc65\u2208C1\ud835\udc43(\ud835\udc65|\ud835\udc51\ud835\udc64,\ud835\udf03) (8) where X represents the set of all tokens in the attack model. Subsequently, these \ud835\udc58initial tokens are combined with any token in X to create a text set of length 2. The attack model then iterates through these texts of length 2 and selects \ud835\udc58texts with the highest generation likelihood as candidate texts of length 2 (C\u2217 2 ). C\u2217 2 = arg max | C2|=\ud835\udc58 \u2211\ufe01 (\ud835\udc651,\ud835\udc652)\u2208C2\ud835\udc43(\ud835\udc651,\ud835\udc652|\ud835\udc51\ud835\udc64,\ud835\udf03) (9) C2 \u2282{(\ud835\udc651,\ud835\udc652)|\ud835\udc651 \u2208C\u2217 1,\ud835\udc652 \u2208X} (10) This process continues, incrementing the text length until the model generates an EOS token signaling the end of the generation process. Evaluation metric. We adopt BLEU-1 and ROUGE-1 to evaluate the reconstruction performance of the attack model, measuring how similar the reconstructed text \ud835\udc64\u2032 is to the original text \ud835\udc64. The formulas for these two metrics are as follows: BLEU-1 = \ud835\udc35\ud835\udc43\u00b7 \u00cd \ud835\udc65\u2208set(\ud835\udc64\u2032) min(count(\ud835\udc65, \ud835\udc64), count(\ud835\udc65, \ud835\udc64\u2032)) \u00cd \ud835\udc65\u2208set(\ud835\udc64\u2032) count(\ud835\udc65, \ud835\udc64\u2032) (11) ROUGE-1 = \u00cd \ud835\udc65\u2208set(\ud835\udc64) min(count(\ud835\udc65, \ud835\udc64), count(\ud835\udc65, \ud835\udc64\u2032)) \u00cd \ud835\udc65\u2208set(\ud835\udc64)count(\ud835\udc65, \ud835\udc64) (12) where set(\ud835\udc64) and set(\ud835\udc64\u2032) are the sets of all tokens in \ud835\udc64and \ud835\udc64\u2032. Count(\ud835\udc65,\ud835\udc64) and count(\ud835\udc65,\ud835\udc64\u2032) are the number of times \ud835\udc65appears in \ud835\udc64and \ud835\udc64\u2032, respectively. The brevity penalty (BP) is used to prevent short sentences from getting an excessively high BLEU-1 score. BLEU-1 primarily assesses the similarity between the reconstructed text and the original text, whereas ROUGE-1 places greater emphasis on the completeness of the reconstruction results and whether Understanding Privacy Risks of Embeddings Induced by Large Language Models Conference\u201917, July 2017, Washington, DC, USA the reconstructed text can encompass all the information present in the original text. Dataset similarity calculation. We assess the similarity between the evaluation datasets and the wiki dataset based on a simple character n-gram comparison [28]. Specifically, we employed the 5000 commonly used 4-gram characters in English as the feature set F of the dataset. Each dataset is then represented as a 5000dimensional feature vector. \u2212 \u2192 \ud835\udc39\ud835\udc37= [count(\ud835\udc531, \ud835\udc37), ..., count(\ud835\udc535000, \ud835\udc37)] (13) where \ud835\udc53\ud835\udc56\u2208F is a 4-gram character, and count(\ud835\udc53\ud835\udc56, \ud835\udc37) is the number of times \ud835\udc53\ud835\udc56appears in dataset \ud835\udc37. 
Finally, we calculate the Spearman correlation coefficient between the feature vectors of the two datasets to quantify their similarity. Sim(\ud835\udc371, \ud835\udc372) = Spearman(\u2212 \u2192 \ud835\udc391, \u2212 \u2192 \ud835\udc392) (14) where \u2212 \u2192 \ud835\udc391 and \u2212 \u2192 \ud835\udc392 are feature vectors of \ud835\udc371 and \ud835\udc372, respectively. The Spearman coefficient ranges from -1 to 1, where a higher value indicates a greater similarity between the two corresponding datasets. Attribute Prediction Task In the attribute prediction task, this study focuses on the attacker\u2019s ability to extract private information from the original text. We chose several private attributes from four datasets and evaluated the attack model\u2019s ability to infer the precise values of these private attributes from the released text embedding. For example, in the wiki-bio dataset, occupation is chosen as a private attribute. The attacker attempts to ascertain that the original text contains the private message \u201cdoctor\" by using the embedding of the sentence \u201cDavid is a doctor.\" Instead of training the attack model to perform the attribute prediction task, this study utilizes the embedding similarity between the text and the attribute value to determine the suggested attribute value of the original text. Its rationality is that the text contains relevant information about sensitive attributes, so their embeddings should be similar. The ideal approach would be to determine based on the similarity between embeddings of the original text and sensitive attribute embeddings. However, this poses challenges: (1) The original text is unknown. (2) Privacy attributes are often short texts, and in most cases, consist of only one word; such frequent anomalous (short) inputs might be considered malicious attempts and rejected. Therefore, this study (1) uses reconstructed text instead of the original text, and (2) employs an open-source external embedding model as a proxy to obtain embeddings instead of using the target embedding model. It\u2019s worth noting that this study did not directly search for privacy attributes in the reconstructed text due to potential inaccuracies in reconstructing privacy attributes, such as missing tokens or reconstructing synonyms of the attributes. Specifically, the attacker initially acquires embeddings of the reconstructed text and sensitive attribute from their proxy embedding model, subsequently computing the cosine similarity between them, and ultimately selecting the attribute with the highest similarity as the prediction result. Formally, the attacker infers the sensitive attribute \ud835\udc64\ud835\udc63as follows: \ud835\udc64\ud835\udc63= arg max \ud835\udc63\u2208C\ud835\udc63 \ud835\udc52\ud835\udc64\u2032 \u00b7 \ud835\udc52\ud835\udc63 |\ud835\udc52\ud835\udc64\u2032 ||\ud835\udc52\ud835\udc63| (15) where C\ud835\udc63is the set of candidate attribute values, \ud835\udc64\ud835\udc63is the predicted attribute value of the original text \ud835\udc64. \ud835\udc52\ud835\udc64\u2032 and \ud835\udc52\ud835\udc63are the embedding vectors of reconstructed text \ud835\udc64\u2032 and attribute value \ud835\udc63with the aid of the external embedding model, respectively. 
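A minimal sketch of this inference rule (Eq. 15), with the sentence-transformers library and the BGE-Large-en checkpoint standing in for the external proxy embedding model:

```python
# Sketch: attribute inference by cosine similarity (Eq. 15); no supervised training is needed.
import numpy as np
from sentence_transformers import SentenceTransformer

proxy = SentenceTransformer("BAAI/bge-large-en")  # external/proxy embedding model

def predict_attribute(reconstructed_text: str, candidate_values: list[str]) -> str:
    embs = proxy.encode([reconstructed_text] + candidate_values, normalize_embeddings=True)
    text_emb, attr_embs = embs[0], embs[1:]
    sims = attr_embs @ text_emb  # cosine similarities, since the embeddings are L2-normalized
    return candidate_values[int(np.argmax(sims))]

print(predict_attribute("David is a doctor.", ["doctor", "teacher", "chef"]))
```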
We employ accuracy as the metric to evaluate the performance of the attack model on the attribute prediction task." }
Finally, extensive experiments consistently demonstrate the\nsuperiority of the proposed methods in terms of concealment, query efficiency,\nand stealing performance.", "authors": "Zhihao Zhu, Chenwang Wu, Rui Fan, Yi Yang, Defu Lian, Enhong Chen", "published": "2023-12-18", "updated": "2023-12-26", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.CR" ], "main_content": "In this section, we first introduce the target model. Then we characterize the attacker\u2019s goal, background knowledge, and 2 Table 2: Summary of the notations. Notation Description g = (V ,A,X) Graph gs A subgraph of g A Adjacement matrix X Node features yg Label of graph g MT The target model MC The clone model P The sample pool D0 The Initial dataset N The number of iterations \u03b1 Modification rate of adjacency matrix \u03b3 Proportion of mixed nodes attack setting. Table 2 provides a convenient summary of the notations introduced in this research. 2.1 Graph Neural Network (GNN) In recent years, GNNs have been frequently employed for node classification, link prediction, and graph classification due to their exceptional performance. The effectiveness of GNNs is attributable to the message propagation and aggregation mechanism, which may effectively combine information from neighboring nodes. Specifically, a single graph convolutional layer of a GNN can transmit the knowledge of its first-order neighbors to a node, and there are numerous such layers in GNN. After k iterations of message passing, a node can capture abundant information from its k-hop neighbors. Formally, the message passing and aggregating of a graph convolutional layer in a GNN can be defined as the following: hl v = AGGREGATE(hl\u22121 v ,MSG(hl\u22121 v ,hl\u22121 u )),u \u2208Nv, (1) where hl\u22121 v and hl\u22121 u indicate the l \u22121 layer embedding of node v and u. AGGREGATE and MSG are the corresponding aggregate function and message-passing function. Nv contains v\u2019s first-order neighbors. Once trained, each node of the graph can be represented as an embedding vector, which integrates both graph structure and node features. In graph classification, the model\u2019s goal is to classify the whole graph. Therefore, we need to extract graph embedding from the node embeddings, which is called the pooling operation at the graph level. hl g = Pooling(hl v),v \u2208g. (2) For a k-layer GNN, the objective function in graph classification can be defined as: max \u03b8 \u2211 g\u2208GT I( f\u03b8(h0 g,h1 g,...,hk g) = yg), (3) where hl g and yg are l-th layer\u2019s embedding vector and label of graph g. GT is the graph training set. 2.2 Attacker\u2019s Goal Model stealing attacks involve creating a clone model MC to mimic the behavior of the target model MT. There are two metrics to measure the performance of model stealing attacks, one is the accuracy of MC, and the other is fidelity. Accuracy is used to estimate whether the prediction of MC matches the ground truth label, while fidelity measures the degree of agreement between the predictions of MC and MT. Accuracy = 1 |GT| \u2211 g\u2208GT I(MC(g) = yg). (4) Fidelity = 1 |GT| \u2211 g\u2208GT I(MC(g) = MT(g)). (5) Evaluating model stealing attacks using fidelity is a common practice in recent works [16, 79]. In this setting, the attackers do not force the clone model to perform well, but expect it to mimic the target model. It means that even if the target model makes wrong predictions, the clone model should imitate them. 
A well-imitated clone model is profitable for the attackers to implement subsequent attacks, such as adversarial attacks [31,49] and membership inference attacks [64, 96]. Therefore, we adopt fidelity to evaluate our method. 2.3 Attacker\u2019s Capability Previous works assumed that the attacker can obtain enough data which comes from the same distribution of the target model\u2019s training data, namely target data. However, this assumption is often unrealistic in practical applications. In this paper, we assume that the attacker only has access to a small amount (10% or even 1% in our experiments) of the target data. In addition, we assume that the attacker has query access to the target model, but the attacker cannot obtain intermediate results (node embeddings or confidence vectors) of the target model as in previous works [60]. For a given query sample, the target model will simply return the target model\u2019s prediction of the class for that sample, namely the hard label. Finally, we assume that the attacker knows the architecture of the target model, an assumption that will be relaxed in section 4.4.2. 2.4 Attack Setting In this article, we use synthetic samples to mitigate the problem of insufficient target data available. We synthesize new data by making a small number of modifications to the original data. In order to make the synthetic samples more realistic and our attacks less perceptible, we make strict constraints on the range and scale that the attacker can modify. Specifically, we discuss model stealing attacks against graph classification tasks under two scenarios. One scenario is when the attacker 3 \u2463 Sample Pool k-th synthetics (k-1)-th synthetics 1-th synthetics Initial Data ... Clone Model Target Model \u2460 \u2461 \u2462 Figure 1: The workflow of our framework. Each round begins with the generation of new synthetic data by applying model-related or model-agnostic methods. We then query new samples against the target model, whose predictions will be utilized to boost the clone model\u2019s imitation. can only slightly modify the topology of the original graph, the adjacency matrix. In the other scenario, the attacker can not only modify the adjacency matrix but also replace the node features with other features of the same distribution as it. Note that we do not modify the node features here, because modifying node features is prone to contradictory situations. 3 Model Stealing Attack 3.1 Overview Our framework consists of three components: (i) the target model MT, (ii) the clone model MC, and (iii) the sample pool P. The target model provides a query interface for the public and returns hard labels for query graphs. The goal of the clone model is to imitate MT \u2019s predictions, including both correct and incorrect decisions. The expansion of P is the core of the framework, where the attacker synthesizes authentic and valuable data to persuade MC to imitate MT . The workflow of our framework is shown in Fig 1. We initialize the sample pool with a small portion of the real data D0(initial data) and pretrain the clone model MC on D0. Each cycle begins with synthesizing new samples through the model-related method (MSA-AU), model-agnostic method (MSA-AD), or both(MSA-AUD). Next, we query the target model for these synthetic examples and add them to the pool of samples (P). Finally, we utilize the samples in P and the corresponding hard labels predicted by the target model to train MC. 
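The workflow above can be condensed into a short training loop. The sketch below is only an outline under assumed interfaces: `target_model` returns a hard label for a queried graph, `clone_model.fit` retrains on (graph, label) pairs, and `generate_samples` stands for MSA-AU, MSA-AD, or MSA-AUD as detailed in the following subsections.

```python
def steal_model(target_model, clone_model, initial_data, generate_samples, n_iterations):
    """Iteratively expand the sample pool and retrain the clone model (cf. Fig. 1)."""
    pool = [(g, target_model(g)) for g in initial_data]   # query hard labels for the initial data D0
    clone_model.fit(pool)                                  # pretrain the clone on the initial pool
    for _ in range(n_iterations):
        synthetic = generate_samples(initial_data, clone_model)   # new samples for this round
        pool.extend((g, target_model(g)) for g in synthetic)      # query the target for hard labels
        clone_model.fit(pool)                                      # retrain the clone on the enlarged pool
    return clone_model
```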
Alternating iterations of sample creation and training of MC eventually condition MC to be an excellent duplicate of MT. In the following subsections, we introduce three strategies MSA-AU, MSA-AD, and MSA-AUD, which guarantee authenticity and uncertainty, authenticity and diversity, all three characteristics of generated samples, respectively. 3.2 Model Stealing Attacks with Authenticity and Uncertainty 3.2.1 Optimization objective based on uncertainty Inspired by active learning [52, 59], we can employ uncertainty as a metric to assess the value of generated samples. The basic idea of active learning is to promote a machine learning system to attain greater accuracy with fewer labeled training instances. In the context of active learning, uncertainty plays a crucial role in evaluating the knowledge content within samples. Samples with higher uncertainty are more challenging for models to differentiate and, as a result, are deemed more valuable. Commonly used metrics for quantifying uncertainty include: margin of confidence (Margin), minimized maximum confidence (Max), and entropy of confidence vector (Entropy), which have the following formula: Margin(g,M ) = PM (y2|g)\u2212PM (y1|g) (6) Max(g,M ) = \u2212PM (y1|g) (7) Entropy(g,M ) = \u2212\u2211 y\u2208Cg PM (y|g)\u2217log(PM (y|g)), (8) where Cg represents all possible classes to which graph g can belong, and PM (y|g) is the likelihood that the model M believes graph g belongs to class y. y1 and y2 are the two classes in which the model considers graph g to belong to the largest and the second largest probability. The greater the value of these metrics, the greater the sample\u2019s uncertainty. Diverging from conventional active learning, which entails selecting samples with higher uncertainty from real samples, we opt to modify genuine samples based on their predictions by the clone model, thereby generating new samples with heightened uncertainty. Querying the generated samples with high uncertainty and directing the clone model with the relevant responses of the target model can assist the clone model in better correcting the discrepancies between the target model and the clone model. Typically, genuine samples undergo a forward pass as inputs into the clone model. To create new samples with increased uncertainty, we treat the samples as parameters, while keeping model parameters fixed and using uncertainty as the optimization objective. Employing both forward and backward passes, gradients of the loss (uncertainty) with respect to parameters (the genuine samples) can be computed. Modifying samples in the direction opposite to the gradient yields new samples with greater uncertainty in the clone model. However, this approach encounters two notable challenges. On one hand, modifying features through this method can lead to results devoid of meaning. For instance, when a node represents a user in a social network, it is conceivable to create a user who is seven years old and has been working for more than 20 years, which is obviously unreliable. The target model might flag these samples as malicious queries, adversely affecting stealing performance. 
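For reference, the three uncertainty measures in Eqs. (6)-(8) each take only a few lines. The sketch below assumes `probs` is the clone model's probability vector for a single graph and stays in PyTorch so that gradients with respect to a (relaxed) input, as described above, can be taken; larger values of each function indicate higher uncertainty.

```python
import torch

def margin(probs):
    """Eq. (6): second-largest minus largest class probability."""
    top2 = torch.topk(probs, 2).values        # top2[0] = largest, top2[1] = second largest
    return top2[1] - top2[0]

def neg_max_confidence(probs):
    """Eq. (7): negated maximum class probability."""
    return -probs.max()

def entropy(probs, eps=1e-12):
    """Eq. (8): entropy of the confidence vector."""
    return -(probs * torch.log(probs + eps)).sum()
```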
On the other hand, 4 Algorithm 1: AU_gen Input: An original sample g; The clone model MC; Modification rate of adjacency matrix \u03b1; Output: A new sample g\u2032; 1 V ,A,X \u2190 \u2212g; //The adjacency matrix and features 2 Ug \u2190 \u2212Uncertainty(g,MC); 3 GradA \u2190 \u2212\u2202Ug \u2202A ; 4 for i \u2208A\u2019s rows, j \u2208A\u2019s columns do 5 if (A[i, j] is 0 and GradA[i, j] < 0) or 6 (A[i, j] is 1 and GradA[i, j] > 0) then 7 GradA[i,j] \u2190 \u2212|GradA[i, j]|; 8 end 9 else 10 GradA[i,j] \u2190 \u22120; 11 end 12 end 13 M \u2190 \u2212Argmax_desc(GradA); 14 A\u2032 \u2190 \u2212A; 15 for (i, j) \u2208M[0 : \u03b1\u02d9 |M|] do 16 A\u2032[i, j] \u2190 \u22121\u2212A\u2032[i, j]; 17 g\u2032 \u2190 \u2212(V ,A\u2032,X); 18 Ug\u2032 \u2190 \u2212Uncertainty((A\u2032,X),MT); 19 if Ug\u2032 > 0.1 then 20 break 21 end 22 # Authenticity constraint 23 if |Statistics(g)\u2212Statistics(g\u2032)| > 0.05 then 24 break 25 end 26 end this modification technique is only applicable to continuous variables and is unsuitable for altering discrete adjacency matrices. Even when adjacency matrices are relaxed to continuous variables, followed by the addition of perturbation and reconversion to discrete variables, this method may still yield graphs that differ significantly from the originals, thus going against our first principle. To address these two challenges, we constrain the attacker to modify only a limited number of positions within the adjacency matrix, aiming to generate samples with higher authenticity and uncertainty. We evaluate the potential of each position in the adjacency matrix for enhancing sample uncertainty by the sample\u2019s gradient under the current clone model. In this study, we assume that the adjacency matrix lacks multiple edges, meaning the adjacency matrix values are confined to {0,1}. Consequently, there is no need to introduce negative perturbation for positions with a value of 0 or positive perturbation for positions with a value of 1. Therefore, we denote the potential score of these positions as 0 and use the absolute value of the gradient as the potential score for the other positions. Portions with potential score exceeding zero signify that modifying these positions can increase sample uncertainty. We greedily select \u03b1 positions with higher importance and flip the values of the corresponding positions in descending order of importance. The impact of the \u03b1 on the attack model is discussed in Section 4.5.2. Given the minimal alterations to edges, the generated graph closely resembles the original, thereby ensuring the imperceptible of the attack method. To ensure that the generated new samples do not exhibit excessive differences in uncertainty compared to the original graph, we assess uncertainty of the new graph after every modification. If its uncertainty surpasses a predefined threshold, we terminate the algorithm. 3.2.2 Authenticity constraint While constraining the number of modifications to the adjacency matrix can to some extent ensure the authenticity of the generated graph, we are unable to measure whether certain statistical features of the graph, such as degree distribution and the number of triangles, have undergone significant changes. Therefore, when greedily selecting modification positions, we introduce additional authenticity constraints to our attack method. After each modification, we assess the statistical difference between the new graph and the original graph. 
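The statistical comparison used for this authenticity check can be sketched with standard networkx metrics. Which statistics are monitored and how the degree-distribution gap is measured are not fully pinned down in this excerpt, so the choices below (including the use of total variation distance) are illustrative.

```python
import networkx as nx
import numpy as np

def statistics_gap(g_orig, g_new):
    """Absolute differences in a few of the graph statistics compared in this work."""
    gaps = {
        'clustering_coefficient': abs(nx.average_clustering(g_orig) - nx.average_clustering(g_new)),
        'transitivity': abs(nx.transitivity(g_orig) - nx.transitivity(g_new)),
        'n_triangles': abs(sum(nx.triangles(g_orig).values()) - sum(nx.triangles(g_new).values())) / 3,
    }
    h1 = np.array(nx.degree_histogram(g_orig), dtype=float)
    h2 = np.array(nx.degree_histogram(g_new), dtype=float)
    n = max(len(h1), len(h2))
    h1 = np.pad(h1, (0, n - len(h1))) / h1.sum()
    h2 = np.pad(h2, (0, n - len(h2))) / h2.sum()
    gaps['degree_distribution'] = float(np.abs(h1 - h2).sum() / 2)   # total variation distance
    return gaps
```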
If this difference exceeds a predefined threshold, the attack process terminates prematurely. For example, in the case of degree distribution, the authenticity constraint stipulates that the difference in degree distribution between the generated and original graphs should be less than 0.05. In section 6, we observe that these authenticity constraints are effective in maintaining the similarity between the generated graph and the original graph across multiple statistical features. The algorithm that combines uncertainty and authenticity, namely MSA-AU, is a model-related sample generation method. Algorithm 1 provides a detailed description of the sample generation steps in the MSA-AU algorithm, explicitly outlining the process that ensures both uncertainty and authenticity. Argmax_desc provides a collection of indices. The indices in the set are sorted descently according to the potential score of the input matrix indicated by the indices. 3.3 Model-independent Sample Generation Strategy In MSA-AU, the generated, label-worthy samples are related to the current clone model. While this model-related generation method is effective in obtaining samples with increased uncertainty, the differences between these samples decrease gradually with the convergence of model training across multiple iterations. This trend is intuitively depicted in Fig 2, where the solid line represents the decision boundary of the clone model in the 19th round, and the dashed line represents the decision boundary in the 20th round. It can be observed that 5 19th Model 20th Model Original Data MSA-AD data MSA-AU data MSA-AUD data Original Data Figure 2: Difference between MSA-AU, MSA-AD and MSAAUD. in the later stages of model training, the change in the clone model\u2019s decision boundary is minimal, resulting in limited differences between the MSA-AU generated samples in the 19th and 20th rounds. Although both sets of samples exhibit high uncertainty, labeling them simultaneously does not provide satisfactory assistance to the clone model. In addition to ensuring the uncertainty and authenticity of new samples, we also aim to introduce diversity among these samples. This concept is akin to the diversity selection criteria in active learning [4, 86]. In active learning, besides selecting samples solely based on uncertainty, diversity is typically used to assess the sample set to avoid redundancy. Similarly, to achieve superior attack performance with a minimal number of queries, we must ensure diversity among the generated sample set while synthesizing samples. Based on this, we propose another model-independent sample generation strategy, which involves generating new samples through the fusion of real samples. Since the fusion process is model-independent, it does not lead to a loss of diversity in the sample pool as the clone model converges. Specifically, we begin by randomly selecting original graphs g1 and g2 from the initial sample pool. Subsequently, we determine the proportion of nodes to be modified and obtain corresponding induced subgraphs gs 1 and gs 2 for g1 and g2 based on this proportion. Finally, we establish a one-to-one mapping relationship between the nodes in gs 1 and gs 2. Through this one-to-one mapping, we exchange the topological structure and node features of gs 1 and gs 2, resulting in modified graphs denoted as g\u2032 1 and g\u2032 2. This process is referred to as mixup [77,87] for graph data, an emerging graph data augmentation method. 
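A highly simplified sketch of this subgraph-exchange (mixup) step is given below, using networkx. The node-feature attribute name `'x'` and the assumption that the two induced subgraphs contain the same number of nodes are illustrative choices; the induction of subgraphs around randomly chosen centre vertices (Algorithm 2) is omitted here.

```python
import networkx as nx

def exchange_subgraphs(g1, g2, nodes1, nodes2):
    """Swap the internal topology and node features of two equally sized induced
    subgraphs of g1 and g2, matching nodes by PageRank importance."""
    assert len(nodes1) == len(nodes2)
    pr1, pr2 = nx.pagerank(g1), nx.pagerank(g2)
    order1 = sorted(nodes1, key=lambda v: pr1[v], reverse=True)
    order2 = sorted(nodes2, key=lambda v: pr2[v], reverse=True)
    mapping = dict(zip(order2, order1))                           # g2 node -> corresponding g1 node
    new_g1 = g1.copy()
    new_g1.remove_edges_from(list(g1.subgraph(nodes1).edges()))   # drop the old internal edges
    for u, v in g2.subgraph(nodes2).edges():                      # copy g2's internal topology
        new_g1.add_edge(mapping[u], mapping[v])
    for v2, v1 in mapping.items():                                # copy node features
        new_g1.nodes[v1]['x'] = g2.nodes[v2].get('x')
    return new_g1
```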
In our paper, the attack model using this sample generation strategy is named as MSA-AD. On one hand, due to the limited proportion of adjacent matrix (just induced subgraph) being modified, the differences in statistical features between the generated and original graphs are minimal. On the other hand, the copied node features in MSA-AD are drawn from nodes in other graphs that belong to the same distribution, ensuring that no feature content Algorithm 2: AD_gen Input: An original sample g1; the Initial Dataset D0; Proportion of mixed nodes \u03b3 Output: A new sample g\u2032 1; 1 g2 \u2190 \u2212randomly_sample(D0/{g1}); 2 v1 \u2190 \u2212randomly_sample(g1); //Select a Vertex 3 v2 \u2190 \u2212randomly_sample(g2); 4 gs 1 \u2190 \u2212Expand_ from(v1,\u03b3); //Induce Subgraph 5 gs 2 \u2190 \u2212Expand_ from(v2,\u03b3); 6 PageRank(gs 1) \u2194PageRank(gs 2); 7 //One-to-one Coorespondence 8 g\u2032 1 \u2190 \u2212Replace(g1,gs 1,gs 2); Table 3: Comparison of MSA-AU and MSA-AD MSA-AU MSA-AD Model-related \u2713 Modify Adjacent Matrix \u2713 \u2713 Modify Node features \u2713 contradictions arise. This method of copying features is also considered to have a higher level of imperceptibility in several studies [18,82]. The core of MSA-AD lies in establishing a one-to-one mapping of nodes in subgraphs. To ensure minimal differences in statistical features between the generated and original graphs, we aim to create a one-to-one correspondence for nodes that play significant roles in the graph. For instance, we do not want nodes with higher degrees in graph g1 to be mapped to nodes with lower degrees in graph g2. Therefore, we use the PageRank [3] algorithm to calculate the importance of each node in graphs g1 and g2 and link the mapping relationship of nodes in the subgraphs to their importance. Nodes with higher importance in g1 are mapped to nodes with higher importance in g2. The importance of each node in the original graph can be pre-computed, eliminating the need for repetitive importance calculations in different rounds of expanding the sample pool. Furthermore, to ensure a greater diversity of generated samples, we require different pairings for each original graph in various rounds. In other words, each pairing of graph g1 with graph g2 will appear at most once. Table 3 provides a detailed comparison between MSAAU and MSA-AD. MSA-AU is model-related and considers sample uncertainty and authenticity, with constraints limited to modifying adjacency matrices. MSA-AD, on the other hand, is model-independent, considers sample set diversity and authenticity, and can simultaneously modify adjacency matrices and copy node features. Compared with MSA-AU, the difference between MSA-AD generated data in the 19th and 20th round is significant, as clearly depicted in Fig 2. As MSA-AD is model-agnostic, the MSA-AD data does not have any connection to the 19th and 20th models. 6 Algorithm 3: MSA-AUD Input: The trained target model MT; The initial dataset D0; The number of iterations N Output: The trained clone model MC; 1 Select and Initialize MC(e.g. 
GCN) ; 2 L0 \u2190 \u2212Query(MT,D0) ; 3 Initialize the sample pool P with (D0,L0) ; 4 Train MC with P; 5 for i \u22081,...,N do 6 Di \u2190 \u2212{}; 7 for g \u2208D0 do 8 g\u2032 \u2190 \u2212AD_gen(g,D0); 9 g\u2032\u2032 \u2190 \u2212AU_gen(g\u2032,MC); 10 Di \u2190 \u2212Di S{g\u2032\u2032}; 11 end 12 Li \u2190 \u2212Query(MT,Di) ; 13 P = P \u222a(Di,Li) ; 14 Train MC with P; 15 end For a detailed procedure of MSA-AD\u2019s sample-generating process, please refer to Algorithm 2. In practice, subgraphs are generated by randomly selecting a central vertex and expanding from it to form a connected cluster comprising \u03b3 rate of the total vertices. The discussion of \u03b3\u2019s value can be found in Section 4.5.1. Afterward, PageRank is used to establish a one-to-one correspondence, and the final new sample is created by switching subgraphs accordingly. 3.4 MSA-AUD In this section, we introduce MSA-AUD, a strategy that combines the advantages of MSA-AU and MSA-AD, simultaneously incorporating authenticity, uncertainty, and diversity during the process of expanding the sample pool. Specifically, in the process of expanding the sample pool, we initially employ the MSA-AD method to blend the original graphs g1 and g2, resulting in modified graphs g\u2032 1 and g\u2032 2. Subsequently, building upon this, we execute a model-dependent generation process on g\u2032 1, akin to the steps in MSA-AU, to obtain generated sample g\u2032\u2032 1. As both modification processes constrain the differences between the generated graphs and the original ones, the authenticity of the generated samples is assured. Fig 2 illustrates the distinctions between MSAAU, MSA-AD, and MSA-AUD. It can be observed that while MSA-AU can generate synthetic samples with higher uncertainty based on the original data, it is susceptible to losing diversity due to the influence of the clone model. On the other hand, MSA-AD, although model-independent and avoiding the generation of redundant samples, cannot guarantee individual sample uncertainty, i.e., label worthiness. MSA-AUD combines the advantages of the formers, ensuring the authen0 2000 4000 6000 8000 Difference of unnormalized model output 0.0000 0.0005 0.0010 0.0015 0.0020 0.0025 0.0030 Density MSA-AU MSA-AD MSA-AUD Figure 3: Diversity visualization. ticity, uncertainty, and diversity of the synthetic sample set. The complete workflow of MSA-AUD is detailed in Algorithm 3. In the algorithm, the Query function receives a target model and a dataset. It inputs the dataset into the target model and returns the predictions. AD_gen and AU_gen refer to Algorithm 2 and Algorithm 1, respectively. Diversity visualization. In order to investigate whether the introduction of mixup genuinely enhances data diversity, a statistical analysis is conducted on the target model. We synthesize new samples based on the initial data for three different methods at the 19th and 20th iterations. These synthetic samples are subsequently input into the target model. The squared sum of the model output vectors\u2019 differences between two consecutive iterations is computed as a measure of the dissimilarity between the two classes of data. The results are visualized using histograms and fitted with probability density functions, as presented in Fig.3. 
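The dissimilarity quantity behind this visualization is interpreted here as the sum of squared differences between the target model's unnormalized output vectors for samples synthesized in two consecutive rounds. A minimal sketch follows; note that this is the authors' own diagnostic, which presumes access to the target model's raw outputs rather than the attacker's hard-label view.

```python
import numpy as np

def iteration_dissimilarity(outputs_round_k, outputs_round_k1):
    """Squared L2 difference between the target model's unnormalized outputs for
    corresponding samples from two consecutive generation rounds (cf. Fig. 3)."""
    return [float(np.sum((np.asarray(a) - np.asarray(b)) ** 2))
            for a, b in zip(outputs_round_k, outputs_round_k1)]
```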
From the observations in the figure, it can be noted that the dissimilarity values for MSA-AU are relatively concentrated and centered around zero, indicating that the samples generated by MSA-AU in two consecutive iterations exhibit minimal differences. In contrast, the MSA-AD algorithm, primarily focused on ensuring diversity, shows a peak around 500, signifying a substantial increase in dissimilarity compared to MSA-AU. Meanwhile, the MSA-AUD algorithm, combining the strengths of the first two algorithms, exhibits similar diversity results to MSA-AD. These results serve as compelling evidence to demonstrate the effectiveness of our approach in enhancing sample diversity. 4 Evaluation In this section, we evaluate several attack methods against three Graph Neural Network (GNN) models on multiple datasets. We begin by introducing our experimental setup. Following the experimental setup, we present the attack performance and imperceptibility of different algorithms in Sections 4.2 and 4.3, respectively. Then we assess the performance of attack methods with varying knowledge and parameters in Sections 4.4 and 4.5, respectively. Furthermore, we discuss 7 Table 4: The statistics of the datasets Dataset #Graphs #Classes #Nodes #Edges ENZYMES 600 6 19580 74564 COIL-DEL 3900 100 83995 423048 NCI1 4110 2 122747 265506 TRIANGLES 45000 10 938438 2947024 the performance of our attack methods under defense strategies in Section 4.6. Finally, we discuss the feasible defense directions based on the experimental results in Section 4.7. 4.1 Setup 4.1.1 Dataset We evaluate our attack algorithms on four datasets: ENZYMES, COIL-DEL, NCI1, and TRIANGLES. The statistical characteristics of these datasets are summarized in Table 4. For each dataset, we randomly select 80% of the graphs for training the victim model. We assume that the attacker can only access 10% of the real data to Initialize the sample pool for cloning. Specifically, for the TRIANGLES dataset, we assume that the attacker can only use 1% of the real data. \u2022 ENZYMES [2, 58] included 600 proteins from each of the 6 Enzyme Commission top-level enzyme classes (EC classes) and the goal was to correctly predict enzyme class membership for these proteins. \u2022 COIL-DEL [45,53] contains 3900 graphs which are converted from images by the Harris corner detection algorithm [24] and Delaunay triangulation [34]. Each node of these graphs is given a feature vector that comprises the node\u2019s position. \u2022 NCI [61] is a biological dataset used for anticancer activity classification. In this dataset, each graph represents a compound, with nodes and edges representing atoms and chemical bonds, respectively. The graph labels indicate whether the corresponding compounds exhibit positive or negative effects on lung cancer cells. \u2022 TRIANGLES [33]. Counting the number of triangles in a graph is a common task that can be solved analytically but is challenging for GNNs. This dataset contains ten different graph classes, corresponding to the number of triangles in each graph in the dataset. 4.1.2 Target Model We choose three classic graph neural networks SAGE, GCN, and GIN to verify the performance of different algorithms. \u2022 SAGE [23] collects and combines messages from node\u2019s neighbors. Due to the fact that it only requires the local topology of nodes, it can be easily implemented in largescale graphs. \u2022 GCN [32] is one of the most widely used GNNs, simplifies the GNN model using ChebNet [12]\u2019s first-order approximation. 
Compared to conventional node embedding techniques [51,69], it performs much better in graph representation learning tasks. \u2022 GIN [85] analyzes the upper bound on GNNs\u2019 performance and proposes an architecture to satisfy this bound. It is widely employed in graph classification tasks and is gradually becoming an essential baseline. 4.1.3 Baseline We compare our method with MSA-Real, JbDA(Jacobianbased Dataset Augmentation), T-RND(Targeted randomly), and Random for benchmarking purposes. \u2022 MSA-Real [60,81] utilized only real data to train the clone model. The previous works implemented model stealing attacks against node classification in this way. It serves as a benchmark without data generation and is used to assess whether synthetic samples improve attack performance. \u2022 JbDA [49] employs a data augmentation approach based on the Jacobian matrix to generate images. Specifically, JbDA calculates the gradient on the original image and adds a small noise in the direction of the gradient. The modified image is then fed into the target model as queries. \u2022 T-Rnd [31] is an improvement over JbDA, which switches from single-step perturbation to multi-step perturbation. In each perturbation step, T-Rnd algorithm chooses a target class at random and then computes the Jacobian matrix of the output that the MC produces in this class. Furthermore, T-Rnd introduces smaller noise along the gradient direction compared to JbDA when adding it to the original sample. JbDA and T-RND were originally designed for continuous inputs (images). To adapt them for the graph domain, we first relax the adjacency matrix into continuous variables, add a small noise along the gradient direction, and finally, restore the adjacency matrix to discrete variables. \u2022 Random [55] utilizes random images to steal the target model. We migrate it to the graph data and randomly generate some graphs satisfying the real graph distribution to expand the initial dataset. The number of random samples in MSA-Rnd is the same as that in other attack methods based on synthesizing samples. 4.1.4 Implementation details In our study, we construct the target model as a Graph Neural Network (GNN) comprising three hidden layers, with 8 Table 5: Attack performance(Fidelity) on various datasets. The best results are highlighted in bold. Dataset ENZYMES COIL-DEL NCI1 TRIANGLES Model SAGE GCN GIN SAGE GCN GIN SAGE GCN GIN SAGE GCN GIN MSA-Real 0.435 0.760 0.202 0.228 0.211 0.222 0.843 0.911 0.818 0.347 0.425 0.521 JbDA 0.494 0.840 0.452 0.477 0.515 0.458 0.912 0.944 0.861 0.559 0.685 0.633 T-RND 0.546 0.827 0.448 0.465 0.489 0.461 0.910 0.944 0.861 0.565 0.696 0.625 MSA-AU 0.577 0.854 0.452 0.590 0.616 0.589 0.944 0.966 0.914 0.609 0.700 0.739 Random 0.519 0.850 0.346 0.497 0.471 0.516 0.873 0.935 0.818 0.475 0.647 0.642 MSA-AD 0.669 0.890 0.502 0.650 0.691 0.665 0.947 0.963 0.909 0.623 0.693 0.723 MSA-AUD 0.694 0.892 0.600 0.661 0.721 0.713 0.962 0.969 0.919 0.632 0.714 0.775 each GNN layer containing 128 units. We adopt the average function as the pooling mechanism for the GNN layers. We employ the ReLU activation function and Adam optimizer with a learning rate of 0.01. We use the cross-entropy loss to train the target model and the clone model. The number of rounds of generating samples (N) is set to 20. The proportions of the modified adjacency matrix (\u03b1) and mixed nodes (\u03b3) are set to 0.05 and 0.1, respectively. Lastly, we report the average fidelity of 5 runs for each model-stealing attack method. 
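For orientation, the three-layer, 128-unit GNN with average pooling described above might look as follows. PyTorch Geometric and the GCN convolution are our own illustrative choices; the paper evaluates SAGE, GCN, and GIN and does not name a framework in this excerpt.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphClassifier(torch.nn.Module):
    """Three GNN layers with 128 hidden units and mean pooling (cf. Section 4.1.4)."""
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = F.relu(self.conv3(x, edge_index))
        x = global_mean_pool(x, batch)          # graph-level readout
        return self.out(x)                       # trained with cross-entropy and Adam (lr = 0.01)
```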
4.2 Attack Performance on Various Datasets In this section, we conduct experiments on four datasets: ENZYMES, COIL-DEL, NCI1, and TRIANGLES. The upper part of Table 5 represents attack methods that do not modify node features, while the lower part includes attack methods that modify both node features and adjacency matrix. From the experimental results presented in Table 5, we draw the following conclusions: Finding 1: The generated samples help to alleviate the issue of insufficient real data. Based on the results in the table, all methods that rely on generating new samples have shown a significant performance boost compared to MSA-Real. For instance, on the COIL-DEL dataset under the SAGE model, JbDA, T-RND, MSA-AU, Random, MSA-AD, and MSAAUD have demonstrated improvements of 24.9%, 23.7%, 36.2%, 26.9%, 42.2%, and 43.3% respectively over MSAReal. This suggests that the generated samples can, to some extent, replace real samples to guide clone model updates. Finding 2: Considering the uncertainty in the process of synthesizing samples can effectively improve the attack performance of the method. Compared to all other methods that solely modify the adjacency matrix, including MSA-Real, JbDA, and T-RND, MSA-AU outperforms other attack methods across all datasets and target models. Notably, on the TRIANGLES dataset under the GIN model, MSA-AU even surpasses the performance of MSA-AD and Random, two methods that modify both node features and adjacency matrix. This indicates that the samples generated by MSA-AU indeed have high values to be queried. MSA-AUD, which combines the advantages of MSA-AD and MSA-AU, simultaneously considers authenticity, uncertainty, and diversity in the process of synthesizing new samples, and shows the best performance. Finding 3: The difficulty of model stealing attacks is strongly correlated with the number of classes in the dataset. The improvements over MSA-Real achieved by other attack methods exhibit significant variations across different datasets. On the NCI1 dataset, the improvements over MSA-Real are relatively modest, even with our method MSA-AUD achieving only a 10% relative improvement. This is primarily attributed to the NCI1 dataset\u2019s limited number of classes (2 classes), making it more susceptible to model stealing attacks. Even with a small number of real samples, attackers can achieve sufficiently good stealing performance. Conversely, on the COIL-DEL dataset with 100 classes, MSA-Real only achieves around 20% stealing performance, while our methods can enhance the stealing performance to around 70%. Consequently, we will conduct further analytical experiments on the COILDEL dataset to observe the performance of different attack algorithms in various scenarios. 4.3 Imperceptibility of Different Attack Methods In this section, we conduct an analysis of the imperceptibility of attack methods. Table 6 presents the discrepancies in statistical measures between graphs generated using various attack methods and the statistical measures of the original graphs. These statistical measures include degree distribution, triangle counts, clustering coefficient, transitivity, number of cliques, and average node connectivity. Smaller discrepancies indicate a higher level of authenticity in the generated graphs. From Table 6, it can be observed that among all methods that only modify the adjacency matrix, MSA-AU exhibits the best imperceptibility due to its adherence to authenticity constraints. 
In addition, MSA-AD, as it only mixes a small amount of the topological structure from other real graphs, yields the highest 9 Table 6: Imperceptibility of different attack methods. The best results are highlighted in bold. Algorithm Degree Distribution #Triangles Clustering Coefficient Transitivity #Cliques Node Connectivity JbDA 0.11652874 11.55641026 0.076869 0.066009 4.807692 0.551384 T-RND 0.22114451 160.6102564 0.050915 0.056535 44.51795 2.867363 MSA-AU 0.02605779 4.074358970 0.019472 0.015664 1.676923 0.209449 Random 0.13860179 99.33076923 0.133064 0.139343 32.78974 3.930681 MSA-AD 0.00002089 0.017948720 5.33E-05 5.63E-05 0.015385 0.000809 MSA-AUD 0.02348852 2.917948720 0.017514 0.012399 1.471795 0.169263 0 4 8 12 16 20 Iteration 0.2 0.3 0.4 0.5 0.6 0.7 Fidelity (a) SAGE 0 4 8 12 16 20 Iteration 0.2 0.3 0.4 0.5 0.6 0.7 Fidelity (b) GCN 0 4 8 12 16 20 Iteration 0.2 0.3 0.4 0.5 0.6 0.7 Fidelity (c) GIN MSA-AUD MSA-AD MSA-AU T-RND JbDA Random MSA-Real Figure 4: Attack performance with the different number of query samples. level of authenticity in all methods. MSA-AUD\u2019s authenticity falls between MSA-AD and MSA-AU, demonstrating significantly improved authenticity compared to the baselines. Random, characterized by a highly random graph generation process, exhibits the lowest level of imperceptibility. It is pertinent to note that we did not assess MSA-AD\u2019s imperceptibility at the node feature level in this study. This is due to MSA-AD\u2019s feature distribution, which should be consistent with the original dataset since it just transfers node features and does not create new ones. 4.4 Performance with Different Knowledge 4.4.1 Query budget We present the performance variations of different attack methods during the data generation process in Fig 4. The number of queries we have is equivalent to the number of samples in the pool. The observations are as follows: Finding 1: With an increase in the query count, there is a noticeable improvement in the performance of all methods. Particularly, in the initial rounds, the performance gains for the attack methods are particularly prominent. On the SAGE model, all attack methods achieve a fidelity improvement exceeding 20% within the first four rounds. The rate of fidelity improvement gradually diminishes as the rounds progress, eventually leading to fidelity convergence. Finding 2: Across all models, our method consistently demonstrates optimal performance. In the initial rounds, the performance of MSA-AUD and MSA-AU is closely matched, but with the advancement of iteration rounds, MSA-AUD progressively exhibits significantly superior performance compared to MSA-AU. This observation underscores the effectiveness of integrating MSA-AD in alleviating the problem of lack of diversity in the generated data caused by the convergence of the clone model in MSA-AU. 4.4.2 Unware model architecture In this part, we assume that the attacker lacks knowledge of the target model\u2019s architecture. The experimental results presented in Table 7 yield the following insights: Finding 1: MSA-AUD and MSA-AU demonstrate superior performance compared to other methods, validating the effectiveness of our approach. Even when the attacker is unaware of the target model\u2019s structure, our method can still achieve a certain level of attack performance. Finding 2: When the category of the target model is unknown, the performance of all attack methods experiences a decline. 
This experimental result aligns with intuition, suggesting that safeguarding the target model against stealing attacks can begin by maintaining secrecy regarding the model\u2019s category. 4.5 Parametric Analysis In this section, we further analyze the effect of the proportion of mixed nodes, proportion of modifications on attack performance. 10 Table 7: Attack performance with different model architectures. The best results are highlighted in bold. Victim Model SAGE GCN GIN Clone Model SAGE GCN GIN SAGE GCN GIN SAGE GCN GIN MSA-Real 0.2276 0.1109 0.1333 0.2939 0.2109 0.1638 0.2458 0.1439 0.2215 JbDA 0.4769 0.2673 0.3160 0.4282 0.5147 0.4288 0.4016 0.3343 0.4577 T-RND 0.4651 0.2590 0.3099 0.4298 0.4888 0.4321 0.4061 0.3487 0.4606 MSA-AU 0.5901 0.3083 0.3362 0.5032 0.6157 0.5250 0.4087 0.3904 0.5894 Random 0.4971 0.1109 0.1833 0.5013 0.4708 0.4692 0.3288 0.2590 0.5163 MSA-AD 0.6497 0.2674 0.3106 0.5984 0.6907 0.6173 0.4497 0.4317 0.6647 MSA-AUD 0.6615 0.2869 0.3340 0.6449 0.7212 0.6631 0.4740 0.4609 0.7125 0.02 0.04 0.06 0.08 0.1 Proportion of mixed nodes 0.62 0.64 0.66 0.68 0.70 Fidelity (a) SAGE 0.02 0.04 0.06 0.08 0.1 Proportion of mixed nodes 0.64 0.66 0.68 0.70 0.72 Fidelity (b) GCN 0.02 0.04 0.06 0.08 0.1 Proportion of mixed nodes 0.60 0.62 0.64 0.66 0.68 0.70 0.72 Fidelity (c) GIN MSA-AUD MixTRND MixJbDA MSA-AD Figure 5: Attack performance with different proportions of mixed nodes. 4.5.1 Proportion of mixed nodes(\u03b3) Fig 5 illustrates the performance variations of different attack methods when increasing the proportion of mixed nodes. To rigorously assess the efficacy of MSA-AU, we integrate JbDA and T-Rnd with MSA-AD, resulting in MixJbDA and MixTRND, respectively. It is discernible that: Finding 1: MSA-AUD consistently exhibits superior performance across all scenarios. While MixJbDA and MixTRND show marginal improvements in some instances when combined with MSA-AD, these enhancements remain relatively inconspicuous. Furthermore, the integration of MSA-AD with JbDA and T-RND, given their inherently low authenticity, paradoxically tends to compromise the imperceptibility of MSA-AD. Finding 2: Elevating the mixing ratio generally leads to improved MSA-AD performance in most cases. However, MSA-AUD, MixJbDA, and MixTRND, due to their incorporation of other methods, are less influenced by changes in the mixing ratio. 4.5.2 Proportion of modifications on adjacency matrix(\u03b1) In this part, we investigate the influence of the proportions of modifications on adjacency matrix on MSA-AU and MSAAUD. The experimental results presented in Fig 6 reveal several noteworthy findings. Table 8: The detection accuracy of the defender. Attack algorithm SAGE GCN GIN MSA-AU 0.5176 0.5208 0.5401 MSA-AD 0.5337 0.5288 0.5593 MSA-AUD 0.5176 0.5032 0.5288 Finding 1: Firstly, both MSA-AU and MSA-AUD exhibit improved performance when a greater number of modifications are applied. Notably, the performance of MSA-AU appears to be nearly proportional to the number of modified edges. Conversely, MSA-AUD\u2019s performance is less evidently affected by the number of modifications. This phenomenon arises from MSA-AUD\u2019s incorporation of MSA-AD, which introduces a dual influence on its performance. Therefore, MSA-AUD is not significantly affected by the parameters of a certain strategy, aligning with the findings from the experiments in Section 4.5.1. 
Finding 2: It is important to highlight that even when modifying only 1% of the adjacency matrix, both MSA-AU and MSA-AUD exhibit noteworthy model stealing capabilities. This underscores the resilience and effectiveness of our approach, demonstrating its proficiency in model stealing attacks, even in situations involving a relatively low percentage of modifications. 11 0.02 0.04 0.06 0.08 0.1 Proportion of modification 0.50 0.55 0.60 0.65 0.70 Fidelity (a) SAGE 0.02 0.04 0.06 0.08 0.1 Proportion of modification 0.55 0.60 0.65 0.70 0.75 Fidelity (b) GCN 0.02 0.04 0.06 0.08 0.1 Proportion of modification 0.50 0.55 0.60 0.65 0.70 0.75 Fidelity (c) GIN MSA-AUD MSA-AU Figure 6: Attack performance with different proportions of modifications on adjacency matrix. 0% 5% 10% 15% 20% Perturbation 0 0.2 0.4 0.6 (a) SAGE 0% 5% 10% 15% 20% Perturbation 0 0.2 0.4 0.6 (b) GCN 0% 5% 10% 15% 20% Perturbation 0 0.2 0.4 0.6 (c) GIN MSA-AU MSA-AD MSA-AUD Figure 7: Attack performance with perturbations of different scales added to the target model\u2019s output. 4.6 Defense In this section, we analyze how our method performs under existing defense strategies. We tested two specific strategies to counter our attacks. One strategy involves adding noise to the model\u2019s output [48,62], while the other relies on the detection of generated graphs [41]. Specifically, we introduce random noise into the responses from the target model, which induces the clone model to learn predictions that are inconsistent with those of the target model. Note that we operate under the assumption that attackers can only access the hard label returned by the target model. Consequently, defenders are compelled to modify the predictions of the target model if they wish to introduce noise. As the level of noise increases, the confidence in the predictions made by the target model diminishes. The attack performance of our three methods under varying perturbations is presented in Fig 7. Our findings reveal that the noise-based defense strategy does not mitigate our attack threat. Even with the addition of 20% noise, MSA-AUD\u2019s model stealing performance remains above 50% for all three models. Furthermore, we employ state-of-the-art detection mechanisms to distinguish between our generated graphs and real graphs, which is reported in Table 8. Notably, even these advanced detection mechanisms struggle to discern the differences between our generated graphs and real ones. This is attributed to two primary reasons. First, our generation process considers imperceptibility, resulting in minimal differences between the generated and real graphs. Second, existing detection methods are designed to differentiate between real graphs and generated graphs(e.g. by generators like GANs or diffusion models).However, our generated graphs are minimally changed from real ones, and there is currently no reliable way to effectively distinguish them from genuine graphs. 4.7 Discussion Based on our experimental findings (Section 4.6), existing defense strategies are not effective against our attack methods. To address this, we propose three potential defense directions: \u2022 Section 4.4.2 suggests that maintaining the secrecy of the model\u2019s architecture and training details is a straightforward and effective defense strategy, as the attack performance degrades when the model\u2019s architecture is unknown. \u2022 As shown in Section 4.4.1, the attacker\u2019s success depends on their query budget. 
Increasing the cost of these queries is a viable defense option. \u2022 In Section 4.5, we observed that attackers perform better when they introduce larger perturbations to the data. While current detection mechanisms cannot differentiate between synthetic and genuine graphs, future research could focus on building more robust detectors based on the attacker\u2019s sample generation (e.g. adversarial training). Ethics statement. The datasets utilized in our study are all open-source, and the target model is locally trained, thereby 12 eliminating any risk of privacy breaches. Our research on model stealing attacks in the context of graph classification is motivated by the desire to encourage the community to design more secure graph classification models. 5 Related Work Graph Neural Networks Graphs are ubiquitous in our lives [9,56,74]. For instance, subway tracks in a city form a transportation graph [13]. The social network is comprised of the relationship between online users [73]. Due to the expressiveness of graphs, graph analysis problems have attracted increasing attention in recent years. When compared to DeepWalk [50], node2vec [22] and other traditional node embedding methods [10,51,68,69], GNNs [23,28,32] are far more effective, hence they have found widespread application in graph analysis jobs. GNNs can typically be divided into two families: spectral methods [25,37] and spatial methods [21,95]. Bruna et al. [5] firstly generalize the convolution operation from Euclidean data to non-Euclidean data. GCN [32], one of the most popular algorithms in this family, simplifies the GNN model using ChebNet [12]\u2019s first-order approximation. There are numerous GCN-based variants [7,15,71]. For instance, TAGCN [15] uses multiple kernels to extract neighborhood information of different receptive fields, whereas GCN uses a single convolution kernel. RGCN [71] extended GCN to accommodate heterogeneous graphs with different types of nodes and relations. GraphSage [23] is a famous spatial approach that utilizes various aggregation functions to combine node neighborhood information. It is inductive and can predict freshly inserted nodes. In addition, GraphSage restricted the amount of aggregated neighbors using sampling to be able to process large graphs. GAT [71] included the attention mechanism in its architecture. In each graph convolutional operation, distinct attention scores are learned for each of the target node\u2019s neighbors. In recent years, GNNS have been pointed out to be threatened by adversarial attacks [40, 91], attribute inference attacks [80], and model stealing attacks [60,81]. More details can be found in the recent surveys [84,94]. Model Stealing Attack An increasing number of privacy attacks [20,54,65,83] have been proposed to violate the owner\u2019s intellectual property from a data and model perspective. Unlike adversarial attacks [14,19,42], which try to undermine the performance and credibility of the target model, privacy attacks aim to violate the target model\u2019s privacy by abusing its permissions. Model stealing attack [43, 67, 72, 88], which steals various components of a black-box machine learning(ML) model(e.g. hyperparameters [75], architecture [46]), is one of the most common privacy attacks. Most recent works are mostly concerned with stealing the functionality [29,47] of the target model. To be specific, the attacker anticipates constructing a good clone of the target model(MT) through MT\u2019s query API. 
The clone model can be further used in other privacy attacks(e.g. membership inference attack [8,27,64]). With query access, Tramer et al. [72] proposed a model stealing attack against a machine learning model. Since then, researchers in fields like computer vision [78,89], generative adversarial networks [26], and recommendation systems [90] have explored model stealing attacks. Traditional model stealing attacks assume that the attacker can obtain sufficient data to achieve their goal. Papernot et al. [49] established a framework with limited access to the data distribution sample set. They perturbed the original samples to progressively obtain synthetic examples for training the clone model. The defense of model stealing attack is mainly divided into two categories. One is to detect whether the target model is under stealing attack [30,66], and the other is to prevent in advance by adding noise and other ways to destroy the availability of the clone model [1,63]. There have been few works [60,81] that focus on stealing the graph neural networks. Yun et al. [60] proposed different attacks based on the attacker\u2019s background knowledge and the responses of the target models. They assumed that the target model could provide node embedding, prediction, or t-sne projection vectors. Bang Wu. [81] assumed that it was possible to obtain a portion of the training graph and a shadow graph with the same distribution as the training graph. All of these works, however, focus on node classification while ignoring the graph-level task. 6 Conclusion and Future Work In conclusion, recent research has highlighted GNNs\u2019 vulnerability to model-stealing attacks, mainly focused on node classification tasks, and questioned their practicality. We propose strict settings using limited real data and hard-label awareness for synthetic data generation, simplifying the theft of the target model. We introduce three model-stealing methods for various scenarios: MSA-AU emphasizes uncertainty, MSA-AD adds diversity, and MSA-AUD combines both. Our experiments consistently show the effectiveness of our methods in enhancing query efficiency, and improving model-stealing performance. In the future, we will design more effective defense mechanisms to address the threats posed by model stealing attacks. 13", "introduction": "Graph data has been extensively employed across various fields, including transportation networks [13,92], social net- works [44,70], and chemical property prediction [6,39]. To analyze the graph data, numerous graph-based machine learn- ing (ML) models [51,68,69] have been posited. Graph neural networks (GNNs) [23,28,32], representing one of the most cutting-edge graph-based paradigms, have gained noteworthy attention due to their superior performance in tasks such as node classification [76,93], link prediction [36,38] and graph classification [17,35]. Nonetheless, recent studies have under- scored the susceptibility of GNNs to model stealing attacks in Table 1: The main differences between our work and related works [60,81]. Related works Our framework Target task node-level graph-level Attack capability unlimited data few real data intermediate output hard label node classification tasks [60,81]. These attacks involve acquir- ing a replica of the model deployed within Machine Learning as a Service (MLaaS) systems, facilitated through query per- missions. 
Such incursions harbor the potential to compromise the owner\u2019s intellectual property, escalate the vulnerability to adversarial activities [31,49], and heighten susceptibility to membership inference attacks [64,96], thereby undermining the credibility and privacy of the models. While prior work [60,81] has explored model stealing at- tacks targeting GNNs in node-level classification tasks, the threats of GNNs in graph-level classification tasks have re- mained unexplored. Moreover, previous research [60,81] as- sumed that attackers could access a substantial volume of real data, which comes from the same distribution with the target model\u2019s training data. They utilized query permissions on the target model to obtain prediction results for samples, which could be either confidence vectors of the classes likelihood or node representations. Subsequently, the attacker harnessed these data samples and prediction results to facilitate the replication of the target model by the clone model. How- ever, the assumption of unrestricted access to source-similar data proves infeasible in graph-level tasks [80]. For exam- ple, in the case of AlphaFold [11], its training data involves the resource-intensive constructions of biological networks, rendering it a formidable challenge for potential attackers to obtain a substantial amount of source-similar data. Addition- ally, not all models deployed on MLaaS systems possess the capability to furnish comprehensive predictive results. Most MLaaS systems [57,78] merely offer categorical labels for samples (e.g., the image depicts a cat or a dog, rather than the 1 arXiv:2312.10943v2 [cs.LG] 26 Dec 2023 probabilities of the image belonging to various categories). These challenges force us to urgently study more practical model stealing attacks for graph classification. To this end, we assume that attackers can only access a limited amount of source-similar data to the target model and resolve the issue of insufficient real samples through the generation of new samples. We propose two principles for the generation process of new samples to ensure the attack\u2019s imperceptibility and effectiveness. \u2022 Principle 1. Authenticity. Generated samples should ex- hibit minimal differences from real samples while maintain- ing an adequate level of authenticity. \u2022 Principle 2. Query Value. Generated samples should pos- sess sufficient query value to enhance the effectiveness of the target model simulation, that is, more informative for mimicking the target model\u2019s behavior. It is evident that the constraints of Principle 1 preserve the requisite covert nature of generated samples, thereby eluding the detection mechanisms of the target model. Simultaneously, adherence to the ideal properties of Principle 2 augments the efficacy of the attack, allowing for heightened performance within the confines of a restricted query budget. Grounded in these two fundamental principles, we develop three model stealing attacks, MSA-AU, MSA-AD, and MSA-AUD. MSA-AU emphasizes authenticity and uncertainty. Drawing inspiration from the doctrines of active learn- ing [52,59], it deems samples yield augmented uncertainty in the predictions of the extant model as possessing elevated query worth. This elevation in query value stems from the fact that heightened uncertainty suggests the proximity of these samples to the model\u2019s decision threshold. 
As a consequence, affixing labels to these samples characterized by substantial uncertainty can substantially assist the model in rectifying prior fallacious predictions. Building on it, we adopt the idea of adversarial attacks to introduce perturbations in the ac- cessible real samples, aiming to get samples with higher un- certainty. Notably, we focused on the perturbations on the graph\u2019s adjacency matrix rather than node features, which avoids generating unreasonable node features that undermine the requirements of Principle 1. To further maintain authentic- ity, attackers can modify only a minority of adjacency matrix positions, guided by importance scores from the clone model. By limiting the perturbation, the generated synthetic samples strike a balance between authenticity and uncertainty. MSA-AD prioritizes authenticity and diversity. The un- certain samples generated by MSA-AU rely on the clone model, which may lead to diminishing differences among the generated samples as the clone model converges. Conse- quently, this leads to assailants persistently soliciting redun- dant samples, depleting the query budget ineffectually. Draw- ing inspiration from the concept of active learning [4, 86], we introduce diversity as a measure of the query worth of a batch of samples. Accordingly, we introduce an alternative model-independent sample generation strategy: MSA-AD, to prevent excessive similarity among the generated sam- ples. In MSA-AD, we employ Mixup [77,87] to derive new samples by blending multiple samples. Specifically, two ran- domly selected genuine samples serve as the foundation, from which MSA-AD selects a small subset of nodes from each to form corresponding induced subgraphs. Subsequently, MSA- AD exchanges the topological structures and node attributes of these induced subgraphs. Given the minimal number of modified nodes, the statistical disparities between the gen- erated graphs and their original counterparts remain subtle. Moreover, MSA-AD applies feature copying to modify node attributes, thereby mitigating the risk of inconsistencies in feature content. To further amplify the diversity, we strive to pair the original graph with diverse graphs as extensively as possible in various iterations. MSA-AUD concurrently considers authenticity, uncer- tainty, and diversity within the generated sample ensem- ble. Specifically, we first apply the MSA-AU strategy to the original samples and subsequently use MSA-AD on the re- sults. Lastly, we feed the twice-modified new samples into the target model for querying to aid in the training of the clone model. Given that both MSA-AU and MSA-AD address the issue of attack authenticity, the disparities between the new samples and the original samples remain minimal. Benefiting from the advantages of the above two methods, MSA-AU ensures the uncertainty of the new samples, while the model- independent MSA-AD guarantees diversity within the sample collection. Our contributions can be summarized as follows: \u2022 We are the first to explore model stealing attacks for graph classification tasks. Our work involves adopting stringent assumptions regarding the attacker\u2019s capabilities, including limited real data and exclusive availability of hard labels. The proposed methods based on these assumptions enjoy both effectiveness and practicality. \u2022 We introduce important principles governing the sample generation. 
In alignment with these principles, we proffer three strategic methodologies aimed at ensuring the authen- ticity, uncertainty, and diversity of the generated samples. \u2022 Extensive experiments demonstrate that our attack method is covert, high-performing, and efficient in various scenarios. Even the latest defense methods cannot fully withstand our attack. In addition, through experimental analysis, we sug- gest several viable directions for defending against model extraction attacks in graph classification tasks." }, { "url": "http://arxiv.org/abs/2312.11571v2", "title": "Model Stealing Attack against Recommender System", "abstract": "Recent studies have demonstrated the vulnerability of recommender systems to\ndata privacy attacks. However, research on the threat to model privacy in\nrecommender systems, such as model stealing attacks, is still in its infancy.\nSome adversarial attacks have achieved model stealing attacks against\nrecommender systems, to some extent, by collecting abundant training data of\nthe target model (target data) or making a mass of queries. In this paper, we\nconstrain the volume of available target data and queries and utilize auxiliary\ndata, which shares the item set with the target data, to promote model stealing\nattacks. Although the target model treats target and auxiliary data\ndifferently, their similar behavior patterns allow them to be fused using an\nattention mechanism to assist attacks. Besides, we design stealing functions to\neffectively extract the recommendation list obtained by querying the target\nmodel. Experimental results show that the proposed methods are applicable to\nmost recommender systems and various scenarios and exhibit excellent attack\nperformance on multiple datasets.", "authors": "Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen", "published": "2023-12-18", "updated": "2023-12-26", "primary_cat": "cs.CR", "cats": [ "cs.CR", "cs.AI", "cs.LG" ], "main_content": "Recommender system [2, 26, 30] models user preferences and item attributes to recommend a list of items that users are more likely to be interested in. Most recommender systems [10, 21] generate a unique representation vector for each user to model their preferences. For example, WRMF [10] and PMF [21] use matrix factorization to decompose the user rating matrix into two matrices: user embeddings and item embeddings. The likelihood of a given interaction can be represented as the inner product of the user embedding and the item embedding. Recently, some studies [15, 44\u201346] have pointed out that recommender systems face serious data privacy risks. For example, attackers can detect whether a user has engaged in the training process of the target model by using auxiliary data and query permission [36, 41, 44]. Such attacks are called membership inference attacks [9, 27, 31]. The research on the model privacy of recommender systems is, however, still in its early stages. Countermeasures [1, 19, 43] against privacy attacks on recommender systems have also been proposed. For instance, Zhang et al. [44] proposes to use randomized recommendation lists to resist membership inference attacks on recommender systems. Model stealing attack [14, 20, 32, 33] aims to steal internal information of the target model, including hyperparameters [34], architecture [23], etc. Model stealing attacks can also be used to realize functional stealing attacks [11, 14, 24], which means building a clone model to imitate the predictions of the target model. 
The fine-tuned clone model can replace the target model in some ways. In addition, the clone model can be used for subsequent adversarial attacks [3, 18], membership inference attacks [27, 31], etc. Model stealing attacks have been widely studied in fields such as images [35, 40] and graphs [6, 38], while received little attention in recommender systems. Yue et al. [42] proposed to utilize the autoregressive nature of sequential recommender system to steal its internal information. However, their method is only applicable to sequential recommender systems and lacks systematic evaluation of the performance of model stealing attacks. 3 PROBLEM FORMULATION 3.1 Target Model After modeling users and items, the majority of recommender systems, such as matrix factorization-based recommenders [21, 28], assign a fixed-length embedding to each user and item and then use inner product operation to calculate the predicted rating of each Model Stealing Attack against Recommender System Conference\u201917, July 2017, Washington, DC, USA Table 2: Summary of the notations Notation Description \ud835\udc91\ud835\udc56 User \ud835\udc56\u2019s embedding \ud835\udc92\ud835\udc57 Item \ud835\udc57\u2019s embedding \ud835\udc5f\ud835\udc56\ud835\udc57 Predicted rating of user \ud835\udc56to item \ud835\udc57 I \ud835\udc56 Item set that user \ud835\udc56has interacted with R\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 Target model\u2019s recommendation for user \ud835\udc56 R\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 \ud835\udc56 Clone model\u2019s recommendation for user \ud835\udc56 \ud835\udc92\ud835\udc4e Auxiliary item embedding \ud835\udc92\ud835\udc50 Clone item embedding \ud835\udefc\u2032 Attention coefficient for \ud835\udc92\ud835\udc50 \ud835\udefd\u2032 Attention coefficient for \ud835\udc92\ud835\udc4e \ud835\udc3f\ud835\udc46 Stealing loss function \ud835\udc3f\ud835\udc5f, \ud835\udc3f\ud835\udc5d Ranking loss and positive item loss m The value of margin user for each item. The following formula defines the predicted rating of user \ud835\udc56to item \ud835\udc57. The higher the predicted rating \ud835\udc5f\ud835\udc56\ud835\udc57, the higher the likelihood of interaction between the two. \ud835\udc5f\ud835\udc56\ud835\udc57= \ud835\udc91\ud835\udc56\u00b7 \ud835\udc92\ud835\udc57, (1) where \ud835\udc91\ud835\udc56and \ud835\udc92\ud835\udc57are embeddings of user \ud835\udc56and item \ud835\udc57, respectively. After calculating the predicted rating of the user \ud835\udc56for all items, the recommender system sorts these items based on their rating and recommends the highest-rated items to the user, excluding those that have already been interacted with. R\ud835\udc56= arg max \ud835\udc57\u2209I \ud835\udc56 \ud835\udc5f\ud835\udc56\ud835\udc57. (2) I \ud835\udc56represents the set of items that user \ud835\udc56has interacted with, and R\ud835\udc56the recommended item list for user \ud835\udc56. The notations described in this paper are summarized in an easy-to-read format in Table 2. 3.2 Threat Model Adversary\u2019s Goal. Model stealing attacks, also known as MSA, are presented in order to set up a local replica, also known as a clone model, of the target model. The fidelity, which promotes the clone model to deliver the same prediction for each sample as the target model, is the evaluation metric used for most model stealing attacks. 
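To make the target model concrete, a minimal sketch of the inner-product scoring and top-K recommendation in Eqs. (1)-(2) could look as follows; the array names, the use of NumPy, and the recommendation length k are illustrative assumptions rather than the target system's actual implementation.

```python
import numpy as np

def recommend_top_k(P, Q, interacted, k=50):
    """Sketch of the target recommender in Eqs. (1)-(2).

    P: (num_users, d) user embeddings, Q: (num_items, d) item embeddings.
    interacted: dict mapping user index -> set of item indices already seen.
    Returns a dict user -> list of k recommended item indices.
    """
    ratings = P @ Q.T                      # r_ij = p_i . q_j  (Eq. 1)
    recs = {}
    for u in range(P.shape[0]):
        scores = ratings[u].copy()
        seen = list(interacted.get(u, set()))
        scores[seen] = -np.inf             # exclude items in I_i  (Eq. 2)
        recs[u] = list(np.argsort(-scores)[:k])
    return recs

# toy usage with random embeddings
rng = np.random.default_rng(0)
P, Q = rng.normal(size=(4, 8)), rng.normal(size=(20, 8))
print(recommend_top_k(P, Q, {0: {1, 2}}, k=5))
```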
In recommender systems, the target model tries to generate a user-preferred item list for each user. Thus, we hope that the clone model will recommend items for the same user that align with the recommendations made by the target model. We use Agreement (Agr) [42] to analyze the attack performance of model stealing attacks on recommender systems. The Agr for user \ud835\udc56is constructed according to the following: \ud835\udc34\ud835\udc54\ud835\udc5f\ud835\udc56= |R\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 \u2229R\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 \ud835\udc56 | |R\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 | , (3) where R\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 and R\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 \ud835\udc56 are recommendations of the target model and the clone model for user \ud835\udc56, respectively. |R\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 | is the length of recommendation, which equals to |R\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 \ud835\udc56 |. Adversary\u2019s Knowledge. In this part, we reviewed three forms of knowledge that an attacker could have. Firstly, compared with previous works [16, 39] that use all the target data, we assume that Table 3: Attack knowledge of our methods Method Partial Target data Auxiliary data Query PTA \u2713 \u2713 PTQ \u2713 \u2713 PTAQ \u2713 \u2713 \u2713 the attacker is able to acquire a minute portion (10% in our experiments) of the target data. This data may originate from the target model\u2019s public dataset or accounts stolen by the attacker. Secondly, in reference to previous privacy attacks [36, 44], we suppose that the adversary may have an auxiliary dataset that comes from the same distribution as the target data. In this assumption, the attacker may be a commercial competitor of the target model, with auxiliary data similar to the target data in terms of item sets. Finally, the attacker may obtain the recommendations of the target model on the available target data by query permission like QSD [42]. 4 ATTACK METHODOLOGY In this section, we propose three algorithms based on the attacker\u2019s permissions. The methods and corresponding knowledge required are shown in Table 3. We will demonstrate step-by-step how to construct a clone model that imitates the target model\u2019s recommendation by using the three types of attack knowledge. In general, when the attacker can obtain a portion of the target data, we force the clone model to provide high ratings for interacted items(Section 4.1). When the attacker can obtain auxiliary data, we fuse the auxiliary item embeddings obtained from the auxiliary data into the item embeddings of the clone model and use an attention mechanism to assign reasonable weights to them (Section 4.2). Finally, for attackers who have query permission to the target model, we design a stealing function to extract two kinds of information from the target model, namely ranking information and recommended item information (Section 4.3). 4.1 Partial Target Data In this subsection, we introduced how to establish a clone model in the scenario where the attacker only has access to partial target data. The training objective of the target model is to furnish personalized recommendations to the target users. 
Therefore, during the training phase, the target model is inclined to assign higher ratings to items interacting with these users. Leveraging this characteristic, we employ the same training target to train the clone model, with the expectation that it will also provide personalized recommendations to available target users. Consider, for instance, the Bayesian Personalized Ranking (BPR) model. The input format of the BPR model is a triplet of the type (user ID, positive item ID, negative items ID). Specifically, the user ID and positive item ID come from the user\u2019s interaction record, while the negative items are sampled from the remaining items. For instance, (user ID 5, positive item ID 4, negative item ID 3) indicates that user 5 has interacted with item 4, not item 3. When training the recommender systems, BPR establishes a pair-wise loss function for each pair of positive and negative items. The goal of this function is to produce higher ratings for positive items compared to the ratings Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen of negative items. The following gives BPR\u2019s loss function: \ud835\udc3f\ud835\udc35\ud835\udc43\ud835\udc45(\ud835\udc5f\ud835\udc56\ud835\udc5d,\ud835\udc5f\ud835\udc56\ud835\udc5b) = \u2212ln\ud835\udf0e(\ud835\udc5f\ud835\udc56\ud835\udc5d\u2212\ud835\udc5f\ud835\udc56\ud835\udc5b). (4) \ud835\udc5f\ud835\udc56\ud835\udc5dand \ud835\udc5f\ud835\udc56\ud835\udc5bare user \ud835\udc56\u2019s predicted ratings for positive item \ud835\udc5dand negative item \ud835\udc5b. \ud835\udf0e(\u00b7) is the sigmoid function. 4.2 Auxiliary Data In this subsection, we will discuss how incorporating auxiliary data into our approach will work. This kind of scenario often takes place when the attacker is a rival of the target model or when the attacker has obtained auxiliary data from other open platforms. Considering that the auxiliary data is independent of the target model, simply employing the auxiliary data to train the clone model, as Section 4.1 does, does not assist the clone model in becoming a precise imitation of the target model. Undeniably, auxiliary data and target data are often generated by similar users, which may make them contain similar behavior patterns. Then, fusing information of the auxiliary data into the modeling of the clone model\u2019s item attribute is likely to better help to emulate the target model. First, in order to mine the hidden information in the auxiliary data, we train an auxiliary model on it (note that the auxiliary model is not the clone model). The item embeddings of the auxiliary data trained on the auxiliary model with the same architecture as the clone model has the item embeddings aligned with the ones in the clone model. That is, each row in the item embedding matrix of the clone model corresponds to each row in the item embedding matrix of the auxiliary model. Then, we employ a weighted addition approach to combine the clone item embeddings and auxiliary item embeddings, utilizing an attention mechanism. This mechanism is a unique structure in machine learning models that facilitates automatic learning and calculation of the respective contributions of the clone and auxiliary item embeddings towards the ultimate fused item embedding. Our approach involves the implementation of a single-layer neural network that utilizes a non-linear activation function to compute the weight of both the clone item embedding and auxiliary item embedding. 
Furthermore, in order to tailor the weights for individual users, we incorporate user embeddings into the weight computation procedure. To predict user \ud835\udc56\u2019s rating of item \ud835\udc57, \ud835\udc5f\ud835\udc56\ud835\udc57, we first obtain user embedding \ud835\udc91\ud835\udc56, clone item embedding \ud835\udc92\ud835\udc50 \ud835\udc57, and auxiliary item embedding \ud835\udc92\ud835\udc4e \ud835\udc57. Then, the attention coefficient between user embedding and clone item embedding, auxiliary item embedding is calculated by the following formula. \ud835\udefc= \ud835\udc98\ud835\udc47\ud835\udc45\ud835\udc52\ud835\udc3f\ud835\udc48(\ud835\udc91\ud835\udc56\u2299\ud835\udc92\ud835\udc50 \ud835\udc57) + \ud835\udc4f, \ud835\udefd= \ud835\udc98\ud835\udc47\ud835\udc45\ud835\udc52\ud835\udc3f\ud835\udc48(\ud835\udc91\ud835\udc56\u2299\ud835\udc92\ud835\udc4e \ud835\udc57) + \ud835\udc4f, (5) where \u2299represents Hadamard product(element-wise product). \ud835\udc98 and \ud835\udc4fare parameters of the neural network. ReLU is a non-linear activation function. In order to keep the magnitude of the final item embedding constant, we normalize the obtained \ud835\udefcand \ud835\udefd, to guarantee that the sum of \ud835\udefc\u2032 and \ud835\udefd\u2032 equals 1. \ud835\udefc\u2032 = \ud835\udc52\ud835\udefc \ud835\udc52\ud835\udefc+ \ud835\udc52\ud835\udefd, \ud835\udefd\u2032 = \ud835\udc52\ud835\udefd \ud835\udc52\ud835\udefc+ \ud835\udc52\ud835\udefd. (6) Algorithm 1: Algorithm of PTAQ Input: Available target data \ud835\udc37\ud835\udc61; the auxiliary data \ud835\udc37\ud835\udc4e; Query permission to the target model \ud835\udc40\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61; Output: The trained clone model \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 1 Train \ud835\udc91\ud835\udc4e, \ud835\udc92\ud835\udc4ewith \ud835\udc37\ud835\udc4e; // Parameters of \ud835\udc40\ud835\udc4e\ud835\udc62\ud835\udc65 2 Random Initialize \ud835\udc91, \ud835\udc92\ud835\udc50; // Parameters of \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 3 while not converge do // train \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52with \ud835\udc37\ud835\udc61 4 \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60\u21900; 5 for (\ud835\udc56, \ud835\udc57) \u2208\ud835\udc37\ud835\udc61do // user-item pair 6 \ud835\udc5f\ud835\udc56\ud835\udc57\u2190\ud835\udc91\ud835\udc56\u00b7weighted_sum(\ud835\udc92\ud835\udc50 \ud835\udc57, \ud835\udc92\ud835\udc4e \ud835\udc57) ; // Eq. 8 7 for \ud835\udc58\u2208Sampled Negative Items do 8 \ud835\udc5f\ud835\udc56\ud835\udc58\u2190\ud835\udc91\ud835\udc56\u00b7weighted_sum(\ud835\udc92\ud835\udc50 \ud835\udc58, \ud835\udc92\ud835\udc4e \ud835\udc58) ; // Eq. 8 9 \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60+ = \ud835\udc3f\ud835\udc35\ud835\udc43\ud835\udc45(\ud835\udc5f\ud835\udc56\ud835\udc57,\ud835\udc5f\ud835\udc56\ud835\udc58) ; // Eq. 
4 10 end 11 end 12 Update \ud835\udc91, \ud835\udc92\ud835\udc50by \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60; 13 end 14 while not converge do // Fine-tune \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 15 \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60\u21900; 16 for user \ud835\udc56\u2208\ud835\udc37\ud835\udc61do 17 \ud835\udc45\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 \u2190\ud835\udc40\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61\u2019s recommendation list for \ud835\udc56; 18 for \ud835\udc57\u2208\ud835\udc45\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 do 19 \ud835\udc5f\ud835\udc56\ud835\udc57\u2190\ud835\udc91\ud835\udc56\u00b7weighted_sum(\ud835\udc92\ud835\udc50 \ud835\udc57, \ud835\udc92\ud835\udc4e \ud835\udc57) ; // Eq. 8 20 for \ud835\udc58\u2208Sampled Negative Items do 21 \ud835\udc5f\ud835\udc56\ud835\udc58\u2190\ud835\udc91\ud835\udc56\u00b7weighted_sum(\ud835\udc92\ud835\udc50 \ud835\udc58, \ud835\udc92\ud835\udc4e \ud835\udc58) ; // Eq. 8 22 \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60+ = \ud835\udc3f\ud835\udc3b\ud835\udc56\ud835\udc5b\ud835\udc54\ud835\udc52(\ud835\udc5f\ud835\udc56\ud835\udc57,\ud835\udc5f\ud835\udc56\ud835\udc58) ; // Eq. 9 23 end 24 \ud835\udc57\u2032 \u2190the next item of \ud835\udc57in \ud835\udc45\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 ; 25 \ud835\udc5f\ud835\udc56\ud835\udc57\u2032 \u2190\ud835\udc91\ud835\udc56\u00b7weighted_sum(\ud835\udc92\ud835\udc50 \ud835\udc57\u2032, \ud835\udc92\ud835\udc4e \ud835\udc57\u2032) ; // Eq. 8 26 \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60+ = \ud835\udc3f\ud835\udc35\ud835\udc43\ud835\udc45(\ud835\udc5f\ud835\udc56\ud835\udc57,\ud835\udc5f\ud835\udc56\ud835\udc57\u2032) ; // Eq. 4 27 end 28 end 29 Update \ud835\udc91, \ud835\udc92\ud835\udc50by \ud835\udc3f\ud835\udc5c\ud835\udc60\ud835\udc60; 30 end After calculating the weight pairs, we fuse the clone and the auxiliary item embedding to obtain the fused item embedding. \ud835\udc92\ud835\udc53 \ud835\udc57= \ud835\udefc\u2032\ud835\udc92\ud835\udc50 \ud835\udc57+ \ud835\udefd\u2032\ud835\udc92\ud835\udc4e \ud835\udc57. (7) We perform inner product operations on the user embeddings and fused item embeddings to obtain their predicted ratings. \ud835\udc5f\ud835\udc56\ud835\udc57= \ud835\udc91\ud835\udc56\u00b7 \ud835\udc92\ud835\udc53 \ud835\udc57. (8) The process of utilizing the auxiliary data is shown in Fig 1. 4.3 Query Permission In this subsection, we assume that the attacker has query permission and can obtain the ordered recommendation list of the target data from the target model. Inspired by the previous work [42], we extract two pieces of information from the recommendation, namely recommended item information and ranking information. 
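Before turning to these query-based signals, the attention-based fusion of Section 4.2 (Eqs. (5)-(8)) can be sketched as follows. This is a hedged, minimal PyTorch illustration: the module and tensor names are ours, the single linear layer over the ReLU of the Hadamard product mirrors the description above, and the dimensions are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Sketch of fusing clone and auxiliary item embeddings (Eqs. (5)-(8))."""

    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, 1)   # shared weight vector w and bias b in Eq. (5)

    def forward(self, p_i, q_c, q_a):
        # attention logits for the clone / auxiliary item embeddings (Eq. (5))
        alpha = self.w(torch.relu(p_i * q_c))
        beta = self.w(torch.relu(p_i * q_a))
        # normalize so that alpha' + beta' sum to 1 (Eq. (6))
        weights = torch.softmax(torch.cat([alpha, beta], dim=-1), dim=-1)
        alpha_p, beta_p = weights[..., :1], weights[..., 1:]
        # fused item embedding (Eq. (7)) and predicted rating (Eq. (8))
        q_f = alpha_p * q_c + beta_p * q_a
        return (p_i * q_f).sum(-1)

# toy usage: a batch of 3 (user, item) pairs with 16-dimensional embeddings
fusion = AttentiveFusion(16)
p, qc, qa = (torch.randn(3, 16) for _ in range(3))
print(fusion(p, qc, qa).shape)   # torch.Size([3])
```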
The recommended item information relates to the fact that users are thought to be more likely to interact with the recommended items than with other items. The ranking information indicates that the user is more likely to prefer the items at the top of the recommendation list than those at the bottom. Figure 1: The process of utilizing the auxiliary data. We adopt the margin hinge loss to extract the recommended item information: L_Hinge(r_ip, r_in) = max(0, m − (r_ip − r_in)), (9) where r_ip and r_in are user i's predicted ratings for positive item p and negative item n, and m is the value of the margin. The margin hinge loss instructs the model to adjust its parameters so that the rating difference between the positive and negative items exceeds the margin value. This explicit design allows us to better control the constraints on the model's predicted ratings. We use the BPR loss function from Section 4.1 to exploit the ranking information. We design a stealing loss function that considers both ranking information and recommended item information: L_S^i = L_r^i + L_p^i, (10) where L_S^i, L_r^i, and L_p^i are user i's stealing loss, ranking loss, and positive item loss, respectively. L_r^i and L_p^i are given by: L_r^i = Σ_{j ∈ R_i^target} L_BPR(r_ij, r_ij′), (11) L_p^i = Σ_{j ∈ R_i^target} Σ_{k ∈ n_j} L_Hinge(r_ij, r_ik), (12) where R_i^target is the target model's recommendation for user i.
\ud835\udc57\u2032 is the item after item \ud835\udc57in the recommendation list, and \ud835\udc5b\ud835\udc57is the negative item set for item \ud835\udc57. 4.4 Overall Workflow In this subsection, we describe the workflow of our methods using BPR as the target model. Here we mainly focus on PTAQ with three types of knowledge, and its algorithm flow is shown in Alg. 1. We first train the auxiliary model \ud835\udc40\ud835\udc4e\ud835\udc62\ud835\udc65with auxiliary data \ud835\udc37\ud835\udc4e(line 1). The parameters of \ud835\udc40\ud835\udc4e\ud835\udc62\ud835\udc65are denoted as \ud835\udc91\ud835\udc4e, \ud835\udc92\ud835\udc4e, which refer to user and item embeddings. After that, we randomly initialize \ud835\udc91, \ud835\udc92\ud835\udc50, the parameters of \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52(line 2), and carefully train the parameters by BPR loss(line 3-13). During the training, the predicted rating for each user-item pair is mainly based on Eq.8, which incorporates the weighted sum of item embedding of \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52and \ud835\udc40\ud835\udc4e\ud835\udc62\ud835\udc65. Once the converged \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52is obtained, \ud835\udc37\ud835\udc61is utilized again to fine-tune \ud835\udc40\ud835\udc50\ud835\udc59\ud835\udc5c\ud835\udc5b\ud835\udc52 (line 14-30). In the fine-tuning process, we iterate each user in \ud835\udc37\ud835\udc61 and query \ud835\udc40\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61with the user id to obtain \ud835\udc40\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61\u2019s recommendation list \ud835\udc45\ud835\udc61\ud835\udc4e\ud835\udc5f\ud835\udc54\ud835\udc52\ud835\udc61 \ud835\udc56 . After that, Hinge Loss and BPR Loss are adopted to extract positive item information and ranking information. For PTA that lacks query permission compared to PTAQ, the fine-tuning phase (Line 14-30 of Alg. 1) will be lacking. For PTQ that lacks auxiliary data, the function \ud835\udc64\ud835\udc52\ud835\udc56\ud835\udc54\u210e\ud835\udc61\ud835\udc52\ud835\udc51_\ud835\udc60\ud835\udc62\ud835\udc5a(\ud835\udc5e\ud835\udc50 \ud835\udc56,\ud835\udc5e\ud835\udc4e \ud835\udc56) (\ud835\udc56= \ud835\udc57,\ud835\udc58,\ud835\udc5c\ud835\udc5f\ud835\udc57\u2032 in Alg. 1) will be replaced by the item embedding \ud835\udc5e\ud835\udc50 \ud835\udc56(\ud835\udc56= \ud835\udc57,\ud835\udc58,\ud835\udc5c\ud835\udc5f\ud835\udc57\u2032 in Alg. 1) of the clone model. 5 EVALUATION In this section, we evaluate model stealing attacks against multiple classic recommender systems on diverse datasets. We assess the attack methods on three real-world datasets, including ML1M(MovieLens-1M) [5], Ta-feng1, and Steam2. We randomly split each dataset evenly into two disjoint subsets: a target dataset and an auxiliary dataset, and report the average Agreement (Agr) [42] of 5 runs for each method to avoid bias. The target and auxiliary data do not have a common user, but they do have common items. We conduct experiments against three classic recommender systems, namely BPR(Bayesian Personalized Ranking), NCF(Neural Collaborative Filtering), and LMF(Logistic Matrix Factorization). We adopt PTD [16, 22] and QSD [42] as baselines. The detailed experimental setup is included in the Appendix A.1. 
Following the experimental setup, we investigate the performance of different attack methods across multiple datasets in Section 5.1 and carry out an in-depth analysis of the impact of attack knowledge in Section 5.2. Furthermore, in Section 5.3 we go through the auxiliary data in detail. In addition, we discuss the influence of query budget and stealing loss function in the Appendix A. 1https://www.kaggle.com/datasets/chiranjivdas09/ta-feng-grocery-dataset 2https://www.kaggle.com/datasets/tamber/steam-video-games Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen Table 4: Attack performance(Agr) on various datasets. Algorithms ML-1M Ta-feng Steam BPR NCF LMF BPR NCF LMF BPR NCF LMF QSD 0.0533 0.2727 0.0487 0.0598 0.4398 0.2371 0.3561 0.8710 0.3721 PTD 0.4987 0.3633 0.2747 0.4718 0.6526 0.4537 0.7773 0.8512 0.0181 PTQ 0.5107 0.3633 0.2873 0.4845 0.7518 0.7986 0.9276 0.9813 0.2496 PTA 0.6280 0.4400 0.5266 0.5135 0.6545 0.5488 0.8957 0.8899 0.6817 PTAQ 0.6506 0.4407 0.5627 0.5141 0.7692 0.8038 0.9476 0.9833 0.9553 10 20 30 40 50 60 70 80 80 100 Recommendation Length 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Agreement (a) BPR 10 20 30 40 50 60 70 80 80 100 Recommendation Length 0.1 0.2 0.3 0.4 0.5 0.6 Agreement (b) NCF 10 20 30 40 50 60 70 80 80 100 Recommendation Length 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Agreement (c) LMF PTAQ PTA PTQ PTD QSD Figure 2: Attack performance with various recommendation lengths on ML-1M. 0.02 0.04 0.06 0.08 0.1 The Size of Available T arget Data 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Agreement (a) BPR 0.02 0.04 0.06 0.08 0.1 The Size of Available T arget Data 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Agreement (b) NCF 0.02 0.04 0.06 0.08 0.1 The Size of Available T arget Data 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Agreement (c) LMF PTAQ PTA PTQ PTD QSD Figure 3: Attack performance with different sizes of available target data on ML-1M. 5.1 Attack Performance on Various Datasets In this section, we perform experiments on ML-1M, Ta-feng, and Steam datasets against three recommendation models, including BPR, NCF, and LMF. The experimental results show that all these methods can steal model information to some extent. Based on the experimental results in Table 4, we draw the following conclusions: \u2022 Auxiliary data and query permission can effectively improve the performance of model stealing attacks. PTAQ, which fuses both auxiliary data and query feedback information, achieves the best attack performance in all scenarios. \u2022 The QSD algorithm has low performance across several datasets. QSD was designed for sequential recommender systems, and its performance is dependent on producing a large number of fake queries, which are not applicable to our assumptions. In the appendix, we compare the performance of our methods with QSD on sequential recommendation models. \u2022 It is difficult to steal the model information of LMF when we can only obtain a few target data. On the ML-1M, Ta-feng, and Steam datasets, PTD\u2019s stealing performance on LMF was only 27.47%, 45.37%, and 1.81%, respectively. When the attacker makes use of ample external information, such as auxiliary data and query feedback, this issue is significantly reduced. 5.2 Performance with Different Knowledge 5.2.1 Recommendation Length. In this part, we set the recommendation list length from 10 to 100 and analyze the impact of the recommendation list length on the performance of the attack method in Fig 2. 
Typically, the effectiveness of algorithms tends to increase with the lengthening of recommendation lists. On NCF, PTA is comparable to that of PTAQ, and PTD is comparable to that of PTQ, which is consistent with the experimental results in Section 4. It is probably because of the large number of user interactions that are recorded in the ML-1M dataset. As a consequence of this, the available target data and auxiliary data contain almost as much additional information as the recommendation lists can provide. While it can be seen from Table 4 that PTQ is significantly improved when compared to PTD in the sparsely interacting Steam dataset. 5.2.2 Sizes of Available Target Data. Figure 3 analyzed attack performances with respect to the varying size of the target data accessible to the attacker. We can obeserve that the efficacy of the attack method is proportional to the magnitude of the available target Model Stealing Attack against Recommender System Conference\u201917, July 2017, Washington, DC, USA 0.2 0.4 0.6 0.8 1.0 The Size of Available Auxiliary Data 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 Agreement (a) ML-1M 0.2 0.4 0.6 0.8 1.0 The Size of Available Auxiliary Data 0.4 0.5 0.6 0.7 0.8 Agreement (b) T a-feng 0.2 0.4 0.6 0.8 1.0 The Size of Available Auxiliary Data 0.2 0.4 0.6 0.8 1.0 Agreement (c) Steam PTAQ PTA PTD Figure 4: Attack performance with different sizes of available auxiliary data aginst LMF. 0.2 0.4 0.6 0.8 1.0 The Proportion of Intersection Items 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 Agreement (a) ML-1M 0.2 0.4 0.6 0.8 1.0 The Proportion of Intersection Items 0.4 0.5 0.6 0.7 0.8 Agreement (b) T a-feng 0.2 0.4 0.6 0.8 1.0 The Proportion of Intersection Items 0.2 0.4 0.6 0.8 1.0 Agreement (c) Steam PTAQ PTA PTD Figure 5: Attack performance with different proportions of intersection items aginst LMF. Table 5: Attack performance with different ways to exploit the auxiliary data Algorithms ML-1M Ta-feng Steam BPR NCF LMF BPR NCF LMF BPR NCF LMF PTA(Pre) 0.4900 0.3793 0.2807 0.4629 0.6533 0.4598 0.7918 0.8594 0.0167 PTAQ(Pre) 0.4833 0.3793 0.2900 0.4598 0.7492 0.7886 0.9083 0.9841 0.1370 PTA 0.6280 0.4400 0.5266 0.5135 0.6545 0.5488 0.8957 0.8899 0.6817 PTAQ 0.6506 0.4407 0.5627 0.5141 0.7692 0.8038 0.9476 0.9833 0.9553 dataset. The main reason is that more available target data can provide us with more interactions and query feedback information, thereby improving the attack effect of the clone model. 5.2.3 Knowable Model Architecture. . In this section, we eliminate the assumption that the clone model exhibits identical architecture to that of the target model. Figure 6 depicts the utilization of heatmaps to visually represent the attack efficacy across various permutations of the target model and the clone model\u2019s architectures. The vertical axis denotes the structural design of the target model, while the horizontal axis represents the structural design of the clone model. The visual representation of stronger attack effects is denoted by darker colors. Based on the empirical evidence presented in Figure 6, the following inferences can be made: \u2022 In scenarios where the attacker lacks knowledge of the underlying architecture of the target model, our attack capabilities remain adequate. \u2022 The utilization of BPR as the clone model shows superior attack efficacy in comparison to alternative models. This finding suggests that prior knowledge of the target model\u2019s architecture is not a prerequisite. 
Identifying an appropriate architecture for the clone model is possible to enhance the attack performance. 5.3 Auxiliary Dataset 5.3.1 Attention Mechanism or Pretraining. In this section, an alternative approach to exploit auxiliary data is investigated. The method called Pretraining(Pre) uses the auxiliary item embeddings as the initial state of the item embeddings in the clone model. We compare the performance of PTA and PTAQ using pretraining and attention mechanisms on three datasets in Table 5. It can be observed that using the attention mechanism to fuse auxiliary data can achieve more powerful attack performance than the pretraining method, regardless of whether the attacker has query permission. 5.3.2 Sizes of Available Auxiliary Data. Figure 4 analyzed attack performances with respect to the varying size of available auxiliary data. From the experimental results, we find that even with limited auxiliary data, our method can produce a significant improvement in terms of PTD. Furthermore, the performance of PTA and PTAQ tends to be proportional to the number of auxiliary data. 5.3.3 Sizes of Iteractions Items. We adjusted the overlap ratio(from 10% to 100%) between items in the auxiliary data and target data, meaning attackers can only use the auxiliary item embeddings overlapping with the target items for their modeling. The experimental Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen BPR NCF LMF Clone Model LMF NCF BPR T arget Model 0.0487 0.2887 0.0487 0.0440 0.2727 0.0440 0.0533 0.3253 0.0533 QSD BPR NCF LMF Clone Model LMF NCF BPR T arget Model 0.4440 0.3627 0.2747 0.4560 0.3633 0.2773 0.4987 0.4020 0.2953 PTD BPR NCF LMF Clone Model LMF NCF BPR T arget Model 0.4487 0.3513 0.2873 0.4587 0.3633 0.2953 0.5107 0.4020 0.3192 PTQ BPR NCF LMF Clone Model LMF NCF BPR T arget Model 0.5280 0.4167 0.5266 0.5413 0.4400 0.4787 0.6280 0.4953 0.5547 PTA BPR NCF LMF Clone Model LMF NCF BPR T arget Model 0.5367 0.4160 0.5627 0.5413 0.4407 0.4840 0.6506 0.4953 0.5687 PTAQ 0.0 0.2 0.4 0.6 0.8 1.0 Figure 6: Attack performance with different model architectures on ML-1M. 0 10 20 30 40 50 The Number of Mixed Items 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Agreement (a) BPR 0 10 20 30 40 50 The Number of Mixed Items 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 Agreement (b) NCF 0 10 20 30 40 50 The Number of Mixed Items 0.1 0.2 0.3 0.4 0.5 0.6 Agreement (c) LMF PTAQ PTA PTQ PTD QSD Figure 7: Attack performance with mixed recommendations on ML-1M. 0 10 20 30 40 50 The Number of Mixed Items 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Recall BPR NCF LMF Figure 8: Recommendation performance with mixed recommendations on ML-1M. results in Figure 5 show that, the utilisation of auxiliary data, despite a low overlap ratio with the target data, demonstrates a noticeable enhancement in the stealing performance. 6 DEFENSE It is of the utmost need to develop efficient techniques, as soon as possible, to reduce the harm caused by model stealing attacks. In this section, we construct a defensive method based on altering the recommendation list of the target model. We introduce inaccuracies into the query feedback information that the attacker obtained in order to mislead them into mimicking recommendation behaviors that are incompatible with the target model. Specifically, we first calculate item popularity based on the number of times it interacts with users. Then for each recommendation, we select some recommended items and replace them with popular items. 
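As a hedged illustration of this mixing strategy (the function names, popularity computation, and random replacement scheme are our assumptions, not the paper's exact procedure), the defense could be sketched as follows.

```python
import numpy as np

def mix_popular_items(rec_list, interactions, num_mix, top_pop=100, seed=0):
    """Sketch of the mixing defense: replace `num_mix` recommended items
    with items drawn from the `top_pop` most popular ones.

    rec_list: ordered list of recommended item ids for one user.
    interactions: 1-D array where interactions[j] counts item j's interactions.
    """
    rng = np.random.default_rng(seed)
    popular = np.argsort(-interactions)[:top_pop]             # most popular items
    replacements = rng.choice(popular, size=num_mix, replace=False)
    positions = rng.choice(len(rec_list), size=num_mix, replace=False)
    mixed = list(rec_list)
    for pos, item in zip(positions, replacements):
        mixed[pos] = int(item)                                # overwrite with a popular item
    return mixed

# toy usage: a length-50 recommendation list, 10 items replaced
counts = np.random.default_rng(1).integers(0, 500, size=1000)
rec = list(range(50))
print(mix_popular_items(rec, counts, num_mix=10))
```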
While the inclusion of popular items may not necessarily align with the user\u2019s preferences, it is typically the case that such items do not elicit strong negative reactions from the user. We fix the length of the recommendation list to 50 and randomly select 5-25 items from the 100 most popular items to replace an equal number of items in the recommendation list. We use Recall [4] to assess the impact of the mixed recommendation on recommendation performance. We report the changes in the attack performance of different algorithms and the recommendation performance of the target model in Fig 7 and Fig 8, respectively. Based on the experimental results, we draw the following conclusions: \u2022 The mixed defense strategy can effectively resist model stealing attacks. This is due to two factors. On the one hand, mixing popular items into the recommendation list misleads attackers utilizing query permission. On the other hand, adding some random popular items to the recommendation list pushes the target model to give recommendations based on not only user preferences but also randomness, making it more difficult for the cloned model to imitate the target model\u2019s prediction behavior. The decreases in the attack performance of PTD and PTA also verify this assertion. \u2022 Although the mixing strategy can resist model stealing attacks to a certain extent, it inevitably causes a decrease in the recommendation performance of the target model. As the number of mixed items increases, the defense performance gradually improves, but the recommendation performance continues to decline. We need to find an acceptable balance between the defense performance and the recommendation performance. It further illustrates the serious threat of model stealing attacks to recommender systems and the need to propose more robust and powerful defense strategies in the future. 7 CONCLUSION AND FUTURE WORK In this paper, we focus on model stealing attacks against recommender systems. We present multiple strategies to leverage the various types of knowledge that can be obtained by attackers. We use an attention mechanism to fuse auxiliary data with target data and design a stealing function to extract the recommendation list of the target model. In future work, we will investigate how to better guard against model stealing attacks on recommender systems. Model Stealing Attack against Recommender System Conference\u201917, July 2017, Washington, DC, USA", "introduction": "In the contemporary age of information explosion, recommender systems have gained immense popularity in domains like market- ing [2] and e-commerce [29, 37] owing to their exceptional ability to provide personalized recommendations. By analyzing user-item interactions and external knowledge, recommender systems model user preferences and item attributes, and then recommend items that users are more likely to be interested in. In recent years, several academic investigations [15, 44, 45] have highlighted the serious privacy risks that recommender systems encounter. Attackers can exploit their query permissions and other auxiliary knowledge to peek into the training data of recommender systems [44, 48] or privacy attributes of their users. [46]. These methods pose serious threats to the security of recommender sys- tems from the perspective of data privacy. However, there is still a lack of relevant research on the model privacy leakage threats faced by recommender systems, such as model stealing attacks [20, 32, 33]. 
Model stealing attacks seek to obtain a good copy of the target model, namely the clone model. These attacks can compromise Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Conference\u201917, July 2017, Washington, DC, USA \u00a9 2023 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn Table 1: Our work vs. previous works Algorithm Target data Query QSD [42] None Abundant Adversarial Attacks [16, 39] Abundant None Our work Few Few the owner\u2019s intellectual property and increase the risk of adver- sarial attacks [3, 18] and membership inference attacks [36, 44], undermining the credibility and privacy of the models. There is currently no work that formally proposes model stealing attacks against recommender systems. Some articles [16, 39] have established surrogate models to achieve adversarial attacks against recommender systems, which have, to some extent, achieved model stealing attacks. However, the aforementioned works frequently as- sume that the attacker can obtain a large amount of training data for the target model. This assumption is unreasonable in recommender systems because confidentiality and credibility are so important in recommender systems that their owners would pay extra attention to preventing training data exposure. In addition, the attackers did not instruct the surrogate model to replicate the target model, which is the main purpose of model stealing attacks; rather, they employed the surrogate model solely as an intermediate to carry out adversarial attacks. QSD(Query with Synthesized Data) [42], another adversarial attack, makes use of synthetic data rather than target data to construct surrogate models. They synthesized user interactions by utilizing the autoregressive nature of sequential rec- ommender systems, produced a huge number of queries, and then trained a surrogate model by extracting the outputs acquired from the queries. However, the data generation method that they have proposed is only appropriate for sequential recommender systems, and the assumption that a large number of queries will be made is also unreasonable. To properly define a model stealing attack against recommender systems, we assume that the attacker can only access a small amount of target data and conduct a small number of queries. This assump- tion is based on the challenges presented by previous research. Table 1 presents a comparison of our study to other works. Fur- thermore, in reference to previous privacy attacks [36, 44] on rec- ommender systems, we hypothesize that the attacker can collect a portion of the auxiliary data, which that comes from the same distribution as the target data but does not take part in the training process of the target model. We employ corresponding strategies to exploit three types of knowledge: target data, auxiliary data, and query permission. 
First, when an attacker can only access partial target data, we extract the (\ud835\udc62\ud835\udc60\ud835\udc52\ud835\udc5f,\ud835\udc56\ud835\udc61\ud835\udc52\ud835\udc5a) interaction pairs from the available target data and encourage the clone model to predict higher ratings for these interaction pairs. We name this attack PTD (Partial Target Data). It is effective because most recommender systems have the same training objective as ours: to provide higher predicted ratings for interaction pairs in the training data. Some works [16, 22] took arXiv:2312.11571v2 [cs.CR] 26 Dec 2023 Conference\u201917, July 2017, Washington, DC, USA Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen this way to build the surrogate model for further adversarial attacks against recommender systems. They assume that attackers can obtain a sufficient amount of target data. In this scenario, PTD is enough to learn a threatening clone model. However, in the real world, most recommender systems do not disclose their complete training data, so we can only improve the performance of model stealing attacks by obtaining other knowledge, including auxiliary data and the recommendation lists of the target model. Second, when the auxiliary data is available, we utilize an at- tention mechanism to fuse auxiliary data and available target data. Specifically, we first train an auxiliary model on the auxiliary data and obtain corresponding auxiliary item embedding vectors. Then we build the clone model and obtain initial clone item embedding vectors and user embedding vectors. We use a weighted sum to fuse the auxiliary item embeddings with the clone item embeddings to obtain fused item embeddings. The fused item embeddings and user embeddings are further used to calculate the predicted rating of the user for the item. The calculation of the weights for the auxiliary item embeddings and clone item embeddings is combined with the attention mechanism. For a (\ud835\udc62\ud835\udc60\ud835\udc52\ud835\udc5f,\ud835\udc56\ud835\udc61\ud835\udc52\ud835\udc5a) interaction pair, we use a trainable neural network layer to calculate the attention coefficients for the auxiliary item embeddings and clone item embeddings of the interaction pair. Finally, using the same training mode as PTD, we can implement a new attack method called PTA (Partial Target data and Auxiliary data). Third, when the attacker can perform limited queries on the target model, we design the stealing function to extract the recom- mendation list obtained by querying the target model. Inspired by the previous work [42], we divide the information contained in the recommendation list into two categories: recommended item infor- mation and ranking information. Recommended item information implies that the target model considers the user as more likely to like the recommended item than other items. We randomly sample some items from other items, called negative items. During the training process, we encourage the clone model to provide higher predicted ratings for positive items than negative items when mak- ing recommendations for the current user. Ranking information represents the target model\u2019s belief that items ranked higher in the recommendation list is more preferred by the user than those ranked lower. Therefore, we design corresponding losses to encour- age the clone model to provide higher predicted ratings for items ranked higher in the recommendation list than those ranked lower when making recommendations for the user. 
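To make the two signals concrete, a minimal sketch of such a stealing objective — BPR-style pairwise terms between adjacent positions in the recommendation list plus margin-based terms against sampled negatives — might look as follows; the function names, the margin value, and the negative-sampling scheme are illustrative assumptions rather than the exact formulation used later.

```python
import torch
import torch.nn.functional as F

def stealing_loss(scores, rec_list, neg_items, margin=1.0):
    """Hedged sketch of a stealing objective built from one user's query feedback.

    scores:    1-D tensor of the clone model's predicted ratings for all items.
    rec_list:  item ids returned by the target model, ordered best-first.
    neg_items: sampled negative item ids, one list per recommended item.
    """
    ranking_loss = scores.new_zeros(())
    positive_loss = scores.new_zeros(())
    for pos, j in enumerate(rec_list):
        # ranking information: item j should outscore the next item in the list
        if pos + 1 < len(rec_list):
            j_next = rec_list[pos + 1]
            ranking_loss = ranking_loss - F.logsigmoid(scores[j] - scores[j_next])
        # recommended-item information: item j should beat sampled negatives by a margin
        for k in neg_items[pos]:
            positive_loss = positive_loss + F.relu(margin - (scores[j] - scores[k]))
    return ranking_loss + positive_loss

# toy usage: 100 items, a length-5 recommendation, 2 negatives per positive
scores = torch.randn(100, requires_grad=True)
rec = [7, 3, 42, 9, 18]
negs = [[11, 55], [60, 61], [2, 90], [13, 14], [70, 71]]
stealing_loss(scores, rec, negs).backward()
```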
We call the attack algorithm with Partial Target data and Query permission as PTQ, and call PTQ with the Auxiliary data as PTAQ. In response to model stealing attacks on recommender systems, we also explore a defense strategy. We attempt to mix in some pop- ular items to the recommendation list to mislead the clone to mimic the target model. These popular items, while not necessarily liked by users, are generally not hated. While achieving adequate defensive performance, it degrades the recommendation performance of the target model, which is consistent with perturbation-based defenses in other fields [25, 47]. Its drawback underscores the significance of our work and the need for more effective defense mechanisms. To recap, the main contributions of this paper are: \u2022 We outline model stealing attacks against recommender systems with various types of knowledge and propose multiple effective attack strategies accordingly. \u2022 Through an attention mechanism, we fuse \"cheap\" auxiliary data and \"precious\" target data. Besides, we design the stealing function to extract recommendation lists. These moves could help the clone model to better approximate the target model. \u2022 We validate that our attacks can pose a serious threat to the model privacy of recommender systems on multiple datasets and various recommender systems. Even if the target model incorporates defense mechanisms, our algorithms still have good attack performance." }, { "url": "http://arxiv.org/abs/1807.03514v1", "title": "Topic-Guided Attention for Image Captioning", "abstract": "Attention mechanisms have attracted considerable interest in image captioning\nbecause of its powerful performance. Existing attention-based models use\nfeedback information from the caption generator as guidance to determine which\nof the image features should be attended to. A common defect of these attention\ngeneration methods is that they lack a higher-level guiding information from\nthe image itself, which sets a limit on selecting the most informative image\nfeatures. Therefore, in this paper, we propose a novel attention mechanism,\ncalled topic-guided attention, which integrates image topics in the attention\nmodel as a guiding information to help select the most important image\nfeatures. Moreover, we extract image features and image topics with separate\nnetworks, which can be fine-tuned jointly in an end-to-end manner during\ntraining. The experimental results on the benchmark Microsoft COCO dataset show\nthat our method yields state-of-art performance on various quantitative\nmetrics.", "authors": "Zhihao Zhu, Zhan Xue, Zejian Yuan", "published": "2018-07-10", "updated": "2018-07-10", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "2.1. Overall Framework Our topic-guided attention network for image captioning follows the Encoder-Decoder framework, where an encoder is first used for obtaining image features, and then a decoder is used for interpreting the encoded image features into captions. The overall structure of our model is illustrated in Fig. 2. LSTM Attribute Predictor Topic Predictor TG-Semantic-Attention TG-Spatial-Attention Feature Extractor Attribute Vector CNN Features ... man red ball ... 1 T 2 T k T t h t x T A v \ufffd A v \u0275 t p t o Fig. 2. The framework of the proposed image captioning system. TG-Semantic-Attention represents the Topic-guided semantic attention model, and TG-Spatial-Attention represents the Topic-guided spatial attention model. 
Different from other systems\u2019 encoding process, three types of image features are extracted in our model: visual features, attributes and topic. We use a deep CNN model to extract image\u2019s visual features, and a multi-label classifier, a single-label classifier are separately adopted for extracting image\u2019s topic and attributes (in section 3). Given these three information, we then apply topic-guided spatial and semantic attention to select the most important visual features \ufffd v and attributes \ufffd A (Details in section 2.2 and 2.3) to feed into decoder. In the decoding part, we adopt LSTM as our caption generator. Different from other works, we employ a unique way to utilize the image topic, attended visual features and attributes: the image topic T is fed into LSTM only at the first time step, which offers LSTM a quick overview of the general image content. Then, the attended visual features \ufffd v and attributes \ufffd A are fed into LSTM in the following steps. The overall working flow of LSTM network is governed by the following equations: x0 = W x,T T (1) xt = W x,oot\u22121 \u2295W x,v\ufffd v (2) ht = LSTM(ht\u22121, xt) (3) o,h o,A \ufffd (4) \u2295\ufffd ht = LSTM(ht\u22121, xt) (3) \ufffd \u2212 ot \u223cpt = \u03c3(W o,hht + W o,A \ufffd A) (4) note weights, and \u2295represents the concatenaation. \u03c3 stands for sigmoid function, pt stands \u223c \ufffd where Ws denote weights, and \u2295represents the concatenation manipulation. \u03c3 stands for sigmoid function, pt stands for the probability distribution over each word in the vocabulary, and ot is the sampled word at each time step. For clearness, we do not explicitly represent the bias term in our paper. 2.2. Topic-guided spatial attention In general, spatial attention is used for selecting the most information-carrying sub-regions of the visual features, guided by LSTM\u2019s feedback information. Unlike all previous works, we propose a new spatial attention mechanism, which integrates the image topic as auxiliary guidance when generating the attention. We first reshape the visual features \u03bd = [v1, v2, ..., vm] by flattening its width W and height H, where m = W \u00b7 H, and vi \u2208RD corresponds to the i-th location in the visual feature map. Given the topic vector T and LSTM\u2019s hidden state ht\u22121, we use a multi-layer perceptron with a softmax output to generate the attention distribution \u03b1 = {\u03b11, \u03b12, ..., \u03b1m} over the image regions. Mathematically, our topic-guided spatial attention model can be represented as: e = fMLP ((W e,T T) \u2295(W e,\u03bd\u03bd) \u2295(W e,hht\u22121)) (5) \u03b1 = softmax(W \u03b1,ee) (6) Where fMLP (\u00b7) represents a multi-layer perceptron. Then, we follow the \u201csoft\u201d approach to gather all the visual features to obtain \ufffd v by using the weighted sum: \ufffd v = m \ufffd \u03b1ivi (7) \ufffd \ufffd v = manti m \ufffd i=1 \ufffd i=1 \u03b1ivi (7) \ufffd 2.3. Topic-guided semantic attention Adding image attributes in the image captioning system was able to boost the performance of image captioning by explicitly representing the high-level semantic information[10]. Similar to the topic-guided spatial attention, we also apply a topic-guided attention mechanism on the image attributes A = {A1, A2, ..., An}, where n is the size of our attribute vocabulary. 
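Before the semantic-attention details, the topic-guided spatial attention of Eqs. (5)-(7) can be sketched as below. This is an illustrative PyTorch module under assumed dimensions and an assumed MLP design (concatenating the projected inputs before a two-layer perceptron), not the authors' released implementation.

```python
import torch
import torch.nn as nn

class TopicGuidedSpatialAttention(nn.Module):
    """Sketch of Eqs. (5)-(7): attention over m spatial locations, guided by the
    image topic T and the LSTM hidden state h. Dimensions are illustrative."""

    def __init__(self, topic_dim, feat_dim, hidden_dim, attn_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(topic_dim + feat_dim + hidden_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, topic, feats, h_prev):
        # feats: (m, feat_dim) flattened visual features; topic, h_prev: 1-D vectors
        m = feats.size(0)
        guide = torch.cat([topic, h_prev]).expand(m, -1)             # broadcast guidance to all regions
        e = self.mlp(torch.cat([guide, feats], dim=-1)).squeeze(-1)  # Eq. (5)
        alpha = torch.softmax(e, dim=0)                              # Eq. (6)
        return alpha @ feats                                         # Eq. (7): weighted sum v_hat

# toy usage: 49 spatial locations (7x7), 512-d features, 80-d topic, 1024-d hidden state
attn = TopicGuidedSpatialAttention(80, 512, 1024)
v_hat = attn(torch.randn(80), torch.randn(49, 512), torch.randn(1024))
print(v_hat.shape)   # torch.Size([512])
```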
In our topic-guided semantic attention network, we use only one fully connected layer with a softmax to predict the attention distribution \u03b2 = {\u03b21, \u03b22, ..., \u03b2n} over each attribute. The \ufb02ow of the semantic attention can be represented as: b = fF CL((W b,T T) \u2295(W b,AA) \u2295(W b,hht\u22121)) (8) \u03b2 = softmax(W \u03b2,bb) (9) where fF CL(\u00b7) represents a fully-connected layer. Then, we are able to reconstruct our attribute vector to obtain b A by multiplying each element with its weight: b Ai = \u03b2i \u2299Ai, \u2200i \u2208n (10) where \u2299denotes the element-wise multiplication, and b Ai is the i-th attribute in the b A. 2.4. Training Our training objective is to learn the model parameters by minimizing the following cost function: L = \u22121 N N X i=1 L(i)+1 X t=1 log pt(w(i) t ) + \u03bb \u00b7 \u2225\u03b8\u22252 2 (11) where N is the number of training examples and L(i) is the length of the sentence for the i-th training example. pt(w(i) t ) corresponds to the Softmax activation of the t-th output of the LSTM, and \u03b8 represents model parameters, \u03bb \u00b7 \u2225\u03b8\u22252 2 is a regularization term. 3. IMAGE TOPIC AND ATTRIBUTE PREDICTION Topic: We follow [12] to \ufb01rst establish a training dataset of image-topic pairs by applying Latent Dirichlet Allocation (LDA) [13] on the caption data. Then, each image with the inferred topic label T composes an image-topic pair. Then, these data are used to train a single-label classi\ufb01er in a supervised manner. In our paper, we use the VGGNet[14] as our classi\ufb01er, which is pre-trained on the ImageNet, and then \ufb01ne-tuned on our image-topic dataset. Attributes: Similar to [15, 10], we establish our attributes vocabulary by selecting c most common words in the captions. To reduce the information redundancy, we perform a manual \ufb01ltering of plurality (e.g. \u201cwoman\u201d and \u201cwomen\u201d) and semantic overlapping (e.g. \u201cchild\u201d and \u201ckid\u201d), by classifying those words into the same semantic attribute. Finally, we obtain a vocabulary of 196 attributes, which is more compact than [15]. Given this attribute vocabulary, we can associate each image with a set of attributes according to its captions. We then wish to predict the attributes given a test image. This can be viewed as a multi-label classi\ufb01cation problem. We follow [16] to use a Hypotheses-CNN-Pooling (HCP) network to learn attributes from local image patches. It produces the probability score for each attribute that an image may contain, and the top-ranked ones are selected to form the attribute vector A as the input of the caption generator. 4. EXPERIMENTS In this section, we will specify our experimental methodology and verify the effectiveness of our topic-guided image captioning framework. 4.1. Setup Data and Metrics: We conduct the experiment on the popular benchmark: Microsoft COCO dataset. For fair comparison, we follow the commonly used split in the previous works: 82,783 images are used for training, 5,000 images for validation, and 5,000 images for testing. Some images have more than 5 corresponding captions, the excess of which will be discarded for consistency. We directly use the publicly available code 1 provided by Microsoft for result evaluation, which includes BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, CIDEr, and ROUGH-L. Implementation details: For the encoding part: 1) The image visual features v are extracted from the last 512 dimensional convolutional layer of the VGGNet. 
2) The topic extractor uses the pre-trained VGGNet connected with one fully connected layer which has 80 unites. Its output is the probability that the image belongs to each topic. 3) For the attribute extractor, after obtaining the 196-dimension output from the last fully-connected layer, we keep the top 10 attributes with the highest scores to form the attribute vector A. For the decoding part, our language generator is implemented based on a Long-Short Term Memory (LSTM) network [17]. The dimension of its input and hidden layer are both set to 1024, and the tanh is used as the nonlinear activation function. We apply a word embedding with 300 dimensions on both LSTM\u2019s input and output word vectors. In the training procedure, we use Adam[18] algorithm for model updating with a mini-batch size of 128. We set the language model\u2019s learning rate to 0.001 and the dropout rate to 0.5. The whole training process takes about eight hours on a single NVIDIA TITAN X GPU. 4.2. Quantitative evaluation results Table. 1 compares our method to several other systems on the task of image captioning on MSCOCO dataset. Our baseline methods inludes NIC[1], an end-to-end deep neural network translating directly from image pixels to natural languages, spatial attention with soft-attention[9], semantic attention 1https://github.com/tylin/coco-caption BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGH-L CIDEr-D Google NIC[1] 66.6 46.1 32.9 24.6 Soft attention[9] 70.7 49.2 34.4 24.3 23.90 Semantic attention[8] 73.1 56.5 42.4 31.6 25.00 53.5 94.3 PG-SPIDEr-TAG[11] 75.1 59.1 44.6 33.6 25.5 55.1 104.2 Ours-BASE 74.8 55.8 41.1 30.2 27.0 57.8 109.8 Ours-T-V 75.2 56.16 41.4 30.4 27.0 58.1 109.2 Ours-T-A 75.8 57.0 42.5 30.9 27.4 58.2 112.5 Ours-T-(V+A) 77.8 59.34 44.5 33.2 28.60 60.1 117.6 Table 1. Performance of the proposed topic-guided attention model on the MSCOCO dataset, comparing with other four baseline methods. with explicit high-level visual attributes [8]. For fair comparison, we report our results with 16-layer VGGNet since it is similar to the image encoders used in other methods[1, 8, 9]. We also consider several systematic variants of our method: ( 1 ) OUR-BASE adds spatial attention and semantic attention jointly in the NIC model. ( 2 ) OUR-T-V adds image topic only to the spatial attention model in OUR-BASE. ( 3 ) OUR-T-A adds image topic only to the semantic attention model in OUR-BASE. ( 4 ) OUR-T-(V+A) adds image topic to both spatial and semantic attention in OUR-BASE. On the MSCOCO dataset, using the same greedy search strategy, adding image topics to either spatial attention or semantic attention outperforms the base method (OUR-BASE) on all metrics. Moreover, the bene\ufb01ts of using image topics as guiding information in the spatial attention and semantic attention are addictive, proven by further improvement in OURT-(V+A), which outperforms OUR-BASE across all metrics by a large margin, ranging from 1% to 5%. 4.3. Qualitative evaluations Fig. 3. Example of generated spatial attention map and captions. a) OUR-BASE; b) OUR-T-(V+A). To evaluate our system qualitatively, in Fig. 3, we show an example demonstrating the effectiveness of topic-guided attention on the image captioning. We note that \ufb01rst, the topic-guided attention shows a clearer distinction of object (the places where the attention would be focusing) and the background (where the attention weights are small). 
4. EXPERIMENTS

In this section, we specify our experimental methodology and verify the effectiveness of our topic-guided image captioning framework.

4.1. Setup

Data and Metrics: We conduct our experiments on the popular Microsoft COCO benchmark. For fair comparison, we follow the split commonly used in previous works: 82,783 images for training, 5,000 images for validation, and 5,000 images for testing. Some images have more than five captions; the excess captions are discarded for consistency. We directly use the publicly available evaluation code provided by Microsoft (https://github.com/tylin/coco-caption), which reports BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, CIDEr, and ROUGE-L.

Implementation details: For the encoding part: 1) the image visual features v are extracted from the last 512-dimensional convolutional layer of the VGGNet; 2) the topic extractor uses the pre-trained VGGNet connected to one fully connected layer with 80 units, whose output is the probability that the image belongs to each topic; 3) for the attribute extractor, after obtaining the 196-dimensional output of the last fully connected layer, we keep the top 10 attributes with the highest scores to form the attribute vector A. For the decoding part, our language generator is implemented as a Long Short-Term Memory (LSTM) network [17]. The dimensions of its input and hidden layers are both set to 1024, and tanh is used as the nonlinear activation function. We apply a 300-dimensional word embedding to both the LSTM's input and output word vectors. During training, we use the Adam [18] algorithm with a mini-batch size of 128, a learning rate of 0.001 for the language model, and a dropout rate of 0.5. The whole training process takes about eight hours on a single NVIDIA TITAN X GPU.

4.2. Quantitative evaluation results

Table 1 compares our method with several other systems on the image captioning task on the MSCOCO dataset. Our baselines include NIC [1], an end-to-end deep neural network translating directly from image pixels to natural language; spatial attention with soft attention [9]; semantic attention with explicit high-level visual attributes [8]; and PG-SPIDEr-TAG [11]. For fair comparison, we report our results with the 16-layer VGGNet, since it is similar to the image encoders used in the other methods [1, 8, 9].

Method                  BLEU-1  BLEU-2  BLEU-3  BLEU-4  METEOR  ROUGE-L  CIDEr-D
Google NIC [1]          66.6    46.1    32.9    24.6    -       -        -
Soft attention [9]      70.7    49.2    34.4    24.3    23.90   -        -
Semantic attention [8]  73.1    56.5    42.4    31.6    25.00   53.5     94.3
PG-SPIDEr-TAG [11]      75.1    59.1    44.6    33.6    25.5    55.1     104.2
Ours-BASE               74.8    55.8    41.1    30.2    27.0    57.8     109.8
Ours-T-V                75.2    56.16   41.4    30.4    27.0    58.1     109.2
Ours-T-A                75.8    57.0    42.5    30.9    27.4    58.2     112.5
Ours-T-(V+A)            77.8    59.34   44.5    33.2    28.60   60.1     117.6

Table 1. Performance of the proposed topic-guided attention model on the MSCOCO dataset, compared with four baseline methods.

We also consider several systematic variants of our method: (1) Ours-BASE adds spatial attention and semantic attention jointly to the NIC model; (2) Ours-T-V adds the image topic only to the spatial attention model in Ours-BASE; (3) Ours-T-A adds the image topic only to the semantic attention model in Ours-BASE; (4) Ours-T-(V+A) adds the image topic to both the spatial and the semantic attention in Ours-BASE.

On the MSCOCO dataset, using the same greedy search strategy, adding the image topic to either the spatial attention (Ours-T-V) or the semantic attention (Ours-T-A) improves over the base method (Ours-BASE) on nearly all metrics. Moreover, the benefits of using the image topic as guiding information in the spatial attention and the semantic attention are additive, as shown by the further improvement of Ours-T-(V+A), which outperforms Ours-BASE across all metrics, by margins ranging from 1.6 points (METEOR) to 7.8 points (CIDEr-D).
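The scores in Table 1 are produced with the publicly available evaluation code referenced in Section 4.1. As a concrete illustration, the snippet below is a minimal sketch that assumes the pip-installable pycocoevalcap port of that toolkit; the annotation and result file paths are placeholders.

```python
# Minimal sketch of scoring generated captions with the COCO caption toolkit.
# Assumes `pip install pycocotools pycocoevalcap`; file paths are placeholders.
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

coco = COCO("annotations/captions_val2014.json")         # ground-truth captions
coco_res = coco.loadRes("results/captions_ours.json")    # [{"image_id": ..., "caption": ...}, ...]

evaluator = COCOEvalCap(coco, coco_res)
evaluator.params["image_id"] = coco_res.getImgIds()      # score only the captioned images
evaluator.evaluate()

for metric, score in evaluator.eval.items():
    # Typically reports Bleu_1..4, METEOR, ROUGE_L, and CIDEr, as in Table 1.
    print(f"{metric}: {score:.3f}")
```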
4.3. Qualitative evaluations

Fig. 3. Example of generated spatial attention maps and captions: a) Ours-BASE; b) Ours-T-(V+A).

To evaluate our system qualitatively, Fig. 3 shows an example that demonstrates the effect of topic-guided attention on image captioning. We note, first, that the topic-guided attention draws a clearer distinction between the object (the regions where the attention focuses) and the background (where the attention weights are small). For example, when describing the little girl in the picture, our model produces a more precisely contoured attention area covering the upper part of her body, whereas the base model pays most of its attention to her head and overlooks the other body parts. Second, we observe that our model better captures details in the target image, such as the adjective "little" describing the girl and the quantifier "a slice of" qualifying the "pizza". Moreover, our model discovers the spatial relation between the girl and the table ("sitting at a table"), which is not found by the baseline model at all. The topic-guided attention also recovers more accurate contextual information from the image, such as the verb "eating" produced by our model, compared with the inaccurate verb "holding" produced by the baseline. This example demonstrates that topic-guided attention has a beneficial influence on the image caption generation task.

5. CONCLUSION

In this paper, we propose a novel method for image captioning. Different from other works, our method uses the image topic as guiding information in the attention module to select semantically stronger parts of the visual features and attributes. The image topic serves two major functions in our model: macroscopically, it offers the language generator an overview of the high-level semantic content of the image; microscopically, it guides the attention to exploit the image's fine-grained local information. As next steps, we plan to experiment with new methods for merging the spatial attention and the semantic attention into a single attention network.

Acknowledgement: This work was supported by the National Key R&D Program of China (No. 2016YFB1001001) and the National Natural Science Foundation of China (No. 91648121, No. 61573280).", "introduction": "Automatic image captioning is particularly challenging in computer vision because it requires translating visual information into natural language, two completely different forms of information. Furthermore, it requires a level of image understanding that goes beyond image classification and object recognition. A widely adopted approach to this problem is the encoder-decoder framework [1, 2, 3, 4], in which an encoder first encodes the pixel information into a more compact representation and a decoder then translates this representation into natural language. Inspired by the successful application of the attention mechanism in machine translation [5], spatial attention has also been widely adopted for image captioning. It is a feedback process that selectively maps a representation of partial regions or objects in the scene. On this basis, to further refine the spatial attention, some works [6, 7] apply stacked spatial attention, where the later attention is built on the previous attentive feature map.

Fig. 1. A comparison of captions generated by different methods: (a) our proposed method, (b) a baseline method [9] that does not use topic information, and (c) the ground truth.
(a) food and beer near a laptop computer sitting on top of a desk.
(b) a laptop computer on the desk with a cat on the bed.
(c) a laptop computer sitting on top of a desk next to a plate of food.

Besides spatial attention, Quanzeng et al. [8] proposed to utilize high-level semantic attributes and apply semantic attention to select the most important attributes at each time step. However, a common defect of the above spatial and semantic attention models is that they lack higher-level guiding information, which may cause the model to attend to image regions that are visually salient but semantically irrelevant to the image's main topic. In general, when describing an image, an intuition about the image's high-level semantic topic helps select the most semantically meaningful and topic-relevant image areas and attributes as context for the subsequent caption generation. For example, in Fig. 1, the image on the left depicts a scene where a laptop computer lies next to food. For humans, it is reasonable to infer the topic of the image to be "working and eating". However, without this high-level guiding information, the baseline method [9] tends to describe all the salient visual objects in the image, including objects that are irrelevant to its general content, such as the cat in the top left corner.

To address this issue, we propose a topic-guided attention mechanism that uses the image topic as high-level guiding information. Our model starts by extracting the image topic from the image's visual appearance. The topic vector is then fed into the attention model, together with the feedback from the LSTM, to generate attention over the image's visual features and attributes. The experimental results demonstrate that our method generates captions that are more consistent with the image's high-level semantic content.

The main contributions of our work consist of two parts: 1) we propose a new attention mechanism that uses the image topic as auxiliary guidance for attention generation; the image topic acts as a regulator, keeping the attention consistent with the general image content; 2) we propose a new approach to integrate the selected visual features and attributes into the caption generator. Our algorithm achieves state-of-the-art performance on the Microsoft COCO dataset." } ] }, "edge_feat": {} } }