diff --git "a/abs_29K_G/test_abstract_long_2405.01130v1.json" "b/abs_29K_G/test_abstract_long_2405.01130v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01130v1.json" @@ -0,0 +1,601 @@ +{ + "url": "http://arxiv.org/abs/2405.01130v1", + "title": "Automated Virtual Product Placement and Assessment in Images using Diffusion Models", + "abstract": "In Virtual Product Placement (VPP) applications, the discrete integration of\nspecific brand products into images or videos has emerged as a challenging yet\nimportant task. This paper introduces a novel three-stage fully automated VPP\nsystem. In the first stage, a language-guided image segmentation model\nidentifies optimal regions within images for product inpainting. In the second\nstage, Stable Diffusion (SD), fine-tuned with a few example product images, is\nused to inpaint the product into the previously identified candidate regions.\nThe final stage introduces an \"Alignment Module\", which is designed to\neffectively sieve out low-quality images. Comprehensive experiments demonstrate\nthat the Alignment Module ensures the presence of the intended product in every\ngenerated image and enhances the average quality of images by 35%. The results\npresented in this paper demonstrate the effectiveness of the proposed VPP\nsystem, which holds significant potential for transforming the landscape of\nvirtual advertising and marketing strategies.", + "authors": "Mohammad Mahmudul Alam, Negin Sokhandan, Emmett Goodman", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "In Virtual Product Placement (VPP) applications, the discrete integration of\nspecific brand products into images or videos has emerged as a challenging yet\nimportant task. This paper introduces a novel three-stage fully automated VPP\nsystem. In the first stage, a language-guided image segmentation model\nidentifies optimal regions within images for product inpainting. In the second\nstage, Stable Diffusion (SD), fine-tuned with a few example product images, is\nused to inpaint the product into the previously identified candidate regions.\nThe final stage introduces an \"Alignment Module\", which is designed to\neffectively sieve out low-quality images. Comprehensive experiments demonstrate\nthat the Alignment Module ensures the presence of the intended product in every\ngenerated image and enhances the average quality of images by 35%. The results\npresented in this paper demonstrate the effectiveness of the proposed VPP\nsystem, which holds significant potential for transforming the landscape of\nvirtual advertising and marketing strategies.", + "main_content": "Introduction Virtual Product Placement (VPP) refers to the unobtrusive, digital integration of branded products into visual content, which is often employed as a stealth marketing strategy [15]. Advertising solutions utilizing VPP have significant appeal due to their high customizability, effectiveness across diverse customer bases, and quantifiable efficiency. *The author performed this work as an intern at Amazon Web Services (AWS). Accepted at 6th AI for Content Creation (AI4CC) workshop at CVPR 2024. (Preprint) (a) Background (b) Inpainting Figure 1. An illustration of the proposed VPP system with an Amazon Echo Dot device. 
The input background image is shown in (a), and the inpainted output image is shown in (b), where an Amazon Echo Dot device is placed on the kitchen countertop through automatic identification of the optimal location. Previous research underscores the impact of product placement within realms such as virtual reality [22] and video games [5]. With the recent advancements in generative AI technologies, the potential for product placement has been further expanded through the utilization of diffusion models. Significant research has focused on the development of controlled inpainting via diffusion models, albeit largely without an explicit emphasis on advertising applications [1, 8, 11]. However, these methods can be fine-tuned with a small set of 4 to 5 product sample images to generate high-quality advertising visual content. In this paper, we propose a novel, three-stage, fully automated system that carries out semantic inpainting of products by fine-tuning a pre-trained Stable Diffusion (SD) model [18]. In the first stage, a suitable location is identified for product placement using visual question answering and text-conditioned instance segmentation. The output of this stage is a binary mask highlighting the identified location. Subsequently, this masked region undergoes inpainting using a fine-tuned SD model. This SD model is fine-tuned via the DreamBooth [19] approach, utilizing a few sample images of the product along with a unique identifier text prompt. Finally, the quality of the inpainted image is evaluated by a proposed Alignment Module, a discriminative method that measures the image quality, or the alignment of the generated image with human expectations. An illustration of the proposed VPP system is presented in Figure 1 with an Amazon Echo Dot device. Controlled inpainting of a specific product is a challenging task. For example, the model may fail to inpaint the intended object at all. If a product is indeed introduced through inpainting, the product created may not be realistic and may display distortions of shape, size, or color. Similarly, the background surrounding the inpainted product may be altered in such a way that it either meaningfully obscures key background elements or even completely changes the background image. This becomes especially problematic when the background images contain human elements, as models can transform them into disturbing visuals. As a result, the proposed Alignment Module is designed to address these complications, with its primary focus being on the appearance, quality, and size of the generated product. To exert control over the size of the generated product, morphological transformations, specifically erosion and dilation, are employed. By adjusting the size of the mask through dilation or erosion, the size of the inpainted product can be effectively increased or decreased. This allows the system to generate a product of an appropriate size. In summary, the main contributions of this paper are twofold. The first pertains to the design of a fully automated Virtual Product Placement (VPP) system capable of generating high-resolution, customer-quality visual content. The second involves the development of a discriminative method that automatically eliminates subpar images, premised on the content, quality, and size of the product generated. The remainder of this paper is organized as follows. 
In section 2 we will delve into the related literature, with a specific emphasis on semantic inpainting methods utilizing diffusion models, and section 3 will highlight the broad contributions of the paper. Next, the proposed end-to-end pipeline for automatic VPP will be discussed in section 4. This includes a detailed examination of the three primary stages of the solution, along with the three sub-modules of the Alignment Module. Thereafter, we will elucidate the experimental design and evaluation methodologies adopted and report the corresponding results in section 5. Subsequently, the deployment strategy and web application design will be explained in section 6. Finally, the paper will conclude with an outline of the identified limitations of our proposed methodology in section 7, complemented by a discussion on potential avenues for future research. 2. Related Works Recently, there has been significant progress in developing semantic or localized image editing using diffusion models, largely without an explicit focus on digital marketing. Nevertheless, new generative AI approaches promise significant advances in VPP technology. For instance, in Blended Diffusion [1], the authors proposed a method of localized image editing using image masking and natural language. The area of interest is first masked and then modified using a text prompt. The authors employed a pre-trained CLIP model [17] along with pre-trained Denoising Diffusion Probabilistic Models (DDPM) [7] to generate natural images in the area of interest. Similar to Blended Diffusion, Couairon et al. [3] proposed a method of semantic editing with a mask using a diffusion model. However, instead of taking the mask from the user, the mask is generated automatically. Nevertheless, a text query input from the user is utilized to generate the mask. The difference in noise estimates, as determined by the diffusion model based on the reference text and the query text, is calculated. This difference is then used to infer the mask. The image is noised iteratively during the forward process, and in the reverse Denoising Diffusion Implicit Model (DDIM) [21] steps, the denoised image is interpolated with the same-step output of the forward process using masking. Paint by Word, proposed by Bau et al. [2], is also similar; however, instead of a diffusion model, they utilized a Generative Adversarial Network (GAN) [4] with a mask for semantic editing guided by text. On the other hand, Imagic [8] also performs text-based semantic editing on images using a diffusion model, but without using any mask. Their approach consists of three steps. In the beginning, a text embedding for a given image is optimized. Then the generative diffusion model is optimized for the given image with the fixed, optimized text embedding. Finally, the target and optimized embeddings are linearly interpolated to achieve input image and target text alignment. Likewise, a semantic editing method using a pre-trained text-conditioned diffusion model, focusing on the mixing of two concepts, is proposed by [12]. In this method, a given image is noised for several steps and then denoised with a text condition. During the denoising process, the output of a denoising stage is also linearly interpolated with the output of a forward noise-mixing stage. Hertz et al. [6] took a different approach to semantic image editing, where text and image embeddings are fused using cross-attention. The cross-attention maps are incorporated with the Imagen diffusion model [20]. 
However, instead of editing an arbitrary given image, their approach edits a generated image using a text prompt, which limits its relevance where VPP is concerned. Alternatively, Stochastic Differential Edit (SDEdit) [16] synthesizes images from stroke paintings and can edit images based on stroke inputs. For image synthesis, coarse colored strokes are used, and for editing, colored strokes on real images or image patches on target images are used as a guide. It adds Gaussian noise to an image guide of a specific standard deviation and then solves the corresponding Stochastic Differential Equations (SDE) to produce the synthetic or edited image. To generate images from a prompt in a controlled fashion and to gain more control over the generated image, Li et al. proposed grounded text-to-image generation (GLIGEN) [11]. It feeds the model the embedding of guiding elements such as bounding boxes, key points, or semantic maps. Using the same guiding components, inpainting can be performed in a target image. DreamBooth [19] fine-tunes a pre-trained diffusion model to expand the dictionary of the model for a specific subject. Given a few examples of the subject, a diffusion model such as Imagen [20] is fine-tuned using random samples generated by the model itself and new subject images by optimizing a reconstruction loss. The new subject images are conditioned using a text prompt with a unique identifier. Fine-tuning a pre-trained diffusion model with a new subject is of great importance in the context of VPP. Therefore, in this paper the DreamBooth approach is utilized to expand the model\u2019s dictionary by learning from a few sample images of the product. 3. Contributions In this paper, a method of automated virtual product placement and assessment in images using diffusion models is designed. Our broad contributions are as follows: 1. We introduce a novel, fully automated VPP system that carries out automatic semantic inpainting of the product in the optimal location using language-guided segmentation and fine-tuned Stable Diffusion models. 2. We propose a cascaded three-stage assessment module, named the \u2018Alignment Module\u2019, designed to sieve out low-quality images and ensure the presence of the intended product in every generated output image. 3. Morphological transformations such as dilation and erosion are employed to adjust the size of the mask and thereby increase or decrease the size of the inpainted product, allowing the system to generate a product of appropriate size. 4. Experiments are performed to validate the results by blind evaluation of the generated images with and without the Alignment Module, resulting in a 35% improvement in average quality. 5. The inpainted product generated by the proposed system is not only qualitatively more realistic than that of the previous inpainting approach [23] but also shows a superior quantitative CLIP score. 4. Methodology Figure 2. The block diagram of the proposed solution for the VPP system, where each of the three stages is distinguished by varied color blocks. In stage 1, a suitable placement for product inpainting is determined by creating a mask using the CLIPSeg and ViLT models. Next, in stage 2, semantic inpainting is performed in the masked area using the fine-tuned DreamBooth model. Finally, stage 3 contains the cascaded sub-modules of the Alignment Module to discard low-quality images. 
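To make the control flow of Figure 2 concrete, the following is a minimal Python sketch of the three-stage loop. The helper names (locate_placement, inpaint_product, alignment_scores) are hypothetical placeholders for the stages described below, not the authors\u2019 published API; the thresholds and the 10-attempt retry cap mirror sections 4.4 and 5.

    # Hypothetical three-stage VPP orchestration (illustrative names only).
    def place_product(background, prompt, max_attempts=10):
        mask = locate_placement(background)  # stage 1: ViLT VQA + CLIPSeg mask
        for _ in range(max_attempts):
            candidate = inpaint_product(background, mask, prompt)   # stage 2: fine-tuned SD
            content, quality, volume = alignment_scores(candidate)  # stage 3: Alignment Module
            if content >= 0.70 and quality >= 0.70 and volume >= 0.34:
                return candidate  # all three thresholds met, accept the image
        return None  # no acceptable image within the retry budget
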
4.1. Proposed Method For semantic inpainting, we utilized the DreamBooth algorithm [19] to fine-tune Stable Diffusion using five representative images of the product and a text prompt with a unique identifier. Even with a limited set of five sample images, the fine-tuned DreamBooth model was capable of generating images of the product integrated with its background. Nevertheless, when inpainting was conducted with this fine-tuned model, the resulting quality of the inpainted product was significantly compromised. To enhance the quality of the product in the inpainted image, we augmented the sample images through random scaling and random cropping, consequently generating a total of 1,000 product images used to fine-tune SD. 4.2. Product Localization Module The proposed VPP system operates in three stages. A core challenge in product placement lies in pinpointing a suitable location for the item within the background. In the first stage, this placement is indicated via the generation of a binary mask. To automate this masking process, we leveraged the capabilities of the Vision and Language Transformer (ViLT) Visual Question Answering (VQA) model [9] in conjunction with the Contrastive Language-Image Pretraining (CLIP) [17]-based semantic segmentation method named CLIPSeg [13]. Notably, each product tends to have a prototypical location for its placement. For example, an optimal location for an Amazon Echo Dot device is atop a flat surface, such as a desk or table. Thus, by posing a straightforward query to the VQA model, such as \u201cWhich object in the image has a flat surface area?\u201d, we can pinpoint an appropriate location for the product. Subsequently, the identified location\u2019s name is provided to the CLIPSeg model, along with the input image, resulting in the generation of a binary mask for the object. 
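A minimal sketch of this localization stage using publicly available Hugging Face checkpoints (the checkpoint names and the 0.7 mask threshold are assumed defaults for illustration, not a verbatim reproduction of the authors\u2019 code):

    import torch
    from PIL import Image
    from transformers import (ViltProcessor, ViltForQuestionAnswering,
                              CLIPSegProcessor, CLIPSegForImageSegmentation)

    image = Image.open("background.jpg")

    # Stage 1a: ask the VQA model where a flat surface is.
    vqa_proc = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
    vqa = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
    inputs = vqa_proc(image, "Which object in the image has a flat surface area?",
                      return_tensors="pt")
    location = vqa.config.id2label[vqa(**inputs).logits.argmax(-1).item()]  # e.g., "desk"

    # Stage 1b: segment the named object with CLIPSeg to obtain a binary mask.
    seg_proc = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
    seg = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
    seg_inputs = seg_proc(text=[location], images=[image], return_tensors="pt")
    with torch.no_grad():
        heatmap = torch.sigmoid(seg(**seg_inputs).logits)  # low-resolution relevance map
    binary_mask = (heatmap > 0.7).squeeze().numpy().astype("uint8") * 255
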
4.3. Product Inpainting Module In the second stage, the input image and the generated binary mask are fed to the fine-tuned DreamBooth model to perform inpainting on the masked region. Product inpainting presents several challenges: the product might not manifest in the inpainted region; if it does, its quality could be compromised or distorted, and its size might be disproportionate to the surrounding context. To systematically detect these issues, we introduce the third stage: the Alignment Module. 4.4. Product Alignment Module The Alignment Module comprises three sub-modules: Content, Quality, and Volume. The Content sub-module serves as a binary classifier, determining the presence of the product in the generated image. If the product\u2019s probability of existence surpasses a predefined threshold, then the Quality score is calculated for that image. This score evaluates the quality of the inpainted product in relation to the sample images originally used to train the SD model. Finally, if the image\u2019s quality score exceeds the set quality threshold, the Volume sub-module assesses the product\u2019s size in proportion to the background image. The generated image is successfully accepted and presented to the user only if all three scores within the Alignment Module meet their respective thresholds. Within the Content sub-module, an image captioning model [14] is employed to generate a caption, which is then refined by incorporating the product\u2019s name; the super-class name of the product can also be utilized. Both the captions and the inpainted image are fed into the CLIP model to derive a CLIP score. If the modified caption scores above 70%, it is inferred that the product exists in the inpainted image. The Quality sub-module contrasts the mean CLIP image features of the sample images with the CLIP image feature of the generated image. The greater the resemblance of the inpainted product to the sample images, the higher the quality score. A threshold of 70% has been established. The Volume sub-module finally gauges the size of the inpainted product. The generated image is processed through the CLIP model, accompanied by three distinct textual size prompts. Given that size perception can be subjective and varies based on camera proximity, a milder threshold of 34% (slightly above a random guess) has been selected. Figure 3. Block diagram of each of the components of the Alignment Module. The Content sub-module is built using a pre-trained caption generator and the CLIP model, shown in (a); the generated caption is refined by adding the name of the intended product (e.g., \u201ca small dog sitting on a desk next to a computer\u201d becomes \u201ca small dog sitting on a desk next to a computer with an echo dot\u201d). For the Quality sub-module, the image features of the same CLIP model are utilized, shown in (b). Finally, in the Volume sub-module, the same CLIP model with three different size text prompts (\u201ctoo large {product}\u201d, \u201cregular size {product}\u201d, \u201ctoo small {product}\u201d) is used, shown in (c). The comprehensive block diagram of the proposed VPP system is illustrated in Figure 2, with the three stages distinguished by varied color blocks. The block diagrams for each sub-module can be found in Figure 3. The Volume sub-module provides insights regarding the size of the inpainted product. To modify the product\u2019s size, the mask\u2019s dimensions must be adjusted. For this task, morphological transformations, including mask erosion and dilation, can be employed on the binary mask. These transformations can either reduce or augment the mask area, allowing the inpainting module to produce a product image of the desired size. The relationship between alterations in the mask area and the size of the inpainted product across various erosion iterations is depicted in Figure 4. Approximately 25 iterations of erosion consume around 3 milliseconds, making the operation highly cost-effective. Figure 4. Application of erosion to the mask, where a kernel of size (5 \u00d7 5) is used for 0, 10, 20, and 25 iterations, shown consecutively. The resulting output is presented at the bottom of the corresponding mask to show the size reduction of the generated product in the output image. 
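The mask-size control itself reduces to two OpenCV morphology calls; a minimal sketch (the 5 \u00d7 5 kernel and the iteration counts mirror Figure 4, and binary_mask is assumed to be the uint8 mask produced by the localization stage above):

    import cv2
    import numpy as np

    kernel = np.ones((5, 5), np.uint8)  # (5 x 5) structuring element, as in Figure 4

    # Shrinking the mask shrinks the inpainted product; growing it does the opposite.
    smaller_mask = cv2.erode(binary_mask, kernel, iterations=25)
    larger_mask = cv2.dilate(binary_mask, kernel, iterations=10)
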
5. Experimental Results Experiments were conducted to evaluate the performance of the proposed VPP system. For these experiments, five sample images of an \u201cAmazon Echo Dot\u201d were chosen. 1,000 augmented images of each product, created from these five sample images, were used to fine-tune the DreamBooth model using the text prompt \u201cA photorealistic image of a sks Amazon Alexa device.\u201d The model was fine-tuned for 1,600 steps, employing a learning rate of 5 \u00d7 10^-6 and a batch size of 1. The fine-tuned model can inpaint products into the masked region. However, issues such as lack of product appearance, poor resolution, and disproportionate shape persist. The goal of the proposed Alignment Module is to automatically detect these issues. If identified, the problematic images are discarded, and a new image is generated from different random noise. Only if a generated image meets all the module\u2019s criteria is it presented to the user. Otherwise, a new image generation process is initiated. This loop continues for a maximum of 10 iterations. 5.1. Assessing the Alignment Module To assess the effectiveness of the Alignment Module, images were generated both with and without it. For each sub-module, as well as for the overall Alignment Module, 200 images were generated: 100 with the filter activated and 100 without (referred to as the \u201cNaive\u201d case). To prevent bias, all images were given random names and were consolidated into a single folder. These images were also independently evaluated by a human, whose scores served as the ground truth. This ground truth information was saved in a separate file for the final evaluation, which followed a blind scoring method. All the experiments were also repeated for another product, \u201cLupure Vitamin C\u201d. 5.2. Evaluation Metrics The evaluation and scoring method of each of the sub-modules of the Alignment Module is described in the following segments. \u2022 Content Score: For the image content score, images are categorized into two classes: \u2018success\u2019 if the product appears, and \u2018failure\u2019 otherwise. When the Content sub-module is utilized, the Failure Rate (FR), defined as the ratio of failures to successes, is below 10% for both of the products. \u2022 Quality Score: For the quality score, images are rated on a scale from 0 to 10: 0 indicates the absence of a product, and 10 signifies a perfect-looking product. To evaluate in conjunction with the CLIP score, both the Mean Assigned Quality Score (MAQS) and the Mean Quality Score (MQS) are calculated. MAQS represents the average of the human-assigned scores between 0 and 10, while MQS is the output of the Quality sub-module, essentially reflecting cosine similarity. \u2022 Volume Score: For the volume score, images are also rated on a scale from 0 to 10: 0 for a highly unrealistic size, and 10 for a perfect size representation. When evaluating the Volume sub-module, the Content sub-module is not utilized. Since the size score necessitates the presence of a product, images without any product are excluded from this evaluation. To gauge performance, the Mean Assigned Size Score (MASS) is calculated in addition to the CLIP score. 
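All three sub-module scores that these metrics probe are derived from CLIP similarities. A minimal sketch of one way such scores could be computed with the openai/clip-vit-base-patch32 checkpoint (an assumed stand-in for the paper\u2019s CLIP backbone; the thresholds follow section 4.4):

    import torch
    from transformers import CLIPProcessor, CLIPModel

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def content_score(image, caption, caption_with_product):
        # Probability mass CLIP assigns to the product-augmented caption.
        inputs = proc(text=[caption, caption_with_product], images=image,
                      return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
        return probs[0, 1].item()  # > 0.70 -> product judged present

    def quality_score(image, sample_images):
        # Cosine similarity between the generated image and the mean
        # CLIP embedding of the product sample images.
        feats = model.get_image_features(
            **proc(images=[image] + sample_images, return_tensors="pt"))
        feats = feats / feats.norm(dim=-1, keepdim=True)
        mean = feats[1:].mean(dim=0)
        return torch.dot(feats[0], mean / mean.norm()).item()  # > 0.70 -> acceptable

    def volume_score(image, product):
        prompts = [f"too small {product}", f"regular size {product}", f"too large {product}"]
        inputs = proc(text=prompts, images=image, return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
        return probs[0, 1].item()  # > 0.34 -> size deemed plausible
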
5.2.1. Overall Results The results of the individual evaluations are presented in Table 1. It can be observed from this table that using any of the sub-modules consistently produced better outcomes, across various metrics, compared to when no filtering was applied. The results of the comprehensive evaluation, encompassing all sub-modules, can be found in Table 2. Table 1. Individual evaluation of the Content, Quality, and Volume sub-modules within the overall Alignment Module. \u201cNaive\u201d represents the outputs without any filtering sub-modules. Content classifies the presence of the product in the generated images. Quality measures the proximity of the generated product to the sample product images used to fine-tune the diffusion model. Finally, Volume identifies the size category of the product. Amazon Echo Dot: Content evaluation (Naive vs. Content): Success 72 vs. 94; Failure 28 vs. 6; FR 38.89% vs. 6.38%. Quality evaluation (Naive vs. Quality): CLIP 32.49 \u00b1 3.69 vs. 33.80 \u00b1 2.69; MAQS 4.41 \u00b1 3.23 vs. 6.41 \u00b1 1.90; MQS 0.75 \u00b1 0.14 vs. 0.83 \u00b1 0.06. Volume evaluation (Naive vs. Volume): CLIP 32.58 \u00b1 3.70 vs. 33.42 \u00b1 2.69; MASS 3.01 \u00b1 2.68 vs. 4.81 \u00b1 2.31. Lupure Vitamin C: Content evaluation (Naive vs. Content): Success 87 vs. 100; Failure 13 vs. 0; FR 14.94% vs. 0.0%. Quality evaluation (Naive vs. Quality): CLIP 24.61 \u00b1 2.4 vs. 25.23 \u00b1 2.66; MAQS 5.65 \u00b1 2.85 vs. 6.47 \u00b1 1.09; MQS 0.81 \u00b1 0.13 vs. 0.86 \u00b1 0.04. Volume evaluation (Naive vs. Volume): CLIP 24.22 \u00b1 3.01 vs. 24.51 \u00b1 2.89; MASS 5.64 \u00b1 3.05 vs. 7.14 \u00b1 1.53. Table 2. Comparison of the proposed method with and without the Alignment Module, in addition to the Paint-By-Example (PBE) [23] inpainting model. The \u201cNaive\u201d column represents the generated output without applying the Alignment Module. The \u201cAlignment\u201d column represents the generated outputs where the three cascaded filtering sub-modules, i.e., the Alignment Module, are used. Amazon Echo Dot (PBE / Naive / Alignment): CLIP 31.44 \u00b1 3.43 / 32.85 \u00b1 3.19 / 33.85 \u00b1 2.54; MAQS 1.13 \u00b1 1.30 / 4.65 \u00b1 3.60 / 6.31 \u00b1 2.39; MASS 1.22 \u00b1 1.60 / 3.05 \u00b1 2.98 / 4.70 \u00b1 2.81; MQS 0.64 \u00b1 0.08 / 0.75 \u00b1 0.14 / 0.82 \u00b1 0.05; FR 78.57% / 29.87% / 0.00%. Lupure Vitamin C (PBE / Naive / Alignment): CLIP 27.01 \u00b1 2.10 / 24.71 \u00b1 2.64 / 24.89 \u00b1 2.90; MAQS 1.75 \u00b1 1.51 / 6.60 \u00b1 3.01 / 7.81 \u00b1 1.13; MASS 2.43 \u00b1 2.07 / 6.25 \u00b1 3.08 / 7.30 \u00b1 1.59; MQS 0.67 \u00b1 0.06 / 0.82 \u00b1 0.12 / 0.86 \u00b1 0.05; FR 38.89% / 17.64% / 0.00%. Figure 5. Inpainted product images from Paint-By-Example (PBE). PBE generates high-quality images, which explains the higher CLIP score in the case of Lupure Vitamin C. However, the inpainted product does not look similar to the desired product at all, resulting in very poor mean assigned quality and size scores. Output images for Amazon Echo Dot are shown in (a) and (b), and for Lupure Vitamin C in (c) and (d). Figure 6. Empirical performance of the Alignment Module for Amazon Echo Dot. Noticeably, no output is generated without any product when the Alignment Module is employed. Moreover, the mean quality score has increased from 4.65 to 6.31. 5.3. Comparison with Paint-By-Example The proposed method is compared with the Paint-By-Example (PBE) [23] inpainting model, and Table 2 shows the performance comparison of the proposed method along with PBE. PBE can generate very high-quality images; however, the inpainted product in the generated image does not look like the desired product at all, as shown in Figure 5, resulting in very poor MAQS and MASS, whereas the inpainted product of our proposed method closely resembles the original product, as shown in Figure 7. 5.4. Frequency Distribution The frequency distribution and density function of the assigned quality scores in the \u201cNaive\u201d and \u201cAlignment\u201d cases for Amazon Echo Dot are presented in Figure 6. The density mean has shifted from 4.65 to 6.31 when the Alignment Module is adopted, indicating the effectiveness of the proposed module. 6. Path to Production 6.1. Product API The location identifier, fine-tuned model, and Alignment Module are combined to develop an easy-to-use VPP Streamlit web app\u00b9. 
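A minimal sketch of such a Streamlit front end (widget names and defaults are illustrative, and place_product is the hypothetical pipeline entry point from the earlier sketch, with keyword arguments assumed for this example):

    import streamlit as st

    st.title("Virtual Product Placement")
    background = st.file_uploader("Image", type=["jpg", "jpeg", "png"])
    seed = st.text_input("seed")  # optional fixed seed for reproducible output
    seg_thr = st.slider("Segmentation threshold", 0.0, 1.0, 0.7)  # CLIPSeg default
    erode_iters = st.slider("Erosion iterations", 0, 30, 0)       # 'Mask Params'
    dilate_iters = st.slider("Dilation iterations", 0, 30, 0)
    use_filter = st.checkbox("Enable Alignment Module", value=True)
    max_attempts = st.slider("Max Attempt", 1, 10, 10)

    if st.button("Generate") and background is not None:
        result = place_product(background, seed=seed, seg_threshold=seg_thr,
                               erode=erode_iters, dilate=dilate_iters,
                               filter_on=use_filter, max_attempts=max_attempts)
        st.image(result)
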
This app is hosted on Amazon SageMaker using an \u201cml.p3.2xlarge\u201d instance, which provides a single V100 GPU with 16 GB of GPU memory. The demo app\u2019s interface is illustrated in Figure 8. In the top-left \u2018Image\u2019 section, users can either upload their own background image or choose from a selection of sample background images to generate an inpainted product image. The web app provides extensive flexibility for tuning the parameters of the Alignment Module so that users can comprehend the effects of these parameters. In the \u2018seed\u2019 text box, a value can be input to control the system output. The segmentation threshold for CLIPSeg defaults to 0.7, but users can refine this value using a slider. Within the \u2018Mask Params\u2019 section, the number of dilation and erosion iterations can be set and visualized in real time. The filter, represented by the Alignment Module, can be toggled on or off. The \u2018Max Attempt\u2019 slider determines the number of regeneration attempts if the model doesn\u2019t produce a satisfactory output. However, if a seed value is specified, the model will generate the output only once, regardless of the set value. Lastly, in the \u2018Filter Params\u2019 section, users can fine-tune the threshold values for each sub-module of the Alignment Module, specifically for content, quality, and volume. The \u201cshow stats\u201d button beneath the input image displays the mask alongside details of the model outputs. These details include the seed value, placement, generated and modified captions, and the content, quality, and volume/size scores. By visualizing the mask and its area, users can apply erosion or dilation to adjust the product\u2019s size. The default threshold values for content, quality, and volume are 0.7, 0.7, and 0.34, respectively. While these values can be adjusted slightly higher, it\u2019s recommended to also set the \u2018Max Attempt\u2019 to 10 in such cases. A higher threshold means that the generated output is more likely to fail the criteria set by the Alignment Module. \u00b9 Streamlit: https://streamlit.io/ 6.2. Future Considerations for Product Scalability Fine-tuning Stable Diffusion using DreamBooth can take up to 30 minutes, depending on dataset size, image resolution, and extent of training. When considering a customer with hundreds or thousands of products, this process could take days to complete model training across the different products. Our pipeline is deployed on Amazon SageMaker, a managed service that supports the automatic scaling of deployed endpoints. This service can dynamically accommodate large computational needs by provisioning additional instances as required. As such, fine-tuning 100 SD models for 100 different products would still only take about 30 minutes if 100 instances were utilized in parallel. The fine-tuned models are stored in an Amazon S3 (Simple Storage Service) bucket, with each model being 2.2 GB in size. Consequently, 100 fine-tuned models would occupy approximately 220 GB of storage space. A pertinent question arises: can we strike a space-time trade-off by training a single model with a unique identifier for each product? If this is feasible, the space requirement would be reduced to a constant 2.2 GB. However, that one model would need more extensive training; specifically, the number of training steps would increase by a factor of 100 for 100 products, thereby lengthening the computation time. This approach remains untested and warrants future exploration [10]. 
7.", + "additional_graph_info": { + "graph": [ + [ + "Mohammad Mahmudul Alam", + "Edward Raff" + ], + [ + "Mohammad Mahmudul Alam", + "Stella Biderman" + ], + [ + "Edward Raff", + "Jared Sylvester" + ], + [ + "Stella Biderman", + "Lintang Sutawika" + ], + [ + "Stella Biderman", + "Edward Raff" + ], + [ + "Stella Biderman", + "Leo Gao" + ] + ], + "node_feat": { + "Mohammad Mahmudul Alam": [ + { + "url": "http://arxiv.org/abs/2405.01130v1", + "title": "Automated Virtual Product Placement and Assessment in Images using Diffusion Models", + "abstract": "In Virtual Product Placement (VPP) applications, the discrete integration of\nspecific brand products into images or videos has emerged as a challenging yet\nimportant task. This paper introduces a novel three-stage fully automated VPP\nsystem. In the first stage, a language-guided image segmentation model\nidentifies optimal regions within images for product inpainting. In the second\nstage, Stable Diffusion (SD), fine-tuned with a few example product images, is\nused to inpaint the product into the previously identified candidate regions.\nThe final stage introduces an \"Alignment Module\", which is designed to\neffectively sieve out low-quality images. Comprehensive experiments demonstrate\nthat the Alignment Module ensures the presence of the intended product in every\ngenerated image and enhances the average quality of images by 35%. The results\npresented in this paper demonstrate the effectiveness of the proposed VPP\nsystem, which holds significant potential for transforming the landscape of\nvirtual advertising and marketing strategies.", + "authors": "Mohammad Mahmudul Alam, Negin Sokhandan, Emmett Goodman", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Virtual Product Placement (VPP) refers to the unobtrusive, digital integration of branded products into visual content, which is often employed as a stealth marketing strategy [15]. Advertising solutions utilizing VPP have significant appeal due to their high customizability, effectiveness across diverse customer bases, and quantifiable efficiency. *The author performed this work as an intern at Amazon Web Services (AWS). Accepted at 6th AI for Content Creation (AI4CC) workshop at CVPR 2024. (Preprint) (a) Background (b) Inpainting Figure 1. An illustration of the proposed VPP system with an Amazon Echo Dot device. The input background image is shown in (a), and the inpainted output image is shown in (b) where an Amazon Echo Dot device is placed on the kitchen countertop by automatic identification of optimal location. Previous research underscores the impact of product placement within realms such as virtual reality [22] and video games [5]. With the recent advancements in generative AI technologies, the potential for product placement has been further expanded through the utilization of diffusion models. Significant research has focused on the development of controlled inpainting via diffusion models, albeit largely without an explicit emphasis on advertising applications [1, 8, 11]. However, these methods can be fine-tuned with a small set of 4 to 5 product sample images to generate high-quality advertising visual content. In this paper, we propose a novel, three-stage, fully automated system that carries out semantic inpainting of products by fine-tuning a pre-trained Stable Diffusion (SD) model [18]. 
In the first stage, a suitable location is identified for product placement using visual question answering and text-conditioned instant segmentation. The output of this stage is a binary mask highlighting the identified location. Subsequently, this masked region undergoes inpainting using a fine-tuned SD model. This SD model is fine-tuned by 1 arXiv:2405.01130v1 [cs.CV] 2 May 2024 \fDreamBooth [19] approach utilizing a few sample images of the product along with a unique identifier text prompt. Finally, the quality of the inpainted image is evaluated by a proposed Alignment Module, a discriminative method that measures the image quality, or the alignment of the generated image with human expectations. An illustration of the proposed VPP system is presented in Figure 1 with an Amazon Echo Dot device. Controlled inpainting of a specific product is a challenging task. For example, the model may fail to inpaint the intended object at all. If a product is indeed introduced through inpainting, the product created may not be realistic and may display distortions of shape, size, or color. Similarly, the background surrounding the inpainted product may be altered in such a way that it either meaningfully obscures key background elements or even completely changes the background image. This becomes especially problematic when the background images contain human elements, as models can transform them into disturbing visuals. As a result, the proposed Alignment Module is designed to address these complications, with its primary focus being on the appearance, quality, and size of the generated product. To exert control over the size of the generated product, morphological transformations, specifically erosion, and dilation, are employed. By adjusting the size of the mask through dilation or erosion, the size of the inpainted product can be effectively increased or decreased. This allows the system to generate a product of an appropriate size. In summary, the main contributions of this paper are twofold. The first pertains to the design of a fully automated Virtual Product Placement (VPP) system capable of generating high-resolution, customer-quality visual content. The second involves the development of a discriminative method that automatically eliminates subpar images, premised on the content, quality, and size of the product generated. The remainder of this paper is organized as follows. In section 2 we will delve into the related literature, with a specific emphasis on semantic inpainting methods utilizing diffusion models, and section 3 will highlight the broad contributions of the paper. Next, the proposed end-to-end pipeline for automatic VPP will be discussed in section 4. This includes a detailed examination of the three primary stages of the solution, along with the three sub-modules of the Alignment Module. Thereafter, we will elucidate the experimental design and evaluation methodologies adopted and report the corresponding results in section 5. Subsequently, deployment strategy and web application design will be explained in section 6. Finally, the paper will conclude with an outline of the identified limitations of our proposed methodology in section 7, complemented by a discussion on potential avenues for future research. 2. Related Works Recently, there has been significant progress in developing semantic or localized image editing using diffusion models largely without an explicit focus on digital marketing. 
Nevertheless, new generative AI approaches promise significant advances in VPP technology. For instance, in Blended Diffusion [1], the authors proposed a method of localized image editing using image masking and natural language. The area of interest is first masked and then modified using a text prompt. The authors employed a pre-trained CLIP model [17] along with pre-trained Denoising Diffusion Probabilistic Models (DDPM) [7] to generate natural images in the area of interest. Similar to Blended Diffusion, Couairon et. al. [3] proposed a method of semantic editing with a mask using a diffusion model. However, instead of taking the mask from the user, the mask is generated automatically. Nevertheless, a text query input from the user is utilized to generate the mask. The difference in noise estimates, as determined by the diffusion model based on the reference text and the query text, is calculated. This difference is then used to infer the mask. The image is noised iteratively during the forward process and in the reverse Denoising Diffusion Implicit Model (DDIM) [21] steps, the denoised image is interpolated with the same step output of the forward process using masking. Paint by Word proposed by Bau et. al. [2], is also similar, however instead of a diffusion model they utilized a Generative Adversarial Networks (GAN) [4] with a mask for semantic editing guided by text. On the other hand, Imagic [8] also performs text-based semantic editing on images using a diffusion model but without using any mask. Their approach consists of three steps. In the beginning, text embedding for a given image is optimized. Then the generative diffusion model is optimized for the given image with fixed-optimized text embedding. Finally, the target and optimized embedding are linearly interpolated to achieve input image and target text alignment. Likewise, a semantic editing method using a pre-trained text-conditioned diffusion model focusing on the mixing of two concepts is proposed by [12]. In this method, a given image is noised for several steps and then denoised with text condition. During the denoising process, the output of a denoising stage is also linearly interpolated with the output of a forward noise mixing stage. Hertz et. al. [6] took a different approach to semantic image editing where text and image embeddings are fused using cross-attention. The cross-attention maps are incorporated with the Imagen diffusion model [20]. However, instead of editing any given image, their approach edits a generated image using a text prompt which lacks any interest when VPP is concerned. Alternatively, Stochastic Differential Edit (SDEdit) [16] synthesizes images from stroke 2 \fpaintings and can edit images based on stroke images. For image synthesis, coarse colored strokes are used and for editing, colored stroke on real images or image patches on target images is used as a guide. It adds Gaussian noise to an image guide of a specific standard deviation and then solves the corresponding Stochastic Differential Equations (SDE) to produce the synthetic or edited image. To generate images from a prompt in a controlled fashion and to gain more control over the generated image, Li et. al proposed grounded text-to-image generation (GLIGEN) [11]. It feeds the model the embedding of the guiding elements such as bounding boxes, key points, or semantic maps. Using the same guiding components, inpainting can be performed in a target image. 
DreamBooth [19] fine-tunes the pre-trained diffusion model to expand the dictionary of the model for a specific subject. Given a few examples of the subject, a diffusion model such as Imagen [20] is fine-tuned using random samples generated by the model itself and new subject images by optimizing a reconstruction loss. The new subject images are conditioned using a text prompt with a unique identifier. Fine-tuning a pre-trained diffusion model with a new subject is of great importance in the context of VPP. Therefore, in this paper DreamBooth approach is utilized to expand the model\u2019s dictionary by learning from a few sample images of the product. 3. Contributions In this paper, a method of automated virtual product placement and assessment in images using diffusion models is designed. Our broad contributions are as follows: 1. We introduce a novel fully automated VPP system that carries out automatic semantic inpainting of the product in the optimal location using language-guided segmentation and fine-tuned stable diffusion models. 2. We proposed a cascaded three-stage assessment module named \u2018Alignment Module\u2019 designed to sieve out lowquality images that ensure the presence of the intended product in every generated output image. 3. Morphological transformations are employed such as dilation and erosion to adjust the size of the mask, therefore, to increase or decrease the size of the inpainted product allowing generating a product of appropriate size. 4. Experiments are performed to validate the results by blind evaluation of the generated images with and without the Alignment module resulting in 35% improvement in average quality. 5. The inpainted product generated by the proposed system is not only qualitatively more realistic compared to the previous inpainting approach [23] but also shows a superior quantitative CLIP score. 4. Methodology Fine-tuned Model DreamBooth VILT Visual Question Answering \u201cdesk\u201d \u201cwhich object in the image has a flat surface area?\u201d CLIPSeg Semantic Segmentation Content Score Quality Score Volume Score Stage 2 Stage 1 Stage 3 Figure 2. The block diagram of the proposed solution for the VPP system where each of the three stages is distinguished by varied color blocks. In stage 1, a suitable placement for product inpainting is determined by creating a mask using CLIPSeg and VILT models. Next, in stage 2, semantic inpainting is performed in the masked area using the fine-tuned DreamBooth model. Finally, stage 3 contains the cascaded sub-modules of the Alignment Module to discard low-quality images. 4.1. Proposed Method For semantic inpainting, we utilized the DreamBooth algorithm [19] to fine-tune stable diffusion using five representative images of the product and a text prompt with a unique identifier. Even with a limited set of five sample images, the fine-tuned DreamBooth model was capable of generating images of the product integrated with its background. Nevertheless, when inpainting was conducted with this fine-tuned model, the resulting quality of the inpainted product was significantly compromised. To enhance the quality of the product in the inpainted image, we augmented the sample images through random scaling and random cropping, consequently generating a total of 1,000 product images used to fine-tune SD. 4.2. Product Localization Module The proposed VPP system operates in three stages. A core challenge in product placement lies in pinpointing a suitable location for the item within the background. 
In the first stage, this placement is indicated via the generation of a binary mask. To automate this masking process, we leveraged the capabilities of the Vision and Language Transformer (ViLT) Visual Question Answering (VQA) model [9] in conjunction with the Contrastive Language3 \fImage Pretraining (CLIP) [17]-based semantic segmentation method, named CLIPSeg [13]. Notably, each product tends to have a prototypical location for its placement. For example, an optimal location for an Amazon Echo Dot device is atop a flat surface, such as a desk or table. Thus, by posing a straightforward query to the VQA model, such as \u201dWhich object in the image has a flat surface area?\u201d, we can pinpoint an appropriate location for the product. Subsequently, the identified location\u2019s name is provided to the CLIPSeg model, along with the input image, resulting in the generation of a binary mask for the object. 4.3. Product Inpainting Module In the second stage, the input image and the generated binary mask are fed to the fine-tuned DreamBooth model to perform inpainting on the masked region. Product inpainting presents several challenges: the product might not manifest in the inpainted region; if it does, its quality could be compromised or distorted, and its size might be disproportionate to the surrounding context. To systematically detect these issues, we introduce the third stage: the Alignment Module. 4.4. Product Alignment Module The Alignment Module comprises three sub-modules: Content, Quality, and Volume. The Content sub-module serves as a binary classifier, determining the presence of the product in the generated image. If the product\u2019s probability of existence surpasses a predefined threshold, then the Quality score is calculated for that image. This score evaluates the quality of the inpainted product in relation to the sample images originally used to train the SD model. Finally, if the image\u2019s quality score exceeds the set quality threshold, the Volume sub-module assesses the product\u2019s size in proportion to the background image. The generated image will be successfully accepted and presented to the user only if all three scores within the Product Quality Alignment Module meet their respective thresholds. Within the Content module, an image captioning model [14] is employed to generate a caption, which is then refined by incorporating the product\u2019s name. The super-class name of the product can also be utilized. Both the captions and the inpainted image are fed into the CLIP model to derive a CLIP score. If the modified caption scores above 70%, it\u2019s inferred that the product exists in the inpainted image. The Quality module contrasts the mean CLIP image features of the sample images with the CLIP image feature of the generated image. The greater the resemblance of the inpainted product to the sample images, the higher the quality score. A threshold of 70% has been established. The Volume module finally gauges the size of the inpainted product. The generated image is processed through the CLIP model, accompanied by three distinct textual size prompts. 
Given that \u201ca small dog sitting on a desk next to a computer\u201d \u201ca small dog sitting on a desk next to a computer with an echo dot\u201d \u201cInput Image\u201d \u201cGenerated Image\u201d Caption Generator CLIP Score Fine-tuned Caption Product Exist (a) Content Sub-module \u201cSample Images\u201d \u201cGenerated Image\u201d Mean CLIP Image Feature CLIP Image Feature Cosine Similarity Quality Score (b) Quality Sub-module \u201cGenerated Image\u201d CLIP Score \u201ctoo large {product}\u201d \u201cregular size {product}\u201d \u201ctoo small {product}\u201d Product Size (c) Volume Sub-module Figure 3. Block diagram of each of the components of the Alignment Module. The Content sub-module is built using a pre-trained caption generator and CLIP models shown in (a). The generated caption is fine-tuned by adding the name of the intended product to the caption. For the Quality sub-module, the image features of the same CLIP model are utilized shown in (b). Finally, in the Volume sub-module, the same CLIP model with three different size text prompts is used shown in (c). size perception can be subjective and varies based on camera proximity, a milder threshold of 34% (slightly above a random guess) has been selected. The comprehensive block diagram of the proposed VPP system is illustrated in Figure 2, with the three stages distinguished by varied color blocks. The block diagrams for each sub-module can be found in Figure 3. 4 \fThe Volume sub-module provides insights regarding the size of the inpainted product. To modify the product\u2019s size, the mask\u2019s dimensions must be adjusted. For this task, morphological transformations, including mask erosion and dilation, can be employed on the binary mask. These transformations can either reduce or augment the mask area, allowing the inpainting module to produce a product image of the desired size. The relationship between alterations in the mask area and the size of the inpainted product across various erosion iterations is depicted in Figure 4. Approximately, 25 iterations of erosion consume around 3 milliseconds, making it highly cost-effective. 0 10 20 25 Figure 4. Application of erosion to the mask where a kernel of size (5 \u00d7 5) is used for 0, 10, 20, and 25 iterations shown in the figure consecutively. The resulting output is presented at the bottom of the corresponding mask to show the size reduction of the generated product in the output image. 5. Experimental Results Experiments were conducted to evaluate the performance of the proposed VPP system. For these experiments, five sample images of an \u201cAmazon Echo Dot\u201d were chosen. 1, 000 augmented images of each product created from these five sample images were used to fine-tune the DreamBooth model using the text prompt \u201dA photorealistic image of a sks Amazon Alexa device.\u201d The model was fine-tuned for 1, 600 steps, employing a learning rate of 5 \u00d7 10\u22126, and a batch size of 1. The fine-tuned model can inpaint products into the masked region. However, issues such as lack of product appearance, poor resolution, and disproportionate shape persist. The goal of the proposed Alignment Module is to automatically detect these issues. If identified, the problematic images are discarded, and a new image is generated from different random noise. Only if a generated image meets all the module\u2019s criteria it is presented to the user. Otherwise, a new image generation process is initiated. This loop continues for a maximum of 10 iterations. 5.1. 
Assessing Alignment Module To assess the effectiveness of the Alignment Module, images were generated both with and without it. For each submodule, as well as for the overall Alignment Module, 200 images were generated: 100 with the filter activated and 100 without (referred to as the \u201dNaive\u201d case). To prevent bias, all images were given random names and were consolidated into a single folder. These images were also independently evaluated by a human, whose scores served as the ground truth. This ground truth information was saved in a separate file for the final evaluation, which followed a blindfolded scoring method. All the experiments were also repeated for another product named \u201cLupure Vitamin C\u201d. 5.2. Evaluation Metrics The evaluation and scoring method of each of the submodules of the Alignment module is described in the consecutive segments. \u2022 Content Score For the image content score, images are categorized into two classes: \u2018success\u2019 if the product appears, and \u2018failure\u2019 otherwise. When the content module is utilized, the Failure Rate (FR), defined as the ratio of Failure to Success, is below 10% for both of the products. \u2022 Quality Score For the quality score, images are rated on a scale from 0 to 10: 0 indicates the absence of a product, and 10 signifies a perfect-looking product. To evaluate in conjunction with the CLIP score, both the Mean Assigned Quality Score (MAQS) and Mean Quality Score (MQS) are calculated. MAQS represents the average score of images labeled between 0 and 10, while MQS is the output from the quality module, essentially reflecting cosine similarity. \u2022 Volume Score For the volume module, images are also rated on a scale from 0 to 10: 0 for a highly unrealistic size, and 10 for a perfect size representation. When evaluating the volume module, the content module is not utilized. Since the size score necessitates the presence of a product, images without any product are excluded from this evaluation. To gauge performance, the Mean Assigned Size Score (MASS) is calculated in addition to the CLIP score. 5.2.1 Overall Results The results of individual evaluations are presented in Table 1. It can be observed from this table that using any of the sub-modules consistently produced better outcomes compared to when no filtering was applied across various metrics. The results of the comprehensive evaluation, encompassing all sub-modules, can be found in Table 2. 5 \fTable 1. Individual evaluation of content, quality, and volume sub-modules within the overall Alignment Module. \u201cNaive\u201d represents the outputs without any filtering sub-modules. Content classifies the presence of the product in the generated images. Quality measures the proximity of the generated product to the sample product images used to fine-tune the diffusion model. Finally, Volume identifies the size category of the product. Naive Content Naive Quality Naive Volume Amazon Echo Dot Success 72 94 CLIP 32.49 \u00b1 3.69 33.80 \u00b1 2.69 CLIP 32.58 \u00b1 3.70 33.42 \u00b1 2.69 Failure 28 6 MAQS 4.41 \u00b1 3.23 6.41 \u00b1 1.90 MASS 3.01 \u00b1 2.68 4.81 \u00b1 2.31 FR 38.89% 6.38% MQS 0.75 \u00b1 0.14 0.83 \u00b1 0.06 Lupure Vitamin C Success 87 100 CLIP 24.61 \u00b1 2.4 25.23 \u00b1 2.66 CLIP 24.22 \u00b1 3.01 24.51 \u00b1 2.89 Failure 13 0 MAQS 5.65 \u00b1 2.85 6.47 \u00b1 1.09 MASS 5.64 \u00b1 3.05 7.14 \u00b1 1.53 FR 14.94% 0.0% MQS 0.81 \u00b1 0.13 0.86 \u00b1 0.04 Table 2. 
Comparison of the proposed method with and without using the Alignment Module in addition to the Paint-By-Example (PBE) [23] inpainting model. The \u201cNaive\u201d performance represents the generated output without applying the Alignment Module. The \u201cAlignment\u201d column represents the generated outputs where three cascaded filtering sub-modules are used, i.e., the Alignment Module. Amazon Echo Dot Lupure Vitamin C PBE Naive Alignment PBE Naive Alignment CLIP 31.44 \u00b1 3.43 32.85 \u00b1 3.19 33.85 \u00b1 2.54 27.01 \u00b1 2.10 24.71 \u00b1 2.64 24.89 \u00b1 2.90 MAQS 1.13 \u00b1 1.30 4.65 \u00b1 3.60 6.31 \u00b1 2.39 1.75 \u00b1 1.51 6.60 \u00b1 3.01 7.81 \u00b1 1.13 MASS 1.22 \u00b1 1.60 3.05 \u00b1 2.98 4.70 \u00b1 2.81 2.43 \u00b1 2.07 6.25 \u00b1 3.08 7.30 \u00b1 1.59 MQS 0.64 \u00b1 0.08 0.75 \u00b1 0.14 0.82 \u00b1 0.05 0.67 \u00b1 0.06 0.82 \u00b1 0.12 0.86 \u00b1 0.05 FR 78.57% 29.87% 0.00% 38.89% 17.64% 0.00% (a) (b) (c) (d) Figure 5. Inpainted product image of Paint-by-Example (PBE). PBE generates high-quality images which explains the higher CLIP score in the case of Lupure Vitamin C. However, the inpainted product does not look similar to the desired product at all resulting in very poor mean assigned quality and size scores. Output images for Amazon Echo Dot is shown in (a) and (b), and for Lupure Vitamin C is shown in (c) and (d). Figure 6. Empirical performance of Alignment Module for Amazon Echo Dot. Noticeably, no output is generated without any product when the Alignment Module is employed. Moreover, the mean quality score has increased from 4.65 to 6.31. 5.3. Comparison with Paint-By-Example The proposed method is compared with the Paint-ByExample (PBE) [23] inpainting model and Table 2 shows the performance comparison of the proposed method along with PBE. PBE can generate very high-quality images, however, the inpainted product in the generated image does not look alike the desired product at all as shown in Figure 5 resulting in very poor MAQS and MASS. Whereas the inpainted product of our proposed method resembles much of the original product shown in Figure Figure 7. 6 \f5.4. Frequency Distribution The frequency distribution and density function of the assigned quality scores in the case of \u201cNaive\u201d and \u201cAlignment\u201d for Amazon Echo Dot is presented in Figure 6. The density mean has shifted from 4.65 to 6.31 when Alignment Module is adopted indicating the effectiveness of the proposed module. 6. Path to Production 6.1. Product API The location identifier, fine-tuned model, and Alignment Module are combined to develop an easy-to-use VPP Streamlit web app 1. This app is hosted on Amazon Sagemaker using an \u201cml.p3.2xlarge\u201d instance, which is a single V100 GPU with 16GB of GPU memory. The demo app\u2019s interface is illustrated in Figure 8. In the top-left \u2018Image\u2019 section, users can either upload their own background image or choose from a selection of sample background images to generate an inpainted product image. The web app provides extensive flexibility for tuning the parameters of the Alignment Module so that users can comprehend the effects of these parameters. In the \u2018seed\u2019 text box, a value can be input to control the system output. The segmentation threshold for CLIPSeg defaults to 0.7, but users can refine this value using a slider. Within the \u2018Mask Params\u2019 section, the number of dilation and erosion iterations can be set and visualized in real-time. 
The filter, represented by the Alignment Module, can be toggled on or off. The \u2018Max Attempt\u2019 slider determines the number of regeneration attempts if the model does not produce a satisfactory output. However, if a seed value is specified, the model will generate the output only once, regardless of the set value. Lastly, in the \u2018Filter Params\u2019 section, users can fine-tune the threshold values for each sub-module of the Alignment Module, specifically for content, quality, and volume. The \u201cshow stats\u201d button beneath the input image displays the mask alongside details of the model outputs. These details include the seed value, placement, generated and modified captions, and the content, quality, and volume/size scores. By visualizing the mask and its area, users can apply erosion or dilation to adjust the product\u2019s size. The default threshold values for content, quality, and volume are 0.7, 0.7, and 0.34, respectively. While these values can be adjusted slightly higher, it is recommended to also set \u2018Max Attempt\u2019 to 10 in such cases, since a higher threshold means that the generated output is more likely to fail the criteria set by the Alignment Module. 6.2. Future Considerations for Product Scalability Fine-tuning Stable Diffusion using DreamBooth can take up to 30 minutes, depending on dataset size, image resolution, and extent of training. For a customer with hundreds or thousands of products, this process could take days to complete model training across the different products. Our pipeline is deployed on Amazon SageMaker, a managed service that supports the automatic scaling of deployed endpoints. This service can dynamically accommodate large computational needs by provisioning additional instances as required. As such, fine-tuning 100 SD models for 100 different products would still take only about 30 minutes if 100 instances were utilized in parallel. The fine-tuned models are stored in an Amazon S3 (Simple Storage Service) bucket, with each model being 2.2 GB in size. Consequently, 100 fine-tuned models would occupy approximately 220 GB of storage space. A pertinent question arises: can we strike a space-time trade-off by training a single model with a unique identifier for each product? If this is feasible, the space requirement would be reduced to a constant 2.2 GB. However, that one model would need more extensive training; specifically, the number of training steps would increase by a factor of 100 for 100 products, thereby lengthening the computation time. This approach remains untested and warrants future exploration [10]." }, { "url": "http://arxiv.org/abs/2403.17978v1", "title": "Holographic Global Convolutional Networks for Long-Range Prediction Tasks in Malware Detection", "abstract": "Malware detection is an interesting and valuable domain to work in because it\nhas significant real-world impact and unique machine-learning challenges. We\ninvestigate existing long-range techniques and benchmarks and find that they're\nnot very suitable in this problem area. In this paper, we introduce Holographic\nGlobal Convolutional Networks (HGConv) that utilize the properties of\nHolographic Reduced Representations (HRR) to encode and decode features from\nsequence elements. Unlike other global convolutional methods, our method does\nnot require any intricate kernel computation or crafted kernel design. HGConv\nkernels are defined as simple parameters learned through backpropagation.
The\nproposed method has achieved new SOTA results on Microsoft Malware\nClassification Challenge, Drebin, and EMBER malware benchmarks. With log-linear\ncomplexity in sequence length, the empirical results demonstrate substantially\nfaster run-time by HGConv compared to other methods achieving far more\nefficient scaling even with sequence length $\\geq 100,000$.", "authors": "Mohammad Mahmudul Alam, Edward Raff, Stella Biderman, Tim Oates, James Holt", "published": "2024-03-23", "updated": "2024-03-23", "primary_cat": "cs.CR", "cats": [ "cs.CR", "cs.AI", "cs.LG", "stat.ML" ], "main_content": "Introduction Ever since the transformer (Vaswani et al., 2017) revolutionized natural language processing research (Brown et al., 2020; Devlin et al., 2018; Raffel et al., 2020), significant attention has been paid to the quadratic cost of increasing sequence length. While traditional academic benchmarks tend not to require sequence lengths beyond 4096, many real-world applications such as multi-round chat (Team, 2023; Yao et al., 2023), biological sequence modeling (Ahdritz et al., 2022; Avsec et al., 2021; Dalla-Torre et al., 2023; Jumper et al., 2020; Lin et al., 2022), and analyzing computer programs (Alam et al., 2023a; Muennighoff et al., 2023; Rozi\u00e8re et al., 2023) do. The unique challenges, data, and sequence dynamics that occur within each application can have a significant effect on what techniques work well, which is not well elucidated within the current Transformer literature. In this paper we are concerned with malware classification using byte-level representations of executables (Raff and Nicholas, 2017b), a task that can require sequence lengths of up to 200 million in common real-world scenarios. Though we are not able to process this extreme length in its entirety, we focus on it as an important research direction to test and develop algorithms for long-sequence task modeling. In particular, we find that some popular benchmarks from natural language processing are not well correlated with improvement in malware detection tasks. Thus, we find it necessary to develop new architectures, which we do by incorporating aspects of classical neuro-symbolic methods like the Holographic Reduced Representation (HRR) (Plate, 1995). 1.1 Malware Detection Two predominant types of malware detection tasks exist: distinguishing malicious programs from benign ones, and assigning a known malicious file to a unique family of malware. Both of these tasks are relevant to real-world cyber security and are complicated by the long-range interactions, spatial and non-spatial locality, exhibited within binary sequences (Raff and Nicholas, 2020). Because ML algorithms cannot usually handle more than a few thousand tokens of sequence length, the field has relied heavily on manually designed hash functions (Botacin et al., 2021; Breitinger et al., 2013; Lillis et al., 2017; Oliver et al., 2013; Raff and Nicholas, 2018b; Roussev, 2009; Winter et al., 2013). In this work we will push deep learning-based sequence modeling to over 100,000 tokens, and longer sequences will be truncated down.
Though this does not yet reach the full possible sequence length, it serves as a real-world task to determine the efficacy of our methods. 1.2 Efficient Transformer-Based Models The quadratic cost of attention has motivated substantial research into more efficient architectures that maintain the performance of transformers. For smaller-scale models, there are a wide variety of such architectures (Choromanski et al., 2020; Katharopoulos et al., 2020; Ma et al., 2021; Wang et al., 2020; Zaheer et al., 2020); however, they are limited by their inability to scale to and match the performance of traditional transformers. Another approach to the quadratic run-time of attention that has gained popularity lately has been to simply pay it. Newer kernels for attention are reasonably fast in practice (Dao, 2023; Dao et al., 2022), and new techniques exist for extending context length during post-training (Chen et al., 2023; Peng et al., 2023b; Rozi\u00e8re et al., 2023). However, the expense of such models is impractical for many applications: while these techniques substantially decrease the costs associated with training long-context models, they do not substantially decrease the memory overhead at inference time. This is essential because, for most applications, the primary bottleneck is GPU VRAM and not raw computing power. 1.3 Non-Transformer Models for Sequences Recent research has also raised the prospect of alternatives to the transformer architecture for sequence-based tasks. Foremost among these are state-space models, S4 (Gu et al., 2021) and its variants (Gu et al., 2020; Li et al., 2022; Poli et al., 2023), which have achieved impressive performance on language and vision tasks. Previous work in malware detection has independently developed larger-width convolutions, on the order of 128\u2013256-wide kernels, followed by temporal pooling (Raff et al., 2021; Raff and Nicholas, 2017b). Simultaneously with this work, non-transformer architectures with more efficient inference-time context-length scaling have begun to match the performance of transformers on natural language tasks (Gu and Dao, 2023; Peng et al., 2023a) and pose an interesting area of exploration for future work in malware detection and other long-sequence problems. 1.4 Our Contributions Our primary contributions are as follows: 1. We introduce HGConv, a novel fusion of previous architectures (Li et al., 2022; Plate, 1995) that achieves state-of-the-art performance on three standard malware classification benchmarks, and furthermore achieves its excellent performance with lower inter-run variance. 2. We introduce novel algorithmic optimizations that enable HGConv to run substantially faster and with lower memory overhead than other global convolutional models. 3. We show that the widely used Long Range Arena (LRA) (Tay et al., 2020) benchmark is a poor proxy for performance at malware classification, despite the fact that it is a task that requires reasoning about long contexts. This underlines the need for using domain-specific benchmarks whose construct validity has been validated in the real world instead of \u201cgeneral performance\u201d benchmarks. 2 Methodology In convolution, inputs are convolved with kernels or filters. Recent works have demonstrated the potential of global convolution in sequence modeling, yet intricate kernel computation requires custom CUDA extensions to run (Gu et al., 2021) or crafted kernel designs that approximate the S4 kernel for each task (Li et al., 2022).
In this paper, we focus on building a neuro-symbolic mechanism where kernels are defined as parameters and learned through auto-differentiation, eliminating the need for intricate computations and task-specific kernel design. Before going over the details of the proposed HGConv, we first give a brief overview of the HRR and its properties; then the proposed method is elaborated, and finally the algorithmic complexity is delineated. Our implementation can be found at https://github.com/FutureComputing4AI/HGConv. 2.1 Holographic Reduced Representations Holographic Reduced Representations (HRR) are a type of vector symbolic architecture (VSA) that represents compositional structure using circular convolution in distributed representations (Plate, 1995). In HRR, vector representations of properties and values can be combined using circular convolution, and HRRs have been used successfully in recent literature (Alam et al., 2023b, 2022; Menet et al., 2023). For instance, the color and shape of a red circle can be stored in a compressed representation using the binding operation (\u2297) and the additive properties of HRR, simply as b = color \u2297 red + shape \u2297 circle. Here the abstract concepts \u201ccolor\u201d, \u201cred\u201d, \u201cshape\u201d, and \u201ccircle\u201d are each arbitrarily assigned a d-dimensional vector. The method of retrieving knowledge from this compressed representation is known as unbinding, which is the binding operation applied with the inverse of a vector representation. Given vectors xi, yi of dimension d, the binding operation is defined in Equation 1. B = xi \u2297 yi = F\u22121(F(xi) \u2299 F(yi)) (1) Here, F(\u00b7) and F\u22121(\u00b7) refer to the Fast Fourier Transform (FFT) and its inverse, respectively, and \u2299 denotes elementwise multiplication. To retrieve the xi component from the bound representation B, the same binding operation is performed with the inverse of the yi vector, defined in Equation 2. yi\u2020 = F\u22121(1 / F(yi)) (2) To extract the shape of the object in our example from b, the unbinding operation is performed as b \u2297 shape\u2020 \u2248 circle. Similarly, the same concept can be utilized to encode features by binding and decode them by unbinding.
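To make the binding/unbinding mechanics concrete before the layer definitions, a toy NumPy sketch of Equations 1 and 2 is given below, assuming the N(0, 1/d) initialization described by Plate (1995); it is an illustration, not the released HGConv code.

```python
# Toy NumPy sketch of HRR binding and unbinding (Equations 1 and 2).
import numpy as np

def bind(x, y):
    # Circular convolution: F^-1(F(x) . F(y))
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(y), n=x.shape[-1])

def inverse(y):
    # y^dagger = F^-1(1 / F(y))
    return np.fft.irfft(1.0 / np.fft.rfft(y), n=y.shape[-1])

d = 1024
rng = np.random.default_rng(0)
color, red, shape, circle = rng.normal(0.0, 1.0 / np.sqrt(d), (4, d))

b = bind(color, red) + bind(shape, circle)  # compressed record
retrieved = bind(b, inverse(shape))         # b bound with shape^dagger
# `retrieved` is close to `circle` and nearly orthogonal to the other vectors.
print(retrieved @ circle, retrieved @ red)
```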
The input features are encoded with the encoder filter W_i^E using binding, given in Equation 3 and Equation 4. (Notation: \u2297 denotes binding, \u229b circular convolution, and \u2299 elementwise multiplication.) Here, each m-th element of the feature vector y_n of Y_i is a linear combination of features, where \u2200n, m \u2208 N : 0 \u2264 n \u2264 T\u22121, 0 \u2264 m \u2264 H\u22121, and ((\u00b7))_H denotes indexing modulo H. The encoder step does not mix or alter the sequence elements; the sole focus is on feature learning. Y_i = X_i \u2297 W_i^E \u2208 R^{T\u00d7H} (3) y_n[m] = \u03a3_{j=0}^{H\u22121} x_n[j] w^e[((m \u2212 j))_H] (4) After learning the features, the encoded features of each element are mixed with weighted input features, i.e., the kernel W_i^C, using convolution, given in Equation 5. Each feature vector h[n] of the convolution layer is a linear weighted combination of the encoded features of the tokens, expressed in Equation 6 and Equation 7. To include a bias term, a weight W_i^B \u2208 R^H is defined, which is elementwise multiplied with Y_i and added, and subsequently a GELU (Hendrycks and Gimpel, 2016) is applied. H_i = Y_i \u229b W_i^C + Y_i \u2299 W_i^B \u2208 R^{T\u00d7H} (5) Y_i \u229b W_i^C : h[n] = \u03a3_{j=0}^{T\u22121} y[j] w^c[((n \u2212 j))_T] (6) h[n] = y_0 w^c_n + y_1 w^c_{n\u22121} + \u00b7\u00b7\u00b7 + y_n w^c_0 + y_{n+1} w^c_{T\u22121} + y_{n+2} w^c_{T\u22122} + \u00b7\u00b7\u00b7 + y_{T\u22121} w^c_{n+1} (7) Since unbinding can extract information from added feature vectors, it is utilized to decode useful features from the convolutional step. Given that features are mixed regardless of their significance, the most important features can be extracted using unbinding by learning appropriate kernels. Specifically, the unbinding step is expected to learn to discard overmixed or unnecessarily mixed element features. Z_i = H_i \u2297 W_i^{D\u2020} \u2208 R^{T\u00d7H} (8) z_n[m] = \u03a3_{j=0}^{H\u22121} h_n[j] w^{d\u2020}[((m \u2212 j))_H] (9) The extracted features are processed by a gated linear unit (GLU) (Dauphin et al., 2017), given in Equation 10, and subsequently a dropout layer is used. W_i^\u03b1 and W_i^\u03b2 are the weights of the GLU unit and \u03c3 is the sigmoid activation. G_i = W_i^\u03b1 Z_i \u2299 \u03c3(W_i^\u03b2 Z_i) (10) Figure 1: The block diagram of the proposed method. The dotted region shows a single layer of the proposed network, which is repeated N times. In the figure, prenorm is applied; in the case of postnorm, normalization is applied after the GLU layer, before the skip connection. Finally, a skip connection is used by adding the unperturbed input X_i to the processed features from the GLU unit, G_i. The output of the i-th layer, X_{i+1}, can be fed to the next layer, and the process can be repeated N times to extract deeper features through the bind \u2192 conv \u2192 unbind \u2192 GLU combination in each layer, improving the performance of the network. X_{i+1} = G_i + X_i (11) A generic block diagram of the proposed method is presented in Figure 1. In the embedding layer, both word and position embeddings are used and added together. For normalized floating-point inputs, a linear layer is used in place of the word embedding.
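A condensed PyTorch sketch of one such layer (Equations 3\u201311) is given below; it omits dropout and normalization, and the initialization scales are assumptions for illustration rather than the exact training recipe.

```python
# Hedged PyTorch sketch of one HGConv layer (Equations 3-11).
import torch
import torch.nn as nn
import torch.nn.functional as F

def circ_conv(x, w, dim):
    # Circular convolution along `dim` via FFT; `w` is zero-padded to match.
    n = x.shape[dim]
    return torch.fft.irfft(torch.fft.rfft(x, dim=dim) * torch.fft.rfft(w, n=n, dim=dim), n=n, dim=dim)

class HGConvLayer(nn.Module):
    def __init__(self, T, H, K):
        super().__init__()
        self.T = T
        self.we = nn.Parameter(torch.randn(H) / H**0.5)     # encoder filter W^E
        self.wc = nn.Parameter(torch.randn(K, H) / T**0.5)  # global kernel W^C, K <= T
        self.wb = nn.Parameter(torch.randn(H) / H**0.5)     # bias filter W^B
        self.wd = nn.Parameter(torch.randn(H) / H**0.5)     # decoder filter W^D
        self.glu = nn.Linear(H, 2 * H)                      # W^alpha and W^beta

    def forward(self, x):                                   # x: (B, T, H)
        y = circ_conv(x, self.we, dim=-1)                   # bind along features (Eq. 3)
        wc = F.pad(self.wc, (0, 0, 0, self.T - self.wc.shape[0]))  # pad kernel to T
        h = F.gelu(circ_conv(y, wc[None], dim=1) + y * self.wb)    # conv + bias (Eq. 5)
        wd_inv = torch.fft.irfft(1.0 / torch.fft.rfft(self.wd), n=self.wd.shape[0])
        z = circ_conv(h, wd_inv, dim=-1)                    # unbind along features (Eq. 8)
        a, b = self.glu(z).chunk(2, dim=-1)
        return a * torch.sigmoid(b) + x                     # GLU (Eq. 10) + skip (Eq. 11)

# e.g., HGConvLayer(T=1024, H=256, K=128)(torch.randn(2, 1024, 256)) -> (2, 1024, 256)
```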
In the norm layer, either layer normalization (Ba et al., 2016) or batch normalization (Ioffe and Szegedy, 2015) can be employed. Global average pooling (GAP) is applied to the output of the N-th layer, which is subsequently fed to a linear layer whose feature size equals the number of classes. The loss is calculated using the softmax cross-entropy loss function, which is optimized using the Adam optimizer with a cosine-decay learning-rate scheduler with warmup. 2.3 Algorithmic Complexity The time complexities of the three main layers, i.e., binding, convolution, and unbinding, are O(T \u00b7 H log H), O(T log T \u00b7 H), and O(T \u00b7 H log H), respectively. Therefore, the overall time complexity is O(T log T), log-linear with respect to the sequence length T. Since the shape of the tensors in all the layers is T \u00d7 H, the space complexity is O(T), linear. The feature dimension H is assumed to be constant. A step-by-step breakdown of the time and space complexity is given in Equation 12 and Equation 13. Time Complexity = O(T \u00b7 H log H + T log T \u00b7 H + T \u00b7 H log H) = O(2 \u00b7 T \u00b7 H log H + T log T \u00b7 H) = O(T + T log T) [H is constant] = O(T \u00b7 (1 + log T)) = O(T log T), i.e., log-linear (12) Space Complexity = O(T \u00b7 H) = O(T), i.e., linear [H is constant] (13) 3 Experiments and Results In this paper, we propose a neuro-symbolic method of sequence processing that encodes features, convolves along all the sequence elements, and finally decodes the necessary features, compensating for overmixing. To validate the proposed method, experiments are performed focusing on practical applications where long sequences are a common phenomenon, such as malware classification, where sequence length can reach up to \u2248200M. In our experiments, we adopt well-known malware classification benchmarks: the Microsoft Windows malware benchmark from the 2015 Kaggle competition (Panconesi et al., 2015), the Android application package (APK) malware benchmark from the Drebin dataset (Arp et al., 2014), and the EMBER malware classification benchmark (Anderson and Roth, 2018). As will be seen in the results, in most cases existing hash-based algorithms that have no learning phase outperform existing Transformer and similar long-sequence learning algorithms. Kaggle The Microsoft Malware Classification Challenge (BIG 2015) hosted on Kaggle (Panconesi et al., 2015) is a benchmark of 9 Windows malware families. The dataset contains 10,868 samples with a total uncompressed size of 184 GB, split into train and test sets in an 80\u201320 ratio per class by random sampling. Each data sample comes in two forms: in one form it is the raw binary of the original executable, referred to as Kaggle Raw, of size 47 GB; in the other, it is the human-readable assembly, referred to as Kaggle Asm, of size 137 GB. Asm files are generated by IDA Pro and contain additional features that seem to make the task easier to learn. However, they are also \u22483\u00d7 larger, with longer sequence lengths than the Raw files, thus balancing the difficulty of the dataset. Drebin The Android APK benchmark Drebin (Arp et al., 2014) covers 178 malware families and contains 5,560 samples with a total uncompressed size of 16 GB. However, 70% of the families contain fewer than 10 samples, and 88.8% of the families contain fewer than 40 samples.
Therefore, to be able to learn from enough data, our experiments utilize the top 20 malware families, containing 4,664 samples of size 14 GB, which are split into train and test sets in an 80\u201320 ratio per class. The original data of the dataset is in APK format, referred to as Drebin Apk, of size 6 GB. As with Kaggle, another version of the dataset is built by converting the APK files to uncompressed TAR files, which have a size of 8 GB and are referred to as Drebin Tar. Since the difference between the samples is the amount of compression, it is useful to understand how compression is handled by each algorithm. EMBER EMBER is a binary malware classification benchmark (Anderson and Roth, 2018) containing 800K samples of Windows executable files totaling 1.02 TB in size. The training split contains 300K benign and 300K malicious files with a total size of 826 GB, while the test split contains 100K benign and 100K malicious files with a total size of 220 GB. Although the sequence length of files in the EMBER dataset can be over 100M, which is not practical for any sequence model to process, we start our experiments with a relatively short length of 256 (2^8), which is exponentially incremented up to 131,072 (2^17). Since most of the important features are encoded at the beginning of the sequence, we did not see any benefit from using a sequence length much longer than 2^17. 3.1 Training The input sequences are padded or truncated up to the maximum sequence length to train the proposed HGConv network. To suppress the embeddings of the padded tokens, a binary mask is produced and multiplied with the embedding matrix. In the convolutional step, the kernel dimension K can be smaller than the actual sequence length; the kernel is likewise padded with zeros up to the maximum sequence length T to perform FFT convolution. Since all the tasks are essentially classification, the network is trained with the softmax cross-entropy loss function, optimized using the Adam optimizer with a cosine learning-rate scheduler. Moreover, label smoothing is applied with a smoothing factor \u03b1 = 0.1. The hyperparameters used in each task are tuned and optimized; the full list is presented in Appendix A. The training is performed on a single node with 16 NVIDIA Tesla PH402 32 GB GPUs, where the mean of the gradients from each device is used to update the parameters. 3.2 Evaluations To evaluate performance, the proposed HGConv is compared with other state-of-the-art (SOTA) sequence models. For the Kaggle and Drebin datasets, the proposed method is compared with non-attention-based processors such as Lempel-Ziv Jaccard Distance (LZJD) (Edward Raff et al., 2019; Raff and Nicholas, 2018a, 2017a) and Stochastic Hashed Weighted Lempel-Ziv (SHWeL) (Raff and Nicholas, 2017b), attention-based processors such as the Transformer (Vaswani et al., 2017), Performer (Choromanski et al., 2020), and Hrrformer (Alam et al., 2023a), and state-space-model-based processors S4 (Gu et al., 2021) and SGConv (Li et al., 2022). Other compression-based methods like the Burrows-Wheeler Markov Distance (BWMD) (Raff et al., 2020) and Lempel-Ziv Networks (Saul et al., 2023) were not considered due to lower accuracy compared to the selected baselines, and their other benefits are not a focus of this work. Table 1 shows the mean accuracy with standard deviation under 10-fold cross-validation for each method.
Among all the methods, the proposed HGConv achieved the best results on all the datasets with the smallest standard deviation. It is also the only method to outperform the existing hash-based approaches, showing how existing methods did not adequately learn from long-sequence problems. In terms of fluctuation among the models, the variation in the results on Drebin Apk is the most noticeable. Figure 3 shows the UMAP 3D representation (McInnes et al., 2018; Nolet et al., 2021) of the output of the penultimate layer of all the models, revealing the clustering patterns. HGConv has visibly better clusters, which lets the final-layer classifier predict correctly. Moreover, qualitative inspection shows that models that perform better generally show clearer and better-separated clusters, with HGConv in particular showing the best clustering behavior. EMBER is a benchmark with very long sequences. [Figure 2 plots: accuracy (%) and execution time (s, on a \u00d710^4 scale) versus maximum sequence length 2^9\u20132^17 for Transformer, H-Trans-1D, S4, SGConv, F-Net, Hrrformer, and HGConv, with each model's asymptotic complexity, O(T^2), O(T), or O(T log T), annotated.] Figure 2: EMBER long-sequence malware classification results. In the figure, OOT and OOM stand for out-of-time and out-of-memory, shown for models that face such issues after a particular sequence length. The figure shows a shorter comparison. A broader comparison with additional models, Linformer (Wang et al., 2020), Performer (Choromanski et al., 2020), and F-Net (Lee-Thorp et al., 2021), and numeric results are presented in Appendix C. Figure 3: The Drebin Apk dataset has the most variation in results across the models in the benchmark. The figure shows the UMAP 3D representation of the output from the penultimate layer of all the models for Drebin Apk. The better the clusters, the higher the accuracy. Table 1: Results of 10-fold cross-validation on the Kaggle Microsoft Malware Classification Challenge and Drebin Android malware classification. Values inside parentheses are standard deviations. For both datasets, the training time per epoch is provided in seconds.
Model | Kaggle Raw | Kaggle Asm | Kaggle Time | Drebin Apk | Drebin Tar | Drebin Time
LZJD (Raff and Nicholas, 2017a) | 97.6 (1.50) | 97.1 (6.10) | \u2013 | 80.8 (2.60) | 81.0 (6.50) | \u2013
1NN-SHWeL (Raff and Nicholas, 2017b) | 97.6 (1.38) | 97.3 (1.93) | \u2013 | 83.6 (1.94) | 87.9 (1.84) | \u2013
LR-SHWeL (Raff and Nicholas, 2017b) | 96.7 (2.07) | 96.9 (2.08) | \u2013 | 78.4 (2.26) | 89.1 (2.29) | \u2013
Transformer (Vaswani et al., 2017) | 72.68 (3.77) | 95.60 (1.52) | 31.55 | 40.13 (6.11) | 69.50 (2.67) | 15.90
F-Net (Lee-Thorp et al., 2021) | 93.17 (1.08) | 95.74 (1.03) | 6.54 | 69.41 (1.81) | 80.98 (1.22) | 4.73
Luna-256 (Ma et al., 2021) | 89.50 (0.89) | 93.47 (0.96) | 26.19 | 24.30 (2.36) | 56.42 (7.90) | 16.49
H-Transformer (Zhu and Soricut, 2021) | 92.78 (0.49) | 98.07 (0.29) | 117.53 | 71.85 (0.84) | 87.40 (0.70) | 99.64
Performer (Choromanski et al., 2020) | 94.63 (0.79) | 97.66 (0.48) | 37.08 | 70.44 (3.65) | 82.38 (1.12) | 18.31
Hrrformer (Alam et al., 2023a) | 94.41 (0.57) | 98.52 (0.23) | 7.35 | 57.28 (3.80) | 84.07 (1.03) | 5.42
S4 (Gu et al., 2021) | 96.44 (0.41) | 98.66 (0.32) | 17.51 | 88.38 (1.69) | 87.94 (1.05) | 14.97
SGConv (Li et al., 2022) | 95.13 (0.91) | 98.12 (0.56) | 24.37 | 76.23 (3.14) | 80.04 (4.33) | 24.37
HGConv | 98.86 (0.12) | 99.63 (0.14) | 5.86 | 90.15 (0.47) | 91.86 (0.35) | 3.63
In our experiments on EMBER, we started with a moderate sequence length of 256 (2^8), incremented it exponentially up to 131,072 (2^17), and computed the accuracy and execution time for each sequence length, presented in Figure 2. For practical reasons, we set a maximum limit of 10,000 seconds per epoch; a method that takes longer is marked as out-of-time (OOT). If the model and data cannot fit into memory for a particular sequence length, that is marked as out-of-memory (OOM) in the figure. HGConv not only achieves the best accuracy but also takes the least amount of time among all the compared methods. The full comparison and all the numerical results are presented in Appendix C. We also find that HGConv runs substantially faster than all other methods, achieving far more efficient scaling despite the increased theoretical complexity compared to Hrrformer and F-Net. 4 Long Range Arena Does Not Predict EMBER Reliably Recent work on benchmarking large language models (Gao et al., 2021; Raji et al., 2021) has questioned the construct validity of the widespread practice of assuming that \u201cdiverse\u201d acontextual benchmarks are indicative of performance on tasks of interest. For long-context models, this is exemplified by the widespread use of the Long Range Arena (Tay et al., 2020), which contains tasks that evaluate parsing long expressions, classifying movie reviews, assessing text similarity, classifying flattened CIFAR-10 images, and identifying if two points are connected by a long path. Despite the lack of relevance of these tasks to their application domains, LRA scores have been used to motivate architectural design choices in work in genomics (Nguyen et al., 2023; Romero and Zeghidour, 2023), analyzing ECGs (Zama and Schwenker, 2023), speech enhancement (Du et al., 2023), and reinforcement learning (Lu et al., 2023). The Long Range Arena (LRA) is a benchmark of 6 tasks covering diverse problem areas with different modalities. The ListOps task deals with hierarchically structured mathematical operations with delimiters, with 96K training and 2K test examples. The Text task employs the IMDB movie review dataset (Maas et al., 2011); classification is performed at the character level to include additional complexity. The task has a balanced train-test split of size 25K.
The Retrieval task models the textual similarity of two documents, for which the ACL Anthology Network (AAN) dataset (Radev et al., 2013) is utilized, with 147K training and 17K test samples. The Image task comprises grayscale sequential CIFAR-10 image classification, which puts the hurdle of 2D spatial relations into a 1D sequence of pixels. Finally, Pathfinder and Path-X are binary classification tasks containing grayscale images of dotted lines and circles that are either connected or disconnected, introduced in (Linsley et al., 2018). The difference between them is the sequence length, from 1K to 16K; both contain 160K training and 20K test samples. We investigate the Long Range Arena and find that average performance is uncorrelated with performance on any of our malware tasks. While performance between LRA tasks is highly correlated, they all correlate far worse with performance on the malware benchmarks, as shown in Figure 4. In terms of performance, HGConv achieved the second-best text classification score of 88.15% and the overall third-best average accuracy of 81.13% on the LRA benchmark, presented in Table 2. Table 2: LRA benchmark scores. HGConv is from this work, while Hrrformer, S4, and SGConv scores are from their respective papers. All other scores are from (Tay et al., 2020). (Tay et al., 2020) and (Alam et al., 2023a) report that models \u201cdo not learn anything\u201d on Path-X, shown here with a \u2717. We observe this happening with HGConv as well.
Model | ListOps | Text | Retrieval | Image | Pathfinder | Path-X | Average
Random | 10.00 | 50.00 | 50.00 | 10.00 | 50.00 | 50.00 | 36.67
Transformer (Vaswani et al., 2017) | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 | \u2717 | 54.39
Linformer (Wang et al., 2020) | 35.70 | 53.94 | 52.27 | 38.56 | 76.34 | \u2717 | 51.36
Performer (Choromanski et al., 2020) | 18.01 | 65.40 | 53.82 | 42.77 | 77.05 | \u2717 | 51.41
F-Net (Lee-Thorp et al., 2021) | 35.33 | 65.11 | 59.61 | 38.67 | 77.80 | \u2717 | 55.30
Luna-256 (Ma et al., 2021) | 37.25 | 64.57 | 79.29 | 47.38 | 77.72 | \u2717 | 61.24
H-Transformer (Zhu and Soricut, 2021) | 49.53 | 78.69 | 63.99 | 46.05 | 68.78 | \u2717 | 61.41
Hrrformer (Alam et al., 2023a) | 39.98 | 65.38 | 76.15 | 50.45 | 72.17 | \u2717 | 60.83
S4 (Gu et al., 2021) | 59.60 | 86.82 | 90.90 | 88.65 | 94.20 | 96.35 | 86.09
SGConv (Li et al., 2022) | 61.45 | 89.20 | 91.11 | 87.97 | 95.46 | 97.83 | 87.17
HGConv | 49.75 | 88.15 | 90.62 | 85.08 | 92.04 | \u2717 | 81.13
Table 3: Rank order performance of models on each benchmark. For EMBER we use sequences of length up to 2^14, as that is the maximum size found in the LRA benchmark.
Model | K. Raw | K. Asm | D. Apk | D. Tar | Ember (2^14) | LRA
Transformer (Vaswani et al., 2017) | 9 | 8 | 8 | 8 | 9 | 8
Performer (Choromanski et al., 2020) | 3 | 7 | 4 | 5 | 7 | 9
F-Net (Lee-Thorp et al., 2021) | 3 | 6 | 6 | 7 | 5 | 7
Luna-256 (Ma et al., 2021) | 8 | 9 | 9 | 9 | 8 | 5
H-Transformer (Zhu and Soricut, 2021) | 7 | 4 | 4 | 2 | 3 | 4
Hrrformer (Alam et al., 2023a) | 6 | 2 | 7 | 4 | 2 | 6
S4 (Gu et al., 2021) | 2 | 2 | 2 | 2 | 4 | 2
SGConv (Li et al., 2022) | 3 | 4 | 3 | 6 | 5 | 1
HGConv (ours) | 1 | 1 | 1 | 1 | 1 | 3
When comparing the rank order of model performance on LRA to our malware tasks, we see that LRA scores are not very predictive of performance on the malware benchmarks, as shown in Table 3. LRA rates S4 and SGConv well ahead of the other models, while their performance is far less outstanding on the malware benchmarks. SGConv in particular has a median ranking of fourth on our malware benchmarks, despite being the clear best model on LRA. The actual best malware model, HGConv, only beats S4 or SGConv on one of the six LRA tasks.
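One way to quantify the rank disagreement behind Table 3 is a rank correlation between two orderings; the sketch below applies SciPy's Spearman correlation to the Table 3 ranks, though whether the authors used this exact statistic is our assumption.

```python
# Hedged sketch of a rank-agreement check on the Table 3 ranks.
from scipy.stats import spearmanr

# Rank orders copied from Table 3 (Transformer, Performer, F-Net, Luna-256,
# H-Transformer, Hrrformer, S4, SGConv, HGConv).
lra_rank        = [8, 9, 7, 5, 4, 6, 2, 1, 3]
kaggle_raw_rank = [9, 3, 3, 8, 7, 6, 2, 3, 1]

rho, p = spearmanr(lra_rank, kaggle_raw_rank)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")  # weak agreement: LRA is a poor proxy
```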
5" + }, + { + "url": "http://arxiv.org/abs/2312.15310v1", + "title": "Towards Generalization in Subitizing with Neuro-Symbolic Loss using Holographic Reduced Representations", + "abstract": "While deep learning has enjoyed significant success in computer vision tasks\nover the past decade, many shortcomings still exist from a Cognitive Science\n(CogSci) perspective. In particular, the ability to subitize, i.e., quickly and\naccurately identify the small (less than 6) count of items, is not well learned\nby current Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs)\nwhen using a standard cross-entropy (CE) loss. In this paper, we demonstrate\nthat adapting tools used in CogSci research can improve the subitizing\ngeneralization of CNNs and ViTs by developing an alternative loss function\nusing Holographic Reduced Representations (HRRs). We investigate how this\nneuro-symbolic approach to learning affects the subitizing capability of CNNs\nand ViTs, and so we focus on specially crafted problems that isolate\ngeneralization to specific aspects of subitizing. Via saliency maps and\nout-of-distribution performance, we are able to empirically observe that the\nproposed HRR loss improves subitizing generalization though it does not\ncompletely solve the problem. In addition, we find that ViTs perform\nconsiderably worse compared to CNNs in most respects on subitizing, except on\none axis where an HRR-based loss provides improvement.", + "authors": "Mohammad Mahmudul Alam, Edward Raff, Tim Oates", + "published": "2023-12-23", + "updated": "2023-12-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "q-bio.NC" + ], + "main_content": "Introduction Subitizing, also referred to as numerosity, is the ability to recognize small counts nearly instantaneously (Kaufman et al. 1949), allowing for fast, accurate, and confident identification of an object\u2019s count in limited space. The ability to recognize drops quickly after four items (Saltzman and Garner 1948). Subitizing is a cognitive function distinct from explicit counting (Trick and Pylyshyn 1994), and recent work has shown that Convolutional Neural networks (CNNs) fail to subitize on simple MNIST-like tasks (Wu, Zhang, and Shu 2019). The failure is astonishing because a simple, hard-coded convolutional kernel is capable of perfectly solving the subitizing tasks (Wu, Zhang, and Shu 2019). This means a CNN captures the hypothesis space of a valid solution, so it is unclear what component is unable to reach this target goal. Seemingly there are two options: the need for better Preprint. Accepted in 38th Annual AAAI Workshop on NeuroSymbolic Learning and Reasoning in the Era of Large Language Models (NuCLeaR), 2024. optimization strategies, or alternative loss functions. While a different loss function may sound implausible when using cross-entropy (CE) on a simple, clean dataset, we explore changing the loss function as the strategy in this work. The goal of this work is to investigate how a neurosymbolic approach affects the generalization of subitizing in a CNN, but not to solve the problem. We devise a prediction and loss strategy built from the Holographic Reduced Representations (HRRs) (Plate 1995) which has a long successful history of its use in Cognitive Science (CogSci) research. The proposed loss function is applied to the same set of experiments as proposed by (Wu, Zhang, and Shu 2019) where a CNN failed to subitize. 
Our results indicate an improvement in generalization on most of the tasks under consideration, but they are not yet a complete answer to the subitizing task. Favorably, the errors in generalization with our approach are more congruent with the expectation that performance will decrease after 5 objects are present, though the accuracy is still lower than human performance. Moreover, the same set of experiments is performed on a Vision Transformer (ViT) (Dosovitskiy et al. 2020), where the proposed loss function demonstrates improvement in generalization over CE loss, and the results are more in accordance with subitization expectations as well. In summary, our contributions are: 1) An adaptation of the HRR into a loss function for classification. 2) An empirical evaluation of the impact on subitizing, and a qualitative evaluation of the cases where subitizing is improved or hindered based on the loss function. Note that improved predictive accuracy is not a goal, and is difficult to deconflate from subitization performance due to background items. In addition, classic object detection methods (e.g., Faster R-CNN (Ren et al. 2015)) are not a proxy for subitizing, because such methods perform explicit object counting, whereas subitizing is a task of instantaneous recognition of numerosity, not a sequential process of identification and counting. The remainder of the paper is organized as follows. First, different types of vector symbolic architectures, related works, and our motivation for using HRRs are covered. Next, a brief overview of HRRs is provided and the methodology of the proposed HRR loss function is described. Afterward, all the experiments and the corresponding results are described. Finally, concluding remarks, limitations, and future work are presented. Related Work Vector Symbolic Architectures (VSA) have been researched since seminal work by (Smolensky 1990), who made an ever-green argument for their use. In short, VSAs provide a foundation for combining the benefits of connectionist architectures (robustness to deviations in input, and learning) with the benefits of symbolic AI (reasoning, logical inference). This is made possible by defining a system in which arbitrary concepts are assigned to specific vectors, and a set of binding and unbinding operations are defined, which associate or disassociate two vectors, respectively (Schlegel, Neubert, and Protzel 2021). Most VSAs use a fixed feature space for their representation, and thus necessarily introduce noise as more items are bound/unbound. Barring this noise, they can symbolically manipulate the concepts associated with the original vectors. Many such VSAs exist today (Gosmann and Eliasmith 2019; Gayler 1998; Gallant and Okaywe 2013; Kanerva 1996). For example, given vectors representing running, sleeping, cat, and dog, one can compose a vector x = bind(running, cat) + bind(sleeping, dog), and then generally determine which animal was sleeping by computing unbind(x, sleeping) \u2248 dog. While the specifics vary between VSAs, we will use the Holographic Reduced Representation proposed by (Plate 1995), which is both commutative and associative in the binding and unbinding operations and has been used successfully in multiple differentiable applications (Alam et al. 2022, 2023; Saul et al. 2023; Menet et al. 2023). Our motivation for using the HRR is that it may specifically engender better subitizing, an intuition inspired by current literature in CogSci research that leverages the HRR.
The seminal work by (Eliasmith et al. 2012) developed \u201cSpaun\u201d (Choo 2018), a visual-input-based brain model implemented using HRRs and able to perform several cognitive tasks like counting, question answering, rapid variable creation, and others. The HRR has been implemented in a spiking infrastructure (Bekolay et al. 2014) for biological plausibility, but has also shown utility in analogy reasoning (Eliasmith and Thagard 2001) and in solving Raven\u2019s Progressive Matrices (Rasmussen and Eliasmith 2011). Little work has been done investigating subitizing via machine learning. Early work by (Zhang et al. 2015) treated the classification task from a purely ML perspective, looking for enhanced performance. Later work showed that endowing an object segmentation network with the subitizing task improved the saliency of individual object recognition (He et al. 2017; Islam, Kalash, and Bruce 2018). Our work is concerned with the generalization of subitizing in simple images, which a CNN is not able to do, as shown by (Wu, Zhang, and Shu 2019). We use their MNIST-like shape, color, and edge generalization tasks to measure if an HRR-based loss function can improve the generalization of subitizing in simple CNNs (Wu, Zhang, and Shu 2019). This allows us to isolate the problem to just subitization, and show that the HRR loss does improve results for most generalization tasks. Due to the severe deficiency of modern CNNs in subitizing simple images, we consider many possible related tasks out of scope in our study. This includes prior work in other visual aspects like foveation (Kaplanyan et al. 2019) and visual reasoning (Nie et al. 2020), which intersect machine learning and CogSci. Our goal is only to study how a tool in CogSci modeling, the HRR, impacts CNNs\u2019 robustness to the cognitive task of subitizing. Because CNNs cannot yet perform the task at human levels, we also consider matching human reaction times and performance to be matters for future work. Methodology Background Before diving into the construction of our loss function, we first review the details of the HRR. HRRs are a type of VSA that represents compositional structure using circular convolution in distributed representations (Plate 1995). Given vectors xi and yi in a d-dimensional space Rd, Plate (1995) used circular convolution to define a binding operation between these two vectors sampled from a Normal distribution. This can be specified succinctly using the Fourier transform F(\u00b7) and its inverse F\u22121(\u00b7). Specifically, the resulting vector B \u2208 Rd of binding xi and yi is given by B = xi \u229b yi = F\u22121(F(xi) \u2299 F(yi)), where \u2299 indicates element-wise multiplication. Here we use the symbol \u229b to denote the binding operation. The retrieval of bound components is referred to as unbinding. A vector can be retrieved by constructing an inverse function \u2020 : Rd \u2192 Rd so that it complies with the identity F(zi\u2020) \u00b7 F(zi) = \u20d71, where zi\u2020 is the inverse of the vector zi, given by zi\u2020 = F\u22121(1/F(zi)). To unbind xi from B, we circularly convolve with its inverse: B \u229b xi\u2020 \u2248 yi. The necessary condition for these operations to behave as expected is an initialization procedure. As originally proposed by (Plate 1995), each vector is sampled from a Normal distribution as zi \u223c N(0, 1/d). This sampling means that, in expectation, the above binding and unbinding steps will work for random pairs of vectors. However, the inversion operation is numerically unstable, and originally a pseudo-inverse was proposed that traded a large numerical error for a smaller approximation error.
However, the inversion operation is numerically unstable, and originally a pseudoinverse was proposed that traded a large numerical error for a smaller approximation error. However, more recently (Ganesan et al. 2021) proposed a projection operation \u03c0(\u00b7) to enforce that the inverse will be numerically stable, and exactly equal to the faster pseudo-inverse of (Plate 1995). This is done by a projection \u03c0(\u00b7) onto the ball of complex unit magnitude, \u03c0(zi) = F\u22121 ( F(zi)/|F(zi)| ). We make use of this projection step to initialize the vectors in our work. HRR Loss Function In this paper, experiments are performed using both CNN and ViT models that take an image as input and predict the number of objects present in that image. To train such models, a standard softmax cross entropy (CE) loss can approximate the one-hot representation of the associated class/count. In our approach, we have taken a different strategy to devise the HRR loss function. We re-interpret the logits of CNN and ViT as an HRR vector instead of approximating a one-hot encoding. We then convert the logits to a class prediction by associating each class with its own unique HRR \fvector. To keep the comparison with CE loss fair, our HRR loss will maintain a classification style design in which each class corresponds to a distinct count of objects1. The idea here is to represent each class with a unique key-value (K \u2212V) pair identifier. Each K and V is uniquely sampled from normal distribution with projection \u03c0(N(0, IH \u00b7 H\u22121)) where H is the feature size. We use the concept of binding and unbinding operations of HRRs and the network will predict the linked key-value pair, i.e., the bound term. Therefore, if the unbinding operation is performed using the key kn \u2208K = {k1, k2, \u00b7 \u00b7 \u00b7 , kC} where C is the number of classes, the associated value vector vn \u2208V = {v1, v2, \u00b7 \u00b7 \u00b7 , vC} is expected to be the output, K, V \u2208R1\u00d7C\u00d7H. Let a network F predict bound vector \u02c6 Y \u2208RB\u00d71\u00d7H of feature size H with tanh activation function in the final layer for input X of batch size B. The choice of tanh activation is intentional to keep the output in the range of [\u22121, 1] as K V will remain in this range. This is due to sampling from a normal distribution with mean zero and standard deviation 1/ \u221a H. 99.98% of the data will be in the following range \u22124/ \u221a H < kn, vn < 4/ \u221a H (4\u03c3 rule where \u03c3 is the standard deviation). Therefore, it is safe to assume that the extremum of kn vn would be \u2264|4 \u221a 2/ \u221a H|. Choosing a sufficiently large value of {H : H \u226b32} would keep the value of Y = K V in the [\u22121, 1] range. To make sure that the network predicts the linked keyvalue pair associated with the input class of the image, the loss function is defined by Equation 1, where \u02c6 yi \u2208\u02c6 Y = tanh(F(\u00b7)) is the network\u2019s output. Equation 1 is sufficient for training the network, but we still need an explicit prediction for evaluation. To get the associated class label from the network output, we apply the K vectors of all the C classes to the \u02c6 Y which will return the estimation of value vectors \u02c6 V = K \u02c6 Y \u2208RB\u00d7C\u00d7H. \u02c6 V contains the values for all the C classes, however, the value for the associated input would be the most similar to the ground truth value after training. 
Accordingly, the cosine similarity score S is calculated as given in Equation 2, and the arg max of S is the predicted class/count output associated with the input image. L = \u03a3_{i=1}^{B} \u2225ki \u229b vi \u2212 \u02c6yi\u22252 (1) S = (\u03a3_{i=1}^{H} Vi \u00b7 \u02c6Vi) / (\u2225V\u22252 \u2225\u02c6V\u22252) \u2208 R^{B\u00d7C} (2) Experiments and Results Wu, Zhang, and Shu (2019) examined the cognitive potential of a CNN in numerosity using four experiments. Numerosity is perhaps the simplest innate cognitive computing task that a child can do. Disappointingly, the key finding of that work is that a CNN trained with CE loss fails at the subitizing tasks. In this paper, we re-do the same experiments using the same CNN to show how our proposed HRR loss function, where each class is represented using a unique key-value pair, improves the CNN\u2019s numerosity performance. Humans have a good sense of small numbers and can recognize the number of objects in a scene up to 4 items without counting them explicitly (Nieder and Miller 2003; Piazza et al. 2004; Tokita and Ishiguchi 2010). This ability is independent of the type, shape, and color of the object. For example, if a child learns to subitize or count circles, that same skill is utilized to subitize or count squares, even though circles and squares have different shapes. Nevertheless, current methods of training CNNs on subitizing perform poorly in comparison to humans. In the following experiments, we discuss how the basic skills of numerosity are lacking in CNNs and how the proposed loss helps to build a numerical sense. In all these experiments, the same CNN and dataset are used as in (Wu, Zhang, and Shu 2019). In addition, a ViT network is used in the same set of experiments. However, we modify the final layer of the networks for the HRR loss: instead of predicting logits with a softmax activation, each network predicts features of size H = 64 with a tanh activation function. The networks are trained using the Numerosity database, which has a total of 6000 training images of dimension 100 \u00d7 100 with a varying number of circles from 1 to 6. The test dataset contains 7 variations (described below) of the training images, and each variation of the test split contains 6000 images. (Training and test images are not publicly available; we obtained access to the dataset in correspondence with (Wu, Zhang, and Shu 2019).) Figure 1: Sample training images of classes 1 to 6, shown in (a) to (f) with n = 1 to n = 6 circles, used to train the network for the first four experiments. The task is to predict the number of objects in an image. The generalization is tested using five different test sets in four groups that alter the size, shape, color, and infilling of the objects to make the task more difficult. The training set contains images of white circles on a black background. They are made such that the number of circles is independent of the total area of the circles, to avoid any possible information leakage that could be used to \u201ccheat\u201d and obtain predictions without learning to actually subitize. The maximum number of circles, i.e., the total number of classes, is C = 6. A sample image of each class is given in Figure 1. For ViTs, images are divided into 10 \u00d7 10 patches.
For each patch, a feature of size 256 is used. In multi-head attention, 4 heads are used, and the encoder block is repeated 6 times. Both networks are trained by optimizing the HRR loss function in Equation 1 for a total of 300 epochs on a single RTX 2070 Super 8GB GPU. The dropout rate is set to 0.1, and the initial learning rate is set to 10\u22123 for the first 100 epochs, lowered to 10\u22124 and then 10\u22125 for each subsequent 100 epochs. Framing the task in terms of classification presents challenges when interpreting the results. There are cases where the network consistently over-predicts the true number of items in an image (i.e., says \u201c4\u201d instead of \u201c3\u201d). This causes cases of false success, in that the accuracy of predicting the target of \u201c6\u201d is near 100% not because the network has successfully subitized, but because the network cannot over-predict beyond 6, and through this limit falsely appears to perform well. This situation is common, and we identify such cases with italics to avoid incorrectly bringing the reader\u2019s attention to what is actually a failure, while simultaneously indicating the nature of the result. This also occurs with consistent under-counting and the \u201c1\u201d target class, but it is less prevalent in the results. With this caveat, we describe the set of experiments that were performed and their results. In the following subsections, the subitizing ability of a CNN and a ViT is tested and compared using both CE and HRR loss. We also show saliency maps (Simonyan, Vedaldi, and Zisserman 2013) for each example test image. The saliency maps allow us to better understand why the HRR approach improves subitizing in the majority of cases over CE loss. The general result is that the standard cross-entropy loss has spurious attention placed on non-informative regions of the image. The HRR approach is not immune to this, especially since the network between approaches is the same, but it is noteworthy how significant the difference is. Experiment of Object Sizes The networks are originally trained using the images of circles shown in Figure 1 and classify all the training images with 100% accuracy. In this experiment, we test the performance of the network with test images of circles where the size of the circles is made 50% larger than in the original training images. Apart from that, all other parameters such as color and shape are kept the same. The sample images of circles with a bigger radius are illustrated in Figure 2. Results of this experiment are presented in the \u201850% Larger\u2019 column of Table 1 and Table 2 for the CNN and ViT, respectively. Although varying object size does not cause the CE network\u2019s accuracy to fall significantly for classes 1 to 4, for classes 5 and 6 of the CNN, and for class 5 of the ViT, accuracy falls considerably. On the other hand, HRR loss can classify all the images with over 80% accuracy using the CNN and over 50% accuracy using the ViT for all the classes. It is interesting to note that the accuracy follows the subitizing pattern, i.e., as the number of circles in the image increases, the probability of correctly recognizing them decreases. Figure 2 shows the saliency maps of both HRR and CE loss for the CNN. HRR loss puts more restricted attention on the boundary regions, whereas attention in the case of the CE loss spreads out broadly.
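For reference, the training objective and readout used in these experiments (Equations 1 and 2) can be sketched in PyTorch as below; H = 64 and C = 6 follow the text, while everything else (shapes, and the omission of the projection-based initialization) is an illustrative simplification, not the authors' code.

```python
# Minimal PyTorch sketch of the HRR loss (Eq. 1) and cosine readout (Eq. 2).
import torch

def bind(x, y):
    return torch.fft.irfft(torch.fft.rfft(x) * torch.fft.rfft(y), n=x.shape[-1])

def inverse(y):
    return torch.fft.irfft(1.0 / torch.fft.rfft(y), n=y.shape[-1])

H, C = 64, 6
keys = torch.randn(C, H) / H**0.5   # one key per class
vals = torch.randn(C, H) / H**0.5   # one value per class

def hrr_loss(y_hat, labels):        # y_hat: (B, H), tanh output of the network
    target = bind(keys[labels], vals[labels])
    return ((target - y_hat) ** 2).sum(-1).sqrt().mean()   # L2 distance, Eq. 1

def predict(y_hat):
    v_hat = bind(inverse(keys)[None], y_hat[:, None])          # (B, C, H), unbind keys
    sims = torch.cosine_similarity(v_hat, vals[None], dim=-1)  # (B, C), Eq. 2
    return sims.argmax(-1)                                     # predicted count
```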
Figure 2: Sample images of experiment 1 (n = 1 to n = 6), where the radius of the circles is 50% greater than in the training images, are shown in (a). Saliency maps of the experiment 1 images for both HRR and CE loss are shown in (b) and (c), respectively. HRR puts more attention toward the boundary regions, whereas the network trained with the CE loss function puts attention on both the inside and outside of circles along with the boundary regions. Experiment of Object Shapes In this experiment, the networks are tested by replacing the circles with other shapes, such as white equilateral triangles and squares on a black background, illustrated in Figure 3. Results of this experiment are presented in the \u2018Triangles\u2019 and \u2018Squares\u2019 columns of Table 1 and Table 2. When only changing the shape of the object to triangles, the accuracy of the CE CNN drops below 50% for all classes except class 6, with an average accuracy of 45.17%, revealing poor generalization. In the case of the images of squares, the network performs comparably well, with an increase in average accuracy to 75.68%. By contrast, due to using the HRR loss and a key-value-based transformation layer, the accuracy of the same network is over 50% for images of triangles and over 80% for images of squares for all the classes. The average accuracy for triangles and squares is 75.7% and 77.0%, respectively. In the case of the ViT, the performance of the HRR and CE losses is similar. For images of triangles, the HRR loss average accuracy is 55.33%, slightly lagging behind the CE loss accuracy of 56.0%, whereas for images of squares, the HRR loss average accuracy is 65.66%, slightly lagging behind the CE loss accuracy of 66.0%. The saliency maps for both HRR and CE loss for the CNN are presented in Figure 3. Consistently, the HRR loss puts strict focus on the edges of the objects, whereas the CE loss spreads attention throughout the image. Figure 3: Sample images of experiment 2, where circles of classes 1 to 6 are replaced by triangles and squares, are shown in (a). Filters that rely explicitly on the curvature of a circle will perform poorly on this task, which is evident in the CE approach\u2019s lower accuracy. Saliency maps of the experiment 2 images are shown in (b) for HRR loss and (c) for CE loss. HRR\u2019s attention is concentrated on the informative regions, i.e., boundary regions, whereas attention is more diffuse in the case of CE. Experiment of Object Colors The object\u2019s color in the test images is swapped in this experiment. The images contain newly generated synthetic circles of the same size as the training set circles, but the test circles are black on a white background. The results of this experiment are in the \u2018Color Swap\u2019 column of Table 1 and Table 2. Figure 4 shows the example images used in this experiment along with the saliency maps.
         50% Larger      Triangles       Squares         Color Swap      White Rings
Target   HRR    CE       HRR    CE       HRR    CE       HRR    CE       HRR    CE
1        1.000  1.000    0.997  0.327    1.000  0.876    0.093  0.160    0.033  0.004
2        0.920  0.997    0.787  0.441    0.914  0.811    0.228  0.340    0.007  0.002
3        0.967  0.990    0.715  0.361    0.964  0.641    0.388  0.680    0.000  0.010
4        0.953  0.959    0.541  0.287    0.944  0.686    0.370  0.670    0.003  0.096
5        0.904  0.672    0.619  0.364    0.900  0.549    0.251  0.420    0.019  0.194
6        0.815  0.549    0.883  0.930    0.888  0.978    0.122  0.250    1.000  0.989

Table 1: Results of the CNN, where bold marks the best result unless it is due to consistent over/under counting at the boundary. No result is marked “best” when performance is worse than random guessing (<=16.7%) or similar. The HRR approach generalizes better for the first three tasks (or is closely behind) but degrades on the color swap task. Both methods fail on the last test.

         50% Larger      Triangles       Squares         Color Swap      White Rings
Target   HRR    CE       HRR    CE       HRR    CE       HRR    CE       HRR    CE
1        1.000  1.000    0.637  0.681    0.942  0.977    0.020  0.000    0.632  0.053
2        0.932  0.981    0.595  0.662    0.731  0.798    0.026  0.001    0.616  0.113
3        0.920  0.923    0.488  0.470    0.553  0.567    0.062  0.001    0.467  0.187
4        0.780  0.785    0.356  0.331    0.393  0.340    0.094  0.005    0.366  0.331
5        0.555  0.372    0.431  0.312    0.401  0.276    0.283  0.024    0.267  0.382
6        0.990  0.995    0.813  0.906    0.948  0.968    0.822  0.995    0.269  0.704

Table 2: Results of the ViT, where bold marks the best result unless it is due to consistent over/under counting at the boundary. No result is marked “best” when the performance of both methods is comparable. The HRR approach generalizes better, or is closely behind, across all tasks with the ViT. In the color swap task, performance degrades for both, but HRR yields better generalization.

From the figure, it is obvious that the changes in the test images are immense compared to the training images from a network's perspective. From a human perspective, this is quite an easy task to generalize after learning from the training images. Both methods also fail the subitizing test: a human can count a lower number of objects with less effort than a higher number. Nevertheless, the CE classification approach achieved 16% accuracy for class 1 and 25% for class 6. Likewise, the HRR-based method achieved 9.3% for class 1 and 12.2% for class 6. However, in the case of the ViT, while the performance using both losses degrades and degenerates, the HRR loss shows better generalization compared to the CE approach. Experiment of Region-Boundary Duality Differentiating objects from their boundary representation is vital to recognition (Marr 2010). Humans can easily identify, separate, and count objects given just their boundaries. To examine the network's ability to generalize across the region-boundary duality, the network is tested using images of white circle rings on a black background. Examples of these test images along with saliency maps are presented in Figure 5, and the results are in the “White Rings” columns of Table 1 and Table 2. Recall that the network is originally trained on the images in Figure 1.

Figure 4: Sample images of experiment 3, where the circle and background colors are swapped in the test images, are shown in (a). Saliency maps of the HRR and CE loss are shown in (b) and (c), respectively. The attention of the network is more focused on the boundary region in the case of HRR.
From the network's perspective, the rings of white circles are completely new images. As a result, both the CE classification approach with softmax activation and the HRR classification approach with the key-value transformation layer degrade in performance. In the case of the CNN, we see degeneracy for both CE and HRR losses except for class 6, where both methods over-count and achieve 98.9% and 100% accuracy, respectively. This is peculiar from the subitizing point of view, because the accuracy for the class with a single ring is 0.4% and 3.3% for each approach, respectively. However, in the case of the ViT, we see the effectiveness of the HRR loss over the CE loss for classes 1 to 4 by a big margin, ranging from 4% to 58%. For classes 5 and 6, the HRR loss remains consistent with the subitizing pattern, with lower accuracy than the CE loss, but for class 6 the CE loss over-counts. In conclusion, the CNN lacks the ability to generalize across the region-boundary duality and fails on this more complex subitizing task. On the other hand, the ViT with HRR loss shows robust generalization on this complex subitizing task. Boundary Representation Tests Experiments 1 to 4 demonstrate the CNN's lack of generalization in learning. To improve the abstraction ability of CNNs, Wu et al. (Wu, Zhang, and Shu 2019) suggested learning from the boundary representation of objects. Instead of learning from single-shaped images, each class is built with different-shaped polygons with n sides. This should eliminate the shape bias in test results. The size will be altered to allow isolation of generalization to fundamental subitizing ability rather than the re-use of shape patterns. Moreover, each object is represented by its boundary, which bridges the representation of the black object on a white background and the white object on a black background. Figure 6 illustrates sample images of objects of different shapes and sizes with the boundary representation. The network is re-trained using 80% of the images of Figure 6, and the remaining 20% of the images is used for testing. The accuracy on an in-distribution test set is shown in Table 3. While the CE loss appears to obtain better training accuracy, the goal of this study is the generalization of subitizing ability. As such, the results in Table 3 are more interesting because the in-distribution results seem to imply that the HRR loss is worse, but we will see that it has a meaningful impact on generalization. This nuance would be difficult to identify in standard computer vision datasets.

Figure 5: Sample images of experiment 4, where the circles are represented by their boundary edges, are shown in (a). This is the most challenging generalization task, as it changes the ratio of white and black pixels. Saliency maps for object region-boundary duality are shown in (b) and (c) for HRR and CE, respectively.

Figure 6: Sample images of the boundary representation of the various shaped objects are shown in (a). In all cases with the CE loss shown in (c), we see spurious attention placed on empty regions of the input, generally increasing in magnitude with more items. By contrast, the HRR loss shown in (b) keeps activations focused on the actual object edges and appears to suffer only for large n, when objects are placed too close together.
To inspect how much generalization is achieved by training the network with images of object boundaries, the test images are scaled up and down by 50%. Next, we examine how the boundary representation helps toward generalization. Intriguingly, the CE method does not follow the expected subitizing degradation pattern, though our HRR approach is closer to achieving it for the 50% larger case. Table 4 reveals how the results deteriorate by only changing the scale of the object. However, in the case of scaling up, both methods show solid evidence of human-like subitizing, i.e., the accuracy decreases as the number of objects in the image increases. The proposed HRR loss approach achieved an average accuracy of 49%, whereas the CE approach achieved an average accuracy of 45.6%; however, the CE's performance is inflated in the sense that it has a higher training accuracy and drops precipitously.

         Boundary Edge Representation
Target   HRR      CE
1        1.000    1.000
2        0.985    1.000
3        0.950    0.970
4        0.855    0.930
5        0.635    0.790
6        0.795    0.920

Table 3: In-distribution results showing the baseline training performance of the HRR- and CE-based loss functions on the edge-map distribution, rather than testing generalization. In practice, while the HRR has a lower training accuracy, it has better generalization.

         50% Larger       50% Smaller
Target   HRR      CE      HRR      CE
1        0.935    0.991   1.000    0.687
2        0.715    0.984   0.005    0.390
3        0.585    0.496   0.005    0.021
4        0.300    0.207   0.000    0.014
5        0.225    0.032   0.000    0.043
6        0.180    0.026   0.000    0.988

Table 4: Generalization results for the boundary edge maps. Bold results are the best unless the result is due to over/under counting at the boundary. No result is marked “best” when worse than random guessing (<=16.7%).

In the case of scaling down, no apparent subitizing pattern is present for either method. The proposed method achieved 100% accuracy for class 1 due to under-counting and failed to generalize for the rest of the classes. Conversely, the CE approach achieved 98.8% accuracy for class 6 due to over-counting and failed to generalize for the rest of the classes. Overall, the boundary representation has helped the network's abstraction ability for subitizing but failed to generalize, especially in the case of scaling down. The saliency maps of the boundary representation test images are presented in Figure 6. In the boundary representation tests, decisions are supposed to be made from the edge/boundary representation. The saliency maps reveal how the HRR loss concentrates the network's attention in the boundary regions, whereas attention is much more diffuse in the case of the CE loss. Moreover, based on observation of the saliency maps of correct and incorrect predictions, the following conclusions (see Appendix for details) are made:

- Even when the CE-based model is correct, its saliency map indicates it uses the inside region of an object and the area around the object/background toward its prediction in almost all cases.
- When the HRR loss-based model is correct, it rarely activates for anything besides the object boundary and does not tend to focus on the inside content of an object.
- When the HRR-based model is correct, the edges of the objects in the saliency map are usually nearly complete, and large noisy activations can be observed surrounding the boundary regions.
- When the CE-based model is incorrect, it often has two objects that are nearby each other.
When this happens, the CE saliency map tends to produce especially large activations between the objects, creating an artificial “bridge” between the two objects.
- When the HRR-based loss is incorrect, it tends to have a saliency map that either 1) activates on the inside content of the object, or 2) has large broken/incomplete edges detected for the object." }, { "url": "http://arxiv.org/abs/2312.01242v1", "title": "DDxT: Deep Generative Transformer Models for Differential Diagnosis", "abstract": "Differential Diagnosis (DDx) is the process of identifying the most likely\nmedical condition among the possible pathologies through the process of\nelimination based on evidence. An automated process that narrows a large set of\npathologies down to the most likely pathologies will be of great importance.\nThe primary prior works have relied on the Reinforcement Learning (RL) paradigm\nunder the intuition that it aligns better with how physicians perform DDx. In\nthis paper, we show that a generative approach trained with simpler supervised\nand self-supervised learning signals can achieve superior results on the\ncurrent benchmark. The proposed Transformer-based generative network, named\nDDxT, autoregressively produces a set of possible pathologies, i.e., DDx, and\npredicts the actual pathology using a neural network. Experiments are performed\nusing the DDXPlus dataset. In the case of DDx, the proposed network has\nachieved a mean accuracy of 99.82% and a mean F1 score of 0.9472. Additionally,\nmean accuracy reaches 99.98% with a mean F1 score of 0.9949 while predicting\nground truth pathology. The proposed DDxT outperformed the previous RL-based\napproaches by a big margin. Overall, the automated Transformer-based DDx\ngenerative model has the potential to become a useful tool for a physician in\ntimes of urgency.", "authors": "Mohammad Mahmudul Alam, Edward Raff, Tim Oates, Cynthia Matuszek", "published": "2023-12-02", "updated": "2023-12-02", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI" ], "main_content": "Introduction Differential Diagnosis (DDx) refers to the process of systematically identifying a disease from a possible set of pathologies through the process of elimination based on a patient's medical history and physical examinations [5]. During a clinical process, a doctor asks several questions about the patient's symptoms and antecedents (medical history). Based on the responses, possible differential diagnoses are narrowed down. If there is uncertainty about the underlying condition, then a medical examination is performed or additional tests are suggested. Given a patient's information and symptoms, an automated system that narrows down the possible pathologies, if not identifying the exact one, will be of great benefit. In particular, such improvements could help lower-performing doctors or those in under-resourced communities obtain better diagnostic outcomes [25]. Moreover, in times of emergency, an automated system that has access to the patient's medical history and current conditions will be quite valuable. In recent years, automated diagnosis systems using machine learning have been increasingly developed [30, 17, 6, 12]. (Preprint. Accepted at the 1st Workshop on Deep Generative Models for Health at NeurIPS 2023.)
Existing works have demonstrated the potential of such automated systems in performing complete blood count (CBC) tests [1], syndrome detection [15], coronavirus, heart disease, and diabetes detection [14], and more. Previous work such as Diaformer [4] also demonstrated success in automated diagnosis using a sequence of explicit and implicit symptoms of a disease. But what is lacking are the details of the symptoms, the patient's previous medical history, and relevant information such as age and gender. In a DDx process, a doctor would consider all of this information. In this paper, an automated DDx system named DDxT is proposed using a Transformer [28]; it takes a sequence of all the patient's necessary information as input, performs DDx by autoregressively generating a set of most likely pathologies, and finally predicts the ground truth pathology using a neural network. This sequence of patient information contains age, gender, medical history, and evidence, i.e., symptoms. The Transformer architecture is employed since it is currently state-of-the-art for sequence generation [3]. Asking questions of a patient and acquiring information can easily be done through an automated system. The challenging part is to make an intelligent decision based on the acquired information, which is what this paper addresses. This will be beneficial not only in times of emergency but also as an assistive tool for the doctor during the diagnosis process. 2 Related Works Recent works have demonstrated the feasibility of machine learning-based automated diagnosis systems. Such work is presented by [10], where a Transformer-based model is utilized for differential diagnosis. To perform the task, multi-modal magnetic resonance imaging (MRI) is utilized, where a sequence of brain and spinal cord MRIs is processed by the Transformer. Their model performed considerably better than previous approaches; however, the work is limited to the diagnosis of demyelinating diseases. Likewise, [24] presented an ensemble approach for automated diagnosis. Their approach involves multiple deep learning-based methods, where the final prediction is the ensemble of all the predictions. On the other hand, [1] proposed an automated complete blood count (CBC) test system, a very common test in medical diagnosis. Their approach employed the YOLO [22] object detection algorithm for blood cell detection. An image-based classifier is a powerful tool for automated diagnosis. Such a system is presented by [20], where convolutional neural networks (CNN) are utilized to diagnose Coronary Artery Disease (CAD) using Myocardial Perfusion Imaging (MPI). Their system utilizes and compares performance on the pre-trained VGG-16 [23] and DenseNet-121 [11] architectures. In the same fashion, [16] developed an automated classification system for fungal keratitis. Their system uses the ResNet [8] architecture with fungal hyphae images for binary classification of fungal keratitis. Similarly, [17, 18] both employed EfficientNet [26], for the detection of COVID-19 using X-ray images and for Malaria classification from blood smear images, respectively. In a slightly different manner, [21] adopts the vision transformer-based Swin-UNETR [7] model for automatic retinal lesion segmentation from spectral-domain optical coherence tomography (SD-OCT) images. The rest of the paper is organized as follows. Section 3 covers the proposed method, including a description of the dataset, network, and training procedure.
Next, Section 4 highlights the results and compares the proposed method to the RL agent-based methods. Finally, we conclude in Section 5 with a discussion of the limitations of our approach. 3 Proposed Method In this paper, differential diagnosis is performed using a generative Transformer, which takes a sequence of patient information as input and predicts a sequence of most likely pathologies as the differential diagnosis; finally, the most likely pathology is predicted using a classifier. In the following subsections, a brief description of the dataset, the proposed network architecture, and the training process is provided. 3.1 Dataset For differential diagnosis, along with evidence, i.e., symptoms, a patient's antecedents (medical history) and personal details such as age and sex are necessary information. DDXPlus [27] is such a dataset: it contains 1.3M synthetically generated patient records, where each sample contains patient details, evidence, ground truth differential diagnoses, and the ground truth condition. The dataset has a total of 49 pathologies that cover various age groups, sexes, and patients with a broad spectrum of medical histories. We note that our work assumes the fidelity of the data, since obtaining diagnostic data and medical history from patients comes with high expense, legal hurdles, ethics review, and a slow collection rate [19]. Such challenges are beyond the scope of our study. The dataset is preprocessed so that it can be processed by the Transformer. Each patient's information in the dataset contains age, sex, initial evidence, evidence (symptoms), ground truth differential diagnosis, and ground truth pathology. The age is categorized into 8 groups in the following way: [less than 1), [1-4], [5-14], [15-29], [30-44], [45-59], [60-74], and [above 75]. Sex is represented by M for male and F for female. The initial and remaining evidence were acquired by back-and-forth questioning with a patient. The differential diagnosis contains a set of likely pathologies with a probability score for each pathology based on the evidence. Therefore, the ground truth DDx output sequence is organized in descending order of the probability score of each pathology, i.e., the order of prediction is significant, and the pathology with a higher probability needs to be predicted first. Finally, the ground truth pathology is what the patient actually has. Special tokens are incorporated to facilitate the learning process. In particular, a beginning-of-sequence token <BOS> indicates the start of a sequence, a separator token <SEP> is used to separate each type of information, and an end-of-sequence token <EOS> indicates the end of a sequence. Since all sequences need to be equal in size, a padding token <PAD> is used in shorter sequences to fill out the sequence up to the maximum length, and longer sequences are truncated. Each patient's information is preprocessed as follows. First, <BOS> is used to initiate a sequence. Next, age, sex, initial evidence, and evidence are all stacked together with <SEP> in between. Finally, the end of the sequence is indicated by the <EOS> token. To cover unknown words in special circumstances, an <UNK> token is included in the vocabulary. A minimal sketch of this preprocessing is given below.
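To make the preprocessing concrete, the following is a minimal sketch of how a patient record could be flattened into a fixed-length token sequence. The token spellings (<BOS>, <SEP>, <EOS>, <PAD>) and the evidence codes in the usage line are illustrative stand-ins, not the dataset's actual strings.

def encode_patient(age_group, sex, initial_evidence, evidences, max_len=80):
    # Flatten one record: <BOS> age <SEP> sex <SEP> initial evidence <SEP> evidences <EOS>
    tokens = ["<BOS>", age_group, "<SEP>", sex, "<SEP>", initial_evidence, "<SEP>"]
    tokens += list(evidences) + ["<EOS>"]
    # Truncate longer sequences; pad shorter ones up to the maximum length.
    tokens = tokens[:max_len]
    return tokens + ["<PAD>"] * (max_len - len(tokens))

# Usage with made-up field values:
seq = encode_patient("[30-44]", "F", "E_91", ["E_53", "E_66", "E_204"])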

3.2 Network Architecture

Figure 1: The block diagram of the proposed deep generative network architecture. The encoder blocks are shadowed with blue and the decoder blocks with green. The classifier section is shadowed with orange. The DDx is the set of n pathologies from <p1> to <pn>, and the classifier predicts the diagnosed pathology.

After preprocessing the dataset, a vocabulary is built using all the unique tokens. The input string is split into words, and using the generated vocabulary, each word is replaced with the associated index of the word in the vocabulary. The encoder vocabulary length is 436 and the decoder vocabulary length is 54 (49 pathologies + 5 special tokens). Next, these integer values are used to gather the associated word embeddings, which are added to positional embeddings so that the order of the words in a sequence is recognized by the network. Similarly, the decoder input tokens are also preprocessed, and word and positional embeddings are applied. The Transformer architecture consists of encoder and decoder blocks. Each of the blocks contains a self-attention mechanism, a brief description of which is provided in Appendix A. The encoder processes the patient's information and feeds the context to the decoder. The decoder is initialized with the <BOS> token as p_0 and iteratively takes the previously generated pathology p_{t-1} as input, using tokens p_0 to p_{t-1} to generate a new possible pathology p_t until it reaches the <EOS> token. Both the encoder and decoder are repeated N (N = 6) times, which helps recognize richer context. The decoder output is the DDx, a sequence of the most likely pathologies. The final layer of the encoder holds the processed context information of the evidence, i.e., symptoms and relevant patient information, and the final layer of the decoder holds the information of all the possible likely pathologies. Therefore, combining both features is quite advantageous in predicting the actual pathology. As a result, global average pooling (GAP) is applied to both the encoder and decoder features, which are concatenated and fed to a classifier. The classifier is a two-layer neural network. The first layer contains the same number of features as the encoder or decoder, and the second layer has the same number of logits as the number of classes in the dataset. Both layers are preceded by layer normalization [2]. In between the layers, GELU activation [9] is used. The classifier predicts the ground truth pathology among the most likely DDx. The full block diagram of the network architecture is presented in Figure 1. 3.3 Training During training, the input size of the encoder and decoder must be fixed. Therefore, the maximum sequence length for the encoder is set to 80, and the maximum sequence length for the decoder is set to 40, by truncating or adding <PAD> tokens. The built vocabulary has 436 unique tokens, thus the vocabulary size for the word embedding is set to 436. For the embedding layers, the feature size is set to 128, and the feature size of the multi-layer perceptron (MLP) of the encoders and decoders is increased 4 times. In the self-attention layers, 4 heads are used, and the encoder and decoder are repeated 6 times. A categorical cross-entropy loss is employed for both the decoder output and the classifier, and the two losses are added together to compute the final loss. To regularize the network, Dropout with a rate of 0.1 and layer normalization are employed. The loss function is optimized using the Adam [13] optimizer and trained for a total of 20 epochs. The initial learning rate is set to 10^-3 with an exponential decay learning rate scheduler with a decay rate of γ = 0.95. 4 Results The proposed network predicts a sequence of the most likely pathologies, i.e., the DDx, and the actual pathology among the DDx.
Both the predicted DDx sequence and the predicted pathology are compared with the ground truth DDx sequence and pathology. The ground truth DDx sequence is organized in descending order of the probability distribution over the most likely pathologies. As a result, the positional embedding plays an important role in maintaining the correct prediction order, leading to better performance. The ground truth DDx sequence is compared with the predicted sequence elementwise, and the mean result is computed. For evaluation, Accuracy, Precision, Recall, and F1 scores are considered. In the following subsections, a comparison of the proposed method with the RL agent-based automated diagnosis methods is performed. Subsequently, the performance of DDx pathology sequence generation and pathology classification is analyzed and discussed. 4.1 Comparison The baseline models that perform automatic diagnosis using the DDXPlus dataset are Reinforcement Learning (RL)-based agents. Adaptive Alignment of Reinforcement Learning and Classification (AARLC), presented by [29], is such a system: it employs an RL-based agent to adaptively acquire the patient's symptoms and subsequently uses a classifier to predict the pathology. The process continues iteratively, thus generating a DDx sequence of pathologies. Similarly, the baseline automatic symptom detector (BASD) [27] utilizes an RL-based agent to gather evidence and an MLP classifier to predict the pathology. Table 1 compares the performance of the proposed DDxT model with the baseline RL agent-based models.

Method   GTPA@1   DDP     DDR     DDF1     GM
AARLC    99.21    69.53   97.73   0.7824   87.68
BASD     97.15    88.34   85.03   0.8369   90.03
DDxT     99.98    94.84   94.65   0.9472   96.45

Table 1: Our DDxT improves Precision and thus F1 score significantly, showing the value of a generative approach to retrieving accurate diagnoses over the prior RL agent-based approaches. The best results for each metric are highlighted in bold.

The comparison is performed in terms of top-1 ground truth pathology accuracy (GTPA@1) and the Precision, Recall, and F1 score of the DDx, denoted DDP, DDR, and DDF1, following the convention of [27]. AARLC gets the highest recall score but a much lower precision score, lower than that of the BASD model, and therefore a lower F1 score. On the other hand, DDxT has a balanced performance in terms of both precision and recall. As a result, it achieved the new highest F1 score of 0.9472 on the DDXPlus dataset. Additionally, the proposed method outperforms the previous approaches in terms of top-1 accuracy. Moreover, the accuracy, precision, and recall are combined by geometric mean (GM), also shown in Table 1, to compare the effectiveness of each method. DDxT achieves the best result of 96.45, a big margin over the previous RL agent-based methods.
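As a quick check of the GM column, the text states that accuracy, precision, and recall are combined by geometric mean; a one-line sketch for the DDxT row, assuming exactly those three columns are combined, is:

gtpa, ddp, ddr = 99.98, 94.84, 94.65  # DDxT row of Table 1
gm = (gtpa * ddp * ddr) ** (1 / 3)
print(gm)  # ~96.46, matching the reported 96.45 up to rounding of the inputs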
Among all the pathologies, the highest F1 score of 0.9946 is achieved for Myasthenia gravis, and the minimum F1 score of 0.8643 is achieved for Pancreatic neoplasm. 4.3 Pathology Classification The proposed network takes the processed features of the encoder and decoder via GAP, concatenates them together, and feeds them into a classifier for the final pathology classification, i.e., given the list of evidence and the set of likely pathologies (DDx), the final classifier predicts the actual pathology among the most likely DDx pathologies. The results of pathology classification are also evaluated in terms of accuracy, precision, recall, and F1 score. Since the classifier has both encoder and decoder information, it shows significant robustness in classification: the network achieved a mean accuracy of 99.98% with a mean F1 score of 0.9949. Additionally, the mean precision and recall scores achieved are 99.61% and 99.44%, respectively. The minimum F1 score, 0.8567, is achieved for Acute rhinosinusitis. The confusion matrix of classification, along with metric scores for all the pathologies, is presented in Appendix C. Some conditions, like Unstable angina, Acute rhinosinusitis, and Chronic rhinosinusitis, obtain lower precision at varying recall rates. These conditions may need to be considered distinctly with respect to the condition's likelihood in a given population, the risk of the condition itself, and other factors, to decide if such conditions are useful to detect in this fashion. Separately, the vast majority of conditions can be detected, and a conservative threshold may be used to increase confidence in deployment while expecting a limited reduction in missed diagnoses." }, { "url": "http://arxiv.org/abs/2305.19534v1", "title": "Recasting Self-Attention with Holographic Reduced Representations", "abstract": "In recent years, self-attention has become the dominant paradigm for sequence\nmodeling in a variety of domains. However, in domains with very long sequence\nlengths the $\\mathcal{O}(T^2)$ memory and $\\mathcal{O}(T^2 H)$ compute costs\ncan make using transformers infeasible. Motivated by problems in malware\ndetection, where sequence lengths of $T \\geq 100,000$ are a roadblock to deep\nlearning, we re-cast self-attention using the neuro-symbolic approach of\nHolographic Reduced Representations (HRR). In doing so we perform the same\nhigh-level strategy of the standard self-attention: a set of queries matching\nagainst a set of keys, and returning a weighted response of the values for each\nkey. Implemented as a ``Hrrformer'' we obtain several benefits including\n$\\mathcal{O}(T H \\log H)$ time complexity, $\\mathcal{O}(T H)$ space complexity,\nand convergence in $10\\times$ fewer epochs. Nevertheless, the Hrrformer\nachieves near state-of-the-art accuracy on LRA benchmarks and we are able to\nlearn with just a single layer. Combined, these benefits make our Hrrformer the\nfirst viable Transformer for such long malware classification sequences and up\nto $280\\times$ faster to train on the Long Range Arena benchmark.
Code is\navailable at\n\\url{https://github.com/NeuromorphicComputationResearchProgram/Hrrformer}", "authors": "Mohammad Mahmudul Alam, Edward Raff, Stella Biderman, Tim Oates, James Holt", "published": "2023-05-31", "updated": "2023-05-31", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "stat.ML" ], "main_content": "Introduction Self-attention has risen to prominence due to the development of transformers (Vaswani et al., 2017) and their recent successes in machine translation, large language modeling, and computer vision applications.

Figure 1. Our primary result, a comparison of our Hrrformer with other self-attention models on the EMBER malware classification dataset. Most prior methods fail early by running Out Of Memory (OOM) or Time (OOT). Hrrformer is presented as a solid line and achieves the best accuracy and scales to longer sequences. The two prior best models according to the Long Range Arena, H-Transformer-1D and Luna-256, are the dashed lines, and do not perform as well as the LRA would have indicated in speed or accuracy. The rest of the models are dotted lines.

The fundamental construction of self-attention includes a triplet of “queries, keys, and values”, where the response is a weighted average over the values based on the query-key interactions. This results in a quadratic memory and computational complexity that has inhibited the use of Transformers by those without significant GPU infrastructure and prevented applications to longer sequences. Ever since, a myriad of approaches has been proposed to approximate the self-attention mechanism, with the vast majority trading some amount of accuracy for speed or memory use. The “market” of self-attention strategies currently offers various trade-offs in the total package of speed, memory use, and accuracy. We test our method in two settings: using the Long Range Arena (LRA) to compare with prior approaches and a
On our malware classification task, we find that the relative accuracies of Transformer models change from the LRA benchmark, but that our Hrrformer still obtains the best accuracy and scales the best with sequence length up to T = 131, 072, as demonstrated in Figure 1. The remainder of our manuscript is organized as follows. Work related to our own, as well as adjacent techniques beyond our study\u2019s scope, is reviewed in section 2. The recasting of attention in our Hrrformer is a simple procedure demonstrated in section 3, which redefines the Attention function using HRR, and multi-headed self-attention then continues as normal. We then demonstrate these benefits in section 4, showing Hrrformer is consistently one of the best methods with respect to accuracy and considerably faster thanks to reduced memory usage, the number of layers, and epochs needed to converge. In section 5 we draw conclusions from out work. 2. Related Works Since the introduction of the Self-Attention mechanism and the transformer architecture, considerable research has occurred to mitigate its computational burdens. Though not explicit in much of the current literature, many of these approaches resemble strategies for improving Support Vector Machines that have similar complexity. This includes projection (Kaban, 2015) to a lower dimension (Wang et al., 2020), finding/creating sparse structure in the correlations (Wang et al., 2014) by (Kitaev et al., 2020; Child et al., 2019; Tay et al., 2020b; Beltagy et al., 2020; Zaheer et al., 2020), using randomized features (Rahimi & Recht, 2007; Sinha & Duchi, 2016) by (Choromanski et al., 2020), factorized or budgeted representations (Si et al., 2016; Wang et al., 2010) by (Xiong et al., 2021; Ma et al., 2021), and creating simplified linear approximations (Wang et al., 2011; Kantchelian et al., 2014) by (Katharopoulos et al., 2020). Other more differentiated approaches include the hierarchical decomposition of the correlations (by (Zhu & Soricut, 2021)), and approaches that replace self-attention entirely with alternative \u201cmixing\u201d strategies (Tay et al., 2020a; LeeThorp et al., 2021). To the best of our knowledge, ours is the first work that attempts to re-create the same logic of self-attention with the HRR. Among these prior methods, we note that F-Net (Lee-Thorp et al., 2021) is the most closely related as both F-Net and HRR rely upon the Fast Fourier Transform (FFT) as a fundamental building block. While F-Net does not approximate self-attention so much as replace it with an alternative \u201cmixing\u201d procedure, we include it due to its relevance in using the FFT. Our results will show significant improvement over F-Net, highlighting the value of a neuro-symbolic approach to reconstructing the same logic as opposed to using the FFT as a generic differentiable mixing strategy. The HRR has seen successful use in cognitive science research (Jones & Mewhort, 2007; Blouw & Eliasmith, 2013; Stewart & Eliasmith, 2014; Blouw et al., 2016; Eliasmith et al., 2012; Singh & Eliasmith, 2006; Bekolay et al., 2014), but comparatively little application in modern deep learning. The symbolic properties have been previously used in knowledge graphs (Nickel et al., 2016) and multi-label classification (Ganesan et al., 2021). There is limited use of HRRs for sequential modeling. 
(Plate, 1992) proposed an HRR-based Recurrent Neural Network (RNN), while other work has used complex numbers inspired by HRRs but not actually used the corresponding operations (Danihelka et al., 2016). An older alternative to the HRR, the Tensor Product Representation (TPR) (Smolensky, 1990), has been used to endow associative memories (Le et al., 2020) and RNNs with enhanced functionality (Huang et al., 2018; Schlag & Schmidhuber, 2018). Compared to these prior works, we are re-casting the logic into HRRs, rather than augmenting the logic. However, we slightly abuse the assumptions of HRRs to make our method work. A strategic design allows us to effectively remove the additionally created noise via the softmax function. In addition, the TPR's complexity is exponential in the number of sequential bindings, making it a poor choice for tackling the scaling problems of self-attention. Other recent approaches to sequential modeling, such as Legendre Memory Units (Voelker et al., 2019), IGLOO (Sourkov, 2018), and State Space Models (Gu et al., 2022; Goel et al., 2022; Gu et al., 2021; 2020), are highly promising. We consider these, along with RNNs, beyond the scope of our work. Our goal is to explore the value of recasting self-attention within the neuro-symbolic framework of HRR; as such, other sequence modeling approaches are out of scope. The need for both less memory and extension to very long sequences is also important in malware detection. Processing malware from raw bytes has been found to be one of the most robust feature types in the face of common malware obfuscations (Aghakhani et al., 2020), but simple n-gram-based features have been maligned for being unable to learn complex sequential information when executables can be tens of kilobytes on the small side and hundreds of megabytes on the larger side (Kephart et al., 1995; AbouAssaleh et al., 2004; Kolter & Maloof, 2006; Raff et al., 2019; Zak et al., 2017). Given that a maximum T = 200M is realistic, many strategies to handle such sequence lengths have been developed. These include attempts to create “images” from malware (Nataraj et al., 2011; Liu & Wang, 2016), using compression algorithms as a similarity metric (Li et al., 2004; Walenstein & Lakhotia, 2007; Borbely, 2015; S. Resende et al., 2019; Menéndez et al., 2019; Raff & Nicholas, 2017; Raff et al., 2020), and attempts to scale 1D-convolutional networks over raw bytes (Krčál et al., 2018; Raff et al., 2018; 2021). We will use the Ember (Anderson & Roth, 2018) dataset for malware detection as a real-world test of our new self-attention for processing long sequences. It has been observed empirically that “best practices” developed in the machine learning, computer vision, and natural language processing communities do not always transfer to this kind of data. For example, this phenomenon has been observed with CNNs (Raff et al., 2018) and Transformers for malicious URL detection (Rudd & Abdallah, 2020). Most recently, (Rudd et al., 2022) attempted to apply Transformers to raw byte prediction and had to use a chunked attention that limits the attention window (Sukhbaatar et al., 2019). Using Hrrformer we show much longer sequence processing than this prior work, while simultaneously demonstrating that our method generalizes to a domain that is notorious for a lack of transfer. This increases our confidence in the effectiveness of our method.
Notably, the two current state-of-the-art Transformers as measured by the Long Range Arena (LRA) (Tay et al., 2020c) benchmarks do not pass this test, performing considerably worse on the malware task. 3. Attention with Holographic Reduced Representations The HRR operation allows assigning abstract concepts to numerical vectors, and performing binding ($\circledast$) and unbinding operations on those concepts via the vectors. One could bind “red” and “cat” to obtain a “red cat”. The vectors can also be added, so “red” $\circledast$ “cat” + “yellow” $\circledast$ “dog” represents a “red cat and yellow dog”. An inverse operator $\dagger$ is used to perform unbinding. One can then query a bound representation, asking “what was red?” by unbinding “red cat and yellow dog” $\circledast$ “red”$^\dagger$ to get a vector $\approx$ “cat”, where the resulting vector is necessarily corrupted by noise from combining multiple vectors into a single fixed-size representation. To perform this symbolic manipulation, the binding operation can be defined as $B = x \circledast y = \mathcal{F}^{-1}(\mathcal{F}(x) \odot \mathcal{F}(y))$, where $\mathcal{F}$ denotes the FFT and $\odot$ an element-wise multiplication. (This is faster than an equivalent reformulation as multiplication by a circulant matrix of only real values.) The inversion is defined as $y^\dagger = \mathcal{F}^{-1}\left(1/\mathcal{F}(y)\right)$. Combined, Plate showed that the response $B^\top y^\dagger$ should be $\approx 1$ if the vector $y \in B$, and $\approx 0$ if not present. These properties hold in expectation provided that all vectors satisfy the sufficient condition that their elements are I.I.D. sampled from a Gaussian with zero mean and variance $1/H$, where $H$ is the dimension of the vectors. We will now show how to apply the same general logic of attention using HRR operations, creating an alternative (but not mathematically equivalent) form of self-attention that runs in linear time with respect to the sequence length. This is a slight “abuse” of the HRR, as our vectors will not be I.I.D. sampled random values, but results from prior layers in the network. Our design circumvents this issue in practice, which we will discuss shortly. We note this is a satisfying, but not required, condition. Deviating from it adds more noise (our vectors are the outputs of prior layers in the network), but a softmax operation will act as a cleanup step so the method works without this condition. Attention can be represented using query Q, key K, and value V matrices, where the final output is computed as the weighted sum of the values. A query vector can be mapped to a set of linked key-value pairs to retrieve the value vector associated with the matching key. The binding and unbinding operations of the HRR are applied to link each key-value pair (i.e., bind the terms together), and then to query a single representation of all key-value pairs for the response values. For this reason, we will define the steps in an element-by-element manner that more naturally corresponds to the HRR operations, but our implementation works in a batched manner. We will discuss a single query $q_t \in \mathbb{R}^H$ against the set of $T$ key/value pairs $k_t, v_t \in \mathbb{R}^H$, where $H$ is the dimension of the representation and $t \in 1, 2, \cdots, T$. Thus $K = [k_1, k_2, \ldots, k_T]$ is a matrix of shape $(T, H)$, and similarly for $Q$ and $V$. First, we will create a superposition $\beta \in \mathbb{R}^H$ of the key-value pairs, meaning that all vectors entering the superposition $\beta$ are also similar (to some degree) to the final result.
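The binding and inversion operators above reduce to a few FFT calls. The following is a minimal sketch in jax.numpy, not the paper's released implementation, of binding, inversion, and the query pattern just described:

import jax.numpy as jnp

def bind(x, y):
    # Circular convolution via FFT: F^-1(F(x) * F(y)).
    return jnp.fft.ifft(jnp.fft.fft(x) * jnp.fft.fft(y)).real

def inverse(y):
    # HRR inversion: F^-1(1 / F(y)).
    return jnp.fft.ifft(1.0 / jnp.fft.fft(y)).real

def unbind(b, y):
    # Querying a bound representation: binding b with y's inverse
    # recovers (a noisy version of) the vector that was bound with y.
    return bind(b, inverse(y))

With H-dimensional vectors sampled i.i.d. from N(0, 1/H), unbind(bind(x, y) + bind(a, b), y) returns a noisy approximation of x whose dot product with x is close to 1 in expectation, which is exactly the dot-product test described above.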
This is done by binding ($\circledast$) each key-value pair to associate them, and summing the results to form the superposition:

$$\beta = \sum_{i=1}^{T} k_i \circledast v_i \quad (1)$$

$\beta$ lets us compute interaction effects against all key-value pairs in one $\mathcal{O}(T H \log H)$ operation, avoiding the $\mathcal{O}(T^2 H)$ cost of explicit cross-correlation. This now gives us a single vector $\beta$ that represents the entire sequence of $T$ different key-value pair bindings. Now for each query we are interested in, we can obtain a vector that approximately matches the values $v_{1,2,\ldots,T}$ via the symbolic property of HRRs that $x^\dagger \circledast (x \circledast y + a \circledast b) \approx y$, giving:

$$\hat{v}_t = q_t^\dagger \circledast \beta \quad (2)$$

The queries are checked against the representation of all key-value pairs $\beta$, where each $q_t$ will contribute a corresponding value based on the response of the bound key, and the HRR framework allows us to perform them jointly. This now gives us a representation $\hat{v}_t \in \mathbb{R}^H$ that represents the set of values present given the keys that respond to the input queries. We can then approximately determine the values present using the dot-product test that present values should result in $\approx 1$ scalars, performing:

$$a_t = \text{cosine-similarity}(v_t, \hat{v}_t) \quad (3)$$

Each $a_t$ is a scalar giving the match between the original value $v_t$ and the HRR-extracted $\hat{v}_t$, and this is repeated for all $T$ values to give us a response on the relative magnitude of each value present. With these approximate responses, we can compute a weighted distribution $w \in \mathbb{R}^T$ by computing the softmax over all $a_{1,2,\ldots,T}$ responses, giving $w = \mathrm{softmax}(a_1, a_2, \ldots, a_T)$. (We find no meaningful difference in results when using a temperature, $\mathrm{softmax}(\exp(\alpha)[a_1, \ldots, a_T])$.) While each $a_t$ will be highly noisy due to the inherent noise of the HRR superposition $\beta$, and an amplified level of noise due to the use of non-I.I.D. Gaussian elements, the softmax has the practical effect of removing this noise for us. This occurs because the HRR results in similar-magnitude noise across each $a_t$, and the softmax operation is invariant to constant additions to all elements. For notational convenience in expressing this in more detail, let $\tilde{\Pi}_h(x_1, \ldots, x_k)$ denote the pairwise interactions of the $h$'th term in evaluating an expression of the form $\left(\sum_{i=1}^{T} x_i \circledast x_{i+T}\right)^\top q_T$, where all bold symbols are $H$-dimensional vectors. The response of any query of the form $q = x_m + z$ takes the form

$$\frac{\sum_{h=1}^{H} (x_{m,h} + z_h)\, \tilde{\Pi}_h(x_1, \ldots, x_k) (-1)^{h+1}}{\left(\sum_{h=1}^{H} (-1)^{h+1} x_{m,h} + \sum_{h=1}^{H} (-1)^{h+1} z_h\right) \left(\sum_{h=1}^{H} x_{m,h} + z_h\right)}$$

In doing so we see that any noise vector $z$ has a similar-magnitude impact regardless of the target vector $x_m$. Because the softmax is invariant to uniform magnitude adjustments to all inputs, and we have the same noise occurring for each computation, the softmax effectively denoises the response with respect to these magnitude impacts. We discuss this further in Appendix D. This softmax-based cleanup step is necessary because attempting to use $\hat{v}_t$ directly results in degenerate random-guessing performance due to the noise of the HRR steps. With $w$ in hand, we obtain the final Attention result

$$\mathrm{Attention}(Q, K, V) = [w_1 v_1, w_2 v_2, \ldots, w_T v_T] \quad (4)$$

returning a weighted version of the original values $V$, approximating the standard attention's response.
Critically, this process is linear in $T$ and approximates an all-pairs interaction between queries and keys, as shown by Theorem A.1. The rest of self-attention works in the same manner as the standard Transformer. The Attention function's inputs and outputs are altered by linear layers, and instead of performing single attention, we split the feature vector $H$ of the query, key, and value into $h$ heads, each having a feature size of $H' = H/h$. The attention is computed in parallel in each head and then merged into a single attention, which is projected to get the final output. The Hrrformer is implemented using JAX, and a code snippet of the self-attention mechanism is presented in Appendix A. A block diagram representation of the Hrrformer self-attention is presented in Figure 2. The diagram is shown for a single head and a single batch element for brevity. A high-level overview of the architecture in a multi-head setting is presented in Figure 3, showing the analogy between Hrrformer and Transformer.

Figure 2. The block diagram of the Hrrformer self-attention. The dashed straight line represents the continuation of the same process for each of the $T$ elements. After computing the cosine similarity score vector $a$, softmax is applied to compute the final attention weights $w$, which are elementwise multiplied with the value matrix $V = [v_1, v_2, \ldots, v_T]$. Afterward, a linear layer is used to get the final output.

The time complexity of the binding/unbinding operation is $\mathcal{O}(H \log H)$, which is performed $T$ times as the dominant cost. Therefore, the time and space complexity of the Hrrformer attention per layer are linear in the sequence length $T$: the time complexity is $\mathcal{O}(T H \log H)$ and the space complexity is $\mathcal{O}(T H)$. This simple approach allows us to fully replicate the same overall logical goals and construction of the attention mechanism first proposed by (Vaswani et al., 2017). The correspondence is not exact (e.g., returning weighted original values instead of approximate value constructions), but it allows us to avoid the non-I.I.D. issue of using arbitrary $Q$, $K$, and $V$ as learned by the network.

Figure 3. A high-level overview of our architecture, showing how the Hrrformer is analogous to the traditional transformer. Dataflow in a single head, with the shape of the tensor at different stages, is shown on the left, and multi-head attention is shown on the right.

This neuro-symbolic reconstruction yields several benefits, as we will demonstrate in the next section.
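For concreteness, the following is a minimal single-head sketch of Equations (1)-(4) in jax.numpy. It is an independent illustration, not the Appendix A snippet, and it omits the linear projections and the multi-head split/merge described above:

import jax
import jax.numpy as jnp

def bind(x, y):
    # Circular convolution along the last axis: F^-1(F(x) * F(y)).
    return jnp.fft.ifft(jnp.fft.fft(x) * jnp.fft.fft(y)).real

def inverse(y):
    # HRR inversion: F^-1(1 / F(y)).
    return jnp.fft.ifft(1.0 / jnp.fft.fft(y)).real

def hrr_attention(q, k, v):
    # q, k, v: (T, H) arrays for a single head.
    beta = jnp.sum(bind(k, v), axis=0)       # Eq. (1): superposition, shape (H,)
    v_hat = bind(inverse(q), beta)           # Eq. (2): retrieved values, (T, H)
    a = jnp.sum(v * v_hat, axis=-1) / (      # Eq. (3): cosine similarity, (T,)
        jnp.linalg.norm(v, axis=-1) * jnp.linalg.norm(v_hat, axis=-1) + 1e-9)
    w = jax.nn.softmax(a)                    # softmax cleanup over positions
    return w[:, None] * v                    # Eq. (4): weighted original values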
4. Experiments and Results The proposed Hrrformer is designed as an inexpensive alternative to self-attention models for longer sequences. Experiments are performed to validate the effectiveness of the method in terms of time and space complexity on known benchmarks. Our first result is running many of the currently popular and state-of-the-art (SOTA) xformers on the real-world classification task of the Ember malware detection dataset (Anderson & Roth, 2018). This provides an example where the need to handle ever-longer sequences exists, and demonstrates that Hrrformer is one of the fastest and most accurate options on a problem with complex real-world dynamics. In doing so we also show that current SOTA methods such as Luna-256 do not generalize to new problem spaces as well as our Hrrformer does. Our second result uses the Long Range Arena (LRA) (Tay et al., 2020c), which has become a standard for evaluations in this space. The primary value of these results is to compare our Hrrformer with numerous prior works, establishing the broad benefits of faster time per epoch, convergence in 10× fewer epochs, requiring only a single layer, and competitive overall accuracy. In addition, the LRA results are more accessible to the broader ML community and allow us to show visual evidence of HRR-based attention learning to recover complex structure from a one-dimensional sequence. 4.1. EMBER EMBER is a benchmark dataset for the malware classification task (Anderson & Roth, 2018). The benchmark contains 600K labeled training samples (300K malicious, 300K benign) and 200K labeled test samples (100K malicious, 100K benign). The maximum sequence length of this dataset is over 100M, which is not feasible for any of the self-attention models to train with. We experiment with relatively shorter sequence lengths, starting from T = 256 and doubling up to T = 131072, truncating or padding the bytes until the given maximum length is reached. In this benchmark, Hrrformer is compared with Transformer (Vaswani et al., 2017), H-Transformer-1D (Zhu & Soricut, 2021), Luna-256 (Ma et al., 2021), Performer (Choromanski et al., 2020), Linformer (Wang et al., 2020), and F-Net (Lee-Thorp et al., 2021). All use 8 heads of a single encoder with an embedding size of 256 and a feed-forward hidden size of 512. Because this is a binary classification task, the encoder output is mapped into 2 logits using back-to-back dense layers with ReLU activation. During training, the softmax cross-entropy loss function is optimized. For sequence length 256, the batch size is set to 256. As the sequence length doubles, we halve the batch size to fit the data and the model into memory, which can be expressed as $\max(2^{16-\log_2 T}, 1)$; a small numerical sketch of this rule is given below. This is done to push other models to the maximum possible length while keeping the batch size consistent between experiments. Additionally, a timeout limit of 10,000s per epoch is set, after which experiments are terminated. The dropout rate is chosen to be 0.1, and the learning rate is $10^{-3}$ with an exponential decay rate of 0.85. Each of the models is trained for a total of 10 epochs on 16 NVIDIA TESLA PH402 32GB GPUs. Figure 1 shows the classification accuracy of each of the methods for incremental sequence lengths from 512 to 131072. As the sequence length increases, Hrrformer outperforms the rest of the models, achieving the highest accuracy of 91.03% for maximum sequence length 16384.
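The batch-size rule above can be tabulated directly; a small numerical sketch of max(2^(16 - log2 T), 1) for the sequence lengths used is:

import math

def ember_batch_size(T):
    # Halve the batch size each time the sequence length doubles,
    # starting from batch 256 at T = 256.
    return max(2 ** (16 - int(math.log2(T))), 1)

# T:     256  512  1024  2048 ... 65536  131072
# batch: 256  128  64    32   ... 1      1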
In terms of execution time, F-Net is the only model faster than ours; however, the accuracy of F-Net is an absolute 4.53 percentage points lower (Table 1). Even after exponentially decaying the batch size, we could not fit the standard Transformer model into memory for sequence length 8192, indicated as out-of-memory (OOM) in all figures. H-Transformer-1D and Luna-256 crossed the timeout limit for sequence length 16384, indicated as out-of-time (OOT) in the figure. The detailed numeric results are presented in Appendix B, with additional results for the sequence length of 256. The execution time for the linear-time-complexity methods appears quadratic in the figure; this is due to the exponential decay of the batch size with the increase in sequence length, which was necessary to push each model to its maximum possible sequence length. More detailed timing information can be seen in Figure 4, where all models but F-Net and Hrrformer run out of time or memory before reaching the maximum sequence length. Note as well that as the sequence length increases, the already small difference in runtime between F-Net and Hrrformer reduces to near zero.

Figure 4. The total runtime on the Ember dataset for each algorithm, with the associated big-O runtime complexity: Transformer $\mathcal{O}(T^2 \cdot H)$; H-Transformer-1D, Luna-256, Performer, and Linformer $\mathcal{O}(T \cdot H)$; F-Net and Hrrformer $\mathcal{O}(T \cdot H \log H)$. While Hrrformer is technically a slower big-O due to the extra $\log H$ term, the hidden size of the network is generally fixed and smaller than the sequence length. Thus we see in practice our design allows for faster execution in training and inference. Most prior methods fail early by running Out Of Memory (OOM) or Time (OOT).

Of significant importance to our results is that Luna-256 performs considerably worse than all other options, compared to its top accuracy in the LRA. We hypothesize that the Ember task requires more complex reasoning and feature extraction over time, and because Luna performs aggressive compression and approximation of the time component of the model, it suffers in terms of accuracy. Our Hrrformer, on the other hand, has consistent behavior across Ember and the LRA: high accuracy, the ability to handle longer sequences, and convergence in few epochs, a requirement for working on this dataset, which is 1 TB in size and otherwise prohibitive in its scale. 4.2. Long Range Arena The Long Range Arena (LRA) (Tay et al., 2020c) benchmark comprises 6 diverse tasks covering image, text, math, language, and spatial modeling under long-context scenarios ranging from 1K to 16K. ListOps – this task inspects the capability of modeling hierarchically structured data in a longer sequence context, with the mathematical operators MAX, MEAN, MEDIAN, and SUM MOD enclosed by delimiters. This is a ten-way classification problem with a maximum sequence length of 2K. Text – a byte/character-level classification task using the IMDB movie review (Maas et al., 2011) dataset.
Character-level language modeling makes the models reason with compositional, unsegmented data. This is a binary classification task with a maximum sequence length of 4K. Retrieval – evaluates the model's ability to encode and compress useful information for matching and retrieval by modeling a similarity score between two documents. For this task, the ACL Anthology Network (Radev et al., 2013) dataset is used in a character-level setup. This is a binary classification task with a maximum sequence length of 8K. Image – is a 10-class image classification task that uses the grayscale CIFAR-10 dataset as a sequence of length 32 × 32 = 1024. This task assesses the model's ability to process discrete symbols. Pathfinder – evaluates the model's performance over long-range spatial dependency. This is a binary classification task that decides whether two circles are connected by a line; it was introduced in (Linsley et al., 2018) and includes distractor paths. The images have dimension 32 × 32, reshaped into a sequence of length 1024. Path-X is an extremely difficult version of the Pathfinder task, with images of dimension 128 × 128 = 16384 and additional distractor paths.

In Hrrformer, we use the same number of parameters or fewer than specified in the LRA benchmark (Tay et al., 2020c) across the tasks; a list of the hyper-parameters used in each task is provided in Appendix B. Global average pooling is applied to the output of the encoder sequences, and subsequently back-to-back dense layers with ReLU activation are used to obtain the final logits. During training, the softmax cross-entropy loss function is optimized using the Adam optimizer. We use an exponentially decaying learning rate with an initial value of 10^-3 and a final value of 10^-5. For all tasks, Hrrformer is trained for a total of 20 epochs in both the single- and multi-layer cases, which is 10× less training than previous works.

The accuracy results on all tasks of the LRA benchmark are presented in Table 1.³ Ours is one of only two methods that improve accuracy upon the Transformer, and it consistently displays higher performance across the tasks. We show performance for both single and multiple layers. In 3 of the 5 tasks (ListOps, Text, Image), Hrrformer achieves the second-best results using only a single encoder layer.

³The Pathfinder task as originally reported by (Tay et al., 2020c) uses a "hard" version of the task, but the code provided defaults to an "easy" version. Most papers do not make clear which version of the task is evaluated; the F-Net authors indicated in correspondence that the "easy" version was used, Luna-256 used the hard version, and the other authors have not yet responded to us. On the easy version, Hrrformer gets 80.81% with a single layer and 80.77% with multiple layers, but we report the hard version in our table and assume others are using the hard version.

Table 1. Accuracy results of Hrrformer on the Long Range Arena (LRA) benchmark. Even using just one layer, Hrrformer is highly competitive, and it is the only method besides Luna that is a Pareto improvement over the original Transformer. Our method is further advantaged in that it requires 10× fewer epochs to reach competitive accuracies. Best results in bold, second best in italics.
| Model | ListOps (2k) | Text (4k) | Retrieval (4k) | Image (1k) | Path (1k) | Path-X (16k) | Avg | Epochs |
|---|---|---|---|---|---|---|---|---|
| Transformer (Vaswani et al., 2017) | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 | FAIL | 54.39 | 200 |
| Local Attention (Tay et al., 2020c) | 15.82 | 52.98 | 53.39 | 41.46 | 66.63 | FAIL | 46.06 | 200 |
| Linear Transformer (Katharopoulos et al., 2020) | 16.13 | 65.90 | 53.09 | 42.34 | 75.30 | FAIL | 50.55 | 200 |
| Reformer (Kitaev et al., 2020) | 37.27 | 56.10 | 53.40 | 38.07 | 68.50 | FAIL | 50.67 | 200 |
| Sparse Transformer (Child et al., 2019) | 17.07 | 63.58 | 59.59 | 44.24 | 71.71 | FAIL | 51.24 | 200 |
| Sinkhorn Transformer (Tay et al., 2020b) | 33.67 | 61.20 | 53.83 | 41.23 | 67.45 | FAIL | 51.29 | 200 |
| Linformer (Wang et al., 2020) | 35.70 | 53.94 | 52.27 | 38.56 | 76.34 | FAIL | 51.36 | 200 |
| Performer (Choromanski et al., 2020) | 18.01 | 65.40 | 53.82 | 42.77 | 77.05 | FAIL | 51.41 | 200 |
| Synthesizer (Tay et al., 2020a) | 36.99 | 61.68 | 54.67 | 41.61 | 69.45 | FAIL | 52.88 | 200 |
| Longformer (Beltagy et al., 2020) | 35.63 | 62.85 | 56.89 | 42.22 | 69.71 | FAIL | 53.46 | 200 |
| BigBird (Zaheer et al., 2020) | 36.05 | 64.02 | 59.29 | 40.83 | 74.87 | FAIL | 55.01 | 200 |
| F-Net (Lee-Thorp et al., 2021) | 35.33 | 65.11 | 59.61 | 38.67 | 77.78 | FAIL | 54.42 | 200 |
| Nystromformer (Xiong et al., 2021) | 37.15 | 65.52 | 79.56 | 41.58 | 70.94 | FAIL | 58.95 | 200 |
| Luna-256 (Ma et al., 2021) | 37.98 | 65.78 | 79.56 | 47.86 | 78.55 | FAIL | 61.95 | 200 |
| H-Transformer-1D (Zhu & Soricut, 2021) | 49.53 | 78.69 | 63.99 | 46.05 | 68.78 | FAIL | 61.41 | 200 |
| Hrrformer Single-layer | 38.79 | 66.50 | 75.40 | 48.47 | 70.71 | FAIL | 59.97 | 20 |
| Hrrformer Multi-layer | 39.98 | 65.38 | 76.15 | 50.45 | 72.17 | FAIL | 60.83 | 20 |

For the Image classification task, Hrrformer achieves the best result of 50.45% accuracy using 3 encoder layers. Moreover, Hrrformer requires 10× fewer epochs than the others to produce comparable or better results. Overall, the multi-layered Hrrformer produces the third-best result of 60.83% on the benchmark.

[Figure 5 grid: the reshaped weight vector for each CIFAR-10 class (Airplane, Automobile, Bird, Cat, Deer, Dog, Frog, Horse, Ship, Truck) across attention heads 1-4.]

Figure 5. Visualization of the weight vector w ∈ R^(1024×1) reshaped to 32 × 32, the shape of the original images of the CIFAR-10 dataset used in the LRA Image classification task. A single-layer Hrrformer is able to learn the 2D structure from the 1D sequence of the image. This is particularly noticeable in the Airplane, Dog, Frog, and Horse images. Note that context-sensitive head activation can be observed by comparing Head 3 for Dog vs. Frog, where activation occurs for different pixel intensities, indicating the model is not naively activating on simple color intensity.

[Figure 6 scatter plot: LRA score (y-axis, 46-62) vs. speed in examples per second (x-axis, log scale) for all of the above models; * indicates a single layer.]

Figure 6. Performance (y-axis) and speed (x-axis, log scale) of different xformers, with the memory footprint on GPU illustrated by the size of the circles. Hrrformer is in the top-right of the graph with the smallest circle size, indicating it is the fastest and most memory-efficient for training (this does not factor in convergence speed).

The ability to learn with a single layer aids both throughput and memory use. The result is surprising, and by visualizing the weight vector w we can confirm that a single layer is sufficient to learn the structure. We show this for the Image task of the single-layer Hrrformer in Figure 5 (multi-layer in Appendix C). Here, the weight vector w ∈ R^(1024×1) is reshaped to 32 × 32, the shape of the original grayscale images of the CIFAR-10 dataset, for visualization.
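As an aside, the reshape-and-plot step behind Figure 5 is simple to reproduce; the following minimal sketch uses a random array as a stand-in for the trained weight vector, which is not provided here.

```python
import numpy as np
import matplotlib.pyplot as plt

w = np.random.randn(1024)  # stand-in for a trained Hrrformer weight vector
plt.imshow(w.reshape(32, 32), cmap="viridis")  # back to the image's 2D grid
plt.title("Weight vector reshaped to 32 x 32")
plt.colorbar()
plt.show()
```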
From the figure, it is clear that the Hrrformer is learning to identify the 2D structure from the 1D sequence of the Image classification task. We also compare against the standard Transformer in Appendix Figure 10, where it is less obvious how the model's weights might correspond to the 2D structure of the image.

Hrrformer's benefits go beyond accuracy and convergence speed: it is fast and consumes the least amount of GPU memory of the alternatives tested. Figure 6 compares all the self-attention models in terms of LRA score, speed (training examples per second), and memory footprint (size of the circle). The LRA score is the mean accuracy over all tasks in the LRA benchmark. Speed and memory footprint are calculated on the byte-level text classification task per epoch. To measure these results, a single NVIDIA TESLA PH402 32GB GPU is utilized with a fixed batch size of 4, a maximum sequence length of 4000, an embedding size of 32, and a feature size of 64. For all models, 6 layers of the encoder are used. The single- and multi-layered Hrrformer are 28× and 10× faster, respectively, than Luna-256 (Ma et al., 2021), which achieved the highest accuracy on the LRA benchmark. Hrrformer also consumes the least amount of memory, taking 79.15% and 70.66% less memory than Luna-256 in the single- and multi-layered cases, respectively. The detailed numeric results of Figure 6 are given in Appendix B.

Hrrformer also reduces the amount of overfitting between training and test performance. We compare the training and test accuracy, and the amount of overfitting, on the Image classification task against the other self-attention models presented in the LRA benchmark (Tay et al., 2020c) for which data are available.⁴ Table 2 shows that the Hrrformer achieves the best results on the test set with a 6.83% train/test gap. The learning curves for all tasks are also presented in Appendix Figure 8, demonstrating the lower-overfitting nature of the Hrrformer across the tasks.

⁴We do not have the compute resources to run the other xformers on the LRA ourselves, in part due to their higher memory use, which exceeds our infrastructure.

Table 2. Training and test accuracy of different self-attention models on the Image classification task. Among all the models, Hrrformer achieves the best test accuracy with the least amount of overfitting (lower is better).

| Model | Train Accuracy (%) ↑ | Test Accuracy (%) ↑ | Overfitting (%) ↓ |
|---|---|---|---|
| Transformer | 69.45 | 42.44 | 27.01 |
| Local Attention | 63.19 | 41.46 | 21.73 |
| Sparse Transformer | 66.74 | 44.24 | 22.50 |
| Longformer | 71.65 | 42.22 | 29.43 |
| Linformer | 97.23 | 38.56 | 58.67 |
| Reformer | 68.45 | 38.07 | 30.38 |
| Sinkhorn Transformer | 69.21 | 41.23 | 27.98 |
| Synthesizer | 97.31 | 41.61 | 55.70 |
| BigBird | 71.49 | 40.83 | 30.66 |
| Linear Transformer | 65.61 | 42.34 | 23.27 |
| Performer | 73.90 | 42.77 | 31.13 |
| Hrrformer | 57.28 | 50.45 | 6.83 |

Hrrformer's inference time is also faster than the other options for long sequences. As an example, the time to make predictions for the text classification task is given in Appendix Table 7, where the single-layer Hrrformer is the fastest option, followed by the multi-layer Hrrformer. We also find Hrrformer's inference time is relatively fast regardless of batch size: inference for the Hrrformer with a batch size of 2 is still 5× faster than inference for the Transformer with a batch size of 32. More details are presented in Appendix Table 6. 5."
+ }, + { + "url": "http://arxiv.org/abs/2206.05893v1", + "title": "Deploying Convolutional Networks on Untrusted Platforms Using 2D Holographic Reduced Representations", + "abstract": "Due to the computational cost of running inference for a neural network, the\nneed to deploy the inferential steps on a third party's compute environment or\nhardware is common. If the third party is not fully trusted, it is desirable to\nobfuscate the nature of the inputs and outputs, so that the third party can not\neasily determine what specific task is being performed. Provably secure\nprotocols for leveraging an untrusted party exist but are too computational\ndemanding to run in practice. We instead explore a different strategy of fast,\nheuristic security that we call Connectionist Symbolic Pseudo Secrets. By\nleveraging Holographic Reduced Representations (HRR), we create a neural\nnetwork with a pseudo-encryption style defense that empirically shows\nrobustness to attack, even under threat models that unrealistically favor the\nadversary.", + "authors": "Mohammad Mahmudul Alam, Edward Raff, Tim Oates, James Holt", + "published": "2022-06-13", + "updated": "2022-06-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "cs.CV", + "stat.ML" + ], + "main_content": "Introduction As convolutional neural networks (CNN) have become more popular, so to have the concerns around their deployment. Many tricks like low-precision \ufb02oats, pruning of weights, and classic software engineering and performance tuning have been employed to reduce these computation costs. Still, it is often necessary to deploy a model on third-party compute hardware or cloud environments for a variety of reasons (e.g., lower latency to customers, lack of computing resources, and elasticity of computing demand). In these situations, there are cases where the owner of the model does not fully trust the third party and desires to obfuscate information about the model running on this untrusted 1Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA 2Laboratory for Physical Sciences, College Park, MD, USA 3Booz Allen Hamilton, McLean, VA, USA. Correspondence to: Edward Raff , Tim Oates . Proceedings of the 39 th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s). platform. The current solutions to this situation naturally come from the encryption community, and the tools of Secure Multiparty Computation (SMC) (Kerschbaum, 2006; Du & Atallah, 2001) and Homomorphic Encryption (HE) (GiladBachrach et al., 2016) provide methods for running programs on untrusted hardware that guarantee the privacy of the results. These are valuable tools, but computationally demanding and limiting. They often require restrictions on even basic CNN functionality like avoiding softmax activation and sigmoid/tanh non-linearity, limits on the size of the computation itself, and can dwarf the compute time saved by of\ufb02oading to the third party. Especially when providers charge by compute-hours, this makes SMC and HE tools impractical when computing and latency constraints are a factor, or when neural networks are very large. Current approaches to untrusted inference are all slower than simply running the computation locally, making them impractical. 
For this reason, our work sacrifices provable security for empirical security, by developing an approach that inserts "secrets" into a network's input that can later be extracted, yet obfuscates the input/output to the untrusted party in an encryption-like manner. We emphatically stress that this is not strong encryption, but empirically we observe that a realistic adversary's attacks are at random-guessing performance, and an unrealistically powerful adversary fares little better. We term our approach CONNECTIONIST SYMBOLIC PSEUDO SECRETS (CSPS)¹; compared to the fastest alternative (Mishra et al., 2020), CSPS is 5000× faster and transfers 18,000× less data, making it practically deployable. To summarize, we leverage inspirations from encryption and neuro-symbolic methods to symbolically represent a one-time pad strategy from the encryption literature within a neural network.

¹Our code can be found at https://github.com/NeuromorphicComputationResearchProgram/Connectionist-Symbolic-Pseudo-Secrets

The rest of our paper is organized as follows. First, we review work related to our own in section 2. Our approach uses a Vector Symbolic Architecture (VSA) known as the Holographic Reduced Representations (HRR) from more classical symbolic AI work, which may not be familiar to all readers, so we review it briefly in section 3. This allows us to discuss our method CSPS and how we develop a mechanism for inserting a secret "one-time pad" into a network input and extracting it from the output in section 4. This produces a 5000× speedup and an 18,000× reduction in data transfer compared to the fastest alternatives, providing the first speedup for untrusted computation, as shown in section 5. In addition, we show that an overly powerful adversary is empirically only slightly better than random guessing, providing practical security for many applications, and we present extensive ablation studies over six alternative design choices that validate our approach. Finally, we conclude in section 6 with a discussion of the limitations of our approach, most notably that we are not implementing true strong encryption, so expectations must be tempered where privacy is a critical requirement.

2. Related Work

The desire to hide the details of a program's inputs and outputs from a third party performing the computation has been studied for many decades by the security and cryptography communities. These methods have been naturally adapted to deep learning tasks, providing provable privacy guarantees. Unfortunately, the high costs of these methods prevent them from being useful when there is any compute or runtime constraint, often incurring slowdowns of multiple orders of magnitude. We review the primary approaches.

The first approach that has been used is (Fully) Homomorphic Encryption (FHE), which allows recasting any program into a new version that takes encrypted inputs and produces encrypted outputs, providing strong privacy. However, this conversion process can result in extreme computational cost, often requiring arbitrary-precision integer arithmetic.² To make FHE "practical", restrictions on the size, depth, and activation functions have been necessary to minimize these compute overheads. For example, (Gilad-Bachrach et al., 2016) required squared activations (σ(x) = x²) to perform MNIST in an hour per datum.
Current FHE methods, through a mix of network and FHE optimizations, can scale to CIFAR-10 (Brutzkus et al., 2019), but result in networks that are slower and less accurate than our CSPS. We are not aware of any works that have scaled past CIFAR-10 for FHE-based inference (Chou et al., 2018; QaisarAhmadAlBadawi et al., 2020; van Elsloo et al., 2019; Nandakumar, 2019; Esperanca et al., 2017).

²Also called "bignum" or "big-integer".

The second broad class of approaches is protocol-based, requiring multiple rounds of communication in which data is sent back and forth between the host requesting the computation and the third-party server performing the bulk of the computation. Methods like Secure Multi-party Computation (SMC) (Kerschbaum, 2006; Du & Atallah, 2001) and other "protocols" are developed on top of "Oblivious Transfer" (OT), a primitive by which a sender and receiver exchange messages (Rabin, 1981). Many OT protocols³ have been customized for deep learning applications (Riazi et al., 2018; Rouhani et al., 2018; Chandran et al., 2019; Riazi et al., 2019; Liu et al., 2017; Mohassel & Zhang, 2017), but suffer limitations similar to FHE. They require minutes of computation per data-point prediction, require multiple rounds of communication (a problem for deployment on any limited-bandwidth or high-latency network), and must send large "messages" on the order of hundreds of megabytes per prediction. We note that our approach requires only one round of communication, that the messages are the same size as the original data points, and that it can perform predictions in milliseconds. Hybrid approaches combining OT and FHE have been developed (Juvekar et al., 2018; Mishra et al., 2020) and are faster than OT or FHE alone, but they have not yet overcome the compute, multiple-rounds-of-communication, and scaling limitations that prevent practical use.

The most similar approach to our own work is InstaHide (Huang et al., 2020), which randomly combines training instances with a second population of images. These mixed images (and labels) are sent to a third party for training, in an attempt to hide the true training task from the third party. Carlini et al. (2021) showed how to break InstaHide and proved learning bounds indicating the impossibility of the approach. The key failure of InstaHide is a dual problem: 1) the random additions are highly structured natural images, creating an attack avenue, and 2) the goal is third-party training, which requires InstaHide to provide the mixed image, leaving only 4 parameters as the "secret" per image. Our focus on HRRs allows unstructured secrets, making attack harder, and the focus on inference with a trained model allows us to hide a large secret from the third party. We perform extensive customized attacks against CSPS to show we do not suffer the same failing, and provide learning bounds in the linear case that show we do not suffer the same conditions identified by (Carlini et al., 2021).

We note that, to the best of our knowledge, our work is the only approach seeking an approximate solution to the problem. This means our method should not be used when privacy is of extreme, "mission critical" importance. Still, we do obtain empirically good privacy in our results, and we show that our method is the only approach practically deployable when runtime or latency is a requirement.
³We note that there are many different classes of protocols involved in these works, and our related-work discussion oversimplifies them as just "OT"; a full description of the different nuances would not aid the reader in understanding our approach, and all prior work shares the same fundamental limitations.

In particular, for all the prior work cited, the time it takes to run the FHE or OT protocols is orders of magnitude greater than the time to compute the result locally. Our work is the first that we are aware of to present a method that enters the positive direction of the runtime trade-off.

3. Technical Background

Holographic Reduced Representations (HRR) is a method of representing compositional structure using circular convolution in distributed representations (Plate, 1995). Vectors can be composed together using circular convolution, which is referred to as a binding operation. Using the original notation of Plate's (1995) paper, the binding operation is expressed in eq. 1, where x_i, y_i ∈ R^d are arbitrary vector values and F(·) and F⁻¹(·) are the Fast Fourier Transform and its inverse, respectively:

B = x_i ⊛ y_i = F⁻¹(F(x_i) ⊙ F(y_i))    (1)

B ∈ R^d is the bound term composed of x_i and y_i. Two things make HRR intriguing and valuable: the use of circular convolution, which is commutative, and its ability to retrieve bound components. The retrieval of bound components is referred to as unbinding. A vector can be retrieved by defining an inverse function † : R^d → R^d and an identity function F(y_i†) · F(y_i) = 1⃗, which gives y_i† = F⁻¹(1 / F(y_i)). Using the inverse of the vector y_i, the other component of the bound term can be approximately retrieved as x_i ≈ B ⊛ y_i†. These properties are interesting because they hold in expectation even if B is defined with multiple terms, i.e., B = Σ_{i=1}^k x_i ⊛ y_i, or when composed with hierarchical structure. This allows composing complex symbolic relationships by assigning meaning to arbitrary vectors, while staying in a fixed d-dimensional space. As the number of terms bound or added together increases, the noise of the reconstruction x′_i will also increase. To make these properties work, we use the initialization conditions proposed by (Ganesan et al., 2021), where x_i, y_i ∼ π(N(0, 1/d)), π(·) is the projection onto the ball of complex unit magnitude, π(y_i) = F⁻¹(F(y_i) / |F(y_i)|), and N(µ, σ²) is the Normal distribution.

4. CONNECTIONIST SYMBOLIC PSEUDO SECRETS

Our approach to CONNECTIONIST SYMBOLIC PSEUDO SECRETS requires two steps. First, we introduce a simple modification of the HRR from 1D to 2D, exploiting a property of its construction so that we can embed the secret into the inputs in such a manner that it is likely to be preserved by the network. Then we design a training approach that uses the symbolic behavior of HRR to bind the input and then unbind the output, such that the majority of the work can be done by a remote third party.

4.1. 2D HRR As Pseudo One-Time Pad

Our first insight comes from the fact that in Equation 1, the result B = x ⊛ s is a simple linear operation that at infinite precision is invertible, giving s = x† ⊛ B. Thus, if we have x represent the image (network input) we wish to obscure, and we have a random secret s to apply, then the resulting B object will appear random in nature.
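As a concrete illustration of the operations just defined, the following is a minimal NumPy sketch of binding, unbinding, and the projection π(·); the function names are our own and this is not the authors' released code.

```python
import numpy as np

def project(v):
    # pi(.): project onto the ball of complex unit magnitude in the Fourier domain
    f = np.fft.fft(v)
    return np.real(np.fft.ifft(f / np.abs(f)))

def bind(x, y):
    # Circular convolution via the FFT (Eq. 1)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def inverse(y):
    # y^dagger = F^-1(1 / F(y))
    return np.real(np.fft.ifft(1.0 / np.fft.fft(y)))

d = 1024
x = project(np.random.normal(0, 1 / np.sqrt(d), d))
y = project(np.random.normal(0, 1 / np.sqrt(d), d))
B = bind(x, y)
x_rec = bind(B, inverse(y))        # approximately recovers x
print(np.allclose(x, x_rec))       # exact up to float error when |F(y)| = 1
```

Because the projection forces |F(y)| = 1, the inverse is numerically stable and the retrieval here is exact up to floating-point error; with multiple bound terms, retrieval becomes approximate as described above.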
And for any bound output B, there are infinitely many image/secret pairs that will produce the exact same output.⁴ This allows for a "one-time pad" kind of approach to obscuring the input to the network from the untrusted party. If we can preserve the secret s within B as B is processed by a neural network, we can attempt to extract it using the unbinding operation at the end. Phrased mathematically, if f(·) is a normal CNN, we desire a secure function f̃(·) such that f̃(x ⊛ s) ⊛ s† ≈ f(x), yet f̃(x ⊛ s) appears random. A critical part of this is to maintain the information within s. We can achieve this by recognizing that the HRR operation is equivalent to a 1D convolution over a sequence, but the 1D convolution is not an important property. By simply switching to 2D Fourier Transforms, we instead perform 2D convolutions to bind our secret, which aligns the resulting x ⊛ s with the 2D CNN that will process it. By construction, this retains all the symbolic properties of HRR, as experimentally shown in Figure 2, but allows the inputs to behave in a manner consistent with a CNN. This is critical, as it means the binding operation is equivalent to another convolutional layer of the network, where the secret s is a user-chosen weight matrix rather than a learned one. This also means subsequent layers of convolution and pooling may learn to retain the structure of s to a sufficient degree that it can be extracted later, and the binding effectively obfuscates the nature of the input, as shown in Figure 3.

⁴The output is not uniformly random, a key difference from a true one-time pad.

[Figure 1 diagram: input images are bound with secrets (x ⊛ s) and passed to the U-Net main network; its output is unbound with the secret and fed to the prediction network (loss L1), while the adversarial network receives the raw output through a reversed gradient (loss L2).]

Figure 1. Block diagram of the encryption process of the CNN using improved 2D HRR, with three stages. Both of the orange regions are on the user end; the secrets used to unbind the images and the outputs of the main network are shared only within these regions (dashed line). The red region indicates the untrusted third party, who will run the main network after it has been trained.

[Figure 2 plot: x_i · (B ⊛ y_i†) as a function of the number of bound terms k (up to 1,000), for present and absent terms, with and without the projection π(·).]

Figure 2. Binding and unbinding of terms using improved 2D HRR, where for terms that are present (x_i ⊛ y_i ∈ B) the output is close to 1, and for absent terms (x_i ⊛ y_i ∉ B) the output is close to 0. Retrieval without applying the projection to the input is labeled naive present and absent, shown in violet and orange; present- and absent-term outputs with the projection are shown in green and pink, respectively.

4.2. Network Design

We now specify our novel approach to a network architecture that leverages 2D HRR to hide the output, while offloading ≈75% of the computation onto a remote third party. The proposed method has three networks: one larger backbone network fW(·) that performs the "work" of feature extraction, and two identical smaller networks. For the main network fW(·), the U-Net CNN architecture (Ronneberger et al., 2015) is employed, which at deployment would be run by the untrusted party. There are multiple benefits to using the U-Net architecture. It is a deep CNN with identical input and output shapes, so our secret vector s can be used on both the input to and the output of fW(·).
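A minimal NumPy sketch of the 2D variant described above follows, assuming an image and a projected secret of identical shape W × H × D; as with the earlier sketch, this is illustrative code rather than the authors' implementation.

```python
import numpy as np

def bind2d(a, b):
    # 2D circular convolution via the 2D FFT, applied per channel
    return np.real(np.fft.ifft2(np.fft.fft2(a, axes=(0, 1)) *
                                np.fft.fft2(b, axes=(0, 1)), axes=(0, 1)))

def inverse2d(s):
    # 2D analogue of s^dagger = F^-1(1 / F(s))
    return np.real(np.fft.ifft2(1.0 / np.fft.fft2(s, axes=(0, 1)), axes=(0, 1)))

W, H, D = 32, 32, 3
x = np.random.rand(W, H, D)  # stand-in for an input image
s = np.random.normal(0, 1 / np.sqrt(W * H * D), (W, H, D))
f = np.fft.fft2(s, axes=(0, 1))
s = np.real(np.fft.ifft2(f / np.abs(f), axes=(0, 1)))  # projected secret

x_hat = bind2d(x, s)                  # what the third party sees
x_rec = bind2d(x_hat, inverse2d(s))   # what the secret holder can recover
print(np.allclose(x, x_rec))
```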
This requirement exists because the secret s needs to match the shape of the output of the network. The client computes x̂ = x ⊛ s, sending x̂ to the third party while keeping the randomly chosen s ∼ π(N(0, 1/√(W × H × D))) a secret. The provider sends back the result r = fW(x̂). Afterward, two identical classification networks are used, as shown in the third stage of Figure 1. One is the Prediction Network fP(·), which classifies after unbinding the main network's outputs, giving the prediction ŷP = fP(r ⊛ s†). The other is the Adversarial Network fA(·), which attempts to perform the prediction task without access to the secret s, computing its own prediction ŷA = fA(r).

[Figure 3 panels: (a) original image, (b) bound image, (c) retrieved image.]

Figure 3. A sampled image x in (a) is bound with a secret s in (b) using improved 2D HRR. The original image, retrieved using s† ⊛ (x ⊛ s) ≈ x, is shown in (c).

Both networks are used during training, but we reverse the gradient sign (i.e., multiply by −1) from fA(·) back to the backbone network fW(·). Ganin et al. (2016) introduced this approach as a form of domain adaptation, but we instead use it to enforce that the secret s is necessary to extract meaning from the base network fW. This gradient reversal ensures that fA(·) attempts to minimize the same predictive loss, but the change in gradient sign means fW(·) receives a learning signal sending its optimization in the opposite direction, discouraging it from retaining any information that would be useful to fA(·), while still receiving the correct gradient from fP(·). Combined, the binding x̂ = x ⊛ s ensures that the input to fW(·) is random in appearance and not discernible on its own, and the gradient reversal on fA(·) ensures the output r = fW(x̂) is also not informative on its own. A new secret s is sampled for every prediction so that the third party cannot collect multiple samples to try to discover any "single key". This design heuristically provides the components of a secure protocol, but uses only standard operations built into all modern deep learning frameworks, giving it minimal overhead compared to the prior approaches outlined in section 2. The overall training procedure is given in Algorithm 1.

Algorithm 1 CONNECTIONIST SYMBOLIC PSEUDO SECRETS training, using a dataset with images of size W × H × D and a loss function ℓ(·, ·):
for x_i, y_i ∈ dataset do                      ▷ Training loop
    s ∼ π(N(0, 1/√(W × H × D)))                ▷ New secret
    x̂ ← x_i ⊛ s                                ▷ Obfuscated input
    r ← fW(x̂)                                  ▷ Run by 3rd party after training
    ŷP ← fP(r ⊛ s†)                            ▷ Used locally after training
    ŷA ← REVERSEGRAD(fA(r))                    ▷ Discarded after training
    L ← ℓ(y_i, ŷP) + ℓ(y_i, ŷA)                ▷ Incur training loss
    Back-propagate on the loss L
    Run optimization step

The main network fW(·) has four U-Net rounds in every experiment, doubling from 64 filters after each round and reversing for the decode. The fA(·) and fP(·) are always identical, with 3 rounds of aggressive convolution followed by pooling, to minimize compute costs and shrink the representation, followed by two fully connected hidden layers.
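The following is a minimal PyTorch-style sketch of one step of Algorithm 1; `f_w`, `f_p`, `f_a`, `bind`, `unbind`, and `sample_secret` are assumed stand-ins for the backbone, the two heads, and the 2D-HRR helpers, and only the gradient-reversal mechanics are shown in full.

```python
import torch

class ReverseGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()  # multiply the incoming gradient by -1

def train_step(f_w, f_p, f_a, bind, unbind, sample_secret,
               x, y, loss_fn, optimizer):
    s = sample_secret(x.shape)           # fresh secret for every prediction
    x_hat = bind(x, s)                   # obfuscated input
    r = f_w(x_hat)                       # run by the third party at deployment
    y_p = f_p(unbind(r, s))              # local prediction using the secret
    y_a = f_a(ReverseGrad.apply(r))      # adversary head behind gradient reversal
    loss = loss_fn(y_p, y) + loss_fn(y_a, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The gradient reversal is the one non-standard piece: in the forward pass it is the identity, while in the backward pass it flips the sign of the gradient flowing from fA(·) into fW(·), which is what pushes the backbone to discard information usable without the secret.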
Mini-ImageNet receives a fourth round of pooling due to its larger resolution. All network details and code can be found in Appendix A. We further perform extensive ablation studies in subsection D.1, looking at different binding operations (HRR without projection, 1D HRR, 1D HRR with Hilbert curves, and the vector-derived transformation binding (VTB)) and network designs (Residual style), which show our design of U-Net with 2D HRRs is critical to obtaining high predictive accuracy.

While our results are heuristic for the deep neural networks, we provide theoretical evidence for our approach by analyzing the linear case. Given an adversary who has all n bound inputs x̂_i with the true labels y_i, the problem is likely not linearly learnable due to an O(n) Rademacher complexity, as we show in Theorem 4.1.

Theorem 4.1. Learning w⊤(x_i ⊛ s_i) without the secrets s_i has a non-trivial Rademacher complexity of O(n).

Proof. The CSPS Rademacher model gives (1/n) E_σ [ sup_{w ∈ R^d, ‖w‖₂ ≤ 1} Σ_{i=1}^n σ_i w⊤(x_i ⊛ s_i) ]. The binding operation with a vector s_i is equivalent to a matrix-vector product with a corresponding circulant matrix S_i^C. Each w⊤S_i^C can be written as an independent random rotation, leading to n independent w̃_i terms, allowing the supremum to move inside the summation to give (1/n) E_σ [ Σ_{i=1}^n sup_{w̃_i ∈ R^d, ‖w̃_i‖₂ ≤ 1} σ_i w̃_i⊤ x_i ]. Applying the result for n independent linear models (Shalev-Shwartz & Ben-David, 2021), each trained on one point, gives the final complexity Σ_{i=1}^n ‖x_i‖₂ ≤ n max_i ‖x_i‖₂.

5. Experiments & Results

We do not argue that CSPS is any true form of encryption, only that it is empirically effective at hiding the nature of inputs and outputs sent to an untrusted party. We demonstrate this through a series of experiments showing that: 1) compared to a network with the same design but without the HRR binding/unbinding of the secret s, our approach loses some accuracy but is more accurate than prior approaches; 2) the loss of accuracy can be largely mitigated by averaging the results of ≤10 queries; 3) our approach is robust to adversaries using unsupervised learning to try to infer class information; 4) our approach is still robust to unrealistically strong adversaries that know the training data and classes, who obtain only 1.5−4.7× random-guessing accuracy; and 5) our approach is up to 290× faster than existing provable methods. We also perform an extensive ablation study of alternative designs, showing that our approach performs considerably better than the alternatives.

Before our results, we briefly review the datasets and training details. As the proposed method performs image classification, various well-known image classification datasets are used for the experiments. These datasets are diverse in shape, color, channels, contents, and the number of classes. In total, 5 image classification datasets are utilized: MNIST, SVHN, CIFAR-10, CIFAR-100, and Mini-ImageNet. Dataset details, training times, and data augmentation can be found in Appendix B.

5.1. Accuracy Results

Our results focus on Top-1 classification accuracy for all datasets, and Top-5 accuracy for the datasets with 100 classes.
We start by demonstrating the accuracy of our approach in Table 1, where "Base" indicates a network with the same total architecture (including the U-Net backbone), but without any of the binding/unbinding of secrets or the gradient reversal of the adversarial network. This shows that 1) our method can scale to Mini-ImageNet, a result not previously possible (Mishra et al., 2020), and 2) there is some cost that our secret binding incurs on the accuracy of the result.

Table 1. Accuracies of the Base model and of the model with secret binding and unbinding using improved 2D HRR.

| Dataset | Model | Top-1 | Top-5 |
|---|---|---|---|
| MNIST 28 × 28 | Base | 98.80 | – |
| | CSPS | 98.51 | – |
| SVHN 32 × 32 | Base | 93.76 | – |
| | CSPS | 88.44 | – |
| CIFAR-10 32 × 32 | Base | 83.57 | – |
| | CSPS | 78.21 | – |
| CIFAR-100 32 × 32 | Base | 62.59 | 86.99 |
| | CSPS | 48.84 | 75.82 |
| Mini-ImageNet 84 × 84 | Base | 55.73 | 80.55 |
| | CSPS | 40.99 | 66.99 |

The results in Table 1 are for a single attempt at the prediction process, and noise is introduced by the randomly selected secret s. We can average out this noise by sending k inputs x ⊛ s_1, x ⊛ s_2, ..., x ⊛ s_k to be classified and averaging the resulting k predictions. This yields the result given in Figure 4, showing that k ≤ 10 is sufficient to almost completely eliminate the accuracy drop. Additional discussion of this result is in Appendix C. We note that the lower accuracy numbers compared to more modern networks on these problems come from the difficulty of learning with random s vectors bound to the input. For example, our CIFAR-10 training accuracy is 86.17%, which is close to the test accuracy; for this reason, overfitting does not appear to be a culprit in the results. The difficulty the trained network has in working with random s vectors also provides intuition for its success as a defense: an attacker with less access should have even more difficulty handling the HRR vectors, which inhibits their success.

[Figure 4 plot: Top-1 accuracy vs. number of repeated secrets k (0-10) for MNIST, SVHN, CIFAR-10, CIFAR-100, and Mini-ImageNet, reaching 99.27%, 92.67%, 84.44%, 59.75%, and 50.50%, respectively.]

Figure 4. Accuracy (y-axis) of CSPS after averaging k predictions (x-axis), which almost fully restores the accuracy lost due to the secret binding/unbinding.

5.2. Run-time Results

To show the speed advantages of CSPS, we perform a comparison with HE that is unrealistically favorable to HE. HE imposes significant design constraints on a neural network, and thus we could not replicate our "Base" architecture with HE libraries like (Benaissa et al., 2021).

Table 2. Time to perform prediction on each dataset in its entirety, where the Homomorphic Encryption alternative is a single CNN layer and unrealistically small, to minimize its runtime at the cost of all predictive accuracy.

| Dataset | Our CSPS | HE Est. |
|---|---|---|
| MNIST | 4.56 Seconds | 2 Hours 46 Minutes |
| SVHN | 12.44 Seconds | 55 Hours 32 Minutes |
| CIFAR-10 | 7.58 Seconds | 21 Hours 20 Minutes |
| CIFAR-100 | 9.07 Seconds | 43 Hours 53 Minutes |
| Mini-ImageNet | 28.37 Seconds | Timeout |

Table 3. Amount of computation performed by the local user and the remote third party.

| Dataset | Remote % | Local % |
|---|---|---|
| MNIST | 74.24 | 25.76 |
| SVHN | 65.06 | 34.94 |
| CIFAR-10 | 66.08 | 33.92 |
| CIFAR-100 | 66.78 | 33.22 |
| Mini-ImageNet | 74.42 | 25.58 |
Instead, we compare against an HE network with a single convolutional layer, followed by aggressive pooling and a fully connected layer, making the model extremely small, unable to learn with any predictive accuracy, and unusable, with performance close to random guessing (except on MNIST). The results are in Table 2, where Mini-ImageNet failed to make a single prediction in under 24 hours. Even with these settings overly idealized for HE, it is still 290× slower than our CSPS. Because CSPS uses standard deep learning code, there is no extraneous compute overhead for arbitrary-precision math or multiple rounds of network communication. Thus we can look at the amount of compute saved by offloading to a remote party in Table 3. We see that at least 65% of the compute can be offloaded, netting a 2.9−3.5× reduction in cost. This cost saving can be important for low-power, battery-constrained, or compute-constrained devices. The fastest prior work by (Mishra et al., 2020) requires 60 MB of extra communication per prediction on CIFAR-10 (18,000× larger than the image being worked on; the entire corpus is only 200 MB) and is reported to be 5019× slower than our approach. To the best of our knowledge, CSPS is thus the only method that can realize a real-world resource reduction.

5.3. Realistic Adversary

Following Biggio et al. (2014), we specify a realistic adversary that seeks to infer the nature of our model's outputs and class distribution. Because the output shape r ∈ R^(W×H×D) has no relationship with the number of classes, they must attempt to use some form of unsupervised clustering to identify patterns within the predictions. We have applied several diverse clustering algorithms, namely K-means (Arthur & Vassilvitskii, 2006; Raff, 2021), Spectral (Ng et al., 2002), Gaussian Mixture Model (GMM), Birch (Zhang et al., 1996), and HDBSCAN (Malzer & Baum, 2020), to the outputs r = fW(x̂) that the adversary has access to. If they are able to perform clustering with greater than random chance, they may be able to extract information about how our model works. We pessimistically assume the adversary knows the exact number of clusters k that they should be looking for. Thus we use the Adjusted Rand-Index (ARI) to score how well the clusters perform with respect to the true class labels (Vinh et al., 2010). A near-0% ARI indicates there is no information to be extracted, and thus that our obfuscation has performed well. We also consider, for completeness, the case where the adversary clusters on the bound inputs they receive, x̂. We note this is a poor attack avenue in realistic settings, because multiple classes may exist in any given set of input images, and class information is only leaked by the network fW(·), not the inputs.
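A minimal scikit-learn sketch of this clustering attack follows; the arrays are random stand-ins for the flattened outputs r and the true labels, not data from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def clustering_attack(r_outputs, true_labels, k):
    """Cluster flattened network outputs with K-means and score with ARI."""
    preds = KMeans(n_clusters=k, n_init=10).fit_predict(r_outputs)
    return adjusted_rand_score(true_labels, preds)  # near 0 => nothing leaked

r_outputs = np.random.randn(1000, 64)               # stand-in for flattened r vectors
true_labels = np.random.randint(0, 10, size=1000)   # stand-in class labels
print(f"ARI: {clustering_attack(r_outputs, true_labels, k=10):.4f}")
```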
We note \u02c6 x is larger than r, resulting in Spectral clustering timing out after several days of running. Overall the results clearly demonstrate that minimal amount of label information is leaked by the model. Figure 5 shows visually how CSPS is able to achieve this result. The result vectors r that the untrusted party has access to are plotted using UMAP (McInnes et al., 2018). Because fA(\u00b7) is trained with gradient reversal, fW (\u00b7) has learned a representation r that requires the secret s to extract the meaning from its representation. Thus the result is an embedding space that points from the same class are randomly dispersed and intermixed. When it is clustered the clusters do not correspond to the true class distributions, as shown on the right. Additional visualizations showing how the cluster labels do not correlate with class labels in Appendix D. Table 4. Clustering results of the adversary attempting to discover class information directly from the main network output r (top) and the bound image inputs \u02c6 x (bottom). All numbers are percentages, and the Adjusted Rand-Index is \u22641.5% for all cases. Since ARI accounts for random-chance clustering, the unrealistic adversary is not able to meaningful discern class information. MNIST SVHN CIFAR 10 CIFAR 100 Mini ImgNet K-Means 1.28 0.06 0.21 0.03 0.08 Spectral 0.01 0.01 0.00 0.00 0.02 GMM 1.28 0.06 0.17 0.04 0.09 Birch 1.51 0.03 0.13 0.05 0.07 HDBSCAN 0.00 0.00 0.00 0.00 0.00 K-Means -0.02 -0.01 0.18 0.54 0.42 GMM 0.01 0.00 0.09 0.61 0.44 Birch 0.20 0.00 0.14 0.45 0.35 HDBSCAN 0.00 -0.24 1.23 0.01 0.02 (a) True Class Labels (b) Birch Cluster Labels Figure 5. UMAP embeddings of output r of fW (\u00b7) (left) and the \u201cbest\u201d clustering (right). Without the secret s the class labels appear random (left), and clustering detects spurious density patterns with no correlation to the true labels. 5.4. Overly Strong Adversary An unrealistically powerful adversary would have access to the entire training set, the class labels, know the procedure of binding/unbinding secrets, and be able to train their own model that predicts the class label from the intermediate result r (i.e., knows everything but the secrets si). This is in fact what the adversarial network fA(\u00b7) performs, and represents the worst possible case scenario for the adversary\u2019s strength: that they can train their own model to ignore the secret s and extract the true labels. We thus perform this test by training a new model on the ground-truth pairs between ri and the class label yi, with the results shown in Table 5. Note an adversary of this strength already knows what the predictive task and type of data is, which is what CSPS is designed to hide. Success of CSPS at this strength shows ef\ufb01caciousness from task level to individual level protection beyond our intended goal, but also provides even greater evidence of task level protection. \fConnectionist Symbolic Pseudo Secrets Table 5. Accuracies of the Adversarial Network (lower is better) where the secret to unbind the output of the U-Net is unknown. Dataset Top-1 Accuracy (%) Top-5 Accuracy (%) MNIST 19.72 \u2013 SVHN 21.13 \u2013 CIFAR-10 12.91 \u2013 CIFAR-100 2.66 10.33 Mini-ImageNet 4.68 15.01 In this worst case situation, the adversary\u2019s predictions are little better than random-guessing performance. For MNIST, SVHN, and CIFAR-10 that would be 10%, with SVHN having the best results at just 2.1\u00d7 better than random-guessing. 
For CIFAR-100 and Mini-ImageNet, random guessing is 1%, and we see higher ratios of 2.6× and 4.7×, but the total accuracy is still far below the 60% and 51% obtainable by knowing the secret s. This shows that our approach is highly effective at obscuring the information from the untrusted party, forcing them near random-guessing performance even in unrealistically powerful settings. This also reinforces that our approach is not real encryption and should not be used when provable security is a requirement. Our results do indicate strong empirical security, though, and ours is the only method that is practical from a runtime perspective. While multiple classifications of the same image with different secrets improve accuracy for the user, the same cannot be said for the adversary. Figure 6 shows that attack success improves by only a minor amount, and only on MNIST, for a network trained to take in k pairs of an image bound with different secrets.

[Figure 6 plot: Top-1 accuracy vs. number of repeated secrets k (0-10) for the adversary on MNIST, SVHN, CIFAR-10, CIFAR-100, and Mini-ImageNet, plateauing at 25.58%, 21.51%, 15.71%, 2.00%, and 2.63%, respectively.]

Figure 6. Accuracy (y-axis) of the unrealistic adversary after averaging k predictions (x-axis) of the secret binding/unbinding.

In the most extreme scenario, the adversary would have access to the bound images along with the class labels, and could train a network to learn class labels directly from the bound images. The results of this experiment are reported in Table 6. Even though this unrealistic adversary seemingly performed better with access to the correct class labels (note that linear models can get 92% MNIST accuracy), without the correct secret to unbind the image it still falls behind our CSPS method, providing a strong level of individual protection.

Table 6. Accuracy of predictions on the inputs, comparing the defender using CSPS and the unrealistic adversary attacking the inputs.

| Dataset | Our CSPS | Unrealistic Adversary |
|---|---|---|
| MNIST | 98.51 % | 81.23 % |
| SVHN | 88.44 % | 39.61 % |
| CIFAR-10 | 78.21 % | 43.40 % |
| CIFAR-100 | 48.84 % | 16.58 % |
| Mini-ImageNet | 40.99 % | 16.00 % |

5.5. Model Inversion Adversary

As our final attack, we use the Frechet Inception Distance (FID) (Heusel et al., 2017) to design an inversion attack. Given the bound input x̂ = x ⊛ s, the adversary has their own copy of the training data with which to compute the FID score of x̂ ⊛ ŝ†. The adversary can then optimize their copy of ŝ to try to find a secret that results in a realistic-looking image. We remind the reader that without CSPS, the adversary intrinsically receives the true input x whenever a prediction is made, so no comparison to a baseline is possible; our goal is purely to see if an inversion strategy yields cracks in the effectiveness of CSPS.

[Figure 7 panels: (a)-(e) pairs of original and generated images.]

Figure 7. Model inversion attack by the adversary using projected gradient descent to unbind the bound images, given samples of the original images. Images are shown in pairs, with the original image on the left and the generated image on the right. Results for MNIST (a), SVHN (b), CIFAR-10 (c), CIFAR-100 (d), and Mini-ImageNet (e) all confirm the adversary cannot extract the nature of the bound inputs, even when optimizing for visual realism to extract the secret.

Examples of the attack are presented in Figure 7, showing that inverting the original images is highly challenging.
This attack involves the adversary having the true training data, \fConnectionist Symbolic Pseudo Secrets knowing the procedure to extract the input, and using gradient descent to attempt to \ufb01nd a secret that maximizes the apparent realism of the result via FID scores. A second inversion strategy assumes the adversary has examples of secrets s and encoded images r, and attempts to learn an auto-encoder that minimizes \u2225r \u2212s\u22252 2, to directly predict the secret from a bound image. We \ufb01nd that this strategy also fails to provide meaningful results, as demonstrated in Figure 8. (a) (b) (c) (d) (e) Figure 8. Inversion attack using a trained auto-encoder to directly predict the secret s. Results for MNIST (a), SVHN (b), CIFAR10 (c), CIFAR-100 (d), and Mini-ImageNet (e) show that the adversary can not meaningfully predict the secret from new outputs r. 6." + }, + { + "url": "http://arxiv.org/abs/2101.02047v3", + "title": "Unified Learning Approach for Egocentric Hand Gesture Recognition and Fingertip Detection", + "abstract": "Head-mounted device-based human-computer interaction often requires\negocentric recognition of hand gestures and fingertips detection. In this\npaper, a unified approach of egocentric hand gesture recognition and fingertip\ndetection is introduced. The proposed algorithm uses a single convolutional\nneural network to predict the probabilities of finger class and positions of\nfingertips in one forward propagation. Instead of directly regressing the\npositions of fingertips from the fully connected layer, the ensemble of the\nposition of fingertips is regressed from the fully convolutional network.\nSubsequently, the ensemble average is taken to regress the final position of\nfingertips. Since the whole pipeline uses a single network, it is significantly\nfast in computation. Experimental results show that the proposed method\noutperforms the existing fingertip detection approaches including the Direct\nRegression and the Heatmap-based framework. The effectiveness of the proposed\nmethod is also shown in-the-wild scenario as well as in a use-case of virtual\nreality.", + "authors": "Mohammad Mahmudul Alam, Mohammad Tariqul Islam, S. M. Mahbubur Rahman", + "published": "2021-01-06", + "updated": "2021-07-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction In egocentric vision such as for virtual reality (VR), hand plays an instrumental role as a medium of interaction [1]. The gesture of a hand and the location of its \ufb01ngertips are essential information for a computer to understand the state of the interaction medium [2]. Additionally, in VR environments, the recognition of hand gestures [3], and detection of \ufb01ngertips [4] are es\u2217Corresponding author Email addresses: mdmahmudulalam007@umbc.edu (Mohammad Mahmudul Alam), mtislam@princeton.edu (Mohammad Tariqul Islam), mahbubur@eee.buet.ac.bd (S. M. Mahbubur Rahman) \fsential to interact between the virtual world and the real world. Existing hand gesture recognition and \ufb01ngertip detection approaches can be broadly classi\ufb01ed into two categories traditional image processing and current deep learning-based approaches. The early image processing approach relies mostly on the background segmentation and the shape and color of hand in the so-called handcrafted algorithms. Due to these dependencies, these methods often tend to fail in the presence of complex background, illumination e\ufb00ects, and in the variation of size and color of a person [5]. 
On the contrary, the deep learning approach using convolutional neural networks (CNNs) has shown much better performance in these scenarios due to its capability to extract relevant features through learning algorithms. Since a given egocentric hand gesture has a given number of visible \ufb01ngertips, traditional direct regression-based deep learning algorithms need to recognize hand gestures \ufb01rst, and afterward, they use corresponding trained \ufb01ngertip detection model to detect the position of the \ufb01ngertips [6]. The problem arises since the number of visible \ufb01ngers in a gesture can be variable but the number of outputs of a CNN must be \ufb01xed. Therefore, these algorithms require training di\ufb00erent \ufb01ngertip detection models for di\ufb00erent hand gestures [7]. In this paper, we address this issue by proposing a uni\ufb01ed approach to predict both the probabilistic output of the egocentric gesture of \ufb01ngers and the positional output of all the \ufb01ngertips using one forward propagation of a CNN. In the probabilistic output of gesture, the high probability indicates the existence of a visible \ufb01nger while the low probability indicates the hidden \ufb01nger. In general, the visible and hidden \ufb01ngers are represented as labels \u20181\u2019 and \u20180\u2019, respectively. Hence, each gesture of hand can be recognized by the unique sequence of binary numbers by taking into account the probabilistic information of the \ufb01ngers. Moreover, the proposed method estimates the coordinate position of \ufb01ngertips by averaging the regressed ensemble of \ufb01ngertip coordinates using a fully convolutional network (FCN), instead of using conventional direct regression using a fully connected (FC) layer. Thus, the estimation of the probability of \ufb01ngers in a gesture and their 2 \frelative sequence, and accurate positional information of \ufb01ngertips make the overall hand gesture recognition and \ufb01ngertip detection algorithm highly robust and reliable. Also, it is less likely to predict false positives and false negatives as compared to the existing direct regression [8] and Heatmap-based [9] frameworks. In particular, the proposed detection method results in signi\ufb01cantly less pixel error as compared to the direct regression approach where pixel coordinates are directly regressed from an FC layer of a learning model. Besides, the proposed approach provides less localization error when compared to the Heatmap-based framework. In the following subsections, a literature review of previous works is presented and then the scope of analysis is given. Finally, speci\ufb01c contributions of this work are listed. 1.1. Related Works Related works not only cover the egocentric hand gesture recognition, but also the generalized hand gesture recognition, because of the overlapping nature of the existing methodologies. For a detailed review of the \ufb01eld, the authors would like to suggest reading the survey paper by Bandini and Zari\ufb00a [1]. Hand Gesture recognition and \ufb01ngertip detection can be categorized into three different groups. The \ufb01rst group of works is concerned about gesture recognition. The second group of works is concerned with the detection of \ufb01ngertips and the third group focuses on both gesture recognition and \ufb01ngertip detection. The works on these groups are discussed in the following subsections. 1.1.1. 
Gesture Recognition Hand gestures are mainly di\ufb00erent combinations of \ufb01ngers producing di\ufb00erent shapes of a hand. Thus, the primary focus of gesture recognition methods that use image processing is shape matching or measuring dissimilarity among hand shapes. For instance, Ren et al. [10] presented a part-based gesture recognition system that uses dissimilarity measure and template matching for an HCI application of arithmetic computation by gesture command. Discriminative 2D Zernike moments are also used for the recognition of static hand gestures of the ASL [11]. In [12], CNN 3 \fis used for hand gesture recognition in an HCI system, wherein the gesture is utilized to trigger mouse and keyboard events and to control a simulated robot. Lin et al. [13] proposed that the background of a hand can be segmented \ufb01rst by using the Gaussian mixture model (GMM) and then the binarized image can be feed to a CNN classi\ufb01er for learning instead of directly using the captured RGB image for hand gesture recognition. Di\ufb00erent architectures of neural networks are applied for hand gesture recognition. Koller et al. [14] embedded a CNN within an iterative expectation-maximization (EM) algorithm for the classi\ufb01cation of hand shapes particularly in the case of continuous and weakly labeled data. Nunez et al. [15] reported a method that combines the CNN and the long short-term memory (LSTM) network for skeleton-based temporal 3D hand gesture recognition. Xu et al. [16] employed egocentric depth images for recognizing hand gesture or action. 1.1.2. Fingertip Detection Image processing-based \ufb01ngertip detection algorithms generally use background segmentation, contour analysis, and convex envelope techniques. Such a system is presented by Nguyen et al. [17] where they \ufb01rst use a CNN-based hand detector, and then apply thresholding for hand segmentation in the detected region, and \ufb01nally use the convex hull technique for \ufb01ngertip detection. Deep learning-based \ufb01ngertip detection mostly uses direct regression to predict the coordinate position of \ufb01ngertips from the \ufb01nal FC layer of the CNN. However, Alamsyah et al. [18] use an object detection algorithm by employing the region-based CNN (R-CNN) for predicting \ufb01ngertips with an assumption that each \ufb01ngertip is a class independent object. Huang et al. [6] report a two-stage cascaded CNN-based direct regression for joint detection of \ufb01ngertip and \ufb01nger for a given hand gesture in egocentric vision. Similarly, Liu et al. [7] use a bi-level cascaded CNN for detection of \ufb01ngertips in a predetermined gesture in the egocentric videos. In the same vein, Huang et al. [19] use two-stage CNN to detect \ufb01ngertips from a hand image for an application of air writing wherein a \ufb01ngertip acts like a pen. Jain et al. [20] report the detection of only the index \ufb01ngertip using a 4 \fdirect regression approach for a mixed-reality (MR) application in which the \ufb01ngertip functions as a gestural interface for smartphones or head-mounted devices. Wetzler et al. [21] mainly focus on CNN-based \ufb01ngertip detection using a Kinect camera. This method uses a computationally extensive global orientation regression approach and an in-plane derotation scheme of depth images to predict the coordinate of \ufb01ngertips. 1.1.3. 
1.1.3. Gesture Recognition and Fingertip Detection

An algorithm that detects a variable number of visible fingertips in a gesture implicitly recognizes the gesture as well. For example, Prakash et al. [22] use a convex hull-based algorithm for detecting a variable number of visible fingertips, and hence recognize the gesture concurrently, for a limited HCI application. In contrast, Lai et al. [23] use a two-step method for gesture recognition in a similar application setting: first, fingertips are detected using discrete curve evolution, and then the gesture is recognized by partitioning the evolved curves detected from the fingertips. Similarly, Meng et al. [24] approximate the contours and convexity defects to find the coordinate positions of fingertips, and then recognize the gesture using features such as the number of fingers, the Hu moments of the region bounded by the contour, and the compactness and convexity of the detected contour. Lee et al. [25] estimate the scale-invariant angle between the fingers to determine the number of visible fingertips; afterward, fingertip gestures are recognized using a contour analysis of the fingers. Nguyen et al. [26] use a deep learning-based approach where a modified multi-task segmentation network is employed both for hand segmentation and for the detection of a variable number of fingertips. Wu et al. [9] represent the pixels of each fingertip as samples of a 2D Gaussian distribution in the output tensor of a Heatmap-based FCN in egocentric settings; by applying a suitable threshold, only the visible fingertips are detected, which determines the gesture at the same time.

1.2. Scope of Analysis

The existing literature on egocentric hand gesture recognition and fingertip detection uses both image processing and deep learning-based approaches to confront the challenges. However, the image processing-based approaches depend on background, hand shape, and color, and thus tend to fail in complex and diverse scenarios. Moreover, the approaches that use the convex hull technique for gesture recognition and fingertip detection have intrinsic disadvantages. For instance, although they can recognize the gesture and detect fingertips, they cannot classify fingers and thus cannot report which fingertips have been detected. This prevents these methods from detecting correct gestures with positional information of the fingertips. Consequently, we argue that deep learning-based detection will be more robust in diverse environments and for finger classification. Nevertheless, deep learning-based direct regression approaches [6] regress the fingertips directly in a predetermined gesture, so there remains scope for identifying hand gestures and finding fingertips concurrently. The direct regression approaches are simple, easy to implement, and require no post-processing. However, the CNN-based standard direct regression approach incurs more pixel error than Heatmap-based methods. It is therefore worthwhile to devise a new direct regression approach that results in less pixel error than the Heatmap-based solution at a slightly increased post-processing cost. Besides, Heatmap- [9] and segmentation network-based [26] approaches use a higher-order (3rd) tensor representation, which introduces complexity during post-processing.
Hence, a unified gesture recognition and fingertip detection algorithm with a lower-order (1st and 2nd) tensor representation will reduce the post-processing complexity. Therefore, based on the motivations stated above, the development of a CNN-based unified egocentric hand gesture recognition and fingertip detection algorithm is worth investigating.

1.3. Specific Contributions

In this paper, a CNN-based unified egocentric hand gesture recognition and fingertip detection algorithm is proposed for many potential applications in HCI. The specific contributions of the paper are as follows:

• A unified egocentric hand gesture recognition and fingertip detection algorithm using a lower-order representation with a lower level of post-processing complexity is proposed.
• A new direct regression approach is introduced in which an ensemble of fingertip positions is directly regressed from an FCN and the ensemble average is then taken as the final position of the fingertips.
• A higher level of accuracy in classification and a lower level of localization error in regression, compared to the well-known direct regression and Heatmap-based frameworks, is achieved through experimentation.

The rest of the paper is organized in the following order. In Section 2, the proposed method is presented in detail. Section 3 includes the experiments and results along with a comparison with the existing methods and an ablation study. Section 4 shows the performance of the algorithm on real-life images and a use case of the method in a VR environment. Finally, Section 5 provides concluding remarks.

2. Proposed Method

The proposed method is a CNN-based unified egocentric hand gesture recognition and fingertip detection algorithm that combines the classification of gestures and the regression of fingertips. Using a single CNN, the probabilistic output for each of the fingers is predicted and the positional output of the fingertips is regressed in one forward propagation of the network. In the following subsections, first the unified detection algorithm is proposed, then the CNN architecture implementing the algorithm and the fingertip detection method are explained. Finally, the optimization of the network is described.

2.1. Unified Detection

We unify the classification and regression into a single CNN using a lower-order binary representation. Hand gestures are combinations of different visible fingers, where the total number of fingers in a hand, $N$ ($1 \leq N \leq 5$), is fixed. However, in a specific gesture, the number of visible fingers $l$ ($l \in \{1, 2, \cdots, N\}$) is variable. Thus, to locate the fingertips of a specific gesture, the number of x- and y-coordinates to be regressed from a CNN is $2l$. As the number of outputs of a CNN must be fixed while $l$ is variable, we address this issue by predicting a probabilistic output of length $N$ and regressing a positional output of length $2N$ from a single CNN. The probabilistic output is the binary representation of each finger, where '1' corresponds to a visible finger and '0' corresponds to a hidden finger. Consequently, each gesture generates a unique sequence of binary numbers from which the gesture can be recognized.
Concurrently, as the binary sequence represents the visibility of fingers in a gesture, the positional output for the fingertips of hidden fingers can be set as don't-care and ignored. Suppose the probabilistic output of the CNN of length $N$ is $(p_1, p_2, \cdots, p_N)$ and the positional coordinate output of length $2N$ is $((x_1, y_1), (x_2, y_2), \cdots, (x_N, y_N))$; then the final output will be $(p_1 \times (x_1, y_1), p_2 \times (x_2, y_2), \cdots, p_N \times (x_N, y_N))$. In the final output, any $(0, 0)$ coordinate will be considered a hidden finger and ignored. If a $(0, 0)$ coordinate is itself a plausible fingertip position, the probabilistic output can be further processed as $(2p_n - 1)$, where $n \in \{1, 2, \cdots, N\}$, to change the output range from $(0, 1)$ to $(-1, 1)$, in which case only negative coordinates are ignored.

Figure 1 shows two example images of hand gestures. In Example-1, only the thumb, index, and pinky fingers are visible while the middle and ring fingers are hidden, so the ground truth (GT) probabilistic binary output sequence for Example-1 is [1 1 0 0 1]. Likewise, for Example-2 the GT probabilistic binary output sequence is [0 1 1 1 1]. These are not only unique sequences for specific gestures but also indicate the visibility of each finger in a particular gesture, which helps determine which fingertip coordinates to ignore in the positional output of the CNN.

[Figure 1: Illustrative images of the two different hand gestures, (a) Example-1 and (b) Example-2.]

During prediction, the probabilistic output predicts the visibility of the fingers in a gesture: it gives a high confidence value for a visible finger and a low confidence value for a hidden finger. A confidence threshold $\tau$ ($0 < \tau < 1$) therefore needs to be set, above which a finger is considered visible and below which it is considered hidden. The criterion for detecting the visibility $p'_n$ of a finger from its confidence value $p_n$, where $n \in \{1, 2, \cdots, N\}$, is given by

$$p'_n = \begin{cases} 1, & p_n > \tau \\ 0, & p_n < \tau \end{cases} \quad (1)$$

For the positional output, we propose an ensemble of direct regressions from an FCN: an ensemble of fingertip coordinates is regressed first, and the ensemble average is then taken as the final positional output of length $2N$ (both x- and y-coordinates of $N$ fingers). Here, the ground truth ensemble of positional outputs is generated by stacking the same ground truth positional output $2N$ times for training purposes. The idea behind stacking the same output to create an ensemble is that each regression output of the FCN corresponds to different input features of the previous layer, whereas each output of an FC layer corresponds to all input features of the previous layer. As a result, the FCN outputs are more independent of any particular feature, and even if a few outputs deviate from the ground-truth value, the deviation is mitigated after taking the ensemble average. Therefore, a matrix $X$ of size $2N \times 2N$ is first regressed from the FCN, and then the column-wise ensemble average is taken as the final fingertip position output $\tilde{X}$, given by

$$\tilde{X} = \frac{1}{2N} \sum_{i=1}^{2N} X(:, i) \quad (2)$$
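The following NumPy sketch illustrates this post-processing under our own assumptions about array shapes (it is not the authors' released implementation): the ensemble matrix is averaged column-wise per Eq. (2), the probabilities are thresholded per Eq. (1), and hidden fingers are masked out of the final coordinates.

```python
import numpy as np

def postprocess(P, X_ens, tau=0.5):
    """P: (N,) per-finger confidences; X_ens: (2N, 2N) ensemble whose rows
    are copies of [x_t, y_t, ..., x_p, y_p] regressed by the FCN."""
    visibility = (P > tau).astype(float)        # Eq. (1)
    coords = X_ens.mean(axis=0).reshape(-1, 2)  # Eq. (2): column-wise average -> (N, 2)
    # Mask hidden fingers; their (0, 0) coordinates are "don't care".
    return visibility, coords * visibility[:, None]

P = np.array([0.9, 0.8, 0.1, 0.2, 0.7])         # the Example-1 pattern
X_ens = np.tile(np.arange(10, dtype=float) / 10, (10, 1))
vis, xy = postprocess(P, X_ens)
print(vis)  # [1. 1. 0. 0. 1.]
print(xy)   # masked (x, y) pairs for the five fingers
```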
2.2. CNN Architecture Design

For gesture recognition and fingertip detection, the relevant portion of the hand is cropped from the input image using a bounding box and resized to $(128 \times 128)$. The resized image is used as the input to the proposed network for learning. During detection, the real-time object detection algorithm 'you only look once' (YOLO) [27] is used for hand recognition in the first stage; the hand portion can then be cropped, resized, and fed to the proposed framework. For feature learning, the 16-layer visual geometry group (VGG) configuration given in [28] is employed. Its output is utilized to generate both the probabilistic output and the positional output. First, the output of the feature learning stage is flattened, and two FC layers are used back-to-back for better classification. Each of the FC layers is followed by a rectified linear unit (ReLU) activation function and a dropout layer. Finally, an FC layer is appended at the end to reduce the feature vector to the size of the desired probabilistic output $P$ of length $N$, given by

$$P = \begin{bmatrix} p_t & p_i & p_m & p_r & p_p \end{bmatrix}^\top \quad (3)$$

where $p_t$ through $p_p$ are the probabilities of the thumb ($t$), index ($i$), middle ($m$), ring ($r$), and pinky ($p$) fingers, respectively. A sigmoid activation function is applied to the output of the final FC layer to normalize the probabilistic output. Moreover, the output of the feature learning stage is up-sampled and followed by a ReLU activation function. Next, a convolution operation with a single filter is performed to further reduce the feature vector to the size of the desired ensemble of positional outputs $X$ of size $2N \times 2N$, given by

$$X = \begin{bmatrix} x_t & y_t & x_i & y_i & x_m & y_m & x_r & y_r & x_p & y_p \\ x_t & y_t & x_i & y_i & x_m & y_m & x_r & y_r & x_p & y_p \\ \vdots & & & & \ddots & & & & & \vdots \\ x_t & y_t & x_i & y_i & x_m & y_m & x_r & y_r & x_p & y_p \end{bmatrix} \quad (4)$$

where $x_f$ and $y_f$ ($f \in \{t, i, m, r, p\}$) stand for the coordinate positions of the fingertips from thumb to pinky successively. A linear activation function is applied in the final convolution operation. Finally, the column-wise ensemble average is taken as the final output of the fingertip positions. The overall system with the CNN architecture is presented in Figure 2. The activation functions and dropout layers are not shown in the figure for brevity.

[Figure 2: A block diagram of the unified gesture recognition and fingertip detection algorithm. A VGG-16 feature learning stage produces a (4×4×512) tensor that feeds two branches: a flatten/FC branch (8192 → 1024 → 1024 → 5×1 probabilistic output) and an up-sampling (12×12×512) plus single-filter convolution branch (10×10×1 positional output) followed by post-processing.]
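A minimal PyTorch sketch of this two-headed architecture, assuming the shapes described above (the authors' released implementation, linked in Section 3, may differ in framework and details):

```python
import torch
import torch.nn as nn
import torchvision

class UnifiedNet(nn.Module):
    """VGG-16 features -> (probabilistic head, positional ensemble head)."""
    def __init__(self, n_fingers=5):
        super().__init__()
        self.features = torchvision.models.vgg16(weights=None).features  # (B,512,4,4) for 128x128 input
        self.prob_head = nn.Sequential(              # flatten -> FC-1024 -> FC-1024 -> FC-5
            nn.Flatten(),
            nn.Linear(512 * 4 * 4, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, n_fingers), nn.Sigmoid(),
        )
        self.pos_head = nn.Sequential(               # upsample to 12x12 -> single 3x3 conv -> 10x10
            nn.Upsample(size=(12, 12)), nn.ReLU(),
            nn.Conv2d(512, 1, kernel_size=3),        # linear activation on the output
        )

    def forward(self, x):
        f = self.features(x)
        return self.prob_head(f), self.pos_head(f).squeeze(1)  # (B,5), (B,10,10)

net = UnifiedNet()
p, X = net(torch.randn(1, 3, 128, 128))
print(p.shape, X.shape)  # torch.Size([1, 5]) torch.Size([1, 10, 10])
```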
2.3. Optimization

In the proposed framework, the probabilistic output and the positional output need to be optimized independently at the same time, and thus two loss functions are defined. The probabilistic output predicts the binary sequence of '1' and '0' according to the visibility of the fingers; therefore, the following binary cross-entropy loss function is proposed to optimize the probabilistic output:

$$\mathcal{L}_1 = \frac{1}{NM} \sum_{j=1}^{M} \sum_{k=1}^{N} -\left\{ P(j,k) \log_e \hat{P}(j,k) + (1 - P(j,k)) \log_e (1 - \hat{P}(j,k)) \right\} \quad (5)$$

where $N$ and $M$ represent the length of the probabilistic output and the batch size, respectively. This loss function is the average of the loss over the batch. The positional output regresses the ensemble of fingertip coordinate positions, which is a matrix of size $(2N \times 2N)$. To optimize the positional output, the following mean squared error (MSE) loss function is proposed:

$$\mathcal{L}_2 = \frac{1}{4N^2 M} \sum_{j=1}^{M} \sum_{k=1}^{2N} \sum_{l=1}^{2N} \mathbb{1}_{\text{finger}} \left\{ X(j,k,l) - \hat{X}(j,k,l) \right\}^2 \quad (6)$$

where $\mathbb{1}_{\text{finger}}$ denotes the visibility of the finger and is used for masking. If any finger is hidden in the gesture, the network should not be penalized for that fingertip's regression; hence, using the mask, the fingertip detection loss for hidden fingers is eliminated. Finally, the total loss is the sum of the probabilistic and positional losses:

$$\mathcal{L} = \mathcal{L}_1 + \mathcal{L}_2 \quad (7)$$

To optimize both loss functions $\mathcal{L}_1$ and $\mathcal{L}_2$, the commonly used adaptive moment estimation (ADAM) optimizer is employed. This optimizer utilizes moving averages of both the first moment $m_k$ and the second moment $v_k$ of the gradient of the loss functions, given by [29]

$$m_k = \beta_1 m_{k-1} + (1 - \beta_1) \left( \frac{d(\mathcal{L}_q)_k}{dw_k} \right) \quad (8)$$

$$v_k = \beta_2 v_{k-1} + (1 - \beta_2) \left( \frac{d(\mathcal{L}_q)_k}{dw_k} \right)^2 \quad (9)$$

where $q \in \{1, 2\}$, and $\beta_1$ and $\beta_2$ ($0 < \beta_1, \beta_2 < 1$) are two hyper-parameters that control the decay rates of the moving averages, and $k$ stands for a particular iteration. Finally, the weight update of the model is given by

$$w_k = w_{k-1} - \frac{\eta \, m_k}{\sqrt{v_k} + \epsilon} \quad (10)$$

where $\eta$ ($\eta > 0$) is the learning rate and $\epsilon$ ($\epsilon > 0$) is an infinitesimal number used to avoid division by zero.
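A compact PyTorch sketch of the two losses in Eqs. (5)-(7) (our own rendering; variable names and the broadcast of the visibility mask over the stacked ensemble are assumptions):

```python
import torch
import torch.nn.functional as F

def unified_loss(P_hat, X_hat, P_gt, X_gt):
    """P_hat/P_gt: (M, N) finger probabilities; X_hat/X_gt: (M, 2N, 2N)
    ensembles built by stacking the GT coordinate row 2N times."""
    l1 = F.binary_cross_entropy(P_hat, P_gt)           # Eq. (5)
    # Eq. (6): mask hidden fingers so they contribute no positional loss.
    vis = P_gt.repeat_interleave(2, dim=1)             # (M, 2N): x and y share a finger's mask
    mask = vis.unsqueeze(1).expand_as(X_gt)            # broadcast over the 2N stacked rows
    l2 = (mask * (X_gt - X_hat) ** 2).mean()
    return l1 + l2                                     # Eq. (7)
```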
2.4. Detection

During detection, in the first stage the hand is detected using the YOLO object detection algorithm. Afterward, the detected hand portion of the image is cropped and resized to feed to the proposed network. The network predicts the probabilistic output of the fingers and regresses the ensemble of fingertip positions. The probabilistic output of the network produces a higher confidence value if a finger is visible and a lower confidence value if the finger is hidden in the gesture. To estimate a binary output sequence representing the array of visible fingers in the hand, a confidence threshold $\tau$ is set. Due to the equal probability of visibility or invisibility of the fingers, the confidence threshold $\tau$ is set to 50%. As the proposed network directly regresses the ensemble of fingertip positional outputs $X$, the column-wise ensemble average is computed as the final fingertip positional output $\tilde{X}$. The entire step-by-step detection process is presented in Algorithm 1.

Algorithm 1: Unified Egocentric Hand Gesture Recognition and Fingertip Detection
Input: image, models; Output: probability $P'$, position $\tilde{X}$
/* $(x_{tl}, y_{tl})$: top-left, $(x_{br}, y_{br})$: bottom-right coordinates of the hand bounding box */
 1: $(x_{tl}, y_{tl}), (x_{br}, y_{br}) \leftarrow$ yolo(image)
 2: cropped_image $\leftarrow$ image[$y_{tl} : y_{br},\ x_{tl} : x_{br}$]
 3: height, width $\leftarrow$ cropped_image.shape
 4: probability $P$, position $X$ $\leftarrow$ proposed_method(cropped_image)
/* post-processing */
 5: $p'_n \leftarrow 1$ if $p_n > \tau$ else $0$, for all $p_n \in P$
 6: $\tilde{X} \leftarrow \frac{1}{2N} \sum_{i=1}^{2N} X(:, i)$
 7: $n \leftarrow 0$ /* index into the probabilities */
 8: for $i = 1$ to $2N$ by 2 do
 9:   if $P'[n] == 1.0$ then /* transform coordinates back to the original image */
10:     $\tilde{X}[i] \leftarrow \tilde{X}[i] \times \text{width} + x_{tl}$
11:     $\tilde{X}[i+1] \leftarrow \tilde{X}[i+1] \times \text{height} + y_{tl}$
12:   $n \leftarrow n + 1$

3. Experiments and Results

Experiments are performed to validate the proposed unified egocentric hand gesture recognition and fingertip detection algorithm. This section first presents the characteristics of the dataset on which the experiments are carried out and a short description of the data augmentation applied during training. Afterward, the training and detection procedures for gesture recognition and fingertip detection are explained. Next, short descriptions of the comparison methods and performance metrics are provided. Finally, the performance of the proposed approach is reported and compared with the existing methods, both in terms of classification of hand gestures and regression of fingertips. All the training and testing code for the experiments, along with the pre-trained weights of the model, is publicly available. (Project: https://github.com/MahmudulAlam/Unified-Gesture-and-Fingertip-Detection)

3.1. Dataset

In this experiment, the SCUT-Ego-Gesture database [9] is employed, which contains eleven different datasets of single-hand gestures. Among these, eight are considered in the experimentation as they represent digit-type hand gestures. The eight datasets include 29,337 RGB hand images in egocentric vision, each with a resolution of 640 × 480. Each dataset is partitioned into test, validation, and training sets. First, for the test set, 10% of the images of each dataset are taken by randomly sampling one of every ten images. Next, for the validation set, 5% of the remaining images are used by randomly sampling one of every twenty images. Finally, the rest of the images are employed for the training set. The number of images used in the test, validation, and training sets of the different gesture classes is provided in Table 1.

Table 1: The number of images used in the test, validation, and training sets of the generic database

| Gesture Class | Test Set | Validation Set | Training Set | Total |
|---|---|---|---|---|
| SingleOne | 337 | 151 | 2886 | 3374 |
| SingleTwo | 376 | 169 | 3218 | 3763 |
| SingleThree | 376 | 169 | 3223 | 3768 |
| SingleFour | 376 | 169 | 3222 | 3767 |
| SingleFive | 375 | 169 | 3211 | 3755 |
| SingleSix | 375 | 169 | 3213 | 3757 |
| SingleSeven | 377 | 169 | 3227 | 3773 |
| SingleEight | 338 | 152 | 2890 | 3380 |
| Total | 2930 | 1317 | 25090 | 29337 |

[Figure 3: Visual examples of each of the eight gestures in the database, SingleOne through SingleEight, shown in (a) to (h).]

Figure 3 shows visual examples of hand gesture images of the different classes, where each gesture is constituted by a variable number of fingers.
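A small sketch of this split in the deterministic every-k-th style (our own code; the paper samples randomly, and the published image lists should be used for exact reproduction):

```python
def split_dataset(images):
    """Every 10th image -> test; every 20th of the remainder -> validation."""
    test = images[::10]                                        # ~10% for the test set
    remainder = [im for i, im in enumerate(images) if i % 10 != 0]
    val = remainder[::20]                                      # ~5% of the remainder for validation
    train = [im for i, im in enumerate(remainder) if i % 20 != 0]
    return train, val, test

train, val, test = split_dataset([f"img_{i:05d}.jpg" for i in range(3374)])
print(len(train), len(val), len(test))  # roughly matches Table 1's SingleOne row
```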
The list of the names of the images used for the test, validation, and training sets is made publicly available. (Dataset: https://github.com/MahmudulAlam/Unified-Gesture-and-Fingertip-Detection/tree/master/dataset)

3.2. Data Augmentation

To reduce the risk of overfitting, data augmentation is applied during training by including new training images artificially generated from the existing images of the datasets. In particular, an on-the-fly data augmentation process is used that generates new training images by applying random rotation, translation, shear transformation, illumination variation, scaling, cropping, additive Gaussian noise, and salt noise. The augmented images are generated randomly in each batch. As a result, the gesture recognition and fingertip detection model is learned from a large dataset, and the trained model is therefore expected to generalize well.

3.3. Training

To train the proposed gesture recognition and fingertip detection model, the relevant ground truth portion of the hand is cropped from the input image and resized to $(128 \times 128)$ using bilinear interpolation; this is the input to the CNN. The model predicts a probabilistic output vector $P$ of length 5 and regresses an ensemble positional output matrix $X$ of size $(10 \times 10 \times 1)$. To generate outputs of the desired size, the output tensor of the VGG-16 feature learning stage, of size $(4 \times 4 \times 512)$, is flattened to a vector of length 8192. The output length of the FC layers is chosen to be 1024 and the dropout rate to be 0.5. The final FC layer, with an output length of 5, generates the probabilistic output. To produce the ensemble of positional outputs of the fingertips, the output tensor of the feature learning stage is up-sampled three times to $(12 \times 12 \times 512)$. Next, this output is convolved with a single filter of size $(3 \times 3)$, which results in a matrix of the desired output size $(10 \times 10 \times 1)$. The proposed network is trained for a total of 300 epochs, with the learning rate lowered from $10^{-5}$ to $10^{-7}$ in a step-by-step process for better convergence. The parameters of the ADAM optimizer $\beta_1$, $\beta_2$, and $\epsilon$ are chosen to be 0.9, 0.999, and $10^{-10}$, respectively, with a batch size of 64.
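As a sketch of this training setup (PyTorch here, reusing the earlier `UnifiedNet` and `unified_loss` sketches; the step-schedule milestones below are our assumption, as the paper only states the 10^-5 to 10^-7 range):

```python
import torch

net = UnifiedNet()                                   # from the earlier sketch
opt = torch.optim.Adam(net.parameters(), lr=1e-5,
                       betas=(0.9, 0.999), eps=1e-10)
# Step-wise decay from 1e-5 toward 1e-7 over 300 epochs (boundaries assumed).
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[100, 200], gamma=0.1)

for epoch in range(300):
    for P_gt, X_gt, images in []:                    # placeholder for the batch-64 loader
        P_hat, X_hat = net(images)
        loss = unified_loss(P_hat, X_hat, P_gt, X_gt)
        opt.zero_grad(); loss.backward(); opt.step()
    sched.step()
```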
[Figure 4: Learning curves of the proposed unified gesture recognition and fingertip detection model — (a) convergence of the probabilistic loss $\mathcal{L}_1$, (b) convergence of the positional loss $\mathcal{L}_2$, and (c) the total loss $\mathcal{L}$, each for training and validation.]

Figure 4 shows the learning curves of the unified gesture recognition and fingertip detection model in terms of the loss functions for both the training and validation stages. Specifically, Figure 4(a) shows the convergence of the probabilistic loss function $\mathcal{L}_1$, Figure 4(b) shows the convergence of the positional loss function $\mathcal{L}_2$, and Figure 4(c) shows the learning curves in terms of the total loss $\mathcal{L}$, where the probabilistic and positional loss functions are combined. It can be seen from the learning curves that the proposed model is free from overfitting. During training, we lowered the learning rate after a few epochs and made changes to the augmentation, e.g., the amount of rotation, translation, illumination, and so on. As a result, there are sudden fluctuations in the loss curves, especially visible in Figure 4(a). In the case where a finger is hidden in the gesture, the positional loss is not penalized, and thus less fluctuation is observed in Figure 4(b). In fact, modifying the augmentation helped the model become more robust and generalized by learning from a diverse dataset. In other words, the fluctuations in the learning curves indicate that the method avoids the problem of overfitting.

3.4. Comparing Methods

The proposed method is compared with the existing direct regression approach [8] and the Heatmap-based gesture recognition and fingertip detection algorithm called 'you only look what you should see' (YOLSE) [9]. A brief description of these algorithms is provided here.

• YOLSE Approach: The YOLSE hand gesture recognition and fingertip detection algorithm was proposed by Wu et al. [9] in 2017. It is a Heatmap-based approach using a fully convolutional network that represents each fingertip as a 2D Gaussian distribution in the output tensor, where each layer of the tensor represents a specific finger. The algorithm predicts a tensor, and the peak value of each layer is then calculated. If the peak value exceeds a given threshold, the peak location is considered the position of a visible fingertip; if it falls below the threshold, that fingertip is considered hidden.

• Direct Regression Approach: Mishra et al. [8] proposed the direct regression-based hand gesture and fingertip detection algorithm in 2019. They employed the MobileNetV2 [30] architecture as a backbone and produced a linear output using global average pooling. From this output, they used three fully connected (FC) layers for gesture classification, finger identification, and estimation of finger position. This algorithm is referred to as the Direct Regression approach since the final positional outputs of the fingertips are regressed directly from FC layers.

3.5. Performance Metrics

The performance of the classification of egocentric hand gestures and that of the estimation of fingertip positions are evaluated separately. Classification performance is assessed in terms of four measures: accuracy, precision, recall, and F1 score. The higher the accuracy or F1 score, and the closer the precision or recall is to unity, the better the classification performance. In all of these evaluation metrics, unless otherwise stated, the confidence threshold is set to 50%. To evaluate the performance of the fingertip position estimation, the error in terms of the mean Euclidean distance between the ground truth pixel coordinates and the regressed pixel coordinates is calculated as

$$\overline{D_f - \hat{D}_f} = \frac{1}{S \langle P, 1 \rangle} \sum_{k=1}^{S} \sum_{j=1}^{\langle P, 1 \rangle} (p'_f)_{jk} \sqrt{ \left\{ (x_f)_{jk} - (\hat{x}_f)_{jk} \right\}^2 + \left\{ (y_f)_{jk} - (\hat{y}_f)_{jk} \right\}^2 } \quad (11)$$

where $f \in \{t, i, m, r, p\}$, $S$ stands for the total number of correctly recognized gestures in the test set of a particular class, and $\langle P, 1 \rangle$ is the total number of fingers in the gesture.
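A direct NumPy transcription of Eq. (11), with array shapes being our own assumption:

```python
import numpy as np

def mean_pixel_error(vis, gt, pred):
    """vis: (S, F) 0/1 visibility; gt, pred: (S, F, 2) fingertip pixel coords
    over the S correctly recognized test gestures of one class."""
    dist = np.linalg.norm(gt - pred, axis=-1)   # per-finger Euclidean distance
    return (vis * dist).sum() / vis.sum()       # Eq. (11): averaged over visible fingers
```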
3.6. Results

Table 2 shows the results of egocentric gesture recognition in terms of the accuracy, precision, recall, and F1 score of the compared methods. The overall performance in terms of the mean value of these metrics is also shown. Method names are prefixed by GT when no hand detector is included as preprocessing; instead, the ground truth bounding box is used to directly crop the relevant hand portion from the input image. Results for each method are also presented with the YOLO hand detector included in the first stage, in which case the method names are prefixed by YOLO.

Table 2: Performance of gesture classification of the compared methods in terms of accuracy, precision, recall, and F1 score

| Method | Metric | SingleOne | SingleTwo | SingleThree | SingleFour | SingleFive | SingleSix | SingleSeven | SingleEight | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| GT-YOLSE | Accuracy (%) | 96.72 | 98.16 | 98.46 | 97.99 | 98.87 | 99.08 | 98.74 | 96.86 | 98.11 |
| | Precision (%) | 85.13 | 97.63 | 96.88 | 97.89 | 100.00 | 98.33 | 97.22 | 93.31 | 95.80 |
| | Recall (%) | 86.65 | 87.77 | 90.96 | 86.17 | 91.20 | 94.40 | 92.84 | 78.40 | 88.55 |
| | F1 Score | 0.8588 | 0.9244 | 0.9383 | 0.9165 | 0.9540 | 0.9633 | 0.9498 | 0.8521 | 0.9196 |
| YOLO-YOLSE | Accuracy (%) | 97.00 | 98.16 | 98.46 | 98.19 | 99.01 | 99.04 | 98.70 | 97.00 | 98.20 |
| | Precision (%) | 86.94 | 96.26 | 96.62 | 98.50 | 100.00 | 98.87 | 97.75 | 93.71 | 96.08 |
| | Recall (%) | 86.94 | 89.10 | 91.22 | 87.23 | 92.27 | 93.60 | 92.04 | 79.29 | 88.96 |
| | F1 Score | 0.8694 | 0.9254 | 0.9384 | 0.9252 | 0.9598 | 0.9616 | 0.9481 | 0.8590 | 0.9234 |
| GT-Direct Regression | Accuracy (%) | 99.97 | 99.90 | 99.86 | 99.86 | 99.76 | 99.62 | 99.52 | 99.73 | 99.78 |
| | Precision (%) | 100.00 | 99.73 | 99.47 | 99.47 | 99.46 | 98.66 | 98.40 | 97.60 | 99.10 |
| | Recall (%) | 99.70 | 99.47 | 99.47 | 99.47 | 98.67 | 98.40 | 97.88 | 100.00 | 99.13 |
| | F1 Score | 0.9985 | 0.9960 | 0.9947 | 0.9947 | 0.9906 | 0.9853 | 0.9814 | 0.9879 | 0.9911 |
| YOLO-Direct Regression | Accuracy (%) | 99.69 | 99.93 | 99.93 | 99.90 | 99.86 | 99.93 | 99.90 | 99.59 | 99.84 |
| | Precision (%) | 97.95 | 99.47 | 99.73 | 99.47 | 100.00 | 99.73 | 99.73 | 98.80 | 99.36 |
| | Recall (%) | 99.41 | 100.00 | 99.73 | 99.73 | 98.93 | 99.73 | 99.47 | 97.63 | 99.33 |
| | F1 Score | 0.9867 | 0.9973 | 0.9973 | 0.9960 | 0.9946 | 0.9973 | 0.9960 | 0.9821 | 0.9934 |
| GT-Proposed Method | Accuracy (%) | 99.97 | 100.00 | 100.00 | 99.97 | 99.93 | 99.93 | 99.93 | 99.93 | 99.96 |
| | Precision (%) | 99.70 | 100.00 | 100.00 | 99.73 | 100.00 | 100.00 | 99.47 | 99.70 | 99.82 |
| | Recall (%) | 100.00 | 100.00 | 100.00 | 100.00 | 99.47 | 99.47 | 100.00 | 99.70 | 99.83 |
| | F1 Score | 0.9985 | 1.0000 | 1.0000 | 0.9987 | 0.9973 | 0.9973 | 0.9974 | 0.9970 | 0.9983 |
| YOLO-Proposed Method | Accuracy (%) | 99.90 | 100.00 | 100.00 | 99.93 | 99.90 | 99.93 | 99.90 | 99.90 | 99.93 |
| | Precision (%) | 99.12 | 100.00 | 100.00 | 99.47 | 100.00 | 100.00 | 99.21 | 100.00 | 99.72 |
| | Recall (%) | 100.00 | 100.00 | 100.00 | 100.00 | 99.20 | 99.47 | 100.00 | 99.11 | 99.72 |
| | F1 Score | 0.9956 | 1.0000 | 1.0000 | 0.9973 | 0.9960 | 0.9973 | 0.9960 | 0.9955 | 0.9972 |
It can be observed from Table 2 that the proposed method has outperformed the other gesture recognition methods and attained very high accuracy in all classes. In particular, the proposed method provides gesture recognition accuracy of at least 99.90% and an F1 score as high as 0.99. In estimating the position of the fingertips, the distance error between the ground truth coordinates and the regressed coordinates among the different classes is calculated. Table 3 shows the mean and standard deviation of the regression error in pixels (px) for the different methods.

Table 3: Performance of fingertip positional accuracy of the compared methods in terms of the mean pixel (px) error

| Method | SingleOne | SingleTwo | SingleThree | SingleFour | SingleFive | SingleSix | SingleSeven | SingleEight | Mean Error (px) |
|---|---|---|---|---|---|---|---|---|---|
| GT-YOLSE | 5.71 ± 15.29 | 4.16 ± 3.89 | 3.51 ± 1.92 | 3.95 ± 4.76 | 3.74 ± 1.61 | 3.59 ± 1.56 | 3.89 ± 1.66 | 5.22 ± 2.48 | 4.22 ± 4.15 |
| YOLO-YOLSE | 5.06 ± 9.53 | 4.31 ± 4.56 | 3.56 ± 2.20 | 3.60 ± 2.46 | 3.76 ± 1.65 | 3.62 ± 1.51 | 3.98 ± 2.68 | 5.14 ± 2.66 | 4.13 ± 3.41 |
| GT-Direct Regression | 7.98 ± 5.57 | 7.23 ± 3.80 | 6.64 ± 3.36 | 7.04 ± 3.22 | 6.68 ± 2.45 | 6.71 ± 3.10 | 7.47 ± 2.91 | 9.04 ± 4.34 | 7.35 ± 3.59 |
| YOLO-Direct Regression | 11.20 ± 9.13 | 7.89 ± 4.51 | 7.10 ± 3.52 | 7.69 ± 3.51 | 6.97 ± 2.55 | 7.90 ± 4.04 | 8.26 ± 3.64 | 10.71 ± 6.63 | 8.47 ± 4.69 |
| GT-Proposed Method | 4.51 ± 3.14 | 3.89 ± 1.91 | 3.62 ± 1.80 | 3.79 ± 1.89 | 3.63 ± 1.46 | 3.40 ± 1.48 | 3.64 ± 1.51 | 5.68 ± 3.51 | 4.02 ± 2.09 |
| YOLO-Proposed Method | 6.78 ± 7.37 | 4.23 ± 3.00 | 3.87 ± 2.05 | 4.31 ± 2.43 | 3.81 ± 1.64 | 4.29 ± 3.54 | 4.04 ± 2.03 | 7.37 ± 6.67 | 4.84 ± 3.59 |

It is seen from this table that the proposed fingertip regression approach achieves a better result in terms of the mean and standard deviation of the pixel error compared to the Direct Regression method, and a comparable performance to the YOLSE method. However, the superiority of the proposed method over the YOLSE method is clear when comparing on the GT hand image. Nevertheless, the proposed method with the YOLO hand detector achieves a mean pixel error of 4.84 px with a standard deviation of 3.59 px. In gesture classification, the Direct Regression approach shows competitive performance with the proposed one, but the mean accuracy, precision, and recall of gesture classification of the proposed method are 1.76%, 3.79%, and 12.10% higher than the YOLSE method, respectively. On the other hand, in the case of fingertip detection, the YOLSE method shows competitive performance with the proposed one. However, the mean and standard deviation of the detection error of the Proposed Method are 42.86% and 23.45% lower than those of the Direct Regression approach, respectively. Therefore, the Proposed Method is robust in both gesture classification and fingertip detection without any compromise.
Figure 5 shows the confusion matrices depicting the gesture classification performance of the YOLSE approach, the Direct Regression approach, and the Proposed Method, where each row represents the actual gesture class and each column represents the predicted gesture class. The figure illustrates that the proposed model has very little confusion in classifying gestures. Figure 6 shows examples of the visual output of the proposed gesture recognition and fingertip detection algorithm for each gesture class, where not only is each fingertip position detected but the type of hand gesture is also recognized by classifying each finger.

The experiments are performed on a computer with an Intel Core i7 CPU with 16 GB of memory and an NVIDIA RTX 2070 Super GPU with 8 GB of memory; some of the training is conducted using an NVIDIA Titan Xp GPU. The average forward propagation time of the proposed network is 21.99 ms, or 45 frames per second. Thus, the proposed method satisfies the requirements of real-time implementation. Moreover, a timing analysis of the proposed method and the comparing ones is presented in Table 4.

Table 4: Timing analysis of the Proposed Method in comparison with the other methods

| Method | Total Parameters | YOLO (ms) | Fingertip Detection (ms) | Postprocessing (µs) | Total (ms) |
|---|---|---|---|---|---|
| YOLSE | 2,781,669 | 24.00 | 21.82 | 115.25 | 45.94 |
| Direct Regression | 2,589,775 | 24.00 | 19.78 | 63.95 | 43.84 |
| Proposed Method | 24,163,654 | 24.00 | 21.99 | 88.44 | 46.08 |

[Figure 5, panels (a) YOLSE Approach and (b) Direct Regression Approach: confusion matrices over the eight gesture classes.]
[Figure 5, panel (c) Unified Detection Approach. Figure 5: Confusion matrices depicting the gesture classification performance of the experimental methods, shown in (a) to (c); here (1) to (8) represent the SingleOne to SingleEight gestures.]

[Figure 6: A visual representation of the outputs of the proposed gesture recognition and fingertip detection model, in which each fingertip (thumb, index, middle, ring, pinky) is detected and each finger classified, for SingleOne through SingleEight.]

Although the proposed method has far more parameters than the other methods, the total amount of time it requires remains almost the same.

3.7. Ablation Study

To unfold the full utility and comprehend the contributions of the proposed egocentric gesture recognition and fingertip detection algorithm, we experimented by alternating and removing different components of the system. The experiments are described as follows, and a sketch of the three output-head variants appears after this list:

(1) The proposed method predicts an ensemble of fingertip coordinates and subsequently takes the ensemble average to predict the final fingertip coordinates. In our first ablation study, we removed the ensemble averaging from the post-processing stage shown in Figure 2 and incorporated an averaging layer at the end of the positional output of the network. After the prediction of the ensemble of fingertip coordinates, a Global Average Pooling layer is appended to take the average, which is used as the final output of the fingertip coordinates.

(2) In our second ablation study, instead of taking the ensemble average of the fingertip coordinates, a random sample from the ensemble output is taken. That is, we randomly sample one of the ten outputs of the ensemble, and those randomly chosen fingertip coordinates are used as the final prediction. As we randomly sample one output from the ensemble, the post-processing stage is unnecessary here.

(3) In our third ablation study, we directly regressed the fingertip coordinates from the proposed network. In this experiment, we removed the up-sampling and convolution stages after the feature learning stage; the feature learning output is used to directly regress the fingertip coordinates using an FC layer with a sigmoid activation function. As we directly regress here, the post-processing stage is again unnecessary.

The detection error in pixels (px) of the fingertip coordinates for each gesture class in the aforementioned studies is presented in Table 5. In all cases the ground truth hand bounding box is utilized. The table shows the mean and standard deviation of the detection errors for each class as well as for the overall case.
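A schematic PyTorch sketch of the three positional-head variants described above (our own rendering of the descriptions, not the released code):

```python
import torch
import torch.nn as nn

def head_avg(ens):
    """(1) Averaging layer inside the network: global average over the
    2N stacked rows of the (B, 2N, 2N) ensemble."""
    return ens.mean(dim=1)                           # (B, 2N)

def head_random(ens):
    """(2) Randomly sample one of the 2N ensemble rows as the output."""
    idx = torch.randint(ens.shape[1], (1,)).item()
    return ens[:, idx, :]                            # (B, 2N)

class HeadDirect(nn.Module):
    """(3) Direct regression: an FC layer with sigmoid replacing the
    upsample + convolution ensemble head."""
    def __init__(self, in_features=512 * 4 * 4, n_out=10):
        super().__init__()
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(in_features, n_out), nn.Sigmoid())
    def forward(self, features):
        return self.fc(features)                     # (B, 2N)
```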
Table 5: Results of the ablation study of the proposed egocentric hand gesture recognition and fingertip detection algorithm (detection error in px)

| Method | SingleOne | SingleTwo | SingleThree | SingleFour | SingleFive | SingleSix | SingleSeven | SingleEight | Overall Error (px) |
|---|---|---|---|---|---|---|---|---|---|
| (1) Averaging layer included in the CNN architecture | 4.64 ± 3.17 | 3.94 ± 1.86 | 3.61 ± 1.86 | 3.85 ± 1.95 | 3.63 ± 1.53 | 3.51 ± 1.50 | 3.65 ± 1.46 | 5.80 ± 3.77 | 4.08 ± 2.14 |
| (2) Randomly sampled output from the ensemble | 5.21 ± 3.43 | 4.57 ± 2.30 | 4.24 ± 2.23 | 4.28 ± 1.98 | 4.17 ± 1.87 | 3.98 ± 1.78 | 4.23 ± 1.68 | 6.41 ± 3.56 | 4.63 ± 2.35 |
| (3) Direct regression of fingertip coordinates | 5.97 ± 3.75 | 6.23 ± 2.21 | 6.01 ± 2.49 | 5.94 ± 2.11 | 5.12 ± 1.66 | 5.45 ± 2.32 | 4.88 ± 2.25 | 6.88 ± 3.29 | 5.81 ± 2.51 |
| Proposed Method | 4.51 ± 3.14 | 3.89 ± 1.91 | 3.62 ± 1.80 | 3.79 ± 1.89 | 3.63 ± 1.46 | 3.40 ± 1.48 | 3.64 ± 1.51 | 5.68 ± 3.51 | 4.02 ± 2.09 |

In the first study, we used an averaging layer to take the ensemble average. The averaging layer does not require any parameters to learn, and it is just like the averaging in the post-processing stage. Therefore, it is expected to have performance similar to the proposed method, which is apparent from the table. In the second study, we randomly sample fingertip coordinates from the ensemble. Since the ensemble average mitigates the deviation of the prediction from the ground truth value, using random samples as the output causes the performance to deviate from that of the proposed method. In the third study, we directly regressed the fingertip coordinates using an FC layer. The FC layer uses all the features from the previous layer, whereas the proposed ensemble output from the FCN uses different input features of the previous layer. Therefore, the difference in error between the direct regression approach and the proposed method is expected, as shown in Table 5.

4. Detection In-The-Wild and Application In VR

To evaluate the performance of the proposed method in real-life scenarios, 25 publicly available hand gesture images were collected from the internet. The imaging conditions of this wild set of gesture images are quite different from those of the SCUT-Ego-Gesture database; in particular, they differ in terms of background, illumination, resolution, and pose of the fingers. Moreover, the hand shape and color in these images differ from the SCUT-Ego-Gesture database. Figure 7 shows the output images with the predictions of the proposed method. It is seen from the output images that the proposed method successfully predicts all the gestures and detects all the fingertips of the images collected from the internet, despite their differing from the database to a large extent.

[Figure 7: Predictions of the model on random images collected from the internet, showing the real-life usability of the proposed method, with each finger (thumb, index, middle, ring, pinky) classified.]

To show the real-life feasibility of the proposed method in VR applications, we have also demonstrated a proof-of-concept VR application. In this demonstration, we placed a virtual 3D object, e.g., a car, on a surface and modified its scale based on the number of fingers in the egocentric gesture. The initial scale of the virtual 3D object is set to 0.75, which is for one finger only, and the scale is set to 1.15 for five fingers; the scale value is incremented by 0.10 for each finger.
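A one-line rendering of this scale mapping (the function name is hypothetical):

```python
def object_scale(num_fingers):
    """0.75 for one visible finger, +0.10 per additional finger, up to 1.15."""
    return 0.75 + 0.10 * (num_fingers - 1)

print([round(object_scale(n), 2) for n in range(1, 6)])
# [0.75, 0.85, 0.95, 1.05, 1.15]
```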
[Figure 8: VR application of the proposed egocentric gesture recognition and fingertip detection algorithm. In this experimental demo application, the scale of the virtual 3D object, i.e., the car, is modified depending on the number of fingers in the gesture, shown in (a) to (e) with the scale incremented from 0.75 to 1.15.]

Figure 8 shows an illustration of the VR application where the scale of the virtual object is incremented with the number of fingers. Therefore, in real-life HCI, VR, and MR applications, the proposed method can play an indispensable role.

5."
+ }
+ ],
+ "Edward Raff": [
+ {
+ "url": "http://arxiv.org/abs/2310.19978v1",
+ "title": "Scaling Up Differentially Private LASSO Regularized Logistic Regression via Faster Frank-Wolfe Iterations",
+ "abstract": "To the best of our knowledge, there are no methods today for training differentially private regression models on sparse input data. To remedy this, we adapt the Frank-Wolfe algorithm for $L_1$ penalized linear regression to be aware of sparse inputs and to use them effectively. In doing so, we reduce the training time of the algorithm from $\mathcal{O}(TDS + TNS)$ to $\mathcal{O}(NS + T\sqrt{D}\log{D} + TS^2)$, where $T$ is the number of iterations and $S$ is the sparsity rate of a dataset with $N$ rows and $D$ features. Our results demonstrate that this procedure can reduce runtime by a factor of up to $2,200\times$, depending on the value of the privacy parameter $\epsilon$ and the sparsity of the dataset.",
+ "authors": "Edward Raff, Amol Khanna, Fred Lu",
+ "published": "2023-10-30",
+ "updated": "2023-10-30",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "stat.CO",
+ "stat.ML"
+ ],
+ "main_content": "1 Introduction

Differential Privacy (DP) is currently the most effective tool for machine learning practitioners and researchers to ensure the privacy of the individual data used in model construction. Given parameters $\epsilon$ and $\delta$, on any two datasets $D$ and $D'$ differing in one example, an (approximately) differentially private randomized algorithm $\mathcal{A}$ satisfies $\Pr[\mathcal{A}(D) \in O] \leq \exp\{\epsilon\} \Pr[\mathcal{A}(D') \in O] + \delta$ for any $O \subseteq \mathrm{image}(\mathcal{A})$ [1]. Note that lower values of $\epsilon$ and $\delta$ correspond to stronger privacy. While DP has had many successes in industry and government, DP-based machine learning methods have made little progress on sparse high-dimensional problems [2, 3, 4, 5]. We believe this issue arises because, to the best of our knowledge, given a dataset with $D$ features and a training algorithm with $T$ iterations, all current iterative DP regression algorithms require at least $O(TD)$ training complexity, as shown in Table 1. This makes it impractical to use these algorithms on any dataset with a large number of features. Our solution to this problem is to take already existing algorithms and remove all redundant computations with mathematically equivalent steps. This ensures that, by construction, we retain all proofs of correctness, but end with a faster version of the same method.

Table 1: A summary of prior methods for solving $L_1$ regularized Logistic Regression, which do not take advantage of sparsity in the input data.
| Method | Complexity |
|---|---|
| Frank-Wolfe Methods [6, 7, 8, 9] | O(TND) |
| ADMM [10] | O(TNDM) |
| Iterative Gradient Hard Thresholding Methods [9, 11, 12] | O(TND) |
| Coordinate Descent [13] | O(TND) |
| Mirror Descent [8] | O(TNDM) |

(*Co-first authors for equal contribution. 37th Conference on Neural Information Processing Systems, NeurIPS 2023.)

We are interested in creating a differentially private machine learning algorithm which scales to sparse datasets with high values of $D$, so we look toward the LASSO regularized logistic regression model [14]. Specifically, given a dataset $\{x_1, \ldots, x_N\} \in \mathbb{R}^D$, which can be represented as a design matrix $X \in \mathbb{R}^{N \times D}$, labels $\{y_1, \ldots, y_N\} \in \{0, 1\}$, and a maximum $L_1$ norm $\lambda$, we wish to solve

$$\hat{w} = \arg\min_{w \in \mathbb{R}^D : \, \|w\|_1 \leq \lambda} \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(w \cdot x_i) \quad (1)$$

where $\mathcal{L}(\cdot)$ is the loss function. In this paper, we will use the logistic loss to avoid exploiting any closed-form updates/solutions available in the linear case, but our results remain applicable to linear regression. To do so with DP, we use the Frank-Wolfe algorithm, as it is well studied for $L_1$ constrained optimization and regularly used in the DP literature [15, 16]. Frank-Wolfe is also desirable because, when properly initialized, its solutions will have at most $T$ nonzero coefficients after $T$ training iterations. Though DP noise can reduce accuracy and increase the density of solutions through suboptimal updates, when considering problems with more than 1 million features, we are unlikely to perform 1 million iterations, so a benefit is still obtained [17].

We develop the first sparse-dataset friendly Frank-Wolfe algorithm to remediate the lack of sparse algorithms for high-dimensional DP regression. Because a sparse-efficient Frank-Wolfe algorithm does not exist today even for non-private problems, our work proceeds in three contributions:

1. We analyze the numerical updates of the Frank-Wolfe algorithm to separate the task into (a) "queue maintenance" for determining the next coordinate to update and (b) sparse updates of the solution and intermediate variables. If the average column sparsity of $X$ is $S_c < D$ and the average row sparsity of $X$ is $S_r < N$, we show for the first time that Frank-Wolfe can update its solution in $S_r S_c$ work per iteration.

2. We show that in the non-private case, the queue to select the next coordinate can be maintained in $O(\|w\|_0 \log D)$ time, but is cache-unfriendly.

3. Finally, we develop a new Big-Step Little-Step Sampler for DP Frank-Wolfe that can be maintained and sampled from in $O(\sqrt{D} \log D)$ time.

We test our algorithm on high-dimensional problems with up to 8.4 million datapoints and 20 million features on a single machine and find speedups ranging from $10\times$ to $2{,}200\times$ over the standard DP Frank-Wolfe method. Critically, our approach is mathematically equivalent, and so retains all prior proofs of correctness. The remainder of our work is organized as follows. In section 2 we review related work in more detail and discuss the lack of sparse-dataset algorithms for Frank-Wolfe and DP. Next, we develop the non-private and DP variants of our sparse-friendly Frank-Wolfe in section 3. In section 4 we demonstrate the empirical correctness of our algorithm, a speedup in runtime for the DP case, and new state-of-the-art accuracy in high-dimensional logistic regression. Finally, we conclude in section 5.
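For concreteness, a tiny NumPy rendering of the objective in Eq. (1) with logistic loss (our own illustration, not part of the paper's code):

```python
import numpy as np

def objective(w, X, y, lam):
    """Mean logistic loss of Eq. (1); returns +inf outside the L1 ball."""
    if np.abs(w).sum() > lam:
        return np.inf
    z = X @ w
    # Logistic loss with labels y in {0, 1}: log(1 + e^z) - y*z.
    return np.mean(np.logaddexp(0.0, z) - y * z)
```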
2 Related Work

As best as we can find, no works have specifically considered a sparse-dataset efficient Frank-Wolfe algorithm. Additionally, we find that work studying $L_1$-regularized DP regression has not reached any high-dimensional problems. Our work is the first to show that Frank-Wolfe iterations can be done with complexity sub-linear in $D$ for sparse datasets; it also produces a sparse weight vector.

For DP regression, the addition of noise to each variable has limited exploration of sparse datasets. Most works study only a single regression problem with fewer than 100 dense variables, or are entirely theoretical with no empirical results [11, 10, 6, 18, 19]. To the best of our knowledge, the largest-scale attempt at high-dimensional logistic regression with DP is by Iyengar et al., who introduce an improved objective perturbation method for maintaining DP in convex optimization [16]. This required using L-BFGS, which has $O(D)$ complexity on sparse data and produces completely dense solution vectors $w$ [20]. In addition, Wang & Gu attempted to train an algorithm on the RCV1 dataset, but with worse results and similar big-O complexity to Iyengar et al. [11]. While Jain & Thakurta claim to tackle the URL dataset with 20M variables, their solution does so by sub-sampling just 0.29% of the data for training and 0.13% of the data for validation, making the results suspect and non-scalable [21]. Lastly, a larger survey by Jayaraman & Evans shows no prior work considering the sparsity of DP solutions, with all other works tackling datasets with fewer than 5,000 features [22]. In contrast, our work performs no sub-sampling and directly takes advantage of dataset sparsity in training high-dimensional problems. Our use of the Frank-Wolfe algorithm also means our solution is sparse, with no more than $T$ non-zero coefficients after $T$ iterations.

Beyond the absence of any DP regression algorithm handling sparse datasets, we cannot find literature that improves upon the $O(D)$ dependency of Frank-Wolfe on sparse data. Many prior works have noted this dependence as a limit to its scalability, and column sub-sampling approaches are one method that has been used to mitigate that cost [23, 15, 24]. Others have looked at distributed Map-Reduce implementations [25] or momentum terms [26] as a means of scaling the Frank-Wolfe algorithm. However, our method is the first to address dataset sparsity directly within the Frank-Wolfe algorithm, and it can generally be applied to these prior works with additional derivations for the new steps. Other methods that apply to standard regression and use the $L_2$ penalty [27, 28] would require modification to use our approach. Similarly, pruning methods [29] may require adaptation to account for the privacy impact. Of particular note is [30], which tackles sub-linear scaling in the number of rows $N$ when $N > D$, but is primarily theoretical. Their work is the first to introduce the idea of "queue maintenance" that we similarly leverage in this work. Despite a conceptually similar goal of using a priority queue to accelerate the algorithm, [30] relies on maximum inner product search, whereas our queues are of a fundamentally different structure. To the best of our knowledge, the COPT library is the only Frank-Wolfe implementation that makes any effort to support sparse datasets, but it still has $O(D)$ iteration complexity, so we base our comparison on its approach [31]. No DP regression library we are aware of supports sparse datasets [32].
3 Methods

Throughout this section, sparsity refers to an algorithm's awareness and efficiency in handling input data that contains mostly zero values. We will first review the only current method for using Frank-Wolfe with sparse data and establish its inefficiency. We will then detail how to produce a generic framework for a sparse-dataset efficient Frank-Wolfe algorithm using a queuing structure to select the next coordinate update. Having established a sparse-friendly framework, we will show how to obtain $O(N S_c + T \|w^*\|_0 \log D + T S_r S_c)$ complexity for the non-private Frank-Wolfe algorithm by using a Fibonacci heap, where $w^*$ is the solution at convergence. Then we replace the Fibonacci heap with a sampling structure to create a DP Frank-Wolfe algorithm with $O(N S_c + T \sqrt{D} \log D + T S_r S_c)$ complexity.

3.1 Frank-Wolfe Iterations in sub-O(D) Time

The only work we could find of any sparse-input-aware Frank-Wolfe implementation is the COPT library, which contains two simple optimizations: it pre-computes a re-used dense vector (i.e., all values are non-zero) and it uses a sparse matrix format for computing the vector-matrix product $Xw$ [31]. The details are abstracted into Algorithm 1, where each line is annotated with its algorithmic complexity. Note that, to make the algorithm DP, we have added $+\mathrm{Lap}\!\left(\frac{\lambda L \sqrt{8T \log(1/\delta)}}{N\epsilon}\right)$ to draw noise from a zero-mean Laplace distribution with the specified scale, where $\lambda$ is the constraint parameter and $L$ is the $L_1$-Lipschitz constant of the loss function $\mathcal{L}(\cdot)$. If a non-private Frank-Wolfe implementation is desired, this term can be ignored. In this and other pseudo-codes, we explicitly write out all intermediate computations, as they are important for enabling sparse updates. $\bar{y}$ is an intermediate variable for the labels in the gradient of a linear problem that is pre-computed once and reused. $\bar{z}$ is a temporary variable. $\bar{q}$ and $\alpha$ are the gradients with respect to each row and column, respectively. $\bar{v}$ is the dot product of each row with the weight vector, and $d$ is the update direction. The iteration subscript $t$ will be dropped when unnecessary for clarity. The superscript $(j)$ denotes updating the $j$'th coordinate of a vector and leaving the others unaltered.

Algorithm 1: Standard Sparse-Aware Frank-Wolfe
 1: $w_0 \leftarrow 0$
 2: $\bar{y} \leftarrow X^\top y$ — $O(N S_c)$
 3: for $t = 1$ to $T-1$ do
 4:   $\bar{v}_t \leftarrow X w_t$ — $O(N S_c)$
 5:   $\bar{q}_t \leftarrow \nabla \mathcal{L}(\bar{v}_t)$ — $O(N)$
 6:   $\bar{z}_t \leftarrow X^\top \bar{q}_t$ — $O(N S_c)$
 7:   $\alpha_t \leftarrow \bar{z}_t - \bar{y}$ — $O(D)$
 8:   $j \leftarrow \arg\min_j \left| \alpha_t^{(j)} + \mathrm{Lap}\!\left(\frac{\lambda L \sqrt{8T \log(1/\delta)}}{N\epsilon}\right) \right|$ — $O(D)$
 9:   $d_t = -w_t$ — $O(D)$
10:   $d_t^{(j)} \leftarrow d_t^{(j)} - \lambda \cdot \mathrm{sign}(\alpha_t^{(j)})$ — $O(1)$
11:   $g_t = -\langle \alpha_t, d_t \rangle$ — $O(D)$
12:   $\eta_t = \frac{2}{t+2}$ — $O(1)$
13:   $w_{t+1} = w_t + \eta_t d_t$ — $O(D)$
14: end for
15: Output $w_T$

While lines 2, 4, and 6 of the algorithm exploit the sparsity of the data, this only reduces the complexity to $O(N S_c)$, plus an additional dense $O(D)$ of work for lines 7 through 13 and $O(N)$ of work for line 5. This results in a final complexity of $O(T N S_c + T D)$. For high-dimensional problems, especially when $N \ll D$, this is problematic for scaling up a Frank-Wolfe-based solver. To derive a Frank-Wolfe algorithm that is more efficient on sparse datasets, we will assume there is an abstract priority queuing structure $Q$ that returns the next coordinate $j$ to update on each iteration.
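A small NumPy sketch of the noisy coordinate selection in line 8 (our own illustration, not the authors' code; `grad` stands for $\alpha_t$, and we use the max-magnitude selection rule consistent with the queue in Section 3.2):

```python
import numpy as np

def dp_select_coordinate(grad, lam, L, N, T, eps, delta, rng=None):
    """Pick the coordinate with the largest noisy gradient magnitude,
    using zero-mean Laplace noise with the scale from Algorithm 1."""
    rng = rng or np.random.default_rng()
    scale = lam * L * np.sqrt(8 * T * np.log(1 / delta)) / (N * eps)
    noisy = np.abs(grad + rng.laplace(0.0, scale, size=grad.shape))
    return int(np.argmax(noisy))
```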
We will detail how to design a Q for this algorithm in the non-private and DP cases in the following two sections.

Algorithm 2: Fast Sparse-Aware Frank-Wolfe for Linear Models
1: w ← 0
2: wm ← 1
3: g̃ ← 0
4: ȳ ← Xᵀy // O(N·Sc)
5: scale ← LNε / (2λ√(8T log(1/δ)))
6: Q ← priority queue or sampling algorithm
7: for t = 1 to T−1 do
8:   if t = 1 then
9:     v̄ ← Xw // O(N·Sc)
10:    q̄ ← ∇L(v̄) // O(N)
11:    z̄ ← Xᵀq̄ // O(N·Sc)
12:    α ← z̄ − ȳ // O(D)
13:    Q.add(j, |α^(j)| · scale) ∀ j ∈ {1, ..., D} // O(D)
14:  end if
15:  j ← Q.getNext() // select coordinate to update
16:  d̃ ← −λ · sign(α^(j)) // O(1)
17:  gt ← g̃ − d̃ · α^(j) // O(1)
18:  ηt ← 2/(t+2) // O(1)
19:  wm ← wm(1 − ηt)
20:  w^(j) ← w^(j) + ηt·d̃/wm
21:  g̃ ← g̃(1 − ηt) + ηt·d̃·α^(j)
22:  for all rows i of X with feature j do // O(Sr)
23:    v̄^(i) ← v̄^(i) + ηt·d̃ · X[i, j]/wm // O(1)
24:    γ ← ∇L(wm · v̄^(i)) − q̄^(i) // O(1)
25:    q̄^(i) ← q̄^(i) + γ // O(1)
26:    α ← α + γ · X[i, :] // O(Sc)
27:    g̃ ← g̃ + γ · X[i, :]ᵀw · wm // O(Sc)
28:  end for
29:  Q.update(k, |α^(k)| · scale) ∀ k whose gradients were updated
30: end for
31: Output w

Sparse wt Updates: In line 13 of Algorithm 1, if we ignore the change to coordinate j of dt, we can write wt+1 = wt − ηt·wt, which can be re-written as wt+1 = (1 − ηt)wt. If we represent the weight as wt+1 = w · wm, with a co-associated multiplicative scalar wm, we can set wm ← wm · (1 − ηt) to have the same effect as altering all D variables implicitly. Then the j'th coordinate can be updated individually, allowing line 13 of Algorithm 1 to run in O(1) time. We use the same trick to represent v̄t = v̄ · wm, as it has the same multiplicative scale.

Sparse α and v̄ Updates: The dot-product scores v̄t and column gradients αt are intrinsically connected in their sparsity patterns. When the j'th value of wt is altered, multiple values of v̄t change, which propagates changes to the gradients αt. However, the elements {i} of v̄t that change are only those where rows {i} of X use feature j. Let d̃j represent a perturbation to the j'th update direction. Each row i that uses the j'th feature will then have its value v̄^(i) altered by ηt·d̃·X[i, j]/wm (line 23 of Algorithm 2). This lets us handle the update of v̄ sparsely. Each row {i} that changes in v̄t propagates to the values in q̄. We can represent the change in a row's gradient value between iterations as γ, which can then be used to sparsely update the α values by noting that Xᵀq̄ changes only in the non-zero columns of X[i, :]. So we can compute the update to α as γ · X[i, :]. By updating α directly, we do not need to account for the contribution of ȳ after the first iteration.

Sparse gt Updates: The final variable of interest is the Frank-Wolfe convergence gap gt = −αtᵀdt. Instead of recomputing this every iteration, we keep a base value g̃ that is altered based on both of the prior two insights.
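Before continuing with the gap updates, a toy sketch of the lazy multiplicative-scale trick from the Sparse wt Updates discussion may help: the true iterate is w_true = w · wm, so the (1 − ηt) shrinkage of all D coordinates becomes a single O(1) update to wm. The class and names here are ours, for illustration only.

```python
# Toy sketch of the lazy multiplicative-scale trick; illustration only.
import numpy as np

class LazyScaledVector:
    def __init__(self, dim):
        self.w = np.zeros(dim)   # stored, un-scaled coordinates
        self.wm = 1.0            # shared multiplicative scale

    def shrink(self, eta):
        self.wm *= (1.0 - eta)   # implicitly scales every coordinate, O(1)

    def add_to_coord(self, j, amount):
        # To add `amount` to the *true* coordinate j, divide out the scale.
        self.w[j] += amount / self.wm

    def true_value(self, j):
        return self.w[j] * self.wm

v = LazyScaledVector(5)
v.add_to_coord(2, 1.0)
v.shrink(0.5)                    # true vector is now [0, 0, 0.5, 0, 0]
v.add_to_coord(2, 0.25)          # true coordinate 2 becomes 0.75
assert abs(v.true_value(2) - 0.75) < 1e-12
```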
When wm is updated, we re-scale g̃ by (1 − ηt) and add ηt·d̃·α^(j), the change in the dot product −αtᵀdt caused by just the j'th coordinate update. After the sparse α updates, g̃ is again updated by γ · X[i, :]ᵀw · wm.

Fast Frank-Wolfe Framework: The final procedure that forms the foundation for our results is given in Algorithm 2. The scale variable holds the noise parameter required for DP and can be ignored for non-private training. Lines 6, 13, 15, and 29 require an abstract priority queue that is populated with values proportional to each feature's gradient. This mechanism differs between the non-private and private cases, and we detail each in the following two sections. The first iteration of Algorithm 2 performs the same calculations as Algorithm 1, but in all subsequent iterations, values are updated in a sparse fashion. Lines 16-21 update the multiplicative wm variables and perform the single-coordinate updates to w and g̃, all taking O(1) time to complete. Lines 22-28 handle the updates for α and v̄, which requires looping over the rows that use feature j, which we expect to have O(Sr) complexity. Within the loop, we use one row of the matrix to perform a sparse update, which we expect to have O(Sc) complexity. The gradients α^(k) that get updated by this loop are updated in the priority queue Q in O(1) time per update (Footnote 2: There are multiple ways to implement this, but we find a naive re-iteration over the loop to update based on the final values the fastest, due to reduced memory overheads.), so they do not alter the final complexity. The final complexity of our algorithm now depends on the complexity of Q.getNext(), which differs between the non-private and private cases.

3.2 Algorithmically Efficient Non-Private Frank-Wolfe

We first analyze the complexity of the non-private Frank-Wolfe algorithm, though our ultimate goal is to make the private case more efficient. The algorithm we detail in the non-private case will be of superior big-O complexity and perform significantly fewer FLOPs than the standard Frank-Wolfe implementation, but it will not be faster in practice, due to constant-factor overheads that we will explain.

Algorithm 3: Fibonacci Heap Frank-Wolfe Queue Maintenance
1: function getNext()
2:   j ← −1 // assume |α^(−1)| returns −∞
3:   repeat
4:     c ← Q.pop() // O(log D)
5:     if |α^(c)| > |α^(j)| then
6:       j ← c
7:     end if
8:   until |α^(j)| > |Q.peekPriority()|
9:   Re-insert the removed items c··· using priorities |α^(c···)|
10:  Output j
11: end function
12: function update(i, v)
13:   vcur ← current priority of item i
14:   if −vcur > −v then // min-heap
15:     Q.decreaseKey(i, v) // O(1)
16:   end if
17: end function

The primary insight in building an algorithmically faster non-private algorithm is to use a Fibonacci heap, which allows O(log D) removal complexity and amortized O(1) insertion and decreaseKey operations. We use the negative magnitude as the key in the min-heap. Our insight is that we can decrease the key of item j whenever |α^(j)| increases, and ignore the cases where |α^(j)| decreases. This means the negative priority is an upper bound on the true gradient magnitude. This is favorable because the vast majority of updates to the queue are intrinsically of a magnitude too small to be selected (hence why the solution is sparse), and so even with an inflated magnitude they never reach the top of the queue.
These stale gradients will cause some items to reach the top of the queue incorrectly, which is easy to resolve, as detailed in Algorithm 3. The current item c is popped off the queue and compared against the current best coordinate j. This loop continues until the top of the queue has a smaller priority than the current gradient magnitude |α^(j)|. Because a stale magnitude can only be larger than the true gradient, once we satisfy this condition it must be the case that no item in the queue can have a higher priority. Thus, the procedure is correct. The number of items we expect to consider must be proportional to the number of non-zero coefficients in the weight vector. This gives getNext a cost of O(‖w*‖₀) pops, for the number of non-zeros in the final solution, multiplied by the O(log D) cost per pop() call, giving a final complexity of O(N·Sc + T·‖w*‖₀·log D + T·Sr·Sc).

While this is of superior algorithmic complexity compared to the standard Frank-Wolfe implementation, it has long been known that Fibonacci heaps have high constant-factor overheads that prevent their practical use [33, 34]. We still find the result useful as the first to demonstrate a faster iteration speed, and as a deterministic case with which to verify the correctness of our approach. For this reason, we use Algorithm 3 to show that our method converges at the same rate and with fewer FLOPs than the standard Frank-Wolfe implementation, as it does not suffer from the randomness required for differential privacy (Footnote 3: We note that there can be mild disagreement in update order caused by numerical differences between the brute-force recalculation of Algorithm 1 and the updates of Algorithm 2, but we observe no issues from this.). Our concern about this inefficiency is limited, as many faster algorithms exist for non-private LASSO regression that are orders of magnitude faster than using Frank-Wolfe [35, 36, 37], so other tools can suffice. However, no tools for high-dimensional and sparse DP LASSO regression exist except for the Frank-Wolfe method, which we make more efficient in the next section.
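Since Fibonacci heaps are rarely available in standard libraries, the sketch below mimics the same upper-bound-priority idea with Python's heapq, which lacks decreaseKey: increases in |α^(j)| are handled by pushing a duplicate entry, and stale tops are resolved exactly as in Algorithm 3. This is our illustration, not the paper's Java implementation.

```python
# Lazy "stale upper bound" queue in the spirit of Algorithm 3; sketch only.
import heapq

class LazyGradientQueue:
    def __init__(self, alpha):
        self.alpha = alpha                     # live gradient values, updated externally
        self.heap = [(-abs(a), j) for j, a in enumerate(alpha)]
        heapq.heapify(self.heap)               # min-heap on negative magnitude

    def update(self, j):
        # Push the new bound; any old, larger entry lingers as a stale upper bound.
        heapq.heappush(self.heap, (-abs(self.alpha[j]), j))

    def get_next(self):
        while True:
            _, j = heapq.heappop(self.heap)
            true_mag = abs(self.alpha[j])
            next_bound = -self.heap[0][0] if self.heap else float("-inf")
            heapq.heappush(self.heap, (-true_mag, j))   # refresh j's entry
            # Bounds only over-estimate, so beating the next bound proves that
            # no remaining item can have a larger true magnitude.
            if true_mag >= next_bound:
                return j
```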
3.3 Algorithmically Efficient Differentially Private Frank-Wolfe

For our DP Frank-Wolfe algorithm, we convert from the Laplace mechanism originally used by Talwar et al. to the Exponential Mechanism [38]. Rather than adding noise to each gradient and selecting the maximum, in the exponential mechanism each coordinate j is given a weight ∝ exp(ε·u(j) / (2Δu)), where u(j) is the score of the j'th item and Δu is the sensitivity [39]. This poses two challenges: 1. We need to select a weighted random sample from D options in sub-O(D) time to pick the coordinate j, while maintaining the DP of the exponential mechanism. 2. We need to do this while avoiding the numeric instability of raising a gradient to an exponential, which will overflow for relatively small gradient magnitudes.

Algorithm 4: Big-Step Little-Step Exponential Sampler
1: function getNext()
2:   j ← 0
3:   o ← exp(v^(j) − zΣ)
4:   logTw ← log U(0,1) / exp(v^(j) − zΣ)
5:   c ← 0
6:   while c < N do
7:     Xw ← log U(0,1) / logTw
8:     while exp(c^(c mod ⌊√N⌋) − zΣ) − o < Xw do // Big-Steps over whole groups
9:       Xw ← Xw − (exp(c^(c mod ⌊√N⌋) − zΣ) − o)
10:      o ← 0
11:      c ← c + ⌊√N⌋ − (c mod ⌊√N⌋)
12:    end while
13:    while exp(v^(c) − zΣ) < Xw do // Little-Steps within a group, O(√N log N)
14:      Xw ← Xw − exp(v^(c) − zΣ)
15:      o ← o + exp(v^(c) − zΣ)
16:      c ← c + 1
17:    end while
18:    if c < N then
19:      j ← c
20:      c ← c + 1
21:      tw ← exp(exp(v^(j) − zΣ)) · logTw
22:      logTw ← log U(tw, 1) / exp(v^(j) − zΣ)
23:      if j mod ⌊√N⌋ ≠ 0 then
24:        o ← 0
25:      else
26:        o ← o + exp(v^(j) − zΣ)
27:      end if
28:    end if
29:  end while
30: end function
31: function update(i, v)
32:   vcur ← current priority of item i
33:   k ← i mod ⌊√N⌋
34:   c^(k) ← c^(k) + log(1 − e^(vcur − c^(k)) + e^(v − c^(k)))
35:   zΣ ← zΣ + log(1 − e^(vcur − zΣ) + e^(v − zΣ))
36: end function

We tackle both of these issues by adapting the A-ExpJ algorithm of [40], and we use their naming conventions to ease comparison. This algorithm works on a stream of items with weights wi and, in O(1) space, can produce a valid weighted sample from the stream. It does so by computing a randomized threshold Tw and processing samples until the cumulative weight Σi wi > Tw, at which point the final item in the sum becomes the new sample. A new value of Tw is then computed, and the process continues until the stream is empty. This process requires generating only O(log D) random thresholds for a stream of D items. We exploit the fact that we have a known and fixed set of D items to develop a new version of this algorithm that can be updated in constant time, is numerically stable for a wide range of weights, and can draw a new sample in O(√D log D) time. The key idea is to form large groups of variables and keep track of their collective weight. If a group's weight is smaller than Tw, then the entire group can be skipped, performing a "Big-Step".
If the group's weight is larger than Tw, then the members of the group must be inspected individually, forming "Little-Steps". For this reason, we term our sampler the "Big-Step Little-Step" sampler; the procedure is shown in Algorithm 4. The scale of the gradients can change by four or more orders of magnitude, due to the evolution of the gradient during training and the exponentiation of the Exponential Mechanism. For this reason, all logic is implemented at log scale, and a total log-sum weight zΣ is tracked. Every exponentiation of a log-weight subtracts this value, performing the log-sum-exp trick to keep the sample weights in a numerically stable range (Footnote 4: Very small weights will still underflow, but by definition this happens when their probability of being selected is several orders of magnitude lower, and thus they are astronomically unlikely to be chosen anyway. Adding a small 10⁻¹⁵ value guarantees a chance to be selected, and maintains DP by technically adding more noise than necessary.). Similarly, each group has a group log-sum weight, and we denote the vector of group weights as c. There are √D groups, so that each group has √D members.

On lines 34 and 35 of Algorithm 4, a log-sum-exp update is used to update the group sum c^(k) and total sum zΣ (which are already log-scale, since there was no exponentiation on line 29 of Algorithm 2). In both cases we always expect the group sum to be larger, and so we use c^(k) and zΣ as the maximal values to normalize by in each update. Lines 31-33 select the "Big-Step" group to update for the change in weight of the i'th item. Lines 8-12 and 13-17 perform the same loop at two different scales. Lines 8-12 perform Big-Steps over groups, and must handle the fact that the starting position could be in the middle of a group from a previous iteration, making it a partial group. For this reason, there is a "group offset" o that subtracts the weight of items already visited in the group. Once a Big-Step is made, on line 11 the position is advanced by the group size minus the offset within the current group, so that each step starts at the beginning of the next group regardless of starting position, handling the case of starting from a previous Little-Step's location. Lines 13-17 then perform Little-Steps within a group, and it is known that a new item must be found within this group, as otherwise lines 8-12 would have repeated, due to the group having a sum smaller than Tw. The remaining lines, 2-5 and 18-28, work as in the standard A-ExpJ algorithm, except that each calculation is done at log scale, or exponentiated when an item is needed at a non-log scale, as on line 21 (Footnote 5: We note that the double exponentiation on this line is correct and numerically stable. The first exponentiation produces a value in the range [0, 1] for the second exponentiation to use, per the A-ExpJ algorithm.).

Each of the log(D) random variates needed by Algorithm 4 corresponds to the selection of a new current sample. In the worst case, each of these samples will belong to a different group, necessitating exploring O(log D) groups, each of size √D by construction, giving a total sampling complexity of O(√D log D). Just as with the Fibonacci heap, the update procedure is O(1) per update, and so the final DP-FW complexity becomes O(N·Sc + T·√D·log D + T·Sr·Sc). As we will demonstrate in our results, this provides significant speedups over the standard Frank-Wolfe for sparse datasets. This is because, by design, the Algorithm 4 procedure is very cache friendly: it performs linear scans over √D items at a time, making pre-fetching easy, and thus incurs only O(log D) cache misses when performing Little-Step transitions.
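As a simplified illustration of the grouping idea (not the full streaming A-ExpJ variant above), the sketch below keeps a log-sum weight per group of roughly √D items and samples in two stages. Sampling a group proportional to its collective weight and then an item within it proportional to its own weight is exactly proportional sampling over all D items; the Gumbel-max trick keeps everything in log space. The class and method names are our assumptions.

```python
# Two-level grouped weighted sampler sketch, all in log space; sketch only.
import numpy as np
from scipy.special import logsumexp

class GroupedLogSampler:
    def __init__(self, log_w, rng):
        self.rng = rng
        self.log_w = np.asarray(log_w, dtype=float)   # per-item log weights
        D = len(self.log_w)
        self.g = int(np.ceil(np.sqrt(D)))             # group size ~ sqrt(D)
        self.groups = [np.arange(i, min(i + self.g, D))
                       for i in range(0, D, self.g)]
        self.group_logsum = np.array(
            [logsumexp(self.log_w[idx]) for idx in self.groups])

    def update(self, j, new_log_w):
        self.log_w[j] = new_log_w
        k = j // self.g
        # Recomputed in O(sqrt(D)) here; the paper's log-sum-exp difference
        # update (Algorithm 4, lines 34-35) makes this step O(1) instead.
        self.group_logsum[k] = logsumexp(self.log_w[self.groups[k]])

    def sample(self):
        noise = self.rng.gumbel(size=len(self.groups))
        k = int(np.argmax(self.group_logsum + noise))  # P(group k) = W_k / W
        idx = self.groups[k]
        j = idx[np.argmax(self.log_w[idx] + self.rng.gumbel(size=len(idx)))]
        return int(j)                                   # overall P(j) = w_j / W
```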
4 Results

Table 2: Datasets used for evaluation. We focus on cases that are high-dimensional and sparse.

Dataset | N | D
RCV1 | 20,242 | 47,236
20 Newsgroups (Binary), "News20" | 19,996 | 1,355,191
Malicious URLs, "URL" | 2,396,130 | 3,231,961
Webb Spam Corpus, "Web" | 350,000 | 16,609,143
KDD2010 (Algebra), "KDDA" | 8,407,752 | 20,216,830

Having derived a sparsity-aware framework for implementing Frank-Wolfe iterations in time proportional to the sparsity of the data, we now demonstrate the effectiveness of our results. Since our goal is to support faster training with sparse datasets, we focus on the high-dimensional problems listed in Table 2, where D ≥ N. Note that, to the best of our knowledge, the RCV1 dataset at D = 47k is the highest-dimensional dataset any prior work has used to train a DP logistic regression model [16], and its D is 428× smaller than the largest D we consider.

Our results first focus on the non-private Frank-Wolfe, due to its deterministic behavior and clear convergence criterion via the Frank-Wolfe gap gt (Footnote 6: In the DP case, gt is especially noisy and hard to leverage meaningfully, often starting lower and increasing, which makes it a non-informative measure.). This allows us to show clearly that we reduce the total number of floating-point operations (FLOPs) required, though we note that in practice the runtime remains similar, due to cache inefficiency. After establishing that Algorithm 2 requires fewer FLOPs, we turn to testing the DP version leveraging our Big-Step Little-Step Sampler (Algorithm 4), showing speedups ranging from 10× to 2,200× over the standard DP Frank-Wolfe algorithm when training a model on sparse datasets.

Due to the computational requirements of running all tests, we fix the total number of iterations to T = 4,000 and the maximum L1 norm for the Lasso constraint to λ = 50 in all tests across all datasets. This value produces highly accurate models in all non-private cases, and the goal of our work is not to perform hyper-parameter tuning, but to demonstrate that we have taken an already known algorithm with established convergence rates and made each iteration more efficient. All experiments were run on a machine with 12 CPU cores (though only one core was used) and 128 GB of RAM. The total runtime for all experiments was approximately one week, and exploring larger datasets was limited purely by insufficient RAM to load them into memory. Our code was written in Java, due to the need for explicit looping, and implemented using the JSAT library [36]. When comparing Algorithm 1 and our improved Algorithm 2, the latter is marked with a "-Fast" suffix in figure legends to denote the difference.

4.1 Non-Private Results

We first look at the convergence gap gt over each iteration to confirm that we are converging to the same solution, as shown in Figure 1. Of note, it is often impossible to distinguish between the standard and our fast Frank-Wolfe implementations, because they take exactly the same steps. The differences that do occur are caused by nearly equal gradients between variables and are observable via inspection of gt.
In all cases, the solutions returned achieve identical accuracy on the test datasets.

[Figure 1 shows two log-log panels of the convergence gap gt (y-axis, same scale in both) versus iteration (x-axis) for New20, URL, RCV1, KDDA, and Web, under Algorithm 1 (left) and Algorithm 2 + Algorithm 3, "-Fast" (right).] Figure 1: Convergence gap gt (y-axis, same scale for both plots) as the number of iterations increases (x-axis), showing that our Algorithm 2 (dotted lines) converges to the same solutions as Algorithm 1, with minor differences due to numerical floating-point changes (i.e., both plots look nearly identical, the desired behavior). This shows our new approach maintains solution quality.

In Algorithm 2, updating by differences can cause catastrophic cancellation, due to the zig-zag behavior of Frank-Wolfe iterates (updating the same coordinate j multiple times with differing signs on each update), resulting in similar-magnitude sign changes that produce slightly different numerical results compared to re-computing the entire gradient from scratch. Choosing an adaptive step size ηt may alleviate this issue in future work.

[Figure 2 plots, on a log-log scale, the ratio FLOPs(Alg. 1)/FLOPs(Alg. 2 & 3) versus iteration for New20, URL, RCV1, KDDA, and Web.] Figure 2: The y-axis (larger is better) shows how many times fewer FLOPs our Alg. 2 + Alg. 3 need compared to the original Frank-Wolfe (Alg. 1). The x-axis is the number of training iterations performed. Note that in all cases the curves become difficult to differentiate by 1,000 iterations, as we reduce the number of FLOPs by orders of magnitude per iteration.

Next, we empirically validate the O(‖w*‖₀) number of times we must query the Fibonacci heap when selecting the next iterate. The ratio of the number of times we must pop an item from the heap to the value of ‖w*‖₀ is plotted over training iterations in Appendix Figure 3. We can see that in all cases the ratio is ≤ 3, and so few calls to the heap are necessary. Finally, we look at the convergence rate gt as a function of the number of FLOPs required to obtain it, as shown in Figure 2. It is clear that we avoid orders of magnitude more operations than a naive implementation of Frank-Wolfe would normally require, providing the foundation for our faster DP results in the next section. Unfortunately, the Fibonacci heap has poor caching behavior, resulting in no meaningful difference in runtime for the sparse case.

4.2 Differentially Private Results

Having established that Algorithm 2 avoids many floating-point operations by exploiting sparseness in the training data, we now turn to training DP versions of the normal and our faster Frank-Wolfe algorithms. The total speedup in runtime of our Algorithm 2 over Algorithm 1 is shown in Table 3. As can be seen, our results range from a 10× speedup for the URL dataset at the low end, up to a 2,200× speedup for the KDDA and URL datasets at ε = 0.1. In addition, we ablated using Algorithm 2 with the brute-force noisy-max, which shows smaller speedups. This demonstrates that the combination of Algorithms 2 and 4 is necessary to obtain our results.
Table 3: How many times faster our Algorithm 2 + Algorithm 4 is than the standard FW implementation. In addition, just Algorithm 2 using noisy-max sampling is included as an ablation, showing that Alg. 2 and Alg. 4 combined are necessary to obtain maximum speed.

Dataset | ε = 1: Alg. 2+4 | ε = 1: Alg. 2 | ε = 0.1: Alg. 2+4 | ε = 0.1: Alg. 2
News20 | 81.69 | 17.83 | 93.51 | 19.05
URL | 9.99 | 1.02 | 2451.80 | 95.58
RCV1 | 19.44 | 1.36 | 20.37 | 1.82
Web | 581.25 | 21.24 | 537.65 | 20.79
KDDA | 1239.64 | 206.50 | 2245.56 | 368.96

We note that the speedup of our method is a function of the sparsity of the informative and non-informative features. This is most noticeable on the URL dataset, which jumps from a 10× speedup to a 2,400× speedup when moving from ε = 1 down to ε = 0.1. This is because the URL dataset has 200 dense features that are highly informative, while the remaining features are all sparse. When a feature is dense, there is no advantage to using Alg. 2 & 4, and so no benefit is obtained. At the lower noise level of ε = 1, the denser (and thus slower) informative features are selected more frequently, resulting in longer total runtime. As the noise increases at ε = 0.1, the sparser non-informative features are selected more often, which reduces the average amount of work per update. This phenomenon occurs in most datasets, as denser features intrinsically have more opportunities to be discriminative, but it is uniquely pronounced on the URL dataset. This is ultimately a desirable property of our method, as large values of ε > 10 are effectively not private, and so faster methods of non-private training should be used instead. Our Algorithm 2 with the Big-Step Little-Step Sampler of Algorithm 4 increases in effective utility as the desired amount of privacy increases.

Table 4: Even at high privacy (ε = 0.1) we obtain non-trivial accuracy and AUC on most datasets by using T = 400,000 iterations. Because the datasets are so high-dimensional, we still obtain sparse solutions (rightmost column).

Dataset | Accuracy (%) | AUC (%) | Sparsity (%)
RCV1 | 90.53 | 97.29 | 5.81
News20 | 92.37 | 98.65 | 75.19
URL | 73.23 | 82.60 | 89.44
Web | 75.42 | 92.51 | 99.90
KDDA | 85.25 | 53.31 | 85.30

As our final test, to highlight the utility and importance of our approach, we re-run each dataset using λ = 5000 with T = 400,000 iterations at a real-world useful ε = 0.1. As shown in Table 4, this results in non-trivial accuracy and AUC for all datasets but KDDA, and is only possible by performing hundreds of thousands of training iterations. Iyengar et al. [16] show the best prior result at ε = 0.1 for RCV1, 64.2% accuracy; in fact, we trail the non-private accuracy of 93.5% by only 3 percentage points (note as well that their solution has 0% sparsity). This is made possible by simply performing far more iterations, which is computationally intractable with prior methods. We also note that we obtain significant sparsity on the higher-dimensional datasets News20, URL, Web, and KDDA, due to the fact that T < D for each of them, and the Frank-Wolfe solution will by construction have a number of non-zero coefficients ≤ T.
    },
    {
        "url": "http://arxiv.org/abs/2310.17867v1",
        "title": "Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests",
        "abstract": "Multiple Instance Learning (MIL) is a sub-domain of classification problems\nwith positive and negative labels and a \"bag\" of inputs, where the label is\npositive if and only if a positive element is contained within the bag, and\notherwise is negative.
Training in this context requires associating the\nbag-wide label to instance-level information, and implicitly contains a causal\nassumption and asymmetry to the task (i.e., you can't swap the labels without\nchanging the semantics). MIL problems occur in healthcare (one malignant cell\nindicates cancer), cyber security (one malicious executable makes an infected\ncomputer), and many other tasks. In this work, we examine five of the most\nprominent deep-MIL models and find that none of them respects the standard MIL\nassumption. They are able to learn anti-correlated instances, i.e., defaulting\nto \"positive\" labels until seeing a negative counter-example, which should not\nbe possible for a correct MIL model. We suspect that enhancements and other\nworks derived from these models will share the same issue. In any context in\nwhich these models are being used, this creates the potential for learning\nincorrect models, which creates risk of operational failure. We identify and\ndemonstrate this problem via a proposed \"algorithmic unit test\", where we\ncreate synthetic datasets that can be solved by a MIL respecting model, and\nwhich clearly reveal learning that violates MIL assumptions. The five evaluated\nmethods each fail one or more of these tests. This provides a model-agnostic\nway to identify violations of modeling assumptions, which we hope will be\nuseful for future development and evaluation of MIL models.",
        "authors": "Edward Raff, James Holt",
        "published": "2023-10-27",
        "updated": "2023-10-27",
        "primary_cat": "stat.ML",
        "cats": [
            "stat.ML",
            "cs.AI",
            "cs.LG"
        ],
        "main_content": "Introduction

In Multiple Instance Learning (MIL) we have a dataset of N labeled points, which we will represent as X, with associated labels y ∈ {−1, 1} for the negative and positive labels respectively. As originally described, the MIL problem involves each datum Xi ∈ X being a bag of multiple instances, where Xi = {x1, x2, ..., x_ni} is a bag of ni instances. Each instance xj ∈ Xi is a D-dimensional vector, and every bag Xi may have a different total number of items ni. Given an instance-level classifier h(·), most MIL algorithms work by predicting ŷi = max over xj ∈ Xi of h(xj). As originally described, the positive/negative label of each bag Xi has a special meaning. By default, a bag's label is negative (y = −1). The label of a bag becomes positive (y = 1) if and only if a positive instance xj is present inside the bag, at which point the entire bag's label becomes positive. Because instance-level labels mapping each xj → y ∈ {−1, 1} are not given, the MIL problem is to infer the instance-level labels from the bag-level labeling. This implies a critical asymmetric nature to the given labels and how they must be handled. A value of y = −1 tells us that all instances in the given bag are negative, whereas a label of y = 1 tells us that one or more instances have a positive label. For this reason, swapping the positive and negative labels in a MIL problem is not semantically meaningful or correct, whereas in a standard classification problem the labels can be interchanged without altering the semantics of the learning task.
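A minimal sketch of the bag-scoring rule just defined may be useful: a bag is scored by its maximum-responding instance, so a positive prediction can only come from the presence of a positive instance. The scorer h and all helper names below are our illustrative assumptions.

```python
# Minimal sketch of the standard MIL bag-scoring rule y_hat = max_j h(x_j).
import numpy as np

def bag_score(bag, h):
    """bag: (n_i, D) array of instances; h maps a (D,) vector to a real score."""
    return max(h(x) for x in bag)

def predict(bag, h, threshold=0.0):
    return 1 if bag_score(bag, h) > threshold else -1

# Example with a toy linear instance-level scorer.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
h = lambda x: float(w @ x)
bag = rng.normal(size=(5, 16))      # a bag of 5 instances
print(predict(bag, h))
```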
The MIL problem occurs frequently in many real-world applications, in particular in the medical community, where the presence of any abnormal cell type (i.e., instance) is the confirmative indicator for a larger organism's disease (i.e., bag and label). As the MIL problem implies, and the medical example makes explicit, the MIL model carries an implicit causal assumption: the right combination of positive indicators dictates the output label, and so the MIL model is both a valuable inductive bias toward the solution and a guard against physically implausible solutions. Algorithms that fail, or intentionally forgo, the MIL constraints may appear to obtain better accuracy "in situ" (i.e., in the lab environment). But if it is known that the MIL assumption is true, ignoring it creates a significant risk of failure to generalize "in vivo" (i.e., in real production environments). In the clinical context this is important, as many ML algorithms are proposed with superior in situ performance relative to physicians [1], but fail to maintain that performance when applied to new clinical populations [2-4]. In this case, respecting the underlying MIL properties eliminates one major axis of bias between in situ and in vivo settings, giving higher confidence in potential utility. In the cyber security space, respecting the MIL nature eliminates a class of "good word" style attacks [5-7], where inconsequential content is added to evade detection, an attack that has worked on production anti-virus software [8-10]. These reasons are precisely why MIL has become increasingly popular, and why it is important to ensure the constraints are satisfied. Notably, this creates a dearth of options when more complex MIL hypotheses are required, as CausalMIL and mi-Net succeed by restricting themselves to the Standard MIL assumption. The creation of MIL models that satisfy this and other, more complex hypotheses is thus an open line of research with potentially significant clinical relevance. Similarly, users with more niche MIL needs may desire to test more thoroughly that their models respect the constraints critical to their deployment. Our work demonstrates that many articles have not properly vetted even the more basic MIL setting, and so we suspect other, more complex MIL problems are equally at risk. Our work contributes the identification of this issue, as well as a strategy to avoid repeat occurrences, by developing algorithmic unit tests in which a synthetic dataset is created that captures specific properties of the desired solution. The test fails if an invariant of the algorithm's solution is not maintained, as summarized in Table 1. Such failures indicate that a MIL method is not properly constrained and that the learning goal is not being achieved. We construct three such datasets for the MIL problem, which can be reused by any subsequent MIL research to mitigate this problem. Based on these results, we suggest practitioners and researchers begin with CausalMIL and mi-Net as a solid foundation, to ensure they actually satisfy the MIL hypothesis and thus avoid excess risk in deployment.
Table 1: Our work proposes a test of the Standard Multiple Instance Learning (MIL) hypothesis and two tests for the Threshold MIL. Most modern deep learning MIL models do not test or prove that they respect any MIL hypothesis, and our tests show that most are insufficient. When a model passes the Threshold test, but fails the Standard test, it is still not a valid Threshold MIL model because the tests do not guarantee correctness, they are like software unit tests that identify failures.

Model | Claim | Standard Test | Threshold Tests
mi-Net | Standard | ✓ | ✗
MI-Net | Standard | ✗ | ✓
MIL-Pooling | Standard | ✗ | ✗
Tran-MIL | Threshold | ✗ | ✗
GCN-MIL | Threshold | ✗ | ✗
CausalMIL | Standard | ✓ | ✓
Hopfield | Threshold | ✗ | ✓

This paper is organized as follows. In § 2 we will review broadly related works, including prior work in non-deep-MIL and deep-MIL literature. It is in this related work we will denote the baseline algorithms we test, in particular the five deep-MIL models that form the foundation of most current deep-MIL research, and a sixth deep-MIL method that is little-known but does pass our tests. Next in § 3 we will define three algorithmic unit tests for MIL models. The first tests the fundamental MIL assumption that all models must respect, and the second and third tests extend to a generalized version of the MIL problem known as "threshold" MIL. Prior deep-MIL works might tacitly assume they can tackle the generalized MIL, but make no formal specification of the types of MIL models they tackle. Then we apply our tests to six deep-MIL and seven older Support Vector Machine based MIL models in § 4, demonstrating how different algorithms pass and fail different unit tests. In doing so we provide hard evidence that the foundations of most current deep-MIL works are invalid, and thus dangerous to use in any case where the MIL assumption is used for causal or clinically relevant constraints. For example, although a cancer diagnosis should occur only because cancer was detected, a non-MIL model could learn the absence of something unrelated as a false signal, causing it to overfit. In addition, we discuss cases where a known non-MIL algorithm still passes the unit test, prompting a discussion on how unit tests should be used for invalidation, not certification. Finally, we conclude in § 5.

2 Related Work

Concerns about reproducibility within the fields of machine and deep learning have increased in recent years. Prior works have studied reproducibility issues with respect to dataset labels/integrity [11-13], comparison methodology and making conclusions on improvement [14-18], discrepancies between math and floating-point precision [19], discrepancies between code and paper [20], and false conclusions in repeatability [21]. We note that none of these prior efforts on reproducibility would have identified the MIL assumption violation that we identify in this work. Our situation is a different aspect of reproducibility, in that the methods under test can be reproduced/replicated, but the methods themselves fundamentally are not designed to enforce the modeling assumptions, and no testing was done to ensure that they do. By developing tests meant to elicit certain and specific behaviors of a MIL model, we show how unit tests may be developed for an algorithm in the form of synthetic datasets. Within the reproducibility literature, we believe the work by Ahn et al. [19] is the most similar to ours, where they develop a mathematical framework for characterizing the reproducibility of an optimization procedure when initialization and gradient computations are not exact.
This is motivated by the fact that proofs of optimization do not account for floating-point precision in the majority of cases, so specialized domain tests can be useful. A key point is that having source code is helpful, but does not confer correctness of the property of interest [22]. In contrast, our work is more empirical, in that we actually implement our proposed tests, and our tests are born not from a mismatch between math and implementation but from the observation that prior works have neglected the mathematical work needed to ensure their methods follow the MIL assumptions. Other relevant works in reproducibility have looked at methodological errors [15, 23, 24].

The MIL problem bears resemblance to a niche set of defenses used within the malware detection literature. To defend against adversarial attacks, "non-negative" [5] or "monotonic" [6] models were developed, where features can only be positive (read, malicious) indicators, and by default all files would be marked negative (benign) absent any features. This is similar to the MIL model assumption that there is no positive response unless a specific instance is present, and indeed, MIL approaches have been used to build malware detectors that are interpretable and not susceptible to attack [8].

2.1 Relevant Multi-Instance Learning Work

An explosion of interest in MIL literature has occurred due to its relevance in medical imaging and other tasks where the MIL assumption aligns with clinically relevant or important physical/causal constraints of the underlying system. Shockingly, we find much of the literature does not test or ultimately respect this core MIL assumption, resulting in models that are at risk of over-fitting their training data and learning clinically/physically invalid solutions. Simulated benchmarks are common in MIL literature, but focus primarily on the Standard formulation and accuracy [25]. [26] built synthetic benchmarks of MIL tasks, but did not formalize what kinds of MIL tasks, or attempt to check whether a model was violating the underlying generative MIL hypothesis (Footnote 1: Two tasks are Standard MIL, one is Threshold MIL, and a fourth is indeterminate but closest to the Generalized MIL of [27].). The key difference of our work is that we create synthetic datasets to test that a model respects the MIL assumptions, rather than to benchmark accuracy. We will first review the older, predominantly Support Vector Machine (SVM) based history of MIL models that we include as comparison points. Then we consider the more recent deep-learning counterparts.

2.1.1 Historical Non-Deep MIL

Issues with under-specification of the MIL problem had been previously identified in the seminal survey of Foulds and Frank [27], who synthesized many implicit MIL extensions into a set of generalized and well-specified MIL types. As noted by this prior work, many MIL papers from this time period do not provide proof of, or attempt to enforce, the MIL assumption. Thus, while they may be exploring a broader scope of the MIL hypothesis space, the developed solutions may still fundamentally not satisfy the definition of any MIL model. We will briefly review some significant non-deep MIL models that we include as comparison points, and their status with respect to the MIL assumption. Most notably, the mi-SVM and MI-SVM algorithms [28] are correct by construction with respect to the standard MIL model that we will discuss further in § 3.1.
The MI-SVM in particular introduces the idea of a "witness", where the bag label is inferred from a singular maximum-responding instance, thus incorporating the standard MIL assumption. SIL is intentionally MIL-violating by construction [29]. The NSK and STK algorithms [30] were previously recognized not to abide by the MIL hypothesis [27]; even though the paper includes formal proofs on the learning theory, the MIL constraints were neglected. Not analyzed previously, we also include two additional models. First, the MissSVM [31], which uses a semi-supervised SVM approach with the "single witness" strategy to guarantee the standard MIL model. Second, the MICA model [32], which is invalid under the standard MIL model because it uses a convex combination of points in the positive bag, and thus does not preclude the possibility of a negative sample.

2.1.2 Deep MIL Models

The first MIL neural network, by Zhou and Zhang [33], was later re-invented as the "mi-Net" model (Footnote 2: Notably, this was a mischaracterization and it should have been named "MI-Net" going by the original naming scheme, but the names mi-Net and MI-Net with the incorrect designation have stuck, and so we repeat them.), and directly translates the "witness" strategy [28] to a neural network, using weight sharing to process each bag independently, produce a maximal score, and then take the max over those scores to reach a final decision. This re-invention was done by Wang et al. [34], who added "MI-Net" as a "better" alternative by concatenating the results across bags, allowing a final fully-connected layer to make the prediction by looking at all instances without any constraints. This error allows the MI-Net to learn to use the absence of an instance as a positive indicator, thus violating the MIL assumption. The same is true of the MIL pooling layer by Ilse et al. [35] (which forms the basis of their Attention MIL), the Graph Neural Network based GNN-MIL of [36], the Transformer-based TransMIL [37], and the Hopfield MIL model of [38]. These latter five deep-MIL models have formed the foundation of many extensions that share the same fundamental designs/prediction mechanisms, with various tweaks to improve training speed or handle large medical images [39-43]. For this reason, we test these five deep-MIL models as exemplars of the broader deep-MIL ecosystem, and show that all five fail a simple test. Two additional deep models are included, which we note as distinct (because they respect MIL but are rarely used) from the preceding five highly popular methods. The first is mi-Net, the older and not widely used model of [33], which respects the standard MIL assumptions. The second is CausalMIL [44, 45], the only recent line of MIL research of which we are aware that properly considers the standard MIL assumption, producing an enhanced version of the "witness" strategy. It does so by representing the problem as a graphical model to infer per-instance labels. While Zhang et al. [44] note the causal nature of MIL modeling to inform their design, they did not document that other deep-MIL approaches fail to respect the MIL assumptions.

3 MIL Unit Tests

The prior works in deep-MIL research have all cited the seminal Dietterich et al. [46] for the MIL problem, without elaborating further on the assumptions of the MIL model. As denoted by Foulds and Frank [27], there are many different generalizations of the MIL hypothesis to more complex hypothesis spaces, all of which require respecting that it is the presence of some item(s) that induces a positive label. We will focus on Weidmann's Concept Hierarchy [47], which includes Dietterich et al.
[46] as the most basic MIL hypothesis space, and test it along with a generalization of the MIL problem. We note that an algorithm passing a test is not a certificate of correctness. Thus, if an algorithm passes the generalized Weidmann MIL tests (specified below) but fails the basic Dietterich test (specified below), it means the model fails all possible MIL models, because it has failed the most foundational MIL test. Our code for these tests can be found at github.com/NeuromorphicComputationResearchProgram/AlgorithmicUnitTestsMIL.

We will now formalize the general MIL problem in a notation that can capture both the standard and Weidmann versions of the MIL problem. We leverage this formalization to make it clear what properties our unit tests are attempting to capture, and to discuss how a non-MIL model learns invalid solutions. For all of the tests we consider, let h(x) be a function that maps an instance vector x to one of K concept-classes in C = {∅, 1, 2, ..., K} (i.e., h(x) ∈ C), which includes the null-class ∅. This null class has the role of identifying "other" items that are unrelated to the positive output decision of the MIL problem. The null-class is the fundamental informative prior and useful constraint of the MIL problem space, where any item belonging to ∅ does not contribute to a negative class label prediction. That is to say, only the occurrence of the concept classes c1, ..., cK can be used to indicate a positive label in a valid MIL model [46, 27]. For all k ∈ [1, ..., K], where ck ∈ Z≥0, let g({c1, c2, ..., cK}) be a function that takes in the set of the number of times each concept ck occurred in a bag, and outputs a class label y ∈ {−1, 1} for a negative or positive bag respectively. Given a MIL bag X = {x1, ..., xn}, let 1[predicate] be the indicator function that returns 1 if and only if the predicate is true. Then we can express the generalized MIL decision hypothesis space by Equation 1:

g( ⋃_{k=1}^{K} { Σ_{∀x′∈X} 1[h(x′) = k] } )    (1)

This generalized form can cover multiple different versions of the MIL problem by changing the constraints on the size of the concept class C and the decision function g(·). In the remaining sub-sections, we will use this framework to specify the MIL model being tested, how the test works, and how an invalid MIL model can "solve" the problem by violating the constraints. This is done by specifying constraints on C and g(·) that define the class of MIL models, and a unit test that checks that these constraints are respected by the algorithm. We will do so by specifying a NegativeSample and a PositiveSample function that return bags X that should have negative and positive labels respectively. Each function will have an argument called Training, a boolean variable indicating whether the bag is meant to be used at training or testing time. This is because we will alter the training and testing distributions in a manner that should be invariant to a valid MIL model, but has a detectable impact on non-MIL models. For this reason, we will refer to data obtained when Training=True as the training distribution and Training=False as the testing distribution.
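A small sketch of the decision process in Equation 1 may help: count how often each concept class occurs in the bag, then apply g to the counts. The concept labeler h and decision functions here are toy stand-ins, and the names are ours; the threshold variant previews the formalization in the next subsection.

```python
# Sketch of the generalized MIL hypothesis space of Equation 1.
from collections import Counter

def bag_label(bag, h, g, K):
    """h(x) returns None for the null class, or a concept id in 1..K."""
    counts = Counter(h(x) for x in bag)
    concept_counts = {k: counts.get(k, 0) for k in range(1, K + 1)}
    return g(concept_counts)                       # returns +1 or -1

# Standard (presence-based) MIL: K = 1 and g is "c1 >= 1".
g_standard = lambda c: 1 if c[1] >= 1 else -1

# Threshold MIL: every concept k must occur at least t_k times.
def make_g_threshold(t):
    return lambda c: 1 if all(c[k] >= t[k] for k in t) else -1
```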
In each unit test, our training bags will have a signal that is easy to learn but violates the MIL assumption being tested. There will be a second signal, corresponding to the true MIL decision process, that is intentionally (mildly) harder to detect. At test time (i.e., ¬Training), the easy-but-incorrect signal will be altered in a way that does not interfere with the true MIL classification rule. If a model receives a training-distribution AUC > 0.5, but a testing-distribution AUC < 0.5, then the model is considered to have failed the test. This is because a normally degenerate model should receive an AUC of 0.5, indicating random-guessing performance. Obtaining an AUC < 0.5 means the model has learned a function anti-correlated with the target function. When this occurs simultaneously with a training AUC > 0.5, it means the model has learned the invalid, non-MIL bait concept, which is designed to be anti-correlated in the testing distribution.

To simplify the reading of each algorithmic unit test, we use ∼N(a, Id · b) to indicate a vector sampled from the d-dimensional multivariate normal distribution with mean μ = 1⃗ · a and covariance Σ = Id · b. In all cases we use d = 16 dimensions, but the test is valid for any dimensionality. In many of our tests, the number of items is varied, and we denote an integer z sampled uniformly from the range [a, b] as z ∼ U(a, b). When this value is not critical to the function of our tests, the sampling range is noted as a comment in the pseudo-code provided.

3.1 Presence MI Assumption and Test

We begin with the simplest form of the MIL decision model, as expressed by Dietterich et al. [46]. In this case, the concept class is unitary, with K = 1 and C = {∅, 1}, giving the positive class as the only option alongside the non-contributing null-class ∅. The decision function is g({c1}) = c1 ≥ 1; that is, the label is positive if and only if the positive concept c1 occurs at least once within the bag. Given these constraints, we design a simple dataset test to check that an algorithm respects these learning constraints on the solution. We abuse notation with h(N(0, Id · 1)) := ∅ to indicate that the space of samples from the specified normal distribution is defined as corresponding to the null-class ∅. This first value will be the general "background class" that is not supposed to indicate anything of importance.

Algorithm 1: Single-Concept Standard-MIL
1: function NegativeSample(Training)
2:   if Training then // poisoning
3:     Add p ∼ N(−10, Id · 0.1) to bag X
4:   for b iterations do
5:     Add x ∼ N(0, Id · 1) to bag X
6:   return X, y = −1
7: function PositiveSample(Training)
8:   if ¬Training then // poisoning
9:     Add p ∼ N(−10, Id · 0.1) to bag X
10:  for k iterations do // k ∼ U(1, 4)
11:    c ← coin flip
12:    if c is True then
13:      x ∼ N(0, Id · 3)
14:    else
15:      x ∼ N(1, Id · 1)
16:    Add x to bag X
17:  for b iterations do
18:    Add x ∼ N(0, Id · 1) to bag X
19:  return X, y = 1

To make a learnable but non-trivial class signal, we have two positive-class indicators that never co-occur in the training data. Half of the positive bags will have h(N(0, Id · 3)) := c1, and the other half will have h(N(1, Id · 1)) := c1.
We remind the reader that this is a normal distribution in a d-dimensional space, so it is not challenging to distinguish these two classes from the background class N(0, Id · 1), as any single dimension with a value ≥ 3 becomes a strong indicator of the c1 class. Finally, we have a poison class h(N(−10, Id · 0.1)) := ∅ that is easy to distinguish from all other items and, at training time, occurs only in the negative bags. Let g̃(·) and h̃(·) represent the MIL-violating class-concept and decision functions that should not be learned. This creates an easier-to-learn signal, where h̃(N(−10, Id · 0.1)) := ∅ and the remaining spaces h̃(N(0, Id · 1)) = h̃(N(0, Id · 3)) = h̃(N(1, Id · 1)) := c1, with a decision function of g̃({∅, c1}) := ∅ ≤ 0. This g̃ is easier to learn, but violates the MIL assumptions by looking for the absence of an item (the ∅ class) to make a prediction. It is again critical to remind the reader that the MIL learning problem is asymmetric: we cannot arbitrarily re-assign the roles of ∅ and c1, and so g̃(·) ≠ g(·), because we cannot use ∅ in place of c1.

The entire algorithmic unit test is summarized in Alg. 1, which we term the Single-Concept Standard-MIL Test. We choose this name because there is a single concept-class c1 to be learned, and this test checks obedience to the most basic MIL formulation. Because this test is a subset of all other MIL generalizations, any algorithm that fails this test is not respecting the MIL hypothesis.

Theorem 1. Given an algorithm A(·) : X → R, if trained on Alg. 1 and tested on the corresponding distribution, A fails to respect the MIL hypothesis if the training AUC is above 0.5 and the test AUC is below 0.5.

Proof. g({∅, c1}) = c1 ≥ 1 is the target function. Use ∅p to represent the background poison signal and ∅B to represent the indiscriminate background noise, and let ĉ1 denote the N(0, Id · 3) samples and ĉ2 the N(1, Id · 1) samples. The training distribution contains negative samples (y = −1) of the form {∅p = 1, ∅B}, and positive samples (y = 1) of the form {∅B ≥ 1, ĉ1 = 1} and {∅B ≥ 1, ĉ2 = 1}. By exhaustive enumeration, only two possible logic rules can distinguish the positive and negative bags: either the (MIL) rule ĉ1 ≥ 1 ∨ ĉ2 ≥ 1 ≡ c1 ≥ 1 (where c1 ← ĉ1 ∨ ĉ2, which is allowed under [46]), or the non-MIL rule ∅p = 0. However, a MIL model cannot legally learn to use ∅p, because it occurs only in negative bags. Thus, if the training distribution has an AUC > 0.5 but the test distribution has an AUC < 0.5, the model has learned the non-MIL rule and failed the test.
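The following is a sketch of the Alg. 1 data generator, assuming d = 16 as in the paper. Note that a covariance of Id · b means a per-dimension standard deviation of √b; the helper structure and default bag size are our assumptions.

```python
# Sketch of the Single-Concept Standard-MIL test generator (Alg. 1).
import numpy as np

d, rng = 16, np.random.default_rng(0)

def negative_sample(training, b=5):
    bag = []
    if training:                                   # poison occurs only in training
        bag.append(rng.normal(-10, np.sqrt(0.1), d))
    bag += [rng.normal(0, 1, d) for _ in range(b)]  # background instances
    return np.array(bag), -1

def positive_sample(training, b=5):
    bag = []
    if not training:                               # poison switches bags at test time
        bag.append(rng.normal(-10, np.sqrt(0.1), d))
    for _ in range(rng.integers(1, 5)):            # k ~ U(1, 4) true-signal instances
        if rng.random() < 0.5:
            bag.append(rng.normal(0, np.sqrt(3), d))   # first c1 indicator
        else:
            bag.append(rng.normal(1, 1, d))            # second c1 indicator
    bag += [rng.normal(0, 1, d) for _ in range(b)]
    return np.array(bag), 1
```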
3.2 Threshold-based MI Assumption and Tests

We now turn to the threshold-based MIL assumption, of which the presence-based assumption is a subset. In this case, we have a variable number of concept classes K, and a minimum threshold tk for the number of times a concept class ck must be observed. For all k ∈ [1, K], it must be the case that ck ≥ tk for the rule to be positive. More formally, we have C = {∅, 1, 2, ..., K} and we define the decision function g(·) as:

g({c1, c2, ..., cK}) = ⋀_{k=1}^{K} ck ≥ tk    (2)

where ⋀ is the logical "and" operator, indicating that all K predicates must be true. It is easy to see that presence-based MIL is a subset, by setting t1 = 1 and tk = 0 for all k > 1. Thus, any model that fails Alg. 1 is not a valid Threshold MIL model, even if it passes the tests we devise here. We implement two different tests that check the ability to learn a threshold-MIL model.

3.2.1 Poisoned Test

For our first test, we use a similar "poison" signal h(N(−10, Id · 0.1)) := ∅ that is easier to classify but would require violating the threshold-MIL decision function in Equation 2. This poison occurs in every negative bag at training time, and switches to the positive bags at test time. For the threshold part of the assumption under test, we use a simple K = 2 test, giving C = {∅, 1, 2}. The two exemplars of the classes have no overlap this time, given by h(N(2, Id · 0.1)) := c1 and h(N(3, Id · 0.1)) := c2, with one item selected at random occurring in every negative bag, and both items occurring between 1 and 4 times in the positive bags. This tests that the model learns t1 = t2 = 1. Lastly, generic background instances h(N(0, Id · 1)) := ∅ occur in both the positive and negative bags. The overall procedure is detailed in Alg. 2.

Algorithm 2: Multi-Concept Standard-MIL
1: function NegativeSample(Training)
2:   if Training then // poison
3:     Add p ∼ N(−10, Id · 0.1) to bag X
4:   c ← coin flip
5:   if c is True then
6:     x ∼ N(2, Id · 0.1)
7:   else
8:     x ∼ N(3, Id · 0.1)
9:   Add x to bag X
10:  for b iterations do // b ∼ U(1, 10)
11:    Add x ∼ N(0, Id · 1) to bag X
12:  return X, y = −1
13: function PositiveSample(Training)
14:  if ¬Training then // poison
15:    Add p ∼ N(−10, Id · 0.1) to bag X
16:  for k iterations do // k ∼ U(1, 4)
17:    Add x ∼ N(2, Id · 0.1) to bag X
18:    Add x ∼ N(3, Id · 0.1) to bag X
19:  for b iterations do // b ∼ U(1, 10)
20:    Add x ∼ N(0, Id · 1) to bag X
21:  return X, y = 1

As with the presence test, the MIL-violating decision function is g̃({∅, c1, c2}) = c∅ ≤ 0 to indicate a positive label, which looks for the absence of a class to make a positive prediction, fundamentally violating the MIL hypothesis. Though this test follows a fundamentally similar strategy to the previous unit test, the results are significantly different, as we will show in § 4. This test will help us highlight the need to produce algorithmic unit tests that capture each property we want our algorithms to maintain.

Theorem 2. Given an algorithm A(·) : X → R, if trained on Alg. 2 and tested on the corresponding distribution, A fails to respect the threshold MIL hypothesis if the training AUC is above 0.5 and the test AUC is below 0.5.

Proof. See appendix; structurally similar to the proof of Theorem 1.
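The pass/fail criterion shared by Theorems 1-3 is simple to operationalize; the sketch below shows one way, assuming a generator pair as in the algorithms above and a model exposing fit/decision_function methods. These interfaces are our assumptions, not the authors' exact API.

```python
# Sketch of the unit-test harness: fail = learns training signal (AUC > 0.5)
# while being anti-correlated on the testing distribution (AUC < 0.5).
import numpy as np
from sklearn.metrics import roc_auc_score

def run_unit_test(model, negative_sample, positive_sample, n=1000):
    def draw(training):
        bags, ys = [], []
        for _ in range(n):
            for fn in (negative_sample, positive_sample):
                bag, y = fn(training)
                bags.append(bag)
                ys.append(y)
        return bags, np.array(ys)

    train_bags, train_y = draw(training=True)
    test_bags, test_y = draw(training=False)
    model.fit(train_bags, train_y)

    train_auc = roc_auc_score(train_y, model.decision_function(train_bags))
    test_auc = roc_auc_score(test_y, model.decision_function(test_bags))
    failed = train_auc > 0.5 and test_auc < 0.5
    return train_auc, test_auc, failed
```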
3.2.2 False-Frequency Reliance

Our last test checks for a different kind of failure. Rather than a violation of the MIL hypothesis entirely, we check that the model is not learning a degenerate solution to the threshold-MIL model. To do so, we again use K = 2 classes, so the decision function g(·) does not change, with the same thresholds t1 = t2 = 1 and the positive instances h(N(2, Id · 0.1)) := c1 and h(N(−2, Id · 0.1)) := c2. The negative training bags X include one or two samples of either c1 or c2, but not both. The positive training bags contain one or two samples each of c1 and c2. This gives a direct example, with no extraneous distractors, of the target threshold-MIL model g({c1, c2}) = (c1 ≥ t1) ∧ (c2 ≥ t2).

Algorithm 3: False Frequency MIL Test
1: function NegativeSample(Training)
2:   if ¬Training then
3:     t ∼ U(35, 40)
4:   else
5:     t ∼ U(1, 2)
6:   c ← coin flip
7:   for t iterations do
8:     if c is True then
9:       Add x ∼ N(−2, Id · 0.1) to bag X
10:    else
11:      Add x ∼ N(2, Id · 0.1) to bag X
12:  for b iterations do // b ∼ U(1, 10)
13:    Add x ∼ N(0, Id · 1) to bag X
14:  return X, y = −1
15: function PositiveSample(Training)
16:  for t ∼ U(1, 2) iterations do
17:    Add x ∼ N(−2, Id · 0.1) to bag X
18:  for t ∼ U(1, 2) iterations do
19:    Add x ∼ N(2, Id · 0.1) to bag X
20:  for b iterations do // b ∼ U(1, 10)
21:    Add x ∼ N(0, Id · 1) to bag X
22:  return X, y = 1

However, it is possible for a model not well aligned with the MIL model to learn a degenerate solution h̃ that maps h̃(N(2, Id · 0.1)) := c1 and h̃(N(−2, Id · 0.1)) := c1, and thus learns an erroneous g̃({c1, c2}) := c1 ≥ t̃1. While this solution does respect the overall MIL hypothesis, it indicates a failure of the model to recognize two distinct concept classes c1 and c2, and thus does not fully satisfy the space of threshold-MIL solutions.

Theorem 3. Given an algorithm A(·) : X → R, if trained on Alg. 3 and tested on the corresponding distribution, A fails to respect the threshold MIL hypothesis if the training AUC is above 0.5 and the test AUC is below 0.5.

Proof. See appendix; structurally similar to the proof of Theorem 1.

4 Results

We now review the results of our three unit tests across both deep-MIL models and the older SVM-based MIL algorithms. In every deep-learning case, we generate 100,000 training bags and 10,000 test bags. Each model was trained for 20 epochs, using the Adam optimizer and three layers of the given deep model type. We found this sufficient for each model type to nearly perfectly learn the training set, with the exception of the Hopfield network, which struggled to learn under all tests, even in extended testing with varying numbers of layers and model sizes. For the SVM models, the O(N³) training complexity limited the training size. MissSVM and MICA were trained on only 200 samples, because larger sizes took over a day; all others were trained on 1,000 samples. A test set of 10,000 was still used. For each SVM model, we use a Radial Basis Function (RBF) kernel K(x, x′) = exp(−γ‖x − x′‖²), where γ was set to 0.100 in all tests.
4 Results

We will now review the results of our three unit tests across both deep-MIL models and prior SVM-based MIL algorithms. In every deep learning case, we generate 100,000 training bags with 10,000 test bags. Each model was trained for 20 epochs. Each network was trained using the Adam optimizer with three layers of the given deep model type. We found this was sufficient for each model type to nearly perfectly learn the training set, with the exception of the Hopfield network, which struggled to learn under all tests even in extended testing with varying layers and model sizes. For the SVM models the O(N³) training complexity limited the training size. MissSVM and MICA were trained on only 200 samples because larger sizes took over a day. All others were trained on 1,000 samples. A test set of 10,000 was still used. For each SVM model, we use a Radial Basis Function (RBF) kernel $K(x, x') = \exp\left(-\gamma \|x - x'\|^2\right)$, where γ was set to 0.100 in all tests. This value was found by running each algorithm on a sample of N = 50 training bags across each of the training sets, to find a single value of γ from 10⁻⁴ to 10³ that worked across all SVM models and tests. This was done because the SVM results took hours to run, and obtaining the best possible accuracy is not a goal. The point of our tests is to identify algorithms that appear to learn (high training numbers) but learn the wrong solution (< 0.5 test AUC). For this reason, a simple and fast way to run the algorithms was more important and equally informative.

In our experiments, the only models known and designed to conform to the standard presence MIL assumption are mi-Net, mi-SVM, MI-SVM, and MissSVM. For this reason, we expect these models to pass the first test of Alg. 1. We note that none of the models being tested was designed for the threshold MI assumptions that comprise the second two tests. Still, we will show how the results on the threshold tests are informative to the nature of the model being investigated. We remind the reader that each unit test can be solved perfectly by a model respecting the appropriate MIL assumptions.

4.1 Presence Test Results

Our initial results are in Table 2, showing the training and testing accuracy and AUC for each algorithm against the unit test described by Alg. 1. All deep-MIL models introduced after mi-Net [33] and tested here have failed the test, with the exception of [44]. This makes the increased accuracy/improvement on MIL problems of many prior works suspect. This is because any such model could be learning to check for the absence of a feature, a violation of the MIL assumption that Alg. 1 tests, and thus learning the kinds of relationships that are explicitly forbidden by the hypothesis space.

Table 2: Results for the standard MIL assumption test Alg. 1. Any algorithm that fails this test (testing AUC < 0.5) is fundamentally invalid as a MIL algorithm under all circumstances, and should not be used in cases where the MIL assumptions are important. Failing algorithms are shown in italics.

                  Training        Testing
    Algorithm     Acc.    AUC     Acc.    AUC
    mi-Net        0.991   0.998   0.993   1.000
    MI-Net        1.000   1.000   0.000   0.000
    MIL-Pooling   1.000   1.000   0.000   0.000
    Tran-MIL      1.000   1.000   0.000   0.000
    GNN-MIL       1.000   1.000   0.000   0.000
    CausalMIL     0.999   0.999   0.996   1.000
    Hopfield      0.624   0.495   0.500   0.488
    mi-SVM        0.999   1.000   0.935   1.000
    MI-SVM        1.000   1.000   0.986   1.000
    SIL           0.992   1.000   0.766   0.998
    NSK           1.000   1.000   0.000   0.000
    STK           1.000   1.000   0.466   0.000
    MICA          0.500   1.000   0.500   1.000
    MissSVM       0.995   1.000   0.449   0.551

The results of the older SVM literature are interesting. As noted by Foulds and Frank [27], the NSK and STK models are not actually MIL-respecting, and thus fail the test. However, the SIL model was explicitly designed to ignore the MIL assumption, yet still passes this test. The MICA algorithm, while not designed to ignore MIL explicitly, is not designed to enforce it either, so it also passes the test. The MIL-respecting MissSVM passes, but only marginally. We find these results informative and instructive. They demonstrate that algorithmic unit tests are not certificates of correctness. Rather, failure of these tests is a certificate of an errant algorithm, but they may produce false positives. While the design of a more powerful test is beyond the scope of this article, the work presented here provides practical caveats for the use of such tests in future studies.
Any future MIL paper can use the tests and provide results to the reader to help boost confidence, but the tests should not themselves be used as a means of proving correctness.

Table 3: Results for the threshold MIL assumption test Alg. 2. Any algorithm that fails this test (testing AUC < 0.5) learns the invalid relationship that the absence of an instance indicates a positive label. Failing algorithms are shown in italics.

                  Training        Testing
    Algorithm     Acc.    AUC     Acc.    AUC
    mi-Net        0.735   0.999   0.500   0.000
    MI-Net        0.991   0.807   0.000   0.827
    MIL-Pooling   0.999   1.000   1.000   1.000
    Tran-MIL      0.955   0.949   0.500   0.000
    GNN-MIL       0.978   0.997   0.624   0.678
    CausalMIL     0.717   0.745   0.500   0.500
    Hopfield      0.624   0.540   0.500   0.503
    mi-SVM        0.500   0.857   0.500   0.818
    MI-SVM        0.759   0.887   0.727   0.828
    SIL           0.500   0.861   0.500   0.732
    NSK           1.000   0.889   0.889   0.966
    STK           0.947   0.991   0.000   0.000
    MICA          0.500   0.998   0.500   0.490
    MissSVM       0.640   0.943   0.499   0.763

Of note, CausalMIL is the only recent deep-MIL model we evaluated that is designed to respect the standard MIL assumption, and it passes the test accordingly. While CausalMIL was not designed for threshold MIL, it still passes the next two tests, though with a marginal AUC near 0.5. This is reasonable since it is being tested on a scenario beyond CausalMIL's design. Indeed, it would be acceptable even if CausalMIL failed the next tests, because they are beyond its scope (which is what happens to mi-Net). The goal is that models are tested against the properties they purport to have.

4.2 Threshold Results

Our next two unit tests cover two different aspects of the threshold MIL assumption: 1) that models can learn to require two concepts to denote a positive class, and 2) that they do not degrade to relying on frequency (i.e., that they perform the desired counting behavior for each class). Any algorithm that passes either of these tests, but fails the presence test, is still an invalid MIL algorithm by both the presence and threshold models, because the presence MIL model is a subset of the threshold model.

The results of our first test of Alg. 2 on learning two concepts are shown in Table 3, where only the MIL-Pooling model learns the completely correct solution. This test is most valuable in showing how mi-Net, which is a valid presence MIL model, is not a valid threshold MIL model, reaching an AUC of 0. One may wonder why mi-Net performs poorly, while mi-SVM and MI-SVM pass the test with peculiar results. In the case of mi-SVM, its label propagation step means that instances ∼ N(2, I_d · 0.1) and ∼ N(3, I_d · 0.1) will receive inferred negative labels (from negative bags) and positive labels (from positive bags). There are proportionally more ∼ N(3, I_d · 0.1) samples with positive labels, though, and each positive bag, by having more samples, can select the most-extreme data point (largest positive values in each coordinate) to infer that the positive bags are "more positive" than a negative bag. This results in a non-trivial AUC of 82%. In the mi-SVM case, the 50% accuracy remains because the overlapping and conflicting labels cause the optimization of the slack terms ξ to become degenerate.
Because MI-SVM does not produce conflicted labels, owing to its "witness" strategy, it can instead respond to the most maximal item in a bag, learning to key off of the most right-tail extreme values of ∼ N(3, I_d · 0.1) to indicate a positive label, because the positive bags are more likely to have such extreme values by having more samples; this avoids the conflicting-label problem of mi-SVM. By contrast, the mi-Net model fails due to the increased flexibility of the neural network to learn a more complex decision surface, "slicing" the different maximal values to over-fit onto the training data, resulting in degenerate performance. Note that mi-Net's results do not change with the removal of the poisoned item at test time, as otherwise its accuracy would degrade to zero. MI-Net instead suffers from this problem, and by using the poison token ironically learns a less over-fit solution, allowing it to obtain a non-trivial AUC.

Table 4: Results for the threshold MIL assumption test Alg. 3. Any algorithm that fails this test (testing AUC < 0.5) is not able to learn that two concepts are required to make a positive bag. Failing algorithms are shown in italics.

                  Training        Testing
    Algorithm     Acc.    AUC     Acc.    AUC
    mi-Net        0.689   0.744   0.740   0.496
    MI-Net        0.957   0.992   0.500   1.000
    MIL-Pooling   0.997   0.999   0.500   0.477
    Tran-MIL      0.989   0.998   0.994   1.000
    GNN-MIL       0.965   0.995   0.475   0.000
    CausalMIL     0.688   0.752   0.496   0.602
    Hopfield      0.625   0.493   0.500   0.515
    mi-SVM        0.500   0.738   0.500   0.054
    MI-SVM        0.770   0.875   0.511   0.518
    SIL           0.500   0.778   0.500   0.180
    NSK           1.000   1.000   0.500   0.000
    STK           1.000   1.000   0.996   1.000
    MICA          0.985   0.999   0.482   0.481
    MissSVM       0.785   0.935   0.327   0.093

The discussion of why mi-SVM and MI-SVM are able to pass the Alg. 2 test is similarly instructive as to why they perform worse on the Alg. 3 test, as shown in Table 4. This test checks that the models do not learn to "cheat" by responding to the magnitude of the values or the frequency of a specific concept class's occurrence. Because the frequency of concept classes changes from train to test, {mi, MI}-SVMs learn to over-focus on the magnitude of coordinate features to indicate a positive direction, which inverts at test time. Thus the performance of both methods drops significantly, and mi-SVM ends up failing the test. We also note that between the two threshold tests, we see different algorithms pass/fail each test. MIL-Pooling, Tran-MIL, STK, and NSK have dramatic changes in behavior from test to test. By developing unit tests that exercise specific desired properties, we are able to immediately elucidate how these algorithms fail to satisfy the threshold-MIL assumption. Because Tran-MIL and STK pass the Alg. 3 test but fail the Alg. 2 test, we can infer that both Tran-MIL and STK are able to successfully learn the "two concepts are required to occur" property, but are also able to learn to detect the absence of an instance as a positive indicator, and so fail that test.
" + }, + { + "url": "http://arxiv.org/abs/2306.09951v1", + "title": "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks", + "abstract": "The robustness of modern machine learning (ML) models has become an\nincreasing concern within the community.
The ability to subvert a model into\nmaking errant predictions using seemingly inconsequential changes to input is\nstartling, as is our lack of success in building models robust to this concern.\nExisting research shows progress, but current mitigations come with a high cost\nand simultaneously reduce the model's accuracy. However, such trade-offs may\nnot be necessary when other design choices could subvert the risk. In this\nsurvey we review the current literature on attacks and their real-world\noccurrences, or limited evidence thereof, to critically evaluate the real-world\nrisks of adversarial machine learning (AML) for the average entity. This is\ndone with an eye toward how one would then mitigate these attacks in practice,\nthe risks for production deployment, and how those risks could be managed. In\ndoing so we elucidate that many AML threats do not warrant the cost and\ntrade-offs of robustness due to a low likelihood of attack or availability of\nsuperior non-ML mitigations. Our analysis also recommends cases where an actor\nshould be concerned about AML to the degree where robust ML models are\nnecessary for a complete deployment.", "authors": "Edward Raff, Michel Benaroch, Andrew L. Farris", "published": "2023-06-16", "updated": "2023-06-16", "primary_cat": "cs.LG", "cats": [ "cs.LG", "stat.ML" ], "main_content": "1. Introduction

Companies are increasingly concerned with adversarial attacks on their machine learning (ML) models. In adversarial attacks, a third party wishes to subvert a company's interests by using ML to trick the victim's ML models into behaving in a way that injures or reflects poorly on the company. Simple examples illustrate the risk. Microsoft's Tay chatbot learned on-the-fly and was poisoned by Twitter users into producing racist tweets, generating significant backlash (Davis 2016; Wolf et al. 2017). Google's image classifier labeled two African Americans as "gorillas", which similarly caused public outcry (Barr 2015). Notably, both attacks were perpetrated by regular humans attempting to confuse or find flaws in the algorithms. While these attacks are non-adversarial from a ML perspective, the significant risk of adversarial attacks stems from the thought: how dangerous could this be if vulnerability discovery is automated using ML? The risk of adversarial ML attacks is relevant to companies deploying ML models of all kinds, and particularly to ML security applications such as fraud, malware, and intrusion detection. Among the outcomes are fraud detection models that could be subverted, self-driving car companies that may be at risk of liability, and loan applications that may create excess loss or be forced to behave in apparently discriminatory ways. Techniques for addressing or managing the risk of adversarial ML attacks are an active problem of research. The canonical wisdom to manage the risk of adversarial attacks is to develop so-called robust ML models (Ilyas et al. 2019). A ML model is robust if a third party cannot reliably force the model to behave in a desired way. Robust ML models offer the benefit of being resistant (but not immune) to adversarial attacks. However, making ML models robust is non-trivial and involves a significant up-front training cost (Madry et al. 2018) and an ongoing cost in the form of a higher error rate (Ilyas et al. 2019).
Moreover, most companies for which the risk of adversarial attacks is real are not equipped to address that risk or to develop a strategy for managing it. In a sample from 28 companies in 11 industries, only 6 companies were ready to dedicate staff to building robust ML models (Siva Kumar et al. 2020). Simultaneously, most practitioners are completely unfamiliar with issues related to adversarial ML (Bieringer et al. 2022; Boenisch et al. 2021). We argue that immediately tackling the issue of robustness is likely counterproductive for most companies. Instead, we recommend recognizing a distinction between security and robustness in practice. Security goes beyond a ML model's accuracy and involves the infrastructure around its maintenance, validation, and deployment that builds confidence in a reliable process. Robustness of a ML model can leverage the same processes, but is not just the process: it is the set of mechanisms by which adversarial ML attacks specifically are mitigated, and it carries a distinct cost. Our recommendation stems from the recognition that the need to deal with robustness is not uniform across companies or scenarios. More specifically, it stems from multiple factors about how adversarial ML works, the likelihood that an attack will be deployed, the rate of attacks, and the cost of the lower accuracy that robust models offer. To justify our argument we use the following strategy: First we review the relevant related literature in section 2, and show that the perspective of non-ML solutions to AML problems is apparently absent from the literature. Then we review the primary types of adversarial ML attacks in use today, and the threat models that describe the knowledge an attacker needs to successfully perform an adversarial attack, in section 3. In section 4 we develop a stylized model of the cost trade-offs for the development of a robust ML model. This model then allows us to characterize, for each conjoint attack-threat model, the implied parameter values in our stylized model. By evaluating the model parameters for each attack-threat model pair, we demonstrate that adversarial attacks are less probable in most business contexts. To further justify our assumptions about risk, and thus the need for robust ML, in section 5 we propose a number of general design choices that can be used to mitigate the risk of an AML attack. These recommendations are applied to the most prevalent and purported "real-world" AML attacks we found, in section 6, as case studies. Finally, section 7 offers guidelines for contexts where pursuing robustness in ML models is warranted, along with a standard operating procedure for managers to follow if adversarial ML is a risk to them. We present our conclusions in section 8.

2. Related Work

To the best of our knowledge, ours is the first work to discuss questions around quantifying risk from both an organizational and design perspective. Other important macro-scale aspects of adversarial attacks have been discussed, but not yet connected to larger system and process designs as an effective defense. (Mirsky et al. 2023) surveys the motivations and methods for how an attacker may decide and act against a victim. (Brown et al. 2018) made important observations on how the threat model of an attack could be greater than what many academics consider, by going beyond the standard ‖·‖_p ≤ ε restrictions.
(Zhou et al. 2022) mapped adversarial attacks and defenses against lessons learned and frameworks from cybersecurity, but their focus remained on attacks and defenses based on machine learning, and not on alteration of a larger system or on the managerial decision process in determining if the risk is acceptable. There exist many general surveys and discussions of adversarial machine learning at many different levels, which broadly do not discuss the larger system design (Yuan et al. 2017; Hu et al. 2022; Wang et al. 2022; Li et al. 2021; Biggio and Roli 2018). Mohseni et al. (2022) proposed a Taxonomy of ML Safety by looking at the safety-critical design of machine learning systems. However, we find that the survey provides no discussion of how design changes around the machine learning can mitigate the concern for attacks against the system. Deldjoo et al. (2021) look at adversarial attacks against recommender systems. While issues about real-world problems are discussed (e.g., attacking a real-world system must factor in the temporal nature of recommendations changing over time), no practical real-world attacks are documented "in the wild". Wang et al. (2022) surveyed poisoning attacks against an ML model, and did not mark any examples of this occurring in real life. Though they do mention "well-intentioned" poisoning to enforce things like copyright detection (i.e., defense for something that will be made public), they do not make the full jump to recognizing poisoning as a method of countering future data theft (i.e., defense for something intended to remain private). Their survey also does not identify any recognition of how classic cryptographic key signing can be used to mitigate the risk of poisoning attacks. Hu et al. (2022) perform an extensive survey of model inversion attacks, and do note that Differential Privacy provides a provably secure method of mitigating these attacks. While they mention that DP often has a trade-off that may be too expensive, we note that it has had many successful uses in practice and thus provides a means for mitigating this class of attacks. The larger insight that a system can alter its design to better leverage differential privacy is not discussed. From a different perspective, Paleyes et al. (2022) focus on how to deploy modern machine learning systems and the challenges such deployments face, with adversarial machine learning being but one concern. While they enumerate the basic attack types and some notes on the risk, they provide no guidance on how to defend against such issues from either an ML or whole-system design perspective. We note as well that there has been limited discussion of ML from an institutional perspective in terms of maintenance and holistic design, but none that we are aware of has tackled the heart of design changes to deal with AML risks. Sculley et al. (2015) talked about the technical debt of building real-world solutions, and others have talked about issues in misspecification causing inflated expectations, disappointment, and lack of trust (D'Amour et al. 2020). From a broader system design perspective of computer systems at large, seminal work by Friedman and Nissenbaum (1996) discussed a number of ways bias can be introduced or emerge as a product of the larger picture (e.g., historical context, design choices, and system usage). These works share a broad theme of non-technical mitigations to technical problems, which is applicable to AML.
Though we will include such design changes, we also discuss technical solutions from outside ML to the AML problem. Others have investigated industry-specific design concerns (Kaymakci et al. 2021), refactoring/maintenance (Tang et al. 2021; Gesi et al. 2022; Arpteg et al. 2018), and reproducibility (Forde et al. 2018; Raff and Farris 2022; Raff 2019), but these generally focus on narrow problems and do not address the larger systematic changes required to achieve technical-ML goals.

3. Machine Learning Models and Robustness

3.1 Adversarial ML Attacks

Adversarial examples are samples of input for a ML classification system that are very similar to a normal input example but cause the ML system to make a different classification. Adversarial examples exploit certain properties of ML classifiers and are explicitly and purposefully identified using specific algorithms called adversarial attacks. Though the mathematics of performing AML are not key to our survey, as we focus on non-ML and design solutions, we briefly review them. In most AML literature the input is a d-dimensional feature vector x passed into a model f(·) for which there is a desired output y. The goal is for the adversary A(·), who has the power to perturb the input within some p-norm threshold ε, to satisfy $\|A(x) - x\|_p \leq \varepsilon$ while achieving $f(A(x)) \neq y$.
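Expressed as code, the two conditions above are straightforward to check. The following is a minimal sketch (the helper name and NumPy usage are our own, not from the survey) that validates whether a candidate perturbation constitutes a successful attack under the stated threat model.

    import numpy as np

    def is_successful_attack(x, x_adv, f, y, eps, p=np.inf):
        # Condition 1: the perturbation stays within the p-norm budget eps.
        within_budget = np.linalg.norm((x_adv - x).ravel(), ord=p) <= eps
        # Condition 2: the model is fooled into a prediction other than y.
        fooled = f(x_adv) != y
        return within_budget and fooled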
Many possible targets of attack exist. In one simple scenario, attacks could allow spam or phishing attacks to go undetected by existing ML models, by forcing detection (or classification) models to make incorrect conclusions. This kind of attack can exacerbate existing cyber-security issues. In another plausible scenario, ML systems for screening credit-card charges could be fooled into classifying fraudulent transactions as non-fraudulent, allowing the adversary to cause direct financial harm and self-enrichment. Other attack scenarios could result in personally identifiable information (PII) data leaks or in data theft, such as replicating a company's large investments in data labeling, warehousing, data cleaning, and model building.

It is useful to group adversarial attacks into three general types, ordered based on the nature of the risk to the enterprise (Siva Kumar et al. 2020):

• Poisoning attacks seek to modify the data used to train a victim's ML algorithm so that the attacker's goals are achieved whenever a model is trained on the poisoned data. Influencing models to have very low accuracy could amount to a "Denial of Service" attack. Poisoning could also insert "backdoors" that allow the adversary to control a model by including a special key in model input, or otherwise alter the ML model's behavior.

• Inversion attacks seek to obtain information about the model itself or the data used to train it, be it by observing its behavior or by physical inspection of its parameters. They allow an adversary to create their own copy of a ML model (i.e., theft of capability) or to infer the data used in the model (e.g., PII violations and extracting individuals' data from the model).

• Evasion attacks trick a victim ML model into making an errant prediction due to what should have otherwise been a benign manipulation of the input data. The classic example of an evasion attack is how non-robust computer vision models can be fooled into making an incorrect, nonsensical prediction by altering a single pixel in the input image.

The real-world risk of these attacks cannot be properly accounted for without considering the threat model, which is a description of the assumptions about the information required for the attack to operate successfully. The three general cases are:

• White-box attacks: the adversary knows everything about the victim's ML model, including the algorithm used and any defensive techniques, and has its own copy of the training data.

• Grey-box attacks: the adversary knows some information, but not all details. Their ability to interact with the system is limited in some ways.

• Black-box attacks: the adversary has only minimal access to the model, such as via an API that receives input cases and returns answers. Their ability to perturb any data is limited to data before it reaches the system. They do not know what kind of model or data is used.

3.2 Robust ML Models

A touted solution to adversarial ML attacks is to build robust ML models that are less susceptible to attacks. "Robustness" is a term widely used in the context of ML to indicate that a ML system cannot be fooled by adversarial examples (e.g., see https://www.robust-ml.org/). The exact nature of how to make ML models robust is an issue of active research. For the purposes of this survey we will use a broad definition: robust models are ones that are not easily subverted by an adversary. One might infer that a robust model should be more accurate on all kinds of naturally occurring data and situations, though this is not commonly the case in practice.

Obtaining robustness is non-trivial and imposes two significant costs. First, making a model robust can require a significant capital expenditure. While a conventional (non-robust) ML model can cost between $40 and $100,000 to train (Strubell et al. 2019), creating a robust ML model requires expertise and significantly more training, which can easily be 100 times to over 1000 times as expensive computationally (Madry et al. 2018). Moreover, when re-training must be done regularly (e.g., on a quarterly basis) this cost is further amplified.

The other cost trade-off is reduced accuracy. Today, robust models are usually less accurate than non-robust models. Non-robust models tend to learn correlative, not causal, relationships that are brittle and thus susceptible to exploitation (Ilyas et al. 2019). These correlations are often useful in terms of predictive accuracy, but by their nature are also non-truths that an adversary can exploit. By contrast, robust models currently have lower accuracy as they forgo weak signals that are correlative but useful.

Considering the cost trade-offs presented by robust models, we posit that model robustness may be warranted only in a small set of circumstances. To make our case, we present a stylized model that shows the current best course of action for most firms is to maximize the accuracy of their ML application models and focus less on producing robust variants of their models.

4. Stylized Model of Evasive Robustness-Security Trade-Offs

The following stylized model formalizes the trade-offs in cost presented by robust ML models. It enables us to compare the risk associated with adversarial attacks against a non-robust model with those against a model robust to adversarial attacks.
Following standard convention, we model risk exposure to an attack as RE = (probability of an attack) × (cost consequence of the attack). For simplicity, assume the following parameters:

• A: accuracy rate of the normal model, i.e., (1 − A) is the error rate of the normal model
• p: fraction of all predictions that are adversarial attacks; we pessimistically assume that all adversarial attacks on a non-robust model are successful, and optimistically assume that all adversarial attacks on a robust model are unsuccessful
• z: reduction in accuracy rate in the robust model; the accuracy rate of the robust model is (A − z), so (1 − (A − z)) = (1 − A + z) is the error rate of the robust model
• c_n and c_a: cost of a normal predictive error and cost of an adversarial predictive error, respectively

Ignoring the cost of training a robust model, the break-even point between a normal vs. robust model is RE_normal = RE_robust, which can be expanded in terms of our assumptions as Equation 1 and then simplifies to Equation 2:

$$\underbrace{c_n(1-p)(1-A)}_{\text{Normal Errors}} + \underbrace{c_a p}_{\text{Normal Victim}} = \underbrace{c_n(1-p)(1-A+z)}_{\text{Robust Errors}} + \underbrace{c_a p(1-A+z)}_{\text{Robust Victim}} \quad (1)$$

$$p c_a z + c_n z = p \left( c_a A + c_n z \right) \quad (2)$$

If we assume for simplicity that c_n = c_a, the equality simplifies to A p = z, and the break-even condition indicates that the accuracy penalty z must not exceed the frequency of attack multiplied by the base accuracy of the model. For example, if a normal model was 95% accurate in production use, and we believe 1% of predictions are adversarial, a robust model must be at least 95% − (95% × 1%) = 94.05% accurate to be attractive to build. The more frequent attacks are, the more leniency there is for the penalty z. This only considers the cost of reduced accuracy in robust models, one of the two cost trade-offs of robustness.

If we factor in the second cost trade-off and consider the training cost of a robust model, the risk exposure becomes more lopsided. A non-robust model is at risk of victimization, fraud, and other issues. A robust model presents partial mitigation to that risk and brings with it the risk that a large premium will be paid for negative net impact. Specifically, if we add the cost premium for making the model robust, denoted D_R, the break-even point between a normal vs. robust model becomes:

$$c_n(1-p)(1-A) + c_a p = D_R + (1-A+z)\left(c_n(1-p) + c_a p\right)$$

If we continue to assume that c_n = c_a, the break-even condition indicates that $c_n = \frac{D_R}{A p - z}$: the cost of errors on adversarial attacks must exceed the cost premium D_R, amplified by the frequency of adversarial attacks (against the baseline accuracy) modulated by the loss in accuracy z, for a robust model to be worthwhile to build. The denominator will always be < 1, so this can only increase the required cost; notably, the penalty z can push the denominator negative, indicating that the inequality becomes unsatisfiable once the added training costs are included.

The above analyses lead us to recommend against building a robust model for most companies. The benefit of a robust model is greatest when the standard model is the least accurate, which implies that the robust model will not be effective because it will be further degraded. Of course, the recommendation may be different under certain model parameters. This highlights the importance of determining the normative risks of errors when considering a robust model. More importantly, it is essential to determine if the risk is asymmetric and realistic in order to fully define the risk exposure. Next, we review these concerns in greater detail by tailoring the stylized model to various conditions.
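The break-even arithmetic is easy to carry out numerically. Below is a small sketch of the stylized model using the example numbers from the text; the function names are our own illustrative choices.

    def risk_exposure_normal(A, p, c_n, c_a):
        # Errors on normal traffic, plus fully successful adversarial predictions.
        return c_n * (1 - p) * (1 - A) + c_a * p

    def risk_exposure_robust(A, p, z, c_n, c_a, D_R=0.0):
        # The robust model pays premium D_R and errs at rate (1 - A + z) on
        # both normal and adversarial traffic (attacks assumed unsuccessful).
        return D_R + (1 - A + z) * (c_n * (1 - p) + c_a * p)

    # Example from the text: A = 0.95, p = 0.01, equal error costs, no premium.
    # The robust model breaks even at z = A * p = 0.0095, i.e., 94.05% accuracy.
    A, p = 0.95, 0.01
    z_break_even = A * p
    assert abs(risk_exposure_normal(A, p, 1.0, 1.0)
               - risk_exposure_robust(A, p, z_break_even, 1.0, 1.0)) < 1e-12

Any positive D_R shrinks the admissible z further, which is the source of the un-satisfiability noted above.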
4.1 A Stylized Model of Cost-Benefit in Building Robust Models

There are two key factors on which one may argue against our initial recommendation. We list these two below, so that we may further analyze the spectrum of scenarios and risk factors.

• Cost of Adversary Attack. It is not realistic to assume equal costs for normal and adversarial predictive errors (c_n = c_a). Consider an errant approval for a loan by an ML system. In the normal context, people may voluntarily return the money, or the legal system provides a means to compel the return of capital (at some expense). In the adversarial case, an attacker may have arranged a transfer to an uncooperative jurisdiction or arranged for money laundering via the dark web (van Wegberg et al. 2018). In such cases, the costs of errors may differ by orders of magnitude (c_a >> c_n), and thus make the development of robust predictive models viable.

• Rate of Adversarial Attacks. The rate of likely attacks and the rate of their success may vary by the type of attack and threat model. A model trained on only publicly available data that lacks any PII information is unlikely to be the target of theft, as there is no competitive advantage or unique value in the data used to construct the model. Similarly, a model that is used for internal purposes that do not interact with any customer is less likely to be targeted for evasion compared to a fraud model that interacts with real (and potentially adversarial) customers.

In light of this analysis, Table 1 presents a 3×3 grid that intersects types of adversarial attacks (poisoning, inversion, and evasion) with threat models (white-, grey-, and black-box) and derives, for each attack-threat model pair, implied parameter values for our stylized model. This captures the conditions for return on investment in developing robust ML models. As seen in Table 1, the nine possible attack/threat model combinations are categorized into four distinct groups that inform the risk analysis process. These groups are based on the viability of the attack-threat combination and the tools that exist today to mitigate that threat. These categories are:

• Realistic: the attack could be carried out with a reasonable expectation of success. The threat model supports the attack (i.e., the attack could happen in real life), and it can be achieved with measurable impact. A robust model would be an important defensive posture in these cases.

• Unrealistic: the attack is not practical in most cases and unlikely to occur, absent negligence. The cost of developing a robust model is not justified.

• Solvable: the attack could be carried out in practice, but there are readily available techniques that can very effectively mitigate the risk without the need to deploy a robust model.

• Impractical: the attack could be carried out, but the information required to perform the attack is so significant that it would present an unreasonable cost for the attacker. A robust model is unwarranted in this case because the probability of an attack is low.
As shown in Table 1, white-box attacks are generally impractical because the adversary is a powerful attacker who knows everything about the victim's ML systems and data. Such an adversary probably has easier ways to effect negative outcomes. For example, consider white-box evasion attacks that alter medical imaging to change a patient's diagnosis to or from cancer (Finlayson et al. 2019). While the thought is horrifying, the amount of effort required to access and alter information, undetected, in order to pull off such an attack is considerably more than simply altering a medical record to achieve the same result (Raff et al. 2019a). Similarly, in the context of malware detection, it is easier to evade all modern anti-virus systems by employing easy, commoditized "packing" functions that obfuscate the contents of the malware, without the adversary having to rely on ML to craft undetectable malware (Aghakhani et al. 2020). In cases such as these, alternate methods of attack eliminate the need for the attacker to perform an adversarial ML attack. In practice, the difficulty of performing a real-world white-box attack is the likely reason for the lack of observed adversarial attacks in the wild. Nonetheless, white-box attacks can happen, especially when a cybersecurity incident results in a data exfiltration event (Nadler et al. 2019). However, this would necessarily occur after a cybersecurity incident, making a robust model a secondary, rather than primary, line of defense. In summary, the most likely conditions where a robust ML model is useful are those where a company's IT infrastructure has already been compromised.

Table 1: Evaluation of the relative risk in the stylized model of adversarial attacks. The values in each entry correspond to the stylized model in section 4, and are inferred from the scenario and our judgment.

    Poisoning
      Black-Box: p ≈ 0, z = Large, D_R = Large, c_a = Large, RE = p × c_a ≈ 0. Unrealistic.
      Gray-Box:  p ≈ 0, z = Large, D_R = Large, c_a = Large, RE = p × c_a ≈ 0. Unrealistic.
      White-Box: p ≈ 0, z = Large, D_R = Large, c_a = Large, RE = p × c_a ≈ 0. Impractical.
    Inversion and Model Stealing
      Black-Box: p = Low, z = Low, D_R = Low, c_a = High, RE = p × c_a > 0. Solvable.
      Gray-Box:  p = Low, z = Low, D_R = Low, c_a = High, RE = p × c_a > 0. Solvable.
      White-Box: p ≈ 0, z = High, D_R = ∞, c_a = High, RE = p × c_a ≈ 0. Impractical.
    Evasion
      Black-Box: p = Low, z = Low, D_R = Large, c_a = Low-High, RE = p × c_a > 0. Realistic.
      Gray-Box:  p = Medium, z = Medium, D_R = Large, c_a = Low-High, RE = p × c_a > 0. Realistic.
      White-Box: p ≈ 0, z = Large, D_R = Large, c_a = Low-High, RE = p × c_a ≈ 0. Impractical.

Compared to white-box attacks, gray- and black-box attacks are more reasonable, especially when dealing with computer vision. Neural networks that are pre-trained on the publicly available ImageNet dataset (He et al. 2015; Russakovsky et al. 2015) are ubiquitous starting points for building computer vision systems. This makes some of the details of such a network easy to guess. Nonetheless, when researchers have evaluated the feasibility of gray-box attacks in real-world settings with imperfect knowledge, attacks are far less successful than would normally be expected. These attacks have a 33% or lower success rate, compared to a 100% success rate in the white-box case (Richards et al. 2021).
This significantly reduces the scope of viable attacks happening in the real world, especially when we consider that black-box attacks have even less information available. As a result, the risk of black/gray-box attacks depends on the type of attack. This discussion leads us to focus on gray- and black-box threat models, under which evasion attacks are realistic, inversion attacks are solvable, and poisoning attacks are unrealistic.

Evasion attacks are the most realistic because they require the least information about the victim's ML model. In the evasion attack scenario, only commonly available API access is required to submit data and observe an outcome. The attacker can submit multiple queries via the API, creating an attack one step at a time. This creates a trade-off for handling the evasion attack avenue: pay the training cost premium for robustness or take on greater risk. Analytically, we see why the surprising suggestion may be to delay robustness.

Inversion attacks, where information is leaked by the model, are also realistic but solvable. Like evasion attacks, inversion attacks need only API access. However, tools to mitigate this risk exist for inversion attacks. For attacks trying to obtain the original training data or extract PII information, a technique known as Differential Privacy (Dwork et al. 2006) provides a tool that is: (1) easy to add in an API, and (2) able to provide provable security for the results. While Differential Privacy comes at some cost of accuracy, because it works by adding randomness to the process, it can be fine-tuned to balance between the extremes. Notably, Differential Privacy has been used successfully by the U.S. Census Bureau (Machanavajjhala et al. 2008), Google (Erlingsson et al. 2014), LinkedIn (Rogers et al. 2020), and Microsoft (Ding et al. 2017). Though not a complete solution to data theft, it can help slow down the theft process (Cheng et al. 2020). In sum, under this setting, we do not see a need to add robustness to the ML model, because a different tool allows us to obtain stronger guarantees as a post-processing addition with proven industry success.

Model theft through an inversion attack is problematically unrealistic. If an adversary wishes to steal a model, wouldn't it be easier to just build their own version? Stealing a model requires time and resources, combined with the fact that the stolen model will only be as good as or worse than the original. This leaves the attacker always behind the victim in model capabilities. For example, OpenAI created the GPT language model (Radford et al. 2019) and licensed it to Microsoft for exclusive use for $1 billion (noa 2020a). Yet much of the capability was replicated by the open-source community and released for free a year later (Black et al. 2022).

Last are poisoning attacks. In more thorough evaluations of poisoning attacks that still favor the attacker, they are shown to be often ineffective (Radiya-Dixit et al. 2022). Similarly, there are practical options for avoiding poisoning through data oversight processes (i.e., supply-chain validation applied to your data labeling, to ensure you know who is labeling and how) or by rolling back to data versions from before poisoning attacks became a public threat. If the attacker needs to modify significant amounts of data, their efforts are likely better spent in other ways.
5. Design Mitigations That Can Avoid the Need for Robust ML

Having elucidated a trade-off between robust and non-robust models, one that is mitigated by the likelihood of an attack occurring against the average entity, we now seek to answer how such risks can be managed without relying on robust ML. This is, we argue, the most desirable outcome because it allows obtaining the benefits of robustness at lesser cost. We say lesser because mitigations are not free: they can create additional friction for a user or more work for an implementer. Still, our contention is that the recommendations below are better in the larger degree of certainty they provide operators to make an informed risk decision, and they can confidently reduce the likelihood of attack p in the stylized model from section 4. In section 6 we will review several "real-world" adversarial ML attacks that largely could have been mitigated by these recommendations. Because they are meant to be general-purpose recommendations, we avoid specific scenarios unless didactic in nature.

5.1 Poisoning Mitigation

5.1.1 Cryptographic Signatures of Data/Label Pairs

A simple strategy we have not seen discussed in the case of poisoning is to create an auditable trail of validity. That is, the threat model of poisoning attacks is often that the attacker can alter the label of your already collected data, or the content of the image. If one augments the data labeling pipeline with a cryptographic digital signature (Goldwasser et al. 1988) of the tuple (data, label), a poisoning attack's likelihood becomes significantly reduced. Any alteration of the data, or the label, will result in the signature failing to validate, and thus knowledge that an attack (or data corruption) has occurred. Systems for designing and implementing key management are widely used, with NIST guidance available (Barker et al. 2013). A minimal sketch of such a signing pipeline is given after this section. While it may still be plausible for an attacker to poison the source of the data, this imposes considerably stronger requirements on the adversary to be effective. Either they must:

1. Create a perturbed image, which will get labeled correctly (because they alter it before entry to the labeling process), and thus they require a more powerful attack to subvert a downstream model despite correct labeling.

2. Infect the labeling process, e.g., by being hired as a labeler, and generate sufficient bad labels to alter the results while also not getting detected as a nefarious labeler.

3. Somehow achieve both (1) and (2) simultaneously.

In all three cases, the attacker can only impact new data, which gives the defender the ability to roll back to a known good state as a further mitigation. In addition, case (2) becomes increasingly more difficult if data is passed to multiple labelers to obtain better quality labels, which is a standard recommended practice (Ratner et al. 2020; Whitehill et al. 2009; Ratner et al. 2016).
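The following sketch illustrates the signing idea using Ed25519 signatures from the pyca/cryptography package. The pairing of an image hash with its label is our own illustrative encoding, and real deployments should follow the NIST key-management guidance cited above.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()  # held by the labeling pipeline
    verify_key = signing_key.public_key()       # distributed to training jobs

    def canonical_message(image_bytes: bytes, label: str) -> bytes:
        # Bind the exact image content and its label into one message.
        return hashlib.sha256(image_bytes).digest() + b"|" + label.encode("utf-8")

    def sign_pair(image_bytes: bytes, label: str) -> bytes:
        return signing_key.sign(canonical_message(image_bytes, label))

    def verify_pair(image_bytes: bytes, label: str, signature: bytes) -> bool:
        # Any change to the pixels or the label invalidates the signature,
        # flagging possible poisoning (or corruption) before training begins.
        try:
            verify_key.verify(signature, canonical_message(image_bytes, label))
            return True
        except InvalidSignature:
            return False

Training jobs can then drop, or quarantine for audit, any example whose signature fails to verify.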
5.2 Model Inversion Mitigation

5.2.1 Modify Predictive Task to Enable Better Differential Privacy

Differential privacy works best when it is naturally challenging for one datum's contents to be distinguished from the others. This tends to occur with increasing frequency as the amount of data used increases, but the approach is still susceptible to outliers in the data. An option we do not see discussed for improving the conditions of differential privacy's success is to re-cast the features or predictive task used in the process. Applying normalizing transformations such as the Box-Cox transform naturally makes the data better behaved, following a limited distribution, and outliers or rare classes can be lumped into a single "other" category to increase the probability mass of a singular event (easier to make private) rather than many unique or extreme values (hard to keep private). While this may reduce the utility of the model, it provides a means of engineering around the limitations of differential privacy, as sketched below.
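A minimal sketch of the recast-then-privatize idea follows: rare categories are folded into an "other" bucket, then counts are released with the Laplace mechanism. The helper names, the epsilon, and the min_count threshold are our own illustrative choices.

    from collections import Counter
    import numpy as np

    def recast(values, min_count=50):
        # Fold rare categories into one 'other' bucket so no single outlier
        # value sits alone in an easily re-identified category.
        counts = Counter(values)
        return [v if counts[v] >= min_count else "other" for v in values]

    def dp_counts(values, epsilon=1.0, rng=None):
        # Laplace mechanism on per-category counts. Each record contributes
        # to exactly one count, so the L1 sensitivity is 1 and the noise
        # scale is 1/epsilon.
        rng = rng or np.random.default_rng()
        return {k: c + rng.laplace(0.0, 1.0 / epsilon)
                for k, c in Counter(values).items()}

    # Example: release privatized label counts through an API response.
    labels = ["approved"] * 900 + ["denied"] * 80 + ["manual-review"] * 3
    noisy = dp_counts(recast(labels, min_count=10), epsilon=0.5)

Here the three "manual-review" records are folded into "other" before noise is added, so the noise no longer has to hide a nearly unique event.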
5.2.2 Air-Gapped Storage of Archival vs Active Data

Separate from the use of signed data/label pairs discussed in § 5.1.1, a further mitigation against model theft is to use an air-gapped separation between an archival store of trusted labeled data and a working production set of data. This working set can then be defensively poisoned or "watermarked" (Maini et al. 2021; Song and Shokri 2020; Liu et al. 2018), such that if theft of the data occurred via a cyber-security incident, it may be possible to identify a third party using the stolen data.

5.3 Evasion Mitigation

5.3.1 Restrict Cases Where Predictions are Made

While a seemingly tautological argument, one can mitigate the risk of evasion attacks by predicating the prediction on a first factor. For example, requiring sign-up with a credit card, use of an RSA key to receive API access, or other barriers to use can effectively mitigate the risk of attack. The key in such cases is to make the barrier to entry a greater risk or effort to attack than the ML model itself; this sets a new floor on the minimum amount of effort required to attack, and thus lowers risk.

Restricting predictions need not literally mean "restrict when predictions are made via a source of friction". Another form of restriction is to use additional non-ML and human-crafted rules, so long as they are designed with the intent to limit the ability to subvert the rule itself. This is an intrinsically easier task when one is already hand-crafting a business process or rule and can reason through its validity; if the rule could be easily subverted, it is likely not a good rule to use.

Finally, restricting predictions can also mean restricting the useful lifetime of the predictions. Every attack intrinsically requires some amount of time to construct, and if the utility of the attack expires because the underlying model has changed in a non-trivial way, a significant barrier to attack effectiveness is created. For example, quarterly retraining of a production model is a slow means of expiring the useful life of an attack (essentially creating concept drift for the attacker), and given real-world misspecifications this can dramatically reduce attack success (Richards et al. 2021).

5.3.2 Audit Predictions with Gold-Label Evaluation

A practice we recommend in any situation, regardless of concern for adversarial attack: randomly auditing the predictions made, by putting them through a rigorous labeling process, imposes a probabilistic ceiling on the largest value of p, the probability of a prediction being an attack, that may occur in practice. Critically, this then allows one to apply the stylized risk framework of section 4 more empirically. It is worth noting that auditing can go beyond simple input/output checks by incorporating lessons from cybersecurity and marketing, i.e., "know your customer". For example, if a user of a machine-translation service appears to live in France, and is querying multiple models in multiple language pairs that do not include French, there is a greater risk they are performing some subversive behavior. By obtaining customer information and knowledge about intended and emergent use cases, errant and unusual behaviors can be flagged for follow-up to validate their authenticity. Whether the flagging results in an automated response would be a factor of the risk of attack success vs. user friction in using the service.

6. Analysis of "Real-World" Situations

Much literature and documentation exist today on "real-world" cases of adversarial machine learning. If we take real-world to mean that the model/events under consideration occurred in a non-academic setting (i.e., a business, government, or organization that did not desire the events to happen) in an intended malicious fashion (i.e., an actual or risked harm occurring with intentionality from the perpetrator), these conditions are often not satisfied. To emphasize this, we survey a number of published surveys, papers, and publicly documented examples of allegedly real-world cases of adversarial machine learning. In doing so we document a number of issues that prevent this standard from being met, and we pair with them how the design mitigations from section 5 could have alleviated risk. We categorize the potential issues that would limit an attack's risk as:

1. The work involves a threat model where the adversary must forgo an easier alternative that would achieve the same goals.

2. The work involves a threat model where, considering the real-world motivation and goals of an attacker, the attacker's goals would not be satisfied. In such a case, there is no reason to perform the attack.

3. The defender can use existing techniques outside of machine learning to largely mitigate the likelihood of being attacked or of attack success.

4. The attack has occurred in a purely academic context with curated datasets, and does not consider the scope of a full system.

5. The attack was performed by academics, in a manner simulating real-world situations or even against real production systems, but did not occur with malice. While the attack can take place, it is not clear who would want to perform the attack in real-life situations, and what the actual level of risk is.

6. The attack was performed by a third party and disclosed to the victim as a part of "vulnerability disclosure", and the victim took remediating actions. There are clear abilities and reasons for an adversary to perform the attack, but it might not have been confirmed in the wild.

Our case studies are derived primarily from the MITRE Atlas case-study list of real-world adversarial attacks (https://atlas.mitre.org/studies/). We filter from this list any case study that either: 1) does not have any other reference to the event with details, or 2) relied on human-only efforts to perform the attack (e.g., the Microsoft Tay example). We augment this with examples of notable or highly cited works that purport to be "practical" or "real-world" examples of adversarial attacks. This leaves us with eight examples, as summarized in Table 2.
We note that, as shown, we do not have examples of gray-box inversion or poisoning attacks, or black-box examples of poisoning attacks, despite our efforts to find examples of these situations explicitly. We speculate this is due to the difficulty of such situations, as noted in Table 1.

Table 2: Summary of the threat model used for each case study. The mitigations proposed in section 5 reduce the attack likelihood p in all threat models.

    Poisoning
      Mitigations: Cryptographic Signatures of Data/Label Pairs (§ 5.1.1)
      Case studies: Face Detection "hat", Facial Recognition Leak
    Inversion and Model Stealing
      Mitigations: Modify Predictive Task to Enable Better Differential Privacy (§ 5.2.1); Air-Gapped Storage of Archival vs Active Data (§ 5.2.2)
      Case studies: Translation Model Theft, Search Result Copying, Audio Speaker Verification
    Evasion
      Mitigations: Restrict Prediction Use (§ 5.3.1); Audit Predictions (§ 5.3.2)
      Case studies: Government Tax Theft, Spam Filter, "Good Strings" Evasion

We now go through each case study and briefly summarize it, the issues with its realism per our six issue types, and how the scenario could have been remediated. Such remediation may not be perfect, but it highlights what we believe would be the most time/cost-efficient method of addressing the risk of adversarial attack. In all cases we find that robust ML methods appear to be most effective when either 1: a prior cyber-security event resulted in data theft, making white-box attacks an enhanced risk, or 2: the model requires deployment to end-users, where a motivated adversary can reverse-engineer the details of the model from the deployed executable, allowing an effective white-box attack. In all other situations, we find that a more thorough red-team style analysis and the mitigation strategies from section 5 could be sufficient. When appropriate, we make note of any particular "takeaway" lesson from each case study.

6.0.1 Malware Evasion via "Good Strings"

Situation: Anti-virus product Cylance has a machine-learning-based detector and a whitelist used to avoid false positives. The whitelist could be reverse engineered from the product, and tokens used by the whitelist then inserted into malicious files. This resulted in benign predictions (Ashkenazy and Zini).
Issues: item 6.
Remediation: Robust machine learning methods are one of the only viable options in this scenario, and are reported to be a part of the mitigation used. While other techniques could have helped, the fundamental issues that enable the attack are challenging to mitigate any other way.
Take away: The attack works in particular because AV companies make their products available to home users, giving anyone sufficient access to perform the attack. Notably, this is also a case where better awareness of the literature may have helped, as it was a rediscovery of the "good word" attack on spam filters, which has a number of potential mitigations (Lowd and Meek 2005; Jorgensen et al. 2008; Fleshman et al. 2019; Incer et al. 2018).

6.0.2 Machine Translation Model Theft

Situation: A machine-translation model can be replicated by querying the product with sentences to gain examples in another language; a replica model can then be trained that matches the performance of the original service (Wallace et al. 2020).
Issues: item 4, item 5, item 2.
Remediation: § 5.3.2. None is needed, as the attack considers only one pair of languages, and not the over 100 languages that such products support (https://translate.google.com/intl/en-GB/about/languages/). Considering all pairs of languages would require running the attack over 5,000 times and collecting original real sentence data for each language to then translate. At this scale of effort, there is little reason to steal the model rather than build one's own translation pipeline from the ground up.
Take away: The effort needed to perform an inversion attack needs to be obviously less than the effort to build a product/system that achieves the same end goal in a natural manner. If there is the possibility that the attack is of comparable cost, but also carries with it a risk of legal repercussions and an inability to compete long-term, then the total cost accounting for risk and opportunity cost is likely higher.

6.0.3 Facial Recognition Dataset Leak

Situation: The training data and code for a facial recognition service were obtainable by anyone because a web service was improperly configured. This created the potential for white-box attacks if previously exploited (Whittaker 2020; Cameron et al. 2020).
Issues: item 6.
Remediation: § 5.2.2, § 5.2.1. Robust machine learning methods should become part of the solution, but risk could have been reduced by keeping a watermarked "live" dataset, with the original unaltered data air-gapped from the internet. A better design that only allows a limited query response (i.e., "match/no-match") could have further reduced risk.
Take away: The white-box attack threat, and the need for robust ML as a secondary defense, was caused by a lapse in basic cyber-security.

6.0.4 Face Detection Avoiding "Hat"

Situation: A method is proposed to print out an "adversarial patch" (a piece of printed paper with distorted content) and place it on a hat, such that wearing the hat inhibits facial recognition systems (Komkov and Petiushko 2021).
Issues: item 4, item 2, item 1.
Remediation: § 5.3.1, § 5.3.2. The attack does not take into account how a system would be used, and the authors' own results show that efficacy drops significantly when the patch is applied against other recognition models. Simply using a dynamic threshold, adding cropping, or a human preprocessor to crop out the obvious sticker would mitigate the attack. Notably, if the goal is to avoid recognition, the overt sticker itself signals to the other party that the individual is trying to hide their identity, and may thus draw more scrutiny than otherwise.
Take away: Playing out the "game" of how a hypothetical larger system may react to an attack may itself mitigate the attack. By trying to avoid detection via attacking the system, the attacker may achieve the inverse effect of making their presence more obvious. This observation applies to other notable works that attempt to avoid detection via ostentatious garments (Wu et al. 2020).

6.0.5 Spam Filter Evasion

Situation: A spam filter provided by the company ProofPoint was evaded by observing meta-data the product places in emails, building a copy-cat model, and performing a transfer attack against the deployed model (?).
Issues: item 2, item 6.
Remediation: § 5.3.1. None, in particular, is needed, as multiple other parts of the full detection system were not attacked (noa 2020b), leaving the system as a whole still functional and at low risk.
6.0.6 China Government Tax Office Theft

Situation: A real-world government facility in China used facial recognition as a means of validating invoices for payment. Attackers stole photos of other people and used deep-fakes to mimic responsiveness, validating as the stolen party's identity to invoice the government. $77 million was fraudulently obtained (Olson 2021).
Issues: item 3.
Remediation: § 5.3.1: While not perfect, the attack could have been significantly mitigated by using more challenging biometric authentication like fingerprints. An even better option would be to require in-person registration and issuance of a cryptographic key to sign and verify identity. Multiple similar methods could be combined.
Take away: A system was needlessly made more vulnerable to attack by employing facial identification via machine learning, when other alternatives are more reliable in isolation or even in combination with facial identification.

6.0.7 Search Result Copying

Situation: Google provided evidence that Microsoft's Bing was copying search results in 2011 by tracking search queries in browsers and extensions (?).
Issues: None.
Remediation: No remediations were apparently needed. While Microsoft denies they were copied in the manner described, no legal action was taken (noa 2011). The long-term result has been little to no discernible impact on Google's market dominance4.
Take away: Poisoning "attacks" are useful from the defender's perspective to gain information about whether or not the information is being stolen/used by others. However, no ML was necessary in this case from the attacker or defender, despite the subject (recommender systems via search) being an intrinsic ML problem.

4. https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/

6.0.8 Audio Practical Speaker Verification

Situation: A system for user verification that recognizes an individual's speech is attacked by creating a universal perturbation that can be played while speaking, fooling the system into accepting a false speaker as valid (Zhang et al. 2021).
Issues: item 4, item 3, item 2.
Remediation: § 5.3.1, § 5.2.1: The attack appears to require a room of known size and content, with a specific distance between speaker and microphone. More broadly, the attack can be defeated by white-listing pre-registered devices (e.g., a specific phone number), which adds another layer of defense to the system. We note as well that the setup already includes mitigation against leaking information about who is authorized, by requiring a random phrase to be spoken rather than a user-specified one.
Take away: While impressive, the constraints apparently necessary to make the attack work in increasingly realistic physical conditions do not inform how the larger process can be changed to mitigate threats.

7. When and Why Should We Focus on Robustness

Given our current analysis, it may seem that there is little reason to deal with the robustness question. This is wrong, and not our message. Robustness is a challenge to implement, but it is also a challenge for attackers to develop attacks in the real world, and the failures that must occur to make an attack more likely (data leakages or cyber security incidents) also make many adversarial attacks redundant.
But, given sufficient time and resources, an adversary will be able to successfully deploy an adversarial ML attack. If a company focuses only on standard security and improved ML modeling, the adversarial attack will eventually become the lowest-effort attack vector. This means that the risk trade-off will change over time, and robustness should be on a long-term roadmap for companies at a minimum.

That said, our results do present important themes of when an adversarial attack is an especially high risk, and so the robustness of ML methods should be a primary concern. Adversarial ML attacks are a special risk for governments, for whom the adversaries are literal nation-states with enormous resources to pursue many attack vectors simultaneously, as well as for banks and financial institutions, for which attacks are a continuous threat due to the high potential reward. In addition, at special risk are companies that produce ML models as a part of a software supply chain, where their models will be used by customers downstream (noa 2021), and that have obtained significant commercial success. While such attacks today take the form of classic cyber-security issues, a realistic threat is for a hacker to compromise the supply chain and poison/steal/evade the ML models in use, so that they can stealthily influence or attack the downstream users of this system. This also explains why other targets, like health institutions, do not yet seem to suffer from adversarial ML attacks. Hospitals' payroll, accounts receivable, and accounts payable are attack vectors that may use ML-based fraud detection, but attacks today are primarily focused around ransomware (Mansfield-Devine 2016; Spence et al. 2018), as the benefit of an adversarial attack is low, and the effort high, for an attacker today.

For institutions that satisfy being either (1) a high-value reward for attackers if successful, (2) a provider in an ML supply chain, or (3) a government entity or a provider to one, we recommend the following three-part strategy:

• Machine Learning Risk Assessments: A critical component of managing adversarial attacks is to develop a better understanding of the models currently used. Using the expertise of its employees or an external contractor/specialist, a company should review applications that integrate ML and catalog plausible attacks and the circumstances under which these attacks may occur. A risk assessment of these models will illuminate which attack types and threat models the client is susceptible to. Once cataloged, a course of action can be identified to mitigate or prevent these risks.

• Robust Machine Learning Mitigations: Once the highest risk models have been identified, more advanced machine learning development can occur to improve the models' robustness. General purpose techniques for robustness have been improving each year, and are currently reasonably effective to apply in cases like computer vision (Carlini et al. 2022; Nie et al. 2022). However, by incorporating knowledge about the specific problem being solved, it is possible to build significantly more robust defenses at a lower total cost. This has been shown successfully in a defense being effective for multiple years in computer vision (Raff et al. 2019b) and for malware detection (Fleshman et al. 2019).
• Extrinsic Risk Reduction: Finally, we note that many avenues for reducing adversarial attacks involve no machine learning at all. Instead, changes in process or environmental factors can reduce the cost of being attacked and the risk of attacks occurring. For example, the aforementioned Microsoft Tay chatbot was allowed to update and redeploy its model based on live Twitter data, without any human signoff. Instead, a process for curating the incoming data and for reviewing new model updates before deployment would have significantly reduced the risk. The cost of such reviews is ultimately minor compared to the cost of both the public relations fallout and the cost of developing a robust version of Tay. Another example is that current laws allow some potential recourse, but no clear answers, on the liability and legal procedures around adversarial ML (Shankar et al. 2018). Companies can identify the changes that would simplify and support a healthy ecosystem of ML providers and risks, so that companies can operate with confidence.

7.1 The Intrinsic Value of Attack/Defense Research

Beyond the risk analysis we have performed, we make special note that this should not be seen as a dismissive article against research in adversarial attacks and defenses. Indeed, we argue that absent any real-world attacks occurring, the questions are of a fundamentally important scientific nature. They speak to questions about intrinsic user trust in a system; that such innocuous changes can cause dramatic deviations from expectations speaks to a fundamental scientific question: what do our methods learn, and why? We believe and argue that this is an evergreen argument for this research direction to continue.

8." }, { "url": "http://arxiv.org/abs/2204.04372v1", "title": "A Siren Song of Open Source Reproducibility", "abstract": "As reproducibility becomes a greater concern, conferences have largely\nconverged to a strategy of asking reviewers to indicate whether code was\nattached to a submission. This is part of a larger trend of taking action based\non assumed ideals, without studying if those actions will yield the desired\noutcome. Our argument is that this focus on code for replication is misguided\nif we want to improve the state of reproducible research. This focus can be\nharmful -- we should not force code to be submitted. There is a lack of\nevidence for effective actions taken by conferences to encourage and reward\nreproducibility. We argue that venues must take more action to advance\nreproducible machine learning research today.", "authors": "Edward Raff, Andrew L. Farris", "published": "2022-04-09", "updated": "2022-04-09", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.SE" ], "main_content": "INTRODUCTION

To start, we must be clear that by reproducibility we are referring to the ability of an independent team to recreate the same qualitative results, and by replication we are referring to the use of code to re-create the same results. These terms have been used inconsistently across different fields of study at various points in time (Plesser, 2018). Many major machine learning conferences have appointed reproducibility chairs, and in doing so have almost uniformly converged on using check-boxes to indicate that a submission includes code, or asking authors to answer vague questions about reproducibility. Some venues explicitly ask for code, others do not.
Reviewers often believe that code indicates reproducibility. There appears to be a prevailing belief that, if authors open-source their code and ensure the code reproduces the paper's results, we can solve the reproducibility crisis (Forde et al., 2018a; Kluyver et al., 2016; Zaharia et al., 2018; Forde et al., 2018b; Paganini & Forde, 2020; Gardner et al., 2018). Our contention is that open source, and associated replicability aides, are good -- but that this idealized notion is not a Pareto optimal improvement over papers that do not share source code. We argue that there are pros and cons to including source code with papers when we consider the long-term health of the field. The pros are widely known, and have been explored since the advent of digital communication (Claerbout & Karrenbach, 1992). In this opinion piece, we argue the cons: current evidence (though more is dearly needed) suggests open-source code may improve replication, but creates new issues in reproducibility.

Toward our argument, we have a fundamental axiom: if work can be replicated (i.e., using the authors' original code and data) but not reproduced, then the work constitutes, at best, ineffective science (Drummond, 2009). It is fine for authors to produce such works, but in the long term, we do not truly understand the mechanism of action or the truth of our methods unless they are reproducible. Ideally, we desire that the fraction of works that are reproducible increases over time.

We will begin our argument in section 2 by noting the prior history of reproducible research in other fields, and describing how we are slowly re-learning lessons discovered long ago that show how having code does not solve reproducibility by our axiom. We provide notable examples of how code did not benefit, or even delayed, important understanding of machine learning methods in section 3 with the seminal word2vec and Adam. These are not arguments that these works are useless or "wrong", but that code negatively impacted better scientific understanding in the former, and provided no benefit in the latter. Finally, we will conclude with the argument that conferences must create reproducibility tracks that include explicit guidelines for reviewers on how to judge submissions, so that we can advance the study of reproducibility before blindly stepping toward ineffective solutions.

2 WE HAVE FORGOTTEN HISTORY, NOW WE ARE REPEATING IT

The machine learning community has only just begun to expend serious effort toward the study of reproducibility with respect to itself as a domain. The discoveries are unnerving, and have strong parallels with historical findings in other domains. Significant early work in the study of code reproducibility was done by Hatton (1993; 1997); Hatton & Roberts (1994), performing static analysis across C and FORTRAN code as well as having multiple implementations of the same algorithm, and providing the exact same inputs and parameters to each independent implementation. Their results found a high defect rate, more than 1 issue per 150 lines of code, and that the precision of independent implementations was only one significant figure. FORTRAN and C still form the foundation of scientific computing, including machine learning packages like NumPy (Harris et al., 2020), Tensorflow (Abadi et al., 2016), and Pytorch (Paszke et al., 2019).
These projects are important components of the computational foundation of our field, yet they often focus on the pursuit of optimal performance at the expense of other goals such as maintainability and portability, neglect the need for multidisciplinary teams for success (e.g., we often consider "applications" a secondary track or area that is often stigmatized), and, most importantly, face the high difficulty of verifying the correctness of the equations and math implemented (Carver et al., 2007). Indeed, as history repeats itself, recent work has identified cases where the same models implemented with different packages or hardware accelerators present reduced precision or accuracy on the order of one significant figure, and even greater variation in run-time consistency (Zhuang et al., 2021). Even within a single set of hardware and implementation, our most widely used libraries often have non-deterministic implementations that can cause 10% variances in results (Pham et al., 2020).

A more nuanced version of the above point stems from how we define replication: does it simply involve the code, or must it also include the data? The latter is how the terminology is most historically used, and is common in other sciences (Plesser, 2018). This is challenging in machine learning due to the intrinsic "fuzzyness" of what we are working toward: we intrinsically wish to use machine learning for tasks where a thorough specification of the data is too difficult to implement in code. We can again look to other fields, like software engineering, that attempted to perform reproductions that included the data process over software repositories (González-Barahona & Robles, 2012). Their work found that missing or minute details could prevent or significantly impede reproduction. Indeed, it becomes unsurprising then that we have only recently discovered considerable labeling issues within foundational datasets like MNIST, CIFAR, and ImageNet (Northcutt et al., 2021). While data sheets and model cards have been proposed to partially address this issue (Gebru et al., 2021; Mitchell et al., 2019), they are proposed without any scientific study to answer if these interventions mitigate the underlying problem. It is good for producers and users of datasets to carefully think about the data in use, but we fear that, absent evidence, these approaches may have no direct tangible impact1. Indeed, studies of dataset replication (where no model card exists) have been shockingly successful in some ways (no evidence of adaptive over-fitting) and identify concerns not addressed in model cards or data sheets (Engstrom et al., 2020), with similar results over applied domains such as social media analysis (Geiger et al., 2021). As such, we argue that there is extensive prior evidence that predicts the current trends in machine learning reproducible research: having code available means relatively little to the question of reproducibility, especially in light of inconsistent methods of comparison used through decades of machine learning literature, leading to invalid conclusions of "improvement" (Bradley, 1997; Alpaydin, 1999; Cawley & Talbot, 2010; Demšar, 2006; Benavoli et al., 2016; Bouthillier et al., 2021; 2019; Dror et al., 2017; 2018; 2019), necessitating that even a system with no bit-rot would not solve the concerns of our field.
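The run-to-run non-determinism noted above can be partially controlled from user code. A minimal sketch, assuming PyTorch (the cited studies cover several frameworks, and this is illustrative rather than a complete recipe):

```python
import random

import numpy as np
import torch

# Pin every seed source and request deterministic kernels. Even with all of
# this, results can still differ across hardware, drivers, and library versions.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False
```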
1. Their ability to change the thoughts and focus areas of others, creating positive secondary impacts, is more likely, but a separate matter beyond our discussion.

3 HOW CAN OPEN SOURCED CODE HARM US?

Given that we have clear evidence that simply having original source code is not sufficient to enable reproducibility, we must now ask: can withholding code ever lead to an improvement in reproducibility? It is important to be clear that we are not arguing that no-code is always or even usually better. We are arguing that a lack of code creates a different kind of forcing function for adoption. We recognize2 that code sharing is likely to lead to faster adoption of a method that works, but obscures long-term benefits to reproducible work. If a paper's method must be re-implemented due to a lack of code, this process organically validates said method. The paper's method only gets used and cited3 when others can successfully reproduce it, converging on methods that work and forcing deeper understanding by a broader population. Further still, this forces the community at large to be effective communicators and to better understand the details and science required to reproduce one's results. The need to enable reproducibility drove Taswell (1998) to develop a proposal to better specify wavelet transforms, which also enabled better replicability of his methods. We find tangential evidence for this within machine learning, where 36% of papers could not be reproduced from their content, even though many provided source code (Raff, 2019).

To further exemplify how we believe this to be an issue, we will draw from highly successful academics to critique, with a bias to avoid undue harm or stress to early career researchers (in similar spirit to Lipton & Steinhardt (2019)). The seminal word2vec (Mikolov et al., 2013) algorithm is our first consideration: a publication whose ubiquity and impact in research and application are enormous, and which, to the best of our knowledge, has never been replicated. Understanding how and why word2vec worked was studied by many (Goldberg & Levy, 2014) due to its utility and effectiveness, but was done through the originally released code (or direct translations into other languages). Yet it took six years for any public documentation of the fact that the paper and code simply do not perform the same steps (Bhat, 2019), making it impossible for anyone to reproduce. Clearly, word2vec was important and valuable for the community, but there are counterfactual questions that we argue suggest the long-term health of our research would have been better if Mikolov et al. (2013) had never released their source code. First, there is an unknown amount of person-hours wasted by researchers, students, and others attempting to understand the mechanisms of an algorithm when that understanding was inhibited by faulty foundations4. Second, failure to reproduce by others would be a forcing function on the original authors to re-examine their code and paper to correctly document how and why it works. By releasing the code, this feedback cycle is inhibited. This could also explain how follow-up work with paragraph2vec (Le & Mikolov, 2014) has similarly evaded reproduction, even by the paper's co-authors5.

A different perspective on this matter is seen in the Adam optimizer (Kingma & Ba, 2015), which has become a widely used default method.
This case is interesting in that the simplicity of the approach has enabled many reproductions, but both the code and the paper lack details on how the default parameter values were derived. Subtle corrections to the math of Adam in weight decay (Loshchilov & Hutter, 2019) and the ε parameter (Yuan & Gao) can yield large improvements in the quality of results, as the default values of Adam are not ideal for all cases. While we should, in general, have no reason to believe in a one-size-fits-all approach, the lack of study around these details is itself lending to reproducibility challenges in our field: the "right" way to set these parameters (amongst dozens of others in a network) was unstudied, and many sub-fields began tweaking the defaults for their kinds of networks, creating confusion and slowing reproduction of subsequent research. This kind of issue is not new. Poorly documented accounts of differences in L-BFGS (Liu & Nocedal, 1989) implementation results can be found6, though we are not aware of any thorough documentation or study of them. This again suggests an issue with an incomplete description in the paper, a problem that code can not reduce, but can hide for a period of time.
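To make the point concrete, both corrections above surface as ordinary, easily overlooked hyperparameters in common implementations. A hedged sketch, assuming PyTorch; the values shown are illustrative, not recommendations:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in model for illustration

# Decoupled weight decay (Loshchilov & Hutter, 2019) means using AdamW rather
# than Adam-with-L2, and eps is a tunable knob rather than a fixed constant;
# papers rarely document how either default was chosen.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, eps=1e-4, weight_decay=1e-2)
```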
We also argue that relying on open source code creates an academic moral hazard. Distilling the essence of the scientific contribution, and communicating it effectively, is the task of an author. Although code does not solve reproducibility, it does enable replication and provides short-term benefits in citation rate and adoption (Raff, 2022), thus allowing the manuscript to defer "nuisance" details to the code. Reviewers can run the code to confirm that "it works", without checking that the code actually performs the method described precisely, or may simply be unaware of key confounding design choices. We preemptively rebut the argument that having code allows one to check an approach, by noting that decades of research show that reading and debugging code does not ensure the same kind of mental processing as reading prose (Ivanova et al., 2020; Perkins & Simmons, 1988; Letovsky, 1987). This is confirmed by the demonstrated positive impact of high quality code comments on understanding code (Nurvitadhi et al., 2003). As such, the reading of code is a more challenging mental process than reading well-constructed prose in a paper, and while helpful, it is not an alternative to effective communication. This fact, combined with the examples of Adam and word2vec, shows ways that code, regardless of how easy it is to implement, can harm reproducibility. We fear that an over-emphasis on code will seed new reproducibility problems.

2. Without the same quality of evidence; indeed, we are not sure how to design a good experiment for this. But this is an opinion paper, so we feel some indignant right to be opinionated.
3. There are certainly edge cases where a method that does not replicate will be used and cited; we are talking about the more general, broader case of directly building upon or relying on a method.
4. Not to mention feelings of inadequacy, anxiety, and stress by students attempting to become researchers in what is already a needlessly high-stress environment.
5. https://groups.google.com/g/word2vec-toolkit/c/Q49FIrNOQRo/m/DoRuBoVNFb0J
6. https://discourse.julialang.org/t/optim-jl-vs-scipy-optimize-once-again/61661/5?page=2

4" }, { "url": "http://arxiv.org/abs/2204.03829v1", "title": "Does the Market of Citations Reward Reproducible Work?", "abstract": "The field of bibliometrics, studying citations and behavior, is critical to\nthe discussion of reproducibility. Citations are one of the primary incentive\nand reward systems for academic work, and so we desire to know if this\nincentive rewards reproducible work. Yet to the best of our knowledge, only one\nwork has attempted to look at this combined space, concluding that\nnon-reproducible work is more highly cited. We show that answering this\nquestion is more challenging than first proposed, and subtle issues can inhibit\na robust conclusion. To make inferences with more robust behavior, we propose a\nhierarchical Bayesian model that incorporates the citation rate over time,\nrather than the total number of citations after a fixed amount of time. In\ndoing so we show that, under current evidence the answer is more likely that\ncertain fields of study such as Medicine and Machine Learning (ML) do correlate\nreproducible works with more citations, but other fields appear to have no\nrelationship. Further, we find that making code available and thoroughly\nreferencing prior works appear to also positively correlate with increased\ncitations. Our code and data can be found at\nhttps://github.com/EdwardRaff/ReproducibleCitations .", "authors": "Edward Raff", "published": "2022-04-08", "updated": "2022-04-08", "primary_cat": "cs.DL", "cats": [ "cs.DL", "cs.AI", "cs.LG" ], "main_content": "INTRODUCTION

A reproducibility crisis has been called for many scientific domains, including artificial intelligence and machine learning (Donoho et al., 2009; Baker, 2016; Hutson, 2018; Vul et al., 2008). It is paramount that all disciplines work to remedy this situation and push for reproducible work, both as good science and to mitigate such crises. Such work has begun in various fields with different strategies (Errington et al., 2021; Poldrack, 2019; Collaboration, 2015; Sculley et al., 2015; Gardner et al., 2018), yet the incentive structure around producing reproducible work has received almost no attention. We note that the history of terminology around reproduction and replication is long, with conflicting usage across fields and years (Plesser, 2018); we will use both terms interchangeably, as our study focuses exclusively on cases where a different team independently performs the same experiments to obtain the same/similar results.

Citations are the primary reward for academic outputs, and to our knowledge only the work of Serra-Garcia & Gneezy (2021) has ever considered studying the relationship between papers that reproduce and the number of citations received. They used data on replication results from the fields of Psychology (Collaboration, 2015), Economics (Camerer et al., 2016), and Social Sciences (Camerer et al., 2018). Distressingly, they conclude that non-reproducing work is cited more than reproducing works. Our work revisits this hypothesis and data, and draws a different conclusion. We will show in section 3 that there are methodological issues that prevent a robust conclusion from being formed with the data and approach presented in (Serra-Garcia & Gneezy, 2021). Next, we will propose a Bayesian hierarchical model to alleviate these issues and allow further insight into the citation/replication question by incorporating a model of the citation rate changing over time in section 4.
In section 5 we show that our model is a significantly better fit to the data, and that it concludes citation rate is unrelated or positively correlated with reproduction success, depending on the field being studied. Finally, we will conclude in section 6.

2 RELATED WORK

The study of paper citation has a long and multi-disciplinary history (Lotka, 1926; Shockley, 1957; Price, 1965; 1976; Potter, 1981; Redner, 1998), with many works proposing different power law variants to describe the distribution of citations. Most work that has looked at citations over time looks at population-level changes in citation distributions (Bornmann & Mutz, 2015; Varga, 2019; Wallace et al., 2009). We are aware of only one prior work that looked at the citation rate by year, through studying the impact of publication-vs-arXiv (Traag, 2021). This work also modeled citation rates as a Poisson, similar to Serra-Garcia & Gneezy (2021), which we will argue is an inappropriate model for citation count data. Used by Serra-Garcia & Gneezy (2021) were negative citations, a type of citation classification that can provide further insight into behaviors and results. The taxonomy of citation types, their labeling, and their prediction (Kunnath et al., 2022) are another lens through which insight may be gained, but are beyond the scope of our study. Dietz et al. (2007) produced one of the first applications of Bayesian modeling to the study of citation behavior and influences. Our task is different, and so our model bears little resemblance, but the overall strategy, we argue, is worth further study. Several latent factors exist in bibliometric study to which modern machine learning may yield benefits, and the scale of bibliometric data provides fertile ground for new technical challenges to advance the field.

3 ISSUES WITH EXISTING MODELING

While the Negative Binomial model has previously been identified as empirically having better performance at citation prediction (Thelwall & Wilson, 2014), the Poisson model is still very popular. We note, though, that there is an easier way to show the Poisson model is in fact inappropriate for the bibliometric research in which it is used. The Poisson model assumes the mean and variance are equal, and if the variance is larger than the mean, the model suffers from overdispersion that prevents meaningful results. A statistical test (Cameron & Trivedi, 1990) confirms with p < 0.001 that this is the case for citation data, which in the data from (Serra-Garcia & Gneezy, 2021) has a mean of 438 citations but a variance of 504,639.

While Serra-Garcia & Gneezy (2021) used the Poisson model in their work on the connection between replication and reproducibility, we note there are additional factors that lead us to challenge their initial conclusion. The first is a data issue of reproducibility itself: N = 80 documents were noted in (Serra-Garcia & Gneezy, 2021), but the data provide N = 139 instances. We are unable to determine the correct selection criteria1 to render only 80, and so proceed forward with the larger number of samples.
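The overdispersion check itself is simple to run. A minimal sketch of the Cameron & Trivedi (1990) auxiliary regression, using synthetic stand-in counts since we are only illustrating the mechanics:

```python
import numpy as np
import statsmodels.api as sm

# Stand-in citation counts; the real test uses the released replication data.
rng = np.random.default_rng(0)
y = rng.negative_binomial(1, 0.002, size=139).astype(float)

# Fit an intercept-only Poisson model to get fitted means mu, then regress
# ((y - mu)^2 - y) / mu on mu with no intercept; a significantly positive
# slope indicates overdispersion, i.e., the Poisson assumption fails.
mu = sm.Poisson(y, np.ones_like(y)).fit(disp=0).predict()
aux = sm.OLS(((y - mu) ** 2 - y) / mu, mu).fit()
print(f"alpha = {aux.params[0]:.3f}, p = {aux.pvalues[0]:.2e}")
```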
Table 1: Results indicating if successfully reproduced papers have more (positive) or less (negative) citations than papers that failed to reproduce. Models tested include Poisson versus Negative-Binomial (NB) regressions using the original three domains with Google Scholar (GS) or Semantic Scholar (SC) citations each, and an additional case using SC with a fourth set of reproduction results from the Medical domain (+M). Coefficients are for the "Reproduced" indicator in each model.

  Poisson-GS:    coef 0.0172, p = 0.129
  Poisson-SC:    coef 0.1138, p < 0.001
  Poisson-SC+M:  coef 0.5775, p < 0.001
  NB-GS:         coef 0.0172, p = 0.150
  NB-SC:         coef 4.4592, p < 0.001
  NB-SC+M:       coef 0.5777, p = 0.004

To demonstrate the lack of robustness of the prior methodology, we will perform several repetitions of the overall approach, choosing between:
1. Using the Poisson model versus a Negative-Binomial model,
2. Using the original Google Scholar (GS) citation count data provided vs. citation data from Semantic Scholar (SC),
3. Using the original data with (SC) additionally with reproduction results from the Medical domain, adding a fourth field (+M).

1. The authors graciously spent considerable time working with us, and we did not have the same software licenses to use their saved results. One hypothesis from the authors was that non-significant results were excluded, but this only removed 16 samples when we went through the data provided. Cross-discipline reproducibility and data sharing standards pose an interesting question beyond our scope.

Figure 1: Correlation between Google Scholar and Semantic Scholar in the number of citations for each document per year. After multiple-test correction, all years were significantly correlated with p < 0.001 in all cases.

This provides six total results, presented in Table 1, using (Seabold & Perktold, 2010). We can see that in no case do we observe a negative indication that papers which fail to replicate are cited more. However, we do see inconsistent conclusions about the impact of replication itself. When using Google Scholar the conclusion is that there is no relationship, and when using Semantic Scholar the conclusion is a strong relationship. This challenge is not a factor of these citation sources having dramatic disagreement; as can be seen in Figure 1, both are highly correlated in the per-year citations of the documents. The issue is instead one of model fit, as the highest adjusted R² fit amongst the Negative Binomial models is 0.0039.

The source of this discrepancy is inappropriate merging of all data sources into one pool. The papers selected from Economics, Psychology, Social Science, and Medicine were all selected with biases toward higher citation rates, largely through selection of high impact factor sources. The citation rates per field, or journal, are not the same, as shown in Figure 2. Imbalances in the number of papers from each source that happened to replicate or not amplify spurious noise, resulting in low model fit and unstable conclusions.

Figure 2: Total number of citations accumulated for replicated and failed-to-replicate papers grouped by field (left) and journal (right).
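For concreteness, the six regressions of Table 1 are straightforward to express with statsmodels (Seabold & Perktold, 2010). A hedged sketch with illustrative stand-in data, not the authors' exact script:

```python
import numpy as np
import statsmodels.api as sm

# reproduced: 0/1 replication outcome; citations: total citation counts.
# Both arrays are hypothetical stand-ins for the released data.
rng = np.random.default_rng(1)
reproduced = rng.integers(0, 2, size=139)
citations = rng.negative_binomial(1, 0.002, size=139)

X = sm.add_constant(reproduced.astype(float))
print(sm.Poisson(citations, X).fit(disp=0).summary())
print(sm.NegativeBinomial(citations, X).fit(disp=0).summary())
```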
4 METHODOLOGY

To address these problems, we propose a Bayesian hierarchical model that incorporates the citation rate over time, rather than the total number of citations. Our interest in the citation rate over time is not merely for model fit, but primarily because we are interested in whether the types of citation patterns vary between reproducible and non-reproducible papers. That is to say, some papers do not start to accumulate citations for a considerable amount of time, others reach a steady state of citations, and others reach a peak citation rate before their citation rate drops. A total-citation model can not reveal anything about this question.

Figure 3: Plate diagram of our proposed citation-replicated model. The observations are done against a Negative-Binomial model.

The high-level plate diagram of our approach is presented in Figure 3, which we will discuss at a high level, with the detailed generative story given by Algorithm 1. The coefficients β are with respect to each field, with a hierarchical prior used over them and a shared ridge regression penalty (the variance of the Gaussian distribution).

\mathrm{NegBinomial2}(n \mid \mu, \phi) = \binom{n + \phi - 1}{n} \left(\frac{\mu}{\mu + \phi}\right)^{n} \left(\frac{\phi}{\mu + \phi}\right)^{\phi} \quad (1)

The observations are done with respect to a zero-inflated Negative-Binomial model, parameterized with a mean μ and a dispersion factor φ as shown in Equation 1. The zero-inflation serves two purposes. First, some papers do receive zero citations for some time before becoming popular, and the zero-inflation model prevents down-weighting the citation rate μ from these zero citations. Second, it allows us a convenient way to handle the fact that papers were published at different times, and thus for a desired horizon of T years not all papers will have T years of existence to accumulate citations. When a year has not yet occurred, we force the zero-inflation gate to effectively mask the year with no impact on the model. We used a target of T = 10 years in all cases. Each paper receives its own gate value, with a hyper-prior shared over all samples. We use the proportional Beta hyper-prior as shown in Equation 2, with a non-informative prior over μ.

\mathrm{BetaProportion}(\theta \mid \mu, \kappa) = \frac{\theta^{\mu\kappa - 1} (1 - \theta)^{(1 - \mu)\kappa - 1}}{B(\mu\kappa, (1 - \mu)\kappa)} \quad (2)

To represent the impact of the t'th year's citation rate of the i'th sample, μ_{i,t}, we model a base citation rate μ_i modulated by an annual base citation multiplier sampled from a Gamma prior centered at a mean of 1.0 (i.e., no change in annual citation rate). The impact of the compounding base rate can be delayed (but not increased, as that implies pre-publication citations) by a shift factor sampled from a positive Laplacian, scaled so that the entire T years may be selected while the prior prefers no shift. We do not give each sample its own base and shift, as that allows significant over-fitting of the model to ignore the impact of the coefficients β. Instead, we use a Dirichlet process to sample from a pool of base/shift pairs, where reproducible and non-reproducible papers each receive a separate Dirichlet process sampling from the same pool.
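For reference, this parameterization has mean and variance

\mathbb{E}[n \mid \mu, \phi] = \mu, \qquad \mathrm{Var}[n \mid \mu, \phi] = \mu + \mu^2 / \phi,

a standard property of the Negative Binomial: the variance exceeds the mean for any finite φ, and the Poisson model is recovered only in the limit φ → ∞. This is exactly the overdispersion that section 3 showed the citation data exhibits.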
We enforce a sparse process by putting a Beta prior over the α parameter of the processes, so that we may see if there is a difference in the types of citation styles between papers (e.g., do non-replicating papers more frequently have decaying base rates < 1). In each experiment there is one pool of base/shift pairs, and two distributions ω over that pool: one ω_S for reproduced papers and one ω_F for the non-reproduced. In this way the model can inform us if there appears to be a difference (ω_S ≠ ω_F) in citation styles (base/shift pairs) between the populations.

Algorithm 1: Our hierarchical Bayesian generative story for modeling citation rates. The + superscript indicates distributions truncated to be non-negative.
Require: N observations, with r_i ∈ {S, F} for successful or failed reproduction and f_i indicating the field of research for the paper.
 λ_ridge ~ HalfCauchy(0, 1)
 α ~ Beta(1, 10)  ▷ a Beta distribution used to encourage sparse solutions
 ω_S ~ Dirichlet(α)  ▷ a different distribution over all base/shift values for reproducible...
 ω_F ~ Dirichlet(α)  ▷ ...and non-reproducible papers
 for all i ∈ 1, ..., ∞ do  ▷ citation styles that ω_* will sample from
   shift_i ~ Laplace+(0, years_out/6)
   base_i ~ Γ(100, 100)  ▷ this Gamma distribution encourages values near 1, as values > 2 are undesirably unrealistic
 end for
 β̂_field ~ N(0, 1)  ▷ hierarchical reproducible prior
 for all fields of study i do
   β_field,i ~ N(β̂_field, λ_ridge)
   b_i ~ Cauchy(0, 1)  ▷ bias term is independent between fields
 end for
 gate_μ ~ U(0, 1)  ▷ uninformative prior on the mean rate of no citations occurring
 gate_κ ~ Γ(1, 20)
 φ ~ Cauchy+(0, 5)
 for all observations i do
   z ~ Categorical(ω_{r_i})  ▷ select the base/shift citation style for this sample based on whether it replicated or not
   log(μ_i) ← β_field,f_i · 1[r_i = S] + b_{f_i}  ▷ the rate is modified based on the paper replicating or not
   gate_i ~ BetaProportion(gate_μ, gate_κ)
   for all time steps t do
     μ_{i,t} ← μ_i · base_z^{max(t − shift_z, 0)}
     accumulate the zero-inflated Negative Binomial loss NegBinomial2(y_{i,t} | μ_{i,t}, φ) with gate probability gate_i
   end for
 end for

The full model is detailed in Algorithm 1. We use NumPyro (Phan et al., 2019) to implement the model, with the NUTS sampler (Hoffman & Gelman, 2014). In all cases we use 500 burn-in iterations followed by 2,250 steps with a thinning factor of 3.
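A minimal NumPyro sketch of the core of this model follows. It keeps the hierarchical field coefficients, the per-paper gate, and the zero-inflated Negative-Binomial observations, but omits the Dirichlet-process pool of base/shift citation styles from Algorithm 1 for brevity; all variable names and the (papers × years) data layout are illustrative, not the released code.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(field, reproduced, counts):
    # field: (N,) int field ids; reproduced: (N,) 0/1; counts: (N, T) yearly citations.
    n_fields = int(field.max()) + 1
    n_papers, n_years = counts.shape
    lam = numpyro.sample("lam_ridge", dist.HalfCauchy(1.0))
    beta_hat = numpyro.sample("beta_hat", dist.Normal(0.0, 1.0))
    with numpyro.plate("fields", n_fields):
        beta = numpyro.sample("beta", dist.Normal(beta_hat, lam))
        bias = numpyro.sample("bias", dist.Cauchy(0.0, 1.0))
    phi = numpyro.sample("phi", dist.HalfCauchy(5.0))
    gate_mu = numpyro.sample("gate_mu", dist.Uniform(0.0, 1.0))
    gate_kappa = numpyro.sample("gate_kappa", dist.Gamma(1.0, 20.0))
    with numpyro.plate("papers", n_papers, dim=-2):
        gate = numpyro.sample("gate", dist.BetaProportion(gate_mu, gate_kappa))
        # log-rate from the field coefficient (applied only if reproduced) plus bias
        mu = jnp.exp(beta[field] * reproduced + bias[field])[:, None]
        with numpyro.plate("years", n_years, dim=-1):
            numpyro.sample("obs", dist.ZeroInflatedDistribution(
                dist.NegativeBinomial2(mu, phi), gate=gate), obs=counts)

# 500 burn-in iterations, 2,250 samples, thinning of 3, matching the text.
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=2250, thinning=3)
# mcmc.run(random.PRNGKey(0), field, reproduced, counts)
```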
5 RESULTS

Now that we have specified our approach to understanding how citations may be impacted by a paper's ability to replicate, we will present our results in two sections. First, we will consider the results with respect to the previous fields of study (Medicine, Economics, Psychology, Social) and show that we obtain consistent results, which we reasonably believe come from a more reliable model. Second, we will repeat the study applied to data from machine learning (Raff, 2019). This data is studied separately because it has a different kind of selection bias, and a different set of available features to consider, than the other data.

5.1 SCIENCE RESULTS

We begin by examining the conclusions inferred by our model on the three versions of the data: Google Scholar, Semantic Scholar, and Semantic Scholar with the medical domain added. The results can be found in Figure 4, showing consistent conclusions of no correlation between field and citation rate of reproducible papers for any of the three original fields. When Medicine is added, we observe that it does show a higher citation rate for reproducing papers, without changing the conclusion for the other fields.

Figure 4: The results of the coefficients β for the different fields of study when using Google Scholar data (left), Semantic Scholar (middle), and Semantic Scholar with the addition of the medical papers (right). The x-axis is coefficient value, and the forest plot shows the estimated value and 95% credible interval.

Beyond the consistency of the conclusions, we are further confident in our approach's conclusions due to better model fit: the Google Scholar case produces an R² = 0.41, and the Semantic Scholar data with/without the Medicine papers gives R² = 0.24 and 0.19, respectively. We arguably would not expect very high R² values, considering the model is characterizing populations of citation rates based only on the field; prior work focusing on predicting citations using venue, author, and content information achieved R² = 0.74 (Yan et al., 2011). This approach has also provided further insight into the nature of reproduction and citations: the reward behaviors are not consistent across fields (subject to unobserved confounders). The question then becomes: do reproducible papers have a different style of citation patterns (i.e., accumulating or decreasing citation rates at a different pace) compared to non-reproducible work?

Figure 5: The discovered latent citation styles and their proportion of use in reproduced and failed-to-reproduce papers (left), and the log multiplicative effect of the citation rate over time (right). Note the right legend shows "Citation Style, Mean Occurrence Rate of the Style". The y-axis is a symmetric logarithm scale with linear behavior in the [−1, 1] range, where 0 indicates no change in the citation rate. Error bars are the 95% credible interval inferred by the model.

Per the design of our model, in Figure 5 we can investigate the citation rates over time as inferred by our model, shown for the Semantic Scholar + Medicine case. In this instance we do not observe any difference in the citation rates or styles between (non)reproducing papers.
A maximum of 50 components were allowed for computational tractability, and the non-present components are ones the model learned to discard with near-zero probabilities. We note of particular interest that most latent citation styles only have an impact starting two years out from publication, a result consistent with prior work which found the first two years of citations to be highly predictive of the long-term cumulative number of citations (Stegehuis et al., 2015). This provides another degree of confidence in the validity of our general approach, though we make no claim that our simple model of citation rate is the best possible choice. The data is also interesting in that we observe behaviors not normally discussed in the bibliometric literature: papers whose citation rate decreases with time. This is indeed not directly observable in the common modeling approach of looking at cumulative citations after a point in time. We further find citation style 29 uniquely interesting as a "runaway success", quickly multiplying the citation rate by exp(10) ≈ 10^4.35 after ten years.

5.2 MACHINE LEARNING RESULTS

Having shown our model allows for more robust conclusions around the impact replicable results have on citation rate, we turn to the machine learning reproductions documented by Raff (2019). Many of the papers were selected by the author's personal interests, rather than impact factor, so we do not find it appropriate to include them in the same hierarchical model. The ML data also includes numerous other quantifications about the papers not present in the prior section, so we treat it separately. We use the same approach, without a hierarchical prior over field, since it is one population of papers. The adjusted R² of the model is 0.31 using Semantic Scholar for the citation data, in line with the prior experiments.

Figure 6: Forest plot of the coefficients β of various features, with 95% credible intervals. (The features plotted are: Reproducible, Code Available, Theory, Empirical, Balanced, Num References, Number of Equations, Number of Proofs, Total Tables and Figures, Number of Tables, Number of Graphs/Plots, Number of Other Figures, Conceptualization Figures, Book, Conference, Journal, Tech Report, and Workshop; negative coefficients reduce the citation rate and positive coefficients increase it.)

The results in Figure 6 show that reproducible papers, and papers that make their code available, both receive higher rates of citation. The former is desirable, and the latter indicates a strong motivation for authors to open source their code beyond the arguments around replication (Kluyver et al., 2016; Claerbout & Karrenbach, 1992; Callahan et al., 2016; Errington et al., 2021; Forde et al., 2018). The sharing of code is generally argued to be beneficial, but we do note that it captures methodological flaws as well, and is thus not a panacea for concerns around reproduction (Dror et al., 2019; 2017; 2018; Sun et al., 2020; Bouthillier et al., 2019; 2021). We are also encouraged that more references per page correlates with a higher citation rate, under the belief that this corresponds to more thorough documentation of prior work and good scholastic behavior. The reduced citation rate for Conceptualization Figures, which attempt to convey the intuition of a method, is interesting.
Raff (2019) noted no relationship between this variable and replication, while later work found that papers which use conceptualization figures take less time/human effort to reproduce (Raff, 2021). This type of scientific communication appears to have a particularly complex relationship with reproduction and the incentives around reproduction, and thus warrants further study.

The last points of note are that publishing in journals and including more tables appear to increase citation rate, while publishing in a workshop reduces it. Publishing in a workshop having a lower citation rate makes sense intuitively, though it is perhaps interesting that tech reports (like arXiv) have no relationship, and it is worth studying whether workshops being a final "home" for a paper may carry a negative perception. This result is also possibly due to the noted bias in the data, which we believe may explain the result that journal publications have a higher citation rate, since ML as a field generally prefers conferences over journals. Last, we have no particular intuition about why having more tables may lead to more citations, unless it is a matter of making it easy for future papers to re-use the reported results, a hypothesis proposed in (Raff, 2019).

Figure 7: The discovered latent citation styles and their proportion of use in reproduced and failed-to-reproduce papers (left), and the log multiplicative effect of the citation rate over time (right) for the machine learning data. Note the right legend shows "Citation Style, Mean Occurrence Rate of the Style". The y-axis is a symmetric logarithm scale with linear behavior in the [−1, 1] range, where 0 indicates no change in the citation rate.

Last, we look at the latent citation patterns again in Figure 7, and note that style 29 does have a significant difference between reproducible and non-reproducible works2. By chance, this component again represents a "runaway success", indicating a preference in degree of success toward reproducible works.

6" }, { "url": "http://arxiv.org/abs/2105.02936v1", "title": "Exact Acceleration of K-Means++ and K-Means$\\|$", "abstract": "K-Means++ and its distributed variant K-Means$\\|$ have become de facto tools\nfor selecting the initial seeds of K-means. While alternatives have been\ndeveloped, the effectiveness, ease of implementation, and theoretical grounding\nof the K-means++ and $\\|$ methods have made them difficult to \"best\" from a\nholistic perspective. By considering the limited opportunities within seed\nselection to perform pruning, we develop specialized triangle inequality\npruning strategies and a dynamic priority queue to show the first acceleration\nof K-Means++ and K-Means$\\|$ that is faster in run-time while being\nalgorithmically equivalent. For both algorithms we are able to reduce distance\ncomputations by over $500\\times$. For K-means++ this results in up to a\n17$\\times$ speedup in run-time and a $551\\times$ speedup for K-means$\\|$.
We\nachieve this with simple, but carefully chosen, modifications to known\ntechniques which make it easy to integrate our approach into existing\nimplementations of these algorithms.", "authors": "Edward Raff", "published": "2021-05-06", "updated": "2021-05-06", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.MS", "stat.ML" ], "main_content": "Introduction

Before one can run the K-means algorithm, a prerequisite step is needed to select the initial K seeds to use as the initial estimate of the means. This seed selection step is critical to obtaining high quality results with the K-means algorithm: selecting better initial centers m_1, ..., m_K can improve the quality of the final K-means clustering. A major step in developing better seed selection was the K-means++ algorithm, which was the first to show that the seeds it finds are log-optimal in expectation for solving the K-means problem [Arthur and Vassilvitskii, 2007]. For a dataset with n items, K-means++ requires O(nK) distance computations. If P processors are available, K-means++ can be done in O(nK/P). However, the amount of communication overhead needed to do K-means in parallel is significant. To remedy this, Bahmani et al. [2012] introduced K-means∥, which retains the O(nK/P) complexity and performs a constant factor more distance computations to significantly reduce the communication overhead, while still yielding the same log-optimal results [Bachem et al., 2017]. When working in a distributed environment, where communication must occur over the network, this can lead to large reductions in run-time [Bahmani et al., 2012].

The cost of K-means++ has long been recognized as an expensive but necessary step for better results [Hamerly, 2014], with little progress on improvement. Modern accelerated versions of K-means clustering perform as few as 1.2 total iterations over the dataset [Ryšavý and Hamerly, 2016], making K-means++ seed selection take up to 44% of all distance computations. Outside of exact K-means clustering, faster seed selection can help improve stochastic variants of K-means [Bottou and Bengio, 1995; Sculley, 2010] and is useful for applications like coreset construction [Bachem et al., 2015], change detection [Raff et al., 2020], tensor algorithms [Jegelka et al., 2009], clustering with Bregman divergences [Nock et al., 2008], and Jensen divergences [Nielsen and Nock, 2015]. Applications with large K in particular have been neglected, even though K ≥ 20,000 is useful for scaling kernel methods [Si et al., 2017].

In this work, we seek to accelerate the original K-means++ and K-means∥ algorithms so that we may obtain the same provably good results in less time, without compromising on any of the desirable qualities of K-means++ or K-means∥. We will review work related to our own in § 2. Since the bottlenecks and approaches to accelerating these two algorithms are different, we will review their details and our approaches to accelerating them sequentially.
With respect to K-means++, in § 3 we show how a simple application of the triangle inequality, plus a novel dynamic priority queue, allows us to avoid redundant computations and keep the cost of sampling new means low. In § 4 we address K-means∥ and develop a new NearestInRange query that allows us to successfully use a metric index to prune distance computations, even though it is restricted to corpora normally too small to be useful with structures like KD-trees. We then perform empirical evaluation of our modifications in § 5 over a larger set of corpora with more diverse properties, covering K ∈ [32, 4096]. In doing so, we observe that our accelerated algorithms succeed in requiring either the same or less time across all datasets and all values of K, making it a Pareto improvement. Finally, we will conclude in § 6.

2 Related Work

Many prior works have looked at using the triangle inequality, d(a, b) + d(b, c) ≥ d(a, c), to accelerate the K-means algorithm. While the first work along this line was done by Phillips [2002], it was first successfully popularized by Elkan [2003]. Since then, several works have attempted to build faster K-means clustering algorithms with better incorporation of, or tighter bounds developed through use of, the triangle inequality [Hamerly, 2010; Ding et al., 2015; Newling and Fleuret, 2016]. Despite the heavy use of the triangle inequality to accelerate K-means clustering, we are aware of no prior works that apply it to the seed selection step of K-means++ and K-means∥. We believe this is largely because these methods can not accelerate the first iteration of K-means, as they rely on the first iteration's result to accelerate subsequent iterations. Since K-means++ is effectively a single iteration of K-means, their approaches can not be directly applied to the seed selection step.

In our work to accelerate K-means∥ using metric index structures, a similar historical theme emerges. Prior works have looked at using index structures like KD-trees [Pelleg and Moore, 1999] and Cover-trees [Curtin, 2017] to accelerate the K-means clustering algorithm, but did not look at the seed selection step. Similarly, we will use metric indices to accelerate K-means∥, but we will develop an enhanced nearest neighbor query that considers a maximum range, to meaningfully prune even when using small values of K.

Most work we are aware of focuses on extending or utilizing the K-means++ algorithm, with few significant results on improving it. The most significant in this regard is the AFK-MC² [Bachem et al., 2016a] algorithm and its predecessor K-MC² [Bachem et al., 2016b]. Both can obtain initial seeds with the same quality as K-means++ with fewer distance computations, but scale as O(n/P + mK²), where m is a budget factor. This makes them less effective when a large number of CPUs P is available or when K is large. Neither work factored in actual run-time; [Newling and Fleuret, 2017] showed that these implementations are actually 3.3× slower when overheads are factored in.
We consider run-time in our own work to show that our improvements materialize in practice.

3 Accelerating K-Means++
We start with the K-means++ algorithm, for which we present detailed pseudo-code in Algorithm 1. We detail how the method works when each data point x_i has an associated weight w_i, as this is required later on. The algorithm begins by selecting an initial seed at random, and then assigning a new weight β_i to each data point x_i, based on the squared distance of x_i to the closest existing seed. At each iteration, we select a new seed based on these weights, and return once we have K total seeds. This requires K iterations through the dataset of size n, resulting in O(n·K) distance computations.

Algorithm 1 K-Means++
Require: Desired number of seeds K, data points x_1, ..., x_n, data weights w_1, ..., w_n
1: Weight of each data point w_i ≥ 0
2: β_i ← w_i / Σ_{j=1}^{n} w_j, ∀i ∈ [1, n]
3: m_1 ← x_i, where i is selected with probability β_i
4: k ← 1
5: α ← (∞, ..., ∞)
6: while k < K do
7:   for i ∈ [1, n] do
8:     α_i ← min(α_i, d(m_k, x_i))
9:   Z ← Σ_{i=1}^{n} w_i · α_i²
10:  for i ∈ [1, n] do
11:    β_i ← w_i · α_i² / Z
12:  k ← k + 1
13:  m_k ← x_i, where i is selected with probability β_i
14: return initial means m_1, ..., m_K

Note that we cache the distance between each point x_i and its closest mean in the variable α_i. We will maintain this notation throughout the paper and use α_i as shorthand.
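As a concrete reference point, the following is a minimal NumPy sketch of the weighted K-means++ seeding in Algorithm 1. This is our illustration only; the paper's actual implementation lives in the Java JSAT library.

```python
import numpy as np

def kmeans_pp(X, K, w, rng=np.random.default_rng(0)):
    """Weighted K-means++ seeding as in Algorithm 1: O(n*K) distances."""
    n = len(X)
    beta = w / w.sum()                      # line 2: initial weights
    means = [X[rng.choice(n, p=beta)]]      # line 3: first seed
    alpha = np.full(n, np.inf)              # line 5: dist. to closest seed
    for _ in range(K - 1):
        # lines 7-8: update alpha against the newest mean only
        alpha = np.minimum(alpha, np.linalg.norm(X - means[-1], axis=1))
        Z = np.sum(w * alpha**2)            # line 9: normalizer
        beta = w * alpha**2 / Z             # line 11: D^2 sampling distribution
        means.append(X[rng.choice(n, p=beta)])  # line 13: next seed
    return np.array(means)
```

Calling kmeans_pp(X, 32, np.ones(len(X))) recovers the familiar unweighted case; the weights w are what allow Algorithm 3 to reuse this routine later.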
The first step toward improving the K-means++ algorithm is to filter out redundant distance computations. To do this, we note that at each iteration we compare the distance of each point x_i to the newest mean m_k against the previous closest mean m_j, where 1 ≤ j < k. That is, we need to determine if d(x_i, m_k) < d(x_i, m_j). To do this, we can use Lemma 1, as introduced and proven by Elkan [2003].

Lemma 1. Let x be a point and let b and c be centers. If d(b, c) ≥ 2·d(x, b), then d(x, c) ≥ d(x, b).

3.1 Applying the Triangle Inequality
We can use the distance between m_j and m_k to determine when computing d(x_i, m_k) is a fruitless effort, by checking whether d(m_j, m_k) ≥ 2·d(x_i, m_j); the distance d(x_i, m_j) is already available in the form of α_i, as maintained in Algorithm 1. We then only need to compute d(m_j, m_k) ∀j < k, of which there are intrinsically fewer than k unique values at each iteration. Thus, we can compute γ_j = d(m_j, m_k) once at the start of each loop, and re-use these k values for all n − k distance comparisons. Applying this bound, we can avoid many redundant computations. There are still K total iterations to select K means, and each iteration performs k distance computations between means plus at most n − k point-to-mean computations, i.e., at most n distance computations per iteration, so the worst case remains O(nK) distance computations for the K-means++ algorithm.

3.2 Avoiding Subnormal Slowdowns
A non-trivial cost exists in lines 9-13 of Algorithm 1, where we must compute the probability of selecting each point as the next mean and then perform the selection. This requires at least 3·n floating-point multiplications, which can be a bottleneck in low-dimensional problems. This can be exacerbated because the squared distance to the closest center, α_i², naturally becomes very small as k increases, resulting in subnormal floating-point values. Subnormals (also called denormals) attempt to extend the precision of IEEE floats near zero, but can cause 100× slowdowns in computation [Dooley and Kale, 2006]. Depending on hardware, subnormals can also interfere with pipelining behavior and out-of-order execution, making a single subnormal computation highly detrimental to performance [Fog, 2016]. This is particularly problematic because pruning based on the triangle inequality works best on low-dimensional problems, and the normalization step prevents us from realizing speedups in terms of total run-time. To circumvent this bottleneck, we develop a simple approach to create a dynamic priority queue that allows us to sample the next mean accurately without having to interact with most of the samples per iteration.

[Figure 1 (diagram): a queue of n = 4 items, each entry holding a priority value λ_i·w_i⁻¹·α_i⁻² and a dirty flag, shown before and after re-prioritization and re-queueing. Caption: Example of priority re-queueing strategy for n = 4 items. Initially, it is not clear if items 2, 3, or 4 are the next to sample. All dirty items are removed from the queue until we reach a clean item and then re-inserted after fixing their priorities. We do not need to consider any item after the first clean item.]
We start with the elegant sampling-without-replacement strategy introduced by Efraimidis and Spirakis [2006]. Given n items 1, ..., n with weighted probabilities w_1, ..., w_n, it works by assigning each item i a priority λ_i·w_i⁻¹, where λ_i is sampled from the exponential distribution with rate 1 (i.e., λ_i ∼ Exponential(1)). To select K items without replacement, one selects the K items with highest priority (smallest λ_i·w_i⁻¹ values). This can normally be done with the quick-select algorithm in O(n) time. For K-means++ seeding, we instead use the priority λ_i·w_i⁻¹·α_i⁻² in order to produce random samples. Here w_i·α_i² acts as the weight for datum i being selected, combining the original relative weight of the datum, w_i, with the squared distance to the nearest seed, α_i². At the start we sample λ_i ∼ Exponential(1) once. During each round, we update all α_i values and leave λ_i fixed. It is trivial to see that this does not alter the expectation of any point being selected, conditioned on the points already removed: all λ_i are sampled independently, so the removal of any λ_i does not impact the relative weights of any other point. Thus, we can use the weighted sampling-without-replacement strategy of Efraimidis and Spirakis [2006] to select the seeds. We performed a sanity check by implementing this naive approach and making no other changes; this resulted in the same quality of solutions over many trials, with the same statistical mean and variance. At first glance, this strategy obtains no benefit, as the value of α_i will change on each iteration. Each value of α_i changing means that the relative ordering of all remaining priorities λ_i·w_i⁻¹·α_i⁻² will also change. This would require a full quick-select run on each iteration to discover the new maximum-priority item. However, we note that α_i can only decrease with each iteration, and thus the priority of any given sample either remains constant or decreases (its priority value λ_i·w_i⁻¹·α_i⁻² can only grow). Our first contribution is the realization that this property can be exploited to reduce the cost of sampling, so that only a subset of priorities needs to be considered to sample the next point. We can instead create a priority queue, using a standard binary heap, to select the next smallest value of λ_i·w_i⁻¹·α_i⁻², and maintain a marker for whether the priority of an item i has become dirty.
An item is dirty if and only if it has a higher apparent priority than it actually should have. If there is a clean item z in the queue, then all items with a lower apparent priority than z must have a true priority that is still lower than z's. Thus, we need only fix the priorities of items above z. See Figure 1 for an example of this queue for a dataset of n = 4 items. Item 2 is clean, and all items with a higher priority (3 and 4) are dirty. That means item 2 has the lowest possible priority that could be the next true sample, because it is possible the priority values of items 3 and 4 will become larger (read: lower priority) once the updated values of α_3 and α_4 are computed. Thus, we can remove all items in the queue until we reach item 2, and then re-insert them into the queue with their correct priorities. In this hypothetical example, item 4 still had the smallest priority value after updating (i.e., the highest priority), and so will become the next mean when we then remove it from the queue. Item 1 occurred after item 2 because it had a lower priority. Even though item 1 was dirty, we did not need to consider it, because its priority can only decrease once α_1 is updated. Because item 2 was clean, its priority will not change, and there is no possibility of item 1 being selected.

3.3 Accelerated K-Means++
The final algorithm that performs the accelerated computation is given in Algorithm 2.

Algorithm 2 Our Accelerated K-Means++
Require: Desired number of seeds K, data points x_1, ..., x_n, data weights w_1, ..., w_n
1: λ_i ∼ Exponential(1), ∀i ∈ [1, n]
2: Weight of each data point w_i ≥ 0
3: Priority queue Q with each index i given priority λ_i / w_i
4: dirty_i ← False
5: m_1 ← x_{Q.Pop()}
6: α ← (∞, ..., ∞), k ← 1, φ_i ← 0
7: for k ∈ [1, K) do  ⊲ For each new center k
8:   for j ∈ [1, k) do  ⊲ Get distance to previous centers
9:     γ_j ← d(m_k, m_j)
10:  for i ∈ [1, n] do
11:    if ½·γ_{φ_i} ≥ α_i then
12:      continue  ⊲ Pruned by Lemma 1
13:    if d(m_k, x_i) < α_i then
14:      α_i ← d(m_k, x_i)
15:      φ_i ← k
16:      dirty_i ← True  ⊲ Priority may now be too high
17:  Create new stack S
18:  while dirty_{Q.Peek()} do  ⊲ All items that could be selected
19:    i ← Q.Pop()
20:    S.Push(i)
21:  for i ∈ S do  ⊲ Update true priority
22:    Q.Push(i, λ_i / (w_i · α_i²))
23:    dirty_i ← False
24:  m_k ← x_{Q.Pop()}  ⊲ Select new mean by clean top priority
25: return initial means m_1, ..., m_K
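To make the bookkeeping concrete, here is a small NumPy/heapq sketch of Algorithm 2 (again our illustration, not the JSAT code). It combines the Lemma 1 prune with the lazily repaired queue, and assumes strictly positive weights and distinct data points so that no repaired priority divides by α_i = 0.

```python
import heapq
import numpy as np

def accel_kmeans_pp(X, K, w, rng=np.random.default_rng(0)):
    """Sketch of Algorithm 2: same sampling distribution as K-means++,
    with Lemma 1 pruning and a lazily repaired ("dirty") priority queue."""
    n = len(X)
    lam = rng.exponential(1.0, size=n)       # line 1: fixed lambda_i
    alpha = np.full(n, np.inf)               # distance to closest seed
    phi = np.zeros(n, dtype=int)             # index of closest seed
    dirty = np.zeros(n, dtype=bool)
    Q = [(lam[i] / w[i], i) for i in range(n)]   # line 3: priority lambda_i/w_i
    heapq.heapify(Q)
    means = [heapq.heappop(Q)[1]]            # line 5: first seed (stored by index)
    for _ in range(K - 1):
        m_new = X[means[-1]]
        gamma = np.linalg.norm(X[means] - m_new, axis=1)  # lines 8-9: gamma_j
        for i in range(n):
            if 0.5 * gamma[phi[i]] >= alpha[i]:
                continue                     # lines 11-12: pruned by Lemma 1
            d = np.linalg.norm(m_new - X[i])
            if d < alpha[i]:                 # lines 13-16: new closest mean
                alpha[i], phi[i], dirty[i] = d, len(means) - 1, True
        # lines 17-23: pop items whose apparent priority may be stale,
        # then re-push them with their repaired true priority
        stale = []
        while Q and dirty[Q[0][1]]:
            stale.append(heapq.heappop(Q)[1])
        for i in stale:
            heapq.heappush(Q, (lam[i] / (w[i] * alpha[i] ** 2), i))
            dirty[i] = False
        means.append(heapq.heappop(Q)[1])    # line 24: clean top = next mean
    return X[np.asarray(means)]
```

Because selected means are popped from Q and never re-inserted, they can never be chosen twice, mirroring sampling without replacement.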
Lines 8-12 take care to avoid redundant distance computations, and lines 16-23 ensure that the dynamic priority queue allows us to select the next mean without considering all n − k remaining candidates. Combined, we are able to regularly gain reductions both in the total time taken and in the number of distance computations required. Through the use of our dynamic priority queue, we find that we regularly consider less than 1% of the remaining n − k items. This is important when we work with low-dimension datasets: when the dimension is very small (e.g., d = 2 for longitude/latitude data is a common use case), there is little computational cost in the distance computations themselves, and so much of the bottleneck in runtime is contained within the sampling process. Our dynamic queue avoids this bottleneck, allowing us to realize the benefits of reduced distance computations.

4 Accelerating K-Means∥
Now we turn our attention to the K-means∥ algorithm, detailed in Algorithm 3. While K-means∥ requires more distance computations, it is preferred in distributed environments because it requires less communication, which is a significant bottleneck for K-means++ [Bahmani et al., 2012]. It works by reducing the K rounds of communication to a fixed number of R ≪ K rounds, yet still obtains the log-optimal results of K-means++ [Bachem et al., 2017]. In each of the rounds, ℓ new means are sampled based on the weighted unnormalized probability ℓ·w_i·α_i². With the standard defaults of R = 5 and ℓ = 2K, we end up with an expected R·2K > K total means. These R·ℓ potential means are weighted by the number of points that they are closest to, and are then passed to the K-means++ algorithm to reduce them to a final set of K means, which produces the final result. Note this last step requires O(K²) distance computations when naively using Algorithm 1, making it necessary to accelerate the K-means++ algorithm in order to effectively accelerate K-means∥ for datasets with large K.

Algorithm 3 K-Means∥
Require: Desired number of seeds K, x_1, ..., x_n, data weights w_1, ..., w_n, rounds R, oversampling factor ℓ
1: Weight of each data point w_i ≥ 0
2: β_i ← w_i / Σ_{j=1}^{n} w_j, ∀i ∈ [1, n]
3: c_1 ← x_i, where i is selected with probability β_i
4: k ← 1, k_prev ← 0, α ← (∞, ..., ∞)
5: for r ∈ [1, R] do
6:   for i ∈ [1, n] do
7:     for j ∈ (k_prev, k] do
8:       α_i ← min(α_i, d(c_j, x_i))
9:   k_prev ← k
10:  Z ← Σ_{i=1}^{n} w_i · α_i²
11:  for i ∈ [1, n] do
12:    if p ∼ Ber(min(1, ℓ·w_i·α_i²/Z)) is true then
13:      k ← k + 1, c_k ← x_i, α_i ← 0
14: Let w′_i ← Σ_{j=1}^{n} w_j · 1[d(c_i, x_j) = α_j]  ⊲ Weight set to number of points closest to center c_i
15: return K-Means++(K, c_1, ..., c_k, w′_1, ..., w′_k)  ⊲ Run Algorithm 1

Since K < R·ℓ ≪ n, the final step of running K-means++ is not overbearing to run on a single compute node, and the sampling procedure is no longer a bottleneck that requires subversion. In a distributed setting, the ℓ new means selected are broadcast out to all worker nodes, which is possible because ℓ ≪ n, and thus requires limited communication overhead. However, the ability to use the triangle inequality becomes less obvious. Using the same approach as before, similar to Elkan [2003], would require O(K²) pairwise distance computations between the new and old means, and more book-keeping overhead that would reduce the effectiveness of avoiding distance computations. Another strategy would use an algorithm like the Cover-tree, which accelerates nearest neighbor searches and supports the removal of data points from the index [Beygelzimer et al., 2006; Izbicki and Shelton, 2015]; then, we could perform an all-points nearest neighbor search [Curtin et al., 2013]. However, we are unaware of any approach that has produced a distributed cover-tree algorithm that would not run into the same communication overheads that prevent the standard K-means++ from working in this scenario. As such, it does not appear to be a worthwhile strategy.

4.1 Nearest In Range Queries
Another approach would be to fit an index structure C to only the ℓ new points, and for each non-mean x_i find its nearest potentially-new assignment by querying C. Since ℓ is O(K), this is too small a dataset for pruning to be effective with current methods. To remedy this, we note that we have additional information available to perform the search.
The value α_i gives the distance of point x_i to its closest current mean. As such, we introduce a NearestInRange search that returns the nearest neighbor to a query point q against an index C only if it is within a radius of r of the query. Since most points x_i will not change ownership in a given iteration, a NearestInRange search can often prune out the entire search tree, and it remains effective even if K is small. To do this, we use the Vantage Point (VP) tree algorithm [Yianilos, 1993], because it is fast to construct, has low overhead that makes it competitive with other algorithms such as KD-trees and Cover-trees [Raff and Nicholas, 2018], and is simple to augment with our new NearestInRange search. The pseudo-code for the standard VP search is given in Algorithm 4, where GetChild, Search, and Best are auxiliary functions used by the Nearest function to implement a standard nearest neighbor search. Each VP node has a left and right child, and the search uses a value τ to keep track of the distance to the nearest neighbor found. Each node also maintains two pairs of bounds: near_low, near_high indicate the shortest and farthest distances to the points in the left child, and far_low, far_high do the same for the right child. A standard nearest neighbor search calls the Nearest function with τ = ∞, and the bound is updated as the search progresses when it fails to prune a branch. Our contribution is simple: the NearestInRange function instead sets τ = r, the minimum viable radius. It is easy to verify that this can only monotonically improve the pruning rate of each search. Since τ bounds the distance to the nearest neighbor, and we know from the α values an upper bound on the distance to the nearest neighbor, the modification remains correct. The rest of the algorithm remains unaltered, and the search can simply terminate faster due to a meaningful initial bound.

Algorithm 4 Nearest Neighbor Search in VP Tree
1: function GetChild(low)
2:   if low = true then
3:     return left child
4:   return right child  ⊲ Else, return other
5: function Search(r, τ, low)
6:   if low = true then
7:     a ← near_low, b ← near_high
8:   else
9:     a ← far_low, b ← far_high
10:  return a − τ < r < b + τ  ⊲ i.e., is this True or False?
11: function Best(τ, τ′, ID, ID′)
12:   if τ < τ′ then
13:     return τ, ID
14:   return τ′, ID′  ⊲ Else, return other
15: function Nearest(q, τ, ID)
16:   r ← d(p, q)
17:   τ, ID ← Best(τ, r, ID, p)
18:   m ← (near_high + far_low) / 2
19:   lf ← r < m  ⊲ True/False, search near/left child first?
20:   if Search(r, τ, lf) then
21:     τ′, ID′ ← GetChild(lf).Nearest(q, τ, ID)
22:     τ, ID ← Best(τ, τ′, ID, ID′)
23:   if Search(r, τ, ¬lf) then
24:     τ′, ID′ ← GetChild(¬lf).Nearest(q, τ, ID)
25:     τ, ID ← Best(τ, τ′, ID, ID′)
26:   return τ, ID
27: function NearestInRange(q, maxRange)
28:   return Nearest(q, maxRange, −1)  ⊲ This simple function, used in place of Nearest, is our contribution.

Thus, to build an accelerated K-means∥, we build an index C on the newly selected means and compare each point against this filtered set with our NearestInRange search, as detailed in Algorithm 5. For the first iteration, the loop on lines 7-11 will be fast, with only c_1 to determine the initial distribution, and on every subsequent round we have a meaningful value of α_i that can be used to accelerate the search. If none of the ℓ new candidates c_{k_prev}, ..., c_k are within a distance of α_i of a point x_i, then the NearestInRange function will return a negative index, which can be skipped. In addition, we use our accelerated K-means++ algorithm (Algorithm 2) in the final step, rather than the standard algorithm. This allows us to accelerate all parts of the K-means∥ method while also keeping the simplicity and low communication cost of the original design. The Vantage Point tree is a small index, since it is built upon a small dataset of ℓ points, and the index can be sent to every worker node in a cluster in the exact same manner.

Algorithm 5 Our Accelerated K-Means∥
Require: Desired number of seeds K, x_1, ..., x_n, data weights w_1, ..., w_n, rounds R, oversampling factor ℓ
1: Weight of each data point w_i ≥ 0
2: β_i ← w_i / Σ_{j=1}^{n} w_j, ∀i ∈ [1, n]
3: c_1 ← x_i, where i is selected with probability β_i
4: α ← (∞, ..., ∞), k_prev ← 0, k ← 1
5: for r ∈ [1, R] do
6:   C ← new index built from {c_{k_prev}, ..., c_k}
7:   for i ∈ [1, n] do
8:     j ← C.NearestInRange(x_i, α_i)
9:     if j ≥ 0 then
10:      α_i ← d(c_j, x_i)
11:  k_prev ← k
12:  Z ← Σ_{i=1}^{n} w_i · α_i²
13:  for i ∈ [1, n] do
14:    if p ∼ Ber(min(1, ℓ·w_i·α_i²/Z)) is true then
15:      k ← k + 1, c_k ← x_i, α_i ← 0
16: Let w′_i ← Σ_{j=1}^{n} w_j · 1[d(c_i, x_j) = α_j]  ⊲ Weight set to number of points closest to center c_i
17: return K-Means++(K, c_1, ..., c_k, w′_1, ..., w′_k)  ⊲ Run Algorithm 2
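As a rough Python sketch of the NearestInRange idea from § 4.1 (ours, not the paper's JSAT implementation), note that the only change from a textbook VP-tree search is that the pruning bound starts at the caller's radius instead of infinity:

```python
import numpy as np

class VPTree:
    """Minimal vantage point tree over a small set of points (rows of pts)."""
    def __init__(self, pts):
        self.p = pts[0]                          # vantage point
        rest = pts[1:]
        if len(rest) == 0:
            self.mu, self.left, self.right = 0.0, None, None
            return
        d = np.linalg.norm(rest - self.p, axis=1)
        self.mu = float(np.median(d))            # split radius
        near, far = rest[d <= self.mu], rest[d > self.mu]
        self.left = VPTree(near) if len(near) else None
        self.right = VPTree(far) if len(far) else None

    def nearest_in_range(self, q, max_range):
        """Nearest neighbor of q only if within max_range, else (max_range, None).
        Seeding the bound tau with max_range (rather than +inf) is the entire
        modification; it can only tighten pruning, so correctness is unchanged."""
        best = [max_range, None]                 # [tau, point]

        def visit(nd):
            if nd is None:
                return
            r = float(np.linalg.norm(q - nd.p))
            if r < best[0]:
                best[0], best[1] = r, nd.p
            if r <= nd.mu:                       # q falls inside: search near first
                if r - best[0] <= nd.mu: visit(nd.left)
                if nd.mu - r <= best[0]: visit(nd.right)
            else:                                # q falls outside: search far first
                if nd.mu - r <= best[0]: visit(nd.right)
                if r - best[0] <= nd.mu: visit(nd.left)

        visit(self)
        return best[0], best[1]
```

In the spirit of Algorithm 5, the tree would be rebuilt each round over only the ℓ newly sampled centers, and each x_i queried as tree.nearest_in_range(x_i, alpha_i); a None result plays the role of the negative index, meaning x_i keeps its current assignment.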
5 Experimental Results
Now that we have detailed the methods by which we accelerate the K-means++ and K-means∥ algorithms, we will evaluate their effectiveness. The two measures we are concerned with are: 1) the total number of distance computations, and 2) the total run-time spent. Measuring distance computations gives us an upper bound on the potential effectiveness of our algorithm, and allows us to compare approaches in an implementation- and hardware-independent manner. Measuring the run-time gives us information about the ultimate goal, which is to reduce the time it takes to obtain K seeds; however, it is sensitive to the hardware in use, the language the approach is implemented in, and the relative skills of program authors. For this work we used the JSAT library [Raff, 2017]. The K-means++ algorithm was provided by this framework, and we implemented K-means∥ and the accelerated versions of both algorithms using JSAT. This way, all run-time comparisons between the K-means++ and K-means∥ algorithms presented are directly comparable. Our implementations have been contributed to the JSAT library for public use. Prior works that have investigated alternatives to K-means++ have generally explored only a few datasets with D < 100 features and fewer than 4 values of K, sometimes testing only one value of K per dataset [Bachem et al., 2016b]. For example, while MNIST is regularly tested in seed selection, it is usually projected down to 50 dimensions first [Hamerly, 2010], due to being difficult to accelerate. Since our goal is to produce accelerated versions of these algorithms that are uniformly better, we attempt to test over a wide selection of reasonable scenarios. In Table 1 we show the 11 datasets we use, with D ∈ [3, 780] and n covering four orders of magnitude. We test K ∈ [32, 4096], covering each power of two, so that we may understand the behavior as K changes and make sure we produce an improvement even when K is small. To the best of our knowledge, this is a larger number of datasets, and a wider range of values of K and D, than has been tested in prior work¹.
Table 1: Datasets used, ordered by the number of samples n; the rightmost column gives the number of features D.
Dataset        n         D
Phishing       11055     68
cod-rna        59535     8
MNIST          60000     780
aloi           108000    128
Range-Queries  200000    8
Skin/NoSkin    245057    3
covertype      581012    54
SUSY           5000000   18
Activity Rec.  33741500  5
HIGGS          11000000  28
Web            45811883  5

Unless stated otherwise, all experiments were done with a single CPU core of an iMac with a 3.5 GHz Intel i5 CPU and 64 GB of RAM. The phishing dataset is only tested up to K = 2048, because at K = 4096 we would be selecting over 1/4 of the dataset as means, at which point the purpose of K-means++ style seeding is defeated by selecting too large a portion of the corpus. All results are averaged over 5 runs, and took four months to complete in our compute environment.

5.1 K-Means++ Results
We start with the K-means++ results, with the reduction in distance computations shown in Figure 2. In the worst case, K = 32 on the MNIST dataset, we still have to do 98% of the distance computations of the standard algorithm, but this improves to only 63% by K = 4096. The best case is observed with the Web dataset, starting out with only 15% of the distance computations at K = 32 and only 0.1% by K = 4096, a 739× improvement. Across all the datasets, we see that the factor reduction in distance computations is a monotonic improvement for K-means++. We never see any case where our accelerated approach performs more distance computations than the naive approach. This confirms our decision to do an extra k − 1 distance computations between the newest mean m_k and the previous means m_1, ..., m_{k−1}. As we noted in the design of our accelerated variant, we must avoid over-emphasizing the reduction in distance computations alone, as re-normalizing the distribution to sample the next mean is a non-trivial cost. This is especially true when we are able to reduce the distance computations by ≥16× for several of our datasets. The results showing the run-time reduction are presented in Figure 3. In all cases, our accelerated version of K-means++ is faster than the standard algorithm. As expected, MNIST has the lowest speedup based on the number of distance computations avoided: at K = 32 we achieved only a 3.4% reduction in time, but were 1.5× faster by K = 4096.

Dynamic Priority Impact
Since the normalization step is non-trivial, especially when D is small, we see that the actual speedup in run-time is not as strongly correlated with the dimension D.

¹We are aware of no prior work in this space that has considered D > 1024, where pruning methods are unlikely to succeed due to the curse of dimensionality. We consider this reasonable and beyond scope, as such scenarios are usually sparse and best handled by topic models like LDA.

[Figure 2 (plot): K from 2⁵ to 2¹² on the x-axis, factor reduction in distance computations on the y-axis (log scale), one curve per dataset. Caption: Factor reduction in distance computations for our accelerated K-means++ algorithm compared to original.]
[Figure 3 (plot): K from 2⁵ to 2¹² on the x-axis, run-time speedup on the y-axis, one curve per dataset. Caption: Run-time speedup for our accelerated K-means++ algorithm compared to the standard algorithm.]

The Covertype dataset (D = 54) had the 4th-largest reduction in distance computations, but it had the largest reduction in run-time, with a 17× improvement at K = 4096. Our ability to still obtain real speedups on these datasets is because our dynamic priority queue allows us to consider only a small subset of the dataset to accurately select the next weighted random mean. This can be seen in Figure 4, where a subset of the datasets is shown with the fraction of the corpus examined on the y-axis. As the datasets get larger, our dynamic queue generally becomes more effective, reducing the number of points that need to be checked to ≤1%. To confirm that our dynamic priority queue's results are meaningful, we perform an ablation of Algorithm 2 in which the dynamic priority queue on lines 18-23 is replaced with the standard sampling code from Algorithm 1. We run both versions and record the speedup when our dynamic queue is used in Table 2, for K = 4096. Here we can see that, with the exception of the cod-rna dataset, where there is a <2% slowdown (on the fastest dataset to run), our approach gives a 5%-231% speedup in all other cases, with a median improvement of 20%.

[Figure 4 (plot): the k'th mean being selected on the x-axis (linear scale, 0 to 4,096), fraction of the dataset examined on the y-axis (log scale, 10⁻⁸ to 10⁻²), with curves for Cod-RNA, MNIST, Range Queries, Covtype, Activity Recognition, HIGGS, and Web. Caption: The fraction of the remaining candidates that need to be examined (y-axis, log scale) to select the k'th mean (x-axis, linear scale) using our dynamic priority queue.]

Table 2: Ablation testing of the speedup from using our new dynamic priority queue to perform seed selection at every iteration. Values above 1 indicate faster results using our dynamic queue; the pruning from Algorithm 2 was used both with and without the dynamic queue.
Dataset               Speedup
cod-rna               0.983
Phishing              2.313
MNIST                 1.059
aloi                  1.059
Range-Queries         1.342
Skin/NoSkin           1.217
covtype               1.877
SUSY                  1.259
Activity Recognition  1.207
HIGGS                 1.279
Web                   1.725

We also note that for all K < 4096 we still observe benefits from our queue, though the variance in the degree of speedup does increase. We did not observe any performance regressions larger than 3% in extended testing.

5.2 K-Means∥ Results
In Figure 5 we show the factor reduction in distance computations, which mirrors the overall trends of Figure 2. The results improve by an additional ≈2-4×, with a 579× reduction in distance computations on the Activity Recognition dataset. The MNIST dataset still had the least improvement, but still obtained a more significant 88% reduction in distance computations at K = 32. The approximately 4× improvement in distance computations also carries over to the total run-time, as shown in Figure 6. We observe more consistent behavior because the cost of normalizing and sampling the new means is reduced to only R = 5 rounds of sampling. Where our accelerated K-means++ had its relative improvement drop significantly for small D < 10 datasets due to this overhead, our accelerated K-means∥ algorithm sees the ordering remain relatively stable.
For example, the Activity Recognition dataset enjoys the greatest reduction in distance computations as well as run-time, and the 579× reduction in distance computations closely matches the 551× reduction in run-time. The HIGGS dataset has the lowest improvement in run-time, with a 1.02× speedup at K = 32 and 2.9× at K = 4096. We also note that the NearestInRange query provided an additional 1.5-4× speedup in most cases, but this was highly dependent on the dataset and value of K.

[Figure 5 (plot): K from 2⁵ to 2¹² on the x-axis, factor reduction in distance computations on the y-axis (log scale), one curve per dataset. Caption: Factor Reduction in Distance Computations needed to perform K-means∥ seed selection. Larger is better.]

[Figure 6 (plot): K from 2⁵ to 2¹² on the x-axis, run-time speedup on the y-axis, one curve per dataset. Caption: Run-time speedup for our accelerated K-means∥ algorithm compared to the standard algorithm. Larger is better.]

6" + }, + { + "url": "http://arxiv.org/abs/2012.09932v1", + "title": "Research Reproducibility as a Survival Analysis", + "abstract": "There has been increasing concern within the machine learning community that\nwe are in a reproducibility crisis. As many have begun to work on this problem,\nall work we are aware of treat the issue of reproducibility as an intrinsic\nbinary property: a paper is or is not reproducible. Instead, we consider\nmodeling the reproducibility of a paper as a survival analysis problem. We\nargue that this perspective represents a more accurate model of the underlying\nmeta-science question of reproducible research, and we show how a survival\nanalysis allows us to draw new insights that better explain prior longitudinal\ndata. The data and code can be found at\nhttps://github.com/EdwardRaff/Research-Reproducibility-Survival-Analysis", + "authors": "Edward Raff", + "published": "2020-12-17", + "updated": "2020-12-17", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG", + "stat.AP" + ], + "main_content": "Introduction There is current concern that we are in the midst of a reproducibility crisis within the fields of Artificial Intelligence (AI) and Machine Learning (ML) (Hutson 2018). Rightfully, the AI/ML community has done research to understand and mitigate this issue. Most recently, Raff (2019) performed a longitudinal study attempting to independently reproduce 255 papers, and they provided features and public data to begin answering questions about the factors of reproducibility in a quantifiable manner. Their work, like all others of which we are aware, evaluated reproducibility using a binary measure. Instead, we argue for and demonstrate the value of using a survival analysis. In this case, we model the hazard function λ(t|x), which predicts the likelihood of an event (i.e., reproduction) occurring at time t, given features about the paper x. In survival analysis, we want to model the likely time until the occurrence of an event. This kind of analysis is common in medicine, where we want to understand what factors will prolong a life (i.e., increase survival) and what factors may lead to an early death. In our case, the event we seek to model is the successful reproduction of a paper's claims by a reproducer that is independent of the original paper's authors.
Compared to the normal terminology used, we desire factors that decrease survival, meaning a paper takes less time to reproduce. In the converse situation, a patient that "lives forever" would be equivalent to a paper that cannot be reproduced with any amount of effort. We will refer to this as reproduction time, in order to reduce our use of standard terminology that has the opposite connotation of what we desire. Our goal, as a community, is to reduce the reproduction time toward zero, and to understand what factors increase or decrease reproduction time. We will start with a review of related work on reproducibility, specifically in machine learning, in section 2. Next, we will detail how we developed an extended dataset with paper survival times in section 3. We will use models with the Cox proportional hazards assumption to perform our survival analysis, which will begin with a linear model in section 4. The linear model will afford us easier interpretation and statistical tools to verify that the Cox proportional hazards model is reasonable for our data. In section 5 we will train a non-linear Cox model that is a better fit for our data, in order to perform a more thorough analysis of our data under the Cox model. Specifically, we show in detail how the Cox model allows us to better explain observations originally noted in (Raff 2019). This allows us to measure a meaningful effect size for each feature's impact on reproducibility, and to better study their relative importances. We stress that the data used in this study does not make these results definitive; rather, they are useful as a new means to think about and study reproducibility.

2 Related Work
The meta-science question of reproducibility — scientifically studying the research process itself — is necessary to improve how research is done in a grounded and objective manner (Ioannidis 2018). Significant rates of non-reproduction have been reported in many fields, including a 6% rate in clinical trials for drug discovery (Prinz, Schlange, and Asadullah 2011), making this an issue of greater concern in the past few decades. Yet, as far as we are aware, all prior works in machine learning and other disciplines treat reproducibility as a binary problem (Gundersen and Kjensmo 2018; Glenn and P.A. 2015; Ioannidis 2017; Wicherts et al. 2006). Even works that analyze the difference in effect size between publication and reproduction still view the result in a binary manner (Collaboration 2015). Some have proposed varying protocols and processes that authors may follow to increase the reproducibility of their work (Barba 2019; Gebru et al. 2018). While valuable, we lack quantification of their effectiveness. In fact, little has been done to empirically study many of the factors related to reproducible research, with most work being based on a subjective analysis. Olorisade, Brereton, and Andras (2018) performed a small-scale study over 6 papers in the specific sub-domain of text mining. Bouthillier, Laurent, and Vincent (2019) showed how replication (e.g., with docker containers) can lead to issues if the initial experiments use fixed seeds, which has been a focus of other work (Forde et al. 2018).
The largest empirical study was done by Raff (2019), which documented features while attempting to reproduce 255 papers. This study is what we build upon for this work. Sculley et al. (2018) have noted a need for greater rigor in the design and analysis of new algorithms. They note that the current focus on empirical improvements and structural incentives may delay or slow true progress, despite appearing to improve monotonically on benchmark datasets. This was highlighted in recent work by Dacrema, Cremonesi, and Jannach (2019), who attempted to reproduce 18 papers on neural recommendation algorithms. Their study found that only 7 could be reproduced with "reasonable effort." Also concerning is that 6 of these 7 could be outperformed by better tuning of the baseline comparison algorithms. These issues regarding the nature of progress, and what is actually learned, are of extreme importance. We will discuss these issues as they relate to our work and what is implied about them from our results. However, the data from (Raff 2019) that we use does not quantify such "delayed progress," but only the reproduction of what is stated in the paper. Thus, in our analysis, we would not necessarily be able to discern the issues with the 6 papers with insufficient baselines found by (Dacrema, Cremonesi, and Jannach 2019). We note that the impact of time on reproduction by the original authors, or when using the original code, has been previously noted (Mesnard and Barba 2017; Gronenschild et al. 2012), and is often termed "technical debt" (Sculley et al. 2015). While related, this is a fundamentally different concern: our study is over independently attempted implementations, meaning technical debt of an existing code base cannot exist. Further, these prior works still treat replication as a binary question despite noting the impact of time on difficulty. Our distinction is using the time to implement itself as a method of quantifying the degree or difficulty of replication, which provides a meaningful effect size to better study replication.

3 Study Data
The original data used by Raff (2019) was made public, but with explicit paper titles removed. We have augmented this data in order to perform this study. Specifically, of the papers that were reproduced, the majority had their implementations made available as a part of the JSAT library (Raff 2017). Using the Github history, we were able to determine end dates for the completion of an algorithm's implementation. In addition, the original start dates of each implementation are recorded by Mendeley, and these were used for the original study. Combined, this gives us start and end dates, and thus survival times, for 90 out of the 162 papers that were reproduced. The remaining 44% of reproduced papers, for which we could not determine any survival time, were unfortunately excluded from the analysis conducted in the remainder of this paper.

[Figure 1 (histogram): days to reproduce on the x-axis (0 to 1,500), density on the y-axis. Caption: Histogram of the time taken to reproduce. The dark blue line shows a Kernel Density Estimate of the density, and the dashes on the x-axis indicate specific values.]

Summary statistics on the time taken to reproduce these 90 papers are shown in Figure 1. While most were reproduced quickly, many papers required months or years to reproduce.
Raff (2019) noted that the attempts at implementation were not continuous efforts, and gave many cautions about potential bias in the results (most prominently, all attempts are from a single author). Since we are extending their data, all these same biases apply to this work — with additional potential confounders. The total amount of time to implement could be impacted by life events (jobs, stresses, etc.), lack of appropriate resources, or attempting multiple works simultaneously, which is all information we lack. As such, readers must temper any expectation of our results being a definitive statement on the nature of reproduction, and instead treat the results as initial evidence and indicators. Better data must be collected before stronger statements can be made. Since the data we have extended took 8 years of effort, we expect larger cohort studies to take several years of effort. We hope our paper will encourage such studies to include time spent in the study's design, and motivate participation and compliance of cohorts. With appropriate caution, our use of Github as a proxy measure of time spent gives us a level of ground truth for the reproduction time of a subset of reproduced papers. We lack any labels for the time spent on papers which failed to be reproduced, and on successful reproductions outside of Github. Survival analysis allows us to work around some of these issues as a case of right censored data. Right censored data would indicate a start time s and an observed amount of time t_o, but not the successful event (reproduction) e. If t_e is the amount of time needed to observe the event e (i.e., the true amount of time needed to reproduce), then we have t_o < t_e. If we can estimate a minimum time 0 < t̂_o < t_o, we can still perform a reasonable and meaningful survival analysis. We work under the assumption that more time was likely spent on papers that were not reproduced than on ones that were, and so assume that the average time spent on successful papers (t̂_o) is less than the actual amount of time spent (t_o). As such, we assign every non-reproduced paper the average amount of time for the data in Figure 1, or 218.8 days of effort, as an approximate guess at the amount of time that would be expended. We note that we found our analysis to be robust to a wide range of alternative constants, such as the median time (53.5 days). While the exact values in the analysis would change, the results were qualitatively the same. A repetition showing the qualitative similarity of using the median can be found in Appendix C. A survival model does not provide us the means to circumvent the cases where we lack the observed time t_o for successfully reproduced papers. Due to the reduced number of data points (the 72 papers not implemented in JSAT's Github), we have excluded some of the variables from analysis in this paper: specifically, the venue a paper was published in and the type of venue. The reduced dataset resulted in significant skew in some of the sub-categories of these fields, making comparisons untenable (e.g., the number of papers in Workshops was reduced to one example). The above protocol was used to create the data for this study, and the data will be made available for others to study and perform additional analysis on. Our lack of ground-truth "level of effort" for all of the original data is a source of potential bias in our results, and should caution against taking the results as any kind of proclamation or absolute truth. That said, we find the analysis useful and able to elucidate finer details and insights that were not recognizable under the standard binary analysis of reproduction.
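To make the setup just described concrete, here is a toy sketch using the lifelines package of how such right-censored data can be encoded and fit; the rows and covariates are invented for illustration, and the study's real features appear in § 4.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One hypothetical row per paper: 'days' is the observed reproduction time,
# with non-reproduced papers right censored at the assumed 218.8-day average;
# 'reproduced' marks whether the event was actually observed.
papers = pd.DataFrame({
    "days":         [14.0, 431.0, 218.8, 62.0, 218.8, 5.0, 120.0, 218.8],
    "reproduced":   [1,    1,     0,     1,    0,     1,   1,     0],
    "has_appendix": [1,    0,     0,     1,    1,     0,   1,     0],
    "pages":        [8,    12,    9,     10,   6,     7,   11,    9],
})

cph = CoxPHFitter()
cph.fit(papers, duration_col="days", event_col="reproduced")
cph.print_summary()             # coefficients beta, hazard ratios, p-values
# cph.check_assumptions(papers) # tests the proportional hazards assumption
```

The event column is what lets the fitter treat the 218.8-day rows as "still surviving" rather than as observed reproduction times.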
4 Linear Hazard Analysis
We start with the standard Cox model, where we use a linear set of coefficients β ∈ R^d to control how much our various features impact the predicted reproduction time: λ(t|x_i) = exp(x_i^⊤ β) · λ_0(t), where λ_0(t) is a baseline hazard function. The Cox model imposes a proportionality assumption on the nature of the hazard. The base hazard rate λ_0(t) may be different for every individual, but the proportionality assumption means that we expect altering any covariate to have the same proportional effect for every instance; e.g., specifying the hyperparameters used in a paper would always reduce the reproduction time by a factor of X%. The Cox model provides us a number of benefits for our analysis, provided that it is a reasonable assumption. First, it allows us to estimate β without knowing, or even modeling, the base hazard function λ_0(t). Second, though not unique to the Cox model, it supports right censored data. If an instance is right censored, we have waited for the event to occur for some amount of time t, but have not yet seen the event, which occurs at a later point in time t_e. This allows us to model all of the failed reproduction attempts as papers which may be reproducible, but for which we have not yet put in sufficient effort to reproduce the paper. This allows us to use our estimated effort spent on non-reproduced papers without causing significant harm to the underlying model.

4.1 Cox Proportional Hazard Validation
The question we must first answer is: is the Cox proportional hazards assumption reasonable for our data? First, as a baseline, we have the results of the non-parametric tests performed by (Raff 2019). Next, we train a linear logistic regression

Table 1: p-values for each feature's importance.
Feature                    Independent  Logistic  Cox
Year Published             0.964        0.613     0.92
Year Attempted             0.674        0.883     0.45
Has Appendix               0.330        0.201     0.07
Uses Exemplar Toy Problem  0.720        0.858     0.20
Looks Intimidating         0.829        0.035     0.20
Exact Compute Used         0.257        1.000     0.39
Data Available             <0.005       0.644     0.81
Code Available             0.213        0.136     0.18
Number of Authors          0.497        0.542     0.68
Pages                      0.364        0.702     0.82
Num" + }, + { + "url": "http://arxiv.org/abs/2012.09390v1", + "title": "Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection", + "abstract": "Recent works within machine learning have been tackling inputs of\never-increasing size, with cybersecurity presenting sequence classification\nproblems of particularly extreme lengths. In the case of Windows executable\nmalware detection, inputs may exceed $100$ MB, which corresponds to a time\nseries with $T=100,000,000$ steps. To date, the closest approach to handling\nsuch a task is MalConv, a convolutional neural network capable of processing up\nto $T=2,000,000$ steps. The $\\mathcal{O}(T)$ memory of CNNs has prevented\nfurther application of CNNs to malware. In this work, we develop a new approach\nto temporal max pooling that makes the required memory invariant to the\nsequence length $T$. This makes MalConv $116\\times$ more memory efficient, and\nup to $25.8\\times$ faster to train on its original dataset, while removing the\ninput length restrictions to MalConv.
We re-invest these gains into improving\nthe MalConv architecture by developing a new Global Channel Gating design,\ngiving us an attention mechanism capable of learning feature interactions\nacross 100 million time steps in an efficient manner, a capability lacked by\nthe original MalConv CNN. Our implementation can be found at\nhttps://github.com/NeuromorphicComputationResearchProgram/MalConv2", + "authors": "Edward Raff, William Fleshman, Richard Zak, Hyrum S. Anderson, Bobby Filar, Mark McLean", + "published": "2020-12-17", + "updated": "2020-12-17", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction Cybersecurity has received increased attention from machine learning practitioners and researchers due to the number of challenges that exist within the space. Industry datasets are routinely measured in petabytes (Spafford 2014), have noisy labels, are both structured and unstructured, suffer from continuous concept drift (Kantchelian et al. 2013), and adversarial attacks have been well motivated as a daily occurrence for decades (Rajab et al. 2011). In this work we are interested in the task of static malware detection, where, using the on-disk byte representation, one wishes to predict if a new executable program is benign or malicious. Current industry models rely heavily on domain-knowledge feature extraction, which is time-consuming and expensive, and requires intimate knowledge of Windows and low-level assembly and software design. Because malware authors adapt, this feature engineering is a continuous process, which can require reverse engineering effort to determine what new features should be extracted. To quantify the cost of such efforts, a single program can take weeks for experts with decades of experience to reverse engineer (Votipka et al. 2019), so the ability to build models that perform their own feature extraction and construction can save an enormous amount of time if successful. Toward this goal, we follow the approach of MalConv, which proposed to tackle the problem of malware detection as a time series classification problem (Raff et al. 2018). For an input file x of T bytes in length, a neural network must learn to produce an output label y ∈ {Benign, Malicious}. The MalConv architecture was relatively small (≈1 million parameters) but represented a meaningful malware detector by performing convolutions over raw byte inputs. Additionally, the work identified a number of challenges with this task. In particular, their approach would process up to 2 MB of a file—equivalent to a time series prediction problem with T = 2,000,000 steps. The next longest time series task we are aware of is only on the order of ≤16,000 steps (van den Oord et al. 2016). Due to the extreme length of raw byte inputs, the MalConv solution required an NVIDIA DGX-1 with 128 GB of GPU memory to train over one month of compute time. This has made MalConv difficult to replicate, while simultaneously neglecting the fact that 2 MB is relatively small with respect to the distribution of observed executable file sizes, where the tails can reach in excess of 100 MB.
In this work, we produce a solution to the high memory cost of training MalConv, making the memory use invariant to the length of the input and allowing us to train on data points in excess of 200,000,000 time steps in length using a single GPU. This reduces the memory used to train MalConv by a factor of 116\u00d7 while simultaneously providing an up to 25.8\u00d7 speedup, reducing the compute requirements from a DGX-1 down to a free Google Colab instance. Our approach leverages the sparse gradients of temporal max pooling to cap the memory requirement during training on long inputs. By significantly reducing the runtime and memory constraints of MalConv, we are able to explore more advanced architectures for the task of time series classification. In particular, we develop a new global channel gating (GCG) that allows us to enhance MalConv to learn interactions of features across the entire input space. GCG can be implemented in only 7 lines of Python, making it easy to adopt while improving the accuracy of the end-to-end deep malware detection model. Although we explicitly address malware detection, where long input sequences are dramatic, our contributions are relevant generally to deep neural networks with long input sequences, some of which are discussed in the following section. We also note that the task of learning interactions over a sequence of unprecedented length is intrinsically interesting from a pure ML perspective and benefits a real-world task.

We have organized the paper as follows. In section 2 we review the work related to MalConv, which has received significant attention from the cybersecurity community and motivates our research, as well as other work in the domain of processing long input sequences. Next, we detail our approach to making the memory cost of MalConv-style architectures invariant to input length in section 3. These improvements are necessary to make our global channel gating possible, which we detail in section 4. The results detailing our speedups and memory improvements are presented in section 5, followed by our conclusions in section 6.

2 Related Work

The desire to perform malware classification from raw bytes, to alleviate expensive and constant feature engineering, has long existed. This was originally based on the Normalized Compression Distance (NCD) (Li et al. 2004), which has found extensive use for this task (Wehner 2007; Bailey et al. 2007; Hayes, Walenstein, and Lakhotia 2008; Bayer et al. 2009; Borbely 2015; Alshahwan et al. 2015; Faridi, Srinivasagopalan, and Verma 2019; Men\u00e9ndez et al. 2019; S. Resende, Martins, and Antunes 2019; Walenstein and Lakhotia 2007). Recent works like LZJD (Raff and Nicholas 2017, 2018; Raff, Aurelio, and Nicholas 2019) and BWMD (Raff, Nicholas, and McLean 2020) are built from compression algorithms and are useful in unsupervised settings, but are less effective in supervised ones. We will use these methods as baselines to compare against. MalConv was the first proposed approach to detect malware from raw bytes, processing inputs of up to 2 MB in length (Raff et al. 2018). Through a broad search across network architectures, the authors report that many classical \u201cbest practices\u201d for neural network architectures did not apply. For example, they found that BatchNorm prevented convergence, and that a network with one layer of extremely wide convolutions performed better than deeply stacked narrow filters.
Since (Raff et al. 2018), a number of others have replicated their approach or proposed alterations to better study it, but all have reduced the input size in order to reduce computational costs. Authors from the anti-virus company Avast restricted their study to files that were \u2264512 KB (Kr\u010d\u00e1l et al. 2018). Their work is notable for being the first to compare the approach with hand-engineered domain knowledge features from their production malware classifier. They found that the CNN was close in performance, and combining the domain and CNN features improved accuracy by 4%, indicating the CNN was learning features or feature interactions not previously found by domain experts. FireEye did an in-depth reverse engineering of what a MalConv-like network learned, showing it corresponded well to what an analyst would look for, but had to restrict their model to 100 KB (Coull and Gardner 2019). Anderson and Roth (2018) introduced the Ember dataset and found MalConv slightly worse than hand-crafted features, but needed 25 hours/epoch to train on up to 1 MB. Recent work (Galinkin 2019) has even shown MalConv has an ability to generalize across x86 architectures, detecting x86 macOS and Linux malware when trained only on Windows data. Other works have used the same or similar architectures to perform malware detection on datasets other than Windows executables, including for Android APKs (Hasegawa and Iyatomi 2018), PDF files (Jeong, Woo, and Kang 2019), as well as malicious JavaScript and Visual Basic code detection (Stokes, Agrawal, and McDonald 2018). These works have all demonstrated the value of the byte-based approach to malware detection, but simultaneously show its computational limitations. These solutions all suffer from an artificial limit on the maximum file size imposed by memory constraints; these potentially degrade performance and enable easy evasion in an adversarial scenario. Many works have shown MalConv is susceptible to evasion (Demetrio et al. 2019; Kolosnjaji et al. 2018; Kreuk et al. 2018; Fleshman et al. 2018), but these attacks can be thwarted at a cost to accuracy (Fleshman et al. 2019). This defense is only moderately effective because MalConv can be thwarted by simply inserting the malicious payload after the 2 MB file limit. Because malware authors are real, active adversaries attempting to evade detection, this is a serious limitation. After years of activity and development, our work finally removes this trivial limitation from this research area, which also makes (Fleshman et al. 2019) more effective.

While MalConv has received significant interest for its applications in malware detection, few other works within machine learning approach the same length of sequence processing. Recent work extending the Transformer approach to more efficiently handle long inputs has reached T = 64,000 time steps (Kitaev, Kaiser, and Levskaya 2020). While the Transformer is able to learn more robust representations than our current work, it is still orders of magnitude too short to be able to process most executable files. Work by Voelker, Kaji\u0107, and Eliasmith (2019) proposed an extension of Recurrent Neural Networks, showing them to be capable of learning on synthetic time series of T = 1,000,000 steps.
Their approach requires over an hour to process a single time series of this length, making it computationally infeasible, whereas our approach enables MalConv to run on a similar-length input in under 42 milliseconds. While our approach improves the representational power of MalConv and is faster to train, it has less representational power compared to these other works. We provide more details on failed attempts with Transformers and other approaches in a \u201cWhat Did Not Work\u201d appendix (Appendix A).

Our approach to fixing the memory cost of MalConv is similar to checkpointing (or \u201crematerialization\u201d) (Griewank and Walther 2000). This approach involves re-computing results during the backward pass to avoid saving results in the forward pass, trading more compute for less memory but guaranteeing identical results. All work in this domain has focused on ways to balance this trade-off for different types of acyclic network graphs (Chen et al. 2016; Gruslys et al. 2016; Kumar et al. 2019; Kusumoto et al. 2019; Beaumont et al. 2020). Our work instead performs recomputation in the forward pass, so that the backward pass produces an equivalent result, while using less compute time and less memory.

Although we focus exclusively on the application of malware detection from byte sequences, we note that other domains may similarly benefit from tools for classification over long time series. For example, Genome Wide Association Studies (GWAS) can exceed 500,000 time steps in length, and have long dealt with issues in discovering interactions across GWAS (Wu et al. 2010). When constrained to smaller sequences with T \u22645,000, architectures similar to MalConv have found use for GWAS-based prediction tasks (Liu et al. 2019).

3 Fixed Memory Convolution Over Time

The original MalConv architecture is shown in Figure 1. It contains an embedding layer (into $\mathbb{R}^8$) that is used over an alphabet of 257 tokens: 256 bytes plus an End of File marker. These are fed into two sets of 128 convolutional filters with a width of 512 and a stride of 512 (originally a width and stride of 500 was used, but several works have noted that a power of two performs better because assembly code is aligned on powers of two when written to an executable), which are then used in a gating approach proposed by (Dauphin et al. 2017). The gated result is then converted to a fixed-length feature vector using temporal max pooling (i.e., global max pooling, or max pooling over time), after which it is fed into a simple fully connected layer for prediction of the benign/malicious label. Since there is only one layer, the receptive window size W is equal to the kernel width 512.

Figure 1: Original MalConv architecture (Raff et al. 2018) with \u22481M parameters, which required 128 GB of GPU RAM to train. Pipeline: Raw Byte \u2192 Embedding \u2192 two parallel 1D Convs \u2192 gating (\u03c3, \u2297) \u2192 Temporal Max-Pooling \u2192 Fully Connected \u2192 Softmax. \u2297 indicates element-wise product, and \u03c3 the sigmoid activation.

Despite its simplicity, MalConv was the first architecture to demonstrate that neural networks could learn to perform malware detection from raw bytes, and the first to show classification over time series/sequences of up to T = 2,000,000 steps. However, only the first 2 MB of the input was processed in training MalConv because it required 128 GB of GPU memory to train on a batch of 256 files up to the 2 MB limit. This is owing to the large memory cost of performing an embedding and convolution over a time series of 2 million steps (one for each byte); the resulting activations alone require almost all of the GPU memory. Every subsequent work we are aware of has processed less than the original 2 MB cap.
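To make the described architecture concrete, a minimal PyTorch sketch of the single-layer gated design (layer sizes follow the text above; the fully connected head is simplified for illustration):

    import torch
    import torch.nn as nn

    class MalConvSketch(nn.Module):
        """Single-layer gated CNN over raw bytes, per the description above."""
        def __init__(self, vocab=257, embd=8, channels=128, width=512, stride=512):
            super().__init__()
            self.embd = nn.Embedding(vocab, embd)
            self.conv = nn.Conv1d(embd, channels, width, stride=stride)
            self.gate = nn.Conv1d(embd, channels, width, stride=stride)
            self.fc = nn.Sequential(nn.Linear(channels, channels), nn.ReLU(),
                                    nn.Linear(channels, 2))

        def forward(self, x):          # x: (B, T) integer byte values
            z = self.embd(x)           # (B, T, embd)
            z = z.transpose(1, 2)      # (B, embd, T) for Conv1d
            h = self.conv(z) * torch.sigmoid(self.gate(z))  # gated activations
            h = torch.max(h, dim=2).values                  # temporal max pooling
            return self.fc(h)          # benign/malicious logits

    logits = MalConvSketch()(torch.randint(0, 257, (2, 4096)))

Note that the embedding and convolution activations in this naive form grow linearly with T, which is exactly the memory cost addressed next.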
To overcome these issues, we developed a novel Temporal Max-Pooling strategy that makes memory costs invariant to the sequence length T. Importantly, we do this by noting that Temporal Max-Pooling causes the gradient with respect to the sequence to be sparse. For C channels, saving all $C \cdot T$ activations is unnecessary, as only C values will actually be used, one for each channel. Thus we are using many times more memory than needed to train, and also performing redundant GPU computations on the backward pass, since the majority of gradient values are exactly 0. When working with normal images and standard applications of max-pooling, the sparsity ratio may be 1:2 or 1:4, which is generally not sparse enough to make exploitation of that fact useful. This is because every non-zero value requires storing its associated index, doubling the memory use of those values. Second, operations on dense vectors/matrices result in more efficient computation, delivering computational throughput closer to the theoretical limits of modern hardware when using modern BLAS libraries and software like cuDNN. As such, libraries like PyTorch and TensorFlow do not support sparse gradients through max-pooling.

Conversely, we obtain the benefits of sparse activations while also retaining the higher computational throughput of dense operations, without requiring any new code, as follows.

1. Turn off gradient computation (e.g., with torch.no_grad() if using PyTorch) and break the input sequence of length T into at most T/(W \u00b7 2) overlapping chunks of size W \u00b7 3.
2. Perform max pooling over each chunk; for each channel, track the maximum value and its absolute index.
3. Compare values within each chunk to a set of global winners. If a new chunk's maximal activation exceeds the global winner, it becomes the new global winner.

Once we have processed all chunks, we know the C locations, one for each channel, that will win the max-pooling over time. The chunks overlap so that this computation is correct and not impacted by windowing issues. With these C locations, we may simply concatenate their values into a new sequence of length $T' = C \cdot W$. This new sequence is now small enough that the full set of embedding, convolutional layers, and temporal max pooling can be done in a dense fashion (retaining the computational efficiency benefits), using memory proportional to what would be achieved with sparsity-exploiting code. The total memory use is now independent of the original sequence length T, and a diagram of the process is presented in Figure 2.

Figure 2: Diagram of Temporal Max Pooling with fixed memory. The original input (top) is a 1D sequence with 3 channels and is broken up into four chunks based on window size W = 3. Without gradient computation/tracking, the maximum activation index is found within each chunk. Solid colors show max values that are kept; \u201c\u00d7\u201d marks values that are maximal within their chunk but not globally. Winning indices are copied to a new shorter sequence (bottom), which runs with gradient tracking. The result is the same output and gradient, but fixed memory cost.
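A minimal sketch of the winner-tracking pass (steps 1-3 above). For brevity, the activations are passed in precomputed; in the actual procedure each chunk's activations would be produced on the fly from the embedding and convolution under no_grad, so only one chunk is ever materialized:

    import torch

    @torch.no_grad()
    def global_max_indices(x, chunk, overlap):
        """x: (C, T) activations, processed chunk-by-chunk; returns the absolute
        index of the per-channel global max without a full (C, T) backward pass."""
        C, T = x.shape
        best_val = torch.full((C,), float("-inf"))
        best_idx = torch.zeros(C, dtype=torch.long)
        step = chunk - overlap                         # overlapping chunks
        for start in range(0, T, step):
            piece = x[:, start:start + chunk]          # one chunk at a time
            vals, idxs = piece.max(dim=1)              # per-channel chunk winners
            better = vals > best_val
            best_val[better] = vals[better]
            best_idx[better] = idxs[better] + start    # convert to absolute index
        return best_idx

    # e.g., winners = global_max_indices(torch.randn(128, 100_000), 1536, 512)

The returned indices identify the W-wide input windows that are then concatenated into the short sequence of length C \u00b7 W and re-run with gradient tracking enabled.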
Details on windowing artifacts: We noted that in the concatenation of different chunks in Figure 2 into one new sequence, it is technically possible for a new index to become the maximal activation due to the receptive window of length W crossing between two chunks that were previously not adjacent. This results in a pattern that has potentially not been seen previously, which thus creates new activation values. We have never observed this issue in practice, and so have not taken any special steps to avoid this situation (with more details in Appendix C). This hypothetical issue could be prevented by performing the convolution and a max-pool over the chunks independently. Then, the pooled results could be concatenated and a second round of pooling performed. We have not observed any issues warranting this extra complexity and overhead.

4 Global Channel Gating

With an efficient method for handling large input sequences, we can explore a broader set of neural network architectures. In particular, we note a weakness in the original design of MalConv: the use of temporal max-pooling after a single convolutional layer results in a somewhat myopic model, as learned features are purely local in nature. That is, with the existing architecture, the model output does not consider interactions between features that are far apart in time within an input sequence/file. To demonstrate why this is important in malware detection, consider that a common feature to learn/extract is the use of encryption libraries, which may indicate, for example, functionality common in ransomware. However, if the program does not access the file system, the use of encryption becomes less suspicious and less likely to indicate malware. In its current embodiment, it is impossible for MalConv to learn logic like this, because the presence/absence of the associated information may be in disparate regions of the input, and the receptive window of the network (512 bytes) is far smaller than most inputs ($2^{21}$ bytes).

To endow our network with the ability to learn such relationships while retaining computational tractability, we develop a new attention-inspired gating approach we call global channel gating (GCG). The idea is that given a long time sequence with C channels, $X = \{x_1, x_2, \ldots, x_T\}$ where $x_t \in \mathbb{R}^C$, we want to globally suppress certain time steps based on the content of all channels. We approach this in a style similar to the gated linear unit and additive attention (Bahdanau et al. 2015), using a learned context $\bar{g} \in \mathbb{R}^C$, as shown in Equation 1.

$\mathrm{GCG}_W(x_t, \bar{g}) = x_t \cdot \sigma\left(x_t^\top \tanh\left(W^\top \bar{g}\right)\right)$ (1)

The entries of the vector $x_t \in \mathbb{R}^C$ at time t may be suppressed by the scalar quantity on the right-hand side of the GCG equation. Due to the sigmoid operation $\sigma(\cdot)$, $x_t$ will be scaled by a value in the range of [0, 1], resulting in a context-sensitive suppression of each entry in the vector.

Figure 3: Our new proposed architecture with global channel gating (GCG). The blue, thickly dashed sub-network is the context extractor (Embed \u2192 two gated 1D Convs \u2192 Temporal Max-Pool \u2192 Fully Connected, producing $\bar{g} \in \mathbb{R}^C$), which is used to suppress information found by the feature extraction sub-network (red, thinly dashed: Embed \u2192 two gated 1D Convs producing $X \in \mathbb{R}^{T \times C}$, followed by GCG, Temporal Max-Pool, a 1x1 Conv with LeakyReLU, Fully Connected, and Softmax).
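Read literally, Equation 1 can be implemented one time step at a time; a deliberately naive sketch (slow, but useful as a reference to check faster implementations against):

    import torch

    def gcg_naive(X, g, W):
        """X: (T, C) sequence, g: (C,) context, W: (C, C) learned projection.
        Direct rendering of Equation 1, one time step at a time."""
        z = torch.tanh(W.T @ g)                      # tanh(W^T g), shape (C,)
        out = torch.empty_like(X)
        for t in range(X.size(0)):
            gate = torch.sigmoid(X[t] @ z)           # scalar x_t^T tanh(W^T g)
            out[t] = X[t] * gate                     # suppress or keep x_t
        return out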
We detail a new malware classification architecture, which we term MalConv with GCG, in Figure 3. The top half of the network serves to learn a global context $\bar{g}$, which is used as input to the GCG. The bottom half of the architecture shows the feature sub-network, which uses a different embedding layer to perform initial feature extraction and uses GCG to selectively suppress regions of the input, allowing for potential feature interactions over time. The inputs to GCG are a state vector from the top-half context network and a sequence over time generated from the bottom half, which has Equation 1 applied point-wise over time. This is followed by temporal max pooling, where we apply the fixed-memory approach from section 3 to make the training feasible with fixed memory costs.

4.1 Gating via convolution

Care must be taken to implement the GCG approach effectively. The naive strategy to implement GCG requires reshaping the input array and either running over every time step with a for loop to extract a slice $x_t$ and perform a dot product, or alternatively, duplicating the context $\bar{g}$ into a larger matrix and performing a larger BLAS operation against the input X. The first approach suffers from excessive Python and auto-grad overhead in our testing. The latter approach is more efficient in terms of FLOPs, but still cumbersome and slow due to the duplication of $\bar{g}$. Instead, we exploit the nature of grouped convolutions (Krizhevsky, Sutskever, and Hinton 2012) to efficiently implement GCG over time. Given a batch of B time series, we reinterpret the input activation/context $\bar{g}$ as a set of 1D convolution weights/filters in a B \u00d7 C \u00d7 1 matrix, and perform a grouped convolution with B groups. Thus we convolve the context with the input X, where the window size is 1 (considering only one time step at a time), the B different contexts become the number of output \u201cchannels\u201d, and by grouping, each context is applied only to its appropriate input. The grouped convolution allows us to apply the different filters to each batch in one operation. We find this easiest to demonstrate with code, and present a working PyTorch implementation of GCG in Figure 4.

    def gcg(self, X, g):
        # X.shape = (B, C, T), channels-first; g.shape = (B, C)
        B, C, T = X.size(0), X.size(1), X.size(2)
        # create context vector z = tanh(W^T g);
        # self.w references a nn.Linear(C, C) layer
        z = torch.tanh(self.w(g))
        # size is (B, C), but we need (B, C, 1) to use as a 1d conv filter
        z = torch.unsqueeze(z, dim=2)
        # roll the batches into the channels
        x_tmp = X.reshape(1, B * C, -1)
        # apply a conv with B groups; each batch gets its own context applied.
        # This computes x_t^T z for all t = 1...T
        x_tmp = F.conv1d(x_tmp, z, groups=B)
        # x_tmp has a shape of (1, B, T); re-order as (B, 1, T)
        gates = x_tmp.view(B, 1, -1)
        # effectively apply x_t * sigmoid(x_t^T tanh(W^T g)), broadcast over channels
        return X * torch.sigmoid(gates)

Figure 4: PyTorch code demonstrating how to implement global channel gating in a computationally efficient manner. The input context g is projected and re-shaped such that it can be used as the filter weights in a 1D convolution grouped by the batch size. This results in computing the dot product over time.

With this additional insight, the GCG operation is no more expensive than a 1 \u00d7 1 convolution, allowing us to leverage it for inputs with hundreds of millions of time steps without issue.
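As a quick sanity check (hypothetical, not from the source), the grouped-convolution version can be compared against the naive per-timestep rendering of Equation 1 sketched earlier:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    B, C, T = 4, 16, 1000
    X = torch.randn(B, C, T)          # feature activations, channels-first
    g = torch.randn(B, C)             # per-batch context vectors
    w = nn.Linear(C, C)

    z = torch.tanh(w(g)).unsqueeze(2)                           # (B, C, 1) filters
    gates = F.conv1d(X.reshape(1, B * C, T), z, groups=B).view(B, 1, T)
    fast = X * torch.sigmoid(gates)

    # naive reference: one dot product per batch element and time step
    slow = torch.stack([
        X[b] * torch.sigmoid(torch.einsum("ct,c->t", X[b], torch.tanh(w(g[b]))))
        for b in range(B)])
    print(torch.allclose(fast, slow, atol=1e-5))                # True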
5 Results

The Ember2018 corpus (Anderson and Roth 2018) has 600,000 training samples and 200,000 test samples. At \u22481 TB, it is our primary test set due to its size and difficulty. Both training and testing sets are evenly split between benign and malicious, and all testing samples were first observed after all training samples. The predecessor 2017 corpus was explicitly noted to be \u201ceasy\u201d due to the way it was created, and MalConv obtained an AUC of 99.8%, close to that of a domain knowledge approach which achieved 99.9% AUC. We prefer the 2018 corpus because it was designed to be more challenging, and MalConv obtains an accuracy of only 91% on the newer corpus. The domain knowledge features were less impacted, dropping to only 99.6% AUC. This better demonstrates the gap between current deep learning and domain-knowledge-based approaches for classifying malware. We also use the Common Crawl to collect 676,843 benign PDF files and VirusShare (Roberts 2011) to collect 158,765 malicious ones. This gives 464 GB of data in total, with 10% used as a test set. Malicious PDF files are easier to detect than malicious executables, so the effect size of our improvements is expected to be smaller. We include this test to show that our methods still work on other types of data.

Figure 5: Distribution of file lengths (x-axis, log scale, bytes) and percentage of files of an equal or lesser size (y-axis, log scale) for all files in the Ember2018 corpus, shown for both malware and benign files. The largest file is 271.1 MB.

The distribution of file lengths in bytes is shown in Figure 5, with the longest file corresponding to a time series with 271,082,368 time steps. This is 135.5\u00d7 longer than the original MalConv work, and thus two orders of magnitude longer than any previous time-series classification task we are aware of. We were able to train MalConv with and without GCG on these data without any truncation. This removes the trivial adversarial attack of moving malicious code past the 2 MB limit. For all networks, we trained using the Adam optimizer (Kingma and Ba 2015) with the recently proposed decoupling of weight decay (Loshchilov and Hutter 2019), using the recommended default parameters. A batch size of 128 was used in each experiment. All experiments were performed on a DGX-1. We note that our improved training procedure no longer requires this level of compute power; we use it to make the training-time comparisons with previous work fair. We denote MalConv trained with the original approach, truncating to the first 2 MB of the input file, as \u201cMalConv (2MB, Orig)\u201d. In what follows, we use \u201cMalConv\u201d to denote the original architecture from Figure 1 trained with our new fixed-memory approach specified in section 3. Finally, our new MalConv with GCG from section 4 is the last model we train for comparison. Both MalConv and MalConv with GCG are trained to process the entirety of the input files, up to 271 MB. We train all models for 20 epochs, using the result from the last epoch. For MalConv we use a filter size of 512, a stride of 512, 128 channels for each 1D Conv block, and an embedding dimension of 8. For MalConv with GCG we use a filter size of 256, a stride of 64, 256 channels for each convolution, and an embedding dimension of 8. For all models we incorporate the suggestion of (Fleshman et al.
2019) of including a special token after the EOF that maps to an embedding vector of all zeros. Details on the hyper-parameter selection, including attempts at improving the standard MalConv, can be found in Appendix D. Below we show the results indicating how our methods have improved MalConv, and we provide a discussion of other, unsuccessful attempts to improve upon the MalConv approach and how they impacted our approach's final design in Appendix A.

5.1 Training MalConv with fixed-memory max-pooling

Table 1: Results on training time and computational efficiency.

Model                Time Per Epoch  GPU RAM
MalConv (2MB, Orig)  21 hr 29 min    128 GB
MalConv              1 hr 10 min     1.1 GB
MalConv w/ GCG       4 hr 5 min      2.2 GB

We first evaluate the impact of our fixed-memory approach to training over long sequences. The original MalConv required 128 GB of GPU memory and 21.5 hours per epoch on the Ember2018 dataset. In Table 1 we can see the timing information and memory use compared to our new approaches. Our fixed-memory approach to temporal max pooling results in significant benefits, with a 116\u00d7 improvement in memory use and an 18.4\u00d7 reduction in training time. This takes MalConv training down from the order of a month to just a day. We note that the results are further improved when we consider that fixed-memory pooling is faster while processing more data, since it considers the entirety of each file. Since 14.2% of files are greater than 2 MB, we are actually processing a total of 1.4\u00d7 more data than the original MalConv, making our speedup effectively 25.8\u00d7 per byte processed. Our new approach now makes it possible for anyone with a GPU and data to train MalConv. Without these speed and memory improvements, our new MalConv with GCG architecture would not have been possible to train. Naive scaling of the results indicates we would have needed 256 GB of GPU RAM (which would have only been possible with a DGX-2) and approximately one month of training time.

5.2 Improved accuracy

Table 2: Ember 2018 results on accuracy and AUC for each model.

Model                Accuracy  AUC
MalConv (2MB, Orig)  91.27     97.19
MalConv              91.14     97.29
MalConv w/ GCG       93.29     98.04
LZJD                 73.43     84.98
BWMD                 81.97     91.12

In Table 2 we show the classification performance of all three models, and two state-of-the-art compression-based methods, LZJD and BWMD, using 9-nearest-neighbor classification. We see that training MalConv on the entire sequence length, rather than truncating as in the prior approach, makes no appreciable difference in accuracy (fluctuations of 0.1 percentage points). This shows 1) that we are able to still learn effectively while processing more information, and 2) that our approach does not hinder
training in any way. As noted previously, parsing all of the input file is also beneficial for thwarting the trivial attack of moving all malicious code to the end of an executable. We also see that our MalConv with GCG improves upon the original MalConv architecture's accuracy by 2.2% and AUC by 0.87%. We prefer evaluation on the Ember 2018 corpus because it is both large and challenging. Our evaluations on the PDF corpus are done to show that our improvements transfer to other file types as well. On our PDF corpus we obtain an accuracy of 99.16% and an AUC of 99.76%. MalConv with GCG improves this to 99.42% and 99.80%. Because PDF files are easier to process, the baseline MalConv is already nearing maximal performance, so the gain is smaller, but it shows our GCG approach is still an improvement.

5.3 Ablation testing of Avast architecture

A difficulty of research in this space is that large-scale testing over millions of executables can only be done in partnership with commercial anti-virus companies, who have large corpora to test against. Of the prior works we discussed in section 2, the work by Kr\u010d\u00e1l et al. (2018) is of particular interest for two reasons: 1) they found that global max pooling produced a 20% relative drop in performance compared to their use of global average pooling, and 2) it is the only extension to MalConv we are aware of that is easy to adjust with our fixed-memory max pooling from section 3. The architecture they use, which we will call AvastConv, is given in Figure 6.

Figure 6: \u201cAvastConv\u201d architecture from (Kr\u010d\u00e1l et al. 2018). Their approach was originally trained on entire executables, but each executable was \u2264512 KB.
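A rough PyTorch sketch of Figure 6's design, as a point of contrast with MalConv's single wide layer; the layer widths and channel counts below are approximated from the partially recovered figure and should be treated as illustrative only:

    import torch
    import torch.nn as nn

    class AvastConvSketch(nn.Module):
        """Approximate AvastConv: stacked small-filter convolutions ending in
        global average pooling (the original also uses a fixed, non-learned
        embedding; a learned one is used here for simplicity)."""
        def __init__(self, vocab=257, embd=8):
            super().__init__()
            self.embd = nn.Embedding(vocab, embd)
            self.features = nn.Sequential(
                nn.Conv1d(embd, 48, 32, stride=4), nn.ReLU(),
                nn.Conv1d(48, 96, 32, stride=4), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(96, 128, 16, stride=8), nn.ReLU(),
                nn.Conv1d(128, 192, 16, stride=8), nn.ReLU(),
            )
            self.head = nn.Sequential(nn.Linear(192, 160), nn.SELU(),
                                      nn.Linear(160, 128), nn.SELU(),
                                      nn.Linear(128, 2))

        def forward(self, x):                    # x: (B, T) byte values
            z = self.embd(x).transpose(1, 2)     # (B, embd, T)
            h = self.features(z)                 # needs T of roughly 9 KB or more
            h = h.mean(dim=2)                    # global average pooling
            return self.head(h)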
Its primary differences are the use of more layers of smaller filter widths (32 followed by 16), a hard-coded embedding rather than a learned embedding, and the aforementioned use of global average pooling instead of global max pooling. The pooling difference was the largest factor according to the ablation testing by (Kr\u010d\u00e1l et al. 2018), at 20%. They found that fixed vs. learned embeddings had no performance impact, and other differences between their and our current architectures accounted for no more than a 4% difference. The biggest untested factor between these works is that their study was constrained to smaller executables, \u2264512 KB in size, whereas our work considers unbounded sizes with inputs over 200 MB. As such, we choose to perform a small ablation test against this architecture, replacing the global average pooling with our new fixed-memory temporal max pooling. Training their architecture for 20 epochs, we obtain an accuracy of 85.8% and an AUC of 94.6%. These results are significantly lower than MalConv and our improved version shown in Table 2. While it may be possible that global averaging would restore performance to their approach, there is not enough remaining accuracy for a 20% relative improvement to occur. This would seem to indicate their initial results on the strength of global average pooling are not as strong when factoring in larger file sizes. This is beneficial from the perspective that we can use max pooling to achieve fixed memory cost, which is not possible with average pooling. These results also give credence to a relatively shallower architecture with wider convolutional filters, which is maintained in our current design. This runs in contrast to normal applications of CNNs in the vision, signal, and natural language processing domains, where the community has largely settled on smaller filters with more layers as the canonical design approach.

5.4 Example of interactions over time with GCG

As our last result, we demonstrate an example of how our new GCG has effectively learned non-linear interactions across large time ranges. In Figure 7 we show a diagram of the relevant content from a malicious sample, and how the context impacts the selected features once the temporal max-pooling is performed. In particular, three types of content are found: Command-and-Control (\u201cC2\u201d) URLs used for remote control of the malware, system calls used for internet connectivity, and other innocuous benign content.

Figure 7: Relevant content from a malicious sample and its effect on GCG. The sample contains C2 URLs (http://booomaahuuoooapl[.]ru/, http://eoufaoeuhoauengi[.]ru/, http://maeobnaoefhgoajo[.]ru/, http://ashihsijaediaehf[.]ru/, http://plpanaifheaighai[.]ru/) at time steps (14,992-15,104); internet-connectivity imports from WININET.dll (HttpQueryInfoA, InternetOpenUrlA, InternetOpenA, URLDownloadToFileW) at (21,226-21,244); and benign content (GetModuleFileNameW, FindClose, FindNextFileW, SetFileAttributesW, GetVolumeInformationW) at (21,722-21,794). The time steps T at which feature content is found are shown in parentheses. $\bar{g}$ suppressed activations from benign content and increased focus on internet functionality, with the feature impact of gating changing the filter counts 3\u21926 (C2 URLs), 0\u219210 (internet connectivity), and 3\u21921 (benign content). This shows GCG can relate disconnected portions with no overlapping receptive field, learning complex interactions.
The blue lines into the context vector $\bar{g}$ denote the number of filters that had their maximum activation occur in the byte range of that content, with 12 filters selecting the C2 URLs, and 6 each for the internet connectivity and other benign content. The values in parentheses indicate the time step T (i.e., byte location) of the content within the file. We can see that the C2 URLs are \u22656,122 steps/bytes away from the rest of the content, far larger than the receptive field of the convolutions. After the GCG gating is applied to the activations of the feature sub-network, we see a large change in what is selected by the final max-pooling operation. Without GCG, none of the internet connectivity features were selected. In this case, the GCG suppresses the activations of other regions in the binary, but opens the gate (i.e., $\mathrm{GCG}_W(x_{21{,}226}, \bar{g}) \approx 1.0$) for the internet content. As such, 10 of the filters now activate for this region. Similarly, the number of filters activating for the C2 URLs increases from 3 to 6. We also see that the innocuous content for working with the file system is suppressed by the gate, reducing activations from 3 down to 1. Combined, the C2 URLs and the use of APIs to connect to them over the Internet are significant indicators for confirming this file as malicious, which the network successfully performs. This is helpful information for malware analysts and others who wish to know how the malware operates. This malicious example demonstrates that our GCG mechanism successfully learns the kinds of nuanced interactions we desire over large time ranges. The use of the file system and internet connectivity is intrinsically non-suspicious on its own. Correctly focusing on the right content requires observing the suspicious URLs contained within the file. Simultaneously, this shows MalConv learning to perform sub-tasks, like determining if a URL looks suspicious or not, to make these informed contextual gating decisions. Because this kind of analysis is expensive, we include more results from the PDF corpus in Appendix B."
},
{
"url": "http://arxiv.org/abs/2009.03779v1",
"title": "Automatic Yara Rule Generation Using Biclustering",
"abstract": "Yara rules are a ubiquitous tool among cybersecurity practitioners and\nanalysts. Developing high-quality Yara rules to detect a malware family of\ninterest can be labor- and time-intensive, even for expert users. Few tools\nexist and relatively little work has been done on how to automate the\ngeneration of Yara rules for specific families. In this paper, we leverage\nlarge n-grams ($n \geq 8$) combined with a new biclustering algorithm to\nconstruct simple Yara rules more effectively than currently available software.\nOur method, AutoYara, is fast, allowing for deployment on low-resource\nequipment for teams that deploy to remote networks. Our results demonstrate\nthat AutoYara can help reduce analyst workload by producing rules with useful\ntrue-positive rates while maintaining low false-positive rates, sometimes\nmatching or even outperforming human analysts. In addition, real-world testing\nby malware analysts indicates AutoYara could reduce analyst time spent\nconstructing Yara rules by 44-86%, allowing them to spend their time on the\nmore advanced malware that current tools can't handle.
Code will be made\navailable at https://github.com/NeuromorphicComputationResearchProgram .",
"authors": "Edward Raff, Richard Zak, Gary Lopez Munoz, William Fleming, Hyrum S. Anderson, Bobby Filar, Charles Nicholas, James Holt",
"published": "2020-09-06",
"updated": "2020-09-06",
"primary_cat": "cs.CR",
"cats": ["cs.CR", "cs.IR", "cs.LG", "stat.ML"],
"main_content": "1 INTRODUCTION

Machine learning has become more involved in malware detection systems and cybersecurity, but older signature-based approaches are still an important tool. In particular, Yara [3] is widely used to specify signatures and perform searches. Yara is a tool to combine content matching against simple regular expressions with logic rules, and these rules \u201cfire\u201d if the predicates are satisfied. These predicates combined are often called \u201cYara rules\u201d, and may be used to identify specific malware families, the presence of CVEs, specific signatures of functionality, or generic indicators of maliciousness. Developing effective Yara rules can be very time intensive, especially for junior analysts who lack deeper expertise and intuition on what should be included in a Yara rule to achieve a goal. For example, the related task of reverse engineering (a task that may be necessary to build good signatures for difficult malware samples) can take several hours to weeks for a single file, even for expert users with over a decade of experience [33]. In our experience, analysts rarely get through all of their \u201cnecessary\u201d tasks and work under a continually growing backlog of samples that need to be analyzed or have rules created.

Despite Yara's widespread use, only a few works have attempted to automate the development of Yara rules. We consider the problem of trying to develop a Yara rule to identify a specific malware family given only a limited number of example files from that family. A common workflow in developing Yara rules is to manually inspect multiple files to determine common contents, wrapped by trial-and-error refinement of the developed rules, where success is measured against coverage and false positive rates on a collection that includes benign or out-of-family files. In this paper, we are concerned with two practical use-cases. First, a \u201chunt\u201d team is deployed to an unfamiliar network after the discovery of malicious files. To determine the extent of the attack, they craft Yara rules to perform a broader search across the network. In this scenario, there are two primary concerns: 1) Yara rules that generate a lot of false positives (e.g., returning a significant amount of benign files) could slow the investigation, and 2) security workers often have fewer (\u226410) samples when creating a Yara rule.
The second scenario is based on scaling Yara rule construction to track specific malware families that are known to be difficult to correctly classify, due to their structural resemblance to benign software, or because they are polymorphic in nature. We test this scenario on live production data to demonstrate that our approach could save analysts significant time spent constructing Yara rules. We stress that our objective is not to entirely replace a human analyst in producing Yara rules. A skilled analyst will likely be able to produce better rules than our tool given the time, and techniques such as packing will successfully thwart our tools. The goal is to provide a tool that can produce rules that are good enough that they can often be used without alteration, or quickly improved, so that analysts can get through their workload faster. Given a satisfying AutoYara result, analysts can spend their limited and valuable time working on more challenging samples and tasks that do not yet yield to automation.

Figure: example Yara rules named \u201cAnalyst\u201d and \u201cYarGen\u201d, recovered from the source (the YarGen rule is truncated in the source):

    rule Analyst {
        strings:
            $a = \u201c191231235959Z0U1\u201d
            $b = \u201cdownloader\u201d wide
            $c = \u201c1.0.2.417\u201d wide
        condition:
            $a and $b and $c
    }

    rule YarGen {
        strings:
            $s1 = \u201c5054585<5@5D5H5T5X5\\5 5d5h5\u201d fullword ascii
            $s2 = \u201c0 0$0(0,0004080D0T0X0d0h0l0(6\u201d fullword ascii
            ...
            $s20 = ...

When a bicluster identifies a group of features that co-occur across a group of files, we can \u201cand\u201d them together since they co-occur, and we \u201cor\u201d the predicates built from biclusters. This results in a \u201cdisjunction of conjunctions\u201d rule formulation, where, referring back to Figure 2, we build the rule over the features $F_i$ as $(F_1 \wedge F_2 \wedge F_3) \vee (F_5 \wedge F_6 \wedge F_7 \wedge F_8) \vee (F_7 \wedge F_8 \wedge F_9 \wedge F_{10})$. In this way we can build complex rules with multiple terms, with sharing of terms between clauses. The difficulty is in performing the biclustering process itself. In particular, the majority of biclustering algorithms require specifying the number of biclusters in advance [27], which is unknown in our situation, and enforce no overlap between biclusters.

Figure 2: Illustration of the type of biclustering we wish to find. It tells us that three groups of features (red, green, & blue) could be useful to create a signature that covers all 9 samples. It also identifies that the \u201cEvil.exe\u201d feature would not be useful in this case. We do not care if biclusters overlap, as multiple clauses can use the same features.

We desire a biclustering method that can determine the number of biclusters automatically, even if it is only one bicluster, allows overlapping biclusters, and will discard rows and columns that do not fit in any bicluster. We develop this by extending the seminal SpectralCoClustering approach of [8]. Their approach is widely used for both its simplicity and effectiveness. By computing the Singular Value Decomposition (SVD) of a normalized input matrix $A \in \mathbb{R}^{r \times c}$, they create a new matrix $Z \in \mathbb{R}^{(r+c) \times \log_2 k}$, where the first r rows of Z correspond to the original r rows of A, and the remaining rows of Z correspond to the columns of A. The number of features in the transformed matrix is $\log_2 k$, where k is the desired number of biclusters. The biclusters are then found by running the k-means algorithm on Z, and the rows of Z found in each cluster tell us which rows/columns of A are in the final biclustering. [8] proved that this corresponds to a weighted cut of the bipartite graph to perform biclustering.
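A compact sketch of this standard spectral co-clustering construction (NumPy and scikit-learn, which also ships the same idea as sklearn.cluster.SpectralCoclustering); it assumes every row and column of A is nonzero:

    import numpy as np
    from sklearn.cluster import KMeans

    def spectral_cocluster(A, k):
        """Dhillon-style spectral co-clustering: rows and columns of the
        file-by-feature matrix A are embedded jointly, then clustered."""
        r, c = A.shape
        R = np.diag(1.0 / np.sqrt(A.sum(axis=1)))      # row scaling
        C = np.diag(1.0 / np.sqrt(A.sum(axis=0)))      # column scaling
        An = R @ A @ C                                  # normalized matrix
        d = int(np.ceil(np.log2(k)))
        U, s, Vt = np.linalg.svd(An)
        # Skip the leading singular vector; stack row and column embeddings.
        Z = np.vstack([R @ U[:, 1:d + 1], C @ Vt[1:d + 1, :].T])
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(Z)
        return labels[:r], labels[r:]                   # row labels, column labels

Rows and columns sharing a label form a bicluster; the adaptive variant described next replaces the k-means step so that k need not be fixed in advance.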
We augment this strategy to jointly perform biclustering while automatically determining the number of clusters. The details are outlined in Algorithm 1, where S are the sample inputs and G the set of input features that make the rows and columns, respectively, of the matrix A. Lines 1 through 7 proceed in the same manner as standard SpectralCoClustering, except we use a larger set of features for the matrix Z. We then use a Variational Gaussian Mixture Model (VGMM) [6, 25] to perform the clustering instead of k-means, as the VGMM algorithm can automatically determine the number of clusters to use. In particular, we use a diagonal covariance constraint on the GMM so that if we have extraneous clusters, the VGMM can learn to ignore the extra dimensions in Z, which should exhibit homogeneity in the coefficients due to the excessive number of dimensions no longer forming a meaningful cut in the bipartite graph clustering.

Algorithm 1 Adaptive SpectralCoClustering
Require: set of files/data points S, and set of n-gram features G
1: construct matrix $A \in \mathbb{R}^{r \times c}$, where $r = |S|$ is the number of data points and $c = |G|$ is the number of columns / n-gram features
2: $R_{i,i} = \sum_{j=1}^{c} \tilde{A}_{i,j}$
3: $C_{j,j} = \sum_{i=1}^{r} \tilde{A}_{i,j}$
4: compute normalized matrix $A_n = R^{-1/2} A C^{-1/2}$
5: set max SVD dimensions $\ell \leftarrow \log_2(\min(r, c)/2)$ (the following two lines use \u201cscale\u201d normalization; one could also use the bistochastic or log-based normalization proposed in [12])
6: $U, S, V \leftarrow \mathrm{ThinSVD}(A_n, \ell + 1)$
7: $Z \leftarrow \begin{bmatrix} R^{-1/2} U \\ C^{-1/2} V \end{bmatrix}$, a new dataset in $\mathbb{R}^{(r+c) \times \ell}$
8: $\mu_1, \ldots, \mu_k, \Sigma_1, \ldots, \Sigma_k \leftarrow$ Variational GMM [6, 25] clustering results on the $r + c$ rows of Z
9: $B \leftarrow$ empty set of biclusters
10: for all GMM clusters $\mu_i, \Sigma_i$ do
11:   $\alpha_i \leftarrow$ all rows j of Z s.t. $P(z_j \mid \mathcal{N}(\mu_i, \Sigma_i)) > 1/(k+1)$
12: for all $\alpha_i$ do (filter out biclusters that contain essentially only rows from S or only columns from G)
13:   if $\sum_{\forall j \in \alpha_i} 1[j \le r] \le 1$ or $\sum_{\forall j \in \alpha_i} 1[j > r] \le 1$ then
14:     discard/remove cluster $\alpha_i$
15: $c_{\min} \leftarrow \min\left(5, \max_i \sum_{\forall j \in \alpha_i} 1[j > r]\right)$
16: $r_{\min} \leftarrow \min\left(5, \max_i \sum_{\forall j \in \alpha_i} 1[j \le r]\right)$
17: for all $\alpha_i$ do
18:   if $\sum_{\forall j \in \alpha_i} 1[j \le r] < r_{\min}$ or $\sum_{\forall j \in \alpha_i} 1[j > r] < c_{\min}$ then
19:     discard $\alpha_i$
20: return remaining $k'$ biclusters $B = (R_1, C_1), \ldots, (R_{k'}, C_{k'})$, where for each bicluster $\alpha_i$ the rows of S are $R_i = \{j \in \alpha_i \mid j \in Z[1:r]\}$ and the columns/features of G are $C_i = \{j \in \alpha_i \mid j \in Z[r+1:r+c]\}$

On line 11 we use the probabilities of cluster membership computed by the VGMM to select any row (sample) with a probability $\ge 1/(k+1)$ of belonging to each bicluster. We use $k + 1$ in the denominator so that multi-bicluster membership can still occur with $k = 2$ biclusters. In lines 12-18 of the algorithm, we perform removal of extraneous clusters. First we remove clusters that contain only rows from S, or only columns from G, as these biclusters are degenerate and uninformative.
In the next step we filter out clusters that contain fewer than 5 rows or columns from A as being spurious, unless the largest clusters contain fewer than 5 rows/columns, at which point we set the limit to the largest observed. This allows us to avoid spurious clusters while adapting to scenarios with small sample sizes that occur in malware analysis (e.g., \u226410 samples), but would not normally be of interest in standard biclustering applications.

4 AUTOYARA DESIGN

In designing the AutoYara tool, a number of design constraints informed our approach. First, the tool must be light-weight enough that it can run on low-resource machines (e.g., a laptop with 4 GB of RAM or less) to support the maximal number of analysts, who do not always have significant compute resources available. Toward this goal, we needed to minimize memory use and model size in memory, as well as reliance on any GPU resources. This allows analysts who take \u201cfly-away\u201d kits with them to unfamiliar networks to begin investigations into the network and whatever novel malware may be present [17]. We also need the tool to produce Yara rules within minutes, as our experience has been that analysts will not, in general, use tools requiring them to wait hours or more. Finally, we need to produce Yara rules that can be interpreted by analysts. This makes it possible for analysts to gain insights that may aid their work by inspecting the rules, or even modify the rules to improve them.

We assume the user will provide multiple files that share some intrinsic nature (e.g., same malware family), which we would like to identify with a Yara rule. We focus on building Yara rules based on specific byte patterns. For this reason we use large byte n-grams, where $n \in [8, 1024]$. Prior work has developed an algorithm to extract the top-k most frequent n-grams for large values of n with limited memory, in time almost invariant to n, and has noted that these large byte-grams are interpretable to malware analysts [20]. To make sure that we only consider interesting n-grams for rule construction, we perform filtering of the selected n-grams. First, we use a large training corpus of 600,000 files to find generally frequent n-grams. If an n-gram is frequent across a large portion of these files, it is unlikely to make a good signature, as signatures need to be specific. To store these compactly, we use a Bloom filter for each n-gram size, storing the top 1 million most frequent n-grams for $n \in \{8, 16, 32, \ldots, 1024\}$ if they occur in more than 0.1% of the training files.

Algorithm 2 FilterSimple
Require: set of files/data points S, set of n-gram features G, Bloom filters $F_n$
1: for all byte n-grams $g_i \in G$ do
2:   $z = \sum_{j \in g_i} 1[j = \text{0x00}] + 1[j = \text{0xFF}]$ (count the number of bytes equal to 0 or 255)
3:   if $z \ge |g_i|/2$ then
4:     remove $g_i$
5:   else if $H(g_i) \le 1.0$ then (byte entropy too small)
6:     remove $g_i$
7:   else if $g_i \in$ Bloom filter $F_{|g_i|}$ then
8:     remove $g_i$
9: for each pair of n-grams $g_i$ and $g_j$ that occur in exactly the same files in S, keep the n-gram with the highest entropy and discard the other
10: return remaining n-grams $g_i$ that were not removed

We use two other strategies for removing n-grams unlikely to be useful for clustering.
First we consider the entropy of an n-gram x, as given in Equation 1, where $P_i(x)$ denotes the proportion of bytes with value i (i.e., $P_i(x) = \sum_{j=0}^{n-1} 1[x[j] = i]/n$).

$H(x) = -\sum_{i=0}^{255} P_i(x) \cdot \log\left(P_i(x)\right)$ (1)

The byte entropy of a sequence would then be in the range of [0, 8], with 8 corresponding to content that appears completely random, and 0 for the same value repeated alone. For context, natural language text usually has an entropy value of \u22484. We use a threshold of 1.0 to remove low-entropy n-grams. We also check if more than half of the bytes have the value 0 or 0xFF, which are commonly used in padding and can be unreliable. This gives us the simple filtering strategy given by Algorithm 2.
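A small sketch of this filter (Python for illustration; the Bloom-filter membership test is abstracted behind a plain set here):

    import math
    from collections import Counter

    def byte_entropy(ngram: bytes) -> float:
        """Shannon entropy of the byte distribution, in bits (range [0, 8])."""
        counts = Counter(ngram)
        n = len(ngram)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def filter_simple(ngrams, frequent_ngrams):
        """Drop padding-heavy, low-entropy, or generically frequent n-grams.
        frequent_ngrams stands in for the per-size Bloom filters."""
        kept = []
        for g in ngrams:
            padding = sum(1 for b in g if b in (0x00, 0xFF))
            if padding >= len(g) / 2:
                continue
            if byte_entropy(g) <= 1.0:
                continue
            if g in frequent_ngrams:
                continue
            kept.append(g)
        return kept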
Algorithm 3 AutoYara
Require: benign & malicious training corpora C; top-k value k; initial n-gram size $n_l$; minimum entropy $m_h$; filter threshold $f_t$
1: function BuildIndex(corpus C)
2:   for $i \in \{8, 16, 32, 64, 128, \ldots, 1024\}$ do
3:     $G \leftarrow$ find top-k most frequent $n_i$-grams using the KiloGram algorithm [20]
4:     create new counting Bloom filter $F_{n_i}$
5:     for all $g \in G$ do (populate Bloom filter)
6:       $c \leftarrow \mathrm{Count}(g)$
7:       $F_{n_i}.\mathrm{Insert}(g, c)$
8: function BuildYaraRule(sample files S)
9:   current best rule $R_{best} \leftarrow 0$
10:  current best score $s_{best} \leftarrow 0$
11:  for $i \in \{8, 16, 32, 64, 128, \ldots, 1024\}$ do
12:    $G \leftarrow$ find top-k most frequent $n_i$-grams using [20]
13:    FilterSimple($G$, $F_{n_i}$, $m_h$, $f_t$) (using Algorithm 2)
14:    $B \leftarrow$ Bicluster($S$, $G$) (using Algorithm 1)
15:    create empty rule $R \leftarrow \emptyset$
16:    create set $covered \leftarrow \emptyset$
17:    for row-column tuple $R, C \in B$ do
18:      let Count(c), for each feature $c \in C$, be the number of files in S in which the feature/$n_i$-gram c occurred
19:      let $\sigma_{i:j}$ indicate the variance of Count($c_i$), Count($c_{i+1}$), \ldots, Count($c_j$), where the counts are sorted from minimum to maximum
20:      $t \leftarrow \arg\min_s \; s \cdot \sigma^2_{1:s} + (n - s) \cdot \sigma^2_{s:n}$
21:      $R \leftarrow R \wedge (t \text{ of } C)$
22:      $covered \leftarrow covered \cup R$
23:    $s \leftarrow \frac{|covered|}{|S|} \cdot \frac{\min\left(5, \left|\bigcup_{R,C \in B} \bigcup_{\forall c \in C} c\right|\right)}{5}$
24:    if $s > s_{best}$ then (we found a better Yara rule)
25:      $s_{best} \leftarrow s$
26:      $R_{best} \leftarrow R$
27:  return Yara rule $R_{best}$

We now have all the information we need to specify our new AutoYara algorithm for constructing Yara rules from raw bytes. This algorithm is shown in Algorithm 3. First, the BuildIndex function creates Bloom filters for each value of n considered. These take up about 33 MB of disk each, and this is done once in advance. The BuildYaraRule function does the majority of the work to create a Yara rule that hopefully matches the set of files given in S. For every n-gram size n, we extract the top-k most frequent grams, use Algorithm 2 to remove unlikely features, and then Algorithm 1 to create a biclustering of the data. As described in \u00a7 3, each feature used within a bicluster is merged into a larger clause by \u201cand\u201ding the terms together, and \u201cor\u201ding the biclusters together. To improve the quality of the biclustering, on lines 19-21 we do not naively \u201cand\u201d every n-gram found within a bicluster. Instead, we select a threshold t of the terms to be found, as not every feature will always appear in a new file. This threshold is selected based on the same heuristic used in decision trees. We take the number of occurrences of each feature $c \in C$ in the input samples and sort them from fewest to most frequent occurrences. We then find the split that minimizes the variance in counts and use that as the threshold for the number of features/n-grams required to satisfy this specific clause. This approach assumes that there will be some set of n-grams that are useful features and common, and a second population of n-grams that are excessively frequent because they are generically frequent. These biclusters are evaluated based on the coverage of the input files S, and we rely on the length of the rules and the n-grams themselves to avoid false positives. If a developed Yara rule R obtains 100% coverage, it will be selected as the final rule. We note that on line 23, we also include a penalty based on the number of n-gram components used in the candidate rule. If |R| denotes the number of n-gram features used in a rule, we penalize the rule by a factor of |R|/5 if |R| < 5. This is done to avoid false positives, as we prefer rules with more terms to bias our method toward low false-positive rates. Our final implementation of AutoYara is in Java to enable use on multiple operating systems and fast execution time. It can be found at https://github.com/NeuromorphicComputationResearchProgram, and uses the JSAT library [19] to implement the biclustering.
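To make the lines 19-21 threshold heuristic concrete, a small sketch (Python for illustration; as noted above, the shipped implementation is Java):

    import statistics

    def pick_threshold(counts):
        """Given how many input files each n-gram of a bicluster occurred in,
        choose how many of the n-grams a new file must match. Mirrors the
        decision-tree-style split minimizing summed within-group variance."""
        counts = sorted(counts)
        n = len(counts)
        def var(xs):
            return statistics.pvariance(xs) if len(xs) > 1 else 0.0
        best_s, best_cost = 1, float("inf")
        for s in range(1, n):
            cost = s * var(counts[:s]) + (n - s) * var(counts[s:])
            if cost < best_cost:
                best_s, best_cost = s, cost
        return best_s

    # e.g., pick_threshold([3, 3, 4, 9, 10]) returns 3: the three rarer n-grams
    # form one population, so the clause becomes "3 of ($g1, $g2, ...)".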
To measure false-positive rates, we evaluate each generated rule on the remaining 183 other families. However, it is possible that the false-positive rate differs significantly between benign and malicious out-of-class samples. To maximize the difficulty, and to best judge generalization, we use the 200,000 benign and 200,000 malicious files used by [22] as a special held-out corpus. This allows us to measure false-positive rates as low as $10^{-6}$, and we will routinely see that our method can obtain exactly 0 false positives. We also use a dataset provided by Elastic, taken from a production environment. In particular, Elastic had 24 different malware families for which there was a production need to write Yara rules identifying those specific families. Two expert analysts (A and B), each with at least 5 years of experience, recorded their time and progress on these families through the course of a normal workday, allowing us to show how AutoYara can save an analyst over 44% of their rule-construction time, letting them better meet their mission requirements. A third analyst (C), with at least 2 years of experience, was given no other tasking but to process all 24 families and create rules for them. Analyst C had not previously used Yara, but had prior reverse-engineering experience.

6 EXPERIMENTAL VALIDATION

Now that we have described our AutoYara concept, and reviewed the procedure for determining the search policy AutoYara uses to build Yara rules, we investigate its performance on three tasks. For these tasks, the guidance Elastic uses is that a rule needs a false-positive (FP) rate of at most 0.1% to be potentially useful. The utility of a rule then depends on the true-positive (TP) rate and the degree of need for the rule. To simplify analysis, we describe the performance of each rule as a whole using the $F_\beta$ metric, given in Equation 2. The $F_\beta$ score gives us a measure in which we can state that a true positive is more or less important than a false positive. We use $\beta = 0.001$, corresponding to each false positive being worse than a false negative, in line with our requirement that the FP rate be at most 0.1%.

$$F_\beta = \frac{(1+\beta^2) \cdot \text{TP}}{(1+\beta^2) \cdot \text{TP} + \beta^2 \cdot \text{FN} + \text{FP}} \quad (2)$$

First, we compare AutoYara with YarGen and the Greedy approach of prior works on the 184 families discussed above. This shows that AutoYara dominates YarGen with respect to $F_\beta$ score, and that the Greedy approach does not work when only a small number of samples is given. Second, we run a larger-scale test that simulates behavior in a malware hunt situation, where an analyst deploys to a remote network with the goal of finding malware on a network that is known or suspected to be compromised. This shows that we can generate rules with extremely low false-positive rates, even when querying against more than 90,000,000 files. Third, we perform a true real-world comparison of AutoYara against professional analysts performing their work at Elastic, which shows that AutoYara can be a useful tool, matching professional analyst performance on a number of malware families.

6.1 Large Scale Testing

In this first experiment, we are interested in the applicability of all methods to creating Yara rules over a large range of families and sample sizes, ranging from n = 2 to n = 2^12 examples.
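The $F_\beta$ computation in Equation 2 is straightforward to implement from raw counts; a minimal sketch follows (with $\beta = 0.001$, a false positive costs far more than a false negative).

```python
def f_beta(tp: int, fn: int, fp: int, beta: float = 0.001) -> float:
    """F-beta score from raw true-positive, false-negative, and
    false-positive counts, per Equation 2. beta < 1 weights precision
    (i.e., a low false-positive rate) more heavily than recall."""
    b2 = beta ** 2
    denom = (1 + b2) * tp + b2 * fn + fp
    return (1 + b2) * tp / denom if denom > 0 else 0.0
```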
The Greedy approach quickly becomes disqualified due to not generating enough viable rules, producing at most 17 rules with an FPR of at most 0.1% when given n ≥ 8 samples. VxSig is also disqualified from this section because it is too computationally demanding to run, requiring an estimated 603 years to run over all settings. This leaves YarGen and AutoYara, which can be tested across all 184 families and sample sizes. AutoYara produced 50-114 viable rules for each value of n, and YarGen produced 6-121 viable rules for each value of n. We compare the average $F_\beta$ score of AutoYara and YarGen over the viable rules (FPR ≤ 0.1%) generated from VirusShare in Figure 3. This shows that AutoYara produces rules of significantly higher quality than YarGen. In fact, rules generated by YarGen are not generally usable until 128 samples are given, which is more than an analyst would have available in most situations. This is particularly true for hunt missions, which we discuss in Section 6.2.

[Figure 3: Comparison of AutoYara and YarGen average $F_\beta$ ($\beta = 0.001$) score on 184 malware families, for training sample sizes from 2^2 to 2^12. Confidence intervals are constructed using jackknife resampling.]

To further explain why AutoYara produces a higher score, and why false-positive counts alone are insufficient, we plot the $F_\beta$ score of all generated rules in Figure 4, where the x-axis shows the false-positive rate (log scale) and the y-axis the true-positive rate (linear scale). The size of each dot shows how many files were used to create the rule, ranging from 2 to 256 for visual clarity.

[Figure 4: Scatter plot of the $F_\beta$ score achieved by individual rules created by AutoYara and YarGen on 184 malware families. Dot size indicates the number of training samples (2-256) used to create each rule. A black dashed line shows the desired maximum false-positive rate of 0.1%, and solid curves show the levels needed to achieve a minimum $F_\beta$ of 0.95, 0.75, and 0.5.]

YarGen produces a majority of its rules at the bottom-left corner, with 0% FP and 0% TP, making those rules ineffective. The only YarGen rules that obtain higher TP rates are those trained on more files, and we can see that YarGen suffers from a bias of decreasing TP rate as the FP rate increases. This is the worst-case performance behavior, and it shows why YarGen is biased toward lower $F_\beta$ scores. In contrast, AutoYara produces several rules with 0 false positives at a variety of true-positive rates. It can also be seen that as the TP rate increases at 0 FPs, the number of samples trained on increases, causing the average $F_\beta$ score to increase with sample size. This demonstrates that our strategy succeeds in obtaining low FP rates over a wide range of sample sizes. In the cases where AutoYara does not produce as good a rule in terms of FPs, it is not also biased toward lowering its TP rate, allowing it to obtain generally better rules at any fixed desired FP rate. Due to YarGen's performance, and the high monetary and human time cost of the experiments detailed in the following sections, we do not consider it further in comparison to AutoYara.
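Figure 3's confidence intervals are described as coming from jackknife resampling; the following is a minimal, illustrative sketch of a leave-one-out jackknife standard error for the mean $F_\beta$ over a set of rules (our own code, not the authors' analysis scripts).

```python
import numpy as np


def jackknife_se_of_mean(scores) -> float:
    """Leave-one-out jackknife standard error of the mean of `scores`
    (requires at least two observations)."""
    x = np.asarray(scores, dtype=float)
    n = len(x)
    loo_means = (x.sum() - x) / (n - 1)  # mean with the i-th value deleted
    return float(np.sqrt((n - 1) / n * np.sum((loo_means - loo_means.mean()) ** 2)))
```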
6.2 Retro Hunt Results

Our second round of testing is motivated by a common type of "hunt" team operation, where analysts deploy to networks they are not familiar with to search for malware present on those networks. Such an operation may be spurred by knowledge or supposition of a compromised network, and may or may not include knowledge of what kind of malware is being searched for. When malware is found on the network, analysts often begin to search the rest of the network for malware of the same type. AutoYara can accelerate this task by constructing rules from a small set of observed files; analysts can then use Yara with existing tooling to scan the larger network. In this application, false positives on other malicious families are still useful, though not the target: any malware found on the network is of interest to analysts and may be important, even if it was not of the type that was expected. Only benign false positives are a nuisance in this case, as they divert analyst time into investigating non-issues.

To simulate this scenario, we use the Retro Hunt capability of VirusTotal [1] (VT), combined with samples of malware shared by Twitter user @0xffff0800 (https://twitter.com/0xffff0800/status/1155876158740869121). For each family, we use AutoYara and VxSig to construct a Yara rule, and submit that rule to Retro Hunt. Retro Hunt then returns hits against that rule for all executables submitted to VirusTotal within the last 90 days. With over one million new unique files submitted per day (https://www.virustotal.com/en/statistics/), this allows us to get a better understanding of false-positive rates.

Table 1: Results analyzing files returned by VirusTotal Retro Hunt given a Yara rule generated using AutoYara and VxSig. Best method shown in bold. TP%, FP% on malware, and FP% on benign ware are estimated from up to 50 samples returned by VT; "N/A" indicates a failure to produce a rule.

Family (Samples)         Method    New VT Hits  TP%  FP% Mal  FP% Benign
APT 26 Fancy Bear (28)   AutoYara  ≥10,000        0       92           8
                         VxSig     4,079          0       98           2
APT 28 (61)              AutoYara  ≥10,000        2       54          44
                         VxSig     N/A            -        -           -
ATMDtrack_DPRK (3)       AutoYara  2            100        0           0
                         VxSig     0              -        -           -
CloudHopper/APT10 (229)  AutoYara  5,118          0      100           0
                         VxSig     1,251          0       92           6
CobaltGroup (9)          AutoYara  ≥10,000        0       98           2
                         VxSig     1              0      100           0
Dridex (5)               AutoYara  36           100        0           0
                         VxSig     1            100        0           0
Dyre (8)                 AutoYara  1            100        0           0
                         VxSig     ≥10,000        2       92          16
EquationGroup (10)       AutoYara  4            100        0           0
                         VxSig     1            100        0           0
GamaredonGroup (7)       AutoYara  26           100        0           0
                         VxSig     ≥10,000        0       76          24
GrandCrab (7)            AutoYara  62           100        0           0
                         VxSig     N/A            -        -           -
GreenBugAPT (4)          AutoYara  5            100        0           0
                         VxSig     5            100        0           0
GreyEnergy (3)           AutoYara  2            100        0           0
                         VxSig     2            100        0           0
OlympicDestroyer (4)     AutoYara  2            100        0           0
                         VxSig     5             80       20           0
Shamoon (2)              AutoYara  0              -        -           -
                         VxSig     1            100        0           0
Sofacy (3)               AutoYara  453            6       54          40
                         VxSig     ≥10,000        0       72          28
Sugar (17)               AutoYara  923          100        0           0
                         VxSig     1,764         56       44           0
Thrip (76)               AutoYara  ≥10,000        0       98           2
                         VxSig     ≥10,000        0       98           2
Turla (Uroburos) (11)    AutoYara  ≥10,000        0       90          10
                         VxSig     ≥10,000        0       88          12
WannaCry (2)             AutoYara  6,520        100        0           0
                         VxSig     N/A            -        -           -
Petya (5)                AutoYara  ≥10,000        0        2          98
                         VxSig     482            2       98           0
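As noted above, a generated rule can be deployed with standard tooling to sweep a network or file share in a hunt operation. The following is a minimal sketch using the yara-python bindings; the rule path and scan root are placeholders, and error handling is simplified.

```python
import os

import yara  # pip package "yara-python"

# Compile a rule produced by AutoYara (placeholder path).
rules = yara.compile(filepath="autoyara_family.yar")

# Walk a directory tree (e.g., a mounted share) and report matching files.
for dirpath, _, filenames in os.walk("/mnt/suspect_host"):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if rules.match(path):
                print("HIT:", path)
        except yara.Error:
            pass  # skip unreadable files
```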
Table 2: Comparing against three professional analysts. For each cell, we report the family coverage rate (top), false-positive rate (middle), and time required in minutes to create the rule (bottom), for each of 24 malware families (baldr, baofa, bkff, conju, darkvnc, dragonmess, ertfor, firefly, jongiti, ladyoffice, navattle, nezchi, olympicdestroyer, phds, photominer, pikachu, plurox, potukorp, sekur, subroate, wininf, wuca, xpantispyware, zcash). Analyst B used YarGen as part of their work. Highlighted columns indicate an automated tool produced a usable rule (≤ 0.1% FP); bold indicates the best results for tooling. [The per-family values for Analysts A-C, VxSig, Greedy, and AutoYara are not reproduced here.]

The terms of our VT subscription limit the number of queries we are allowed to run, so only AutoYara and VxSig are evaluated in this section. We submitted AutoYara rules for 14 malware families to VT, with the results shown in Table 1. Lines marked "N/A" indicate a failure to produce a rule, which only happened with VxSig. Note that in each of these cases, AutoYara was able to produce a rule with a 100% TP rate. This may indicate a strength of our approach, which allows signature construction from all portions of the file. In many cases VT returns only a few hits, but some rules return hundreds if not thousands of hits. Due to the time-intensive nature of analyzing these results, we review only up to 50 returned hits to estimate the results. We can see that AutoYara achieves a 100% TP rate on 11 out of the 20 families, requiring no human interaction at all. This is noteworthy because these rules are run over 90 million files, indicating our biclustering approach obtains exceptionally low false-positive rates. In 15 of the 20 cases, AutoYara performed better than, or identically to, VxSig. In half of the instances where VxSig performed better, the results were a marginal improvement but still did not obtain any TP hits (APT 26 and CobaltGroup). There are some failure cases of varying degrees of severity. The Shamoon rule did not fire on any new samples, so no results were returned; in this case we learn nothing from the rule, but no analyst time was wasted on false positives. The Sofacy rule returns 60% malware, though most of it appears to be from other families.
This result is still useful in the hunt situation, but the FP rate on benign applications is still high. The Petya rule had a significant false-positive issue, and would not be informative in practice. Overall these results are encouraging, and indicate that AutoYara can be useful for hunt activities when only a limited number of examples of the malware is available. We are able to generate useful rules in the majority of cases, using a small number of samples, often with no false positives over 90 million files.

6.3 Industry Human Comparison Testing

Our last test is based on real-world work performed by professional malware researchers. As part of day-to-day operations, there exists a need to develop Yara rules to detect known and particularly challenging families. Three malware analysts were asked to develop Yara rules for 24 malware families. The stated goal was to provide maximal coverage of malware samples with a minimal false-positive rate. While there is some tolerance for cross-tagging other malicious families with a Yara rule intended for a single family, a false-positive rate of 0.1% or more on benign files was unacceptable.

The results, with TP and FP rates, are shown in Table 2, where Analysts A and B are both considered experts with multiple years of experience. Analyst B used YarGen as part of their standard workflow, and found it was insufficient on its own in all cases, so Analyst B's results subsume YarGen. Several rows of the table are empty because neither analyst was able to complete all of their work: new priorities eventually subsumed these tasks, which were never completed. This high demand on their time, and the time-intensive nature of the work, is part of the problem we are trying to solve. For example, Analyst B spent 78 minutes on the baldr family, but did not create a usable rule due to a false-positive rate of 11.8%. Analyst C was not a part of the business workflow, and so was able to spend two weeks generating signatures for all 24 families.

On this production data, the results of AutoYara are significant. For 21 of 24 families, AutoYara successfully produced useful rules with reasonable TP rates and exceptionally low FP rates, either obtaining exactly 0 FPs on over 400,000 test files or extremely low rates such as 0.00025%. While AutoYara usually performed slightly worse than the analysts, it produced better results than one or more analysts on 10 of the 24 families (bkff, ertfor, firefly, navattle, nezchi, olympicdestroyer, phds, sekur, subroate, and wuca). Based on these results, Analyst B could have saved 44% of the time they spent working on families that AutoYara was able to capture, and could instead have focused on the more difficult samples like baldr, darkvnc, or plurox. Similarly, Analyst C could have saved 86%, and Analyst A 100%, of their time, focusing instead on the more challenging cases that did not avail themselves to automation. In comparison, the Greedy approach used by many prior works was able to produce usable rules on only 5 of the 24 families, and only on the easiest samples. VxSig was able to produce rules for 10 of the 24 families, half as many as AutoYara. Note that in only two instances did VxSig outperform an analyst, and it requires up to 7.9 hours to process a single family, which significantly hampers its usability.

Table 3: AutoYara results improved by an inexperienced user with one attempt at editing the produced rule.
Family       Time (min)   TP% (Human+AutoYara)   FP% (Human+AutoYara)
baofa        2            50                     0
darkvnc      1            50                     0.83
jongiti      2            70.59                  0
ladyoffice   1            100                    0
subroate     2            46.67                  0
zcash        1            100                    0

Another benefit of AutoYara is that it is easy for an analyst to modify a generated rule to improve its performance. Table 3 shows the results when a user with only one course in reverse engineering, and no professional experience developing rules, attempted to modify the rules produced by AutoYara. For these six families they were able to improve the TP/FP rates, requiring only a few minutes per family.

7 WHEN WILL AUTOYARA WORK WELL?

We now investigate some of the reasons why, and when, AutoYara works well. First we show that structural consistency in byte strings across samples is related to AutoYara's performance, which is expected. Diving deeper, we investigate the generated rules to better understand the type of content AutoYara uses, how the generated rules perform, and how that performance compares to a domain expert's results.

7.1 Byte Similarity Investigation

To help understand when and why AutoYara performs well, we investigated the similarity between malware samples using SSDEEP [13]. SSDEEP creates a similarity digest from the raw bytes of an input file, and comparing two digests yields a score in the range 0 to 100. In general, any score of 20 or more is a "match", and scores quickly drop off to 0 for non-similar content. While SSDEEP comparisons are heuristic, we found visualizing the connected components of malware families, based on their pairwise SSDEEP scores, to be a useful diagnostic. For example, in Figure 5a and Figure 5b we see the graphs created for the Olympic Destroyer and Dragonmess malware families. Both families exhibit clustering into a few densely connected subgraphs. Unsurprisingly, AutoYara rules perform very well on these families. The intuition is that the high byte-level similarity of these inputs makes for convenient rules. These results support the claim that a biclustering-based approach to representing a single family is fruitful: the data itself tends to form natural sub-family populations, which are easier to represent with the biclustering process.
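This diagnostic can be sketched with the ssdeep Python bindings and networkx; the workflow below (hash every sample, connect pairs whose comparison score is at least 20, then inspect the connected components) is our own reconstruction, not the authors' analysis code.

```python
import itertools

import networkx as nx
import ssdeep  # pip package exposing hash_from_file() and compare()


def family_similarity_graph(paths, match_threshold=20):
    """Build a graph whose nodes are samples and whose edges connect pairs
    of files with an SSDEEP comparison score >= match_threshold."""
    digests = {p: ssdeep.hash_from_file(p) for p in paths}
    G = nx.Graph()
    G.add_nodes_from(paths)
    for a, b in itertools.combinations(paths, 2):
        score = ssdeep.compare(digests[a], digests[b])
        if score >= match_threshold:
            G.add_edge(a, b, weight=score)
    return G


# A few large, dense components (as in Figure 5) suggest byte structure
# convenient for rule construction; many small fragments (as in Figure 6)
# suggest a noisy input set.
```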
[Figure 5: Connected-components graphs based on pairwise SSDEEP similarities for two families where AutoYara did well: (a) Olympic Destroyer and (b) Dragonmess. Nodes are sample hashes and edges are labeled with SSDEEP scores; both families form a few densely connected subgraphs.]

We found that AutoYara had a higher failure rate on the AVClass-labeled corpus than on the production data. We suspect that this was caused by a larger amount of label noise in those tests, since the AVClass tool is not perfect, and depends on the output of several AV products, which are also not perfect. This accumulation of error could have increased the noise, making any attempt at rule construction difficult.
[Figure 6: Connected-components graphs based on pairwise SSDEEP similarities for two families where AutoYara did not perform well: (a) Xpaj and (b) Zlob. Compared to Figure 5, the components are smaller and far more fragmented, with many weakly connected pairs.]

The graphs in Figure 6a and Figure 6b support this hypothesis, showing much greater disparity in the pairwise similarity of files from the Xpaj and Zlob families, which were the most challenging for rules produced by AutoYara.
While there are dominant clusters, the presence of a large number of small components indicates a potentially noisy input set, which would reduce the effectiveness of any tool attempting to produce a useful rule.

7.2 Reverse Engineering Investigation

In addition, we conducted reverse engineering of the rules that AutoYara created. We looked at the rules generated for the Baldr and Conju families in detail, and noticed that all of the main file sections were targeted. Rule strings targeting the resource sections were strings belonging to the application manifest and DLL names, strings in the text sections pointed to ordinary function blocks, and strings in the data section were long strings that were not easily interpretable; in the case of the Conju family, many of the rule strings targeted the decompression stub of UPX. In general, the content of the AutoYara rules comprised reasonable items to target. For code sections in particular, AutoYara did not tend to target exactly the same functions as an analyst would, but the functions it selected seemed reasonably specific to the sample.

[Figure 7: Percentage of each Yara rule's strings targeting each file section (.text, .rdata, .data, .rsrc, .reloc, MMS, UPX, .nsp, OTHER, .vmp) for the 24 production families, shown for (a) AutoYara-generated rules and (b) manually generated rules. Both tend to use similar sections to generate rules: AutoYara tends to target the .text section, while the analyst tended to target the .data section. "MMS" describes rules whose strings hit multiple file sections; "OTHER" condenses file sections that were not relevant to our analysis.]

We also analyzed the file sections targeted by the rules generated by AutoYara and by Analyst C (the only analyst to investigate all of the malware families, hence we use their results). Figure 7a and Figure 7b show that both AutoYara and our analyst wrote rules targeting the .text, .rdata, .data, .rsrc, and .reloc sections. We also noticed that AutoYara had a tendency to target the .text section, whereas our analyst tended to write rules targeting the .data section. It is likely that AutoYara targets the .text section due to its high entropy, and because it can find blocks of instructions that are shared across multiple binaries. Analysts, on the other hand, tend to use the .data section more because it contains globally accessible or predefined data, such as strings or constants, that is easier to identify, extract, and understand.
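Attributing a rule string to a PE file section, as in this analysis, can be sketched with the pefile library: map the file offset of each match into the section whose raw-data range contains it. This is illustrative code under our own naming, not the analysis scripts used for Figure 7.

```python
import pefile


def section_for_offset(pe_path: str, offset: int) -> str:
    """Return the name of the PE section whose raw-data range contains the
    given file offset (e.g., the offset at which a Yara string matched)."""
    pe = pefile.PE(pe_path, fast_load=True)
    for sec in pe.sections:
        start = sec.PointerToRawData
        if start <= offset < start + sec.SizeOfRawData:
            return sec.Name.rstrip(b"\x00").decode(errors="replace")
    return "OTHER"  # headers, overlay, or data outside any section
```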
The tendency of AutoYara to target high-entropy areas may also contribute to its small bias toward the UPX1 sections of UPX-packed binaries (shown as positive values in the UPX row of Figure 7a). However, further exploration of this subject is needed to determine whether there are strong correlations, or whether this is an artifact of our small sample size; due to the high cost of such experiments, a larger manual study may be difficult to perform. We are aware of no other work that has compared manually versus automatically generated rules.

Finally, we look at the commonality in rule behavior, with results in Table 4. $\bar{X}$ indicates the mean number of file sections that a rule component (each string within a Yara rule) hit in the test data (e.g., $\bar{X} = 2$ if .text and .data are both represented in the rule). Using this, we can see that AutoYara and the manually built rules are highly correlated: both tend to have strings that hit either only one file section or several, indicating a degree of similarity in the types of content used. The Similarity column indicates the overlap, over the test set, in which executables the final rules triggered on. A similarity of 100% means that the sets of files flagged by AutoYara and the human analyst intersect perfectly, and 0% indicates no overlap in the files flagged. From these results we see that, again, AutoYara and manually constructed domain-expert rules tend to agree upon and find similar files in the test set, with most differences caused by differing false-positive rates (e.g., for ladyoffice, AutoYara has only 2 hits, which are TPs, and the analyst hits only the 3 TPs).
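The Similarity column can be read as a set-overlap measure between the files two rules fire on. A minimal sketch of one such measure (Jaccard overlap, which is our interpretation of the description above, not necessarily the exact statistic used) follows.

```python
def rule_hit_similarity(hits_a, hits_b) -> float:
    """Overlap between the sets of test files two Yara rules fire on:
    100.0 when the sets are identical, 0.0 when they are disjoint."""
    a, b = set(hits_a), set(hits_b)
    if not a and not b:
        return 100.0
    return 100.0 * len(a & b) / len(a | b)
```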
In 2014, for example, the total economic cost associated with malware distributed through pirated software, a subset of all malware, was estimated to be nearly $500 billion (Gantz et al., 2014). Overall malware has proven to be effective for its authors, as indicated by the exponential growth of new malware (Spafford, 2014; AV-TEST, 2016a; F-Secure, 2016). This growth in malware only increases the need for better tools to stop malware and aid analysts and security professionals. One speci\ufb01c area for improvement is malware classi\ufb01cation. The task of malware classi\ufb01cation has been long studied, and generally refers to one of two related tasks: 1) detecting new malware (i.e., distinguishing between benign and malicious applications) and 2) differentiating between two or more known malware types or families. The former of these we will refer to as malware detection, and it is intrinsically useful in stopping the spread of malware. Anti-virus (AV) products currently perform this function using a predominantly signature-based approach. Signatures are intrinsically speci\ufb01c to the malware they detect, and can be labor-intensive for an analyst to create. This makes signatures unlikely to scale as malware becomes more prevalent, an issue publicly recognized by AV vendors (Yadron, 2014; Hypponen, 2012). The second class of malware classi\ufb01cation we will refer to as malware family classi\ufb01cation. Analysts and security professionals use this process to sort through new binaries and process an ever growing amount of data. In this case we assume or know that the binary is malicious, and wish to NeurIPS 2020 Workshop: ML Retrospectives, Surveys & Meta-Analyses (ML-RSA). \fdesignate it as a member of a speci\ufb01c family. For example, Con\ufb01cker is a speci\ufb01c malware family that was prevalent from 2008 through 2009, and evolved through \ufb01ve major revisions (Porras, 2009). Modern obfuscation techniques employed by malware authors means that a single variant of a piece of malware may have several instantiations in the wild that do not have the same MD5 or SHA hash, but do have substantially identical functionality. Thus there is a need to automatically determine if newly observed binaries in fact belong to a previously known family. Such a \ufb01nding aids in attribution, and reduces the number of \ufb01les that analysts must look at. The \ufb01eld of machine learning would seem the \ufb01rst and most likely avenue to provide relief to the malware classi\ufb01cation problem, and indeed ML has been applied to this problem for decades (Kephart et al., 1995). While the problem of malware detection is an issue that spans across many \ufb01le formats (Tabish et al., 2009) and operating systems, we will focus on the case of Microsoft Portable Executable (PE) binaries for 32 and 64-bit versions of the Windows OS, as was \ufb01rst considered by Schultz et al. (2001). In many cases the same issues and methods have been applied to other malware domains, such as malicious PDF \ufb01les, or malware for the Linux or Android platforms. We focus on Windows executables because it is one of the longest standing platforms, and thus has the longest history of continuous research. Machine learning is an important contributor to a solution due to its focus on generalization, meaning that the models learned should work effectively on new malware specimens, which have not previously been seen. 
The importance of having methods that generalize is only growing, as a recent study has found that 94% of applications on a sample of 100 million machines are unique to the machine (Li et al., 2017). This means a system deployed widely will have to deal with hundreds of millions of unique \ufb01les that would have never been encountered before. The combination of machine learning and malware draws from many branches of Computer Science: From high level algorithms, statistics, and information theory to build classi\ufb01cation models, to low level details on assembly instruction set architectures, software design, and networking needed to understand how malware works and propagates. Challenges from across this spectrum interact to make malware classi\ufb01cation one of the more dif\ufb01cult tasks to which one could apply machine learning. Given that researchers often (if not always) have limited resources and time, it is not surprising that this area has received less attention and struggled to make progress, even as machine learning becomes an increasingly mainstream tool. In particular we note that a wide array of potential machine learning methods have not yet been explored for malware classi\ufb01cation. Thus in this survey we make an effort to address techniques we feel should be investigated, especially if they are particularly appropriate or readily available in free software projects. Our primary goal is to provide a common base of understanding for both malware researchers and machine learning practitioners who are interested in this combined space. What techniques have been looked at, what are their strengths or challenges, and what areas are in need of continued work? We present the information about what has been done and why effective application is a challenge from the bottom up, starting with the dif\ufb01culties of getting representative data in section 2. This is a critical issue that has been overlooked in many prior works, as all subsequent stages depend on that data being representative of the population at large. Next we will discuss the features that have been extracted from these binaries in section 3, focusing on both static and dynamic features and their respective strengths and weaknesses. The next three sections will be on the types of machine learning approaches that have been applied to these features. More \u201cstandard\u201d machine learning methods used on simple feature vectors are discussed in section 4. In section 5 we will discuss the machine learning models used that understand the input representation as being a type of sequence, which maps better to the true nature of the data. The most expressive representational form, that of a graph, is discussed in section 6. While this form is one that often best encodes the nature of the features used, it is also the least frequently used. Once we have chosen our data, features, and model, the next step is evaluation in section 7. Finally, we discuss some future research directions in section 8 and our conclusions in section 9. In each section of this survey we will attempt to provide a broad overview of what is currently done and the challenges that are impeding progress or need to be considered. It is unfortunate that for many of the issues we will discuss, there is little quantitative information about how widespread or signi\ufb01cant their impact is. 
The complexity of malware ranges from simple applications relying on user error and unpatched vulnerabilities to work, to viruses like Stuxnet, written (it is said) by nation-state actors, which attempt to evade detection and may have selective intent that goes unnoticed or unused on most systems (Kushner, 2013). This broad spectrum of sophistication means that different issues and countermeasures may have a more or less noticeable impact on a learning system depending on the current prevalence of such measures, the malware we would like to classify, and the systems on which a solution would be deployed. This can change over time, and it is not currently feasible to tackle all of these issues at once. For these reasons we refrain from declaring any method the "state of the art" for malware classification, and instead focus on the pros and cons of the various approaches, as well as the underlying issues that cause this slippery situation. In particular, we will focus on any theoretical shortcomings that would prevent a system from working in practice, such as, for example, any machine learning processes which an adversary could circumvent with minimal effort if they wished to do so.

2 Data Collection Challenges

As with many applications, the first task in building a machine learning model is to obtain data that accurately represents the distribution of binaries that will be observed. It is indeed well known that obtaining more and better labeled data is one of the most effective ways to improve the accuracy of a machine learning system (Domingos, 2012; Halevy et al., 2009). However, by its very nature the potential scope of what a binary can do is unbounded. There is no way for us to randomly sample from the binaries that may exist in the world, and we have no way to measure how much of the "space" of binaries we have covered with any given dataset. Beyond the unbounded scope, the malware domain poses a number of unique challenges to data collection. This makes it almost impossible to perform canonical best practices, such as having multiple labelers per file and judging inter-labeler agreement (Geiger et al., 2020). When obtaining data, it is often the case that malware is the easiest to get. Not only are there websites dedicated to collecting malware sent in by volunteers (Roberts, 2011; Quist, 2009), but it is not unusual for a researcher to obtain their own malware specimens through the use of honeypots (Baecher et al., 2006). A honeypot is a system connected to the Internet that intentionally tries to get infected by malware, often by leaving open security holes and foregoing standard protections. At the same time, both of these sources of malware can have data quality issues. Honeypots will have data biased toward what that system is capable of collecting, as malware may require interaction from the honeypot through specific applications in order to successfully infect a machine (Zhuge et al., 2007). That is, a malware sample's infection vector may rely on a specific version of Firefox or Chrome to be running, and it may not be possible to account for all possible application interactions. Malware may also attempt to detect that a potential target is in fact a honeypot, and avoid infection to defer its detection (Krawetz, 2004). The issues that bias what malware is collected by honeypots are also likely to impact the quality of larger malware repositories, as users may run honeypots and submit their catches to these larger collections.
Malware repositories will also have a self-selection bias from those who are willing to share their malware and take the time to do so. Benign data, or "goodware", has proven to be even more challenging to physically obtain than malware. This is in part because malware actively attempts to infect new hosts, whereas benign applications do not generally spread prolifically. As far as we are aware, no work has been done to quantify the diversity or collection of benign samples, or how to best obtain representative benign data. Most works take the easiest avenue of data collection, which is to simply collect the binaries found on an installation of Microsoft Windows. This tactic can lead to extreme over-fitting, where models literally learn to find the string "Microsoft Windows" to make a determination (Seymour, 2016; Raff et al., 2016). The population of binaries from Windows shares too much of a common base to be useful for training more general models. Instead, the model learns to classify everything that does not come from Microsoft as malware (Seymour, 2016). This bias is strong enough that even using only a subset of the information will still lead to over-fitting (Raff et al., 2017). This issue is particularly widespread, and occurs in almost all cited papers in this survey. The significant exception to this are papers produced by corporate entities that have private data they use to develop anti-virus software. When this goodware bias issue is combined with the fact that there is no standard dataset for the task of malware detection, it is almost impossible to compare the results from different papers when different datasets are used. In addition, prior work using benign samples from clean Microsoft installations may significantly over-estimate the accuracy of their methods. Only recently has effort been made to address this lack of a standard dataset for malware detection. Anderson and Roth (2018) released the EMBER dataset, which contains features extracted from 1.1 million benign and malicious binaries. EMBER is the first standardized corpus that has been released for malware detection. Their work has taken important steps toward reproducible science and a shared benchmark, but more work is still needed. By the authors' own admission, the method of its labeling makes it an "easy" corpus. If users want to create new features from the raw binaries, they have to obtain the binaries themselves independently, as the authors are unable to share the raw files due to copyright concerns. Information regarding malware family is also not present in the original version. A 2018 version of the EMBER corpus (released in 2019, so that labels would be of higher confidence) has attempted to rectify a number of these issues by using more challenging data and malware family information. Once data has been obtained, labeling the data must follow (when labels do not come "for free" as they do with honeypots). The issue of labeling malware into families, or determining whether an unknown binary is or is not malware, is labor intensive and requires significant domain knowledge and training. This is in contrast to many current machine learning domains, like image classification, where labeling can often be done by individuals with no special expertise and with minimal time. For example, an expert analyst can often take around 10 hours to characterize a malicious binary (Mohaisen and Alrawi, 2013).
This observation of the expense of understanding what a file does is not unique, with a recent survey reporting hours to weeks of time, from participants ranging from 3 to 21 years of experience (Votipka et al., 2019). This effort makes manually labeling large corpora impractical. For an entirely expert-labeled corpus for malware family classification, the largest public corpus we are aware of was developed by Upchurch and Zhou (2015). They grouped 85 malware samples by functional similarity into a total of 8 groups. For benign vs. malicious labeling, many have attempted to circumvent this issue through the use of anti-virus (AV) products. One popular strategy is to upload the binaries to websites such as VirusTotal, which will run several dozen AV products against each binary, and return individual results. If more than 30% of the AVs claim a file is malicious, it is assumed malicious. If none of the AVs say it is malware, it is assumed benign. Any specimen that tests between these two levels (at least one but less than 30% of the products say it is malicious) is then discarded from the experiment (Berlin et al., 2015; Saxe and Berlin, 2015). We note there is nothing particularly special about choosing 30% as the threshold, and many works have used different rules. Others have used ≥4 AV hits as a splitting point between malicious and benign (Incer et al., 2018), or left the issue unspecified (Kolosnjaji et al., 2016). A recent study by Zhu et al. (2020) of different thresholding strategies found that using a threshold of ≤15 as the decision point between benign and malicious is a reasonable compromise to varying factors. These include the facts that 1) AV decisions fluctuate over time, stabilizing after several months, 2) the false positive rate of AV engines is not trivial for novel files, 3) the false positive rate on packed benign files can be significantly higher, and 4) many AV engines have correlated answers, and some appear to alter their own decisions based on the results of other AV engines over time. We note that these results regarding the use of VirusTotal labels are only for a benign vs. malicious determination, and an equally thorough study of family labeling using VirusTotal has not yet been presented. While this selection is easy to perform, the labels will be intrinsically biased toward what the AV products already recognize. More importantly, binaries marked by only a few AV products as malicious are likely to be the most important and challenging examples. This middle ground will consist of either benign programs which look malicious for some reason (false positives), or malicious binaries that are not easy to detect (false negatives). Removing such examples will artificially inflate the measured accuracy, as only the easiest samples are kept. Removing such difficult-to-label points will also prevent the model from observing the border regions of the space between benign and malicious. The aforementioned EMBER dataset uses this style of labeling, and hence the "easy" designation (Anderson and Roth, 2018). This AV-bias issue also hampers effective model evaluation, as we are skewing the data, and thus the evaluation, to an easier distribution of benign and malicious samples. This causes an artificially large accuracy by any of the metrics we will discuss later in section 7.
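To make these labeling rules concrete, the following minimal sketch (our own illustration; the function name and default threshold are ours, not from any cited work) implements the discard-the-middle strategy described above:

    def label_from_av_votes(num_positive, num_engines, malicious_frac=0.30):
        # Benign only if no engine fires; malicious if at least
        # `malicious_frac` of engines fire; otherwise discarded (None),
        # as in Berlin et al. (2015) and Saxe and Berlin (2015).
        if num_engines == 0:
            return None                      # no information at all
        if num_positive == 0:
            return "benign"
        if num_positive / num_engines >= malicious_frac:
            return "malicious"
        return None                          # ambiguous middle ground, dropped

Everything returned as None here is exactly the hard middle ground whose removal artificially inflates measured accuracy.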
Only recently have some of these AV biases been categorized and described. Botacin et al. (2020) have shown that the detection rate of AV products may vary by country (i.e., whether the malware is global or country-specific in its proliferation), executable type (e.g., COM files vs. DLLs), and family type (e.g., ransomware vs. trojans). These biases will naturally be embedded into any model and evaluation built from labels that are AV-produced. Further, using older files to try to maximize confidence is not a guaranteed workaround, since AV engines will have label regressions over time, where they stop detecting sufficiently old files as malicious (Botacin et al., 2020). We also note that the subscription service to VirusTotal allows for downloading the original files based on their hash values. This is how users can get the raw version of the EMBER dataset, or create their own preliminary datasets. However, the subscription to VirusTotal is not cheap (even with academic discounts), and may be beyond the budget of smaller research groups or those just getting into this space. As such it represents an unfortunate barrier to entry, especially since VirusTotal is widely adopted within the industry. When the desired labels are for malware families, the use of AV outputs becomes even more problematic. The family labels provided by AVs are not standardized, and different AV products will often disagree on labels or type (Bailey et al., 2007a). While more advanced methods exist than simple thresholding (e.g., 3/5 of AVs say the label is "Conficker") for determining benignity (Kantchelian et al., 2015) and malware family (Sebastián et al., 2016), the use of many AV products remains the only scalable method to obtain labels. High-quality family labels require manual analysis, which, as noted before, requires days to weeks of effort. Worse still, malware authors have historically copied or stolen code from one another, which can make determining a specific family (and the related problem of attribution) even more difficult (Calleja et al., 2019). Beyond the issue of collecting data, there is also the fact that binaries exhibit concept drift, meaning the population as a whole changes over time. This is true of both benign and malicious binaries, as changes will percolate through the population as the Windows API changes, code generation changes with newer compilers, libraries fall in and out of favor, and other factors. It then becomes important to investigate the performance of a classification system as change occurs (Masud et al., 2011), which is not widely explored. The distribution of malware in particular drifts at a faster rate, as malware authors attempt to modify their code to avoid detection. For example, Rajab et al. (2011) performed an extensive study of web-based malware on Google's Safe Browsing infrastructure. Over four years they saw an increase in malware that relied on social engineering, a short lifespan for the use of most exploits documented by Common Vulnerabilities and Exposures (CVEs), and an increase in attempts at "IP cloaking" to obscure their source. The fast evolution of malware is a result of an adversarial scenario, and only further complicates the development of a long-term solution (Kantchelian et al., 2013; Singh et al., 2012).

3 Features for Binaries

Feature extraction is the prerequisite step to applying any machine learning method. In the domain of malware analysis for PE binaries, the approaches are generally divided into one of two groups: static or dynamic.
Dynamic features are extracted by running a binary and extracting the features of interest from its execution. The fact that a program's execution may alter from run to run and in different environments is why the features are called dynamic. Given the potentially malicious intent of any arbitrary executable, dynamic analysis is often done through a virtual machine (VM). Conversely, static features are extracted from the binary itself without any attempt to execute it. A summary of the most commonly used features, and the representations that are regularly used, is given in Table 1.

Table 1. Summary of the features commonly used for malware analysis. The Feature Source columns indicate whether the feature type is commonly obtained via static or dynamic analysis. The Feature Representation columns indicate which ways of interpreting the original features are used. The Fixed-Length column does not consider cases where an approach converts sequences and graphs to fixed-length representations while retaining significant information about the sequential nature.

                            Feature Source     Feature Representation
  Feature Type             Static  Dynamic   Fixed-Length  Sequence  Graph
  Bytes                      ✓                    ✓            ✓
  Header Values              ✓                    ✓
  Entropy                    ✓                                 ✓
  Assembly                   ✓       ✓            ✓            ✓        ✓
  API/Function Calls         ✓       ✓            ✓            ✓        ✓
  System Calls                       ✓            ✓            ✓        ✓
  Network Traffic                    ✓            ✓            ✓
  Performance Counters               ✓            ✓            ✓
  System Changes                     ✓            ✓            ✓
  Contextual                 ✓       ✓                                  ✓

3.1 Dynamic Analysis Features

There are a number of common feature types to extract with dynamic analysis. For example, an early type of dynamic analysis was to modify the linker in the operating system to wrap each function call to the OS or other libraries with a special prologue and epilogue (Willems et al., 2007). In doing so, the functions called could be tracked in order of occurrence, and one could obtain a sequence of API or function calls. Such tracking of API calls can be used in many ways, and is often interpreted as a sequential ordering or as a directed graph (Elhadi et al., 2014; Fredrikson et al., 2010). Special tracking can be added for common tasks, such as registry edits, files created or deleted, mutex operations, and TCP/IP calls (Rieck et al., 2008). These are all common tasks or operations that malware might perform, so recording extra information (such as method arguments) can be beneficial to analysis. Ultimately, there are many ways to combine the API functions called and the operations performed, with many works using one or both options, and tracking different subsets of actions. These approaches are often called "behavior based", and make up a large portion of the dynamic features used. Directly related to the tracking of API calls is tracking system calls. For our purposes, we define a system call as a service provided by the Windows kernel, and (usually) accessed via an entry point in Ntdll.dll (Russinovich et al., 2012a,b). There are several hundred of these functions, and they are often called by the APIs Microsoft provides, but less often by user code. In fact, use of functions in Ntdll.dll by user code is regarded as a malware indicator (Sikorski and Honig, 2012). One advantage of tracking system calls, rather than all calls to the Windows API, is that the set of system calls tends to remain stable from one version of Windows to another, for the sake of compatibility.
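As a small illustration of the representations in Table 1, the following sketch (ours, over a hypothetical sandbox trace) maps an API-call trace to both a fixed-length bag-of-calls histogram and a directed call-transition graph:

    from collections import Counter

    def trace_to_features(trace):
        # Fixed-length view: counts of each call name (given a fixed vocabulary).
        histogram = Counter(trace)
        # Graph view: directed edges between consecutive calls, with counts.
        edges = Counter(zip(trace, trace[1:]))
        return histogram, edges

    # Hypothetical trace captured from a sandbox run:
    hist, graph = trace_to_features(
        ["NtCreateFile", "NtWriteFile", "NtWriteFile", "RegSetValueExW"])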
The same technology that allows for API call traces can also be used to track changes to the state of the system. Such system changes may include the registry edits and files created, as well as processes that started or ended, and other various configurable settings within the OS (Bailey et al., 2007b; Kirat et al., 2014). System changes may also be obtained from system logs (Berlin et al., 2015), which can be used as a convenient feature source with minimal overhead (since the system was going to collect such logs anyway) or for retroactively detecting malware and determining the time of infection. Though not as popular, more granular information can be extracted as well. It is possible to record the sequence of assembly instructions as they run (Dai et al., 2009), though this approach in particular can require additional feature selection and processing, as the amount of data can grow quickly and the length of program execution may be unbounded. Another option is to track the results of various performance and hardware counters that are present in modern CPUs, as well as process-related counters tracked by the OS (Tang et al., 2014). These could include the number of memory pages being allocated or swapped, voluntary and forced context switches, cache hits and misses, and other various fields. The intuition is that the performance behavior of malware will be distinct from that of benign applications due to the different nature of their operation. Another, less frequently used, approach is to monitor the network traffic and content that a binary may produce (Stakhanova et al., 2011; Nari and Ghorbani, 2013; Wehner, 2007; Perdisci et al., 2010; Mohaisen and Alrawi, 2013). Many different malware applications make use of command-and-control servers (the existence or location of which may be obfuscated) to direct the actions of infected hosts, making it a potentially informative behavior. Use of the local network is also one of the most common ways for malware to self-proliferate. While the population of malware that does not use the Internet or any local network may be small, it may also be one of the more interesting and important ones to classify correctly. The methods discussed in this section make up the majority of features that are extracted via dynamic analysis. While the set of options may seem simple, the systems to capture them represent their own significant engineering efforts. Many such systems have been developed over time, and we refer the reader to Egele et al. (2008) for a survey of the numerous systems for dynamic analysis and their relative pros and cons. The focus of this work will remain not on the method of collection, but on what is collected and the challenges that are faced in doing so.

3.2 Dynamic Analysis Challenges

At first glance, a preference for dynamic over static features may seem obvious. The actual behavior of an application would intuitively be a strong indicator of the intent of a binary, and an effective way to group applications and measure their similarity. However, this perforce requires allowing the binary to execute, which opens a proverbial can of worms that must be considered. For safety, malware must generally be run inside a Virtual Machine where its effects can be contained and reverted. But malware authors are aware of this, and can attempt to detect that the malware is being run in a controlled environment and then alter the malware's behavior in response.
It is even possible for malware authors to detect which specific emulator they are being run in, be it standard Virtual Machine emulation software (e.g., VirtualBox) or custom environments used by AVs (Blackthorne et al., 2016). For evasive malware, that means the apparent behavior inside of a safe Virtual Machine may differ in a substantial way from when the same program is running on real hardware (Kirat et al., 2014). This makes features built from running binaries inside a VM less reliable. Unfortunately there exist a number of potential ways for a malicious author to detect a virtual environment, and there is no simple way to prevent such detection. One particular avenue is through CPU and timing attacks (Kang et al., 2009) that are applicable to hypervisor virtualization (Popek and Goldberg, 1974). For a bare-metal hypervisor that allows most instructions to run at native speed, it is necessary to intercept and emulate certain instruction calls (such as changing from ring 3 to ring 0) in order to keep the VM contained. Such instructions will incur a significant performance penalty due to the extra overhead of intercepting and emulating them. While this is normally acceptable, as such cases are the minority of instructions, the performance discrepancy may be used by the binary to determine that it is running under emulation, and thus alter its behavior. Similarly, if the whole system is being emulated equally "slowly", malware could request information about the CPU, network card, and other hardware to determine whether the time to execute is abnormally slow for the given hardware or inferred hardware age. Even beyond timing attacks, the numerous possible discrepancies between real and emulated hardware have led many to consider the task of creating a virtual machine undetectable by malware effectively impossible (Garfinkel et al., 2007). One avenue of research to circumvent this problem is to force binaries to follow some path of execution (Peng et al., 2014; Brumley et al., 2007). Such approaches successfully avoid the issue of allowing malware to determine its own behavior, at the cost of not necessarily knowing which execution path to take to observe desired behavior. That is to say, we do not know which execution path and sequence will exhibit the malicious behavior we wish to detect. Even if we ignore looping constructs and backwards branches, if a binary has b conditional branches (e.g., if-else statements) in it, there may be up to 2^b different possible execution paths to take. Some heuristics to select execution paths must be applied, and this may be difficult given the unusual behavior of malicious binaries. For example, one may heuristically switch execution paths if one path causes illegal behavior or results in an interrupt from the OS. However, such behavior may be intentional, in causing side effects or triggering a bug that the malware intends to exploit. Even given the pros and cons between execution in a virtual environment and forced execution, both approaches share a common issue in application. Behavior of malware may depend on the user environment in a non-trivial way. A trivial case would be malware behavior dependent on a bug specific to an OS version, such as Windows XP over Windows Vista. It has been found that malware may depend on specific applications being installed and running at the same time as the malware, and on the interactions between programs in regular use (Rossow et al., 2012).
Such scenarios are not general or easily covered in experimental testing, and can cause a large discrepancy between the lab and deployments to real users. Such cases may easily cause a machine learning model to stop generalizing, or to miss certain subsets of malware in practice. Another environmental factor in dynamic analysis is Internet traffic and connectivity. Allowing unconstrained Internet access to running malware is risky at best, and opens ethical concerns in allowing malware under examination to infect and attack other machines. Yet disconnecting Internet access entirely may dramatically alter the behavior of malware, not including the possibility of malware updating itself or downloading new functionality. Maximizing the containment of malware while allowing Internet access can require extensive design and engineering effort (Kreibich et al., 2011). A further complication exists in experiment reproducibility, as the servers malware connects to may change or go offline over short periods of time. When these servers do not respond, or even if they do, the malware's behavior may change or cease altogether. This makes dynamic analysis of older malware difficult, as these servers are unlikely to return (Rafique and Caballero, 2013). The issue of reproducibility and infection can be partially addressed by network emulation, in which the host environment running the malware intercepts and alters network traffic, and potentially provides fake responses, in order to let the malware run as if it had Internet connectivity while keeping it isolated (Graziano et al., 2012). These issues are significant impediments to using network traffic as a reliable feature, and only further complicate dynamic analysis. A newer approach to help make dynamic feature extraction more reproducible is to design special VM recording techniques, which save all of the non-deterministic events so that a VM can be replayed at a later point in time (Severi et al., 2018). While powerful and storage efficient, if the malware successfully detects the VM at first run and alters its behavior (or fails to run properly for other reasons), the replay will always reflect this failure.

3.3 Static Analysis Features

By its very nature, static analysis greatly reduces the scope of feature options to consider for classification. One common choice is to use the raw bytes themselves as features (Raff et al., 2016; Kolter and Maloof, 2006; Stolfo et al., 2007). A subset of the raw-byte approach is simply to search for and extract what appear to be ASCII strings (Islam et al., 2010). This approach assumes the least amount of knowledge and is widely applied to other file types because of its ease of application. Another approach is to instead compute a windowed entropy over the raw bytes, mapping each file to an entropy sequence (Han et al., 2015; Baysa et al., 2013; Sorokin, 2011). Regardless of how they are processed, these approaches have an attractive simplicity, at the cost of ignoring relevant properties. For example, while the raw bytes may be processed as one long linear sequence, the locality within a binary is non-linear. Different portions will relate to others through pointers in the storage format, as well as various local and long jumps in the assembly. It is also common to build histograms from this information to reduce it to a fixed-length format (Saxe and Berlin, 2015).
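The windowed-entropy feature can be sketched in a few lines (a minimal version of our own; the window and stride values are arbitrary choices):

    import math
    from collections import Counter

    def entropy_sequence(data: bytes, window=256, stride=128):
        # Map a file's raw bytes to a sequence of Shannon entropies
        # (bits per byte), one value per sliding window.
        if not data:
            return []
        seq = []
        for start in range(0, max(len(data) - window, 0) + 1, stride):
            chunk = data[start:start + window]
            n = len(chunk)
            seq.append(-sum(c / n * math.log2(c / n)
                            for c in Counter(chunk).values()))
        return seq  # can later be histogrammed into a fixed-length vector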
Using more domain knowledge, it is also popular to parse the PE header (Mic, 2013) for relevant information, extracting the fields and imports and encoding them as numeric and categorical features (Shafiq et al., 2009; Raff et al., 2017). Being able to process the PE header is also important for finding and disassembling the binary code, which is one of the more popular feature types to use (Santos et al., 2010; Moskovitch et al., 2008) in static analysis. As mentioned in subsection 3.1, assembly sequences can be used in dynamic analysis as well. The difference then becomes what assembly sequences appear in the file and the overall structure, versus the sequence of instructions actually run (Damodaran et al., 2015). In each case one may observe sequences not seen by the other. The dynamic version may not run all of the code present, and the static version may not find obfuscated instructions. Tools for PE-header extraction and disassembly via static analysis are readily available, and provided by many open-source projects. For the PE header, there are projects like PortEx (Hahn, 2014) and pefile [1], and this functionality is even built into the Go language runtime [2]. For disassembly, relevant projects include Capstone [3], Xori [4], Distorm [5], BeaEngine [6], and others. Many different disassemblers have become available in part because disassembling a binary is non-trivial, especially when malware may create obscure and obfuscated byte code that attempts to thwart disassembly. Each of the many options available has different pros and cons in terms of run-time, accuracy, supported architectures, and other issues.

[1] https://github.com/erocarrera/pefile
[2] https://golang.org/pkg/debug/pe/
[3] http://www.capstone-engine.org/
[4] https://github.com/endgameinc/xori
[5] https://github.com/gdabah/distorm
[6] https://github.com/BeaEngine/beaengine

Once a binary is successfully disassembled (which requires the PE header), it is also possible to resolve API function calls from the assembly using the Import Address Table (IAT) (Ferrand and Filiol, 2016). The IAT stores the functions the binary wishes to load, as well as the virtual address at which each function will be stored. Then any jump or call instruction's arguments can be converted to the canonical target function. This allows us to use the imported functions and APIs not only as features in a fixed-length feature vector (function present / absent), but also as a sequence or graph of API call order. Finally, the most knowledge-intensive and time-consuming option is to consult malware analysts on what information to look for, and attempt to automatically extract said information (Dube et al., 2012). Such approaches may obtain a distinct advantage from expert opinion, but will require additional work to update due to concept drift as malware authors adjust their code to avoid detection.
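As one example, the pefile project listed above can pull out header fields and imported API names in a few lines (a sketch; the chosen fields and the sample path are ours):

    import pefile  # [1] above

    def static_features(path):
        pe = pefile.PE(path)
        # A few numeric/categorical header fields, as used in header-based models:
        header = {
            "num_sections": pe.FILE_HEADER.NumberOfSections,
            "timestamp": pe.FILE_HEADER.TimeDateStamp,
            "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
            "size_of_image": pe.OPTIONAL_HEADER.SizeOfImage,
        }
        # Imported API names, usable as present/absent features:
        imports = [imp.name.decode()
                   for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])
                   for imp in entry.imports
                   if imp.name is not None]  # ordinal-only imports have no name
        return header, imports

    header, imports = static_features("sample.exe")  # hypothetical file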
3.4 Static Analysis Challenges

While the feature extraction process is often simpler for static analysis, it exhibits its own set of problems that must be dealt with. Notably, the contents and intent of a binary are often obfuscated, with the first line of obfuscation being the use of packing (Wei et al., 2008). Packing wraps the original binary content inside a new version of the binary, often storing the original version with some form of compression or encryption. Upon execution, the packed version of the binary extracts the original version and then executes it. Packing may be applied recursively multiple times, with different types of packers each time, to maximize the effort needed to extract its contents. This technique has been widely used in malware, and among well-meaning software developers. Packing is often employed as an attempt to thwart reverse engineering by a competitor, avoid or delay the prevalence of "cracked" versions of commercial software, or just to reduce the file size for transfer (Guo et al., 2008). There are attempts to encourage the authors of packing software to cooperate with AV vendors to add information that would reduce the magnitude of this problem (Singh and Lakhotia, 2011), incorporating "tags" that would make it easier to determine whether a packed binary is safe and where it came from. Currently, it remains the case that it is not sufficient to simply detect packing and infer maliciousness. The development of automated unpacking tools is an active area of research (Martignoni et al., 2007; Royal et al., 2006); however, it generally requires some level of emulation of the binary. This brings back many of the issues discussed in subsection 3.2 with performing dynamic analysis. Though there has been some work in static unpacking (Coogan et al., 2009), the dynamic approach to this problem has been the preferred method in most works. Packing is often considered a "catch all" that thwarts all static analysis and always increases the entropy of the original file. Recent work by Aghakhani et al. (2020) has challenged many of these "known" assumptions about packing. In a large study they have shown it is possible for machine learning based models to make benign vs. malicious distinctions even when contents are packed, provided the packers are not too novel and the training distribution of packers is properly accounted for. They also showed that many packers lower the entropy of the resulting file. A particularly counter-intuitive result from this work is the utility of byte n-grams as features. This feature type will be discussed more in section 4, but many assumed packing would invalidate such byte-based processing from being useful. There exist other types of obfuscation as well within the binary code itself, including a host of possible obfuscations done by packers (Roundy and Miller, 2013). Some simpler forms of obfuscation may include the use of extra instructions that don't change the result of a program, executing instructions generated at run-time (separate from unpacking), and unnecessary or unreachable code (Christodorescu et al., 2005, 2007). There also exist other sophisticated obfuscation techniques that are widely used, but from which information can be extracted with some effort. Polymorphic malware alters itself each time it propagates, creating numerous different versions that are all functionally equivalent while obfuscating the entry point or decryptor of a binary (Newsome et al., 2005). Metamorphic malware goes further, potentially altering all of the binary code as it propagates (Konstantinou, 2008). Some malware even implements its own virtual machine for a custom instruction set in which the malicious code is written (Sharif et al., 2009). Analysis can get particularly difficult when multiple forms of obfuscation are used, since none of them are mutually exclusive.
There have been attempts to develop fully generic deobfuscation systems that do not rely on knowing in advance which obfuscation technique is being used, but such attempts have not yet been fully successful (Yadegari et al., 2015). Granted, a competent malware analyst can reverse engineer many if not most obfuscated files with the right tools and sufficient time (Schrittwieser et al., 2016), but such efforts are expensive and do not scale. It has been shown that deobfuscating malware improves the recall of signature-based approaches (Christodorescu et al., 2007). The presence of obfuscation may be a malware indicator in its own right, and such a feature could be useful in building a machine learning model. Hence, it is not clear that deobfuscation should be attempted in each and every case, and arguments could be made either way. This question deserves further study.

3.5 Contextual Features

A third type of features are what we will call contextual features. These are features that are not properties of the malicious binary itself, but come from the context of how the malware may exist or be distributed. The use of contextual features is less common in research, but has been reported to be highly successful in practice. Such systems are generally graph-based in their approach. For example, Chau et al. (2011) used information about the "reputation" of the machines at which an executable file was found to make a determination about maliciousness, without looking at the content of the file itself. Others have followed this same strategy, and attempt to more precisely define the relations between files to improve results (Tamersoy et al., 2014), and to merge both relations and file-dependent features (Ye et al., 2011). Beyond measuring the reputation of machines, the reputation of the domain name or IP address from which a file was downloaded can also be used to classify the downloaded binary as malicious if the source address has low reputation. This, as well as counter-measures, was discussed by Rajab et al. (2011). Others have created more elaborate graphs based on how and from where the file was downloaded, including the benign applications (e.g., Internet Explorer) that are also involved in the process (Kwon et al., 2015). In a similar theme, Karampatziakis et al. (2012) looked at making classifications for files that are found in the same container (e.g., a zip or rar file). This approach is based on the hypothesis that if any file found in a container is malicious, all are more likely to be malicious. A similar approach has recently been proposed to leverage the file name of the malware itself, rather than its contents, to predict maliciousness (Nguyen et al., 2019; Kyadige and Rudd, 2019). While not sufficient on its own, it may be useful in conjunction with other features (Kyadige and Rudd, 2019) or in investigative/prioritization situations where the whole file may not be available (e.g., the file path was stored in a log, but the file itself has since been deleted) (Nguyen et al., 2019).

3.6 Contextual Challenges

The contextual information we have discussed can include a mix of both static (which files rest on the same system) and dynamic sources (reputation of IP addresses and download path). As such it is not a third type of information, but the nature of the contextual information being outside of the executable makes it intrinsically different from the others we have discussed.
The biggest impediment to using a contextual approach so far appears to be access to the contextual information itself. All of the works we have discussed make use of data owned by private companies, measured in the millions of files, which cannot be made generally available to all researchers. This makes the reproducibility and comparison issues discussed in section 2 especially pertinent. Similar to the issues discussed in subsection 3.2, the contextual information is sensitive to time. Unless recorded for posterity, it will not be possible to perform a historical study of how contextual features would have performed. This applies to both the static and dynamic sources of contextual information.

4 Machine Learning Methods

Most machine learning methods for classification work on fixed-length feature vectors, i.e., x ∈ R^D, where D is the number of features used. This is a natural representation of information for many domains; however, it is a general mismatch for the case of malware classification. With the exception of features from the PE header and some expert knowledge features, almost every feature choice discussed in section 3 is sequential in nature. This leaves us with two choices, both less than ideal: make some simplifications to the problem so that we obtain fixed-length feature vectors, or restrict ourselves to the more limited set of models that support classification of sequences. Below we discuss the primary method by which fixed-length features are constructed, and the many algorithms that have been used for both fixed-length vector and sequence-based classification. Other methods that more directly tackle the true nature of our feature choices will be discussed in section 5 and section 6. A natural question to ask is which of the learning approaches and feature combinations work best. Unfortunately this question cannot be easily answered due to the data issues discussed in section 2. For the case of malware detection, many results are likely overestimated, and the lack of any common benchmark dataset further hinders any attempts to compare and contrast results. When distinguishing between malware families, the VX-Heaven corpus provides a shared benchmark, but it is a sub-optimal barometer. Not only is the corpus outdated to the point that it does not reflect modern malware, but no particular effort is made to balance the number of samples from each malware family. That is to say, both the types of malware and the individual malware families are not evenly distributed. This makes interpretation of results more difficult, especially as many works sub-sample the corpus in different ways to remove families or malware types with fewer samples. Given these difficulties, for each learning algorithm we discuss the feature types and scenarios where they seem to perform well, and the situations where they perform worse or where we believe their utility has been over-estimated. In addition, we give background on relevant extensions and advancements in the machine learning literature that could be relevant to future progress in the field of malware classification, but have not yet been explored.

4.1 N-Grams

The first item we discuss is not a learning algorithm, but a method of constructing features. N-grams are a "bread and butter" tool for creating feature vectors from sequence information, though they capture very little sequential information (and hence are included in this section).
Despite this, n-grams have been widely used in malware classification, starting with the work of Abou-Assaleh et al. (2004), which connected the methods being used with those in the domain of Natural Language Processing (NLP). Since then, n-grams have been one of the most popular feature processing methods for malware classification, and have been used for processing bytes, assembly, and API calls (Dahl et al., 2013) into bag-of-words type models. To give a more concrete example of this process, the byte sequence 0xDEADBEEF would have the 2-grams DEAD, ADBE, and BEEF. At training time all possible 2-grams would be counted, and each 2-gram found would map to an index in a high-dimensional feature vector. The feature vector for 0xDEADBEEF would have 3 non-zero values, the specific values determined by some feature weighting scheme such as TF-IDF or Okapi (Robertson and Walker, 1994), though a binary present/absent value is popular as well. There exists a particular desire to use larger values of n for malware classification due to the limited semantic meaning contained within only 6 or so bytes or instructions. To give this context, a 6-byte-gram is not large enough to capture a whole instruction 2.4% of the time (Ibrahim et al., 2010). This is due to the variable-length encoding of the x86 instruction set, where a valid x86 instruction can be up to 15 bytes in length. Similarly, an assembly 6-gram is often not sufficient to cover the behavior of a larger function. A simple function can compile to dozens of instructions, let alone more complicated functions which may easily be hundreds to thousands of instructions in length. While large values of n are desirable, they are also computationally demanding. As n increases, the number of possible n-grams grows exponentially. Counting and tracking these is itself expensive, and feature selection is required before deploying. As such, the use of n > 6 has historically been rare. Some work has been done to speed up the collection of n-grams by approximately selecting the top-k most frequent n-grams as an initial feature selection process (Raff and Nicholas, 2018). This is based on the observation that n-grams tend to follow a power-law distribution, and that useful predictive features tend to have a minimum frequency (Luhn, 1958). Later work developed this into a probabilistic algorithm for selecting the top-k n-grams in a faster manner with fixed memory cost, testing values of n = 8192 (Raff et al., 2019a). This study found that n ≥ 64 was surprisingly useful, with the benefit that a malware analyst could reverse engineer the meaning of a large n-gram to better understand what the model had learned. Their work showed that predictive performance was maximized around n = 8, and that n-gram features had a surprisingly long shelf life, still being effective at detecting benign/malicious software up to 3 years newer than the training data.
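A minimal byte n-gram featurizer looks as follows (our own illustrative code; real systems insert frequency-based feature selection between counting and vectorizing):

    from collections import Counter

    def byte_ngrams(data: bytes, n=2):
        # Count all overlapping byte n-grams in a file's contents.
        return Counter(data[i:i + n] for i in range(len(data) - n + 1))

    # 0xDEADBEEF -> 2-grams DEAD, ADBE, BEEF:
    counts = byte_ngrams(bytes.fromhex("DEADBEEF"))

    def to_vector(counts, vocab):
        # Binary present/absent vector over a chosen vocabulary, e.g. the
        # top-k most frequent n-grams observed in the training corpus.
        return [int(g in counts) for g in vocab]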
4.1.1 N-Gram Coalescing

To help mitigate the computational issues with n-grams while retaining as much information as possible, approaches analogous to word stemming have been applied for both byte and assembly n-grams. In NLP, stemming attempts to coalesce words with similar semantic meaning into one base form (e.g., "running", "ran", and "runner" all get mapped to "run"). This coalescing may lose important nuance, but can also provide the benefit of reducing to a more powerful subset of features. For x86 assembly, the likelihood of seeing an exact match for most instructions and their operand values together is quite low. This results in extra features and may fail to match instructions that are essentially the same. The simplest workaround to this problem is to map each line of assembly to just the instruction being executed (Dolev and Tzachar, 2008; Moskovitch et al., 2008), e.g., mov eax, 4 is mapped to just mov. Shabtai et al. (2012) argued in favor of this approach, noting that the main "engine" or component of malware could be easily re-located between different versions of a file. This would change the relative offsets, and thus the operands, causing the same code to no longer match. By removing the operands completely this issue is resolved, at the cost of specificity. It is then up to the learning method, empowered by appropriately sized assembly n-grams, to learn to detect these patterns. Another alternative was proposed by Masud et al. (2008), which balances the extremes of removing the operands of the assembly and keeping them in their entirety. They noted that an instruction will have some number of parameters, and each parameter could be coalesced into a location type, either memory, register, or constant, corresponding to where the value used came from: an access to memory, directly from a register, or the immediate value from the call to an instruction. For example, the instruction mov eax, 4 would be coalesced to mov.register.constant and mov [eax], 4 to mov.memory.constant. We note that in this form it does not matter that a register was used in the first parameter; it is the fact that the operand value came from a memory access that determined the type. Reducing the representation space via coalescing is intuitive and attractive, but it can also obscure important information depending on the task. The instruction name itself, such as cmp, is in fact already performing some level of coalescing. This is because while the assembler accepts one "cmp" instruction, this instruction will be converted to one of nine different opcodes when assembled. Zak et al. (2017) found that "disambiguating" the specific opcode an instruction was compiled down to improved the predictive performance of assembly n-grams using both of the aforementioned forms of operand coalescing. This was only for static analysis, however, and results may differ when instructions are extracted in a dynamic manner. For bytes and assembly, n-perms have been proposed as an alternative scheme (Karim et al., 2005; Walenstein et al., 2007), particularly for clustering malware. An n-perm represents an n-gram and all possible permutations of that n-gram. An equivalent re-phrasing is: map every n-gram to a canonical n-perm based on the contents of the n-gram, ignoring their order (e.g., ACB and BCA would both map to ABC). This conflation dramatically reduces the number of features created as n increases, allowing the consideration of larger values of n. This same notion has been re-developed for assembly as well (Dai et al., 2009), as a way to circumvent metamorphic malware, which will re-order instructions and add superfluous instructions as a form of obfuscation.
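The coalescing schemes above are easy to sketch (a toy tokenizer of our own; a real pipeline would take opcodes and operands from a disassembler such as Capstone):

    def coalesce(instruction):
        # 'mov eax, 4' -> 'mov.register.constant' (Masud et al., 2008 style).
        parts = instruction.replace(",", " ").split()
        opcode, operands = parts[0], parts[1:]

        def location(tok):
            if tok.startswith("["):                       # e.g. [eax]
                return "memory"
            if tok.lstrip("-").isdigit() or tok.lower().startswith("0x"):
                return "constant"
            return "register"

        return ".".join([opcode] + [location(t) for t in operands])

    coalesce("mov eax, 4")    # -> 'mov.register.constant'
    coalesce("mov [eax], 4")  # -> 'mov.memory.constant'

    def n_perm(gram):
        # Canonical order-insensitive form of an n-gram (an n-perm).
        return tuple(sorted(gram))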
4.2 Linear Models

One of the simplest and most effective classes of machine learning methods is linear models, the general objective function for which is given in (1). In it we have N data-points x_i, a label y_i for each data-point, and a weight vector w that defines the solution. The value that w takes depends on the loss function ℓ, the regularizing function R(w), and its weight λ. The basic goal is to assign each of the D features an importance weight w_j, positive or negative depending on which class that feature indicates. Given a positive and a negative class (malicious and benign), we obtain a classification decision by examining the sign of the dot product between the weight vector w and a data point x, i.e., sign(w^T x). Despite their simplicity, linear models can be highly effective, especially when dealing with high-dimensional data sets, where more sophisticated non-linear models may offer only minimal improvement in accuracy (Chang et al., 2010; Yuan et al., 2012).

\sum_{i=1}^{N} \ell(w^\top x_i, y_i) + \lambda R(w)    (1)

For the loss function ℓ, the two most common choices are the Logistic loss (2a) and the Hinge loss (2b). The Logistic loss corresponds to performing Logistic Regression, and the Hinge loss corresponds to using a Support Vector Machine (SVM) (Cortes and Vapnik, 1995). As presented below, the value y indicates the true label for a data-point, and the value s is the raw score for that data-point, i.e., the dot product between the weight vector w and the feature vector x.

\ell(s, y) = \begin{cases} \log(1 + \exp(-y \cdot s)) & \text{Logistic} & (2a) \\ \max(0, 1 - y \cdot s) & \text{Hinge} & (2b) \end{cases}

When training a linear model, the choice of ℓ does not have a significant impact on accuracy or training time. The choice of the regularizing function R(w), and the amount of regularization we apply, λ, have much more impact on model performance. For R(w) the L2 norm (R(w) = ½‖w‖²₂) [7] is the most common, and a search over penalties λ is done to find the value that best prevents overfitting to the training data. By increasing the value of λ we increase the penalty for model complexity, and encourage w to approach the zero vector. The other common choice of regularizer, the L1 norm (R(w) = ‖w‖₁), is also a potentially useful choice, especially when dealing with the high-dimensional data that can result from the use of n-grams. This is often called Lasso regularization (Tibshirani, 1994) and will result in exact zeros occurring in the weight vector w, meaning it performs its own feature selection as part of training. When a hard zero is assigned as a coefficient, the associated feature has no possible impact on the model, and can be removed. Lasso regularization also comes with theoretical and practical robustness to extraneous and noisy features (Ng, 2004), where a model trained with L1 regularization will perform better than one trained with L2 regularization as more and more unrelated features are added. This makes it an excellent fit for n-gram based feature vectors, which can quickly reach D > 1 million, and it has been successfully applied to byte n-grams to improve accuracy and interpretability (Raff et al., 2016). The L1 norm does have some weaknesses: it is a biased estimator and can reduce accuracy under certain situations. But L1 can be combined with the L2 norm to form what is known as Elastic-Net regularization (R(w) = ½‖w‖₁ + ¼‖w‖²₂) (Zou and Hastie, 2005). The Elastic-Net often provides the best of both worlds, resulting in models with higher accuracy while retaining the sparsity benefits of Lasso. The simplicity of linear models provides the practical benefit of many tools being publicly available and able to scale to large datasets.

[7] The ½ term is included because it makes the math slightly more convenient when deriving the update. Otherwise it is of no significance, and is sometimes rolled into the value of λ.
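In scikit-learn, for example, an Elastic-Net-regularized logistic model over hashed n-gram counts takes only a few lines (a sketch with toy data; the hyper-parameters are arbitrary, and the loss is named "log" in older scikit-learn releases):

    from sklearn.feature_extraction import FeatureHasher
    from sklearn.linear_model import SGDClassifier

    # One dict of n-gram counts per binary, hashed to a fixed dimension.
    hasher = FeatureHasher(n_features=2**20, input_type="dict")
    X = hasher.transform([{"DEAD": 1, "ADBE": 1}, {"BEEF": 2}])  # toy corpus
    y = [1, 0]                                                   # 1 = malicious

    clf = SGDClassifier(loss="log_loss",        # logistic loss, eq. (2a)
                        penalty="elasticnet",   # mix of L1 and L2, as above
                        l1_ratio=0.5, alpha=1e-5)
    clf.fit(X, y)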
The popular LIBLINEAR library supports both L1- and L2-regularized models for both the Hinge and Logistic loss functions (Fan et al., 2008). LIBLINEAR uses exact solvers specially designed for each combination of loss ℓ and regularizer R(w). Similarly, the library Vowpal Wabbit (Langford et al., 2007) implements the same algorithms using online methods, meaning it trains on one data-point at a time and can stream the data from disk. This allows Vowpal Wabbit to train faster and scale to terabyte-size corpora. While the online training approach may sometimes result in lower accuracy than the approaches used in LIBLINEAR, the difference is usually minimal (if there is a noticeable difference at all). Linear models are also attractive because they are fast to apply, making them realistic for the real-time goals of AV systems.

4.3 Kernel Methods

Kernel methods are an extension of the linear methods discussed in subsection 4.2. Most commonly used with Support Vector Machines (SVMs) via a kernel trick K, the objective function is given in (3). We note that for SVMs the regularization penalty is usually expressed with the term C, where larger values indicate less regularization. These forms are equivalent where C = 1/(2λN). The kernel trick is used to project the original data-set into a different space. A linear solution is found in this new space, which may be non-linear in the original space.

\sum_{i=1}^{N} \max(0, 1 - y_i K(w, x_i)) + \lambda \|w\|_2^2    (3)

A valid kernel K represents the inner product in this new feature space, but does not require us to explicitly form that space [8]. This allows us to obtain classifiers that are non-linear in the original feature space (and thus potentially achieve a higher accuracy). We can always pick the linear kernel (4a), which results in a linear model. Practically, two of the more common choices are the polynomial (4b) and Radial Basis Function (RBF) (4c) kernels. The polynomial kernel is particularly helpful to illustrate the intuition behind the kernel trick, as we can easily compute (α + β)^{10} with two operations, an addition and an exponentiation. This computes the value of the inner product in the polynomial space directly, while avoiding actually expanding the polynomial. If we were to explicitly form the feature space first by expanding the polynomial, we would end up performing 36 exponentiations, 10 additions, and 38 multiplications.

K(a, b) = \begin{cases} a^\top b & \text{Linear} & (4a) \\ (a^\top b + c)^p & \text{Polynomial} & (4b) \\ \exp(-\gamma \|a - b\|^2) & \text{RBF} & (4c) \end{cases}

The price for this flexibility is generally computational, as solving the kernelized version can take O(N^3) time and O(N^2) memory. On top of that, a parameter search must be done for the values (such as γ) used in the kernel. This is in addition to the regularization penalty C. Most malware data-sets being used are on the order of 40,000 samples or less, which is still in the range of available tools like LIBSVM (Chang and Lin, 2011). More advanced techniques that do not attempt to obtain the exact solution also exist, extending the use of kernel methods to larger data-sets (Engel et al., 2004; Hsieh et al., 2014). One of the challenges with the malware domain is the multiplicity of feature options and potential representations.
For most machine learning techniques it is necessary to reduce these down to a single feature vector of fixed length for each data-point. This often results in an over-simplification for the malware domain. The use of more sophisticated kernels to alleviate this problem is an as-yet unexplored possibility. For example, one challenge is that the PE format specifies many different section types, the most common being sections for imports, exports, binary code, and data. However, any of these section types may occur in a binary with any multiplicity [9] (e.g., one could have five different executable sections). The standard approach, if differentiating between sections, is to operate as if all instances of a section type were part of one section. Instead, one could use a kernel that matches sets of feature vectors (Grauman and Darrell, 2005; Bo and Sminchisescu, 2009), allowing the model to learn from these directly. Kernels can also be defined directly over strings (Lodhi et al., 2002; Leslie et al., 2002), which could be useful for comparing the function names defined within an executable or for handling unusual data content, such as URLs that can be found within malware (Raff et al., 2016). To handle the function graphs that may be generated from dynamic analysis, kernels over graphs may also be defined (Vishwanathan et al., 2010; Neuhaus and Bunke, 2007), and these have seen some small amount of use for malware classification (Anderson et al., 2011). Furthermore, the composition of multiple kernels via additions and multiplications also forms a new and valid kernel. This would provide a direct method to incorporate multiple modalities of information into one classifier. For example, we could combine a kernel over graphs of API call sequences, a linear kernel for assembly n-gram features, and a kernel over strings found in the file into one larger kernel. However, these options with kernels are largely unexplored for the malware classification problem.

[8] The kernel trick is usually more formally explained via a Reproducing kernel Hilbert space (RKHS). We avoid this to reduce the mathematical background needed for this review.
[9] Up to a hard limit on the number of sections specified by the PE format.
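As a toy illustration of kernel composition, scikit-learn's precomputed-kernel interface lets one train an SVM on a sum of base kernels (random stand-in data; in practice each term might come from a different feature modality):

    import numpy as np
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = rng.random((40, 16)), rng.integers(0, 2, 40)  # toy data

    # A sum of two valid kernels is itself a valid kernel (eqs. 4a and 4c).
    K_train = linear_kernel(X) + rbf_kernel(X, gamma=0.5)
    clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y)

    # At test time the kernel is evaluated between test and training points.
    X_test = rng.random((5, 16))
    K_test = linear_kernel(X_test, X) + rbf_kernel(X_test, X, gamma=0.5)
    predictions = clf.predict(K_test)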
4.4 Decision Trees

Methods based on decision trees have been popular in machine learning, and a number of variants and extensions to them exist. The two most popular base decision tree algorithms are C4.5 (Quinlan, 1993) and CART (Breiman et al., 1984). A number of desirable properties have resulted in their widespread use among many domains, making them some of the most widely used algorithms in general (Wu et al., 2007). In particular, decision trees are able to handle both categorical and numeric features simultaneously, are invariant to shifts or re-scaling of numeric features, can handle missing values at training and test time, are fast to apply at test time, and often obtain competitive accuracies while requiring minimal parameter tuning. All of these properties can be highly relevant to malware classification, where a mix of numerical and categorical features may be common, and there are often real-time requirements for deployment on user machines. Missing values can be a common issue as well, as obfuscations performed by the malware may prevent successful extraction of a given feature.

For these reasons many researchers have used decision-tree-based methods for malware classification (Perdisci et al., 2008; Dube et al., 2012; Anderson and Roth, 2018). They are easy to apply, provided as a standard tool in most machine learning libraries across several languages (Pedregosa et al., 2011; Hall et al., 2009; Gashler, 2011; Meng et al., 2016; Raff, 2017; Bifet et al., 2010), and supported by many stand-alone tools dedicated to more powerful extensions (Chen and Guestrin, 2016; Wright and Ziegler, 2015; Ke et al., 2017). Kolter and Maloof (2006) used boosted decision trees in their seminal byte n-gramming paper. Boosting is one of many ensemble methods that work to improve the accuracy of decision trees by intelligently creating a collection of multiple trees, where each tree specializes to a different subset of the data. While they chose the AdaBoost algorithm because it performed best on their data, they were also able to utilize the interpretability of decision trees to gain insights into their model. An example of how one would be able to read a decision tree is given in Figure 1.

Figure 1. A hypothetical decision tree, splitting on whether the certificate table's size is ≥ 2·10⁴ bytes, whether the file size is ≥ 10⁶, whether the file is a DLL, and whether it is 64-bit, with each leaf labeling a file as malware or benign.

Raff et al. (2017) used the Random Forests (Breiman, 2001) and Extra Random Trees (Geurts et al., 2006) ensembles to naturally handle the many disparate value types found within the PE header. Values from the header can be binary variables, multi-label variables, and numeric values with varying sizes and scales. For example, some values in the header give the byte offset to another part of the binary, which could be anywhere from a few hundred to millions of bytes away. Most algorithms would have difficulty learning from this value range, and it can be difficult to normalize effectively. They also exploited tree-based approaches to obtain ranked feature importance scores (Breiman, 2003; Louppe et al., 2013), another method by which one can glean information about what a decision tree has learned. Some have worked on developing enhanced ensembling methods for decision trees to improve malware classification accuracy (Menahem et al., 2009). Even when there is no reason to use a tree-based approach in particular, many malware classification works still include them as one of many models to try (Elovici et al., 2007; Alshahwan et al., 2015; Masud et al., 2008; Menahem et al., 2009; Moskovitch et al., 2009a; Anderson and Roth, 2018).

The widespread success of decision trees has made them a valuable tool inside and outside the domain of malware classification. This has led to a large literature of decision tree techniques to tackle various problems, many of which may be applicable to malware classification. For example, the popular AdaBoost algorithm often overfits in the presence of significant labeling errors. An extension known as Modest AdaBoost is more robust to this issue, and may lead to improvements in generalization (Vezhnevets and Vezhnevets, 2005). Another possibility is to use decision trees to deal with concept drift. While malware datasets with date-first-seen labels are not publicly available (such information can be obtained from the paid-for API of VirusTotal), there already exists a literature of decision tree methods designed to work with changing data streams (Hulten et al., 2001). This also relates to how the accuracy of a malware classification system should be evaluated, which we will discuss further in section 7.
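The sketch below illustrates the properties discussed above with scikit-learn: a tree ensemble trained on hypothetical PE-header-style features of wildly different scales, with ranked feature importances read off afterwards. The feature names and labels are placeholders, not a real PE parser.

```python
# Minimal sketch: a Random Forest on hypothetical mixed-scale PE-header
# features, needing no normalization, with ranked feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["is_dll", "is_64bit", "file_size", "cert_table_size"]
X = np.column_stack([
    rng.integers(0, 2, 500),            # is_dll (binary flag)
    rng.integers(0, 2, 500),            # is_64bit (binary flag)
    rng.integers(10**3, 10**7, 500),    # file_size (wide numeric range)
    rng.integers(0, 10**5, 500),        # cert_table_size (bytes)
])
y = (X[:, 3] > 2e4).astype(int)         # placeholder labels

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in sorted(zip(feature_names, forest.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")         # ranked feature importance scores
```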
4.5 Neural Networks

Neural networks have seen a recent resurgence in the machine learning community. Though older literature often referred to the technique for classification as Multi-Layer Perceptrons, newer work has placed an emphasis on the depth of trained networks and is often referred to as Deep Learning. We will provide a brief overview of neural networks, and refer the reader to Goodfellow et al. (2016) for a more thorough introduction to modern architectures, activations, and training methods.

Neural networks get their name from their original inspiration in mimicking the connections of neurons in the brain (though the interpretation is often taken too literally). A neuron is connected to multiple real-valued inputs by a set of synaptic weights, corresponding to real-valued multipliers. An activation function f(x) is used to produce the output of the neuron, where the input is the weighted sum of every input connected to that neuron. Generally the initial features fed into a network are called the input layer, and a set of neurons that produce our desired outputs form the output layer, from which a loss is derived (such as cross-entropy, or mean squared error). In between is some number of hidden layers, where we specify the number of neurons, connectivity, and activations for each layer. The classic approach is to connect every neuron in one layer to every neuron in the preceding layer to form a fully connected network. A diagram of this arrangement is presented in Figure 2.

Figure 2. Diagram of a simple neural network: green nodes are input features, yellow nodes are bias variables, blue nodes are hidden layers, and red nodes are the output layer.

The weights for such a network are learned through an algorithm known as back-propagation (Rumelhart et al., 1986), which performs gradient descent on the function created by the neuron graph. The view of neural networks as a large function graph has become increasingly popular, and allows for fast development using Automatic Differentiation. The user specifies the functions used by the network, and the software automatically computes the gradients with respect to any weight in the network. This has helped to fuel the resurging neural network literature and is a feature supported by many deep learning frameworks (Tokui et al., 2015; Chollet, 2015; Abadi et al., 2016).

The fundamental strategy enabled by such an approach is that the user should avoid feature engineering, and instead alter the network architecture to the needs of the problem. This works because neural networks have been found to learn their own complex feature hierarchies and representations from raw data alone (Ranzato et al., 2012; Goodfellow et al., 2014). This ability has allowed neural networks to become the state of the art in both speech processing (Graves et al., 2006) and image classification (Krizhevsky et al., 2012), significantly outperforming prior approaches.
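A minimal sketch of the fully connected architecture and back-propagation loop described above is given below, written in PyTorch (one of several equivalent autodiff frameworks; the layer sizes and data are placeholder assumptions).

```python
# Minimal sketch of a fully connected network trained by back-propagation.
import torch
import torch.nn as nn

model = nn.Sequential(             # two hidden layers, as in Figure 2
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),              # single logit: benign vs. malicious
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(64, 16)                       # placeholder feature vectors
y = torch.randint(0, 2, (64, 1)).float()      # placeholder labels
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                # autodiff computes all weight gradients
    opt.step()
```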
While many works in malware classification have made use of neural networks (Firdausi et al., 2010; Perdisci et al., 2008; Liangboonprakong and Sornil, 2013; Hardy et al., 2016), they are often based on dated approaches to neural network construction. Advances in optimization algorithms (gradient descent), activation functions, architecture design, and regularization have dramatically changed the "best practices" of neural networks while also improving their performance.

One of the first effective applications of a modern neural network style was by Saxe and Berlin (2015), who used a private corpus to obtain reasonable accuracies. Their work performed the feature engineering manually by processing a combination of entropy, string, and PE header features. While their model performed well, it was not compared with any other machine learning algorithms, making it difficult to determine the benefits of neural networks in this particular application. The work of Saxe and Berlin (2015) is also notable for its model evaluation, which we will discuss further in section 7.

Raff et al. (2017) also looked at using a neural network, but instead presented it with raw byte information and did no feature engineering. Their work provided initial evidence that neural networks can reproduce the same feature learning on raw bytes, but was limited to a relatively small (and fixed-size) subset of the PE header. As Raff et al. (2017) noted, the behavior and locality within malware is markedly different from signal and image processing tasks. Malware lacks the rigid scope, and especially the locality properties, that these other fields enjoy. As an example, it is easy to see how in an image the correlation between a pixel and its neighbors is relatively consistent throughout any image. But in a binary, jumps and function calls can directly connect disparate regions, causing correlations between potentially arbitrary locations. No work has yet been done on determining what kinds of architectures can learn best from this type of locality complexity.

Another interesting application of modern neural network design is by Huang and Stokes (2016). Their system looked at system-call-like features extracted via dynamic analysis, reducing the feature set by applying domain knowledge about the relationships between function calls. Their architecture was modified to perform both malware detection (benign or malicious) and family classification (with 100 malware families) simultaneously, as sketched below. The jointly trained model resulted in a relative improvement of 26% over a model trained to do only malware detection on the same data. This shows the potential for intelligent architecture design choices to provide real gains in accuracy. This is also a method to enable more data use for training, as it is easier to get more malware data labeled with malware family labels than it is to get more benign data. The portion of the network trained to predict malware family can then be trained with this additional data, without biasing the malware detection portion of the network due to class imbalance.
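A minimal sketch of a jointly trained two-head network in the spirit of the Huang and Stokes (2016) design follows; the layer widths, 100-family head, and equal loss weighting are assumptions, not their published configuration.

```python
# Minimal sketch of multi-task malware detection + family classification.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_features=512, n_families=100):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared representation
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.detect = nn.Linear(128, 1)             # benign/malicious logit
        self.family = nn.Linear(128, n_families)    # malware family logits

    def forward(self, x):
        h = self.trunk(x)
        return self.detect(h), self.family(h)

model = MultiTaskNet()
x = torch.randn(32, 512)                            # placeholder features
is_malware = torch.randint(0, 2, (32, 1)).float()
family = torch.randint(0, 100, (32,))               # placeholder family labels

det_logit, fam_logit = model(x)
# Joint objective; in practice the family loss would be masked out
# for benign files, which have no family label.
loss = nn.BCEWithLogitsLoss()(det_logit, is_malware) \
     + nn.CrossEntropyLoss()(fam_logit, family)
loss.backward()
```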
5 Machine Learning Over Sequences

Many of the feature types we have discussed, such as assembly instructions, can be more accurately described as a sequence of events or values. Using n-grams to convert them to fixed-length feature vectors allows us to use the wider breadth of models discussed in section 4, at the cost of ignoring most of the sequential nature intrinsic to the data. In this section we will review a number of techniques that are designed specifically for processing sequences, some of which work directly at the sequence level, while others attempt to create fixed-length representations more intelligently. In the latter case, the primary difference compared to the n-gram approaches discussed in subsection 4.1 is that n-grams capture only a very small amount of local sequence information, whereas the approaches in this section capture larger amounts of the sequential structure in the data.

Some of the methods we will discuss face unique challenges regarding sequence length. For example, assembly sequences from binaries can be hundreds of thousands of steps in length or more, which significantly outpaces the longest sequences in other domains. While byte and assembly sequences are obviously the longest, potentially millions of steps long, higher-level events and features extracted via dynamic analysis can easily reach hundreds of thousands of steps in length (Pascanu et al., 2015). These sequences are far longer than what machine learning is normally applied to, meaning the tools to tackle this problem are often lacking. For example, the longest sequence length we are aware of for neural networks outside of malware is in audio generation, where the WaveNet architecture was applied to a sequence length of 16,000 steps (Oord et al., 2016). This was an order of magnitude larger than what others had achieved, yet is still an order of magnitude smaller than the sequences in the malware space.

5.1 Hidden Markov Models

Given the sequential nature of our feature choices, such as byte sequences, instructions, and API calls, the Hidden Markov Model (HMM) has become a popular choice (Damodaran et al., 2015; Shafiq et al., 2008; Wong and Stamp, 2006; Konstantinou, 2008), as it explicitly models sequences and can handle sequences of variable lengths. Given a sequence of observations $O = O_1, O_2, \ldots, O_T$ that are discrete ($O_t \in \{1, \ldots, K_O\}$), HMMs make three simplifying assumptions:

1. A sequence of hidden (or unobserved) variables $X = X_1, X_2, \ldots, X_T$, where each state is discrete ($X_t \in \{1, \ldots, K_X\}$), governs the observed values O.
2. Each observation $O_t$ is generated solely by state $X_t$, and $P(O_t \mid X_t)$ is parameterized by a $K_X \times K_O$ emission matrix B.
3. Each hidden state $X_t$ is generated solely by state $X_{t-1}$, and $P(X_t \mid X_{t-1})$ is parameterized by a $K_X \times K_X$ transition matrix A. The first hidden state is a special case, and is specified by an initial probability vector π.

Figure 3. A first-order Hidden Markov Model: hidden states $X_1, \ldots, X_T$ (gray) connected by transitions A, each emitting an observation $O_t$ (white) via B.

Thus A, B, and π fully specify an HMM, an example of which is given in Figure 3. In each matrix, row r gives a probability distribution conditioned on the current hidden state being r: row r of the transition matrix A gives the probability of moving to each possible next hidden state, and row r of the emission matrix B gives the probability of emitting each observable token. Because these are generative models, to apply HMMs to malware classification a separate HMM must be fit to each class from the sequences corresponding to that class. At test time we get a new sequence $\tilde{O}$, and compute the probability $P(\tilde{O} \mid A_i, B_i, \pi_i)$ for each HMM we learned, choosing the class with the highest probability (as presented, this assumes each class is equally likely, which is generally not the case). For a more thorough review of HMMs we refer the reader to Rabiner (1989) and Ghahramani (2001).

One can construct m'th-order HMMs to try to capture more history. However, the learning process scales as $O(K_O^{m+1})$, which quickly becomes computationally intractable. This makes HMMs a better fit to shorter sequences, such as API call sequences, if they are to be used.
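The sketch below shows this one-HMM-per-class, maximum-likelihood classification scheme, assuming the hmmlearn library (whose discrete model is named CategoricalHMM in recent releases and MultinomialHMM in older ones). The token sequences are random placeholders standing in for tokenized API call traces.

```python
# Minimal sketch of generative HMM classification: fit one HMM per class,
# then label a query sequence by the highest log-likelihood.
import numpy as np
from hmmlearn import hmm

def fit_class_hmm(sequences, n_states=4):
    # sequences: list of 1-D integer arrays of API-call token ids
    X = np.concatenate(sequences).reshape(-1, 1)
    lengths = [len(s) for s in sequences]
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=50)
    model.fit(X, lengths)
    return model

rng = np.random.default_rng(0)  # hypothetical per-class training sequences
benign_seqs  = [rng.integers(0, 10, rng.integers(20, 40)) for _ in range(30)]
malware_seqs = [rng.integers(0, 10, rng.integers(20, 40)) for _ in range(30)]
models = {"benign": fit_class_hmm(benign_seqs),
          "malware": fit_class_hmm(malware_seqs)}

query = rng.integers(0, 10, 25).reshape(-1, 1)
print(max(models, key=lambda c: models[c].score(query)))  # max likelihood wins
```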
It may also be the case that the use of HMMs, as generative models, makes the learning problem harder than necessary. Given data x and labels y, a generative model learns the joint distribution P(x, y), which may be more complicated than the posterior distribution (i.e., benign or malicious) P(y|x). While it is not always the case that discriminative models perform better (Ng and Jordan, 2002), we suspect that it is likely true for malware classification, given that just modeling a binary (i.e., P(x)) is its own highly complicated problem, and the joint distribution is intrinsically harder still. Deploying a solution using HMMs over bytes or instructions could also be problematic when we consider that malware evolves in an adversarial manner. It is not difficult for polymorphic code to generate assembly matching a specific low-order transition distribution (Song et al., 2010), which would allow a malicious author to generate binaries matching a particular distribution.

5.2 Byte Similarity Measures

A number of methods have been used that seek to measure the similarity between two arbitrary byte sequences. These methods make no assumptions about the contents or formatting of the underlying bytes, and thus can be used for any arbitrary byte input. This makes them attractive for malware classification, as they can simply use the raw binary as the target for similarity measures. Classification is then done by performing a nearest neighbor search for the most similar known binaries, and assigning a benign/malicious label based on the k nearest neighbors' labels. We note here three methods that have been used for malware analysis and detection to varying degrees.

5.2.1 Similarity Digests

One method readily available for such work is the use of similarity digests from the domain of digital forensics (Harichandran et al., 2016). These digests are similar to an MD5 checksum, but seek to minimize the change in output hash for changes in the input sequence. These techniques provide a similarity measure between the hash functions, which can then be used as the similarity between the digests. The state of the art in this domain is the sdhash algorithm (Roussev, 2009, 2010). The faster ssdeep is also popular, but is not as accurate as sdhash for most problems (Kornblum, 2006). Ssdeep in particular is unlikely to be useful for malware analysis, as ssdeep is sensitive to content reordering. Since PE files and binary programs make regular use of pointers and can be almost arbitrarily reordered, this is a significant problem. Although sdhash does not suffer from this same limitation, its use for malware detection has been only moderate. While these digests have been highly successful in their original domain, searching for similar files in a known database, they have not performed as well for malware classification. Some have found that their results can be improved by changing how the similarity scores are computed (Li et al., 2015), in order to eliminate undesirable properties. For example, the original scoring method did not guarantee symmetry (i.e., $\mathrm{sim}(a, b) \neq \mathrm{sim}(b, a)$). However, they have been useful in problems related to malware detection, such as finding code reuse among malware (Upchurch and Zhou, 2016).
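As a small illustration of the digest workflow, the sketch below uses the ssdeep Python bindings (an assumption; the same pattern applies to sdhash) to find the most similar known file. The file path, digest strings, and database are all hypothetical.

```python
# Minimal sketch of nearest-neighbor lookup over similarity digests,
# assuming the ssdeep Python bindings.
import ssdeep

# Digest a query binary (hypothetical path).
query_digest = ssdeep.hash_from_file("sample.exe")

# Hypothetical database of (name, digest) pairs for known files.
known = [
    ("known_malware_a", "3:AXGBicFlgVNhBGcL6wCrFQEv:AXGHsNhxLsr2C"),
    ("known_goodware_b", "3:AXGBicFlIHBGcL6wCrFQEv:AXGH6xLsr2C"),
]

# ssdeep.compare returns a 0-100 match score; take the best match.
best = max(known, key=lambda kv: ssdeep.compare(query_digest, kv[1]))
print(best[0])
```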
5.2.2 Normalized Compression Distance

The Normalized Compression Distance (NCD) (Li et al., 2004) is a distance function based on the ability of a compression algorithm to compress two sequences to their smallest possible size. The NCD is defined in (5), where C is a function that returns the compressed length of an input, and ab represents the joining of sequences a and b (normally done by simple concatenation).

$$\mathrm{NCD}(a, b) = \frac{C(ab) - \min\left(C(a), C(b)\right)}{\max\left(C(a), C(b)\right)} \qquad (5)$$

Given an oracle that can perform perfect compression, the NCD is a valid distance metric. Such an oracle cannot exist, and so NCD must be approximated using algorithms such as LZMA. Because NCD works based on compression, it can be applied to a wide variety of features and has become quite popular, especially for discovering malware families and relationships. It has been used successfully on raw bytes (Wehner, 2007) and on behavior sequences from dynamic analysis (Bailey et al., 2007a).

The flexibility of NCD, in that it can be applied to anything encodable as a sequence of bytes, makes it a powerful tool given the multiple different features we may wish to extract. NCD has also seen considerable use for malware detection due to its accuracy, which improves with the quality of the compression algorithm used. Yet the limits of existing compression algorithms also mean that NCD has difficulty in the face of long sequences (Cebrián et al., 2005), causing the distance metric to break down. Attempts to improve NCD have been made by changing how the concatenation ab is done in practice (Borbely, 2015), but room for improvement still exists. When sequences are short enough that NCD works well, a yet unexplored possibility is to use it with some of the kernels discussed in subsection 4.3. A simple merging would be to replace the Euclidean distance in the RBF kernel with the result of NCD, producing $K(a, b) = \exp\left(-\gamma \cdot \mathrm{NCD}(a, b)^2\right)$.
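Equation (5) translates almost directly into code. The sketch below uses zlib as the (imperfect) compressor for brevity; LZMA can be substituted for a better approximation, and the byte strings are toy placeholders.

```python
# Minimal sketch of NCD as defined in (5).
import zlib

def clen(data: bytes) -> int:
    """Compressed length of the input, the C(.) of equation (5)."""
    return len(zlib.compress(data, 9))

def ncd(a: bytes, b: bytes) -> float:
    ca, cb = clen(a), clen(b)
    return (clen(a + b) - min(ca, cb)) / max(ca, cb)

# Toy check: related sequences should score lower than unrelated ones.
x = b"push ebp; mov ebp, esp;" * 50
y = b"push ebp; mov ebp, esp;" * 49 + b"xor eax, eax;"
z = bytes(range(256)) * 5
print(ncd(x, y), ncd(x, z))  # expect ncd(x, y) << ncd(x, z)
```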
5.2.3 Lempel-Ziv Jaccard Distance

Inspired by the successes and background of NCD, the Lempel-Ziv Jaccard Distance (LZJD) was developed for malware classification (Raff and Nicholas, 2017a). It draws from the observation that the compression algorithms that tend to work best in this domain make use of the Lempel-Ziv dictionary encoding scheme (Alshahwan et al., 2015; Borbely, 2015), and that the compressed output itself is never used. Instead, LZJD creates the compression dictionary, and then measures the similarity of binaries using the Jaccard distance between the dictionaries, where the Jaccard similarity is given in (6).

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} \qquad (6)$$

This alone was shown to be more accurate for nearest neighbor classification of malware, and can be made nearly three orders of magnitude faster than NCD through the use of min-hashing (Broder, 1997), thus alleviating the runtime cost of NCD. In addition to being faster, LZJD retains the distance metric properties lost by NCD (Raff and Nicholas, 2017a). The use of the Jaccard similarity / distance also means that it is a valid kernel, and can be directly applied to the methods discussed in subsection 4.3. Raff and Nicholas (2017b) developed a method of converting LZJD's dictionary into a fixed-length feature vector using a technique known as the "hashing trick" (Weinberger et al., 2009; Li et al., 2012). More interesting is their observation that the compression dictionary is sensitive to byte ordering, and single-byte changes can cause large changes to the dictionary. They exploited this weakness to develop an over-sampling technique for tackling class imbalance, an issue we will discuss further in subsection 8.3. This was later refined to require less hyper-parameter tuning for easier use (Raff et al., 2019b).

LZJD represents an approach of applying the intuition of NCD to a specific compression algorithm, the Lempel-Ziv approach. Another approach along this theme is the Burrows Wheeler Markov Distance (BWMD), which applies the intuition of NCD to the Burrows-Wheeler compression algorithm (Raff et al., 2020). The Burrows-Wheeler method is not as effective a compression approach as Lempel-Ziv, which is reflected in BWMD not being quite as accurate as LZJD in nearest neighbor search. The benefit of BWMD comes from it producing a Euclidean feature vector, rather than a digest like LZJD does. This makes BWMD compatible with a wider class of ML algorithms, and it was shown that BWMD could produce better clustering and orders-of-magnitude faster search by leveraging more appropriate algorithms that require Euclidean feature vectors (Raff et al., 2020).
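A simplified sketch of the LZJD idea follows: build a Lempel-Ziv style dictionary of novel substrings for each binary, then compare the dictionaries with the Jaccard distance of (6). The published method additionally uses min-hashing for speed, which is omitted here.

```python
# Simplified LZJD sketch: LZ-style dictionary + Jaccard distance.
def lz_dictionary(data: bytes) -> set:
    entries, start = set(), 0
    for i in range(1, len(data) + 1):
        chunk = data[start:i]          # grow the current substring
        if chunk not in entries:       # novel substring: record it,
            entries.add(chunk)         # then start building a new one
            start = i
    return entries

def lzjd(a: bytes, b: bytes) -> float:
    A, B = lz_dictionary(a), lz_dictionary(b)
    return 1.0 - len(A & B) / len(A | B)   # Jaccard distance

x = b"\x55\x8b\xec" * 100              # toy byte sequences
y = b"\x55\x8b\xec" * 95 + b"\x90" * 15
print(lzjd(x, y))                      # small distance for similar inputs
```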
5.3 Convolutional and Recurrent Neural Networks

As we discussed in subsection 4.5, neural networks have become popular algorithms and can be viewed as a graph defining a large and potentially complicated function. This flexibility allows them to be extended to sequence tasks by replicating the same network for every step in the sequence. This is often referred to as "weight sharing", and leads to the idea of the Convolutional Neural Network (CNN) (LeCun et al., 1989). The success of CNNs in both image and signal processing has long been recognized (LeCun and Bengio, 1998). CNNs embed a strong prior into the network architecture that exploits the temporal/spatial locality of these problems. The convolution operator essentially learns a neuron with a limited receptive field, and re-uses this neuron in multiple locations. Thus a neuron that learns to detect edges can be reused for each part of the image, since edges can occur just about anywhere in an image. This property is not a perfect fit for malware sequence problems, and it remains to be seen whether CNNs will be useful despite the structural mismatch. CNNs may be a better fit at higher levels of abstraction, and have been applied to system call traces (Kolosnjaji et al., 2016). We also note that on their own, convolutions do not completely handle the variable-length problem that comes with the malware domain.

One common method of dealing with variable-length sequences is to further extend the weight-sharing idea, by adding a set of connections from one time step to the next, using the previous time step's activations as a summary of everything previously seen. This high-level idea gives rise to Recurrent Neural Networks (RNNs), and we refer the reader to Lipton et al. (2015) for a deeper introduction to the history and use of RNNs. We note that the CNN and RNN encode different priors about the nature of sequences, and can be used together in the same large architecture, or be kept disjoint. We will again refer the reader to Goodfellow et al. (2016) for a broader background on neural networks. Below we will give only high-level details pertinent to models used in the malware classification literature.

Naive construction of an RNN often leads to difficulties with both vanishing and exploding gradients (Bengio et al., 1994), making the training process difficult. One older solution to this problem is the Echo State Network (ESN) (Jaeger, 2001). The ESN circumvents the RNN learning difficulty by selecting the recurrent weights via a stochastic process, so that no learning of the recurrent portion is needed. This may also be interpreted as a stochastic process by which we convert varying-length sequences to fixed-length feature vectors, after which any of the techniques discussed in section 4 may be applied. The parameters that control the stochastic process can be adjusted to sample different types of ESNs, and cross-validation can be used to select the hyper-parameters that work best. This simple strategy has worked well for many problems, and can be applied to a number of different types of learning scenarios (Lukoševičius, 2012). The ESN has been used by Pascanu et al. (2015) to process a set of high-level events, including API calls, for malware classification; they found the ESNs to have an accuracy rate almost twice as high as an n-gram based approach.

Figure 4. Two simple RNN architectures for classifying a sequence: (a) a 2-layer RNN and (b) a 2-layer bi-directional RNN. In each case the rows of the diagram represent the same neuron being applied to inputs at different time steps. The input includes the bottom-up input (previous layer) as well as the previous time step (which is implicitly zero when not present).

In the Deep Learning literature the Long Short Term Memory (LSTM) unit (Hochreiter and Schmidhuber, 1997) has also helped to overcome a number of difficulties in training the recurrent connections themselves, especially in combination with recent advances in gradient-based training and weight initialization. Training works by extending back-propagation "through time" (Werbos, 1988), which amounts to unrolling the sequence of input and output transitions over the course of the sequence (i.e., weight sharing across time). This produces a directed acyclic graph, on which normal back-propagation can be applied. Two examples of this are given in Figure 4, where the neurons in each column all share the same weights. Back-propagation can then be done normally on this unrolled architecture, and the shared weights will be updated based on the average gradient for each time the shared weight was used. This also means that any of the architecture types discussed in subsection 4.5 can be used with RNNs, either before, after, or in between recurrent stages, and trained jointly with the rest of the network.

Kolosnjaji et al. (2016) have exploited the flexibility of neural networks to combine LSTMs with CNNs for malware classification based on API call sequences, as sketched below. The architecture combination allows the CNNs to learn to recognize small groupings of co-occurring API calls, while the LSTM portion allows the information from multiple co-occurrences to be captured through the whole call trace to inform a decision; this was found to out-perform HMMs by 14 to 42 percentage points. Similar work was done by Tobiyama et al. (2016).
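The sketch below shows one way such a CNN-plus-LSTM combination over API-call token sequences might look in PyTorch; the vocabulary size, layer widths, and sequence length are assumptions rather than the published configuration.

```python
# Minimal sketch of a CNN + LSTM over API-call token sequences.
import torch
import torch.nn as nn

class ApiCallNet(nn.Module):
    def __init__(self, vocab=300, emb=32, conv=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)         # benign/malicious logit

    def forward(self, tokens):                  # tokens: (batch, time)
        e = self.embed(tokens).transpose(1, 2)  # (batch, emb, time)
        c = torch.relu(self.conv(e)).transpose(1, 2)  # local co-occurrences
        _, (h, _) = self.lstm(c)                # summarize the whole trace
        return self.out(h[-1])

net = ApiCallNet()
calls = torch.randint(0, 300, (8, 200))         # placeholder call traces
print(net(calls).shape)                         # torch.Size([8, 1])
```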
Using just LSTMs, Raff et al. (2017) instead looked at byte sequences, and were able to show that LSTMs could learn to process the bytes of the PE header sequentially to arrive at accurate classifications. They also used an attention mechanism to show that the LSTM learned to find the same features as a domain-knowledge approach learned to use when the features were manually extracted. The purpose of an attention mechanism is to mimic the human ability to focus on only what is important, and to ignore or down-weight extraneous information. Attention has become a tool often used in machine translation, and offers a more interpretable component to a neural network.

CNNs to process the raw bytes of a file were first introduced by Raff et al. (2018), who treated the raw bytes of an executable file as a 2-million-byte-long sequence. Their work found that many of the best practices for building neural networks for image, signal, and natural language processing did not carry over to learning from raw bytes. Notably, this required using a shallower and wider network (rather than deep and narrow), and the abandonment of layer normalization techniques like Batch-Norm (Ioffe and Szegedy, 2015). Krčál et al. (2018) expanded this work on their own corpus, but also compared with analyst-derived features used by Avast for their AV product. In doing so they found the approach was competitive with their classical AV, but that combining the features learned by the CNN with those their analysts developed had the best accuracy. This indicates that the CNN approach is learning to detect features that were overlooked by the expert analysts; otherwise the features would be redundant, and their combination should have no impact.

The ability of CNNs and RNNs to handle variable-length sequences makes them an attractive tool for malware classification; however, they have yet to receive significant application to that task. Part of this is likely due to the added computational burden they bring. Regular neural network algorithms often require the use of GPUs to train for days at a time. RNNs exacerbate this situation with more computations and a reduction in the parallelism of the problem, making it challenging to scale out training across multiple nodes. It is also difficult to train RNNs for sequences longer than a few thousand time steps, which is easily exceeded by call traces, byte sequences, and entropy measurements. It is likely that intelligent combinations of RNNs with other architectures, or advancements in training efficiency, will be needed before wide use is seen.

5.4 Haar Wavelet Transform

For numeric sequences, such as the windowed entropy of a file over time, wavelets provide a versatile tool for tackling the problem in a number of ways. Wavelets have long been used to perform machine learning over sequences, and are particularly popular for signal processing tasks (Chaovalit et al., 2011). At a high level, wavelets simply aim to represent an underlying signal f(t), sampled at $N_f$ points, using a combination of waves added together. A wave is just a function over time that starts and ends at zero, and goes above and below zero in the middle. The seminal Fast Fourier Transform is based on a similar idea, representing a function as a combination of sine and cosine waves. For malware classification, the Haar wavelet is becoming increasingly popular and has been used in a number of different ways (Wojnowicz et al., 2016a; Han et al., 2015; Baysa et al., 2013; Sorokin, 2011). Of all possible wavelets, the Haar wavelet is discrete and the simplest possible wavelet. The Haar wavelet ψ(t) is defined in (7), and is non-zero in the range [0, 1).
It is positive in the first half of the range, and negative in the second:

$$\psi(t) = \begin{cases} 1 & \text{if } t \in [0, \tfrac{1}{2}) \\ -1 & \text{if } t \in [\tfrac{1}{2}, 1) \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

$$\psi_{j,k}(t) = 2^{j/2}\, \psi(2^j t - k) \qquad (8)$$

Since an arbitrary function f(x) cannot in general be approximated by adding combinations of (7) alone, we need another function that moves the location of the wavelet. Thus the Haar function is defined by (8), and is used in a hierarchy of levels to approximate functions. Both j and k are integers, where j is the level of the hierarchy and $k \in [0, 2^j - 1]$ shifts the location of the wavelet. The smallest levels of the hierarchy (j = 0) represent coarse information over large portions of the sequence, and the largest values (j → ∞) represent the smallest details that apply only to specific time steps t.

$$f(t) = c_0 + \sum_{j=0}^{\log_2 N_f} \sum_{k=0}^{2^j - 1} c_{j,k} \cdot \psi_{j,k}(t) \qquad (9)$$

We can now represent any function f(t) using (9). However, in practice this only works for sequences that are a power of 2 in length, so truncation to the smallest power of two, or padding the sequence with zeros, is often necessary. Computing the Haar wavelet transform then determines the values of the coefficients $c_{j,k}$ for all j, k. The discrete Haar transform takes only $O(N_f)$ time to compute, making it a viable tool for longer sequences (Walker, 2008).

A common use case of the Haar transform is to de-noise signal data, and it has been used in that context for malware classification (Shafiq et al., 2009). The intuition is that the higher-level signal comes from the coefficients of the earlier levels, so we can remove noise by setting $c_{j,k} = 0$ for all $j \geq z$, for some threshold z. Once set to zero, we can then reconstruct the original signal, which should now have less noise present. While the de-noising application is fairly straightforward and is a standard use case for wavelets, once we have this representation there exist a number of possibilities for comparing sequences.

5.4.1 Haar Distance

Struzik and Siebes (1999) propose comparing time series by defining a distance over the coefficients $c_{j,k}$. First they define a correlation between two Haar transforms as given in (10), where the Kronecker delta function $\delta(j_a, k_a, j_b, k_b)$ returns 1 iff $j_a = j_b$ and $k_a = k_b$, and otherwise returns zero. Using this, a normalized distance between two sequences is defined by (11). While this would allow for a nearest neighbor classifier to be used, this specific approach has not yet been used for malware classification.

$$C(f_a, f_b) = \sum_{j_a=0}^{\log_2 N_{f_a}} \sum_{k_a=0}^{2^{j_a}-1} \sum_{j_b=0}^{\log_2 N_{f_b}} \sum_{k_b=0}^{2^{j_b}-1} c_{j_a,k_a}\, c_{j_b,k_b}\, \delta(j_a, k_a, j_b, k_b) \qquad (10)$$

$$D_{\mathrm{Haar}}(f_a, f_b) = -\log \left| \frac{C(f_a, f_b)}{\sqrt{C(f_a, f_a) \cdot C(f_b, f_b)}} \right| \qquad (11)$$

5.4.2 Haar Energy Features

Another approach is to extract a fixed-length representation from the Haar transform, and use those features as inputs to a classifier (Pati and Krishnaprasad, 1993). Wojnowicz et al. (2016a) used this approach on entropy sequences, and found the resulting features to be moderately effective independently, but not sufficient for a complete system. For a sequence of length $N_f$, we can compute $\log_2 N_f$ energy levels according to equation (12). In the case of Wojnowicz et al. (2016a), they decided to use all possible levels and handled different-length sequences by building a new classifier for every different power of two.

$$\mathrm{Energy}_j = \sum_{k=1}^{2^{j}-1} (c_{j,k})^2 \qquad (12)$$
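The discrete Haar transform and the per-level energy features of (12) are short enough to implement directly, as the sketch below shows for a power-of-two-length entropy sequence (the input here is a random placeholder for a real windowed-entropy signal).

```python
# Minimal sketch of the discrete Haar transform and Haar energy features.
import numpy as np

def haar_detail_levels(x):
    """Return detail coefficients c_{j,k} grouped by level, coarse to fine."""
    levels = []
    x = np.asarray(x, dtype=float)
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
        det = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        levels.append(det)
        x = avg
    return levels[::-1]    # levels[j] holds the 2^j coefficients of level j

def haar_energies(x):
    """Per-level energy: sum_k c_{j,k}^2, as in (12)."""
    return np.array([np.sum(d ** 2) for d in haar_detail_levels(x)])

entropy_seq = np.random.default_rng(0).random(256)  # placeholder entropy signal
print(haar_energies(entropy_seq))                   # log2(256) = 8 energy values
```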
5.5 Dynamic Programming

A method of comparing numeric sequences that has become increasingly common in the literature is to perform distance computations using dynamic programming methods such as the Levenshtein distance (Levenshtein, 1966; Bellman, 1954). While such edit distances could be applied directly to raw binaries, a run-time complexity of $O(N^2)$, where N is the length of a binary or entropy sequence, is prohibitive. The Haar transform has been used to make such approaches practical: it discretizes the numeric entropy sequence into bins of varying size (resulting in a shorter sequence), which are then used in the dynamic programming solution (Han et al., 2015; Baysa et al., 2013; Sorokin, 2011; Shanmugam et al., 2013). While similar at a high level, these approaches all have a number of differences and details that do not lend themselves to a compact summary. Instead, we summarize a similar dynamic programming distance for sequences called Dynamic Time Warping (DTW) (Berndt and Clifford, 1994). DTW works directly on numeric time series, and does not need the discretization provided by the Haar transform (though wavelets could be used to pre-process the signal before computing DTW). This method has received minimal investigation for malware classification (Naval et al., 2014), but has been used for related tasks such as estimating the prevalence of malware infection (Kang et al., 2016) and detecting botnets (Thakur et al., 2012). Its description is also representative of a simpler common ground between the prior works in this area (Han et al., 2015; Baysa et al., 2013; Sorokin, 2011; Shanmugam et al., 2013).

Given two sequences $f_a(t)$ and $f_b(t)$ of lengths $N_{f_a}$ and $N_{f_b}$ respectively, DTW attempts to find a continuous path of point-wise distance computations that minimizes the total distance. Doing so requires finding a sequence of dilations and contractions of one sequence to fit the other, such that it maximizes the similarity between their shapes. In other words, DTW attempts to measure the distance between two sequences based on overall shape, ignoring any local contractions or elongations of a sequence. To do so, we define the distance using the recursive equation (13), where $\mathrm{DTW}(f_a, f_b) = \mathrm{DTW}(N_{f_a}, N_{f_b})$. This equation can be solved using dynamic programming, and is similar to the Levenshtein-style distances that have been more prevalent in the malware classification literature.

$$\mathrm{DTW}(i, j) = (f_a(i) - f_b(j))^2 + \begin{cases} 0 & \text{if } i = j = 1 \\ \min \begin{cases} \mathrm{DTW}(i-1, j) \\ \mathrm{DTW}(i-1, j-1) \\ \mathrm{DTW}(i, j-1) \end{cases} & \text{otherwise} \end{cases} \qquad (13)$$

Like many other dynamic programming methods, DTW takes $O(N_{f_a} N_{f_b})$ time to compute, and has the disadvantage that it is not a true distance metric, as it does not obey the triangle inequality. However, the popularity of DTW in other domains warrants its consideration for malware classification, especially as an existing body of literature addresses many of the known problems and challenges. In particular, there are multiple methods of speeding up the DTW computation and retrieval, including the construction of indexes (Keogh and Ratanamahatana, 2005), a fast $O(\max(N_{f_a}, N_{f_b}))$ approximation (Salvador and Chan, 2007), and a definition of a DTW centroid (Petitjean et al., 2014). There even exist methods of learning constraint costs to modify the DTW, which can improve the accuracy of the constrained DTW (Ratanamahatana and Keogh, 2004). This could simplify the manually constructed costs and constraints used in existing malware research (Han et al., 2015; Baysa et al., 2013; Sorokin, 2011).
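The recurrence in (13) maps directly onto a dynamic programming table, as in the sketch below (a straightforward $O(N_{f_a} N_{f_b})$ implementation, without the indexing or approximation speed-ups cited above).

```python
# Minimal sketch of Dynamic Time Warping per equation (13).
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2     # point-wise distance
            D[i, j] = cost + min(D[i - 1, j],      # contraction
                                 D[i - 1, j - 1],  # match
                                 D[i, j - 1])      # dilation
    return D[n, m]

x = np.sin(np.linspace(0, 6, 80))          # toy entropy-like sequences
y = np.sin(np.linspace(0, 6, 100))         # same shape, different length
print(dtw(x, y))                           # small value despite length mismatch
```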
6 Graph Algorithms

While representing information as a sequence reduces the gap between abstraction and the true nature of the data, this is still a simplification in many instances. For example, while a binary may be one long sequence of bytes, the order in which the bytes are evaluated and accessed may be non-linear. A yet richer representation is as a graph G of vertices V and edges E. Graph analysis and algorithms have been widely studied and used in many domains, and malware classification is no exception. Such techniques have been used most commonly for representing assembly (Alam et al., 2015; Anderson et al., 2011; Hashemi et al., 2016) and API and system calls (Elhadi et al., 2014) collected from dynamic analysis. While these two feature types have already seen use as sequences and with classical machine learning, graphs have also been used for features not well represented in other forms, like mapping the relationship between a malware sample and the files it creates or accesses (Karampatziakis et al., 2012). Similar to section 5, we will review the common graph-based approaches that have been used for malware classification.

While the appropriateness of a graph representation has been widely recognized in prior work, little has been done to fully exploit such representations. Most of the prior work in graphs for malware classification uses either graph matching, or attempts to construct more meaningful feature vector representations that allow us to use the wide breadth of machine learning methods in section 4. While there exists a rich diversity in what may be done with graphs, a full survey of graph methods is beyond the scope of this work. Instead we will review the two high-level approaches that have been used in the malware classification literature.

6.1 Graphs to Feature Vectors

Many of the techniques for directly comparing graphs are computationally demanding, often requiring approximations or simplifications to be made. For this reason it is often desirable to convert a graph representation to a fixed-length feature vector. Once converted, the classical machine learning methods discussed in section 4 can be used. The simplest approach is to manually craft features about nodes in the graph, and then use those for classification (e.g., as used by Kwon et al. (2015)).

6.1.1 Graph Features and Descriptors

There exist naive ways to create a feature vector from a graph, such as flattening the n-by-n adjacency matrix into a vector of length n² (Eskandari and Hashemi, 2012). But such approaches are often impractical, producing excessively high dimensional spaces (making learning more challenging) and relying on extremely sparse graphs. Another simple approach to creating feature vectors from graphs is to derive informative statistics about the graph itself. Such features could include the number of vertices or degrees with a given label, the average number of in/outbound edges for a vertex, and various other statistics; a sketch is given below. This approach was used by Jang et al. (2014) over the system call graph of a binary. It has the benefit of being quick to implement and apply, but does not take full advantage of the rich information that can be stored within a graph. It also requires the developer to have some notion of which statistics will be informative to their problem. These kinds of approaches are often not sufficient in practice, but are easier to apply and generally not compute intensive. Outside of malware classification, machine learning combined with these simple feature types can be used to accelerate other approaches as a kind of low-pass filter, pre-classifying items as likely or unlikely to be related (Lazarescu et al., 2000). This filtering can be used to reduce the compute time of the other approaches we will discuss in this section, which are often more compute intensive.
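The sketch below shows such hand-crafted graph statistics computed with networkx over a hypothetical graph built from an API-call trace, where an edge links each call to the next.

```python
# Minimal sketch of simple graph statistics as a fixed-length feature vector.
import networkx as nx
import numpy as np

G = nx.DiGraph()
trace = ["OpenFile", "ReadFile", "ReadFile", "CloseFile", "OpenFile"]
G.add_edges_from(zip(trace, trace[1:]))   # edge per consecutive call pair

degrees = [d for _, d in G.degree()]
features = np.array([
    G.number_of_nodes(),                  # vertex count
    G.number_of_edges(),                  # edge count
    np.mean(degrees),                     # average degree
    max(degrees),                         # maximum degree
    nx.density(G),                        # edge density
])
print(features)                           # usable by any section 4 model
```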
6.1.2 Graph Embedding

A less labor-intensive means of converting a graph into a feature vector is to perform some type of embedding, where the graph is converted to a fixed-length feature vector through some decomposition (Luo et al., 2003). This is often done by taking the adjacency matrix representation of a graph, performing an eigenvalue decomposition (or singular value decomposition (SVD)), and using the eigenvalues as the feature vector. This captures information about the graph as a whole, and does not require manually specifying items of interest. These embeddings can be used directly for machine learning, or can also be used as a first step toward other methods discussed in this section, such as graph matching in subsection 6.2 (Luo and Hancock, 2001). Hashemi et al. (2016) used this strategy with assembly code, adding an edge to the graph every time one instruction followed another, and weighting the edges by the number of times the instruction pairing occurred. Anderson et al. (2011) also used this strategy for assembly sequences, in combination with other feature transforms of a graph. A unique approach by Slawinski and Wortman (2019) defined graphs over the extracted abstract syntax trees of functions, and used a PageRank-based embedding to better capture information from statically extracted features. Using just the static features in a fixed-length feature vector they obtained 89.19% accuracy, which increased to 98.28% when leveraging the graph-weighted embedding (Slawinski and Wortman, 2019).
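A minimal sketch of the spectral version of this idea follows: take the top-k adjacency eigenvalues by magnitude as a fixed-length feature vector, padding with zeros for small graphs. The adjacency matrix here is a hypothetical weighted instruction-transition graph.

```python
# Minimal sketch of a spectral (eigenvalue) graph embedding.
import numpy as np

def spectral_embedding(adj: np.ndarray, k: int = 8) -> np.ndarray:
    eigvals = np.linalg.eigvals(adj)                 # graph spectrum
    mags = np.sort(np.abs(eigvals))[::-1]            # sort by magnitude
    out = np.zeros(k)                                # pad small graphs
    out[: min(k, len(mags))] = mags[:k]
    return out

# Hypothetical weighted instruction-transition adjacency matrix.
A = np.array([[0, 3, 1],
              [1, 0, 2],
              [0, 4, 0]], dtype=float)
print(spectral_embedding(A))   # fixed-length vector for section 4 models
```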
6.2 Graph Matching

The most prevalent method in use for malware classification is the concept of graph matching. Given two graphs G₁ and G₂, the goal in this setting is to derive some distance function that gives us a measure of how close G₂ is to G₁. This can be done with all types of graphs, and the methods used may change with the graph size and type. The general strategy to create this distance measure is to determine the amount of correspondence between the graphs, often by determining how to transform one into the other. There are various strategies for graph matching as well, with some works even defining their own matching algorithms (Haoran Guo et al., 2010), though simpler heuristics are more common (Kolbitsch et al., 2009). One method is to create templates of what malware looks like, and match to those templates (Haoran Guo et al., 2010). Graph matching is also popular for malware family classification, where such systems are often designed for low false positive rates (Park and Reeves, 2011). Graph matching can also be used for normal nearest neighbor type classification against a large database, though this is often avoided due to computational expense (Hu et al., 2009).

6.2.1 Graph Edit Distance

One common approach to graph matching is to create a graph edit distance. Computing the exact graph edit distance is generally NP-complete, and so various approximations are used in practice. Such approximations are often done via dynamic programming strategies, as discussed in subsection 5.5. These approximations can still be too slow to use for larger corpora, which has also spurred the use of indexing structures and other optimizations to accelerate queries for similar binaries (Hu et al., 2009). Park et al. (2010) determined distances by finding the maximal common sub-graph between G₁ and G₂, and returned the edit similarity as the cardinality of the sub-graph over the maximum cardinality of G₁ and G₂. Their graphs also used vertex and edge labels depending on the arguments used. Related work has used this common sub-graph approach to build systems with the goal of having minimal false positive rates (Park and Reeves, 2011), producing a kind of system-call "signature" for finding malware families. Elhadi et al. (2014) used graph matching on labeled graphs derived from both the system call traces and the operating system resources used by each call, where the edges and vertices were labeled differently depending on which respective group the vertices came from.

6.3 Other Approaches

There are many ways to perform similarity comparisons between graphs, not all of which can be described as a variant of graph matching. Lee et al. (2010) used a metric similar to the Jaccard distance: by having a finite space of possible vertices, they were able to take the intersection of the graphs' edges over the union of the edges, as sketched below. Part of the motivation for this was faster compute time, so that the system could be practical for real use. Their approach was also unique in that they used call traces from static code analysis, rather than dynamic execution. However, this did necessitate that their approach use unpacking software before code analysis. Graphs also need not be the method by which similarity is done, but can still be an integral component of the system. Eskandari et al. (2013) used an extensive graph-based approach to process API and system calls from both static and dynamic analysis, and used the graph structure with node classification to infer what a dynamic analysis would have looked like given only the static analysis of the binary. This was to obtain the speed benefits of static analysis (at runtime), while retaining the accuracy benefits of dynamic analysis. Yet their system used a simple fixed-length feature vector for the final determination of maliciousness.
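The edge-set similarity described for Lee et al. (2010) reduces to a few lines once graphs are represented as sets of directed edges over a finite vertex vocabulary; the call graphs below are hypothetical.

```python
# Minimal sketch of edge-set Jaccard similarity between two call graphs.
def edge_jaccard(edges_a: set, edges_b: set) -> float:
    if not edges_a and not edges_b:
        return 1.0
    return len(edges_a & edges_b) / len(edges_a | edges_b)

# Hypothetical call graphs as sets of (caller, callee) pairs.
g1 = {("main", "OpenFile"), ("OpenFile", "ReadFile"), ("ReadFile", "CloseFile")}
g2 = {("main", "OpenFile"), ("OpenFile", "WriteFile"), ("WriteFile", "CloseFile")}
print(edge_jaccard(g1, g2))  # 0.2 = 1 shared edge / 5 distinct edges
```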
7 Evaluation and Metrics

We have now extensively considered the many predictive approaches that have been applied to malware classification. While such techniques are often the most exciting or interesting part of an approach, they are often developed without fully considering how such systems should be evaluated. Despite its importance, the choice of evaluation metric is a commonly overlooked part of the machine learning process. Even more so, such choices often do not consider the evaluation of a system as a whole. Most works simply use overall accuracy or AUC, which are described, along with some other less frequently used metrics, in Table 2.

Table 2. A few metrics that have been used within the malware classification literature. Most works have used only accuracy and AUC.

Accuracy: The number of data points classified correctly, divided by the total number of data points.
Balanced Accuracy: Same as accuracy, except each class has equal weight in the final result. See Brodersen et al. (2010).
Precision: The number of true positives divided by the true positives plus false positives.
Recall: The number of true positives divided by the true positives plus false negatives.
AUC: The integral of the Receiver Operating Characteristic curve, which plots the true positive rate against the false positive rate. See Bradley (1997).
F1 / F-Measure: The harmonic mean of precision and recall.

While maximizing the performance of these metrics can be revealing and informative on its own, it is often done without justifying why these metrics are the ones to be used, or considering other explicit or implied goals. Each possible metric is biased towards certain properties, which may impact which model appears to work best, even if it performs worse in practical use (Powers, 2011). Thought should be given to system constraints, the metric of performance, and the specific use case for deployment. The last of these concerns should in general drive the process as a whole, informing us as to which constraints we must satisfy and which scoring methods most closely reflect our needs.

Early work by Marx (2000) developed a thorough set of guidelines and processes for evaluating an AV product, from ease of use by customers to developing and weighting a testing set, and how to perform testing for different types of malware. The crux of their argument is the necessity for evaluation to reflect real-world use. It was recognized then that accurately reflecting real-world use of malware detectors is anything but trivial, and (we argue) it has only gotten more challenging over time.

In this section we will review a number of different scenarios and constraints that may impact how we choose to design and evaluate a malware classification system. Intrinsically related to this are the data quality issues we discussed in section 2. Having high quality data not only means better models, but also more accurate estimation of model performance. While we have already reviewed why this is not a given, for brevity we will generally assume in this section that the data issue has been resolved.

7.1 Malware Detection

The most obvious goal of malware detection is to act as a replacement for anti-virus products. In this case accuracy and AUC are both acceptable target metrics that are widely used in machine learning. This does not necessarily make them the best metrics when we consider the user of such a system. Numerous works have emphasized the importance of having low false positives in this scenario (Ferrand and Filiol, 2016; Masud et al., 2008; Alazab et al., 2011; Zhou and Inge, 2008). This is because excessive false positives will be aggravating for the end user, who presumes their desirable goodware applications will continue to work without issue when using an anti-virus. If required to add applications to a white-list too frequently, they will get frustrated and stop using the product. This leaves them exposed to malware, and the accuracy of the system becomes irrelevant.
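One way to operationalize this emphasis on false positives is to report the detection (true positive) rate at a fixed, low false positive rate, read off the ROC curve. A minimal sketch with scikit-learn follows; the scores and labels are synthetic placeholders.

```python
# Minimal sketch: detection rate at a fixed 0.1% false positive rate.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 10_000)                  # 1 = malicious
scores = y_true * rng.normal(1.5, 1, 10_000) \
       + (1 - y_true) * rng.normal(0.0, 1, 10_000)   # placeholder scores

fpr, tpr, _ = roc_curve(y_true, scores)
tpr_at_low_fpr = np.interp(1e-3, fpr, tpr)           # TPR at FPR = 0.1%
print(f"AUC={roc_auc_score(y_true, scores):.3f}, "
      f"TPR@0.1%FPR={tpr_at_low_fpr:.3f}")
```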
While no consensus has been reached as to an acceptable false positive rate, most work that emphasizes the issue achieves rates between 1% and 0.1% (Yan, 2015; Shafiq et al., 2009). Existing anti-virus products have been tested at fewer than 20 false positives out of over 1 million test files (AV-TEST, 2016b), though the exact details of the test set are not public.

Another use case for malware detection is to rank a backlog of binaries for analysis or investigation. This scenario can occur for malware analysts or incident response personnel, when a large number of items need to be considered and it is important to find the malware as soon as possible. The AUC metric is in fact well designed for this use case, as it can be interpreted as measuring the quality of a ranking. In this way the metric accurately aligns to the goal, in that we want to make sure analysts spend their time looking primarily at malware first. If left unordered, the chance of dealing with a benign file before all malware is dealt with increases, which takes up valuable time but does not result in any new insights.

7.2 Malware Family Classification

When performing malware family classification, the normal procedure is to collect some number of malware families C, and divide the dataset into a training and testing set (or use cross-validation) to evaluate the accuracy of determining the malware family of new binaries. Because this situation is not commonly needed for desktop deployment, the same run-time constraints are not often emphasized for this task. This evaluation is somewhat flawed with respect to real-life use cases, as inevitably new malware families will be found that are not in the existing set. We argue that there should be a pseudo (C+1)'th family for "not in existing families" when evaluating malware family classification. The binaries that belong to this (C+1)'th class should also come from multiple malware families, and not be present in the training data. This will better evaluate the quality of a system with respect to situations that will be encountered during real-world usage, but it does increase the design burden by requiring both a classification and an anomaly detection ability. The importance of correctly marking a file as "not previously seen" is exemplified in forensic use cases, where a set of known malware is compared against potentially terabytes of data (Roussev, 2010; Kornblum, 2006; Roussev and Quates, 2012). Similarity-preserving hash functions, which have low false positive (and recall) rates, are often used in this scenario.

If we accept the need for a "not previously seen" category, it is also worth considering whether benign files should be included in the evaluation. Normally malware family classification is done under the assumption that all files are malicious. In practical use, errors will occur, and so it is likely a benign file will occasionally be given. It seems reasonable that the "best" option (given that the benign file was already mislabeled as malicious) is to mark the benign file as "not previously seen". This is an issue we have not yet seen discussed or included in evaluations.

We also note that while a rich diversity of metrics has been used for binary classification problems, such as accuracy, AUC, F1, Matthews correlation coefficient, etc., most works on malware family classification simply use accuracy.
This is not necessarily a good choice, as the distribution of malware families is not uniform. While balanced accuracy is one alternative, it can also pose a problem with very rare malware families: these families will be harder to classify, yet have a disproportionately large impact on the balanced accuracy. There is also a question as to how many malware families should be used when evaluating. Others have proposed that malware should instead be grouped by behavior or effects, given the non-standardized and inconsistent nature of malware family labeling (Bailey et al., 2007a).

7.3 Training and Inference Constraints

The most common type of constraints that are considered are run-time requirements. In particular, many works have been concerned with real-time inference (Dahl et al., 2013; Alam et al., 2015; Khan et al., 2010), which usually implies a strict or moderate limit on memory use as well. This scenario makes perfect sense from a deployment perspective, for systems that would act in the stead of anti-virus products or for network appliances that inspect live traffic. If such systems were to impose too significant a latency on application start-up or network connections, people would stop using the product due to the loss in productivity and aggravation. If this real-time requirement is violated, accuracy becomes irrelevant because the solution is not used. This situation would also discourage us from considering the full breadth of dynamic features, as certain combinations may prove too compute intensive to be practical. While many works report sub-second classification time per datum, there appears to be no current consensus on where the threshold for "real-time" lies for malware classification, and most works simply emphasize that their solutions execute quickly.

Another consideration that is less frequently addressed is training time and scalability, particularly as corporate malware collections are on the order of hundreds of millions, if not billions, of samples (Spafford, 2014). In particular, it would be ideal for a malware training system to require only one pass over the training data, as data access often becomes the bottleneck at larger scales (Wojnowicz et al., 2016b). Not only does reducing the training time save money (as fewer resources are needed), but it also allows for tackling the problem of concept drift through change detection (Gama et al., 2004; Baena-Garcia et al., 2006; Bo-Heng Chen and Kun-Ta Chuang, 2014). This is a common method of dealing with the difficulties of adapting a model to a changing concept: a change detection algorithm determines when the concept has drifted significantly enough that accuracy may begin to decrease, at which point one re-trains the model on the newest data and switches all old models to the most recent and up-to-date version. An important consideration is that older data may still be useful to train on, making it necessary to balance between new data and older (yet still informative and representative) data. We refer the reader to Gama et al. (2014) for a survey of many approaches to change detection. In practice the change detection may not be needed if one instead trains at a fixed interval, say every few months. We are not aware of any work that quantifies this problem on a large dataset over many time frames.
7.3 Training and Inference Constraints

The most common type of constraints that are considered are run-time requirements. In particular many works have been concerned with real-time inference (Dahl et al., 2013; Alam et al., 2015; Khan et al., 2010), which usually implies a strict or moderate limit on memory use as well. This scenario makes perfect sense from a deployment perspective, for systems that would act in the stead of anti-virus products or network appliances that would inspect live traffic. If such systems were to impose too significant a latency on application start-up or network connections, people would stop using the product due to the loss in productivity and aggravation. If this real-time requirement is violated, accuracy becomes irrelevant because the solution is not used. This situation would also discourage us from considering the full breadth of dynamic features, as certain combinations may prove too compute intensive to be practical. While many works report sub-second classification time per datum, there appears to be no current consensus on where the threshold for "real-time" is for malware classification, and most works simply emphasize that their solutions execute quickly.

Another consideration that is less frequently addressed is training time and scalability, particularly as corporate malware collections are on the order of hundreds of millions, if not billions, of samples (Spafford, 2014). In particular it would be ideal for a malware training system to require only one pass over the training data, as data access often becomes the bottleneck at larger scales (Wojnowicz et al., 2016b). Not only does reducing the training time save money (as fewer resources are needed), but it also allows for tackling the problem of concept drift through change detection (Gama et al., 2004; Baena-Garcia et al., 2006; Bo-Heng Chen and Kun-Ta Chuang, 2014). This is a common method of dealing with the difficulties of adapting a model to a changing concept. Instead one uses a change detection algorithm to determine when the concept has drifted significantly enough that accuracy may begin to decrease. At that time one then simply re-trains the model on the newest data, and switches all old models to the most recent and up-to-date version. An important consideration, though, is that older data may still be useful to train on, making it necessary to balance between new data and older (yet still informative and representative) data. We refer the reader to (Gama et al., 2014) for a survey of many approaches to change detection. In practice the change detection may not be needed if one instead trains at a fixed interval, say every few months. We are not aware of any work that quantifies this problem on a large dataset over many time frames.

7.4 Specialized Needs

In a conversation about the constraints and metrics that must be met for a malware classification system, it is also important to discuss scenarios with specialized needs. These may be uncommon deployments or scenarios where a system designed for the majority of potential users does not satisfy important requirements. By the nature of such a scenario, we cannot enumerate all possible specialized needs. Instead we present an example scenario that has had some investigation, and how that may impact the measures of success.

A particular variant of the malware detection problem is to detect specific types of malware, such as ones that actively try to avoid detection (Stolfo et al., 2007; Kirat et al., 2014). This can be important for researchers and analysts who wish to track more sophisticated threats, or desire to study a particular type of malware. If such binaries are going to be manually analyzed afterwards, we want to make sure that selected binaries are worth an analyst's time, and so a high precision is required from the system. It would also be appropriate to evaluate the precision at multiple thresholds, reflecting the potential capacity to analyze such binaries on the part of a team that is abnormally busy, under normal workload, or blessed with excess availability.

Another way this may be desirable is based on the type of infrastructure that needs protection. If a company's services were deployed in a cloud environment, malware that brings down a single VM may not be a significant issue, as one can easily provision and replace the infected VM. However, malware that exfiltrates data on the VM to a third party may cause the loss of proprietary or confidential information, and thus be a heightened concern. In this case we want a malware evaluation model adjusted to the type of exploits which can cause the most damage to a particular entity and environment.

7.5 Evaluating Over Time

It is also important to evaluate a system based on the longevity of the model's utility. That is to say, we may desire a model that is more robust to concept drift, perhaps at some cost in terms of another metric. This would be important for any system that in some way has limited Internet connectivity, making it expensive (or impossible) to update the model over time. This does not necessarily prevent malware from trying to infect the device when online, or from someone with physical access attempting to install malware on the device. In this case the model needs to be robust to malware over longer periods of time to thwart such efforts until an update of the model can be completed.

In our view, Saxe and Berlin (2015) is one of the more important attempts at performing such an evaluation. They used the compile date provided in the PE header to split the dataset into before and after July 31st, 2014. Under this scenario their malware detection rate dropped from 95.2% to 67.7% for a fixed false positive rate of 0.1%. This dramatic drop shows the importance of considering time in evaluations, which is not a common practice. One issue is that the compile date in the PE header is easily modified, and so malware authors can alter the value seen. File first-seen date may be a usable alternative source for this information, but is necessarily less precise. Performing evaluations split in time like this also requires significant data from a wide breadth of time. This exacerbates the need for good benign data mentioned in section 2.
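A minimal sketch of such a time-split evaluation is shown below, assuming each sample carries a first-seen (or compile) date. The record layout and dates are hypothetical; the point is only that the split is made by time rather than at random.

```python
# A minimal sketch of a temporal train/test split: train only on samples
# first seen before the cutoff and test on those seen after it. The record
# fields and dates are hypothetical placeholders.
from datetime import date

def time_split(samples, cutoff):
    """samples: iterable of (features, label, first_seen_date) records.
    Returns (train, test) honoring temporal order."""
    train = [s for s in samples if s[2] < cutoff]
    test = [s for s in samples if s[2] >= cutoff]
    return train, test

corpus = [
    ("bytes-a", 1, date(2014, 3, 1)),
    ("bytes-b", 0, date(2014, 6, 15)),
    ("bytes-c", 1, date(2014, 9, 2)),   # lands in the test set
]
train, test = time_split(corpus, cutoff=date(2014, 7, 31))
print(len(train), "train /", len(test), "test")
```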
Not addressed in Saxe and Berlin (2015), but also worth considering, is evaluating the performance on the test set as a function of time, which would allow one to more precisely characterize the longevity of generalizing information. The EMBER dataset follows this evaluation protocol, with date-first-seen information provided by an external source rather than the time stamp of the file (Anderson and Roth, 2018). This avoids the problems caused if the malware lies about its creation date, but has less precision. Valuable information could be obtained by doing a comparative study between these different types of date information, seeing how well they correlate, and how the choice impacts results.

There are still many important questions related to a malware model's performance over time that have not been answered. Most notably, for how long is a model effective? What are the important factors in a model's longevity (i.e., is data or the model type more or less important)? How old can training data be before it becomes uninformative or even detrimental to training? Presuming that such a threshold for training data exists, do benign and malicious data have a different "shelf-life" in terms of utility for training?

7.6 Evaluating Under Adversarial Attack

Machine learning models are generally susceptible to adversarial attacks. In this context, an adversarial attack against a machine learning model means that an abstract adversary is able to modify an input such that it has not meaningfully changed, yet a machine learning model will run an increased risk of misclassifying the modified example. We refer the reader to Biggio and Roli (2018) for a review of the history of adversarial machine learning and how it is normally performed. For malware classification, we have a real live adversary (the malware's author) and the use of adversarial machine learning techniques can become a new tool for the malware author to avoid detection. As such the problem of attacking and defending malware classifiers against such attacks is an important area for study, and potentially included as part of the evaluation metrics (Fleshman et al., 2018).

The application of these techniques to the malware space is not trivial, however. Normally an input crafted by an adversary will modify its input parameters over a continuous space of values, all of which are valid inputs. For example, in an image the pixel values will change to other adjacent pixel values. However binaries can't be altered in the same way, and changing arbitrary bytes can be destructive to the executable's function.

Anderson et al. (2017) showed one of the first attack methods for arbitrary binaries against arbitrary malware detectors (including regular AV products). They did this by defining a space of possible non-destructive transformations that could be applied to the binary so that its functionality would be maintained. They then trained a reinforcement learning algorithm to learn which transforms it should apply. Importantly, by uploading a sample of files to VirusTotal they found that their adversarial modifications, which have no impact on functionality and no awareness of how the AV products worked, were able to evade several of them.
To circumvent the issue of arbitrary byte changes breaking an executable, Kolosnjaji et al. (2018) and Kreuk et al. (2018) concurrently developed an attack that avoids this difficulty. They added an unused section to the binary, which is allowed by the PE format, and constrained all of their modifications to this lone section. This allows for inserting arbitrary byte content to trick the detector, but avoids impacting the functionality in any way. While developed as an attack against the byte-based CNNs discussed in subsection 5.3, the approach can be leveraged against other methods as well. Recently Fleshman et al. (2019) proposed an approach that defeats this attack in all cases, but at a cost to accuracy.

These recent works have operated in the threat model that the adversary (malware author) can only add features. While this threat model is reasonable, it is not perfect. This is especially true in the static analysis case, where whole-file transformations like packing already exist. We expect future work to attempt new ways to add information in the static case. The dynamic case is less clear, as the malicious behavior needs to eventually run for the malware to be malicious. This is muddied further by malware detecting its environment, as discussed in subsection 3.2. For dynamic analysis, we expect malware authors would further invest in VM evasion techniques, and there always exists the potential for more sophisticated attempts to hide actions taken from the inspecting VM.

While these results are all new, they represent an important area in which we must evaluate each model's accuracy both against unseen binaries, and adversarially altered ones. This is in addition to the adversarial techniques that were developed to avoid classical AVs (Poulios et al., 2015; Thomas Kittel et al., 2015).
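To illustrate the add-only threat model, the sketch below is a naive random-search variant of the append-an-unused-section idea: random byte chunks are appended and kept only when they lower a detector's score. The score_fn here is a trivial stand-in, not a real model, and this is not the method of any of the cited works; it only shows why purely additive changes preserve functionality while still giving the adversary room to move.

```python
# A minimal sketch of the "add-only" threat model: bytes are appended to a
# new, unused region and kept only when they lower the detector's score,
# leaving the original program untouched. score_fn is a stand-in detector.
import random

def score_fn(data: bytes) -> float:
    """Hypothetical detector: returns a maliciousness score in [0, 1]."""
    return sum(data) % 1000 / 1000.0

def append_only_attack(binary: bytes, steps=100, chunk=32, seed=0):
    rng = random.Random(seed)
    payload = b""
    best = score_fn(binary)
    for _ in range(steps):
        candidate = payload + bytes(rng.randrange(256) for _ in range(chunk))
        s = score_fn(binary + candidate)
        if s < best:          # keep the appended bytes only if they help
            best, payload = s, candidate
    return binary + payload, best

original = b"MZ...original program bytes..."
evaded, score = append_only_attack(original)
print(f"final score {score:.3f}, grew by {len(evaded) - len(original)} bytes")
```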
8 Future Research Potential

At this point we have now reviewed the current data, models, and evaluation methods used for malware classification. This includes many challenges, and some potential improvements that exist throughout the whole process. While some areas have received more and less attention than others, we note that there exist relevant techniques in machine learning that have gone almost or completely un-applied to the problem of malware classification. We briefly review a number of these that we believe could lead to future progress.

8.1 Multiple Views of Malware

Given the wide array of potential feature sources, extraction methods, representations, and models that can be used for malware classification, we could consider each of these combinations a different "view" of the malware problem. Each will be biased toward detecting different types of malware with different representations. This can be a beneficial scenario for ensembling, where multiple models are combined to form a decision. It is commonly found that ensembles work best when the members of the ensemble make uncorrelated errors, which is more likely when each model has a different "view" of what malware looks like.

Despite this seemingly good fit, we have found little work using ensemble methods to combine multiple distinct feature types. Singh et al. (2015) used an SVM to combine the predictions of three models and found it to perform considerably better (though they did not connect that this is a form of Stacking ensemble (Wolpert, 1992)). However, in their experiments all three models used assembly instructions as the source feature, reducing the potential diversity of the model. Masud et al. (2008) used three different feature types (assembly, byte n-grams, and PE header imports), but instead combined all feature types into one large feature vector. Eskandari et al. (2013) did some work providing a different take on looking at a binary in multiple different ways. They used API call trace features collected by both static and dynamic analysis, where both were used at training and only static analysis was used during testing. They modeled what decisions might have been made in the static version from the dynamic, so that at testing time the faster static-only analysis could be used.

There may be many other ways to exploit the rich diversity in feature and model types that can be applied to malware classification, instead of focusing on finding the single method that performs best. For example, it is possible that different model and feature types work best for differing types of binaries. A Mixture of Experts (Jacobs et al., 1991) ensemble could be built that learns to recognize which models will work best on a given binary, and have that limited subset vote on the decision. We believe this is a mostly unexplored avenue for future improvements in malware classification.

8.1.1 Static and Dynamic Staged Views

While we have discussed some work that uses features collected from both static and dynamic analysis (Islam et al., 2013; Damodaran et al., 2015), we have not seen any work that considers models that are first used in a static pipeline, followed by a dynamic one if the static-feature model was unsuccessful. This intentionally keeps the feature spaces disjoint, and creates a cascade reminiscent of those used in early face detection work (Viola and Jones, 2001). This approach recognizes that dynamic analysis is considerably more expensive than static analysis, both in terms of time to decision and computational resources. It is unrealistic to expect to run everything through dynamic analysis, and so a tiered approach should be used instead. This appears to be an accepted and common practice amongst AV vendors [12], but we are not aware of any published research recognizing the need for such staged approaches.

[12] For example, see https://cloudblogs.microsoft.com/microsoftsecure/2017/12/11/detonating-a-bad-rabbit-windows
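A minimal sketch of such a staged pipeline is given below. Both models are hypothetical stand-ins exposing a predict_proba-style interface, and the confidence threshold is an assumption that would need tuning; the point is that the expensive dynamic stage runs only when the static stage is unsure.

```python
# A minimal sketch of a static-then-dynamic cascade: a cheap static model
# answers when confident, and only uncertain files are escalated to the
# costly dynamic (sandbox) stage. Both models are hypothetical stand-ins.

def staged_classify(file_bytes, static_model, dynamic_model, confident=0.95):
    p = static_model.predict_proba(file_bytes)   # fast: no VM, no execution
    if max(p) >= confident:
        return p.index(max(p)), "static"
    # Fall back to the expensive sandbox run only when needed.
    p = dynamic_model.predict_proba(file_bytes)
    return p.index(max(p)), "dynamic"

class _Stub:  # placeholder model for the example
    def __init__(self, p): self.p = p
    def predict_proba(self, x): return self.p

static, dynamic = _Stub([0.6, 0.4]), _Stub([0.1, 0.9])
print(staged_classify(b"...", static, dynamic))  # (1, 'dynamic'): escalated
```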
8.2 Dealing with Labeled Data Expense

Given the time-intensive labeling and data gathering issues discussed in section 2, further research is warranted in how to perform malware classification with minimal labeled data. In this vein, it is surprising that no work has yet been done to apply semi-supervised learning to malware classification. Semi-supervised learning involves building a classifier using both labeled and unlabeled data (Zhu, 2008). Semi-supervised learning also provides a training-time workaround for when only a few anti-virus products mark a binary as malicious, casting doubt as to its true labeling. A semi-supervised solution would be to use all the data for which we are sure of the label (no anti-virus fires, or almost all fire) as labeled training data, and the ambiguous cases as unlabeled training data. In this way we avoid poisoning the learning process with bad labels, but do not throw away potentially valuable information about the decision surface. The best type of semi-supervised algorithm to use in this domain is not obvious, and many algorithms make differing assumptions about the nature of unlabeled data and the impact it should have on the decision surface.

If we are to use a small amount of labeled data, it is also important that we label the most informative data possible. Active learning is a technique that can help with this. Active learning starts with an initial dataset, and a collection of unlabeled data. The model then can select data points for which it would like to request the label. While this framework is often used to help derive algorithms that learn more efficiently and quickly, it is also directly applicable to deciding which data points we should get labels for. The potential impact of having an active learning system was shown by Miller et al. (2016), where a simulation of human labelers found that their system's detection rate could be improved from 72% to 89%, while maintaining a 0.5% false positive rate. However, their approach to active labeling was heuristic, and did not leverage the full potential literature of active labeling methods. There has been little other research in this area for malware classification (Moskovitch et al., 2009b), and so many questions remain. Are the features best for prediction also the best for active learning? What kinds of active learning algorithms work best for this domain? How are these methods impacted by the concept drift of binaries over time? All of these questions, as far as the authors are aware, have yet to be explored.
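A minimal sketch of pool-based uncertainty sampling, one of the simplest active learning strategies, is given below. The probability matrix is a hypothetical placeholder for the output of any probabilistic malware classifier.

```python
# A minimal sketch of pool-based active learning via uncertainty sampling:
# each round, the binaries whose predicted label is least certain are sent
# to a (simulated) human analyst for labeling.
import numpy as np

def most_uncertain(proba, budget=10):
    """proba: (n, C) class probabilities over the unlabeled pool. Returns
    the indices whose top-class confidence is lowest, i.e., the files whose
    labels the model would benefit from most."""
    confidence = proba.max(axis=1)
    return np.argsort(confidence)[:budget]

# Hypothetical pool of 5 unlabeled binaries, C=2 classes.
proba = np.array([[0.99, 0.01],
                  [0.55, 0.45],   # uncertain: near the decision boundary
                  [0.80, 0.20],
                  [0.51, 0.49],   # most uncertain: labeled first
                  [0.95, 0.05]])
print(most_uncertain(proba, budget=2))  # [3 1]
```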
8.3 Learning with Class Imbalance

Most machine learning algorithms presume equally balanced classes (He and Garcia, 2009) at training time, and thus also at testing time. Learning from imbalanced classes can naturally cause the algorithm to favor the more numerous class, but also be detrimental in failing to learn how to properly separate the classes. Class imbalance is a common problem in the malware domain (Patri et al., 2017; Cross and Munson, 2011; Li et al., 2017; Yan et al., 2013; Moskovitch et al., 2009a; Yan, 2015), which makes it especially important to consider the evaluation metric used for both malware detection and family classification (as mentioned in subsection 7.1 and subsection 7.2).

One way to tackle such imbalance problems is to over-sample the infrequent classes or under-sample the more populous ones. These approaches are common, but can be out-performed by a more intelligent over-sampling or under-sampling process (Laurikkala, 2001). Indeed there is an existing literature for such methods focusing on both approaches (Lemaître et al., 2017), which have seen almost no application to the malware problem. Exploring their applicability to this domain, and how such methods may be adapted for the difficulties of sequence- and graph-based features, is, as far as we are aware, an open problem area.

Oversampling the minority class is an intuitively desirable approach, as it allows us to use the larger amount of majority class data, and thus more data to train on overall. However, naive oversampling can lead to overfitting (Prati et al., 2009; Kubat and Matwin, 1997). One technique to do this more intelligently is to interpolate new datums from the training distribution; the popular algorithm SMOTE takes this approach and has many variants as well (Nguyen et al., 2011; Han et al., 2005; Chawla et al., 2002). This also assumes a natural fixed-length feature vector, and that interpolated instances are intrinsically meaningful. This may not be the case for all malware features, and may not be readily possible for sequence- or graph-based approaches to malware classification.

Under-sampling is often done intrinsically for malware detection, where the ratio of malicious to benign samples available is large. While less intuitively desirable than oversampling, smarter approaches for this exist as well. One approach is to train multiple models, with different subsets of the majority class used in each model, allowing for the creation of an ensemble (Liu et al., 2009). Other techniques also attempt to more intelligently select the sub-samples (Kubat and Matwin, 1997).

We are aware of relatively little work that explores the problems of class imbalance in this space. Moskovitch et al. (2009a) investigated the training set proportion's impact on differing test-set proportions, but worked under the assumption that malware will make up 10% of seen files in a network stream, and in later work simply set the training ratio equal to this ratio (Moskovitch et al., 2009b). Raff and Nicholas (2017b) looked at developing an algorithm-specific oversampling technique, evaluating on an assumption of equal class ratio. It is generally thought that malware will make up the minority of samples in deployment, which is supported by a large study on over 100 million machines which found that the number of benign singletons outnumbers malicious ones at a ratio of 80:1 (Li et al., 2017). However, they also found that this ratio changed over time. We think further study is warranted to determine how applicable this rule of thumb is. Do all institutions share the same ratio of benign to malicious, or would certain institutions or individual users who are targets for malware authors have lower ratios? Similarly, do different types of equipment (e.g., desktop computer, router, or mobile device) see differing ratios of malware? How do these rates change with geographical area? Lastly, we suspect there are a number of niche professions with special needs that will see differing distributions. For example, cyber crime investigators inspecting a recovered hard drive may expect to see a much higher ratio of malware given their targeted goals.
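A minimal sketch in the spirit of the undersampling-ensemble idea referenced above (Liu et al., 2009) is given below: each member is trained on all minority samples plus a different random subset of the majority class, and the members' scores are averaged. train_model is a hypothetical stand-in for any base learner.

```python
# A minimal sketch of an ensemble built from undersampled subsets: each
# member sees every malicious sample plus a different random, equal-sized
# subset of the benign class, so no majority data is wasted overall.
import random

def undersample_ensemble(benign, malicious, train_model, n_members=5, seed=0):
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        subset = rng.sample(benign, k=len(malicious))  # balance each member
        members.append(train_model(subset + malicious,
                                   [0] * len(subset) + [1] * len(malicious)))
    return members

def vote(members, x):
    return sum(m(x) for m in members) / len(members)  # mean score in [0, 1]

# Usage with a trivial stand-in learner (always predicts "malicious"):
train_model = lambda X, y: (lambda x: 1)
members = undersample_ensemble(["b%d" % i for i in range(80)], ["m1", "m2"],
                               train_model)
print(vote(members, "suspicious.exe"))
```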
8.4 The Good, the Bad, and the Annoying

For malware detection, we also note that the distinction of malicious versus benign may be overly strong. Some anti-virus products refer to what are known as "potentially unwanted programs" [13] (PUPs). This third class sits in a gray area between benign and malicious. While these PUP binaries do not intentionally do any harm, they may be annoying or aggravating to the user. They also can present a labeling challenge, where it is not clear on which side of the line between benign and malicious they should fall.

While one could treat this as a three-class problem, it would be more accurate to apply ordinal regression to alleviate the issue. Ordinal regression is classification where the class labels are ranked and the error grows with the distance between the correct label and the predicted label (Gutierrez et al., 2016). These errors need not necessarily be linear. Consider for example our case of benign, PUP, and malicious labels. The error for marking benign as PUP could be 0.1 points, and PUP as malware could result in 0.9 units of error. Then marking something benign as malicious would incur 0.1 + 0.9 = 1.0 units of error. This type of ordinal regression could be further extended to take into account the severity of certain types of malware. For example, one could instead classify binaries as "benign, PUP, low risk malware, high risk malware". This would naturally require a distinction between risk levels for malware, which could be user dependent. One reasonable distinction between low and high risk malware could be data loss. A machine attached to a botnet would then be low risk, but ransomware would be high risk. Such alternative labeling schemes are an open area for research and discussion.

[13] As an example, the free anti-virus ClamAV has signatures for such software: https://www.clamav.net/documents/potentially-unwanted-applications-pua

8.5 Automatic Malware Remediation

One unit of functionality currently provided by anti-virus products is the removal of found malware. This is a critical function of many anti-virus products, and the time it takes analysts to devise a removal strategy can become part of the lag time between initial malware discovery and updates being sent to user systems. Automating this task, even to only an incremental degree, would help reduce human time spent on this problem and provide relief to users when new malware is found.

We are not aware of any work that has yet explored the ability of a machine learning system to determine how to remove malware from an already infected system. Given a corpus annotated with the various methods by which malware may attempt to gain persistence on a machine, it would seem plausible to detect these mechanisms and develop a "removal policy", a series of steps to perform that would remove the malware. This would fall into the domain of reinforcement learning as a method of learning and executing such policies. Multiple malware infections could complicate this task in interesting ways, increasing the challenge involved. The task is also impacted by the operating system version in use, updates installed and available, and other applications installed on the machine. It is unlikely that initial work will address all of these confounding factors at once, but the situation presents a challenging area for AI that could have important practical consequences if successful. One small advantage to the situation, because it is applied only to found malware, arises since it is realistic to use more compute-intensive methods for determining a removal policy. This task is unexplored at the moment, and we are not aware of any corpora with such labels.

8.6 Malware, Natural Language Processing, and Reports

A common task for malware analysts is to generate reports of their findings, which are shared with other security professionals to make them aware of new threats, how to identify the malware, and other valuable information. These documents may be in any format, and may contain varying levels of detail. Little work has been done on this data, mostly consisting of performing a variety of NLP tasks on the reports themselves (Lim et al., 2017; Pingle et al., 2019).

In reality, the reports represent one mode of a multi-modal data tuple: the textual report of behavior and unique identifiers, and the unstructured binary executable(s) that are the subject of the report. A regular occurrence is a new malware family being discovered, and these reports may serve as a source of additional information that could be used to detect novel malware families with limited labeled examples. Other possibilities include work in generating these reports from the malware itself, following inspiration from the automated statistician work (Nguyen and Raff, 2019; Steinruecken et al., 2019; Hwang et al., 2016; Grosse et al., 2012; Lloyd et al., 2014). This could aid in both developing and adapting models more quickly, as well as disseminating information faster. A variety of possibilities exist at this unique intersection of malware detection and NLP that have yet to be explored.
" }, { "url": "http://arxiv.org/abs/1912.13046v1", "title": "A New Burrows Wheeler Transform Markov Distance", "abstract": "Prior work inspired by compression algorithms has described how the Burrows\nWheeler Transform can be used to create a distance measure for bioinformatics\nproblems. We describe issues with this approach that were not widely known, and\nintroduce our new Burrows Wheeler Markov Distance (BWMD) as an alternative. The\nBWMD avoids the shortcomings of earlier efforts, and allows us to tackle\nproblems in variable length DNA sequence clustering. BWMD is also more\nadaptable to other domains, which we demonstrate on malware classification\ntasks. Unlike other compression-based distance metrics known to us, BWMD works\nby embedding sequences into a fixed-length feature vector. This allows us to\nprovide significantly improved clustering performance on larger malware\ncorpora, a weakness of prior methods.", "authors": "Edward Raff, Charles Nicholas, Mark McLean", "published": "2019-12-30", "updated": "2019-12-30", "primary_cat": "cs.CR", "cats": [ "cs.CR", "cs.IR", "cs.LG", "stat.ML" ], "main_content": "Introduction

Compression algorithms can be used to measure the similarity between arbitrary sequences with little required domain knowledge or expertise. They have been used in bioinformatics (Mantaci et al., 2008), time series classification and clustering (Keogh et al., 2004), and malware analysis (Borbely, 2015). The bioinformatics and malware analysis domains can be particularly attractive for compression-based similarity measures. Both of these domains involve "short" sequences of tens of thousands of steps, which can often reach 10^8 steps in length. Other machine learning techniques often fail to work when dealing with sequences of such variety and length.

In this work, we note that the Extended Burrows Wheeler Transform (EBWT) (Mantaci et al., 2005) is a compression-based distance metric designed explicitly around the Burrows Wheeler Transform (BWT) (Burrows and Wheeler, 1994) algorithm for use in bioinformatics. While EBWT has been useful in that domain, we have discovered a number of weaknesses in this method that reduce its effectiveness and prevent it from being useful in other domains, such as malware detection. To remedy these issues, we develop a new BWT-inspired distance measure that we refer to as the Burrows Wheeler Markov Distance (BWMD). Unlike EBWT, BWMD is a valid [1] distance metric, and can scale to far larger problems that EBWT cannot tackle due to computational limits. Compared to other compression-based distances, our BWMD is the first to work by embedding a sequence into a Euclidean vector space. This gives a significant advantage to our approach in terms of clustering and query speed. This advantage is achieved by using algorithms that are designed around Euclidean distance, like k-means, that other methods cannot leverage.

[1] A distance metric is considered true, or valid, if it adheres to the properties of reflexivity, symmetry, and triangularity.

We will begin by reviewing related work in the compression distance space, and the needed details of the BWT, the prior method EBWT, and a related method known as LZJD, in § 2.
Next we will begin with a description of the new BWMD in § 3. In § 4 we will develop a number of new theoretical insights, proving 1) how EBWT has dramatic failure cases that violate our intuition of how a distance measure should work, 2) that BWMD does not have these failure cases, 3) comparing how EBWT, BWMD, and LZJD handle randomness, and 4) that BWMD has unique properties in this regard. We will then move into empirical results in § 5 by comparing BWMD with EBWT on DNA sequence clustering, where we show that BWMD is able to cluster DNA sequences of varying lengths that EBWT fails to cluster in a meaningful way. In § 6 we will show how BWMD is able to scale to malware classification and clustering tasks that are beyond EBWT's computational ability. Though LZJD provides better classification accuracy at this task, BWMD provides superior clustering results. Finally we will conclude in § 7.

2 Compression Distances

Compression in general can be seen as one way of performing many machine learning tasks, and has deep connections to statistical methods. Following this intuition, Li et al. (2004) introduced the Normalized Information Distance (NID) as a method of measuring similarity using compression. Given a function K(x) that computes the Kolmogorov complexity (i.e., returns the length of the shortest computer program that produces x as output), and the associated conditional Kolmogorov complexity K(x|y) (i.e., the length of the shortest computer program that produces x as output given y as input), the NID is a metric as defined in (1). The Kolmogorov complexities are uncomputable functions, making NID of no practical use.

NID(x, y) = max(K(x|y), K(y|x)) / max(K(x), K(y))    (1)

To remedy this situation, Li et al. (2004) went on to present the Normalized Compression Distance (NCD), which replaces the uncomputable K(x) with C(x), which returns the length of x in bytes after running a compression algorithm. To approximate K(x|y), the concatenation of x and y (denoted by x‖y) is used, giving the final form of NCD in (2), where C(x) is the (integer) length of object x when compressed. The compression algorithm chosen impacts the quality of the results. The bzip and LZMA algorithms have been popular due to a combination of reasonable run-time performance and generally satisfactory compression ratios.

NCD(x, y) = (C(x‖y) − min(C(x), C(y))) / max(C(x), C(y))    (2)

NCD, and compression-based distances in general, do not require significant feature engineering to be applied in practice. This has made them popular for genomic phylogeny (Li et al., 2004; Cilibrasi and Vitanyi, 2005) and malware analysis (Bayer et al., 2009; Apel et al., 2009; Bailey et al., 2007; Karim et al., 2005), where sequences are longer than what most other techniques can handle (10^4 to 10^8 steps in length), and it can be difficult to extract more sophisticated features manually. However, using compression algorithms naively in NCD leads to difficulties with computational scalability and reduced accuracy/failure cases due to the fact that compression algorithms were not designed for similarity analysis (Borbely, 2015; Cebrián et al., 2005). For these reasons, some have looked at converting known compression algorithms into explicit distance measures. For example, the Lempel Ziv Jaccard Distance (LZJD) (Raff and Nicholas, 2017a) converts the LZMA algorithm into a true distance measure, and for malware classification tasks has superior accuracy and runtime compared to NCD [2].

[2] Java (Raff and Nicholas, 2018b) and Python (Raff et al., 2019) implementations of LZJD are available.
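To make the preceding definitions concrete, below is a minimal sketch of the NCD from Eq. (2), using zlib as a stand-in compressor C(·); real evaluations more often use bzip or LZMA, as noted above.

```python
# A minimal sketch of NCD (Eq. 2) with zlib standing in for the compressor.
import zlib

def C(x: bytes) -> int:
    """Compressed length in bytes, the stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    return (C(x + y) - min(C(x), C(y))) / max(C(x), C(y))

a = b"the quick brown fox jumps over the lazy dog" * 10
b = b"the quick brown fox jumps over the lazy cat" * 10
c = bytes(range(256)) * 2
print(ncd(a, b))  # small: near-duplicate content compresses well together
print(ncd(a, c))  # larger: unrelated content shares little structure
```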
2.1 Extended Burrows Wheeler Transform

The Burrows Wheeler Transform (BWT) (Burrows and Wheeler, 1994) is a core component of the bzip compression algorithm, and has been widely used in information retrieval applications due to its ability to accelerate search queries (Ferragina and Manzini, 2005). The BWT takes an input string u of length n = |u|, over an alphabet Σ, and produces a new string u′ = bwt(u). Through the use of an end-of-file (EoF) marker, the BWT is invertible, so u can be recovered from u′ without loss.

BWT's utility in compression is best understood through example. Consider Table 1, where the BWT of the string "easypeasy" is shown. BWT adds a special EoF marker "$", and lexicographically sorts every single-character rotation of the string (observe column F, which is in sorted order). The BWT output is then the last column of each string, shown in column L.

Table 1: BWT
  F | Rotation   | L
  $ | $easypeasy | y
  a | asy$easype | e
  a | asypeasy$e | e
  e | easy$easyp | p
  e | easypeasy$ | $
  p | peasy$easy | y
  s | sy$easypea | a
  s | sypeasy$ea | a
  y | y$easypeas | s
  y | ypeasy$eas | s

By computing the BWT version of the string, we can see how runs of the same character ("ee", "aa", "ss") have been created that previously did not exist. Simple run-length encoding can then be applied to produce a compressed version of the string. These sorted rotations can be computed in O(n) time, and provide a simple method of compression.

To turn this into a distance measure, Mantaci et al. (2005) developed the Extended Burrows Wheeler Transform (EBWT). The EBWT works by defining the BWT over a pair of inputs u and v, and computing the sorted order of both sequences. An example of this is shown in Table 2.

Table 2: EBWT computation example
  bwt(u=bcaa) | bwt(v=ccbab) | EBWT Merge | Source
  aabc        | abccb        | aabc       | u
  abca        | babcc        | abca       | u
  bcaa        | bccba        | abccb      | v
  caab        | cbabc        | babcc      | v
  -           | ccbab        | bcaa       | u
  -           | -            | bccba      | v
  -           | -            | caab       | u
  -           | -            | cbabc      | v
  -           | -            | ccbab      | v

The distance between the two sequences u and v is defined by (3), where rep(i) returns how many times the i'th source occurred in a row, and only t source transitions occurred.

ebwt(u, v) = Σ_{i=1}^{t} max(rep(i) − 1, 0)    (3)

Again, this concept is easier understood through example. Considering Table 2, we can see that the source string sequence is uuvvuvuvv. If we group this by transitions, we get u²v²uvuv². Thus the distance ebwt(u, v) = 1 + 1 + 0 + 0 + 0 + 1 = 3.

Mantaci et al. (2005) developed the EBWT for applications in bioinformatics, and developed theory to show a number of situations under which the EBWT will perform well or have desirable properties. However, it is still expensive to compute, requiring O(|u| + |v|) time for every distance computation. This makes EBWT less attractive as bioinformatic sequences become longer, and reduces its utility in other domains in which compression distances have found use, such as malware classification. While it was known that EBWT did not satisfy the triangle inequality, preventing it from being a true distance metric, previously unreported theoretical issues also exist. We will discuss these issues in § 4.
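A minimal sketch of the BWT as just described is given below: sort all rotations of the input (with the "$" EoF sentinel) and take the last column. This naive version is for illustration only; production implementations use suffix-array constructions to reach the O(n) behavior cited above.

```python
# A minimal, naive BWT by sorted rotations, mirroring Table 1.

def bwt(u: str, eof: str = "$") -> str:
    s = u + eof
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("easypeasy"))  # 'yeep$yaass': runs like 'aa'/'ss' aid run-length coding
```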
3 Burrows Wheeler Markov Distance

Inspired by the prior work we have just discussed, we now develop a new distance measure based on the Burrows Wheeler Transform. We will refer to our method as the Burrows Wheeler Markov Distance (BWMD), and it is simple to implement. To begin, consider again Table 1, where BWT("easypeasy") is shown. The BWT's effectiveness as a compression algorithm comes explicitly from its ability to re-order the content such that repetitions are reduced to a first-order occurrence. This is why run-length encoding is effective. Because first-order compression is independently effective after the BWT, we do not need to consider more complex interactions over the extent of a file. We also do not care about the invertibility of any transform. Our goal is to define a new feature space where we can perform effective machine learning and information retrieval. So we seek to build a small statistical summary of the data, rather than build an object from which we can recreate the original data. By measuring the similarity of these small summaries, we measure the similarity of the underlying sequences.

Given that first-order compression is effective with the BWT, we chose to select a first-order statistical model. In particular, we can use a Markov model of the probability of observing token u′_i given the previous token u′_{i−1}. The transition matrix T ∈ R^{|Σ|×|Σ|} can then be used as a statistical summary of the entire sequence BWT(u). This is all that is needed to describe BWMD, and a succinct description is given below. Each step takes O(n) time for an input sequence of n bytes. 1[z] is the indicator function, which returns 1 if z is true, and 0 otherwise, and α, β index the rows and columns of the transition matrix.

1. For each sequence u in a corpus of size N
2. Compute u′ = BWT(u)
3. Estimate the flattened Markov transition vector
   x[α + β·|Σ|] = (1/(|u′|−1)) · Σ_{i=2}^{|u′|} 1[u′_i = α ∧ u′_{i−1} = β]
4. Normalize x such that x[i] = √(x[i]) / √2.

After step (4) in the above process, we obtain from the input u a single feature vector x which we might use, in place of u, in machine learning or information retrieval algorithms. Regardless of the length of the input sequence u, the size of the vector x will depend only on the alphabet size |Σ|. When working on raw bytes, this would be a 256² feature vector. While such a vector takes up to 256 KB per sequence u, the individual input data objects we consider in this work range from 1 MB in size up to 400 MB. This makes the BWMD description quite compact by comparison. When u is shorter in length, the vector x can be stored in a sparse form, making the memory cost O(min(|u|, |Σ|²)).

The normalization in step (4) is also chosen intentionally. The Hellinger distance is a metric over probability distributions. For the case of two discrete probability distributions P = (p_1, …, p_k) and Q = (q_1, …, q_k), the Hellinger distance is defined as follows:

H(P, Q) = (1/√2) · √( Σ_{i=1}^{k} (√p_i − √q_i)² )    (4)

Due to the form of (4), the Hellinger distance corresponds to the Euclidean distance between two transformed and scaled versions of P and Q. By using the square root of the coefficients in step (4), divided by √2, we have a feature vector x that has been placed into a space where the Euclidean distance can be computed as usual. The results are then equivalent to the Hellinger distance between Markov transition vectors, giving us a statistically sound interpretation for BWMD.
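A minimal sketch of the four-step BWMD embedding is given below, repeating the bwt() helper from the previous sketch so it is self-contained. The tiny DNA-plus-sentinel alphabet is an assumption made only so the 25-dimensional vector is easy to inspect; for raw bytes the alphabet, and hence the vector, would be far larger as described above.

```python
# A minimal sketch of the BWMD embedding: BWT, first-order transition
# frequencies, then the Hellinger (sqrt / sqrt(2)) normalization.
import numpy as np

def bwt(u: str) -> str:
    s = u + "$"
    return "".join(r[-1] for r in sorted(s[i:] + s[:i] for i in range(len(s))))

def bwmd_embed(u: str, alphabet: str = "$ACGT") -> np.ndarray:
    up = bwt(u)                            # step 2: BWT (appends the '$' EoF)
    idx = {c: i for i, c in enumerate(alphabet)}
    k = len(alphabet)
    x = np.zeros(k * k)
    for prev, cur in zip(up, up[1:]):      # step 3: x[alpha + beta*|Sigma|]
        x[idx[cur] + idx[prev] * k] += 1.0
    x /= max(len(up) - 1, 1)               # counts -> transition probabilities
    return np.sqrt(x) / np.sqrt(2)         # step 4: Hellinger scaling

a = bwmd_embed("ATTACGGATTACA")
b = bwmd_embed("ATTACGGATTACC")
print(np.linalg.norm(a - b))  # Euclidean here == Hellinger on Markov vectors
```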
That the BWMD is a valid distance metric also follows immediately from the use of the Euclidean distance, which is well known to be a valid metric, a property that the EBWT lacks. We explicitly use the Hellinger distance over other alternatives, like KL-divergence, because it corresponds exactly to the Euclidean distance after transformation. This makes it a valid distance metric for which we can make mathematical statements about its behavior with little work, and prove that it does not have the same shortcomings as prior methods in this space. In addition, most other clustering and fast retrieval algorithms are built around the Euclidean distance, making our method compatible with a maximal number of complementary techniques. This is not the case for any prior compression-based measure. The ability to use algorithms like k-means, where other alternatives cannot, provides BWMD with advantages in terms of clustering accuracy, as well as computational efficiency to handle big data.

Note that because of the BWT, our approach is not equivalent to 2-grams. Our comparison with LZJD, which can be interpreted as an adaptive variable-length gram (Raff and Nicholas, 2017b), and BitShred, which uses 16-grams, will show that our method is meaningfully more effective than simple n-gram approaches.

4 Theoretical Results

We begin by developing a stronger theoretical understanding of our new method, as well as the prior approach EBWT. Prior works have looked at a number of properties of the EBWT (Mantaci et al., 2007, 2008, 2005), and describe situations in which EBWT will behave as a metric for a subset of possible inputs, and that it is invertible like the standard BWT. However, our interest in BWT is as a general purpose similarity measure for information retrieval and machine learning applications. We begin by showing three undesirable properties of the EBWT that reduce our confidence in its use for such applications. Then we will investigate the nature of our new BWMD in these same cases where EBWT might fail.

4.1 EBWT Shortcomings

First we show a simple property that is a direct result of the EBWT measuring distance as a function of repeated source sequences. When we have two strings u and v in any alphabet Σ, it is necessarily the case that the distance is bounded below by the difference in sequence lengths |u| and |v|. If u and v differ significantly, we are unlikely to be able to make meaningful similarity comparisons.

Theorem 1. The distance ebwt(u, v) ≥ ||u| − |v|| − 1.

Proof. Consider any two strings u and v. The minimum distance involves the maximum number of transitions between string sources. If |u| < |v|, that means there can be at most 2·|u| transitions, going back and forth between u and v in the merging. That necessitates |v| − |u| repetitions at the end. Given the definition of ebwt in (3), that means the minimum distance between u and v must be |v| − |u| − 1.
Given the insight from Theorem 1, we move on to a more serious departure from our intuition of how a distance measure should behave. In particular, if u is a subset of v, we should expect the distance between u and v to be small. Instead, it is possible for EBWT to return the maximal distance under this scenario. Theorem 2. It is possible for ebwt(u, v) = |v|+|u|\u22122, the maximum possible distance, even if u \u2282v. Proof. Consider the string u = an1 and v = an2, such that n2 > n1, |u| = n1 and |v| = n2. Because of the topographical sorting, all rotations of u and v will have the same characters, and so the sorting will only resolve once the max substring length is reached. Since all rotations in u are of length n1, which is shorter than n2, a sorting will place all rotations of u before any rotations of v. This results in a transition pattern of un1vn2, and thus ebwt(u, v) = n1 \u22121 + n2 \u22121 = |u| + |v| \u22122. Theorem 2 de\ufb01es our expectations in the case of similar inputs. If we use the behavior of the NID from (1), we would expect the distance in this scenario to be small. Consider the proof\u2019s example with u = an1 and v = an2: we would expect NID(u,v) \u2264log(n2/n1). This is because v could be encoded as the sequence u repeated n2/n1 times, which in the worst case, can be represented in a number of bits logarithmic in the value being encoded (i.e., the nature of any big-integer representation is that the maximum value that can be represented grows exponentially with a doubling of the bits). We now show that EBWT likewise surprises us in the case of dissimilar inputs, where we can have u and v with no overlap in content, but EBWT identi\ufb01es as having maximal similarity. Theorem 3. It is possible for ebwt(u, v) = 0, even if there exists no shared characters between u and v. More formally, there exists u, v and an alphabet |\u03a3| \u2265|u|+|v| such that for every x \u2208u and z \u2208v, x / \u2208v \u2227z / \u2208u yet ebwt(u, v) = 0. Proof. Let a\u2225b denote the concatenation of a and b, and \r \rN i=1i the sequential concatenation of 1, 2, . . . , N. Without loss of generality, de\ufb01ne the string u = \r \rn0 i=12 \u00b7 i and v = \r \rn0 i=12 \u00b7 i + 1. Thus |u| = |v|, but in the lexicographical sorting u and v will alternate between rotations u[0] = 2, v[0] = 3, u[1] = 4, v[1] = 5, . . ., u[n0] = 2 \u00b7 n0, v[n0] = 2 \u00b7 n0 + 1. Thus ebwt(u, v) will always contain transitions with no source repetition, hence ebwt(u, v) = 0. The reader may note that the construction of Theorem 3 would allow one to argue that the scenario should return a small distance under the ideal Kolmogorov distance with NID. This could be argued because the construction of u and v allows v to be represented as v = u + 1. However, the same scenario can occur with randomized strings where the alphabet does not increment with any simple pattern and is \ufb01lled with random tokens, so long as there is no overlap in the tokens and the tokens \u201cbalance out\u201d once sorted (i.e., there is no f(\u00b7) s.t. v[i] = f(u[i]) yet v[i] > u[i]\u2227u[i+1] > v[i]\u2200i). This requires further expanding the alphabet size |\u03a3|, and as such makes Theorem 3 the least practical of our concerns. However, we \ufb01nd it enlightening as a theoretical shortcoming which we would prefer to avoid. While these issues give us cause for concern, EBWT has found use in practice. 
We will show in § 5 that our concern for EBWT's utility is more justified when sequences have varying length.

4.2 Behaviors of BWMD

To delve into BWMD's behavior, we will begin by analyzing the same scenarios used to show our theoretical concerns with the EBWT in the preceding section, as well as compare the behavior of BWMD to that of the LZJD algorithm.

Corollary 1. Given u = a^{n_1} and v = a^{n_2}, such that n_2 > n_1, then bwmd(u, v) = 0.

Proof. Under the construction of the embedding of u, x_u ∈ R^{|Σ|}, ‖x‖_0 = 1, since there will be only one transition pattern of a → a. As such, the value at that index i will be x_u[i] = √1/√2 = 1/√2. The embedding x_v will have the same construction, and thus x_u[i] − x_v[i] = 0, and ∀j ≠ i, x_u[j] = x_v[j] = 0. Therefore, the distance between u and v will be zero.

The above also does not conform to expectation with respect to the NID, because we are ignoring the length of the inputs in our computation of distances. The NID(u, v) would be greater than zero in this scenario, and is necessitated by storing the difference in repetition lengths n_2 and n_1. This tells us BWMD will be less sensitive to differences in sequence length, which may be desirable or not, depending on the application. The behavior of LZJD in this scenario was used to prove its sensitivity to potential repetition of the input. It was shown that LZJD(a^{n_1}, a^{n_2}) = 1 − (√(8n_1 + 1) − 1)/(√(8n_2 + 1) − 1) as a lower bound for a similar scenario (Raff and Nicholas, 2017a). This distance grows at a rate considerably faster than logarithmic, but is also better than the EBWT distance in this case. We conclude that both LZJD and BWMD have better behavior, but BWMD will lower-bound the NID and LZJD will upper-bound the NID.

In a similar manner as Corollary 1 was shown, the same construction can be applied to Theorem 3's issue for BWMD.

Corollary 2. For all u, v such that for every x ∈ u and z ∈ v, x ∉ v ∧ z ∉ u, then bwmd(u, v) = 1, the maximum possible distance.

The derivation follows from the fact that Σ_{i=1}^{k} (√p_i)² = Σ_{i=1}^{k} p_i = 1. This means the distance computation will reduce to (1/√2)·√2·1 = 1. Therefore, the distance when the embeddings x_u and x_v have no intersection is maximized. In this case, BWMD aligns well with the behavior we would expect from the NID. Likewise it is easy to see that LZJD will return its maximum distance of 1 in this scenario as well. LZJD measures the set intersection, so when the sets have no intersection, maximal distance is achieved.

5 Genomic Clustering

We begin by showing that our new BWMD has similar utility as the original EBWT distance for genomic phylogeny from DNA sequences. This was the original proposed use of the EBWT measure, where they evaluated the Single-Link Clustering results on mitochondrial DNA (mtDNA) (Mantaci et al., 2005). Such data can be obtained using the NIH GenBank, which we have used to create a similar corpus of DNA sequences to compare the relative pros and cons of BWMD and EBWT. We will use mtDNA as has been done in prior work, but also a more challenging case with chromosomal genomic scaffold DNA sequences. We will produce dendrograms for each task with Single-Link Clustering (SLINK).
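A minimal sketch of this protocol is given below, assuming the bwmd_embed helper from the Section 3 sketch: embed each sequence, compute pairwise Euclidean distances, and run single-link clustering. The toy sequences are placeholders, not the GenBank data used here, and rendering the dendrogram requires matplotlib.

```python
# A minimal sketch of the Section 5 protocol: BWMD embeddings, pairwise
# Euclidean distances, then SLINK. Assumes bwmd_embed() from the Section 3
# sketch; the sequences are toy placeholders, not real mtDNA.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

species = {"human": "ATTACGGATTACA",
           "chimp": "ATTACGGATTACC",
           "platypus": "GGCCGGCCGGTTA"}
X = np.array([bwmd_embed(seq) for seq in species.values()])

Z = linkage(pdist(X), method="single")   # SLINK on condensed distances
dendrogram(Z, labels=list(species))      # renders the tree (needs matplotlib)
```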
[Figure 1: Single-Link Clustering result on mitochondrial mtDNA data, showing dendrograms over 28 species for (a) BWMD and (b) EBWT.]

First, we will evaluate mtDNA data on 28 different species, and use Single-Link Clustering to produce a dendrogram of the species based on their mtDNA. The results are shown in Figure 1, with both EBWT and BWMD taking under 1 second to perform the clustering. Our goal is not to fully evaluate the quality of each dendrogram, but to show that both methods produce reasonable results in this case, which may be of interest to researchers in bioinformatics. Both EBWT and BWMD do reasonably well at this task, with differing mistakes, advantages, and disadvantages. EBWT gets most base-level groups correct (e.g., lion, tiger, cat in one group, primates grouped together). There is a failure to properly group the harbor and gray seals as related to each other; instead they act as outliers which SLINK is forced into a cluster at the end at higher cost. EBWT also fails to group the white rhino with other members of the Ferungulates family (e.g., the horse and zebra would have been the closest members) (Cao et al., 1998). BWMD was able to correctly pair the seals and placed the white rhino with a larger family of Ferungulates (closest to horse, which is correct, and with the cows and yak, which are members). But BWMD failed to place the mouse and rat together, and dispersed the zebra and lion from their more appropriate neighbors. Results with both methods are reasonable.

However, the mtDNA task is an easier one. All of the mtDNA sequences are of similar sizes, with the western grey kangaroo being shortest at 15 KB in length and the lion being longest at 17 KB. Our theoretical results in § 4 would indicate that we may see more significant issues if we had sequences of varying length. We explore this with a dataset of chromosome genomic scaffold DNA for a subset of the species evaluated. We selected one whole scaffold DNA sequence from a random chromosome for the 11 species where this was available. We selected "unplaced genomic scaffold" sequences for the 3 remaining species (the yaks and tiger), which is a much shorter and incomplete amount of data. This gives us a minimum sequence size of 22 KB and a maximum of 33 MB.

[Figure 2: Single-Link Clustering on Genomic Scaffolding, showing dendrograms over 14 species for (a) BWMD, (b) EBWT (each leaf annotated with its sequence size, e.g., tiger 34 K, rat 33 M), and (c) LZJD.]

The SLINK results are shown in Figure 2.
BWMD takes only 47 seconds to perform SLINK clustering. EBWT took 28 minutes, making it over 35\u00d7 slower. When plotting the EBWT dendrogram in Figure 2b, we include the size of the DNA sequence in parentheses. When organized in this way, it becomes clear that the EBWT clustering is degenerate, and corresponds exactly to \ufb01le-size, rather than content. In contrast, the BWMD in Figure 2a produces reasonable groupings despite disparate sequence lengths. For example, (full scaffold) cat and (unplaced and incomplete) tiger are correctly grouped despite the cat sequence being 450\u00d7 longer. The BWMD results are not perfect: the orangutan and domestic yak were placed farther from the other (well-grouped) primates and ferungulates respectively. Overall we can see that groups which should be placed near each other are, without degrading to sequence length information. We include LZJD ( Figure 2c )in this scenario, to demonstrate that BWMD has increased value over other alternatives. Here we can see that LZJD suffers the same failure as EBWT in placing the three smallest partial segments into a single cluster. With the exception of correctly grouping the two chimpanzee species, the LZJD dendrogram is degenerate. These results are in line with our theory derived in \u00a7 4. When working with sequences of homogeneous length, EBWT performed well. But BWMD is able to handle disparate sequence lengths reasonably well, where EBWT degrades to grouping by sequence length rather than content. BWMD\u2019s ability to handle the original mtDNA data, as well as substantially better results with the irregular-sized scaffold DNA, is made more impressive by the fact that BWMD is encoding everything into R16 due to the small alphabet |\u03a3| = 4. This is a reduction in storage cost by a factor of up to 515.6\u00d7, and allows for more \ufb02exibility in creating a larger and searchable index using BWMD. 5.1 A note on BMWD\u2019s Disadvantage It is also worth noting that, from an information encoding perspective, BWMD is at a disadvantage in this testing over DNA data. EBWT is dimensionless, and has the representation capacity of the merged sorting of two different strings, meaning its representational capacity is a function of the sequence length under consideration. BWMD encodes each sequence into a \ufb01xed-length feature vector of size |\u03a3|2. Since we are working with DNA data, the alphabet \u03a3 = {A, T, C, G} is quite small. As such all DNA sequences in these experiments, up to 33 MB in size, are being embedded into a 16dimensional space. BWMD\u2019s ability to match or outperform EBWT means we must be doing a signi\ufb01cantly better job at leveraging the \ufb01rst-order information expressed by the Burrows Wheeler Transform. 6 Malware Results It can be dif\ufb01cult to reliably parse malicious software as malware authors may intentionally violate rules and format standards. Compression based similarity measures are useful in this case as they allow us to avoid these complex parsing issues. In this section we will look at the classi\ufb01cation accuracy of BWMD and LZJD for several datasets. Our expectation is that LZJD will have better classi\ufb01cation accuracy, due to Lempel-Ziv compressors being more effective than those based on Burrows Wheeler. However, we \ufb01nd that BWMD has signi\ufb01cant advantages when clustering is the goal accuracy by up to 65.6\u00d7, and is 24\u00d7 faster to search large corpora by obtaining sub-linear scaling with no loss in accuracy. 
Many prior works have looked at malware classification and clustering by processing raw bytes, due to the difficulty of parsing malware. We will compare with the seminal BitShred algorithm (Jang et al., 2011) for clustering. Other similar byte-based distance functions such as Ssdeep and Sdhash were evaluated, but found to have degenerate performance on our smallest and easiest corpora (see § 6.1). Compression distances such as EBWT and NCD are not evaluated in this section, because doing so would require multiple compute-months for our smallest dataset, and they simply cannot scale to the size of our malware corpora.

Table 3: Malware datasets used in experiments.
  Dataset      | Avg Size | # Classes | Train   | Test    | Storage Size
  EMBER        | 1.17 MB  | 2         | 600,000 | 200,000 | 936 GB
  VS20F        | 705 KB   | 20        | 160,000 | 40,000  | 141 GB
  Kaggle Bytes | 4.67 MB  | 9         | 10,868  | -       | 50.8 GB
  Kaggle ASM   | 13.5 MB  | 9         | 10,868  | -       | 147 GB
  Drebin APK   | 1.37 MB  | 20        | 4,664   | -       | 6.4 GB
  Drebin TAR   | 1.84 MB  | 20        | 4,664   | -       | 8.6 GB

For our evaluation, we will use several datasets, summarized in Table 3. The EMBER dataset (Anderson and Roth, 2018) pertains to a binary classification problem of "benign vs. malicious" for Windows executables. Because there are only two classes, clustering results will not be evaluated on this corpus. However, it is by far the largest corpus, allowing us to explore the scalability of our algorithms. The raw files can be obtained from VirusTotal (www.virustotal.com) and are nearly 1 TB total. Our remaining datasets are multi-class problems where each sample is a member of a specific malware family. We will use these to evaluate both classification accuracy, as well as accuracy in clustering with respect to the class labels.

Using VirusShare (Roberts, 2011) we create another Windows-based dataset with 20 malware families. The families were determined using VirusTotal and the AVClass tool, which determines a single canonical malware family label based on multiple anti-virus outputs (Sebastián et al., 2016). We select the 20 most populous families, and use 7,000 examples for training and 3,000 for testing.

The last four datasets we use are evaluated in "two forms", following prior work (Raff and Nicholas, 2017a). The Kaggle datasets are from a 2015 Kaggle competition sponsored by Microsoft (Ronen et al., 2018). In the "Bytes" version our algorithms are run on the raw malware binary, and in the "ASM" version the output of IDA Pro's disassembler is used instead. From the Drebin corpus (Arp et al., 2014) we use the 20 most populous families, where the "APK" version is the raw bytes of the Android APK (essentially a Zip file with light compression), and the "TAR" version unpacks the APK and recombines all content into a single tar file.

6.1 Malware Classification

We begin our analysis by looking at the nearest neighbor classification performance of various methods. The performance of each algorithm under this scenario gives us insight not only into its utility, but how effective it would be for analysts in finding similar malware. Utility in this scenario requires both high accuracy and computational efficiency, as malware corpora are often measured in the terabyte to petabyte range.
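A minimal sketch of the 1-NN evaluation protocol used below is given here, with balanced accuracy weighting every class equally. The embeddings and labels are hypothetical placeholders; any fixed-length featurization (such as BWMD vectors) could be substituted.

```python
# A minimal sketch of 1-NN classification over fixed-length embeddings,
# scored with balanced accuracy so every family contributes equally.
import numpy as np

def one_nn_predict(X_train, y_train, X_test):
    X_train, X_test = np.asarray(X_train), np.asarray(X_test)
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    return np.asarray(y_train)[d.argmin(axis=1)]  # label of nearest neighbor

def balanced_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(per_class))

# Hypothetical 2-D embeddings and family labels.
y_pred = one_nn_predict([[0, 0], [1, 1], [2, 2]], ["A", "B", "B"],
                        [[0.1, 0.1], [1.9, 2.0]])
print(y_pred, balanced_accuracy(["A", "B"], y_pred))
```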
Small Scale Malware Classification. The Kaggle and Drebin corpora are considerably smaller in size, allowing us to test a wider selection of methods against them. In Table 4 we use balanced accuracy, where the weights of each file are adjusted so that the total weight of each class is equal, because malware families are not evenly distributed in each corpus.

Table 4: Balanced Accuracy results for 1-NN classification on each dataset. Results show mean 10-Fold Cross Validation accuracy (standard deviation in parentheses). Best results in bold, second best in italics.
Dataset       Ssdeep      Sdhash      BitShred    BWMD        LZJD
Kaggle Bytes  38.4 (1.4)  60.2 (2.3)  43.7 (1.9)  96.4 (2.2)  97.9 (1.4)
Kaggle ASM    26.6 (2.2)  28.8 (1.3)  36.9 (1.6)  97.0 (1.8)  97.1 (1.7)
Drebin APK    13.6 (1.6)   5.8 (0.5)  58.3 (3.9)  55.3 (4.4)  81.0 (4.0)
Drebin TAR    24.2 (2.9)   8.3 (1.2)  65.1 (3.7)  76.3 (3.6)  87.9 (2.1)

We can see from these results that the compression-based approaches, LZJD and BWMD, generally outperform the other alternatives by a wide margin. As was expected, LZJD has higher accuracy than BWMD, since LZJD is based on a more effective compression algorithm. While this is a slight weakness of BWMD, its advantage comes in being orders of magnitude faster, as we will show in the large-scale testing in the next section. This makes it the only method usable for larger-scale corpora. This also shows that Ssdeep and Sdhash are simply not accurate enough to be considered for use, regardless of computational constraints. BWMD performed second best on every dataset, the only exception being a 3-point difference to the BitShred algorithm (on Drebin APK). However, BWMD outperformed BitShred by at least 11 points on all other datasets. Supporting our theoretical analysis in § 4, we also see hints that BWMD is better equipped to work with extremely long sequences. Most notably, BWMD is the only method which had improved accuracy and reduced variance when moving from Kaggle Bytes (4.67 MB) to Kaggle ASM (13.5 MB). This suggests that the disassembly may be in a form that allows the BWT to better capture first-order dependencies for compression. The fact that BWMD has non-trivial accuracy on Drebin APK (random guessing is 5%) is particularly impressive and worth noting. This is because the APK files are essentially Zip files with a standard structure, and the Zip compression format is a more effective one than most BWT-based methods such as bzip. As such, that there is any first-order information exploitable for effective similarity search is impressive, and indicates the utility of BWMD in wider applications.

Large Scale Malware Classification. On EMBER we use 9-Nearest Neighbors as our classifier so that we can compute meaningful values for the Area Under the ROC Curve (AUC) (Bradley, 1997). In this malware detection context, AUC can be interpreted as an algorithm's ability to properly prioritize all malicious software above all benign software. This metric is useful for prioritizing work queues, and is therefore particularly pertinent. We evaluate only BWMD and LZJD due to computational constraints on this larger dataset. BWMD obtains an AUC of 98.3%, where LZJD achieves a slightly better 99.7% AUC. As expected, LZJD obtains a higher accuracy than BWMD, because LZJD is built upon a more effective compression algorithm. However, accuracy in isolation does not determine what method is best to use.
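As a concrete, hedged illustration of this evaluation protocol, the sketch below runs 9-NN over stand-in Euclidean embeddings and scores by AUC. The feature matrices, labels, and use of scikit-learn are placeholders, not the paper's pipeline.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Stand-ins for BWMD-style embeddings of train/test files and
    # benign(0)/malicious(1) labels.
    X_train, y_train = rng.normal(size=(1000, 16)), rng.integers(0, 2, 1000)
    X_test, y_test = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)

    knn = KNeighborsClassifier(n_neighbors=9).fit(X_train, y_train)
    # The fraction of malicious neighbors serves as a ranking score,
    # which is what makes AUC a meaningful summary for k-NN.
    scores = knn.predict_proba(X_test)[:, 1]
    print("AUC:", roc_auc_score(y_test, scores))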
Due to the large size of malware corpora, sub-linear scaling is needed to be useful for realistically-sized datasets. BWMD does have an advantage over LZJD in its ability to scale to large corpora in an efficient manner. This is critical, since industry datasets routinely require comparisons to terabytes of files or more (Roussev, 2010). Prior work has tried, with limited success, to scale LZJD to larger corpora: using an extension of the Vantage Point (VP) tree (Yianilos, 1993), only a 2.5× speedup over brute-force search was achieved (Raff and Nicholas, 2018a). Because BWMD operates by embedding files into a Euclidean space, we can leverage specialized algorithms like the Dynamic Continuous Index (DCI) algorithm (Li and Malik, 2017) that only work for the Euclidean distance. DCI works by projecting the whole dataset down to different, random embeddings, and allows obtaining the true nearest neighbors in a fast and efficient manner. LZJD is not compatible with such algorithms, resulting in BWMD being better equipped for this task.

[Figure 3 (plot): 9-NN search retrieval speed on the Ember test set (in milliseconds, y-axis, log-scale) as the number of training points (x-axis, log-scale) increases. Series: BWMD Brute, BWMD VP, LZJD Brute, LZJD VP, and BWMD DCI.]

In Figure 3 we compare the total query time of BWMD and LZJD under different indices, as the training set size increases from 64 files up to the full 600,000. We found the VP tree of minimum variance (Raff and Nicholas, 2018a) performed best compared to other algorithms like KD- and Cover-trees, and so only its results are included. In the dashed line we show BWMD accelerated with the Dynamic Continuous Index (DCI) algorithm (Li and Malik, 2017). We can see that as the training corpus becomes larger, the VP trees are able to get small constant-factor speedups, but are not able to reliably prune large portions of the search space. Because BWMD is in Euclidean space, it is the only method able to leverage the DCI algorithm and thus able to get significant order-of-magnitude search speedups. This combination makes BWMD 24× faster than LZJD (5.6 CPU hours compared to 5.6 days), and 834× faster than BWMD with a brute-force search. One can clearly see that DCI's scaling is sub-linear, and its advantage grows with the corpus size. This is obtained with no loss in accuracy on the Ember corpus, making BWMD the only effective approach for scaling to even larger corpora.
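DCI itself is not sketched here; the point is simply that once files live in a Euclidean space, off-the-shelf exact and approximate indices become available. A stand-in illustration with a ball tree (scikit-learn assumed; DCI is what the evaluation above actually uses):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 16)).astype(np.float32)  # BWMD-style vectors
    queries = rng.normal(size=(10, 16)).astype(np.float32)

    # Any Euclidean index can be swapped in here; LZJD's set-based
    # distance has no such vector form, so it is limited to brute
    # force or metric trees like the VP tree.
    index = NearestNeighbors(n_neighbors=9, algorithm="ball_tree").fit(X)
    dist, idx = index.kneighbors(queries)
    print(idx.shape)  # (10, 9): nearest training files per query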
6.2 Malware Clustering

In this section we will show that BWMD has significant advantages in terms of clustering malware into families. This benefit comes largely from BWMD mapping sequences into a Euclidean feature space, where we can leverage tried-and-true algorithms like k-means to perform fast and useful clustering. LZJD is incompatible with k-means and similar methods that require an explicit Euclidean feature vector. As such LZJD, like BitShred, is constrained to distance-based clustering methods like agglomerative clustering. This puts them at a significant disadvantage compared to BWMD. To evaluate the quality of our clustering results, we will consider three measures, Homogeneity, Completeness, and V-Measure, as introduced by Rosenberg and Hirschberg (2007), using the class labels as ground-truth cluster assignments. Homogeneity measures how well an algorithm does at making each found cluster as "pure" as possible (i.e., only one class in each cluster). Completeness measures how well an algorithm groups all examples of a class into as few clusters as possible (i.e., all examples of one class in only one cluster). V-Measure is the harmonic average of Homogeneity and Completeness. All three metrics are measured on the scale [0, 1], with 0 being worst, and 1 being the maximum score. In performing the clustering, we will test using k = the true number of classes and k = 10× the true number of classes. The former (k = C) is done to judge how well the clustering algorithms are able to recover the underlying ground truth. The latter (k = 10·C) is done as it corresponds best to how a malware analyst would desire to use these tools. It is easier to over-estimate the number of clusters than to predict the exact value of k, and by clustering an analyst would hope to reduce their workload by quickly checking that files in the same cluster are related, and then performing an in-depth analysis on only a few representatives from each cluster (VanHoudnos et al., 2017). For this reason, we consider Homogeneity the most important of the three measures, as it corresponds with how an analyst would use clustering, followed by V-Measure, and then Completeness. BWMD is the only method that can leverage the k-means algorithm, and we use Hamerly's variant because it avoids redundant computation while returning the exact same results (Hamerly, 2010). For LZJD and BitShred we use Average-Link clustering using a fast O(n^2) algorithm (Müllner, 2011). While the original BitShred paper used Single-Link, we found Average-Link provided the best results across all metrics for both BitShred and LZJD. The results are shown in Table 5, where we can see BWMD dominates LZJD and BitShred by our most important metrics, Homogeneity and V-Measure.

Table 5: Clustering performance of BWMD, LZJD, and BitShred. Best results shown in bold.
                              k = C                     k = 10 · C
Dataset       Metric   BWMD   LZJD   BitShred    BWMD   LZJD   BitShred
Kaggle Bytes  V-M      0.581  0.352  0.007       0.546  0.414  0.028
              Homog    0.597  0.254  0.003       0.885  0.378  0.015
              Complt   0.566  0.573  0.239       0.396  0.457  0.265
Kaggle ASM    V-M      0.528  0.235  0.014       0.562  0.531  0.366
              Homog    0.550  0.176  0.007       0.911  0.599  0.291
              Complt   0.508  0.351  0.257       0.407  0.477  0.495
Drebin APK    V-M      0.307  0.219  0.095       0.412  0.326  0.389
              Homog    0.296  0.172  0.054       0.566  0.313  0.333
              Complt   0.319  0.303  0.375       0.323  0.340  0.468
Drebin TAR    V-M      0.403  0.248  0.065       0.508  0.478  0.386
              Homog    0.416  0.177  0.036       0.754  0.503  0.332
              Complt   0.391  0.413  0.335       0.383  0.455  0.460
VS20F         V-M      0.353  0.009  0.009       0.449  0.204  0.056
              Homog    0.328  0.005  0.005       0.562  0.137  0.030
              Complt   0.381  0.249  0.221       0.374  0.400  0.378

BWMD's advantage in this regard is often dramatic. For example, BWMD scores 2.34× better on Homogeneity compared to LZJD when k = 10·C on the Kaggle Bytes dataset, and 59× better than BitShred. While BWMD does not always perform best by the Completeness metric, it is always competitive with the best-scoring method, which is why BWMD dominates by V-Measure. The results overall clearly indicate that BWMD provides the best clusterings across multiple datasets, of different encodings, and different numbers of clusters, showing the flexibility of the compression-based approach. Because BWMD can leverage the k-means concept and the many efficient algorithms for its computation, it is also the most scalable of these methods. LZJD and BitShred are inherently limited by the O(n^2) lower-bound complexity of hierarchical clustering.
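Because BWMD's vectors are Euclidean, the whole pipeline above reduces to a few library calls. A minimal sketch with synthetic stand-ins for the embeddings and family labels (scikit-learn assumed, rather than the Hamerly k-means variant the evaluation uses); k-means remains fast even at k = 10·C:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import (homogeneity_score, completeness_score,
                                 v_measure_score)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5_000, 16))      # stand-in BWMD embeddings
    y = rng.integers(0, 20, size=5_000)   # stand-in family labels, C = 20

    C = 20
    pred = KMeans(n_clusters=10 * C, n_init=3, random_state=0).fit_predict(X)
    # Score the clustering against family labels, as in Table 5.
    print("Homog :", homogeneity_score(y, pred))
    print("Complt:", completeness_score(y, pred))
    print("V-M   :", v_measure_score(y, pred))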
For example, BWMD took only 27 minutes to over-cluster the 160,000 files in the VS20F training set, the largest under consideration. This is 17.3× faster than LZJD, which took 7.76 hours, and 54.6× faster than BitShred at just over a day." + }, + { + "url": "http://arxiv.org/abs/1909.06674v1", + "title": "A Step Toward Quantifying Independently Reproducible Machine Learning Research", + "abstract": "What makes a paper independently reproducible? Debates on reproducibility\ncenter around intuition or assumptions but lack empirical results. Our field\nfocuses on releasing code, which is important, but is not sufficient for\ndetermining reproducibility. We take the first step toward a quantifiable\nanswer by manually attempting to implement 255 papers published from 1984 until\n2017, recording features of each paper, and performing statistical analysis of\nthe results. For each paper, we did not look at the authors code, if released,\nin order to prevent bias toward discrepancies between code and paper.", + "authors": "Edward Raff", + "published": "2019-09-14", + "updated": "2019-09-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DL", + "stat.ML" + ], + "main_content": "Introduction As the fields of Artificial Intelligence (AI) and Machine Learning (ML) have grown in recent years, so too have calls that we are currently in an AI/ML reproducibility crisis [1]. Conferences, such as NeurIPS, have added reproducibility as a factor in the reviewing cycle or implemented policies to encourage code sharing. Many are pursuing work centered around code and data availability as one of the more direct methods of enhancing reproducibility. For example, Dror et al. [2] developed a proposal to standardize the description and release of datasets. Others have proposed taxonomies and ontologies over reproducibility based on the availability of algorithm description, code, and data [3, 4]. Others have focused on building frameworks for sharing code and automation of hyper-parameter selection in order to enable easier reconstruction of results [5]. While the ability to replicate the results of papers through open-sourced code and data is valuable and should be lauded, it has been argued that releasing code is insufficient [6]. The inability to reproduce results without code availability may suggest problems with the paper. This may be due to the following: insufficient explanation of the approach, failure to describe important minute details, or a discrepancy between the code and description. We will call the act of reproducing the results of a paper without use of code from the paper's authors independent reproducibility. We argue that for a paper to be scientifically sound and complete, it should be independently reproducible. The question we wish to answer in this work is: what makes a paper independently reproducible? Many have argued fiercely for different aspects of writing and publishing as critical factors of reproducibility. Quantifiable study of these efforts is needed to advance the conversation. Otherwise, we as a community will not have a scientific understanding of whether our work is addressing aspects of reproducibility. Gundersen and Kjensmo [7] defined several paper properties of interest in regard to reproducibility.
However, they defined a paper as reproducible purely as a function of the features, without knowing if the selected features (e.g., method is described, data is available) actually impact a paper's reproducibility. As a first step toward answering this question, we performed a study of 255 papers that we have attempted to implement independently. We developed the first empirical quantification of independent reproducibility by recording features from each paper and the reproduction outcome. We will review the entire procedure and features obtained in section 2. In section 3 we will discuss which features were determined to be statistically significant, and we will discuss the implications of these results. We will discuss the deficiencies of our study in section 4, with subjective analysis in section 5, and then conclude in section 6.

2 Procedure and Features

For clarity, we will refer to ourselves, the author of this paper, as the reproducers, distinct from the authors of the papers we attempt to independently reproduce. To perform our analysis, we obtained features from 255 papers. Inclusion criteria included papers that proposed at least one new algorithm/method that is the subject of reproduction, and papers where the first implementation and reproduction attempts occurred between January 1st, 2012 and December 31st, 2017. We chose varied paper topics based on our historical interest. No papers were included from 2018 to present, as some papers take more time to reproduce than others, which could negatively skew results for papers from the past year. If the available source code for a paper under consideration was seen before having successfully reproduced the paper, we excluded the paper from this analysis, because at that point we are not a fully independent party. In line with this, any paper was excluded if the paper's authors had any significant relationship with the reproducers (e.g., academic advisor, coworker, close friends, etc.), because intimate knowledge of communication style, work preferences, or the ability to have more regular communication could bias results. A paper was considered to be reproduced if the code for the results was written by the reproducers, allowing the use of reasonable and standard libraries (e.g., BLAS, PyTorch, etc.), and the code reproduced the majority of claims from the paper. Specifically, we regarded a paper as reproducible if the majority (75%+) of the claims in the paper could be confirmed with code we independently wrote. If a claimed improvement was measured in orders of magnitude, being within the same order of magnitude was considered sufficient (e.g., a paper claims 700× faster, but reproducers observe 300×). This same order-of-magnitude criterion comes from an observation that such claims are highly dependent upon constant-factor efficiency improvements that may be present in/missing from both the prior methods and the proposed method being replicated. Presence or absence of these improvements can cause apparently "dramatic" impacts without fundamentally changing the nature of the contribution we are attempting to reproduce.
When compared to other algorithms, we consider a paper reproduced if the considerable majority (90%+) of the new algorithm's rankings correspond to those found in the paper (e.g., if the claim is that the proposed method was most accurate on 95% of tasks compared to 4 other models, we want to see our reproduction be most accurate on at least 95% · 90% = 81% of the same tasks, compared to the same models). As a last resort, we considered getting within 10% of the numbers reported in the paper (or better), or in the case of non-quantitative results (e.g., GAN sample quality), we subjectively compare our results with the paper to make a decision. We include this flexibility in specification to allow for small differences that can occur. While not common, we did encounter more than one instance where our independent reproduction achieved better results than the original paper. After this selection process, we are left with 255 papers, of which 162 (63.5%) were successfully replicated and 93 were not. We note that this is significantly better than the 26% reproducibility determined by [7], who defined reproducibility as a function of the features they believed would determine reproduction. Below we will describe each of the features used. We attempt to catalog both features that are believed relevant to a paper's reproduction and features that should not be relevant, which will help us quantify if these expectations hold. We will use statistical tests to determine which of these features have a significant relationship with reproduction. An anonymized version of the data can be found at https://github.com/EdwardRaff/Quantifying-Independently-Reproducible-ML.

2.1 Quantified Features

We have manually recorded 26 attributes from each paper, which took approximately 20 minutes per paper to complete (not done in a continuous run; feature collection, paper selection, and total time preparing the study data took approximately 6 months). A policy for each feature was developed to minimize as much subjectivity as possible. Below we will review each feature, and how they were recorded, in order from least to most subjective. Each feature was obtained based on the body of the main paper only, excluding any appendices (unless specified otherwise). Features to consider were selected based on two factors: 1) would one reasonably believe the feature should be correlated with the ability to reproduce a paper (positively or negatively)? and 2) was the feature reasonably available with little additional work? This was done to capture as much useful information as possible while also avoiding limiting our study to items where a priori one might believe a feature's relevance (or lack thereof) to be "obvious."

Unambiguous Features: Some features are not ambiguous in nature. A few are simple and innate properties that require no explanation. These include the Number of Authors, the existence of an appendix (or supplementary material), the number of pages (including references, excluding any appendix), the number of references, the year the paper was published, the year first attempted to implement, the venue type (Book, Journal, Conference, Workshop, Tech-Report), as well as the specific publication venue (e.g., NeurIPS, ICML). Many papers follow a progression from Tech-Report to Workshop to Conference to Journal as the paper becomes more complete.
For any paper that participated in parts of this progression, we use the version from the most "complete" venue, under the assumption that it would be the most reproducible version of the paper, allowing us to avoid issues with double-counting papers. We also include whether or not the Author Replied to questions about their paper. If any author replied to any email, it was counted as a "Yes". If no author ever replied, we marked it as "No." In all cases, every paper author was sent an email before marking it as "No." If a current email could not be found, we marked that the authors were not contacted.

Mild Subjectivity: We spend more time expounding on the next set of features, which had minor degrees of subjectivity. We state below the developed procedure we used to make their quantification practical and reproducible.
• Number of Tables: The total number of tables in the paper, regardless of the content of those tables. While tables usually contain results, they often contain a wide variety of content, and we make no distinction between them due to their frequency and variety.
• Number of Graphs/Plots: The total number of plots/graphs contained in the paper, which includes scatter plots, bar-charts, contour-plots, or any other kind of 2D-3D numeric data visualization.
• Number of Equations: Due to differing writing styles, we do not use the equation numbers provided by the paper, nor do we count everything that might be typed between LaTeX "$$" brackets. We manually reviewed every line of every paper to arrive at a consistent counting process (not all papers have LaTeX source available, and older papers are often scanned, making automation difficult). Inline mathematics was only counted if the math involved 1) two or more variables interacting (e.g., x · y) or 2) two or more "operations" (e.g., P(x|y) or O(x^2)). If only one "operation" occurred (e.g., P(x) or x^2), it was not considered. Inline equations were counted only once per line of text, regardless of how many equations occurred in a line of text. Whole-line equations were always counted, regardless of the simplicity of the equation. If multiple whole lines were used because of equation length (e.g., a "+"), it was counted as one equation. If multiple whole lines were used due to showing a mathematical step or derivation, each step counted as an additional equation. Partial deference was given to equation numbers. If every line of an equation received its own number, they were counted accordingly. If a derivation over n whole lines received only one equation number, the equation was counted ⌈n/3⌉ times.
• Number of Proofs: A proof was only counted if it was done in a formal manner, beginning with the statement of a corollary or theorem, and included at least an overview of how to achieve the proof. A proof was counted if it occurred in the appendix or supplementary material. Derivations of update rules or other equations did not count as a proof unless the paper stated them as a proof. This was done as a practical matter in reducing ambiguity and easing the process of collecting the information.
• Exact Compute Specified: If a paper indicated any of the specific compute resources used (e.g., CPU GHz speed or model number, GPU model, number of computers used), we considered it to have satisfied this requirement.
• Hyper-parameters Specified: If a paper specified the final hyper-parameter values selected for each dataset, or the method of selecting hyper-parameters (e.g., cross-validation factor) and the value range (e.g., λ ∈ [1, 1000]), we consider it to have satisfied this requirement. Simply stating that a grid-search (or similar procedure) was used was not sufficient. If a paper introduced multiple hyper-parameters but only specified how a sub-set of the parameters was chosen, we marked it as "Partial".
• Compute Needed: We defined the compute level needed to reproduce a paper's results as needing either a Desktop (i.e., ≤ $2000), a consumer GPU (e.g., an Nvidia GeForce type card), a Server (used 20 cores or more, or 64 GB of RAM or more), or a Cluster. If the compute resources needed were not explicitly stated, this was subjectively based on the computational complexity of the approach and the number of experiments believed necessary to reach reproduction. We stress that this compute level was selected based on today's common compute resources, not those available at the time of the paper's publication.
• Data Available: If any of the datasets used in the paper are publicly available, we note it as having satisfied this requirement.
• Pseudo Code: We allow for four different options for this feature: 1) no pseudo code is given in the paper, 2) "Step-Code" is given, where the paper outlines the algorithm/method as a sequence of steps, but the steps are terse and high-level or refer to other parts of the paper for details, 3) "Yes", the paper has some pseudo code which outlines the algorithm at a high level but with sufficient detail that it feels mostly complete, and 4) "Code-Like", the paper summarizes the approach in great detail that is reminiscent of reading code (or is in fact code).

Subjective: We have a final set of features which we recognize are of a significantly subjective nature. For all of these features, we are aware there may be significant issues, and in practice, any alternative protocol would impose its own different set of issues. We have made these choices in an attempt to minimize as many issues as possible and make the survey possible. Below is the protocol we followed to reduce ambiguity and make our procedure as reproducible as possible for future studies, which will help the reader fully understand our interpretation of the results.
• Number of Conceptualization Figures: Many papers include graphics or content for which the purpose is not to convey a result, but to try to convey the proposed idea/method itself. These are usually included to make it easier to understand the algorithm, and so we identify them as a separate item to count.
• Uses Exemplar Toy Problem: As a binary "Yes"/"No" option, did the paper include an exemplar toy problem? These problems are not meaningful toward any application of the algorithm, but they are devised to show specific behaviors or create demonstrations that are easier to reproduce / help conceptualize the algorithm being presented. These are often 2D or 3D problems, or they are synthetically generated from some specified set of distributions.
• Number of Other Figures: This was a catch-all class for any figure that was not a Graph/Plot, Table, or Conceptualization Figure as defined above. For most papers, this included samples of the output produced by an algorithm or example input images for Computer Vision applications.
• Rigor vs Empirical: There have been a number of calls for more scientific rigor within the ML community [8], with many arguing that an overly empirical focus may in fact slow down progress [9]. We are not aware of any agreed-upon taxonomy of what makes a paper "rigorous". Based on the interpretation that rigor equates to having a grounded understanding of why and how our methods work, beyond simply showing that they do so empirically, we developed the following protocol: a paper is classified as "Theory" (read, rigorous) if it has formal proofs, provides mathematical reasoning or explanation for modeling decisions, or provides mathematical reasoning or explanation for why prior methods fail on some dataset. By default, we classify all other papers as "Empirical." However, if a "Theory" paper also includes discussion of practical implementation or deployment concerns, complete discussion of hyper-parameter settings such that there is no ambiguity, ablation studies of decisions made, or experiments on production datasets, we consider the paper "Balanced", as having both theory and empirical components.
• Paper Readability: We give each paper a readability score of "Low", "Ok", "Good", or "Excellent." To minimize subjectivity in these scores, we tie each to the number of times we had to read the paper in order to reach a point where we felt we had the proposed algorithm implemented in its entirety, and the failure to replicate would be a matter of finding and removing bugs. A score of "Excellent" means that we needed to read the paper only once to produce an implementation, "Good" papers needed two or three readings, "Ok" papers needed four or five, with "Low" being six or more reads through the paper (this information was obtained from our own record-keeping over time and paper-organizing software).
• Algorithm Difficulty: We categorize the difficulty of implementing an algorithm as either "Low", "Medium", or "High." We grounded this in lines of code for any paper successfully implemented or which made its implementation available online. For ones never successfully implemented and without code, we estimated this based on our intuition and experience of where the implementation would have landed based on reading the paper. "Low" difficulties could be completed in 500 lines of code or less, "Medium" difficulty between 500 and 1,500 lines, and "High" was > 1,500 lines. In these numbers we assume the use of common libraries (e.g., auto-differentiation, BLAS, etc.).
• Primary Topic: For each paper we tried to specify a single primary topic of the paper. Many papers cover different aspects of multiple problems, making this a challenge. We adjusted topics into higher-level categories so that each topic had at least three members, so that we could do meaningful statistics. Topics can be found in the appendix.
• Looks Intimidating: The most subjective: does the paper "look intimidating" at first glance?
3 Results

Table 1: Significance test of which paper properties impact reproducibility. Results significant at α ≤ 0.05 marked with "*".
Feature                      p-value
Year Published               0.964
Year First Attempted         0.674
Venue Type                   0.631
Rigor vs Empirical*          1.55 × 10^-9
Has Appendix                 0.330
Looks Intimidating           0.829
Readability*                 9.68 × 10^-25
Algorithm Difficulty*        2.94 × 10^-5
Pseudo Code*                 2.31 × 10^-4
Primary Topic*               7.039 × 10^-4
Exemplar Problem             0.720
Compute Specified            0.257
Hyperparameters Specified*   8.45 × 10^-6
Compute Needed*              8.75 × 10^-5
Authors Reply*               6.01 × 10^-8
Code Available               0.213
Pages                        0.364
Publication Venue            0.342
Number of
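The excerpt cuts off before naming the tests behind Table 1, so the following is only one plausible reading: a chi-squared test of independence between a categorical paper feature and the reproduction outcome, with entirely made-up counts (scipy assumed).

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows are levels of one categorical
    # feature (e.g., Readability), columns are reproduced yes/no. The
    # numbers are illustrative, not from the study.
    table = np.array([[12, 30],   # "Low"
                      [48, 22],   # "Ok"
                      [70, 25],   # "Good"
                      [32, 16]])  # "Excellent"
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3g}")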
" + }, + { + "url": "http://arxiv.org/abs/1908.00200v1", + "title": "KiloGrams: Very Large N-Grams for Malware Classification", + "abstract": "N-grams have been a common tool for information retrieval and machine\nlearning applications for decades. In nearly all previous works, only a few\nvalues of $n$ are tested, with $n > 6$ being exceedingly rare. Larger values of\n$n$ are not tested due to computational burden or the fear of overfitting. In\nthis work, we present a method to find the top-$k$ most frequent $n$-grams that\nis 60$\\times$ faster for small $n$, and can tackle large $n\geq1024$. Despite\nthe unprecedented size of $n$ considered, we show how these features still have\npredictive ability for malware classification tasks. More important, large\n$n$-grams provide benefits in producing features that are interpretable by\nmalware analysis, and can be used to create general purpose signatures\ncompatible with industry standard tools like Yara. Furthermore, the counts of\ncommon $n$-grams in a file may be added as features to publicly available\nhuman-engineered features that rival efficacy of professionally-developed\nfeatures when used to train gradient-boosted decision tree models on the EMBER\ndataset.", + "authors": "Edward Raff, William Fleming, Richard Zak, Hyrum Anderson, Bill Finlayson, Charles Nicholas, Mark McLean", + "published": "2019-08-01", + "updated": "2019-08-01", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG", + "stat.ML" + ], + "main_content": "INTRODUCTION In this work, we are interested in the task of finding the top-k most frequent n-grams in a large corpus. Given a corpus C of documents, and an alphabet A, there are |A|^n possible n-grams, making the use of large n > 6 computationally infeasible for many applications. Still, n-grams have been a bread-and-butter tool for natural language processing and other related fields for decades, thanks to their simplicity and usefulness. As such, significant work has gone into engineering systems to work with n-grams [7, 24, 28]. This is also true for malware classification, where we wish to determine whether a file is benign or malicious (malware detection), or to identify the specific family of a known malicious file (malware family classification). In particular, we are interested in selecting n-grams for large values of n. This is motivated by the use of byte n-grams as features for malware classification. There has long existed an intuitive need for larger values of n in this space due to the nature of content encoded in executable file formats. For example, if we consider byte n-grams for Microsoft Windows Portable Executable (PE) files, one x86 assembly code instruction could be up to 15 bytes long. This would require us to consider at least 16-grams to capture this one instruction in context. Early work determined large values like n = 15 performed best [2], but this was only possible because of the small corpus size (36.9 MB). Goldberg et al. [14] proposed using 20-grams, since the average malware detection signature used in 1998 was 20 bytes in length. The seminal BitShred clustering work proposed 16-byte grams, but needed a cluster of 64 machines to scale past 60,000 files, and the use of feature hashing [44] meant they did not have the original features [16]. As the size of malware corpora has grown, the exponential cost of increasing the value of n has forced researchers to consider small values of n and other alternatives. Recent works that have looked at corpora with at least 400,000 files have been constrained to 6-grams or less [34]. Considering that the Anti-Virus (AV) industry is making use of datasets that range in size from ten million [21] to hundreds of millions [41] of files, the methods that exist today simply can't scale to the magnitude of industry corpora, and old results using hundreds of files are not sufficient to base decisions on. In this work, we introduce the KiloGram technique for efficiently finding the top-k most frequent n-grams for large values of k and n with high probability, under the assumption of a power-law distribution of the n-grams. If L is the total number of observed n-grams, or bytes, in the corpus, our algorithm will take only O(L) time and O(B + k · n) memory. The parameter B is a budget factor to control the accuracy of the method, and since n ≪ k ≪ B ≪ L, this memory cost is minimal. For our tests, for example, this B corresponds to using ≈9 GB of RAM to extract frequent n-grams from 5 TB of data. For n ∈ [2, 8], our approach is 60 times faster than previous works, and runtime does not increase with n, allowing us to test n = 1024 and beyond. This allows us to answer questions about the behavior of byte-based n-grams in a more conclusive way than prior work [46]. In § 2, we review related works that aim to increase the value of n for malware classification. The proposed KiloGram algorithm will be presented in § 3. We perform the first-ever investigation of large n ∈ [8, 1024] for malware classification in § 4. We found that n = 8 performs well and generalizes well over three years of concept drift in malware. Our results show that the common assumption that n should be larger in the malware analysis space does not hold for larger modern corpora. Surprisingly, n = 1024 also results in nontrivial malware classification accuracy.
We demonstrate in § 5 that large n-grams are interpretable to malware analysts, can help them automate laborious and error-prone parts of their job, and can be combined with lower-effort domain knowledge to rival (as measured using the EMBER dataset) proprietary industry feature extractors built from decades of expertise. Finally, we will conclude in § 6.

2 RELATED WORK

N-grams have been used as features for malware analysis since the first work in automating malware detection in 1995 [19], and have consistently been used for malware classification systems ever since [20, 34, 35, 42]. Except when using small datasets (hundreds of MB or less), values of n > 8 are never tested in published literature due to their computational burden. However, an intrinsic concern is that n-grams need to be larger. For example, Ibrahim et al. [15] noted that a byte 6-gram was too short to fully capture an observed x86 instruction 2.4% of the time. In attempts to increase the n-gram length, some have developed techniques that attempt to coalesce multiple n-grams to a single canonical base form. One example of this is n-perms [18], where a sorted ordering is applied to every n-gram to map them to a single canonical form (e.g., ACB, BCA, and CAB would all map to ABC). This n-perm approach has been used for malware classification with a value of n as large as 10 [43]. While our focus is on processing the raw bytes of a binary, n-grams have been popular for assembly instructions as well. Similar coalescing techniques have been necessary to do any work with assembly due to computational constraints. Prior works have examined replacing all memory addresses, register references, and constants with generic mem, reg, and const placeholders [22], though it is more common to remove all instruction operands entirely [40]. While much work has gone into storing and processing known n-grams efficiently [24], little has been done to try to extend the value of n itself in a time- and memory-efficient manner. The only prior work we are aware of was performed by Nagao and Mori [26]. They considered obtaining n ∈ [1, 255] by cleverly converting the most-frequent n-gram calculation into a sorting problem, resulting in O(L log L) complexity and O(L) space. While exact, these bounds are worse than ours and necessitate slower out-of-core sorting than the proposed method. Furthermore, the method is limited to n ≤ 255, and was only tested up to n = 10. Our method relies on certain distributional assumptions to hold with high probability, but allows us significant speed and practicality benefits with O(L) time and O(B + k · n) memory.

3 KILOGRAMMING

Our goal is to find the top-k most frequent n-grams for large values of n. To do this, we build off of two prior works. First is the hash-gram approach [33]. Hash-grams find the top-k most frequent hashes of n-grams. They created a large table of size B = 2^31 − 19 to store hashes, and simply ignored collisions. By using a rolling hash function h(·) [12], they were able to obtain orders-of-magnitude speedup over normal n-gram tabulation, at the cost of losing information about which n-grams are actually being used. The hash-gram approach works under the common assumption that n-grams follow a Zipfian (power law) distribution [47].
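The specific rolling hash of [12] is not reproduced in this excerpt; a generic polynomial rolling hash conveys the idea. Each update is O(1), so hashing every n-gram of a file is O(L) no matter how large n is, which is exactly what makes the hash-gram first pass cheap. The base and modulus below are illustrative assumptions.

    def rolling_hashes(data: bytes, n: int, base=257, mod=2**31 - 19):
        # Yields h(g) for every n-gram g of `data`, updating in O(1)
        # per byte instead of rehashing n bytes at each position.
        if len(data) < n:
            return
        top = pow(base, n - 1, mod)  # weight of the byte leaving the window
        h = 0
        for b in data[:n]:
            h = (h * base + b) % mod
        yield h
        for i in range(n, len(data)):
            h = ((h - data[i - n] * top) * base + data[i]) % mod
            yield h

    print(list(rolling_hashes(b"abracadabra", 4))[:3])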
The Zipfian distribution has probability mass function f(·) and cumulative distribution function F(·) given by

$f(x; p, |A|) = x^{-(p+1)} \big/ H^{(p+1)}_{|A|}$  (1)

$F(x; p, |A|) = H^{(p+1)}_{x} \big/ H^{(p+1)}_{|A|}$  (2)

where $H^{(p)}_{z} = \sum_{i=1}^{z} i^{-p}$ indicates the z'th harmonic number of the p'th order, and $x \in \{1, 2, \ldots, |A|\}$. The Zipfian distribution is a surprisingly good fit to human language and many other tasks [29], and as such has been a common and useful model for n-gram based features in natural language processing [9], as well as for n-grams over bytes from binary executables [34]. Naively, one would like to use an approach such as the Space-Saving algorithm [23], which can return the top-k most frequent items from a stream. At a high level, it works as a kind of "rank"-based cache. If an item is in the Space-Saving data structure, its rank is increased, as well as an associated count. If an item is not in the cache, the current item with the lowest rank is replaced, its rank increased, and its error bound reset. Based on the current error bounds, it can estimate the top-k most frequent items in a stream, and in some cases guarantee that they are the true top-k. Thanks to clever design, updates to the Space-Saving data structure are O(1). In this scenario, one would treat all possible n-grams as the stream to process, and select the top-k after processing the stream. (In small-scale tests on 80,000 files, the computational overhead of the Space-Saving structure was also significant: an attempt to find the top k = 10,000 for n = 6 with B = 1,000,000 took just as long as computing the exact n-grams in the first place, and failed to return any of the true top-k, due to the difficulty of knowing the correct budget size, since the O notation hides constant factors.) However, this becomes computationally intractable as k increases, and for a Zipfian distribution with p = 0, the Space-Saving algorithm requires B = O(k^2 log(|A|)) buckets to obtain the true top-k n-grams, resulting in O(n k^2 log(|A|)) memory use. When we consider that the size of our alphabet is a function of the n-gram size (i.e., |A| = 256^n), we get B = O(n k^2) and a total memory use of O(n^2 k^2), which is not tenable if we wish to consider larger k or large n, let alone both as we do in this work. Prior works have used k = 8,000 as the largest k [8], which is insufficient for feature selection of n-grams, where we need to preserve k ≥ 100,000.
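A compact sketch of the update rule just described. This dict-based simplification is faithful to the eviction logic, but real implementations use a linked "stream summary" structure so that finding the minimum is O(1) rather than O(buckets).

    class SpaceSaving:
        def __init__(self, buckets):
            self.buckets = buckets
            self.counts = {}   # item -> estimated count
            self.errors = {}   # item -> maximum over-count error

        def insert(self, item):
            if item in self.counts:
                self.counts[item] += 1
            elif len(self.counts) < self.buckets:
                self.counts[item], self.errors[item] = 1, 0
            else:  # evict current minimum; new item inherits its count
                victim = min(self.counts, key=self.counts.get)
                c = self.counts.pop(victim)
                self.errors.pop(victim)
                self.counts[item], self.errors[item] = c + 1, c

        def top(self, k):
            return sorted(self.counts.items(), key=lambda t: -t[1])[:k]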
For memory, we require O(B) memory for the large tableT, and an additional O(k \u00b7n) memory for the storage of exact n-grams in the spacesaving data structure, so that memory complexity is O(B + k \u00b7 n). Algorithm 1 KiloGramming Require: Bucket size B, rolling hash function h(\u00b7), corpus of C documents, and desired number of frequent hash-grams k, and hashing stride s. 1: T \u2190new integer array of size B 2: for all documents x \u2208C do \u25b7O(L) for L total n-grams 3: for n-gram \u0434 \u2208x do 4: q\u2032 \u2190h(\u0434) mod B 5: if q\u2032 mod s = 0 then \u25b7Hashing-Stride check 6: T[q\u2032] \u2190T[q\u2032] + 1 7: Tk \u2190QuickSelect(T,k) 8: S \u2190new Space Saving structure with BS buckets. 9: for all documents x \u2208C do \u25b7Second pass over data 10: for n-gram \u0434 \u2208x do 11: q\u2032 \u2190h(\u0434) mod B 12: if q\u2032 \u2208Tk then 13: Insert \u0434 into S 14: return top-k entries from S The pseudo-code is given in Algorithm 1. On the first pass through the dataset, we use the hash-gram approach of creating a large table to find the top-k most frequent hashes, which under the assumptions of a Zipfian distribution, will find the true top-k hashes with high probability [33]. The hash-graming corresponds to lines 1\u20134 and line 6. Line 5 is an addition we will discuss soon in \u00a7 3.1. Once we have the set of the top-k hashes, we create a new SpaceSaving data structure to help us keep track of the corresponding top-k n-grams. We will perform a second pass over the data, and use the top-k list of hashes as a white list for the Space-Saving algorithm. In this way the majority of observed n-grams will not be processed because they do not have one of the specified hash values, and the Space-Saving structure allows us to filter out the collisions from the true most-frequent n-grams. We require only O(k) buckets in the Space Saving structure for all practical use cases, which we prove in \u00a7 3.2, resulting in O(k \u00b7 n) memory use for the second step. This dramatically reduces the amount of memory required, and runs orders of magnitude faster than attempting to use the SpaceSaving approach on the entire corpus. The second pass over the data requires less time to run than the first pass because fewer memory accesses are being performed (\u226599.99% of n-grams are non-frequent [34]), and these memory accesses result in more cache hits (smaller Space-Saving structure compared to large array T). In testing, the second pass can account for as little as 9.76% of the total runtime. 3.1 Hashing-Stride We introduce the concept of a hashing-stride of size s to further enhance the utility of the n-grams found so that they are useful for creating features. The application of the hash-stride is simple. For each n-gram \u0434, we will compute its hash q = h(\u0434). If q mod s , 0, the n-gram is discarded. Thus, hash-striding is simply a deterministic downsampling of input n-grams by a factor of s. Hash-striding is important to reduce redundancy caused from the sliding window effect across long common sequences. In particular, for a ubiquitous sequence of length \u2113> n, the resulting top k n-grams would be dominated by \u2113\u2212n + 1 equally frequent and essentially redundant sub-sequences. Including these n-grams in the top k effectively reduces k by a factor of (\u2113\u2212n). 
3.1 Hashing-Stride

We introduce the concept of a hashing-stride of size s to further enhance the utility of the n-grams found, so that they are useful for creating features. The application of the hashing-stride is simple: for each n-gram g, we compute its hash q = h(g); if q mod s ≠ 0, the n-gram is discarded. Thus, hash-striding is simply a deterministic downsampling of input n-grams by a factor of s. Hash-striding is important to reduce the redundancy caused by the sliding-window effect across long common sequences. In particular, for a ubiquitous sequence of length ℓ > n, the resulting top-k n-grams would be dominated by ℓ − n + 1 equally frequent and essentially redundant sub-sequences. Including these n-grams in the top-k effectively reduces k by a factor of (ℓ − n). A naive alternative to reduce the number of n-grams considered for the top-k is to use a spatial stride z, where one steps by a constant number of z grams through the input sequence. However, if a frequent n-gram does not occur at intervals of exactly z, this approach would fail to identify occurrences of the n-gram, resulting in inaccurate counts or, in the worst case, exclusion. By using a hashing-stride of s, we reduce the total expected number of unique n-grams to process by a factor of s. This is because, for any particular n-gram g, we will always count its occurrence regardless of its offset within a file. This ensures that counts of n-grams are accurate. From an implementation perspective, the hashing-stride allows one to perform a necessary first approximation to feature selection without having to perform any kind of communication or coordination between files, and without any additional significant computation. This also means we are technically selecting the top-k n-grams from |A′|/s unique n-grams, where A′ is the set of observed n-grams from the possible alphabet A (i.e., |A′| ≤ min(L, |A|)). We will continue to refer to this as just the "top-k" for brevity. For all experiments, unless stated otherwise, we use a hashing-stride of s = ⌈n/4⌉.
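A toy check of the redundancy argument: a ubiquitous sequence of length ℓ contributes ℓ − n + 1 overlapping n-grams, and because the keep/drop test depends only on the n-gram's hash, the same gram receives the same decision at every offset in every file, keeping surviving counts exact.

    l, n, s = 64, 8, 2
    common = bytes(range(l))                       # a shared length-64 run
    grams = [common[i:i + n] for i in range(l - n + 1)]
    print(len(grams))                              # 57 overlapping n-grams
    kept = {g for g in grams if hash(g) % s == 0}  # deterministic per gram
    print(len(kept))                               # roughly half survive
    g = common[3:3 + n]                            # same gram, any offset:
    assert (hash(g) % s == 0) == (g in kept)       # identical decision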
For a uniform hash function, the expected number of collisions for any individual bucket is L/B. There are k buckets of interest corresponding to the topk most frequent n-grams. If f (x;p, |A|) is the Zipfian probability distribution function with cumulative distribution F(\u00b7), the total number of observed n-grams that do not collide with the top-k is L \u00b7 (1 \u2212F(k;p, |A|)). Then, the expected number of infrequent n-grams that collide with the top-k n-grams is then equal to this value times k/B: k \u00b7 L \u00b7 \u00a9 \u00ad \u00ad \u00ab 1 \u2212 H(p+1) k H(p+1) |A| \u00aa \u00ae \u00ae \u00ac /B (4) Equation 4 includes multiple occurrences of the same infrequent n-gram, and so is pessimistic. If one makes the Space-Saving algorithm large enough to handle all possible collisions, then the true top-k n-grams are obtained with high probability. This is because the Space-Saving algorithm degrades to a simple hash-table that counts everything exactly when the number of buckets in the Space-Saving data structure is greater than or equal to the number of unique items in the table. From a theoretical perspective, using the Space-Saving algorithm instead of a hash-table gives us added flexibility to deal with the rare possibility of having more than the expected number of n-gram collisions. More practically, we use the Space-Saving algorithm because real data is not truly Zipfian distributed, and this gives us a method of gracefully handling deviations from theoretical expectations. Considering the expected collisions in Equation 4, we can make some practical simplifications given hardware constraints and data assumptions. First, we pessimistically assume that p = 1, which is the worst case for \u201cinteresting\u201d power law distributions observed in real data, which generally fall in the range of [1, 4] [11]. Next, we pessimistically assume that the alphabet A is infinite in size. This technically degrades to the Zeta distribution, and is pessimistic because it maximizes the amount of probability mass that exists in the tail of the distribution (i.e., reduces the value of F(k; ,p, |A|)). Doing so, we obtain lim |A|\u2192\u221e\u2212k \u00b7 L \u00b7 \u00a9 \u00ad \u00ab H(2) k H(2) |A| \u22121\u00aa \u00ae \u00ac /B = kL \u0010 \u03c02 \u22126H(2) k \u0011 B\u03c02 which further simplifies to 6 \u00b7 k \u00b7 L \u00b7 \u03c8 (1)(k + 1) B\u03c02 where \u03c8 (\u03b1)(\u03b2) is the PolyGamma function. This simplification is significant, because \u2200k \u22651,k\u22121 > \u03c8 (1)(k+1), allowing us to replace the PolyGamma function with a pessimistic upper bound. Further, because lim k\u2192\u221e \u0010 1 k /\u03c8 (1)(k + 1) \u0011 = 1, this upper bound is tight. We may further simplify by replacing the PolyGamma evaluation with 1/k, yielding Equation 3. The bound in (3) states that the number of collisions is linear in the total number of n-grams processed. This is not a surprising result. More important is that we have a numerical upper bound that can be employed to reduce the number of collisions dramatically. Under a more pessimistic assumption that p < 1, the (3) bound would not hold. However, a similar result can be obtained using the Spave-Saving structure. For that algorithm, Zipfian data with p = 0 (the worst possible case) requires O(k2 log(|A|)) buckets. Since we have already found the top-k colliding hashes based on a table of size B, plugging this into the proof from [23] leads to a requirement of O(B\u22121k2 log(|A|)) buckets. 
The bound in (3) states that the number of collisions is linear in the total number of n-grams processed. This is not a surprising result. More important is that we have a numerical upper bound that can be employed to reduce the number of collisions dramatically. Under a more pessimistic assumption that p < 1, the bound (3) would not hold. However, a similar result can be obtained using the Space-Saving structure. For that algorithm, Zipfian data with p = 0 (the worst possible case) requires O(k^2 log(|A|)) buckets. Since we have already found the top-k colliding hashes based on a table of size B, plugging this into the proof from [23] leads to a requirement of O(B^-1 k^2 log(|A|)) buckets. Noting that for processing n-grams, |A| = 256^n, this can be simplified to O(B^-1 n k^2). We note that in all experiments in this paper, B > n · k, which allows those terms to cancel. This leaves the KiloGram approach with a total of O(k) buckets to process any Zipfian dataset for all p ≥ 0, a considerable improvement compared to O(n k^2) buckets for the Space-Saving algorithm alone.

4 CLASSIFICATION RESULTS

To test and evaluate the proposed KiloGram approach, we make use of four datasets that include Windows PE files and Adobe Portable Document Format (PDF) files. The datasets are summarized in Table 1, with more detail in the appendix.

Table 1: All datasets used in our experiments, including size of training and testing sets, and primary year the data is from.
Dataset         Year       Train      Test     Storage Size
Industry EXE    2014-2015  2,011,786  400,000  5 TB
EMBER           2017       600,000    200,000  936 GB
Public PDF      2018       751,829    83,780   464 GB
VirusShare-20C  2013-2018  160,000    40,000   141 GB

The "Industry EXE" dataset was provided to us, under a non-disclosure agreement, by a third-party AV company. The training set contains 2 million Windows PE executables, evenly split between benign and malicious [32], and a test set of 400,000 binaries, also evenly split [34]. The files from which the EMBER dataset [4] was created can be obtained from VirusTotal [1]. EMBER has an even split between benign and malicious, and since it is 2-3 years newer than Industry EXE, we can use it as an extreme test of generalization over time. This is important since malware is known to exhibit concept drift [17].
4.1 Hashing-Stride Improves Performance

First, we evaluate the inclusion of our hashing-stride approach, as discussed in § 3.1. The expectation is that, as n becomes larger, the performance of models built from the top-k most frequent features will drop due to an increasing redundancy in the top-k list. Our results back up this theoretical prediction, as shown in Figure 1.

[Figure 1: Balanced Accuracy results (y-axis) on the Public PDF dataset as we increase the n-gram size (x-axis, log scale) and alter the hashing stride s. Using a hashing-stride retains more performance as n becomes larger.]

Figure 1 shows that for small $n \leq 16$, the absolute difference in accuracy is less than 0.1 in all cases, and the hashing-strides are correspondingly small values $s \in [2, 4]$. At $n = 32$ the performance gap increases slightly, and by $n = 64$ the difference becomes significant. Across all $n \in [8, 1024]$, the use of a hashing-stride ($s = \lceil n/4 \rceil$) dominates a naive approach without a hash-stride ($s = 1$). This result appeared across all datasets, so for the remainder of the paper all results are shown with the hashing-stride of $s = \lceil n/4 \rceil$. In extended testing, we also investigated other ratios such as $s = n/2$ and $s = n$. While all $s = O(n)$ performed better than $s = 1$, the choice of $n/4$ consistently performed best among the options tested. A sketch of the stride-hashing pass appears below.
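A minimal sketch of the stride-hashing pass, assuming a generic checksum (CRC32) as a stand-in for whatever hash function the full system uses; the bucket count B is also a placeholder.

```python
# Hedged sketch: hash n-grams of a byte stream with stride s = ceil(n / 4).
# Only offsets that are multiples of s are hashed, reducing the redundancy
# of heavily-overlapping near-duplicate n-grams in the top-k list.
import math
import zlib  # zlib.crc32 stands in for the production hash function

def hashed_ngram_counts(data: bytes, n: int, B: int) -> list[int]:
    """Count hashed n-grams of `data` into B buckets, stepping by stride s."""
    s = math.ceil(n / 4)
    counts = [0] * B
    for start in range(0, len(data) - n + 1, s):
        h = zlib.crc32(data[start:start + n]) % B
        counts[h] += 1
    return counts

counts = hashed_ngram_counts(b"example bytes of a file" * 100, n=64, B=2**20)
print(sum(counts), "n-grams hashed")
```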
4.2 Computational Efficiency of KiloGrams

Computing the top-k most frequent n-grams has historically been computationally demanding, restraining most to consider only $n \leq 6$ unless working with small datasets. We have shown, from a theoretical view, that the KiloGram algorithm has $O(L)$ complexity and a practically fixed memory cost of $O(B + k \cdot n)$. We now show that this result is matched empirically. We measure runtime on a server with four Xeon E7-8870 CPUs for a total of 80 cores, 2 TB of RAM, and 40 TB of SSD storage. Because of the hashing-stride, we find that the runtime tends to decrease as n increases. For the VS-20C corpus, computing 8-grams took 27 minutes whereas 1024-grams took only 12 minutes. While our primary results come from the use of a powerful server due to the need to train large logistic regression models, we note that such high-end equipment is not necessary to perform the n-gramming. The nature of the KiloGram algorithm means that any machine with approximately 10 GB of RAM should have no difficulty in performing the computation. To emphasize this, we re-ran the same KiloGram code on a workstation with a 10-core Xeon E5-2650 at 2.30 GHz, 128 GB of RAM, and a 4 TB SSD. It took only 41 minutes to compute the 1024-grams on this machine. The KiloGram algorithm can thus run on modest hardware thanks to its computational and memory efficiency. Even if one is interested in small values of n, the KiloGram approach exhibits superior run-time complexity and can provide dramatic speedups over naive approaches. On the largest corpus, the Industry EXE dataset (2M files), KiloGram took at most 12 hours of computation for all values of $n \leq 1024$. Mature code with three years of performance tuning required one month to compute 6-grams in the classical way: a 60x speedup for KiloGram over this baseline.

4.3 Investigating Large n

As we discussed in § 2, many have suggested the need for large n-gram sizes in building models for malware classification. However, after an extensive literature review, we found that no prior work empirically evaluated large n-grams on a large modern dataset. We present the first evaluation of large n-grams, and show in Table 2 the balanced accuracy and AUC across all four datasets. The last "Ind-2-EMBER" columns show results applying a model trained on Industry EXE to the EMBER test set, making a strong test for durability against concept drift over three years.

Table 2: Results as n increases, using hashing-stride.

     | Industry EXE | EMBER       | Public PDF  | VS-20C | Ind-2-EMBER
n    | Acc    AUC   | Acc    AUC  | Acc    AUC  | Acc    | Acc    AUC
8    | 98.2   99.8  | 99.2   99.9 | 98.9   99.7 | 95.2   | 97.6   99.7
12   | 97.5   99.7  | 98.9   99.9 | 98.7   99.7 | 93.8   | 97.4   99.4
16   | 96.7   99.5  | 98.6   99.8 | 98.7   99.6 | 92.3   | 95.9   98.9
24   | 96.4   99.4  | 97.9   99.7 | 98.6   99.6 | 88.1   | 95.5   98.4
32   | 96.0   99.3  | 97.1   99.4 | 98.2   99.6 | 85.2   | 93.9   97.9
64   | 94.9   99.1  | 96.3   99.2 | 92.0   99.3 | 87.4   | 92.9   96.8
128  | 94.0   98.7  | 93.6   97.8 | 92.8   99.0 | 79.4   | 88.9   94.9
256  | 92.6   98.0  | 90.3   95.6 | 91.3   98.5 | 76.5   | 86.6   91.9
512  | 92.2   96.8  | 78.7   84.8 | 86.5   96.8 | 71.7   | 71.9   69.9
1024 | 91.9   96.1  | 78.6   85.2 | 72.3   90.9 | 67.1   | 72.6   72.6

Across each dataset, we found that predictive accuracy does not increase beyond $n = 8$. Indeed, the maximal performance on all metrics occurs at $n = 8$. With some variation, we found that the performance in AUC degrades slowly for $n \leq 32$ across all datasets, but accuracy sometimes degrades faster. For example, the gap on the Public PDF dataset between $n = 8$ and $n = 32$ is 0.7 points, but is a more significant 10.0 points for the VS-20C corpus. More surprising was that 1024-grams had any predictive utility at all, let alone reaching 90%+ accuracy or AUC across many of our datasets. Our intuition was that n-grams for large n would be extremely brittle, common across only a few samples, and therefore ineffective for generalizing to new files. This was not the case, however, and suggests re-use (perhaps in the form of code, header information, resources, or compiler fingerprints) in EXE and PDF document formats that allows these n-grams to generalize. We also see from the "Ind-2-EMBER" experiment that these n-grams can generalize across years of concept drift. At $n = 8$, a small loss of 0.6 points occurs. As n gets larger, the performance after three years drops faster, indicating that for $n \geq 128$, the features lose significant robustness to concept drift. Given these results, we expect that as the size of n increases, the features may begin to correspond to ever more specific indicators of benign or malicious intent. For example, use of the Windows API function "GetProcAddress" is a common indicator of maliciousness across many Windows PE malware samples and can be detected with $n \leq 6$ [34], but this indicator alone is not enough to detect malicious files since there are many benign use cases for this function. As n becomes larger, we expect to see features that instead address sub-populations of malware, rather than the population at large.
In Figure 2, we plot the balanced accuracy as a function of the number of non-zero (NNZ) weights in the learned Elastic-Net regularized logistic regression model, where fewer non-zero features corresponds to a larger regularization penalty $\lambda$. Here we see that for small n, there is a smooth and continuous increase in accuracy as more features are selected along the regularization path. As n increases, the behavior transitions to an initial rise in accuracy, followed by a plateau once a minimum number of features are obtained. The start of this plateau occurs earlier, and the initial slope is steeper, as n becomes larger.

[Figure 2: Balanced Accuracy results (y-axis) on the Industry EXE dataset as the number of non-zero weights (x-axis) learned by the logistic regression model increases, shown for n-gram sizes from 12 to 1024.]

This behavior is intuitive, and corresponds with our expectation that the specificity of n-grams will increase with their size. For small values, a large number of n-grams are necessary to cover a wide range of smaller components that reflect the work and actions of larger features. At larger values of n, the model quickly selects all "useful" features. We believe this is because it becomes easier to delineate the predictive subset due to feature occurrences clustering around increasingly specific subsets of the population. Once a sub-population is well separated, additional features have little value unless they can "carve off" a different sub-population, and so performance plateaus. A unique benefit of larger n-grams is their increased interpretability, which allows us to provide additional evidence for this interpretation of our results in § 5.

5 FEATURE ANALYSIS

The previous sections described how KiloGrams are computed, and how they perform as features in a machine learning algorithm. For malware classification we find large KiloGrams have considerably more value in their application to analyst workflow and integration into larger systems. In this section we will describe how malware analysts can use larger n-gram features in the course of their investigations, how they can be used in current signature-based tools like Yara, and how they can be integrated with domain knowledge features to build a more powerful malware classification system. In going through a number of n-gram features, both experienced and junior analysts determined that it usually took a few minutes to understand what a single feature meant or represented, with some features taking longer.

5.1 Analyzing Individual Features

In a machine learning context, there are many features that could be pulled from binary files for use in classification, such as information from the PE header, printable strings, or (in our case) raw byte sequences. Malware analysts, whose job usually involves dissecting pieces of malware to write detection signatures or understand how they operate, work with many of these feature types in the course of these investigations.
From the point of view of these analysts, a machine learning system based on small n-grams is opaque; it takes many of these features to compile enough evidence to apply a benign or malicious label, and since these n-grams may be only a portion of a single x86 instruction as discussed earlier, they are not very helpful in shedding light on why the algorithm chose the label it did. When working with malware analysts in production, the inability to understand what a particular feature means when it is found in a binary has been a source of frustration, and has impeded adoption. The sheer size of KiloGrams changes this dynamic and presents a way to interpret features in what may otherwise be a black box. The presence of a KiloGram provides an immediate indication of where to start looking, and a large enough section of bytes to be meaningful to an analyst. Features that contribute the most to a malicious/benign decision can contain strings, embedded data like images, or code that may or may not require disassembly or decompilation, depending on file format. (Disassembly is the process of converting raw bytes into the low-level assembly language they represent, while decompilation converts assembly instructions into a higher-level language like C.) This can help analysts reach a conclusion about a binary's nature, which is important since the average time to process a malicious file is 10 hours [25]. The ability to understand what the features mean is also important to build trust so that developed solutions will be adopted and used. Below we provide some examples of the interpretability of features found by malware analysts at different n-gram sizes.

5.1.1 EMBER Examples with 64-grams. In Figure 3, we see the disassembly of a code snippet discovered by a 64-gram that occurred in 8% of the EMBER dataset. Upon inspection this code assembles the string "VirtualAlloc". This is then later used to obtain the "GetProcAddr" function in an obfuscated manner, so that the binary can then load other libraries at runtime. This is a technique to obfuscate the true intentions of the binary from malware analysts, and is considered a strong indicator of maliciousness.

push eax                              #50
call DWORD PTR [ebp-0xcc]             #ff9534ffffff
mov DWORD PTR [ebp-0x20], eax         #8945e0
mov DWORD PTR [ebp-0xa4], 0x74726956  #c7855cffffff56697274
mov DWORD PTR [ebp-0xa0], 0x416c6175  #c78560ffffff75616c41
mov DWORD PTR [ebp-0x9c], 0x636f6c6c  #c78564ffffff6c6c6f63
and DWORD PTR [ebp-0x98], 0x0         #83a568ffffff00
lea eax, [ebp-0xa4]                   #8d855cffffff
push eax                              #50
push DWORD PTR [ebp+0xe]              #ff750e
xor bh, bh                            #30ff
xchg ebp, eax                         #95
cmp bh, bh                            #95
.byte 0xff                            #ff

Figure 3: Example of a disassembled 64-gram feature found in the EMBER dataset. The hex values of the raw bytes are shown in comments for each line of assembly.

A considerable number of 64-grams contained sub-strings of registry keys. In the EMBER dataset, 10% of malware was found to have HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer, which may be used to hijack the explorer process. Another 12% contained HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run for securing persistence for the malware. One 64-gram triggered on the image icon shown in Figure 8 in § A.2 of the Appendix. Several malware samples contained precisely this icon in the resources section, causing it to become a predictive feature. This is another example of a feature with an easily understood nature, thanks to the large scale of 64-grams, which would have been uninformative if broken up into standard $n \leq 6$ grams. A sketch of recovering such a listing with an off-the-shelf disassembler appears below.
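A minimal sketch of how one might disassemble a recovered 64-gram for inspection. The paper does not name the disassembler its analysts used; Capstone is assumed here purely as one widely available choice, and only the first few bytes of the Figure 3 feature are reconstructed from its hex comments.

```python
# Hedged sketch: disassemble raw feature bytes with Capstone
# (pip install capstone). Capstone is an assumed tool, not the paper's.
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

# Prefix of the Figure 3 feature, reassembled from its hex comments.
feature = bytes.fromhex(
    "50"                      # push eax
    "ff9534ffffff"            # call DWORD PTR [ebp-0xcc]
    "8945e0"                  # mov DWORD PTR [ebp-0x20], eax
    "c7855cffffff56697274"    # mov DWORD PTR [ebp-0xa4], 0x74726956 ("Virt")
)

md = Cs(CS_ARCH_X86, CS_MODE_32)
for insn in md.disasm(feature, 0x1000):
    print(f"{insn.address:#06x}  {insn.mnemonic:5s} {insn.op_str}")
```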
5.1.2 PDF Example from 512-grams. For an example of a 512-gram, Figure 4 is a fragment from our Public PDF dataset discovered by simply looking through the extracted features. The scripting code was identified as JavaScript, which is often used in PDF files, but more frequently used (and obfuscated, as in this case) in malicious PDF files. The long string in the middle also stood out; 0x41 is the letter 'A' in hexadecimal format, and long strings of this character are often used by exploit writers to assist in crafting the correct string to take advantage of a vulnerability.

ey=aba();ek=24;ek++;ef=6192;ef--;if(ey>=wf){ep=4810;if(ep!=null){ev=3742;ev+=0.013}eq=0.03;eq--;ed=we;ea=0.017;if(ea==0.0161){er='tae'}ex=sub(wd,wu);ez=5544;ez++;ej=true;es=wy;eu=8;eu--;eb='';en=9;en++;eh='';eo=0.034;eo++;ei=6343;ei++;
if(ey<f){el=8952;el--;eb=r;mt=6172;mt+=0.0101;mw=1834;mw-=7115;eh='4c20600f0517804a3c20600f0f63804aa3eb804a3020824a6e2f804a41414141260000000000000000000000000000001239804a6420600f000400004141414141414141';me=6;me++}else
if(ey<h){mm=18;mm++;eb=u;mg=false;mc=[7,35,21,4[...]wp>null){wv=0.0082;wv++}wf+=2177;wq=0.0032;wq+=3755;wd=zoa('qXM7reN6',15);wa=0.006;if(wa!=21){wr=0.014;wr++}wx=zoa('tCTF3OdREync',13);wz=[40,24,32,8,48,0,16,56];wj=0.008;if(wj<0){ws=null;ws+=12}wu=18690;wb=0.007;if(wb!=13){wn=[0,16,8,24];wh=17;wh+=7832}wu-=7706;wo=28;if(wo==8){wi=['fen','lag','het']}wl=22;if(wl<4370)

Figure 4: A 512-gram found in the PDF dataset (line-wrap markers from the original layout removed; [...] marks a gap in the extracted text). This example contains obfuscated JavaScript code to build an exploit string targeting particular versions of PDF reader software.

The next step in analysis is to find and extract the entire piece of embedded code. In this case a web search yielded the exploit code in a public repository. The code was then manually deobfuscated, which revealed its functionality. Two different versions of an exploit string are built, depending on the software version (the target is likely to be Adobe Acrobat, the most popular PDF reader, but this was not confirmed). A successful exploit results in a visit to http://phjqxagpgdw.com/nte/goldmn.asp; a quick search revealed a number of identified malicious domains with this naming scheme. The short decoding check below shows how pieces of such a string can be inspected.
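A small check of the claim about the hex string, assuming only the Python standard library; the byte string below is one of the 41...41 runs carried in the eh variable of Figure 4.

```python
# Hedged sketch: 0x41 is ASCII 'A', so the long 41...41 runs in Figure 4's
# hex string decode to the 'A' padding typical of exploit development.
chunk = bytes.fromhex("4141414141414141")
print(chunk)                 # b'AAAAAAAA'
assert chunk == b"A" * 8     # confirms the run is literal 'A' bytes
```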
5.2 Features as Signatures

As the value of n increases, the resulting features gain increased specificity in the files they target. It also becomes less likely to observe an exact n-byte sequence by chance. This inspired us to explore how well some of these larger n-grams might serve as Yara signatures for malware detection. Yara [3] is an industry-standard regular expression tool designed for malware analysis. Yara rules are usually designed to have low false-positive rates, in the sense that if a specimen causes a rule to match, the specimen is probably malicious. Using KiloGrams this way may seem counter-intuitive: our n-grams make powerful features for machine learning algorithms, which many believe will ultimately replace signature-based malware detectors. However, signature-based systems are still widely used, and will likely have a large role to play in layered defensive systems for the foreseeable future. YarGen [37] is the only currently-maintained tool we know of for automatically generating Yara signatures for Windows PE files. YarGen uses a number of domain knowledge processing steps to create a signature from several files. Where it is feasible, we use YarGen as a comparison to our method.

For our approach, we use the coefficients learned by logistic regression to select the 4,000 n-grams most indicative of the class we are interested in. We then look at the false-positive rate of each individual n-gram (on the training data), and discard any with a false-positive rate above 5%. If any two or more n-grams always co-occur, we randomly select one of the n-grams and discard the others. We then combine the remaining set to form a simple Yara rule, which looks for the exact n-grams, and fires if any of the n-grams occur. Normally a combination of sub-rules is necessary; for example, YarGen usually only fires if 3 or more out of a list of patterns match. However, the statistical improbability of any individual KiloGram makes them independently robust detectors. After creating a rule for many values of n, we select the rule with the best F1 score on the training set. A sketch of this rule construction appears at the end of this subsection.

5.2.1 Results on VirusShare-20C Dataset. First we used the malware family dataset described in § 4 to automatically create Yara rules to identify specific families. This is a common and arduous task normally done manually by a malware analyst. We trained one family-vs-rest rule set for $n \in [8, 1024]$, and found that no single value of n was best for all families. For 9 out of 20 families, $n = 1024$ performed best, a trend counter to the use of KiloGrams as purely predictive features in § 4. We compared the results of our new approach to the existing YarGen in Figure 5, where we look at the F1 score for each family.

[Figure 5: The F1 score of Yara rules automatically generated from the VirusShare-20C training data, and evaluated on the test data, across 20 malware families. A different rule set is created for each family.]

A Wilcoxon signed rank test [13] shows that our KiloGram-based approach is better, with a p-value of $3.1 \times 10^{-5}$. These results should not be taken to mean that YarGen is inferior to KiloGrams; YarGen is a tool that iterates over various features (strings, byte opcodes, etc.) to find the best predictive rules. In a sense, KiloGrams represent a new class of features that tools such as YarGen could incorporate to improve their detection rates. We believe these results show that KiloGrams can be a valuable tool in the creation of signatures for malware detection, and the combination of these large features with machine learning tools can help automate the process of signature creation. For a final test, the Yara rules used for each family were run over the EMBER benign test set, as having low false positives on benign files is a critical feature. Note that no benign datasets were used in the creation of these KiloGrams. The KiloGram-based rules had a median false-positive rate of 0.0065%. This is 24x better than YarGen, which had a median false-positive rate of 0.1595%.
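A minimal sketch of the rule construction just described, assuming the coefficient-based selection and false-positive filtering have already produced a final list of n-grams; the rule name and the toy byte strings are illustrative only.

```python
# Hedged sketch: turn selected n-grams (raw bytes) into a Yara rule that
# fires if ANY single n-gram is present, matching the any-match design
# described in the text.
def ngrams_to_yara(ngrams: list[bytes], rule_name: str = "kilogram_rule") -> str:
    lines = [f"rule {rule_name}", "{", "    strings:"]
    for i, g in enumerate(ngrams):
        hex_str = " ".join(f"{b:02X}" for b in g)      # Yara hex-string syntax
        lines.append(f"        $g{i} = {{ {hex_str} }}")
    lines += ["    condition:", "        any of them", "}"]
    return "\n".join(lines)

# Toy example: two short byte strings standing in for real KiloGrams.
print(ngrams_to_yara([b"\x4c\x20\x60\x0f", b"VirtualAlloc"]))
```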
5.2.2 Results on Industry EXE Dataset. We repeated the same experiments on our Industry EXE dataset, to create a generic "malware" signature. This is unprecedented in the standard use of Yara, which is meant for identifying files of a specific nature. YarGen failed to run on 2 million files in a timely fashion, so we are unable to compare with any prior work in the goal of creating a generic malware signature. The results when attempting to use different values of n are shown in Table 3.

Table 3: Yara generation results on Industry EXE.

n          | # of rules | True Neg | False Pos | False Neg | True Pos | Precision | Recall
32         | 25         | 36.643%  | 63.357%   | 15.742%   | 84.259%  | 57.08%    | 84.26%
64         | 23         | 81.554%  | 18.446%   | 56.791%   | 43.209%  | 70.08%    | 43.21%
128        | 22         | 98.599%  | 1.401%    | 78.44%    | 21.56%   | 93.90%    | 21.56%
256        | 4          | 100%     | 0%        | 95.034%   | 4.966%   | 100.00%   | 4.97%
512        | 31         | 99.987%  | 0.013%    | 78.777%   | 21.223%  | 99.94%    | 21.22%
1024       | 52         | 99.947%  | 0.054%    | 76.68%    | 23.32%   | 99.77%    | 23.32%
2048       | 35         | 99.989%  | 0.012%    | 78.392%   | 21.609%  | 99.95%    | 21.61%
4096       | 84         | 99.992%  | 0.009%    | 89.79%    | 10.21%   | 99.92%    | 10.21%
8192       | 145        | 99.684%  | 0.316%    | 89.61%    | 10.39%   | 97.05%    | 10.39%
[256-4096] | 206        | 99.938%  | 0.063%    | 74.958%   | 25.042%  | 99.75%    | 25.04%

We can see that any $n \in [256, 4096]$ produces signatures with low false-positive rates, and surprisingly can catch up to 23% of the malware in the test set. Naively combining all KiloGrams in this range into one larger signature of 256- through 4096-grams boosts the recall up to 1/4 of the test set malware. This produced 125 false positives, which we investigated with VirusTotal [1]. Of these, 10 are now reported as OutBrowse malware; 45 behave very similarly to each other and are almost certainly adware/spyware; 10 are unsigned (a huge red flag) versions of mmc.exe (Microsoft Management Console) in various languages; and another 11 have other malicious indicators (such as 1 or more malicious AV reports or relationships with other malicious files). Only 49 of the 125 reported false positives display no evidence of being malware. Further, we tested the Industry EXE generated signatures on the EMBER test set, which is 2-3 years newer. This signature was still able to catch 8.7% of the EMBER malware, with a false-positive rate of 0.0093% on the EMBER benign set. The ability of these signatures to catch mislabeled data in our test set, and still generalize to data three years later (despite the concept drift common in this domain), increases our confidence in the usefulness of KiloGrams as signatures.

5.3 KiloGrams & Domain Knowledge

Based on the analysis in § 5.1, we find large n-grams represent interesting and relevant features present in large sub-populations of the malicious or benign binaries. This leads us to ask: can combining large n-gram features with human-engineered features produce a stronger model? To assess this, the top 100,000 128-grams were extracted from the EMBER training files. Using the Elastic-Net regularization path on the training data (Figure 6), we selected the regularization parameter C corresponding to the sparse subset that maximizes AUC. This resulted in $C = 10^{-1}$ and 17,294 nonzero 128-gram features.

[Figure 6: AUC as a function of regularization parameter C using Elastic-Net to force most coefficients to zero. Selecting $C = 10^{-1}$ gave 17,294 nonzero 128-gram features.]
We measured the lift provided by these 128-gram features by prepending the counts to two different domain knowledge feature sets: the 2,351 EMBER features crafted via domain knowledge, as well as a production set of feature extractors designed by malware analysis experts. These were compared to 128-grams alone, EMBER features alone, and proprietary features alone. We trained a gradient-boosted decision tree model using xgboost on each of these feature sets, with 200 boosting rounds, tree depths up to 9 levels, 50% column subsampling per tree, and an $\eta = 0.29$ learning rate [10]; a sketch of this setup appears at the end of this section. ROC curves on the validation features are shown in Figure 7. Adding 128-grams improved AUC in all cases. EMBER features alone achieved an AUC of 0.999597, and improved to 0.999718 when augmented with 128-grams. The proprietary features result in a slightly higher AUC of 0.999822, further improved to 0.99985 when augmented with 128-grams.

[Figure 7: ROC curves for 128-grams, EMBER, EMBER+128-grams, Proprietary, and Proprietary+128-grams, showing that KiloGrams augment the EMBER features to create a model that rivals one built using proprietary features.]

For a production malware detector deployed as an anti-virus, we care about the true positive rate (TPR) at a very low false-positive rate (FPR). The zoomed inset in Figure 7 shows the TPR at an FPR of 5:10000, which is reasonable for a production system. At that rate, the TPR of EMBER with 128-grams is comparable to the proprietary features alone, and both are then outperformed by proprietary features with prepended 128-gram counts. Also of interest is the ROC curve for 128-grams alone, which exhibits a peculiar jump in TPR near $2 \times 10^{-3}$ FPR. This fits the intuition that KiloGrams are essentially "getting the easy ones" via the top-k n-grams spanning large subsets of malicious or benign PE files. Feature combinations involving domain knowledge features cover the remaining samples. Indeed, it required 20 boosting rounds for a model trained on EMBER features to exceed 0.999 AUC on the evaluation set, but only 15 boosting rounds when augmented with KiloGrams. While the EMBER dataset has been noted as a relatively "easy" dataset [4], the results are a positive indicator of the utility of large n-grams in conjunction with domain knowledge features. Based on these promising results, more extensive work is being prepared to test KiloGram-augmented production features on industry-representative datasets.
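A minimal sketch of this boosting setup with the stated hyper-parameters; the random matrix X_aug and labels y are illustrative stand-ins for the real augmented feature sets, and all variable names are assumptions.

```python
# Hedged sketch: gradient-boosted trees over domain features augmented with
# 128-gram counts, using the hyper-parameters reported in the text.
import numpy as np
import xgboost as xgb
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_aug = rng.random((2000, 300))     # stand-in for EMBER + 128-gram counts
y = rng.integers(0, 2, 2000)        # toy benign/malicious labels

model = xgb.XGBClassifier(
    n_estimators=200,       # 200 boosting rounds
    max_depth=9,            # tree depths up to 9 levels
    colsample_bytree=0.5,   # 50% column subsampling per tree
    learning_rate=0.29,     # eta = 0.29
)
model.fit(X_aug[:1500], y[:1500])
val_scores = model.predict_proba(X_aug[1500:])[:, 1]
print("validation AUC:", roc_auc_score(y[1500:], val_scores))
```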
" }, { "url": "http://arxiv.org/abs/1809.10276v1", "title": "Growing and Retaining AI Talent for the United States Government", "abstract": "Artificial Intelligence and Machine Learning have become transformative to a\nnumber of industries, and as such many industries' need for AI talent is\nincreasing the demand for individuals with these skills. This continues to\nexacerbate the difficulty of acquiring and retaining talent for the United\nStates Federal Government, both for its direct employees as well as the\ncompanies that support it. We take the position that by focusing on growing and\nretaining current talent through a number of cultural changes, the government\ncan work to remediate this problem today.", "authors": "Edward Raff", "published": "2018-09-27", "updated": "2018-09-27", "primary_cat": "cs.CY", "cats": [ "cs.CY", "cs.AI" ], "main_content": "Introduction As Artificial Intelligence (AI) and Machine Learning (ML) become increasingly important tools for streamlining existing processes and enabling new capabilities, the United States Federal Government's demand for these skills and capabilities only increases. The standard operating procedures of most agencies within the government make attraction and retention of individuals with these skill sets difficult. Compensation is one significant roadblock to attracting initial talent. The average \"Data Scientist\" job nationwide pays $120,931 a year (https://www.glassdoor.com/Salaries/data-scientist-salary-SRCH_KO0,14.htm), which would be the same salary as a Step 5 GS-15 employee working for the base General Schedule (https://www.opm.gov/policy-data-oversight/pay-leave/salaries-wages/salary-tables/pdf/2018/GS.pdf). This would require hiring staff into what is normally a senior-level position with no room for future promotions, and almost no room for future increases in salary. This issue becomes more challenging when employees in this space are attracted to startups for their reduced bureaucracy, increased autonomy, and a further 10% salary premium compared to larger organizations Kim (2018). While exemptions to the GS pay scale are possible, the process needed to obtain such exemptions means that they are intrinsically limited. Thus compensation will remain a major competitive disadvantage for the U.S.G. when competing for talent. Not only must the government work to retain its own talent, it also needs to work with the contracting agencies that supplement its staff. While contractors can receive larger salaries than their U.S.G. employee counterparts, retention is still problematic with high demand. Potential deficiencies in the U.S.G. space that make it difficult to retain skilled staff then intrinsically impact the contracting agencies' ability to retain the same skill sets when working for the government. We take the position that it is possible to grow and retain more AI talent within the federal government space in the current competitive environment, provided some changes are made. We focus on changes that we believe are more realistic and obtainable for many leaders within the government, subject to the unique restrictions each may face. Namely, we recommend an approach of identifying and supporting AI \"champions\" with increased autonomy and support, pushing a culture of better intermingling of direct and contract employees, ensuring staff promotions do not leave \"AI Vacuums\", and increasing active collaborations with academia. 2 Growing Talent with AI Champions It is unlikely that the pay-scale and other systematic issues that make attracting AI talent difficult will be resolved in the near future. Kundra (2010) laid out a 25-point strategy to reform the U.S.G.'s approach to Information and Technology Management, including dedicated career paths for IT management, working with Congress to improve budgeting flexibility, and avoiding immutable \"Grand Plan\" design approaches.
Eight years later many of these suggestions have not been fully realized, and a number require support from Congress to make happen. Thus, we propose that the government focus more on growing AI talent internally. This puts more of the control in the hands of Technical Directors and Agencies to act within their own organization to fill their needs. In particular, managers should identify an AI champion who can help shape and lead execution on mission goals, as well as grow the talent within the organization. It is crucial that this AI champion be given respect, breadth in autonomy, the freedom to investigate problems before providing a conclusion, the freedom to design solutions as they see fit to fill a need, and the ability to say \"no\". These all relate to issues Leetaru (2016) identified as common problems preventing the effective use of data science and AI in the government. 2.1 Supporting AI Champions Li (2014) identified general categories of support, ownership, and purpose as being necessary to obtaining and retaining talent. We argue that support is one area that many departments within the U.S.G. could currently improve. This includes support in the form of resources (having the right hardware and software) and education. Compute Support In terms of hardware, many organizations simply do not have access to the compute resources necessary. This is especially true when all staff are forced to use thin-client machines, which are cost-effective and efficient for many purposes, but not for the often compute-intensive needs of AI and ML. Organizations need to be prepared to buy significant compute power for their staff if they wish to make effective use of AI, or should instead postpone their plans until funding for compute resources can be obtained. Once such funding exists, they should rely on their AI champion to define the compute needs and integrate them with the procurement process. In our experience, many organizations treat every dollar spent on computer equipment as equal. In reality, differing vendors may have significant changes in price for otherwise equivalent hardware. In addition, different algorithms may perform best on different types of hardware. Even simple choices such as a trade-off between more CPU cores or more RAM can be problemand algorithm-specific as to what is best for the team and mission. It is also important to recognize that a balance of on-premise hardware and cloud compute resources is likely to exist. There are unique restrictions that can be imposed by a government agency's missions that necessitate the consideration of one of these sources in particular. When there is freedom to choose, we recommend that cloud compute be used as the primary compute source until an AI champion can be found to help determine the path forward. Cloud compute's flexibility in provisioning and disposing of resources makes it a perfect fit when a compute strategy has not been determined, but progress still needs to be made without constraining the team to sub-optimal equipment for several years. Leveraging the different kinds of compute instances available can even help to make a hardware determination. Educational Support Education is also critical for growing and retaining AI talent, and the education responsibilities cannot rest solely on the shoulders of the AI champion. Classes at local universities, team working-sessions / hackathons, and conference attendance are all crucial components of this effort.
Support for the latter two within the government has been depressed in recent years. Simple acts such as providing a group working space and food have minimal cost compared to employee salaries and overheads. Yet using the GSA's SmartPay system GSA (2015) to provide team lunches on an occasional basis appears to be a non-existent practice. This simple act can provide considerable benefit to employee morale and retention, while also allowing a dedicated forum to disseminate lessons learned and knowledge within a working group. Even if SmartPay can't be approved, the management can set a culture by example of bringing in food to share during such hackathon sessions. Conference attendance in particular has been restricted across the government as a whole since a 2012 memo mandated reduced conference spending and increased oversight Zients (2012). A more recent memo has rolled back a number of these requirements in light of their hindering education and training in support of these agencies' important functions Donovan (2016), but we find many organizations remain just as risk-averse in approving conference travel and spending. Agencies need to work to remove this risk aversion to conference travel in order to train their staff. In particular, we note that considerable benefit could be achieved by reducing the turn-around time from request to approval such that it could be done within three months. This would allow staff to better select conferences based on the announced accepted papers, workshops, and tutorials. The workshops and tutorials in particular could be of considerable value due to their more focused nature, but they may not be known within the advance time frame often demanded by the current conference approval system for many within the U.S.G. 3 Make Contractors Part of the Team Contractors to the United States Federal Government, like contract and temporary workers in other sectors of the economy, can often feel like they are second-class citizens within the organizations they support. This is normally caused by some disparate treatment perceived as unfair or unjust beyond recognizing the practical differences in employment. Problems could include preferential treatment of Government staff in working conditions, or discounting solutions proposed by contractors, even if they are experts in their area. While this second-class citizen problem is not true of all groups within the U.S.G., these issues are not new, and treatment at the client site directly impacts a contractor's desire to stay with both their client organization and their contracting company Boswell et al. (2012). Because contractors are currently a crucial component of the U.S.G.'s workforce and ability to execute its mission, this issue should be of direct relevance and considerable importance to managers in the government. Integrating contractors as \"part of the team\" is about more than solving a second-level retention issue for AI talent. If managers allow a greater intermixing of staff such that federal employees and contractors could both be on teams led by other employees or contractors, then a contractor could be leveraged as an AI champion as discussed in section 2. Leveraging contractors as the source of these champions can allow the government to circumvent the pay-scale issue in attracting and retaining talent, as the contractors are not constrained by the GS pay scale system.
For organizations which do not currently have any significant AI talent on staff, the contracting route allows them to leverage the flexibility of contract staffing to find the right champion that \"fits\" with the organization's culture and unique needs, avoiding the greater risk of hiring a new federal employee that might not work out. This can be particularly important if attempting to hire talent from Silicon Valley that may not adjust well to the unique constraints imposed by work in the government space Leetaru (2016). Further, empowering contractors with the equality and respect needed for them to make the proactive decisions and changes necessary to function as an AI champion has been found to improve the job performance and satisfaction of contractors working in the IT industry Huang and Lin (2016). We also note that contractors, in particular ones from larger organizations, can bring with them an additional social network that can be of utility. Through their parent company, the contractor may be able to reach out to or discover others within the federal government who have similar needs, have previously encountered similar problems, or have compute resources they are willing to share. Leveraging both the client organization's network and the contractor's network can lead to faster results. We note that this kind of potential knowledge transfer can include information about tangential issues, such as how to import or export analytic code, that is important for getting work done but may not be directly about a particular AI challenge. The contractor's network may also be effective for sharing information across agencies that encounter similar problems, but may not have regular communication or even be aware that both groups are tackling the same issue. 4 Top-Down Leadership from the Bottom-Up The United States government currently lacks top-down leadership in the AI space. A symptom of this is the lack of a national AI or ML strategy, despite the U.S. being dominant in the field as a whole. At the same time, South Korea, Japan, China, the United Kingdom, and Canada have already released national strategies, with other countries actively developing strategies Carter, Kinnucan, and Elliot (2018). The United Arab Emirates has not only released a strategy, but created a State Minister for Artificial Intelligence, and is pursuing AI as integral to the government's mission of improving the quality of life for its citizens Halaweh (2018). This issue is important to retention as it means many organizations lack a transformational leader Bass (1990) to help attract and retain talent and also improve productivity through the positive effects of a strong and consistent message Wright and Pandey (2009); Barling, Weber, and Kelloway (1996); Council (2004), and to support the creativity needed to perform effective data science Cheung and Wong (2011); Li (2014). While it will always be possible to hire outside talent to fill this AI leadership need, we believe much of this leadership could come from promotions of current staff. Given the large potential benefits of applying AI successfully to government missions, the successes that occur at a local level from fostering AI talent could be high-profile boosts to a career. This creates the potential for developing this leadership from the bottom-up, but also requires consideration.
Individuals promoted need to work with their existing management and colleagues to coach and train their replacements. This ensures that the culture and talent developed are not transient with the manager's presence, but lasting components of the institution. If a promoted individual's replacement does not share, or is not capable of continuing, their mission of fostering AI talent, the staff that developed such talents will be at increased risk of leaving and creating an \"AI Vacuum\". Staff might leave to follow their former manager, or be lured to higher-paying positions in industry when job satisfaction decreases. The essence of this consideration is to recognize that those who grow into AI leaders in the government are not fungible. Moving or promoting them without consideration may lead to talent loss or movement that hampers an organization and reduces productivity. This does not mean that such staff should not be promoted (indeed, we are arguing that their promotions will drive increased and wider AI talent growth!), but that supporting them includes encouraging them to identify and train their eventual successors. 5 Collaborate with Academia The need to reduce communication barriers and \"stove-pipes\" or \"silos\" within the Government has long been recognized, and duplicated efforts account for hundreds of millions in excess expenditure Dodaro (2018). Beyond wastefulness, it can also hinder the government's goals. For example, then-senior CIA officer Kindsvater (2003) discussed how the stovepiping in the Intelligence Community (IC) was inhibiting mission progress, and would become a larger problem as the IC's missions required more advanced and complex technology. This is a long-standing issue that is unlikely to be resolved by managers today, and does impact the ability to retain and recruit AI talent by interfering with collaboration, creativity, and general employee happiness Leetaru (2016). For this reason we would encourage agencies to reach outside to university research groups as alternative collaboration partners to form symbiotic relationships. For the university group, the problems faced by the U.S.G. provide real-world, grounded needs that make for more compelling research and publications. Individual agencies, subject to individual circumstance, may be able to share unique data that enables research that would not be possible without the government's assistance. Some organizations within the government may in addition be able to share compute resources that would significantly augment or dwarf those available to smaller research labs. For the government, the research group provides an augmentation of effective staff. Financially supporting a lab through a year of collaboration can be especially cost-effective for the amount of work produced and time that graduate students may spend on the problem. The professor leading the research group can act as an important source of expert AI knowledge that would be challenging to retain as an employee, but can still help current employees grow in their abilities. When results are published, the paper provides a mechanism for bridging the silo gap by connecting with others in the government attending the same venue that the paper is published in. Finally, the connection to both graduate and undergraduate students in a lab can create a recruiting pipeline of talent that could be hired in a few years' time.
We emphasize, though, the importance of making the relationship an active collaboration, with employees working with students, in order to maximize the benefits. Simply sponsoring research may help solve some problem, but fails to realize the numerous possible ancillary benefits." }, { "url": "http://arxiv.org/abs/1807.00392v1", "title": "Gradient Reversal Against Discrimination", "abstract": "No methods currently exist for making arbitrary neural networks fair. In this\nwork we introduce GRAD, a new and simplified method for producing fair neural\nnetworks that can be used for auto-encoding fair representations or directly\nwith predictive networks. It is easy to implement and add to existing\narchitectures, has only one (insensitive) hyper-parameter, and provides\nimproved individual and group fairness. We use the flexibility of GRAD to\ndemonstrate multi-attribute protection.", "authors": "Edward Raff, Jared Sylvester", "published": "2018-07-01", "updated": "2018-07-01", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.AI", "cs.LG" ], "main_content": "Introduction Artificial Neural Network methods are quickly becoming ubiquitous in society, spurred by advances in image, signal, and natural language processing. This pervasiveness leads to a new need for considering the fairness of such networks from many perspectives, including: how they are used, who can access them and their training data, and potential biases in the model itself. There are many reasons for desiring fair classification algorithms. These include legal mandates to be non-discriminative, ensuring a moral or ethical goal, or for use as evidence in legal proceedings (Romei & Ruggieri, 2014). Despite the long-standing need and interest in this problem, there are few methods available today for training fair networks. When we say that a network is fair, we mean fair with respect to a protected attribute $a_p$, such as age or gender. Our desire is that a model's predicted label $\hat{y}$ given a feature vector x is invariant to changes in $a_p$. An initial reaction may be to simply remove $a_p$ from the feature vector x. While intuitive, this \"fairness through unawareness\" does not remove the correlations with $a_p$ that exist in the data, and so the result will still produce a biased model (Pedreshi et al., 2008). For this reason we need to devise approaches that explicitly remove the presence of $a_p$ from the model's predictions. We do so in this work by introducing a new method to train fair neural networks. Our approach, termed Gradient Reversal Against Discrimination (GRAD), makes use of a network which simultaneously attempts to predict the target class y and protected attribute $a_p$. The key is that the gradients resulting from predictions of $a_p$ are reversed before being used for weight updates. The result is a network which is capable of learning to predict the target class but effectively inhibited from being able to predict the protected attribute. GRAD displays competitive accuracy and improved fairness when compared to prior approaches. GRAD's advantage comes from its increased simplicity, making it easier to apply and applicable to a wider class of networks.
Prior works in this space are limited to one attribute (but see Zafar et al., 2017) and require the introduction of multiple hyper-parameters. These parameters must be cross-validated, making the approaches challenging to use. Further, our approach can be used to augment any current model architecture, where others have been limited to auto-encoding style architectures. 2. Gradient Reversal Against Discrimination We now present our new approach to developing neural networks that are fair with respect to some protected attribute. We call it Gradient Reversal Against Discrimination (GRAD), and it is inspired by recent work in transfer learning. Notably, Ganin et al. (2016) introduced the idea of domain adaptation by attempting to jointly predict a target label and a domain label (i.e., which domain did this data instance come from?). By treating the protected attribute as the new domain, we can use this same approach to instead prevent the network from being biased by the protected attribute $a_p$. After several feature extraction layers the network forks. One branch learns to predict the target y, while the other attempts to predict the protected attribute $a_p$. We term the portion of the network before the splitting point the \"trunk,\" and those portions after the \"target branch\" and the \"attribute branch.\" The final loss of the network is the sum of the losses of both branches, giving $\ell(y, a_p) = \ell_t(y) + \lambda \cdot \ell_p(a_p)$. Here, $\lambda$ determines the relative importance of fairness compared to accuracy. In practice, we find that performance is insensitive to particular choices of $\lambda$, and any value of $\lambda \in [50, 2000]$ would perform equivalently. In our experiments we will use $\lambda = 100$ without any kind of hyper-parameter optimization. [Figure 1. Diagram of GRAD architecture: raw input x feeds a feature-extraction trunk, which forks into a target branch with loss $\ell_t(y)$ and an attribute branch with loss $\lambda \cdot \ell_p(a_p)$; the attribute branch's gradient $-\partial \lambda \ell_p(a_p) / \partial \theta$ is reversed before reaching the trunk. The red connection indicates normal forward propagation, but back-propagation will reverse the signs.] The values of both $\ell_t(y)$ and $\ell_p(a_p)$ are calculated and used to determine gradients for weight updates as usual, with one important exception. When the gradients have been back-propagated from the attribute branch they are reversed (i.e., multiplied by $-1$) before being applied to the trunk. This moves the trunk's parameters away from optima in predictions of $a_p$, crippling the ability to correctly output the protected attribute. Since the target branch also depends on the trunk parameters, it inherits this inability to accurately output the value of the protected attribute. No such reversal is applied to the gradients derived from y, so the network's internal state representations are suitable for predicting y but nescient of $a_p$. It is instructive to consider why it may be insufficient to set up a loss function which directly punishes the network for correctly predicting $a_p$. If this were the case, the network could achieve low loss by forming internal representations which are very good at predicting the protected attribute, and then \"throw the game\" by simply reversing the correct prediction in the penultimate layer.
(That is, a potential, reliable strategy for getting the wrong answer is to become very good at getting the right answer, and then lying about what one thinks the answer should be.) If this strategy is adopted then the representations necessary for correctly recovering $a_p$ from x would be available to the target branch when making its prediction of y, which is the situation we aim to prevent. Architecture Variants As mentioned above, many of the other neural approaches to fair classification take an auto-encoder or representation learning approach. This approach has its advantages. For instance, it allows the person constructing the fair model to be agnostic about the ultimate task that it will be applied to. Others like ALFR consider a target value directly, and so cannot be re-used for other tasks, but may perform better in practice on the specific problem they were constructed for. Our GRAD approach, thanks to its comparative simplicity, can be used in both formulations. This makes it the only neural network-based approach to fairness that offers both task flexibility and specificity. GRAD-Auto will designate our approach when using an auto-encoder as the target branch's loss. That is, if x is the input feature, $\tilde{x}$ will be the feature vector derived from x such that the protected attribute $a_p \notin \tilde{x}$. We then use $\ell_t^{\text{Auto}}(\cdot) = \|h_{\text{target}} - \tilde{x}\|_2^2$ as the loss function for the target branch, where $h_{\text{target}}$ is the activation vector from the last layer of the target branch. This approach is in the same style as LFR and VFA, where a hidden representation invariant to $a_p$ is learned, and then Logistic Regression is used on the outputs from the trunk sub-network to perform classification. GRAD-Pred will designate our task-specific approach, where we use the labels $y_i$ directly. Here we simply use the standard logistic loss $\ell_t^{\text{Pred}}(\cdot) = \log(1 + \exp(-y \cdot h_{\text{target}}))$. In this case the target branch of the network will produce a single activation, and the target branch output itself is used as the classifier directly. Since we are dealing with binary protected attributes, both GRAD-Auto and GRAD-Pred will have the attribute branch of the network use $\ell_p(a_p) = \log(1 + \exp(-a_p \cdot h_{\text{attribute}}))$. In the spirit of minimizing the effort needed by the practitioner, we do not perform any hyper-parameter search for the network architecture either. Implemented in Chainer (Tokui et al., 2015), we use two fully-connected layers for every branch of the network (trunk, target, & attribute) where all hidden layers have 40 neurons. Each layer will use batch-normalization followed by the ReLU activation function. Training is done using the Adam optimizer for gradient descent. We emphasize that the heart of GRAD is the inclusion of the attribute branch with reversed gradient; this technique is flexible enough to be used regardless of the particular choices of layer types, sizes, etc. We train each model for 50 epochs, and use a validation set to select the model from the best epoch. We define best as the model having the lowest Discrimination (see §3.1) on the validation set, breaking ties by selecting the model with the highest accuracy. When multiple attributes are protected, we use the lowest average Discrimination. A minimal sketch of the reversal layer appears below.
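The paper implements GRAD in Chainer; the sketch below illustrates the same gradient-reversal idea in PyTorch purely as an assumption-laden example, with layer sizes matching the described architecture.

```python
# Hedged PyTorch sketch of a gradient reversal layer: identity on the
# forward pass, multiplies gradients by -lambda on the backward pass.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the trunk;
        # no gradient is needed for the lambda scalar itself.
        return -ctx.lam * grad_output, None

trunk = nn.Sequential(nn.Linear(20, 40), nn.BatchNorm1d(40), nn.ReLU(),
                      nn.Linear(40, 40), nn.BatchNorm1d(40), nn.ReLU())
target_branch = nn.Sequential(nn.Linear(40, 40), nn.ReLU(), nn.Linear(40, 1))
attr_branch = nn.Sequential(nn.Linear(40, 40), nn.ReLU(), nn.Linear(40, 1))

x = torch.randn(8, 20)                             # toy batch
h = trunk(x)
y_hat = target_branch(h)                           # prediction of the target y
a_hat = attr_branch(GradReverse.apply(h, 100.0))   # lambda = 100, as in the paper
```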
3. Methodology There is currently ongoing debate about what it means for a machine learning model to be fair. We choose to use the same evaluation procedure laid out by Zemel et al. (2013). This makes our results comparable with a larger body of work, as their approach and metrics have been widely used throughout the literature (e.g., Landeiro & Culotta, 2016; Bechavod & Ligett, 2017; Dwork et al., 2017). We use the same evaluation procedure and metrics: Discrimination, Consistency, Delta, and Accuracy. 3.1. Metrics Given a dataset $\{x_1, \ldots, x_n\} \in D$, we define the ground-truth label for the i-th datum as $y_i$ and the model's prediction as $\hat{y}_i$. Each is with respect to the binary target label $y \in \{0, 1\}$. While we define both $y_i$ and $\hat{y}_i$, we emphasize that only the predicted label $\hat{y}_i$ is used in the fairness metrics. This is because fairness is a matter of equality of treatment, not of accuracy. Discrimination is a macro-level measure of \"group\" fairness, computed by taking the difference between the average predicted scores for each attribute value, assuming $a_p$ is a binary attribute:

$$\text{Discrimination} = \left| \frac{\sum_{x_i \in T_{a_p}} \hat{y}_i}{|T_{a_p}|} - \frac{\sum_{x_i \in T_{\neg a_p}} \hat{y}_i}{|T_{\neg a_p}|} \right| \quad (1)$$

The second metric is Consistency, which is a micro-level measure of \"individual\" fairness. For each $x_i \in D$, we compare its prediction $\hat{y}_i$ with the average of its k nearest neighbors, and take the average of this score across D:

$$\text{Consistency} = 1 - \frac{1}{N} \sum_{i=1}^{N} \left| \hat{y}_i - \frac{1}{k} \sum_{j \in k\text{-NN}(x_i)} \hat{y}_j \right| \quad (2)$$

Because Consistency and Discrimination are independent of the actual accuracy of the method used, we also consider Delta = Accuracy $-$ Discrimination. This gives a combined measure of an algorithm's accuracy that penalizes it for biased predictions. We use these metrics in the same manner and on the same datasets as laid out in Zemel et al. (2013) so that we can compare our results with prior work. This includes using the same training, validation, and testing splits. When training our GRAD approaches, we perform 50 epochs of training, and select the model to use from the validation performance. Specifically, we choose the epoch that had the lowest discrimination and break ties by selecting the highest accuracy. 3.2. Models Evaluated As a baseline for comparison against GRAD-Pred and GRAD-Auto, we will consider the same architecture but with the attribute branch removed. This produces a standard neural network, and will be denoted as NN. For comparison with other fairness-seeking neural network algorithms, we present prior results for the Learning Fair Representations (LFR) (Zemel et al., 2013), Variational Fair Autoencoder (VFA) (Louizos et al., 2016), and Adversarially Learned Fair Representations (ALFR) (Edwards & Storkey, 2016) approaches. For all models on all datasets, we report the metrics as presented in their original publications, as we were unable to replicate VFA and ALFR's results. A minimal sketch of the two fairness metrics appears below.
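A minimal sketch of the two fairness metrics, assuming binary predictions in a NumPy array and Euclidean nearest neighbors; the function and variable names are illustrative.

```python
# Hedged sketch: compute Discrimination (Eq. 1) and Consistency (Eq. 2).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def discrimination(y_pred: np.ndarray, a_p: np.ndarray) -> float:
    """|mean prediction where a_p == 1  -  mean prediction where a_p == 0|."""
    return abs(y_pred[a_p == 1].mean() - y_pred[a_p == 0].mean())

def consistency(y_pred: np.ndarray, X: np.ndarray, k: int = 5) -> float:
    """1 - average |y_hat_i - mean of its k nearest neighbors' y_hat|."""
    # k + 1 neighbors are queried because each point matches itself first.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neighbor_means = y_pred[idx[:, 1:]].mean(axis=1)   # drop the self-match
    return 1.0 - np.abs(y_pred - neighbor_means).mean()

rng = np.random.default_rng(0)
X = rng.random((100, 4))
y_pred = rng.integers(0, 2, 100).astype(float)
a = rng.integers(0, 2, 100)
print(discrimination(y_pred, a), consistency(y_pred, X))
```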
3.2. Models Evaluated

As a baseline for comparison against GRAD-Pred and GRAD-Auto, we consider the same architecture but with the attribute branch removed. This produces a standard neural network, and will be denoted as NN. For comparison with other fairness-seeking neural network algorithms, we present prior results for the Learning Fair Representations (LFR) (Zemel et al., 2013), Variational Fair Autoencoder (VFAE) (Louizos et al., 2016), and Adversarial Learned Fair Representations (ALFR) (Edwards & Storkey, 2016) approaches. For all models on all datasets, we report the metrics as presented in their original publications, as we were unable to replicate VFAE and ALFR's results.

4. Results

The results are given in Table 1. For values unreported in their original work, we show a dash ("—") in the table. Our GRAD approach is shown in the top rows. The bottom three rows include the other approaches as explained in subsection 3.2. When we compare the standard neural network (NN) with its GRAD counterpart, we can see that the GRAD approach always increases the Delta and Consistency scores, and reduces the Discrimination. This shows its applicability across network types (classifying and auto-encoding). We can even see the GRAD approach improve accuracy on the Adult dataset by 5 percentage points. While we would not expect this behavior (i.e., a negative cost of fairness) in the general case, it is nonetheless interesting, and it may indicate that the protected attribute allows overfitting.

Comparing the GRAD algorithms to the other neural networks LFR, VFAE, and ALFR, we see that GRAD is usually best or second best in each metric. On both the German and Adult datasets, it achieves the best Discrimination and Consistency scores compared to any of the algorithms tested. On the German dataset VFAE obtains a higher Delta score by having a high accuracy, though VFAE has 4% discrimination compared to GRAD-Pred's 0.06%. On the Health dataset, GRAD-Auto and GRAD-Pred have near-identical results. This is overall significantly better than the LFR approach, which has an 11 percentage point difference in Accuracy and Delta scores compared to the GRAD approaches. The VFAE algorithm is similarly within a fractional distance, though Consistency is not reported for VFAE. GRAD consistently produces the highest Consistency. On the Adult dataset, where VFAE and ALFR get better accuracy, it may have come at a cost of lower Consistency. This couldn't be confirmed since we could not replicate their results.

Table 1. For each dataset we show Accuracy, Delta, Discrimination, and Consistency. Best results shown in bold, second best in italics.

             German                          Adult                           Health
Algorithm    Acc     Delta   Discr   Cons    Acc     Delta   Discr   Cons    Acc     Delta   Discr   Cons
NN-Auto      0.7350  0.5334  0.2016  0.8730  0.7635  0.7191  0.0444  0.9850  0.8506  0.7939  0.0567  0.9730
GRAD-Auto    0.6750  0.6296  0.0454  0.8705  0.7554  0.7452  0.0102  0.9924  0.8491  0.8491  0.0000  1.0000
NN-Pred      0.7500  0.3637  0.3863  0.6945  0.7022  0.6268  0.0754  0.8168  0.8440  0.7511  0.0929  0.9453
GRAD-Pred    0.6750  0.6744  0.0006  0.9705  0.7543  0.7543  0.0000  1.0000  0.8493  0.8486  0.0007  0.9999
LFR          0.5909  0.5867  0.0042  0.9408  0.7023  0.7018  0.0006  0.8108  0.7365  0.7365  0.0000  1.0000
VFAE         0.7270  0.6840  0.0430  —       0.8129  0.7421  0.0708  —       0.8490  0.8490  0.0000  —
ALFR         —       —       —       —       0.8251  0.8241  0.0010  —       —       —       —       —

4.1. Multiple Protected Attributes

In almost all prior works that we are aware of, it is assumed that there is only one attribute that needs to be protected. However, this is a myopic view of the world. All of the protected attributes that have been tested individually in this work, like age, race, and gender, may co-occur and interact with each other. We show this in Table 2 using the Diabetes dataset used in Edwards & Storkey (2016), which has both Race and Gender as features in the corpus. In this case GRAD-Pred and GRAD-Auto protect both the Race and Gender attributes. GRAD-Pred-R shows the results for protecting only Race, and GRAD-Pred-G for protecting only Gender. GRAD-Auto follows the same convention. Since Discrimination is computed with respect to specific attributes, in the table we show the discrimination scores with respect to both of the protected attributes.

Table 2. Accuracy, Delta, Discrimination (with respect to Race and Gender), and Consistency for our new method on the Diabetes dataset. The last four rows show GRAD models when only Race (R) or Gender (G) is protected.
                               Discrimination
Algorithm      Acc     Delta   Race    Gender  Cons
NN-Auto        0.5735  0.5392  0.0412  0.0275  0.6411
GRAD-Auto      0.5765  0.5723  0.0055  0.0030  0.6288
NN-Pred        0.6286  0.5848  0.0418  0.0458  0.6464
GRAD-Pred      0.5980  0.5949  0.0028  0.0034  0.7180
GRAD-Auto-R    0.5851  0.5749  0.0003  0.0201  0.6404
GRAD-Auto-G    0.5640  0.5143  0.0981  0.0013  0.6093
GRAD-Pred-R    0.5844  0.5478  0.0020  0.0713  0.7538
GRAD-Pred-G    0.5941  0.5526  0.0785  0.0045  0.6849

Since we have two protected attributes a_p1 and a_p2, we compute Delta = Accuracy − (Discrimination(a_p1) + Discrimination(a_p2))/2. In doing so, we can see that when two protected variables are present, the GRAD approach is able to reduce Discrimination and increase Delta for both the autoencoder and the standard softmax predictive network. GRAD-Pred also continues to increase the Consistency with respect to the naive neural network.

Comparing GRAD-Pred with GRAD-Pred-R and GRAD-Pred-G is also critical to show that protecting both attributes simultaneously provides a significant benefit. On the Diabetes data, we see the model increase its discrimination with respect to Gender when only Race is protected. Similarly, when we protect Gender, discrimination with respect to Race increases. Explicitly protecting both is the only safe way to reduce discrimination on both. The model shifting to leverage other protected features is not surprising. When we penalize a feature which provides information, the model must attempt to recover discriminative information in other (potentially non-linear) forms from the other features. Thus the importance and utility of GRAD to protect both simultaneously is established.

4.2. Robustness to λ

We have discussed so far that a benefit of the GRAD approach is its simplicity in application, due to having only one hyper-parameter λ. We now show that this value λ is largely robust to the value used. In Figure 2 we plot the Accuracy, Discrimination, and Consistency as a function of λ for values in the range [1, 2000], which shows GRAD's consistent behavior.

[Figure 2. Plots show the performance of GRAD-Pred as a function of λ on the x-axis (log scale), for the Adult Income and Heritage Health datasets, with curves for Accuracy, Discrimination, and Consistency. A dashed vertical black line shows the value λ = 100 used in all experiments.]

On the Adult dataset, we see results stabilize after λ ≥ 10. The Health dataset looks flat through the entire plot since the variation is on the order of 10^−3, making it indiscernible. Only the Adult and Health plots are shown due to space limitations. The Diabetes plot is similar, and the German dataset has more variability due to its small size (n = 1000). 5." + }, + { + "url": "http://arxiv.org/abs/1804.00069v2", + "title": "Engineering a Simplified 0-Bit Consistent Weighted Sampling", + "abstract": "The Min-Hashing approach to sketching has become an important tool in data\nanalysis, information retrieval, and classification. To apply it to real-valued\ndatasets, the ICWS algorithm has become a seminal approach that is widely used,\nand provides state-of-the-art performance for this problem space. However, ICWS\nsuffers a computational burden as the sketch size K increases.
We develop a new\nSimplified approach to the ICWS algorithm, that enables us to obtain over 20x\nspeedups compared to the standard algorithm. The veracity of our approach is\ndemonstrated empirically on multiple datasets and scenarios, showing that our\nnew Simplified CWS obtains the same quality of results while being an order of\nmagnitude faster.", + "authors": "Edward Raff, Jared Sylvester, Charles Nicholas", + "published": "2018-03-30", + "updated": "2018-10-23", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.DS", + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION The well known Jaccard similarity provides a valid kernel for measuring the similarity between sets. Given one set S and a second set O, it simply returns the ratio of their intersection over their union, J(S,O) = |S\u2229O | |S\u222aO | . Seminal work by Broder introduced the min-hashing idea, allowing J(S,O) to be computed accurately and efficiently by keeping only sketches of each set S and O, where a sketch is a sub-set of the original sets [1, 2, 5]. Min-Hashing has been used for effective and fast personalized recommendations algorithms [4], near-duplicate detection for web-pages [14] and images [3], malware clustering and classification [9, 16], and general information retrieval problems [20]. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. CIKM \u201918, October 22\u201326, 2018, Torino, Italy \u00a9 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-6014-2/18/10...$15.00 https://doi.org/10.1145/3269206.3271690 Given a min-hash function Minhash, we can increase or decrease the sketch size K to increase accuracy of the approximation or decrease the storage cost and compute time of the sketch. Algorithm 1 demonstrates its operation, and is used by all min-hashing algorithms. This works because the probability of two sets producing the same min-hash for a given seed (k) is equal to the Jaccard similarity itself (i.e., \u2200k, P(Minhash(S,k) = Minhash(O,k)) = J(S,O)). Min-hashing can thus be seen as a sampling method to compute the similarity. Since the required min-hashes can be computed once per set, and require only an equality check, they are often faster to use in practice, especially for large systems. Algorithm 1 MinHash Approximation Require: Two sets S and O that we want to compute the similarity of. 1: s \u21900 2: for k \u22081, 2, . . . ,K do 3: if Minhash(S,k) = Minhash(O,k) then 4: s \u2190s + 1 5: end if 6: end for 7: return s/K In this work, we are interested in applications in which items from the set are weighted. That is to say, given an entry z \u2208S, we associate with z a positive value given by w(S,z). We can then say that \u2200z \u2208S,w(S,z) > 0. If a value q < S, then w(S,q) = 0. The Weighted Jaccard Similarity (WJS) (1), also known as the min-max kernel, is the generalization of the J(S,O) to this use case, and W JS(S,O) = J(S,O) when all weights are equal. 
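Before turning to the formal definition, a minimal sketch of the weighted Jaccard similarity and the Algorithm 1 estimator may be useful. This is our own illustration, not the authors' code; `minhash(S, k)` stands in for any CWS scheme (ICWS, 0-bit CWS, or the SCWS developed later).

```python
def weighted_jaccard(S, O):
    """Exact WJS of two dicts mapping item z -> positive weight w(., z)."""
    keys = S.keys() | O.keys()
    num = sum(min(S.get(z, 0.0), O.get(z, 0.0)) for z in keys)
    den = sum(max(S.get(z, 0.0), O.get(z, 0.0)) for z in keys)
    return num / den

def estimate_wjs(S, O, minhash, K):
    """Algorithm 1: the fraction of the K min-hashes that collide."""
    return sum(minhash(S, k) == minhash(O, k) for k in range(K)) / K
```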
While computing min-hashes for approximating the Jaccard similarity can be done in time O(D), where D is the number of items in the set [1, 2], approximating the WJS requires a more expensive O(DK) time per set. Reducing the compute time required to construct these hashes is the focus of this work. We obtain constant-time speedups of over 20x by mathematically simplifying the current approaches to sketching the WJS, and then exploiting this simplicity to produce a simple and compact approximation.

\[ \mathrm{WJS}(S, O) = \frac{\sum_{\forall z \in S \cup O} \min\left(w(S,z),\, w(O,z)\right)}{\sum_{\forall z \in S \cup O} \max\left(w(S,z),\, w(O,z)\right)} \quad (1) \]

Manasse et al. [13] proposed the Consistent Weighted Sampling (CWS) algorithm for the WJS problem. CWS produces a sketch of K hashes directly from the weighted samples in the set. Each sample in the sketch has a probability of collision with a sample from another set equal to the WJS, which allows the WJS's estimation by taking multiple samples. This resulted in an amortized O*(DK) time algorithm. CWS was improved upon by Ioffe [8] to produce the Improved CWS (ICWS), which requires only fixed constant time per hash, so producing K hashes for a datum with D features takes O(KD) time. ICWS is considered the state of the art for approximating the WJS, as well as the L1 distance [8]. The ICWS algorithm is presented in Algorithm 2. ICWS iterates through every item z in the set S and computes a value a_z for each feature. The minimum a_z determines the min-hash, which is returned as a tuple of two values. Both values must be the same to count as a match. The value of a_z is stochastic, which is necessary so that different entries z ∈ S will be selected for different hash indexes k ∈ K. To ensure that two different sets S and O select the same values when equal, the Pseudo Random Number Generator (PRNG) is seeded using the feature z and hash index k.

Algorithm 2 ICWS
1: procedure Minhash(Weighted Set S, hash index k)
2:   for all z ∈ S do
3:     Seed PRNG with tuple (z, k)
4:     r_z ∼ Gamma(2, 1)
5:     c_z ∼ Gamma(2, 1)
6:     β_z ∼ Uniform(0, 1)
7:     t_z ← ⌊log w(S,z) / r_z + β_z⌋
8:     y_z ← exp(r_z (t_z − β_z))
9:     a_z ← c_z / (y_z exp(r_z))
10:  end for
11:  z* ← arg min_z a_z
12:  y* ← y_{z*}
13:  return tuple (z*, t_{z*})
14: end procedure

While effective, the ICWS algorithm's O(KD) cost can make it prohibitively expensive when large hash sizes K are necessary, or when the number of features D is large. For most applications, values of K in the hundreds or thousands are routinely necessary [19], and large values of D are common for the information retrieval problems ICWS is often applied to [2, 18]. For this reason a number of works have looked at improving the runtime efficiency of the ICWS algorithm. Assuming all values are stored in 32-bit floats and integers,1 the ICWS algorithm requires 2K memory for the sketch, and five random values sampled from the uniform distribution. This value of 5 comes from the two Gamma-distributed values r_z and c_z, which are each computed as x = −log u_1 − log u_2, where u_1, u_2 ∼ Uniform(0, 1). Thus five uniform-random numbers need to be generated at each step, which has a significant cost [21]. In addition to these memory and sampling requirements, a non-trivial amount of expensive floating point operations are necessary. (A direct transcription of this inner loop is sketched below.)
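The following is our own illustrative Python transcription of Algorithm 2, not the authors' implementation. `hash((z, k))` stands in for a deterministic per-(z, k) seeding scheme; note that Python's built-in hashing of strings is salted per process, so a real implementation would use a stable hash.

```python
import math, random

def icws_minhash(S, k):
    """Direct sketch of Algorithm 2. S maps item z -> weight w(S, z) > 0.
    Seeding the PRNG from (z, k) is what makes the draws 'consistent'
    across different sets."""
    best, best_az = None, math.inf
    for z, w in S.items():
        rng = random.Random(hash((z, k)))              # line 3: seed with (z, k)
        r_z = rng.gammavariate(2, 1)                   # line 4
        c_z = rng.gammavariate(2, 1)                   # line 5
        beta_z = rng.random()                          # line 6
        t_z = math.floor(math.log(w) / r_z + beta_z)   # line 7
        y_z = math.exp(r_z * (t_z - beta_z))           # line 8
        a_z = c_z / (y_z * math.exp(r_z))              # line 9
        if a_z < best_az:                              # lines 11-13
            best, best_az = (z, t_z), a_z
    return best
```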
Lines 3 through 9 of Algorithm 2 requires five logarithms, two exponentiations, and four multiplications/divisions. We count all of these operations as they tend to be the most expensive to perform, and often overshadow the cost of floating point additions/subtractions or basic integer arithmetic. In this work, we will introduce a simplified variant of the ICWS algorithm that improves hash creation time by an order of magnitude. 1The value tz\u2217is technically unbounded in size, but in practice would rarely exceed a 32 bit integer value. Our Simplified CWS strategy continues to work in all scenarios that ICWS does, such as when not all features are known in advance, or there is no cap on potential feature value magnitude. Further, existing techniques to improve ICWS\u2019s runtime are compatible with our approach. Our improved runtime comes from reducing the work per feature and hash (lines 3-9) down to just one floating-point multiplication. This simplification is inspired by prior work that allows us to return a sketch requiring only K memory units, which we will review with other related work in section 2. We will then derive our new Simplified CWS algorithm in section 3, and provide extensive empirical evidence of its quality and efficiency in section 4. Finally, we will conclude in section 5. 2 RELATED WORK The original Confidence Weighted Sampling algorithm was introduced by Manasse et al. [13], providing the first direct sketching method for the Weighted Jaccard Similarity. Earlier work required reducing the WJS problem to the standard un-weighted Jaccard, but this results in an explosion in the feature set size and is unwieldy in practice [7]. The CWS algorithm was quickly improved upon by Ioffe [8], denoted as the seminal ICWS algorithm. This approach works with arbitrary non-negative weighted sets as inputs and requires no communication between points (i.e., every datum can be hashed independently of any other information). However, the computational burden of this approach is still non-trivial. For this reason Ioffe also proposed to reduce the set of each input to only the 200 most frequently selected features. This approach was shown to work well heuristically and gain a speedup of 150x. However this approach only allows a speedup at inference time, as the dataset must first be hashed (or \u201csketched\u201d) to determine the most frequently selected features. This necessarily re-introduces communication costs. In addition, the number of most frequently selected features that need to be kept will be problem dependent. We note that this inference speedup strategy is compatible with most ICWS extensions, including the one presented in this work. A number of other works have attempted to remedy the computational constraints of the ICWS algorithm in various ways. Most of these works evaluate only one or two scenarios: classification performance, nearest-neighbor precision, or similarity bias. We evaluate all of these scenarios, which will be detailed further in section 4. Li [10] improved upon ICWS\u2019s memory requirements for storage by introducing the 0-bit CWS strategy. The normal CWS sketch contains a sequence of tuples (z\u2217,tz\u2217), and both values must be equal to consider the pair a match. Li\u2019s insight was that if z\u2217match or don\u2019t match between two sketches, it is most probable that tz\u2217 will similarly either match or not. Thus the tz\u2217can be dropped, while maintain the same fidelity as ICWS in practice. 
This does not meaningfully improve the runtime of ICWS in most cases, but does reduce the memory needed for storage by half. We refer to this approach as ICWS-0Bit, and it is the inspiration for our improvements. Li\u2019s evaluation was done for classification and word bias, two of the three scenarios we evaluate. Yang et al. [22] looked at leveraging the ICWS-0Bit algorithm. In their work they exploited its structure to create min-hashes over \fEngineering a Simplified 0-Bit Consistent Weighted Sampling CIKM \u201918, October 22\u201326, 2018, Torino, Italy streaming inputs. In this case the total weight coefficients for each histogram input are altered over time. In doing so they develop a related algorithm that can efficiently update a min-hash in O(K+D), but still require O(KD) for initial hash construction. This allows their approach to be orders of magnitude faster to update over re-building the hash over time. We will use a similar approach of simplifying ICWS-0Bit to reach our goals, which is faster initial construction of a min-hash. Further work has been done to carefully analyze the ICWS algorithm, and remove redundant steps to reduce the computational requirements while maintaining a mathematically equivalent algorithm. Wu et al. [21] showed that ICWS could be reduced to requiring only four uniform-random samples from a Pseudorandom number generator (PRNG) instead of five, four logarithms/exponentiations, and five floating point multiplications/divisions per feature and hash. Evaluating on classification and nearest neighbor precision (two of the three scenarios we evaluate), they found this reduced runtime by 20\u201333%, depending on the dataset, and noted the importance of reducing the number of PRNG calls is especially critical as the dataset size increases. In this work, we reduce the cost to just one floating point multiply, require no PRNG calls, and obtain a minimum speedup of over 7x, and up to 28x, dramatically improving upon recent results. One of the more novel approaches to the WJS problem was presented by Shrivastava [18], who developed a new approach that was not based on the ICWS or CWS algorithms. Their approach\u2019s runtime is dependent on the average similarity between points, as well as the largest maximum magnitude per feature. For this reason communication is needed to determine the maximum possible feature value of all possible features before the algorithm can start. This limits their approach\u2019s applicability to scenarios with bounded magnitudes and where all features are known up-front. For example, recent work in malware detection wouldn\u2019t be able to make use of this approach [17]. When applicable, Shrivastava [18] showed 1500x\u20136000x speedup for some datasets. Shrivastava also concluded that different approaches to the WJS sketching problem work best for different data sets, and that ICWS should still be preferred when the number of non-zero features is of a similar size as the sketch size K. Shrivastava evaluates only estimation bias, which is one of our three scenarios. 3 SIMPLIFIED CWS, 0-BITS WITH ONE FLOP Now that we have reviewed the literature on the Improved Confidence Weighted Sampling algorithm, we show how we can simplify ICWS by extending the reasoning of previous work. In doing so we can construct an implementation of our Simplified ICWS that will require minimal compute time while avoiding expensive PRNG sampling. 
3.1 Simplifying the ICWS Algorithm Li [10] showed that the tz\u2217term in the minhash tuple was not strictly necessary to obtain a high quality approximation of the Weighted Jaccard Similarity. By removing this value from the hash, the size of the hash is reduced by half. Because this uses \u201czero bits\u201d of the tz\u2217portion of the hash, it was termed the 0-Bit CWS. This leaves only the selected feature index z\u2217as the value from the hash itself. This is made possible by the fact that the selected feature index z\u2217is selected from the z with minimum az value, and thus already has information regarding both tz and the feature\u2019s weight. Our contribution is the realization that if we are using this information, we can relax the procedure given by Ioffe [8] for the ICWS algorithm\u2019s consistency property, and thus, the algorithm\u2019s implementation. Given a fixed z, \u03b2z and cz, the consistency property is shown using the fact that tk\u2217is a unique integer satisfying the bounds logw(S,z\u2217) rz\u2217 +\u03b2z\u2217\u22121 < tz\u2217\u2264logw(S,z\u2217) rz\u2217 +\u03b2z\u2217. Because we do not keep or use the tz\u2217value, there is no practical need to maintain this bound. Thus we propose to remove the floor function used to compute this value, changing it simply to tz \u2190logw(S,z) rz + \u03b2z. This change allows us to propagate several simplifications forward through the ICWS algorithm. First, note that we get yz = exp \u0012 rz \u0012 logw(S,z) rz + \u03b2z \u2212\u03b2z \u0013\u0013 = exp \u0012 rz \u0012 logw(S,z) rz \u0013\u0013 = exp (logw(S,z)) = w(S,z) This allows the immediate removal of one random variable \u03b2z, and the substituting of w(S,z) for yz to obtain az = cz w(S,z) exp(rz). With some simple algebra we can re-write this term as az = w(S,z)\u22121cz \u00b7 exp(\u2212rz). This reduces the mathematical operations from two exponentiations, a logarithm, and four multiplications/divisions to just one exponentiation and two multiplications/divisions. However, four samples from the uniform distribution and an additional four logarithms are still needed to produce cz and rz. Some minor approximations and reductions can allow us to remove an additional exponentiation and a uniform random sample, at the cost of only one additional multiplication (which is less expensive). The uniformity property for ICWS was shown by determining that the probability of selecting az is equal to w(S,z)\u22121 \u00cd j w(S, j) [8]. Given that cz exp(\u2212rz) is a value fixed for all sets S, we can show that we maintain this uniformity property. This can be seen by noting that az is scaled at a rate of w(S,z)\u22121 by definition. Since we select the minimum value of az, it corresponds with the maximum value of yz, which as we have just shown, is w(S,z). The Gammabased terms are independent and so can be marginalized out, leaving the probability of selecting a feature w(S,i) as w(S,i)/\u00cd j w(S, j). Thus we maintain the ICWS algorithm\u2019s uniformity property without issue. Because we have altered one of the the values used in the sampling process, our expectation is that this new simplified approach will not match the exact behavior of ICWS, where the approaches we reviewed in section 2 do. But since the tz\u2217was not required, we do expect our new approach to provide similar accuracy and performance in machine learning and information retrieval applications. 
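Concretely, the simplified 0-bit hash can be sketched as below. This is our own illustrative transcription under the same hash-based seeding caveat as before; with the floor removed, y_z = w(S,z) and the per-feature work collapses to a_z = w(S,z)^−1 · c_z · exp(−r_z).

```python
import math, random

def simplified_minhash(S, k):
    """Sketch of the simplified 0-bit hash: a_z = w(S,z)^-1 * c_z * exp(-r_z)."""
    best, best_az = None, math.inf
    for z, w in S.items():
        rng = random.Random(hash((z, k)))
        r_z = rng.gammavariate(2, 1)
        c_z = rng.gammavariate(2, 1)
        a_z = (1.0 / w) * c_z * math.exp(-r_z)
        if a_z < best_az:
            best, best_az = z, a_z
    return best  # 0-bit style: only the selected feature index z* is kept
```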
At first glance, these simplifications may appear to provide little more than what was obtained in prior work that reduced the number of operations in the ICWS algorithm [21]. However, We will show below that our simplification allows for considerable exploitation of the new form of az, allowing us to dramatically reduce the cost of producing sketches. \fCIKM \u201918, October 22\u201326, 2018, Torino, Italy Edward Raff, Jared Sylvester, and Charles Nicholas 3.2 Exploiting Simplicity for Efficiency Now that we have simplified the algorithm, we take critical note that the new az definition has the w(S,z) term entirely separated from the functions involving the random Gamma samples cz and rz. Rather than sample these values as we observe each feature, for each minhash k, we can pre-sample a pool of values from the distribution defined by the cz exp(\u2212rz) term. We can use a higher quality PRNG for this step, and the pool need only be sampled once for the entirety of the application. This pooling strategy is only possible because we have removed the coefficient value w(S,z)\u22121 from the computation the distribution of the az. In ICWS, these values are intertwined and so prevent pre-sampling the resulting distribution. When iterating over the features, we can select the pre-sampled value from the pool by using a much simpler Linear Congruential Generator (LCG) style PRNG on the feature index z combined with the minhash index k. This entire procedure can be found in Algorithm 3, and reduces the ICWS algorithm down to only one floating point multiply per feature and hash. Because we have now dramatically reduced the number of FLOPs it is worth noting what other, less costly, operations are being done. This includes one integer multiplication, one integer modulo operation, and a random-access lookup. Algorithm 3 Simplified CWS (SCWS) Require: An array T of length |T |, where T[i] \u223ccz exp(\u2212rz), and large primes p1 and p2 1: procedure Minhash(Weighted Set S, hash index k) 2: b \u2190kp2 3: for all z \u2208S do 4: \u03b3 \u2190(zp1 + b) mod |T | \u25b7LCG style index selection 5: az \u2190w(S,z)\u22121 \u00b7 T[\u03b3] \u25b7The only FLOP needed 6: end for 7: z\u2217\u2190arg minz az 8: return z\u2217 9: end procedure In our implementation, we choose primes p1 = 1073741827 and p2 = 1073741831. Making these values prime ensures that the modulus operation will result in an index selected uniformly from the pool\u2019s size. Because our system has a 32 KB L1 data cache size, we make the pool store 4000 floating point numbers. This ensures that the pool of values will remain in L1 cache, ensuring in turn that the random access lookup will return quickly and keep the procedure from stalling on a memory access. We will see in section 4 that despite this small pool size, we continue to get high quality results with our SCWS algorithm that closely match that of ICWS, while being an order of magnitude faster. 4 EXPERIMENTS We will now describe a number of experiments we performed to validate our new SCWS algorithm, comparing it to the original ICWS algorithm and the ICWS-0Bit algorithm we are inspired by. Because we have made a change to the ICWS algorithm to simplify it as a whole, we will see that our new approach does not closely mimic the original ICWS behavior like ICWS-0Bit does. Instead we gain a significant speed advantage over both ICWS and ICWS0Bit, while having qualitatively similar results. 
We will empirically demonstrate that SCWS: 1) continues to return an accurate estimate of the WJS between two points, 2) allows one to efficiently build classifiers using feature hashing, and 3) continues to provide good precision in selecting the true nearest neighbors under the WJS. We conclude our experiments with a test of the pool size to demonstrate that it need not be tuned to any particular problem, and the default size of 4000 is at or past the point of diminishing returns. All code was implemented in Java using the JSAT library [15]. 4.1 Accurate WJS via Word Similarity One test for the quality of a CWS scheme was proposed in Li and K\u00f6nig [11], where the bias and variance of their approaches were compared using the word document frequencies as the sets. We replicate their approach because it provides a more challenging case for our algorithms due to a heavier tail in the distribution of words. This means the weights for each word in a given document will have a greater variability, and thus better exercise the WJS properties than many common datasets. To make our protocol reproducible, we specify that we use the 20 News-groups corpus as our collection of documents. We use a simple tokenization on non-alphabetic characters and convert everything to lower-case. Each row corresponds to a document, and each column to a specific word in the corpus. The values in the matrix indicating the number of occurrences of a word in a given document. Our feature vectors for each word are then the columns of the generated data matrix. For each word-pair, we record the true WJS similarity, and plot the average difference between WJS and each of our CWS algorithms, as we increase the sketch size k from 1 up to 1000. These results can be found in Figure 1. As expected, we see the average difference between WJS and the CWS varieties approach zero as the value of k increases. Each experiment was run 1000 times to obtain a high precision estimate. Because a sketch of size K also contains a sketch of size K \u22121, we exploit this to provide a point-estimate across all values of K while keeping experimental evaluation time reasonable.2 Word pairs were selected to try to cover a diversity of scores and behaviors. In each case, we can see that the ICWS-0Bit algorithm almost perfectly follows that of the original ICWS, with some exceptions, like the \u201csubsidies-settlements\u201d pairing. SCWS clearly does not track the ICWS in exact behavior, but shows the same general characteristics. In some pairings, such as \u201cUnited-States\u201d, all three track together usually, the trackings are close even if SCWS is slightly more or less accurate, such as \u201cIBM-PC\u201d and \u201cHongKong\u201d. Other cases, like \u201cCar-Bike\u201d and \u201cSubsidies-Settlements\u201d show SCWS coming out ahead, though this is not always the case. Given these results, we can conclude that SCWS is not an unbiased estimate of the ICWS\u2019s behavior, as it frequently does not mimic it in the same way the ICWS-0Bit does. But more importantly, we can conclude that SCWS is empirically a high quality estimate of the WJS that retains the fidelity of ICWS\u2019s estimates. 2Generating a new sketch for every value of K would have resulted in an experimental runtime of several months. 
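The nested-sketch trick just described can be implemented in a single pass. In this sketch of ours, `sketch_S` and `sketch_O` are assumed to be lists of hashes for k = 1 .. K_max, and the prefix of length K is itself a valid size-K sketch:

```python
def bias_curve(sketch_S, sketch_O, true_wjs):
    """One pass over sketches of size K_max yields the WJS estimate
    (and its bias) for every prefix size K <= K_max."""
    matches, curve = 0, []
    for K, (hs, ho) in enumerate(zip(sketch_S, sketch_O), start=1):
        matches += (hs == ho)
        curve.append(matches / K - true_wjs)   # bias of the size-K estimate
    return curve
```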
[Figure 1: Plots show the difference between each CWS algorithm and the true WJS, as a function of sketch size K. Panels show the word pairs under test with their true WJS: IBM-PC (0.107), Apple-Microsoft (0.020), Car-Bike (0.025), Subsidies-Settlements (0.040), Hong-Kong (0.321), United-States (0.442). The dotted black line shows the value of zero for a perfect estimate, and our new SCWS is in red. The y-axis shows the bias of the WJS estimate provided by each CWS compared to the true WJS score. All figures share the same legend.]

The quality of results is approximately equivalent between all approaches in this test, a theme we will see in other sections as well. The other pertinent issue to the user of any CWS is how long it will take to obtain some (minimal) level of accuracy. In Figure 1 we plotted the error of the approximated WJS against the true WJS as a function of the sketch size. We can instead plot this error with respect to runtime, as shown in Figure 2.

[Figure 2: Same as Figure 1, but with the x-axis replaced with the average time to construct the sketch (in milliseconds).]

From this view of the data, our SCWS approach uniformly dominates the other curves. The "United-States" word pair is particularly illustrative. Considering just sketch size, all methods were performing at an indistinguishable level. When time becomes a factor, our SCWS sketch construction is up to 28.2 times faster to reach zero bias compared to either ICWS variant. This speed advantage is still present when we look at word pairs like "Hong-Kong", where ICWS initially has faster convergence to the true WJS. At a sketch size of K = 1000, SCWS achieved an absolute error of approximately 0.012, where ICWS achieved this same error rate at only K = 680. Even with ICWS having a smaller sketch size, SCWS at K = 1000 is 17.6 times faster to construct compared to ICWS at K = 680. This strengthens our results, showing that even when SCWS may require a larger sketch size for a particular dataset, the speed advantage can still be an order of magnitude.

4.2 Learning with SCWS

We now demonstrate that our new SCWS is effective for building binary and multi-class classifiers. Following prior work, we will compare the performance of our approach with that of a linear SVM and a kernel SVM using the exact WJS as its kernel [10], and show the performance over a range of the regularization penalty C that is common with the SVM. For each of our CWS algorithms, we will use the hashing scheme of Li et al.
[12] to create feature vectors from CWS min-hashes. We follow their approach using 8-bit hashes with K = 4096 for our sketch size, and use the features to train a linear SVM model. The combination of 8-bit hashes with K = 4096 has been found to give the best classification results, with diminished returns when increasing the hash size further [10, 12]. All feature values are re-scaled to the range [0, 1] to avoid any issues with negative weights. Since not all of the datasets we will use have a testing set, we will use the standard training set and estimate generalization with 5-fold cross validation in all cases.

We perform this evaluation using eight datasets with varying numbers of features and sparsity patterns, all of which are obtained from the LIBSVM website [6]. For each dataset we sub-sample the corpus down to 20,000 samples so that the kernel SVM will run in a reasonable amount of time. These datasets can be found in Table 1, where we also show how long it took each of the CWS algorithms to produce the feature vectors (running the CWS algorithm, and hashing the returned sketch into the 8-bit feature representation). Of particular note is the right-most column of Table 1, which shows the relative speedup of SCWS compared to the faster of ICWS and ICWS-0Bit. Across all datasets, SCWS is between 7.7 and 22.1 times faster.

Table 1: Summary of each dataset used, all sub-sampled so that N = 20,000. D indicates the dimension of the dataset, and 'Density' the percentage of non-zero values in the corpus. The middle columns give the time to hash in seconds; the right-most column shows how many times faster SCWS was compared to the faster of ICWS and ICWS-0Bit.

Dataset   D          Density  ICWS   ICWS-0Bit  SCWS  Speedup
a9a       123        11.3     186    185        9     19.9
cod-rna   8          99.8     106    107        11    9.0
covtype   54         22.1     160    160        12    13.1
MNIST     780        19.2     1,937  1,922      86    22.1
ijcnn1    22         59.1     173    175        22    7.7
w8a       300        3.9      161    162        10    15.4
RCV1      47,236     0.14     661    658        52    12.6
URL       3,231,961  0.004    1,516  1,491      105   14.6

The results of our approach can be seen in Figure 3, where the performance of our SCWS is comparable to that of ICWS and its 0-bit variant. In some cases SCWS has equal, slightly worse, or slightly superior performance compared to ICWS, depending on the value of C and the dataset under consideration. In most cases we can see the CWS algorithms outperform the linear SVM model, and often equal or outperform the accuracy of the WJS-kernelized SVM. Tests were done over a large range of regularization values with C ∈ [10^−3, 10^2]. This range proves informative across datasets, such as a9a, in which the CWS approaches match the best kernel and linear SVMs when given a strong penalty of 10^−3, but drop quickly as C increases. Similarly, all CWS variants have consistently high performance on the IJCNN corpus, and are beaten by the WJS kernel only for values of C ≥ 10. Overall, we argue that our SCWS algorithm shows a high fidelity in approximating the WJS, even if it does not mimic the exact behavior of ICWS in this case. This was predicted by our derivation in section 3, as we noted that SCWS does make a simplifying change to the original ICWS algorithm. On some datasets, such as RCV1 and MNIST, our new SCWS performs better. On datasets like URL and a9a, all CWS approaches produce about the same score. There are also datasets like Covtype and IJCNN where SCWS performs slightly worse.
In each case the standard deviation is shown as a translucent shaded region of the same color, which indicates that the variability in performance is also consistent for each approach on each dataset, even as the value of the regularization parameter C changes.

[Figure 3: Performance of linear models built from CWS algorithms (SCWS, ICWS, ICWS-0Bit), compared to a linear and kernel SVM, on the a9a, cod-rna, Covtype, MNIST, w8a, IJCNN, URL, and RCV1 datasets. The x-axis shows the regularization parameter C, and the y-axis shows the accuracy of 5-fold cross validation. Each panel is with respect to a different dataset, indicated at the top of each sub-figure. All figures share the same legend; note each figure has a different scale. Each CWS-based method has a shaded region indicating ±2σ (best viewed digitally and in color).]

We note that in the worst case our new SCWS approach was only 7.7 times faster than ICWS in creating the sketch and the 8-bit feature vectorization of Li et al. [12]. Comparing ICWS and SCWS in this way gives an advantage to ICWS, as the vectorization scheme has a similar intrinsic cost regardless of how the sketch was created. As the average number of non-zeros in the dataset increases, so does the relative advantage of our SCWS approach. For instance, on the MNIST dataset, we can see a 22-fold speedup of our new SCWS compared to ICWS. This is a dramatic improvement over prior CWS works, which were able to obtain improvements of no more than 40% in execution time [21].

4.3 Nearest Neighbor Precision

In this third test of the effectiveness of our approach, we examine the precision of each CWS algorithm in correctly returning the k-nearest neighbors (k-NN) of a point compared to the true WJS.

[Figure 4: Plots show the precision of selecting the true k-NN according to the WJS for each CWS, across the same eight datasets at precision levels Prec@1, Prec@25, Prec@100, and Prec@500. Above each panel is the dataset under test and the neighbor limit. The x-axis shows the sketch size K, and the y-axis shows the precision achieved by each CWS. All figures share the same legend and scale.]

Our experiments follow the protocol used by Wu et al. [21]. We
The goal is that the performance of our SCWS algorithm will be constant with respect to the pool size, once we reach a minimum threshold for the pool\u2019s intrinsic size. This would indicate that enlarging the pool of values results in no further improvement on the accuracy of our method. This is also critical to practical deployment of our method. If the performance was overly sensitive to the pool size, it would become a parameter that needs estimation for every new problem, requiring an expensive hyperparameter search. Because we can show that the performance is consistent once a minimum pool size is reached, this minimum size can be used for multiple problems by default, thus avoiding the additional overhead and keeping our approach practical. As we can see in Figure 5, this behavior bears out in practice. Above the black dashed line, which marks the pool size of 4000 used in all other experiments in this paper, we can see that the color is nearly-constant in the vertical direction. This means for a given regularization penalty C (or precision value \u03ba), the performance at the 4000 pool size is the same as for any larger pool size, and thus there is no need to waste additional memory increasing the pool of values. Indeed, in these plots it is clear that even a pool of 2 million values would have had no positive impact on accuracy compared to our much smaller, and more practical, 4000 limit. When one looks at the classification results in the second row (the MNIST, IJCNN, URL, and RCV1 datasets), which all have high accuracy for all values of C, we see this reflected in the contour plot. For some datasets, like MNIST, a pool size of only 512 would have been sufficient to get equivalent accuracy results. The a9a, cod-rna, and Covtype datasets had their best performance for small values of C, and again, the behavior in the contour plots matches this expectation. For a9a in particular, we obtain the same peak classification performance even when the pool has as few as 32 elements. Thus we can rely on the returned result as being accurate and representative of the CWS\u2019s performance. All three of these datasets show slight variability in their performance at higher values of C, which is what keeps the contour plots from having a perfectly uniform color. This is observable in the variance of Figure 3 as well, and so is not an issue with our pooling strategy. We note further that the performance in the most critical region of C, where the model obtains the best performance, is consistent. The precision tests show a similar pattern of consistency by the 4000 threshold. Empirically we see that we could use a smaller pool size in many cases, down to about 1,000 entries. This carries a particular importance because this pool size is four times smaller than the hash size itself. When combined with the fact that hundreds of feature values will be indexed into the pool per hash, the pool\u2019s size might seem disproportionately small relative to the number of values being accessed from it. That is to say, one\u2019s first intuition might be that the pool should be large enough such that values rarely get re-used, but in our case, we will reuse every value in the pool hundreds of times. Another positive phenomena in these results is that the region of the contour plot with the highest performance (darker-red color as the value approaches 1.0) is also the region of the plot least impacted by decreases to the pool size |T |. 
For example, on a9a SVM plot in the top left corner we see performance stabilize for all values of \u03ba at a pool size of about |T | = 2048. However, the SVM obtains its best accuracies when the regularize C \u226410\u22122, and in this region even a pool size of |T | = 32 is effective. In future work, \fEngineering a Simplified 0-Bit Consistent Weighted Sampling CIKM \u201918, October 22\u201326, 2018, Torino, Italy 10\u22123 10\u22122 10\u22121 100 101 102 101 102 103 104 105 106 C Pool Size a9a 10\u22123 10\u22122 10\u22121 100 101 102 101 102 103 104 105 106 C cod-rna 10\u22123 10\u22122 10\u22121 100 101 102 101 102 103 104 105 106 C Covtype 10\u22123 10\u22122 10\u22121 100 101 102 101 102 103 104 105 106 C w8a 0 0.2 0.4 0.6 0.8 1 10\u22123 10\u22122 10\u22121 100 101 102 101 102 103 104 105 106 C Pool Size MNIST 10\u22123 10\u22122 10\u22121 100 101 102 101 102 103 104 105 106 C IJCNN 10\u22123 10\u22122 10\u22121 100 101 102 100 101 102 103 104 105 106 C URL 10\u22123 10\u22122 10\u22121 100 101 102 100 101 102 103 104 105 106 C RCV1 0 0.2 0.4 0.6 0.8 1 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba Pool Size a9a 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba cod-rna 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba Covertype 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba w8a 0 0.2 0.4 0.6 0.8 1 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba Pool Size MNIST 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba IJCNN 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba URL 100 101 102 103 101 102 103 104 105 106 Precision@\u03ba RCV1 0 0.2 0.4 0.6 0.8 1 Figure 5: Contour plots showing the impact on pool size with respect to learning performance. The y-axis shows the size of the pool. Color shows the accuracy of the model at each point, and the x-axis is the regularization parameter C or precision rate \u03ba. First two rows are classification problems, bottom two rows are nearest neighbor retrieval. Each panel is with respect to a different dataset, which is indicated at the top of each sub-figure. All figures share the same legend. The dashed line indicates the standard pool size of 4000 used in all experiments. any parameter tunning that needs to be done after the application of SCWS could be speed-up by using an adaptive value of K. The bottom two rows of Figure 5 are the same eight datasets for the information retrieval goal of correctly returning the true k-nearest neighbors according to the WJS. The results are further improved in this case where we see almost perfect uniformity of performance with respect to the pool size for every dataset once |T | \u22654, 000. The same trends we discussed above for the classification task (first two rows) are still visible. For example, a9a and MNIST could have obtained equivalent results with a pool size of 128 and 1024 respectively. It may seem unusual that performance across multiple datasets should be satisfiable with a single pool size T. We explain this behavior by interpreting the definition of az = w(S,z)\u22121cz exp(\u2212rz) of SCWS. As was previously discussed, the probability of feature z \fCIKM \u201918, October 22\u201326, 2018, Torino, Italy Edward Raff, Jared Sylvester, and Charles Nicholas being selected is directly proportional to the feature value w(S,z), which can be extracted from this definition. 
The sampledcz exp(\u2212rz) term, which is what the pool contains samples of, then acts as a random perturbation of the ordering \u2014 where the probability of perturbation is determined by the distribution, and interacts with the feature value w(S,z)\u22121. Thus the specific value returned by cz exp(\u2212rz) becomes irrelevant; we need only enough distinct values to enable selecting the minimum az value in a manner consistent with the true distribution. In this light, we believe it is easy to understand how our approach can work while using such a small pool size. 5" + }, + { + "url": "http://arxiv.org/abs/1801.05055v1", + "title": "Toward Metric Indexes for Incremental Insertion and Querying", + "abstract": "In this work we explore the use of metric index structures, which accelerate\nnearest neighbor queries, in the scenario where we need to interleave\ninsertions and queries during deployment. This use-case is inspired by a\nreal-life need in malware analysis triage, and is surprisingly understudied.\nExisting literature tends to either focus on only final query efficiency, often\ndoes not support incremental insertion, or does not support arbitrary distance\nmetrics. We modify and improve three algorithms to support our scenario of\nincremental insertion and querying with arbitrary metrics, and evaluate them on\nmultiple datasets and distance metrics while varying the value of $k$ for the\ndesired number of nearest neighbors. In doing so we determine that our improved\nVantage-Point tree of Minimum-Variance performs best for this scenario.", + "authors": "Edward Raff, Charles Nicholas", + "published": "2018-01-12", + "updated": "2018-01-12", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.DB", + "stat.ML" + ], + "main_content": "Introduction Many applications are built on top of distance metrics and nearest neighbor queries, and have achieved better performance through the use of metric indexes. A metric index is a data structure used to answer neighbor queries that accelerates these queries by avoiding unnecessary distance computations. The indexes we will look at in this work require the use of a valid distance metric (i.e., obeys triangle inequality, symmetry, and indiscernibility) and returns exact results. Such indexes can be used to accelerate basic classi\ufb01cation and similarity search, as well as many popular clustering algorithms like k-Means (Lloyd, 1982; Kanungo et al., 2002), density based clustering algorithms like DBSCAN (Bi\u00e7ici and Yuret, 2007; Campello et al., 2013), and visualization algorithms like t-SNE (van der Maaten, 2014; Maaten and Hinton, 2008; Tang et al., 2016; Narayan et al., 2015). However, most works assume that the data to be indexed is static, and that there will be no need to update the index over time. Even when algorithms are developed with incremental updates, the evaluation of such methods is not done in such a context. In this work we seek to evaluate metric indexes for the case of incremental insertion and querying. Because these methods are not readily available, we modify three existing indexes to support incremental insertion and querying. 1 arXiv:1801.05055v1 [cs.DS] 12 Jan 2018 \fRaff and Nicholas Our interest in this area is particularly motivated by an application in malware analysis, where we maintain a database of known malware of interest. Malware may be inserted into the database with information about malware type, method of execution, suspected origin, or suspected author. 
When an analyst is given new malware to dissect, the process can be made more efficient if a similar malware sample has already been processed, and so we want to efficiently query the database to retrieve potentially related binaries. This triaging task is a common problem in malware analysis, often related to malware family detection (Hu et al., 2013; Gove et al., 2014; Walenstein et al., 2007; Jang et al., 2011). Once done, the analyst may decide the binary should be added to the database. In this situation our index would be built once, and have insertions into the database regularly intermixed with queries. This read/write ratio may depend on workload, but is unfortunately not supported by current index structures that support arbitrary distance metrics. This scenario inspires our work to build and develop such indexes, which we test on a wider array of problems than just malware. We do this in part because the feature representations that are informative for malware analysis may change, along with the distance metrics used, and so a system that works with a wide variety of distance measures is appropriate. To emphasize the importance of such malware triage, we note it is critical from a time-saving perspective. Such analysis requires extensive expertise, and it can take an expert analyst upward of 10 hours to dissect a single binary (Mohaisen and Alrawi, 2013). Being able to identify a related binary that has been previously analyzed may yield significant time savings. The scale of this problem is also significant. A recent study of 100 million computers found that 94% of files were unique (Li et al., 2017), meaning exact hashing approaches such as MD5 sums will not help, and similarity measures between files are necessary. In terms of incremental addition of files, in 2014 most anti-virus vendors were adding 2 to 3 million new binaries each month (Spafford, 2014). Given our motivation, we will review the work related to our own in section 2. We will review and modify three algorithms for incremental insertion and querying in section 3, followed by the evaluation details, datasets, and distance metrics in section 4. Evaluations of our modifications and their impact will be done in section 5, followed by an evaluation of the incremental insertion and querying scenario in section 6. Finally, we will present our conclusions in section 7. 2. Related Work There has been considerable work in general on retrieval methods based on k-nearest-neighbor queries, and many of the earlier works in this area did support incremental insertion and querying, but did not support arbitrary distance metrics. One of the earliest methods was the Quad-Tree (Finkel and Bentley, 1974), which was limited to two-dimensional data. This was quickly extended with the kd-tree, which also supported insertions, but additionally supported arbitrary dimensions and deletions as well (Bentley, 1975). However, the kd-tree did not support arbitrary metrics, and was limited to the euclidean and similar distances. Similar work was done for the creation of R-trees, which supported the insertion and querying of shapes, and updating the index should an entry's shape change (Guttman, 1984). However, improving the query performance of R-trees involved inserting points in a specific order, which requires having the whole dataset available from the onset (Kamel and Faloutsos, 1994), and still did not support arbitrary metrics.
The popular ball-tree algorithm was one of the first efforts to devise and evaluate multiple construction schemes, some of which required all the data to be available at the onset, while others could be done incrementally as data became available (Omohundro, 1989). This is similar to our work in that we devise new incremental insertion strategies for two algorithms, though Omohundro (1989) does not evaluate incremental insertions and querying. This ball-tree approach was limited to the euclidean distance, primarily from the use of a mean data-point computed at every node. Other early work that used the triangle inequality to avoid distance computations had this same limitation (Fukunage and Narendra, 1975). While almost all of these early works in metric indexes supported incremental insertion, none contain evaluation of the indexes under the assumption of interleaved insertions and queries. These works also do not support arbitrary distance metrics. The first algorithm for arbitrary metrics was the metric-tree structure (Uhlmann, 1991a,b), which used the distance to a randomly selected point to create a binary tree. This was independently developed, slightly extended, and more thoroughly evaluated to become the Vantage-Point tree we explore in this work (Yianilos, 1993). However, these methods did not support incremental insertion. We will modify and further improve the Vantage-Point tree in section 3. Toward the creation of provable bounds for arbitrary distance metrics, the concept of the expansion constant c was introduced by Karger and Ruhl (2002). The expansion constant is a property of the current dataset under a given metric, and describes a linear relationship between the radius around a point and the number of points contained within that radius. That is to say, if the radius from any arbitrary point doubles, the number of points contained within that radius should increase by at most a constant factor. Two of the algorithms we look at in this work, as discussed in section 3, make use of this property. The first practical algorithm to make use of the expansion constant was the Cover-tree (Beygelzimer et al., 2006), which showed practical speed-ups across multiple datasets and values of k in [1, 10]. Their results were generally shown under Lp norm distances, but also included an experiment using the string edit distance. Later work then simplified the Cover-tree algorithm and improved performance, demonstrating its benefit on a wider variety of datasets and distance metrics (Izbicki and Shelton, 2015). Of the algorithms for metric indexes, the Cover-tree is the only one we are aware of with an incremental construction approach, and so we consider it one of our methods of interest in section 3. While the Cover-tree construction algorithm is described as an incremental insertion process, the more efficient variant proposed by Izbicki and Shelton (2015) includes a bound which requires the whole dataset in advance to calculate, preventing the efficient interleaving of insertions and queries.¹ Another algorithm we consider is the Random Ball Cover (RBC), which was designed for making effective use of GPUs with the euclidean distance (Cayton, 2012). Despite testing on only the euclidean distance, the algorithm and proof do not rely on this assumption — it will work with any arbitrary distance metric. We consider the RBC in this work due to its random construction, which allows us to devise an incremental construction procedure
1. The original Cover-tree did not have this issue, and so would meet our requirements for incremental insertion. We consider the newer variant since it is the most efficient.
that closely matches the original design and maintains the same performance characteristics. While the Random Ball Cover has inspired a number of GPU based follow ups (Li and Amenta, 2015; Kim et al., 2013; Gieseke et al., 2014), we do not assume that a GPU will be used in our work. Li and Malik (2016) develop an indexing scheme that supports incremental updates, but it only works for the euclidean distance. They also do not evaluate the performance as insertions and queries are interleaved.
3. Metric Indexes Used
Given the existing literature on metric indexes, there appear to be no readily available methods that suit our needs. For this reason we take three algorithms and modify them for incremental index construction and querying. In particular, we adapt the Random Ball Cover, Vantage Point tree, and Cover-tree algorithms for incremental insertion. As classically presented, the first two methods are not designed for this use case. While the original cover tree algorithm did support incremental insertions, its improved variants do not. More importantly, as we will show in section 5, the Cover-tree has worse than brute-force performance with one of our distance metrics. With our modifications we satisfy three goals that have not yet been achieved in a single data structure (a minimal interface sketch for these requirements appears below):
1. New datapoints can be added to the index at any point
2. We can efficiently query the index after every insertion
3. The index can be efficiently used with any distance metric
(a) Cover-trees produce a hierarchy of circles, but each node may have a variable number of children. Each node has a radius that upper bounds the distance to all of its children, and may partially overlap.
(b) Vantage-Point trees divide the space using a hierarchy of circles. The in/outside of each space acts as a hard boundary when subdividing.
(c) RBC selects a subset of representatives, and each point is assigned to its nearest representative (relationships marked with dashed blue line).
Figure 1. Example partitionings for all three algorithms. Red circles indicate the radius out to which one node covers the space.
While the latter point would seem satisfied by the original Cover-tree algorithm, our results indicate a degenerate case where the Cover-tree performs significantly worse than a brute force search. For this reason we consider it to have not satisfied our goals. We also contribute improvements to both the Random Ball Cover and Vantage Point Tree structures that further reduce the number of distance computations needed, by improving the rate at which points are pruned out. These improvements can dramatically increase their effective pruning rate, which leads us to alter our conclusions about which method should be used in the general case. In the below descriptions, we will use S to refer to the set of points currently in the index, and n = |S| as the number of such points. A full review of all details related to the three methods is beyond the scope of this work, but we will provide the details necessary to understand what our contributions are to each approach.
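The three goals above amount to a small contract that any of the modified indexes must satisfy. As a rough illustration only (the class and method names here are ours, not the paper's), a minimal sketch in Python:

    from abc import ABC, abstractmethod
    from typing import Any, Callable, List, Tuple

    class IncrementalMetricIndex(ABC):
        """Contract implied by goals 1-3: incremental inserts, queries valid
        at any time, and an arbitrary user-supplied distance metric."""

        def __init__(self, dist: Callable[[Any, Any], float]):
            # any valid metric: triangle inequality, symmetry, indiscernibility
            self.dist = dist

        @abstractmethod
        def insert(self, x: Any) -> None:
            """Goal 1: add a new datapoint at any point in time."""

        @abstractmethod
        def query(self, q: Any, k: int) -> List[Tuple[float, Any]]:
            """Goals 2-3: exact k-NN under self.dist, efficient immediately
            after any insert; returns (distance, point) pairs."""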
3.1 Cover Tree
The Cover-tree (Beygelzimer et al., 2006) is a popular method for accelerating nearest neighbor queries, and one of the first practical metric indexes to have a provable bound using the expansion constant c (Karger and Ruhl, 2002). The Cover-tree can be constructed in O(c^6 n log n) time, and answer queries in O(c^12 log n) time. Izbicki and Shelton (2015) developed the Simplified Cover Tree, which reduces the practical implementation details and increases efficiency in both runtime and avoiding distance computations.² To reproduce the Simplified Cover Tree algorithm without any nearest-neighbor errors, we had to make two slight modifications to the algorithm as originally presented. These adjustments are detailed in section A. The Cover-tree algorithm, as its name suggests, stores the data as a tree structure where each node represents only one data point and may have any number of children nodes³. The tree is constructed via incremental insertions, which means we require no modifications to the construction algorithm to support our use case. However, at query time it is necessary for each node p in the tree to compute a maxdist, which is the maximum distance from the point represented by node p to any of its descendant nodes. This maxdist value is used at every level of the tree to prune children nodes from the search path. Insertions can cause re-organizations of the tree, resulting in the need to re-compute maxdist bounds. For this reason the Simplified Cover-tree cannot be used to efficiently query the index between consecutive insertions. Because of the re-balancing and re-organization that occurs during tree construction, it is not trivial to selectively update the maxdist value based on the changes that have occurred. Instead we will use an upper bound on the value of maxdist. Each node in the tree maintains a maximum child radius of the form 2^l, where l is an integer. This also upper bounds the maxdist value of any node by 2^{l+1} (Izbicki and Shelton, 2015). This will allow us to answer queries without having to update maxdist, but results in a loosening of the bound. We will refer to the performance of this upper bounded version of the Cover-tree as CoverB; it is more naturally suited to the use case of interleaved insertions and queries.
2. Izbicki and Shelton also introduced a Nearest Ancestor Cover Tree, but we were unable to replicate these results. The reported performance difference between these two variants was not generally large, and so we use only the simplified variant.
3. The maximum number of children is actually bounded by the expansion constant c.
We note as well that this relaxation of the maxdist based bound represents a compromise between the simplified approach proposed by Izbicki and Shelton and the original formulation by Beygelzimer et al. In the latter case, the 2^{l+1} bound is used to prune branches, but all branches are traversed simultaneously. In the former, the maxdist bound is used to descend the tree one branch at a time, and the nearest neighbor found so far is used to prune out new branches. By replacing maxdist with 2^{l+1}, we fall somewhere in-between the approaches: using a looser bound to prune, but still avoiding traversing all branches.
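Concretely, the relaxed CoverB test needs only each node's level l rather than a maintained maxdist. A hedged sketch of the prune check (the node fields are our naming; the real tree also tracks children and re-balances on insert):

    def coverb_prune(node, q, kth_dist, dist):
        """Skip node's subtree if even its loosest reach cannot beat the
        current k-th nearest distance. Since maxdist <= 2**(l+1), the value
        d(q, node.point) - 2**(node.level + 1) lower-bounds the distance
        from q to anything in node's subtree."""
        return dist(q, node.point) - 2.0 ** (node.level + 1) > kth_dist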
In our extensive tests of these algorithms, we discovered two issues with the original specification of the simplified Cover-tree. These are detailed in section A, along with our modifications that restore the Cover-tree's intended behavior.
3.2 Vantage Point Tree
The Vantage Point tree (Yianilos, 1993; Uhlmann, 1991a) (VP-tree) is one of the first data structures proposed for accelerating neighbor searches using an arbitrary distance metric. The construction of the VP-tree results in a binary tree, where each node p represents one point from the dataset, the "vantage point". The vantage point splits its descendants into a low and a high range based on their distance from the aforementioned vantage point, with half of the child vectors in each range. For each range, we also have a nearest and farthest value, and an example of how these are used is given in Figure 2.
Figure 2. Example of a node in a vp-tree, with the vantage point in the center. The low-near bound is in red, the distance to the point closest to the center. The low-far (blue) and high-near (green) bracket the boundary of the median. No points can fall between these bounds. The farthest away point provides the high-far bound in orange.
This tree structure is built top-down, and iteratively splits the remaining points into two groups at each node in the tree. Rather than continue splitting until each node has no children, there is instead a minimum split size b. This is because below that size there are likely too few points for which we can obtain good low/high bounds. Instead, once the number of datapoints is <= b, we create a "bucket" leaf node that stores the points together and uses the distance from each point to its parent node to do additional pruning. At construction time, since each split breaks the points in half, the maximum depth of the tree is O(log n) and construction takes O(n log n) time. Assuming the bounds are successful in pruning most branches, the VP-tree then answers queries in O(log n) time. The bucketing behavior can provide practical runtime performance improvements as well. Some of this comes from better caching behavior, as bucket values will be accessed in a sequential pattern, and it avoids search branches that can be more difficult to accurately predict for hardware with speculative execution. This can be done for the VP-tree because its structure is static as it is created, whereas the Cover-tree cannot create bucket nodes due to the re-balancing done during construction.
3.2.1 Incremental Construction
While the Cover-tree required minimal changes since its construction is already incremental, we must define a new method to support such a style for the VP-tree. To support incremental insertions into a VP-tree, we must first find a location in which to store the new datapoint x. This can be done quite easily by descending the tree via the low/high bounds stored for each point, and updating the bounds as we make the traversal. Once we reach a leaf node, x is simply inserted into the bucket list. However, we do not expand the leaf node when its size exceeds b. Ideally, these bounds will be changed infrequently as we insert new points. Getting a better estimate of the initial bound values should minimize this occurrence. For this reason we expand a bucket once it reaches a size of b^2. This gives us a larger sample size with which to estimate the four bound values.
We use the value b^2 as a simple heuristic that follows our intuition that a larger sample is needed for better estimates, allows us to maintain the fast construction time of the VP algorithm, and results in an easy to implement and replicate procedure.
Algorithm 1 Insert into VP-tree
Require: vp-tree root node p, and new datapoint x to insert into tree.
1: while p is not a leaf node do
2:   dist <- d(x, p.vp)
3:   if dist < (p.lowfar + p.highnear)/2 then
4:     p.lowfar <- max(dist, p.lowfar)
5:     p.lownear <- min(dist, p.lownear)
6:     p <- p.lowChild
7:   else
8:     p.highfar <- max(dist, p.highfar)
9:     p.highnear <- min(dist, p.highnear)
10:    p <- p.highChild
11: Add x to bucket leaf node p
12: if |p.bucket| > b^2 then
13:   Select vantage point from p.bucket and create a new split, adding two children nodes to p.
14: return
Thus our insertion procedure is given in Algorithm 1, and is relatively simple. Assuming the tree remains relatively balanced, we will have an insertion time of O(log n). This will also maintain the query time of O(log n).
3.2.2 Faster Search
We also introduce a new modification to the VP-tree construction procedure that reduces search time by enhancing the ability of the standard VP-tree search procedure to prune out branches of the tree. This is done by using an extension of the insight from subsubsection 3.2.1, that we want to make our splits only when we have enough information to do so. That is, once we have enough data to make a split, choosing the median distance from the vantage point may not be the smartest split.
Figure 3. Example of how the split can be improved ("original split" vs. "better split"), with the vantage point in black and other points sorted by distance to it. Colors correspond to Figure 2.
Instead, we can use the distribution of points from the vantage point to choose a split that better bifurcates the data based on the distribution. An example of this is given in Figure 3, where the data may naturally form a binary split. This increases the gap between the lowfar and highnear bounds, which then allows the search procedure to more easily prune one of the branches. To do this quickly, so as to minimize any increase in construction time, we borrow from the CART algorithm used to construct a regression tree (Breiman et al., 1984). Given a set of n distances to the vantage-point, we find the split that minimizes the weighted variance of the two sides:

arg min_s  s · σ²_{1:s} + (n − s) · σ²_{s:n}    (1)

where σ²_{s:n} indicates the variance of the points in the range [s, n) when sorted by distance to the vantage point. Because (1) can be solved with just two passes over the n points (Welford, 1962; Chan et al., 1983), we can solve this quickly with only an incremental increase in runtime. The original VP tree selects the median distance of all points from the vantage point. This requires n distance computations, and an O(n) quick-select search. Finding the split of minimum variance still requires n distance computations, so that cost remains unchanged. However, a sort of O(n log n) must be done to find the split of minimum variance.
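A sketch of this split selection in Python (our own illustration, not the paper's implementation; it uses prefix sums rather than Welford's streaming update, but likewise finds the arg min of (1) after the O(n log n) sort):

    import numpy as np

    def min_variance_split(dists):
        """Return the split index s minimizing s*var(d[:s]) + (n-s)*var(d[s:])
        over distances d from the vantage point, per eq. (1)."""
        d = np.sort(np.asarray(dists, dtype=float))
        n = len(d)
        ps = np.cumsum(d)        # prefix sums of d
        ps2 = np.cumsum(d * d)   # prefix sums of d**2
        best_s, best_cost = 1, float("inf")
        for s in range(1, n):    # both sides must be non-empty
            var_lo = ps2[s - 1] / s - (ps[s - 1] / s) ** 2      # var of d[:s]
            cnt = n - s
            sum_hi = ps[-1] - ps[s - 1]
            sum2_hi = ps2[-1] - ps2[s - 1]
            var_hi = sum2_hi / cnt - (sum_hi / cnt) ** 2        # var of d[s:]
            cost = s * var_lo + cnt * var_hi
            if cost < best_cost:
                best_cost, best_s = cost, s
        return best_s  # split threshold lies between d[best_s-1] and d[best_s]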
3.3 Random Ball Cover
The Random Ball Cover (Cayton, 2012) (RBC) algorithm was originally proposed as an accelerating index that would make efficient use of many-core systems, such as GPUs. This was motivated by the euclidean distance metric, which can be computed with high efficiency when computing multiple distances simultaneously. This can be done by exploiting a decomposition of the euclidean distance into matrix operations, for which optimized BLAS routines are readily available. To exploit batch processing while also pruning distances, the RBC approach organizes data into large groups and uses the triangle inequality sparingly to prune out whole groups at a time. Compared to the VP and Cover Tree, the RBC algorithm is unique in that it aims to answer queries in O(√n) time and perform construction in O(n√n) time. The training procedure of the RBC algorithm is to randomly select O(√n) centers from the dataset, and denote that set of points as R. These are the R random balls of the algorithm. Each representative r_i ∈ R will own, or cover, all the datapoints for which it is the nearest representative, arg min_x d(x, r_i) ∀x ∈ S \ R, which is denoted as L_{r_i}. It is expected that each r_i will then own O(√n) datapoints. Querying is done first against the subset of points R, from which many of the representatives are pruned. Then a second query is done against the points owned by the non-pruned representatives. To do this pruning, we need the representatives to be sorted by their distance to the query point q. We will denote this as r_i^(q), which would be the i'th nearest representative to q. Pruning for k nearest neighbor queries is then done using two bounds,

d(q, r_i) < d(q, r_k^(q)) + ψ_{r_i}    (2)
d(q, r_i) < 3 · d(q, r_k^(q))    (3)

where ψ_{r_i} = max_{x ∈ L_{r_i}} d(r_i, x) is the radius of each representative, such that all of its datapoints fall within that radius. Each bound must be true for any r_i to have the k'th nearest neighbor to query q, and the overall procedure is given in Algorithm 2. Theoretically the RBC bounds are interesting in that they provide a small dependency on the expansion constant c of the data, where queries can be answered in O(c^{3/2} √n) time. This is considerably smaller than the c^12 term in cover trees, but has the larger √n dependence on n instead of logarithmic. However, the RBC proof depends on setting the number of representatives |R| = O(c^{3/2} √n) as well, which we would not know in advance in practice. Instead we will use |R| = √n in all experiments.
Algorithm 2 Original RBC Search Procedure
Require: Query q, desired number of neighbors k
1: Compute sorted order r_i^(q) ∀r ∈ R by d(r, q)
2: FinalList <- ∅
3: for all r_i ∈ R do
4:   if Bounds (2) and (3) are True then
5:     FinalList <- FinalList ∪ L_{r_i}
6: k-NN <- BruteForceSearch(q, R ∪ FinalList)  ▷ distances for R do not need to be re-computed
7: return k-NN
3.3.1 Incremental Construction
If our goal was to build a static index, the random selection of R may lead to a sub-optimal selection. It is possible that different representatives will have widely varying numbers of members. For our goal of incrementally adding to an index, this stochastic construction becomes a benefit. Because the representatives are selected randomly without replacement, it is possible to incrementally add to the RBC index while maintaining the same quality of results.
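Before turning to insertion, the pruning test shared by Algorithm 2 (and tightened by the improved search later) can be restated compactly. A small sketch with our own naming:

    def representative_survives(d_q_ri, d_q_rk, psi_ri):
        """A representative r_i can hold q's k-th nearest neighbor only if
        bounds (2) and (3) both hold; otherwise its whole group is pruned."""
        return d_q_ri < d_q_rk + psi_ri and d_q_ri < 3.0 * d_q_rk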
Algorithm 3 Insert into RBC Index
Require: RBC representatives R, associated lists L_r ∀r ∈ R, and new datapoint x to add to RBC.
1: Compute sorted order r_i^(x) ∀r ∈ R by d(r, x)
2: L_{r_1^(x)} <- L_{r_1^(x)} ∪ x
3: ψ_{r_1^(x)} <- max(d(r_1^(x), x), ψ_{r_1^(x)})  ▷ keep radius information correct
4: if ceil(√n)² ≠ n then
5:   return  ▷ else, expand R set
6: select randomly a datapoint l_new from ∪_{r∈R} L_r
7: let r_old be the representative that owns l_new, i.e., l_new ∈ L_{r_old}
8: L_{r_old} <- L_{r_old} \ l_new
9: r_new <- l_new
10: potentialChildren <- RadiusSearchRBC(r_new, max_{r∈R} ψ_r)
11: L_{r_new} <- ∅
12: R <- R ∪ r_new
13: ψ_{r_new} <- 0
14: for all y ∈ potentialChildren do
15:   Let r_y be the representative that owns y
16:   if d(y, r_y) > d(y, r_new) then  ▷ change ownership
17:     L_{r_y} <- L_{r_y} \ y
18:     L_{r_new} <- L_{r_new} ∪ y
19:     ψ_{r_y} <- max_{z∈L_{r_y}} d(r_y, z)  ▷ update radius info
20:     ψ_{r_new} <- max(ψ_{r_new}, d(y, r_new))
The details of our approach are given in Algorithm 3. Whenever we add a new datapoint to the index, we find its representative and add it to the appropriate list L. This can be done in O(√n) time, consistent with the query time of RBC. Once the closest representative is found, the radius to the farthest point may need to be updated, which is trivial. For the majority (n − √n) of insertions, this is all the work that needs to be done. For the remaining √n insertions, the total number of datapoints will reach a size such that we should have a new representative. The new representative will be selected randomly from all the points in S \ R. We can find all the datapoints that may belong to this new representative using a "range" or "radius" search. A radius search is given a query and a radius, and returns all datapoints within the specified radius of the query. In this case we give the new representative as the query and specify the range as the maximum ψ_r in the RBC so far. This is by definition the maximum distance of any point to its representative, so any point that will be owned by the new representative must have a smaller distance. In the worst case scenario, we cannot prune any points using a radius search. This means at most n other points must be considered. But since this scenario can only occur √n times, we maintain the same construction time complexity of O(n√n) in all cases. We can also state that this approach yields an amortized O*(√n) insertion time.
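The per-insert control flow, ignoring the bookkeeping inside the expansion step, might look like the following. This is our own sketch: `promote_new_representative` is a hypothetical helper standing in for lines 6-20 of Algorithm 3, and `index.reps`/`members`/`radius` are assumed field names.

    import math

    def rbc_insert(index, x):
        """Amortized O*(sqrt(n)) insert: O(sqrt(n)) to find the owning
        representative, plus an occasional expansion of R when n crosses
        a perfect square (Algorithm 3, lines 4-5)."""
        r = min(index.reps, key=lambda rep: index.dist(x, rep.point))
        r.members.append(x)
        r.radius = max(r.radius, index.dist(x, r.point))  # keep psi_r correct
        index.n += 1
        if math.isqrt(index.n) ** 2 == index.n:           # time to grow R
            promote_new_representative(index)             # hypothetical helper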
3.3.2 Faster Search
While the original RBC search is fast and efficient on GPUs and similar many-core machines, it is not as efficient for our use case. Our scenario of interleaved insertions and queries means we will be querying with only a few datapoints at a time. This means we will not obtain a large enough group of query points to obtain the batch and SIMD efficiencies that were the original goal of Cayton (2012). Further, when we consider arbitrary distance metrics, we cannot expect the same efficient method of grouping calculations as can be done with the euclidean distance. Thus we have developed an improved querying method for the RBC search to make it more efficient in our incremental insertion and querying scenario. Our improvements to the RBC search procedure can be broken down into three steps. First, we modify the search to create the k-NN list incrementally as we visit each representative r ∈ R. In particular we can improve the application of bound (2) by doing this. We note that in (2), the d(q, r_k^(q)) term serves as an upper bound on the distance to the k'th nearest neighbor. By building the k-NN list incrementally, we can instead use the current best candidate for the k'th nearest neighbor as a bound on that distance. This works intuitively, as the true k'th neighbor, if not yet found, must by definition have a smaller distance than our current candidate. Second, when visiting the points owned by each representative, l ∈ L_r, we can apply this bound again and tighten it further. This is done by replacing the ψ_{r_i} term of (2) by the distance of l to its representative r. Since this distance d(l, r) had to be computed when building the RBC in the first place, these distances can simply be cached at construction — avoiding any additional overhead. Third, to increase the likelihood of finding the k'th neighbor earlier in the process, we visit the representatives in sorted order by their distance to the query. Because our first modification tightens the bound as we find better k'th candidates, this will accelerate the rate at which we tighten the bound. The complete updated procedure is given in Algorithm 4.
Algorithm 4 New RBC Search Procedure
Require: Query q, desired number of neighbors k
1: Compute sorted order r_i^(q) ∀r ∈ R by d(r, q)
2: k-NN <- {r_1^(q)}  ▷ sorted list implicitly maintains max size of k
3: for all l ∈ L_{r_1^(q)} do  ▷ Add the children of the nearest representative
4:   k-NN <- k-NN ∪ l
5: for i ∈ 2 . . . |R| do  ▷ visit representatives in sorted order
6:   qr <- d(q, r_i^(q))
7:   Add tuple (r_i^(q), d(r_i^(q), q)) to k-NN
8:   if qr < k-NN[k].dist + ψ_{r_i} and (3) is True then
9:     for all l ∈ L_{r_i^(q)} do
10:      if qr < k-NN[k].dist + d(l, r_i^(q)) then  ▷ d(l, r_i^(q)) is pre-computed
11:        Add tuple (l, d(l, q)) to k-NN
12: return k-NN
A similar treatment can improve the RBC search procedure for range queries. We note that on lines 2 through 4, we add all the children points of the closest representative L_{r_1^(q)} unconditionally. This satisfies the requirements of the RBC search algorithm's correctness in the k nearest neighbor case, rather than just the one nearest neighbor case. We refer the reader to Cayton (2012) for details. The essence of its purpose is to pre-populate the k-NN list with values for the bounds checks done in lines 8 and 10. The first step of our new algorithm must still compute the distances for each r_i, and |R| = √n. In addition, we add all the children of the closest representative r_1^(q), which is expected to own O(√n) points. Thus this modified RBC search is still an O(√n) search algorithm. Our work does not improve the algorithmic complexity but does improve its effectiveness at pruning.
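A runnable Python rendering of Algorithm 4, under stated assumptions: each representative object carries `.point`, `.radius` (its ψ), and `.members` as cached `(point, dist_to_rep)` pairs — the field names are ours, and details such as tie-handling follow our reading of the pseudocode rather than the authors' implementation.

    import heapq

    def rbc_knn(query, reps, k, dist):
        # visit representatives in sorted order by distance to the query (step 3)
        order = sorted(reps, key=lambda r: dist(query, r.point))
        heap = []  # max-heap of (-distance, tiebreak, point), capped at k entries

        def offer(d, p):
            if len(heap) < k:
                heapq.heappush(heap, (-d, id(p), p))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, id(p), p))

        def kth():  # current candidate for the k'th neighbor distance (step 1)
            return -heap[0][0] if len(heap) == k else float("inf")

        r1 = order[0]
        offer(dist(query, r1.point), r1.point)
        for p, _ in r1.members:  # children of nearest rep added unconditionally
            offer(dist(query, p), p)
        for r in order[1:]:
            qr = dist(query, r.point)
            offer(qr, r.point)
            if qr < kth() + r.radius and qr < 3.0 * kth():  # bounds (2) and (3)
                for p, d_to_rep in r.members:  # d_to_rep cached at build (step 2)
                    if qr < kth() + d_to_rep:
                        offer(dist(query, p), p)
        return sorted(((-nd, p) for nd, _, p in heap), key=lambda t: t[0])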
4. Datasets and Methodology
We use a number of datasets and distance metrics to evaluate our changes and the efficiency of our incremental addition strategies. For all methods we have confirmed that the correct nearest neighbors are returned compared to a naive brute-force search. Our evaluation will cover multiple aspects of performance, such as construction time, query time, and the impact of incremental insertions on index efficiency. We will use multiple values of k in the nearest neighbor search so that our results are relevant to multiple use-cases. Toward this end we will also use multiple datasets and distance metrics to further validate our findings.
4.1 Evaluation Procedure
The approach used in most prior works to evaluate metric indexes is to create the index from all of the data, and then query each datapoint in the index, searching for the single nearest neighbor (Izbicki and Shelton, 2015). For consistency we replicate this experiment style, but do not use every datapoint as a query point. Using every datapoint would result in worst case O(n²) runtime for some of our tests, preventing us from comparing on our larger datasets. Since our interest is in whether the index allows for faster queries, we can instead determine the average pruning efficiency with high accuracy by using only a small sample of query points. In tests using a sample of 1000 points for testing, versus using all data points, we found no difference in conclusions or results⁴. Thus we will use 1000 test points in all experiments. This will allow us to run any individual test in under a week, and evaluate the insertion-query performance in a more timely manner. When using various datasets, if the dataset has a standard validation set, it will not be used. Instead points from the training set will be used for querying. This is done for consistency, since not every dataset has a standard validation or testing set. Our experiments will be performed searching for the k nearest neighbors with k ∈ {1, 5, 25, 100}. Evaluating for multiple values of k is often ignored in most works, which focus on the k = 1 case in their experiments (e.g. Izbicki and Shelton, 2015; Cayton, 2012; Yianilos, 1993), or test on only a few small values of k <= 10 (Beygelzimer et al., 2006). This is despite many applications, such as embeddings for visualization (Tarlow et al., 2013; Maaten and Hinton, 2008; van der Maaten, 2014; Tang et al., 2016), using values of k as large as 100. By testing a range of values for k we can determine if one algorithm is uniformly better for all values of k, or if different algorithms have an advantage in one regime over the others.
4. The largest observed discrepancy was of 0.3 percentage points.
To evaluate the impact of incremental index construction on the quality of the final index, each index will be constructed in three different ways. Differences in performance between these three versions of the index will indicate the relative impact that incremental insertions have.
1. Using the whole dataset and performing the classic batch construction method, by which we mean the original index construction process for each algorithm (referred to as batch construction)
2. Using half the dataset to construct an initial index using the classic batch method, and incrementally inserting the second half of the data (referred to as half-batch)
3. Constructing the entire dataset incrementally (referred to as incremental).
For these experiments, the Cover-tree is excluded — as its original batch construction is already incremental (though it does not support efficient queries between insertions). In our results we expect the RBC algorithm to have minimal change in performance, due to the stochastic nature of representative selection. The expected performance impact on the VP-tree is unknown, though we would expect the tree to perform best in batch construction, second best when using half-batch construction, and worst when fully incremental.
Results will consider the number of distance computations both when including and when excluding distances performed during index construction. We note that the runtime of all methods and tests correlates directly with the number of distance computations done in our code. Comparing distance computations is preferred so that we observe the true impact of pruning, rather than the efficiency of micro optimizations, and is thus comparable to implementations written in other languages. We will also test the effectiveness of each method when interleaving queries and insertions. This will be evaluated in a manner analogous to common data structures, where we have differing possible read (query) to write (insert) ratios.
4.2 Data and Distances Used
Now that we have reviewed how we will evaluate our methods, we will list the datasets and distance metrics used in such evaluations. A summary is presented in Table 1. Datasets and distance metrics were selected to cover a wide range of data and metric types, to include common baselines, and so that experiments would finish within a one-week execution window. Our first three datasets all use the familiar euclidean distance (4). The first of these is the well known MNIST dataset (Lecun et al., 1998), which is a commonly used benchmark for machine learning in general. Due to its small size we also include a larger version of the dataset, MNIST8m, which contains 8 million points produced by random transformations of the original dataset (Loosli et al., 2007). We also evaluate the Forest Cover Type (Covtype) dataset (Blackard and Dean, 1999), which has historically been used for metric indexes.

d(x, y) = ‖x − y‖    (4)

Dataset                  Samples     Distance Metric
MNIST                    60,000      Euclidean
MNIST8m                  8,000,000   Euclidean
Covtype                  581,012     Euclidean
VxHeaven                 271,095     LZJD
VirusShare5m             5,000,000   LZJD
ILSVRC 2012 Validation   50,000      EMD
IMDB Movie Titles        143,337     Levenshtein
Table 1. Datasets used in experiments, including the number of points in each dataset and the distance metric used.

Finding nearest neighbors and similar examples is important for malware analysis (Jang et al., 2011; Hu et al., 2009). The VxHeaven corpus has been widely used for research in malware analysis (vxh), and so we use it in our work for measuring the similarity of binaries. VxHeaven contains 271k binaries, but malware datasets are routinely reaching the hundreds of millions to billions of samples. For this reason we also select a random 5 million element set from the VirusShare corpus (Roberts, 2011), which shares real malware with interested researchers. As the distance metric for these datasets, we will use the Lempel-Ziv Jaccard Distance (LZJD) (Raff and Nicholas, 2017a), which was designed for measuring binary similarity and is based upon the Jaccard distance. LZJD uses the Lempel-Ziv algorithm to break a byte sequence up into a set of sub-sequences, and then uses the Jaccard distance (5) to measure the distance between these sets. Recent work has used LZJD for related tasks such as similarity digests for digital forensics, where prior tools could not be accelerated in the same manner since they lacked the distance metric properties (Raff and Nicholas, 2017b).

d(A, B) = 1 − |A ∩ B| / |A ∪ B|    (5)
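For concreteness, eq. (5) applied to two extracted sub-sequence sets is a one-liner; a small sketch of our own (not the paper's implementation):

    def jaccard_distance(a: set, b: set) -> float:
        """Eq. (5): d(A, B) = 1 - |A ∩ B| / |A ∪ B|; two identical sets
        have distance 0, disjoint sets distance 1."""
        inter = len(a & b)
        union = len(a) + len(b) - inter
        return 1.0 - (inter / union if union else 1.0)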
One of the metrics measured in the original Cover-tree paper was a string edit distance (Beygelzimer et al., 2006). They compared to the dataset and methods used in Clarkson (2002); however, the available data contains only 200 test strings. Instead we use the Levenshtein edit distance on IMDB movie titles (Behm et al., 2011), which contains both longer strings and is three orders of magnitude larger. The simplified Cover-tree paper evaluated a larger range of distance metrics (Izbicki and Shelton, 2015), including the Earth Mover's Distance (EMD) (Rubner et al., 2000). The EMD provides a distance measure between histograms, and was originally proposed for measuring the similarity of images. We follow the same procedure, using the "thresholded" EMD (Pele and Werman, 2009), except that we use the RGB color space⁵. We use the 2012 validation set of the ImageNet challenge (Russakovsky et al., 2015) for this distance metric, as it is the most computationally demanding metric of the ones we evaluate in this work.
5. Our software did not support the LabCIE color space previously used, and we did not notice any significant difference in results for other color spaces.
5. Evaluation of Construction Time and Pruning Improvements
We first evaluate the impact of our changes to each of the three algorithms. For RBC and VP-trees, we have made alterations that aim to improve the ability of these algorithms to avoid unnecessary distance computations at query time. For the Cover-tree, we have made a modification that will negatively impact its ability to perform pruning, but will make it viable for interleaved insertions and queries. We will evaluate the impact of our changes on construction time, on query efficiency under normal construction, and on the efficiency of the complete index under incremental construction.
5.1 Impact on Construction Time
To determine the impact of the incremental construction and our modifications, we will compare each algorithm in terms of the number of distance computations needed to construct the index. We will do this for all three construction options, batch, half-batch, and incremental, as discussed in section 4. The time for only constructing the indices in these three ways is shown in Figure 4. We note that there is no distinction between the Cover and CoverB construction times, and that the cover-tree is always incremental in construction. For this reason we only show one bar to represent Cover and CoverB across all three construction scenarios to avoid graph clutter. Here we see two performance characteristics. On datasets like MNIST, where we use the euclidean distance, RBC is the slowest to construct. This is expected, as it also has the highest complexity at O(n√n) time. We also note that the RBC radius search is not as efficient at pruning, and fails to do so on most datasets. Only on the datasets that are most accelerated, such as the Covtype dataset, does the RBC incremental construction avoid distance computations during construction. This empirically supports the theoretical justification that we maintain the same construction time for the RBC algorithm, as discussed in subsubsection 3.3.1. The second slowest to construct is the Cover-tree, followed by the VP-tree, which is fastest. On the VxHeaven dataset, with the LZJD metric, the construction time performance of the Cover-tree degrades dramatically, using two orders of magnitude more distance computations than the RBC.
We believe this performance degradation is an artifact of the expansion constant c that occurs when using the LZJD metric. The VP tree has no construction time dependence on c, and the RBC algorithm has a small O(c^{3/2}) dependency compared to the Cover-tree's O(c^6) dependence. On the VirusShare5m dataset, the Cover-tree couldn't be constructed given over a month of compute time. We also note that the Cover-tree had degraded construction performance on the IMDB Movies dataset using the Levenshtein distance. These results present a potential weakness in the Cover-tree algorithm. Barring the performance behavior of the Cover-tree, both the RBC and VP-tree have more consistent performance across datasets. We note of particular interest that the incremental construction procedure for the RBC results in almost no change in the number of distance computations needed to build the index⁶. The radius search is rarely able to do any pruning for the RBC algorithm, and so the brute force degrades to the same number of distance computations as the batch insertion. The Covtype dataset is the one for which each algorithm was able to do the most pruning, and thus shows the most pronounced effect of this.
6. The same cannot be said for wall clock time, which is expected.
Figure 4. Construction performance for each algorithm on each dataset (bar charts for MNIST, MNIST8m, VxHeaven, Covtype, IMDB Movies, and ILSVRC; y-axis on a log scale). The y-axis represents the number of distance computations performed to build each index. Each algorithm is plotted three times: once using classic batch construction, half-batch, and incremental. The Cover-tree's construction algorithm is equivalent in all scenarios, so only one bar is shown.
The VPMV variant of the VP-tree also matches the construction profile of the standard VP-tree on each dataset, with slightly increased or decreased computations depending on the dataset. This is to be expected, as the standard VP-tree always produces balanced splits during batch construction. The incremental construction can also cause lopsided splits for both the VP and VPMV-tree, which results in a longer recurrence during construction, and thus increased construction time and distances. The VPMV-tree may also encourage such lopsided splits, increasing the occurrence of this behavior. Simultaneously, the incremental construction requires fewer distance computations to determine early splits, and so can result in fewer overall computations if the splits happen to come out near balanced. The data and metric dependent properties will determine which impact is stronger for a given case. The impact of incremental construction on the VP-trees is also variable, and can increase or decrease construction time. In either direction, the change in VP construction time is minor relative to the costs for Cover-trees and the RBC algorithm.
Overall we can draw the following conclusions about construction time efficiency: 1) the VP-trees are fastest in all cases, and the proposed VPMV variant has no detrimental impact; 2) the RBC algorithms are the most consistent, but often slowest, and RBCImp has no detrimental impact; 3) the Cover-tree is not consistent in its performance relative to the other two algorithms, but when it works well, is in the middle of the road.
5.2 Impact on Batch Query Efficiency
We now look at the impact of our changes to the three search procedures on querying the index, when the index is built in the standard batch manner. This isolates the change in performance to only our modifications of the three algorithms. Our goal here is to show that RBCImp and VPMV are improvements over the standard RBC and VP-tree methods. We also want to quantify the negative impact of using the looser bounds in CoverB that allow for incremental insertion and querying, which is not easy with the standard simplified Cover-tree due to its use of the maxdist bound and potential restructuring on insertions (Izbicki and Shelton, 2015).
Figure 5. Number of distance computations needed as a function of the desired number of neighbors k (panels for MNIST, MNIST8m, VxHeaven, Covtype, IMDB Movies, and ILSVRC; curves for RBC, RBCImp, VP, VPMV, Cover, and CoverB). The y-axis is the ratio of distance computations compared to a brute-force search (shown at 1.0 as a dotted black line).
Considering only batch construction, we can see the query efficiency of these methods in Figure 5, where we look at the fraction of distance computations needed compared to a brute-force search. This figure factors in the distance computations needed during construction time, so the query efficiency is with respect to the whole process. We remind the reader that this plot is constructed from a random sample of 1000 randomly selected query points, and then scaled to have the same weight as if all test points were used. That is to say, if a corpus has n data points, we compute the average number of distance computations from a sample of 1000 points. The total number of distance computations is then treated as this average times n. This closely mimics the results that would have been achieved by using all n points as queries, but keeps runtime manageable given our compute resources. In extended testing on corpora where it is feasible to compute this for all n points, just 100 samples reliably estimated the ratio to two significant figures, so our 1000 point estimates should allow us to reach the same conclusions with confidence. One can see that for the RBC and VP-tree algorithms, our enhancements to the search procedure are effective. For the RBC algorithm in particular, more distance computations were done than the brute force search in most cases, but RBCImp dramatically improves the competitiveness of the approach. This comes at a loss of compute efficiency when using the euclidean metric, which is where the RBC obtains its original speed improvements. But our work is looking at the general efficiencies of the RBC for arbitrary distance metrics, which may not have the same efficiency advantages when answering queries in batches. In this respect the pruning improvements of RBCImp are dramatic and important if the RBC algorithm is to be used. The VPMV reduces the number of computations needed compared to the standard VP-tree in all cases.
The amount of improvement varies by dataset, ranging from almost no improvement to nearly an order of magnitude fewer distance computations for the Covtype dataset. Given these results, our choice to produce unbalanced splits during construction is empirically validated. As expected, the CoverB variant of the simplified Cover-tree had a detrimental impact on efficiency, as it relaxes the bound to the same one used in the original Cover-tree work (Beygelzimer et al., 2006). Among all tests, the CoverB-tree required 1.6 to 6.7 times as many distance computations as the standard Cover-tree, with the exact values given in Table 2 for all tested values of k. The few distance computations avoided by not determining the tighter bound clearly account for a considerable portion of the simplified Cover-tree's improved performance.
Table 2. For each dataset, this table shows the multiplier on the number of distance computations CoverB had to perform compared to a normal Cover-tree.
k     MNIST   MNIST8m   ILSVRC   Covtype   IMDB   VxHeaven
1     1.57    6.73      2.07     2.27      1.70   0.97
5     1.38    5.71      1.96     2.16      1.44   0.98
25    1.25    2.75      1.81     1.97      1.29   0.98
100   1.16    2.44      1.67     1.73      1.20   0.98
While the Cover-tree was the most efficient at avoiding distance computations on the MNIST dataset, the Cover-tree is the worst performer by far on the VxHeaven dataset. The increased construction time results in the Cover-tree performing 20% more distance computations than would be necessary with the brute force approach. We also see an interesting artifact in that more distance computations were done on VxHeaven when using the tighter maxdist bound than with the looser CoverB approach. This comes from the extra computations needed to obtain the maxdist bound in the first place, and indicates that more distance computations are being done to obtain that bound than are saved by its more efficient pruning.
Figure 6. Query performance on the VirusShare5m dataset (fraction of distance computations as a function of k).
We also note that the VxHeaven dataset, using the LZJD distance, had the worst query performance amongst all datasets, with LZJD barely managing to avoid 5% of the distance computations compared to a brute-force search. By testing this on the larger VirusShare5m dataset, as seen in Figure 6, we can see that increasing the corpus size does lead to pruning efficiencies. While the Cover-tree couldn't be built on this corpus, both the RBC and VP algorithms are able to perform reasonably well. The VPMV did best, avoiding between 57% and 40% of the distance computations a brute-force search would require. Viewing these results as a whole, we would have to recommend the VPMV algorithm as the best choice in terms of query efficiency. In all cases it either prunes the most distances for all values of k, or is a close second to the Cover-tree (which has an extreme failure case with LZJD).
5.3 Impact of Incremental Construction on Query Efficiency
For the last part of this section, we examine the impact on query pruning based on how the index was constructed. That is to say, does half-batch or incremental construction of the index negatively impact the ability to prune distance computations, and if so, by how much? Such evaluation will be shown for only the more efficient RBCImp and VPMV algorithms that we will further evaluate in section 6. We do not consider the Cover-tree variants in this portion.
As noted in subsection 3.1, the Cover-tree's construction is already incremental. Thus these indexes will be equivalent when given the same insertion ordering. The only change in Cover-tree efficiency would be from random variance caused by changes in insertion order. The difference between the ratios of distance computations done for Half-Batch (H) and Incremental (I) index construction is shown in Figure 7. That is to say, if r_H = (distance computations with half-batch) / (distance computations with brute force), and r_B has the same definition but for the batch construction, then the y-axis of the figure shows r_B − r_H. This is also plotted for the difference under incremental construction, i.e., r_B − r_I.
Figure 7. Difference in the number of distance computations needed as a function of the desired number of neighbors k (panels for MNIST, MNIST8m, VxHeaven, Covtype, IMDB Movies, and ILSVRC; curves for RBCImp (H), RBCImp (I), VPMV (H), and VPMV (I)). The y-axis is the difference in the ratio of distance computations compared to a brute-force search. We note that the scale on the y-axis differs between panels, and the small scale indicates that incremental construction has little impact on query efficiency.
When this value is near zero, it means that both the batch and the half-batch or incremental construction approaches have avoided a similar number of distance computations. We remind the reader that half-batch is where the index is constructed using the standard batch construction approach for the first n/2 data-points, and the remaining n/2 are inserted incrementally. Incremental construction builds the index from empty to full using only incremental insertions. Positive values indicate an increase in the number of distance queries needed. Negative values indicate a reduction in the number of distance queries needed, and are generally an indication of problem variance. That is to say, when the difference in ratios goes negative, it is because the natural variance (caused by insertion order randomness) is greater than the impact of the incremental construction. Such scenarios would generally be considered favorable, as they indicate that our modifications have no particular positive or negative impact. We first note a general pattern in that the difference in query efficiency can go up or down with changes in the desired number of neighbors k. This will be an artifact of both the dataset and distance metric used, and highlights the importance of testing metric structures over a large range of k. Testing over a wide range of k has not historically been done in previous works, which usually perform only the 1-nn search. In our results we can see that the RBC algorithm performs best in these tests. The RBCImp approach's pruning ability is minimally impacted by changes in construction for all datasets and values of k. The largest increase is on MNIST for k = 1, where the half-batch insertion scenario increases from 59.4% to 60.6%, an increase of only 1.2 percentage points.
It makes sense that the RBCImp approach would have a consistent, minimal degradation in query efficiency, as the structure of the RBC is coarse, and our incremental insertion strategy closely matches the behavior of the batch creation strategy. The VPMV-tree does not perform as well as the RBCImp, and we can see that incremental construction always has a larger, but still small, impact on its performance for all datasets. The only case where this exceeds a two percentage point difference is on the MNIST8m dataset, where a gap of approximately 7.6 percentage points occurs for incremental and half-batch construction. The larger impact on the VPMV's performance is understandable given that our insertion procedure does not have the same information available for choosing splits, which may cause sub-optimal choices. Our expectation would be that the VPMV's performance would degrade more when using incremental (I) insertion rather than half-batch (H), as the half-batch insertion gets to use more datapoints to estimate the split point for nodes higher up in the tree. Our results generally support this hypothesis, with VPMV (I) causing more distance queries to be performed than the (H) case. However, for MNIST8m, VxHeaven, and ILSVRC, the performance gap is not that large across the tested values of k. This suggests that the loosened bounds during insertion may also be an issue impacting the efficiency after insertions. One possible way to reduce this impact would be to add multiple vantage points dynamically during insertion, to avoid impacting the existing low/high bounds of the VP-tree. Such Multi-Vantage-Point (MVP) trees have been explored previously (Bozkaya and Ozsoyoglu, 1999) in a batch construction context. We leave research on exploiting such extensions to future work. Regarding the impact on query efficiency given incremental insertions, we can confidently state that the RBC approach is well suited to this part of the problem, with almost no negative impact on efficiency. The VP-tree does not fare quite as well, but is still more efficient than the RBCImp algorithm in all of these cases after construction from only incremental insertions. Overall, we can draw some immediate conclusions with respect to our proposed changes to Cover-trees, VP-trees, and the RBC index. First, VP-trees in general strike a strong balance between construction time cost and query time efficiency across many datasets with differing metrics. For both the RBC and VP tree, we can improve their query time efficiency across the board. These improvements come with minimal cost, and so we consider them exclusively in section 6, where we look at incremental insertions and querying. We also observe that the Cover-tree is significantly degraded at insertion/construction time when using the LZJD distance.
6. Evaluation of Incremental Insertion-Query Efficiency
At this point we have shown that RBCImp and VPMV are improvements over the original RBC and VP-tree algorithms in terms of query efficiency, with no significant impact on the construction time. We have also shown that the indexes constructed by them are still effective at pruning distance computations, which encourages their use. We can now evaluate their overall effectiveness when we interleave insertions and queries in a single system. In this section we consider the case of evaluating each index in the context of incremental insertion and querying.
Contrasting with the standard scenario, where we build an index and immediately query it (usually for k-nearest neighbor classification, or some similar purpose), we will be building an index and evaluating the number of distance computations performed after construction. This scenario corresponds to many realistic use cases, where a large training set is deployed for use, and new data is added to the index over time. Given a dataset with n items in it, our evaluation procedure will consider r queries (or "reads") and w insertions (or "writes") to the index. In the naive case, where we perform brute force search, there is no cost to writing to the index, only to performing a query. This brute force approach also represents our baseline for the maximum number of distance computations needed to answer the queries. Similar to data structures for storing and accessing data and to concurrency tools, we may also explore differing ratios of reads to writes. In our experiments we evaluated insert/query ratios from 100:1 to 1:100. In all cases, we found that the most challenging scenario was when we had 100 insertions for each query. This is not surprising, as all of our data structures have a non-zero cost for insertions, which in the case of RBC and Cover-trees can be quite significant. Thus, below we will only present results for the case where we have 100 insertions for each query, and our tests will be limited to 1000 insertions due to runtime constraints⁷. We construct each initial index on half of the data points, using the batch construction method. For the Cover-tree, only CoverB produces reasonable insertion/query performance, as the maxdist bound can't be maintained when re-balancing occurs. Using the original loose bound causes a considerable reduction in efficiency at query time. By recording the multiplicative difference between the tighter bound Cover-tree and the original looser bound in CoverB in Table 2, we can plot the performance of the ideal Cover-tree as a function of CoverB. This gives us a measure of the best possible performance of the Cover-tree in this scenario, as it ignores all overheads in any potential scheme for selectively updating the Cover-tree bound as items are inserted that would cause re-balancing. We will indicate this ideal Cover-tree as CoverI. The results of our methods are presented in Figure 8. Amongst the RBCImp, VPMV, and CoverB algorithms, the VPMV dominates all other approaches. It successfully avoids the most total distance computations to answer nearest neighbor queries for all values of k on all datasets. This is not surprising given the cumulative results of section 5, which found the VPMV to require the fewest distance computations during construction time and to always be either the most efficient at avoiding distance computations, or close behind the Cover-tree approach.
7. We allowed a maximum of one week runtime for tests to complete in this scenario.
Figure 8. Fraction of distance computations needed (relative to the naive approach) in the incremental scenario, with 100 insertions for every query (panels for MNIST, MNIST8m, VxHeaven, Covtype, IMDB Movies, and ILSVRC; curves for RBCImp, VPMV, CoverB, and CoverI). Does not include initial construction costs, only subsequent insertion costs.
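The evaluation loop itself is simple to state. A hedged sketch of our own (the `index` fields and `dist_count` tally are assumptions, not the paper's harness):

    def interleaved_benchmark(index, inserts, queries, k, writes_per_read=100):
        """Interleave writes and reads at the stated 100:1 ratio and report
        the distance computations performed after initial construction."""
        q = iter(queries)
        for i, x in enumerate(inserts, start=1):
            index.insert(x)
            if i % writes_per_read == 0:
                index.query(next(q), k)
        return index.dist_count  # assumed counter of metric evaluations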
Even if we could obtain the maxdist bound for free, we can see that the CoverI approach is still not very competitive with the VPMV-tree. While CoverI does have better performance than VPMV on some datasets, it trails on Covtype by nearly an order of magnitude, and this is before considering the failure of the Cover-trees to perform with the LZJD distance on VxHeaven and VirusShare5m. This variability in performance makes the Cover-tree less desirable to use with arbitrary distance metrics. While the VPMV appears to be the best overall fit for our task, we note that our RBCImp also makes a strong showing, despite targeting O(√n) complexity instead of O(log(n)). RBCImp consistently performs better than random guessing, which cannot be said for the Cover-tree. On the more difficult datasets it is often not far behind the VPMV-tree in performance, though it is an order of magnitude less efficient on the Covtype and ILSVRC datasets. The biggest weakness of the RBC approach is that incremental insertions carry an amortized cost, with the insertion time increasing dramatically every √n insertions in order to expand the representative set. If the number of insertions is known to be bounded, this may be an avoidable cost, increasing the RBC's practicality. We note as well that when a dataset is stored in a distributed index across multiple servers, the RBC's coarse structure may allow for more efficient parallelization. This may be an important factor in future work when we consider datasets larger than what can be stored on a single machine. 6.1 Discussion While we have modified three algorithms for our scenario of incremental querying and insertion, we note that there is a further unexplored area for improvement across the read/write ratio. In our case, the most challenging setting for all algorithms was handling more "writes" per "read," as each insertion required multiple distance computations, while the insertions did not dramatically change the performance at query time. This is in part because we have modified existing algorithms to support this scenario, and so the performance when interleaving insertions and queries closely follows the performance when we evaluate queries with construction costs included, as we did in section 5. Of the algorithms we have tested, the VPMV performs best, with the lowest construction time, and is almost always the fastest at query time. This is also in the context of evaluation in a single-threaded scenario. When we consider a multi-threaded scenario, the VPMV can utilize multiple threads for index construction using the batch-construction approach; however, the insertion of a single data point cannot easily be parallelized. The Cover-tree shares this challenge. Our RBCImp approach presents a potential advantage over both of these algorithms when we consider the multi-threaded or distributed scenario. As a consequence of how the RBC algorithm achieves its O(√n) insertion and query time, we can readily parallelize line 1 of Algorithm 3 across up to √n processors, requiring only a reduce operation to determine which processor had the closest representative. It may then be more practical than the VPMV approach for extremely large indexes, if sufficient compute resources are available. The downside of the RBC algorithm comes when the representative set must be increased, requiring more work and presenting an insertion cost that will periodically spike.
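As a sketch of why this first phase parallelizes so readily, the scan over representatives can be sharded and reduced. The thread-based interface below is a hypothetical illustration; a real deployment would instead shard the representatives across servers and perform the same final reduce.

```python
from concurrent.futures import ThreadPoolExecutor

def nearest_representative(query, representatives, dist, n_workers=4):
    """Scan the O(sqrt(n)) RBC representatives in parallel shards;
    the final reduce picks the representative closest to the query."""
    def best_in(shard):
        # Each worker returns its local (distance, representative) minimum.
        return min(((dist(query, r), r) for r in shard), key=lambda t: t[0])

    shards = [representatives[i::n_workers] for i in range(n_workers)]
    shards = [s for s in shards if s]  # drop empty shards
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        local_bests = list(pool.map(best_in, shards))
    return min(local_bests, key=lambda t: t[0])[1]
```

The same map-and-reduce shape carries over to multiple machines, which is what makes the RBC's coarse structure attractive in the distributed setting.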
This periodic spike could be remedied by amortizing the cost of increasing the representative set across the preceding insertions, but we leave this to future work, as the real-world efficiency of an implementation must be considered to determine how practical such a solution would be. In future work we hope to develop new algorithms that are specifically designed for incremental insertion and querying. We note two potential high-level strategies by which one might develop methods that perform better for read-heavy and write-heavy use cases. We consider these beyond the scope of our current work, which looks at modifying existing algorithms, but they may be fruitful inspiration for specialized methods. 6.1.1 Write & Insert Heavy When multiple data points are inserted before each query, it may become possible to use the index itself to accelerate the insertion process. Say that a set Z of points will be inserted into the index at a time. We can cluster the members of Z by their density/closeness, and insert each cluster together as a group. One option may be to find the medoid of the group and its radius, which can then be used as a proxy point that represents the group as a whole. One could then insert the sub-groups into the index with a reduced number of distance computations, provided the triangle inequality can be used to determine that all members of the group belong in the same region of the index. The group may then be dissolved once such macro-level pruning becomes impossible, or reduced into smaller sub-groups to continue the process. The dual-tree query approach (Curtin and Ram, 2014) presents, at a high level, a similar strategy for efficiently answering multiple queries at a time. 6.1.2 Read & Query Heavy Another scenario is that insertions into the index are relatively rare compared to the number of nearest-neighbor queries given to the index. In this case it may be desirable to have the query process itself build and restructure the tree. This notion is in a similar spirit to splay trees and the union-find algorithm (Tarjan and van Leeuwen, 1984; Tarjan, 1975; Hopcroft and Ullman, 1973). Insertions to the dataset would be placed in a convenient location, and their first distances computed when a new query is given. Say that $x_i$ was a previously inserted point. Once we have a new query $x_q$, the distance to the query is obtained for $x_i$ and for $x_q$'s nearest neighbors. If $d(x_i, x_q) \approx c \cdot d(x_q, x^{(k)})$, where $x^{(k)}$ is $x_q$'s k-th nearest neighbor and $c$ is some constant, we can then infer that $x_i$ should be placed in a similar location in the index. As multiple insertions are performed, we can use these distances with respect to the query to determine which points are related and should be kept close in the index. 7." + }, + { + "url": "http://arxiv.org/abs/1712.08197v1", + "title": "Fair Forests: Regularized Tree Induction to Minimize Model Bias", + "abstract": "The potential lack of fairness in the outputs of machine learning algorithms\nhas recently gained attention both within the research community as well as in\nsociety more broadly. Surprisingly, there is no prior work developing\ntree-induction algorithms for building fair decision trees or fair random\nforests. These methods have widespread popularity as they are one of the few to\nbe simultaneously interpretable, non-linear, and easy-to-use.
In this paper we\ndevelop, to our knowledge, the first technique for the induction of fair\ndecision trees. We show that our \"Fair Forest\" retains the benefits of the\ntree-based approach, while providing both greater accuracy and fairness than\nother alternatives, for both \"group fairness\" and \"individual fairness.\" We\nalso introduce new measures for fairness which are able to handle multinomial\nand continuous attributes as well as regression problems, as opposed to binary\nattributes and labels only. Finally, we demonstrate a new, more robust\nevaluation procedure for algorithms that considers the dataset in its entirety\nrather than only a specific protected attribute.", + "authors": "Edward Raff, Jared Sylvester, Steven Mills", + "published": "2017-12-21", + "updated": "2017-12-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "Introduction As applications of Machine Learning become more pervasive in society, it is important to consider the fairness of such models. We consider a model to be fair with respect to some protected attribute $a_p$ (such as age or gender) if its predicted label $\hat{y}$ for a datum $x$ is unaffected by changes to $a_p$. Removing $a_p$ from $x$ is not sufficient to meet this goal in practice, as $a_p$'s effect is still present as a latent variable (Pedreshi, Ruggieri, and Turini 2008). In this work, we look at adapting decision trees, specifically Random Forests, to this problem. Given an attribute $a_p$ that we wish to protect, we will show how to induce a "Fair Forest" that provides improved fairness and accuracy compared to existing approaches. Decision Trees have become one of the most widely used classes of machine learning algorithms. In particular, C4.5 (Quinlan 1993) and CART (Breiman et al. 1984) tree induction approaches, combined with ensembling approaches like Random Forests (Breiman 2001) and Gradient Boosting (Friedman 2002), have proven to be potent and effective across a broad spectrum of needs and tasks. These methods are one of the few to be simultaneously interpretable, nonlinear, and easy-to-use. Random Forests have proven to be particularly effective. In a study of over one hundred datasets, Random Forests were found to be one of the best-performing approaches, even when no hyperparameter tuning is done (Fernández-Delgado et al. 2014). XGBoost, a variant of gradient boosting, has been used in the winning solutions to over half of recent Kaggle competitions (Chen and Guestrin 2016). Tree-based algorithms also provide a rare degree of interpretability. Single trees within an ensemble can be printed in a human-readable form, allowing the immediate extraction of the decision process. Further still, there are numerous ways to extract feature importance scores from any tree-based approach (Louppe et al. 2013; Breiman 2003). Being able to understand how a model reaches its decision is of special utility when we desire fair decision algorithms, as it gives us a method to double-check that the model appears to be making reasonable judgments. This interpretability has already been exploited in prior work to understand black-box models (Hall and Gill 2017). Given the wide-ranging benefits and successes of tree-based learning, it is surprising that no prior work has focused on designing fair decision tree induction methods.
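The fairness definition above suggests a direct empirical check: perturb only $a_p$ and see whether predictions move. The sketch below is a minimal illustration, assuming a scikit-learn-style model with a `predict` method, a feature matrix `X`, and a binary protected attribute stored in column `protected_col`; all of these names are hypothetical.

```python
import numpy as np

def fairness_violation_rate(model, X, protected_col, values=(0, 1)):
    """Fraction of rows whose prediction changes when only the
    protected attribute is toggled between its two values."""
    X_a = X.copy(); X_a[:, protected_col] = values[0]
    X_b = X.copy(); X_b[:, protected_col] = values[1]
    return float(np.mean(model.predict(X_a) != model.predict(X_b)))
```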
Other methods for constructing fair models will be reviewed in section 2. In section 3 we propose, to the best of our knowledge, the first fair decision tree induction method. Our design is simple, requiring only minimal changes to existing tree induction code, thereby retaining the desirable property that the trees tend to "just work" without hyperparameter tuning. Our experimental methodology is discussed in section 4, including the introduction of novel fairness measures which are suitable for use with multinomial and continuous attributes. Finally, experimental results are summarized in section 5, including a new experimental procedure that evaluates fair algorithms against all possible features rather than a single protected attribute. We end with our conclusions in section 6. 2 Related Work One approach to building fair classifiers is based on data alteration, where the original corpus is altered to remove or mask information about the protected attribute. Some of the first work in fairness learning followed this approach, and attempted to make the minimum number of changes that removed the discriminative protected information (Kamiran and Calders 2009). Others have attempted to re-label the data points to ensure a fair determination (Luong, Ruggieri, and Turini 2011). Another approach is to regularize the model in such a way that it is penalized for keeping information that allows it to discriminate against the protected feature. Some of the earliest work here developed a fair version of the Naive Bayes algorithm (Calders and Verwer 2010). Others have taken to creating a differentiable regularization term and applying it to models such as Logistic and Linear Regression (Kamishima, Akaho, and Sakuma 2011; Bechavod and Ligett 2017; Berk et al. 2017; Calders et al. 2013). Our new fair induction algorithm is a member of this group of regularization-based approaches, but unlike prior works it has no parameters to tune. One final group of related approaches builds new representations which mask the protected attribute (Dwork et al. 2012). The use of neural networks has become popular for this task, such as variational autoencoders (Louizos et al. 2016) and adversarial networks (Edwards and Storkey 2016). One of the seminal works in this field used an autoencoder with three separate terms in the loss (Zemel et al. 2013), and provides one of the largest comparisons on three now-standard datasets. We replicate their evaluation procedure in this work. There is an important commonality in all of these prior works: the research is done with respect to datasets and attributes where there is a prior normative expectation of fairness. These are problems usually of social importance, and the protected attributes are intrinsic characteristics like age, gender, and nationality. But what if focusing on such problems has inadvertently biased the development of fairness research? The mechanism for inducing fairness should work for any attribute, not just those that align with current societal norms, and must not be over-fit to the protected attributes used in research. We evaluate our approach with respect to every possible feature choice, to ensure that the mechanism of producing fairness is not over-fit to the data. 3 Fair Forests We propose a simple regularization approach to constructing a fair decision tree induction algorithm.
This is done by altering the way we measure the information gain $G(T, a)$, where $T$ is a set of training examples and $a$ is the attribute to split on. We will denote the sets of points in each of the $k$ branches of a split as $T_{1 \ldots k}$. The gain is normally combined with an impurity measure $I(T)$, giving us $G(T, a) = I(T) - \sum_{\forall T_i \in \mathrm{splits}(a)} \frac{|T_i|}{|T|} \cdot I(T_i)$ (1). The information gain scores the quality of a splitting attribute $a$ by how much it reduces impurity relative to the current impurity: the larger the gain, the more pure the class labels have become, which should improve classification performance. In the CART algorithm, the Gini impurity (2) is normally used for categorical targets: $I_{\mathrm{Gini}}(T) = 1 - \sum_{\forall T_i \in \mathrm{splits(label)}} \left( \frac{|T_i|}{|T|} \right)^2$ (2). This greedy approach to feature selection has proven effective for decades, helping to cement the place of tree-based algorithms as one of the most popular learning methods. However, it does not take into account any notion of fairness, which we desire to add. In this work we do so by altering the information gain scoring itself, leaving the whole of the tree induction process unaltered. We begin by noting that our approach requires two slight alterations. First, we will use the impurity score to measure both the class label and, additionally, the protected attribute under consideration. We denote these two cases as $I_l$ and $I_a$, and the gain with respect to the label and the protected attribute as $G_l$ and $G_a$, respectively. Second, we impose the constraint that the impurity measure must return a value normalized to the range $[0, 1]$. For the Gini measure this becomes $I^a_{\mathrm{Gini}}(T) = \left(1 - \sum_{\forall T_i \in \mathrm{splits}(a)} \left( \frac{|T_i|}{|T|} \right)^2\right) \Big/ \left(1 - |\mathrm{splits}(a)|^{-1}\right)$ (3). We require that the impurity score $I_a(\cdot)$ produce a normalized score so that we can compare scores on a similar scale, regardless of which features are selected. We then use this to define a new fair gain measure $G_{\mathrm{fair}}(T, a)$, which seeks to balance predictive accuracy against the fairness goal with respect to some protected attribute $a_f$: $G_{\mathrm{fair}}(T, b) = G_l(T, b) - G_{a_f}(T, b)$ (4). Intuitively, (4) will discourage the selection of any feature correlated with both the protected attribute and the target label. It remains possible for such a feature to be selected if no other feature is better suited. 3.1 Gain for Numeric Features To our knowledge, no work has yet explored making a continuous feature the protected attribute. We can handle this case naturally in our new fair induction framework. In CART, numeric target variables are handled by finding the binary split that minimizes the weighted variance of the resulting splits. We use this same notion to define a gain $G_r(T, a)$ that is used when either the target or the protected attribute is continuous. Because we are interested in fairness, we look at changes in the mean value of the splits compared to their parent: even if variances differ, the impact on fairness is minimal so long as the splits retain similar means. To produce a scaled value, we measure how many standard deviations each new split's mean lies from the parent's mean, and treat a shift of more than three standard deviations as the maximum violation. This gain is defined in (5), where $\sigma_{b,T_i}$ indicates the standard deviation of attribute $b$ over all datums in the set $T_i$, and $\mu_{b,T_i}$ has the same meaning but for the mean of the subset.
$G_r(T, b) = 1 - \frac{1}{3} \sum_{T_i \in \mathrm{split}} \frac{|T_i|}{|T|} \min\left( \frac{|\mu_{b,T} - \mu_{b,T_i}|}{\sigma_{b,T}}, 3 \right)$ (5) We emphasize that the standard deviation of the parent $T$ is used, not that of any sub-population $T_i$, because we want to measure drift with respect to the current status. Re-writing the continuous splitting criterion in this fashion also produces a score normalized to the range $[0, 1]$, so we can continue to use the $G_{\mathrm{fair}}(T, b)$ function with continuous attributes as either the label target or the protected attribute. This framework now gives us a means to induce decision trees, and thus build Random Forests, for all scenarios: classification and regression problems, with protected features either nominal or numeric. We emphasize that this approach to regularizing the information gain has no tunable parameters as given. This is in keeping with the general utility of decision trees, in that they often "just work." While adjusting hyperparameters such as maximum tree depth may improve classification accuracy, the results of a decision tree are often effective without any kind of parameter tuning. This is important for practical use and adoption: many fairness-based systems require an additional two to three hyperparameters to tune (Kamishima, Akaho, and Sakuma 2011; Bechavod and Ligett 2017; Zemel et al. 2013), on top of whatever hyperparameters come with the original model, which increases the computational requirements in practice, especially when used with a classic grid-search approach. 4 Methodology There is currently considerable discussion about what it means for a machine learning model to be fair, which metrics should be used, and whether or not they can be completely optimized (Skirpan and Gorelick 2017; García-Martín and Lavesson 2017; Hardt, Price, and Srebro 2016). We choose to use the same evaluation procedure laid out by Zemel et al. (2013). This makes our results comparable with a larger body of work, as their approach and metrics have been widely used throughout the literature (Landeiro and Culotta 2016; Bechavod and Ligett 2017; Dwork et al. 2017; Calders et al. 2013). We present both of their metrics, Discrimination and Inconsistency (Zemel et al. call the latter "consistency," but define it in a way that only makes sense for classification; we use Inconsistency = 1 - Consistency, a form applicable to both classification and regression tasks), in a manner compatible with both classification and regression problems, while also extending Discrimination to a broader set of scenarios. We will also discuss the datasets used, their variants tested, and the models we will evaluate. 4.1 Metrics The first metric we consider is the Discrimination of the model, measured as the difference between the average predicted scores for each value of the protected attribute: $\mathrm{Discrimination} = \left| \frac{\sum_{x_i \in T_{a_p}} \hat{y}_i}{|T_{a_p}|} - \frac{\sum_{x_i \in T_{\neg a_p}} \hat{y}_i}{|T_{\neg a_p}|} \right|$ (6) Discrimination measures a macro-level quality of fairness, and as such is sometimes termed "group fairness." However, the definition in (6) is limited to binary protected attributes. For this work, we will also consider a generalization of Discrimination to k-way categorical variables, re-formulating it to measure the sub-population differences from the global mean.
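Before moving to the k-way generalization, the binary measure in (6) is straightforward to transcribe. In the sketch below, `y_hat` holds the model's predicted scores and `in_protected` is a boolean mask marking membership in $T_{a_p}$; both names are hypothetical.

```python
import numpy as np

def discrimination(y_hat, in_protected):
    """Eq. (6): absolute gap between the mean predicted score of the
    protected group and that of its complement."""
    y_hat = np.asarray(y_hat, dtype=float)
    in_protected = np.asarray(in_protected, dtype=bool)
    return abs(y_hat[in_protected].mean() - y_hat[~in_protected].mean())
```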
The re-formulated measure, given in (7), is equivalent to the original definition when $k = 2$ (see the Appendix for a proof of equivalence): $\mathrm{Discrimination} = \frac{2}{k} \sum_{i=1}^{k} \left| \frac{\sum_{x_j \in T} \hat{y}_j}{|T|} - \frac{\sum_{x_j \in T_i} \hat{y}_j}{|T_i|} \right|$ (7) We will also consider discrimination with respect to a continuous variable. With $a_p$ denoting a protected continuous attribute, let $x_i(a_p)$ be the value of feature $a_p$ for datum $x_i$. We then define our new Maximum Discrimination (MaxD) metric as the largest discrimination score achieved for some binary split of $a_p$ by a threshold $t$. This is given in equation (8), and provides a concise definition extending Discrimination to regression tasks. When a continuous attribute is manually discretized into a binary problem, as is done in prior work, we obtain by definition that $\mathrm{MaxD} \geq \mathrm{Discrimination}$. $\mathrm{MaxD} = \arg\max_t \big| \sum_{x_i(a_p)$