{ "url": "http://arxiv.org/abs/2404.16325v1", "title": "Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models", "abstract": "Despite the remarkable success of deep learning in medical imaging analysis,\nmedical image segmentation remains challenging due to the scarcity of\nhigh-quality labeled images for supervision. Further, the significant domain\ngap between natural and medical images in general and ultrasound images in\nparticular hinders fine-tuning models trained on natural images to the task at\nhand. In this work, we address the performance degradation of segmentation\nmodels in low-data regimes and propose a prompt-less segmentation method\nharnessing the ability of segmentation foundation models to segment abstract\nshapes. We do that via our novel prompt point generation algorithm which uses\ncoarse semantic segmentation masks as input and a zero-shot prompt-able\nfoundation model as an optimization target. We demonstrate our method on a\nsegmentation findings task (pathologic anomalies) in ultrasound images. Our\nmethod's advantages are brought to light in varying degrees of low-data regime\nexperiments on a small-scale musculoskeletal ultrasound images dataset,\nyielding a larger performance gain as the training set size decreases.", "authors": "Hedda Cohen Indelman, Elay Dahan, Angeles M. Perez-Agosto, Carmit Shiran, Doron Shaked, Nati Daniel", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI" ], "label": "Original Paper", "paper_cat": "Semantic AND Segmentation AND Image", "gt": "Ultrasound is a popular medical imaging modality used to image a large variety of organs and tissues. Ultrasound is often the preferred choice due to its non-radiative and non-invasive nature, relatively easy and fast imaging procedure, and lower costs. Automating the diagnosis or highlighting relevant areas in the image will contribute to faster workflows and potentially more consistent and accurate diagnoses. Artificial Intelligence (AI) has demonstrated remarkable success in automatic medical imaging analysis. Compared to classical methods, previous work based on convolutional neural networks on various medical imaging tasks, such as classification and segmentation, have shown state-of-the-art results [1, 2, 3, 4]. However, effective deep learning segmentation algorithms for medical images is an especially challenging task due to the scarcity of high-quality labeled images for supervision. Moreover, in medical imaging it is often the case that identification of findings regions, namely regions of potentially pathological visual anomalies, having neither a clear boundary nor a typical geometry or position is much more challenging than the identification of an anatomy in its context. Findings are also typically rare, which brings to light the challenge of training such models in limited data regimes. \u2217Corresponding author, e-mail: nati.daniel@gehealthcare.com. \u2020These authors have contributed equally to this work. 1Dept. of AI/ML Research, GE Healthcare, Haifa, Israel. 2Dept. of Clinical Applications, Point of Care Ultrasound & Handheld, Texas, USA. 3Dept. of Clinical Applications, Point of Care Ultrasound & Handheld, Wisconsin, USA. arXiv:2404.16325v1 [cs.CV] 25 Apr 2024 Semantic Segmentation Refiner for U/S Applications with Zero-Shot FMs COHEN H ET AL. Figure 1: A high-level illustration of our semantic segmentation refinement method with zero-shot foundation models. 
A pre-trained segmentation model predicts a semantic segmentation for each class of an input image. In this example, the classes comprise anatomies and pathologies in an ultrasound image, and the coarse segmentor output depicts the predicted semantic segmentation of a pathology. A prompt selection model selects positive and negative points. Consequently, a zero-shot semantic segmentation mask of the pathology is predicted by a foundation segmentation model, prompted by the selected points for the input image. Positive prompt points are depicted in red, and negative prompt points are depicted in blue. The pathology semantic segmentation prediction is highlighted in red. For illustration purposes, the muscle is highlighted in purple, the tendon in yellow, and the bone in green. The freeze symbol indicates that gradients are prevented from propagating to the model weights. Recently, new segmentation models have emerged. Trained on data at huge scales, these foundation models aim to be generic rather than tailored to specific datasets. The Segment Anything Model (SAM) [5] is a foundational model demonstrating zero-shot generalization in segmenting natural images using a prompt-driven approach. The SonoSAM [6] foundational model adapts SAM to ultrasound images by fine-tuning the prompt and mask decoder. Although fine-tuning methods often improve the results on target datasets [7], they essentially downgrade the generalization capabilities of the foundation model. Further, a significant domain gap between natural and medical images, and ultrasound images in particular [8], hinders fine-tuning models trained on natural images to the task at hand [7]. In this work, we address the performance degradation of segmentation models in low-data regimes and derive a novel method for harnessing segmentation foundation models' ability to segment arbitrary regions. Our semantic segmentation refinement method comprises two stages: first, a coarse segmentation is predicted by a model trained on a small subset of the training data. In the second stage, our novel algorithm generates prompt points from the coarse pathology segmentation and uses them to prompt a segmentation foundation model. Positive prompt points are selected using a partition-around-medoids method as the most representative pathology points. Negative prompt points are selected by a prompt selection optimization algorithm that identifies the context anatomy. Importantly, we do not fine-tune the foundation model to our dataset, i.e., it produces a zero-shot segmentation. The end-to-end pipeline is illustrated in Fig. 1. The method's advantages are brought to light in varying degrees of low-data regime experiments on a small-scale images dataset, yielding a larger performance gain compared to a state-of-the-art segmentation model [9] as the training set size decreases. Further, ablation studies validate the effectiveness of our semantic segmentation refinement model. Our approach applies to other ultrasound-based medical diagnostics tasks. The paper is organized as follows: Section 2 presents the semantic segmentation task and leading approaches. Our method is presented in Section 3, and the experimental setup is presented in Section 4.
Section 5 presents the results and ablation studies on a discontinuity in tendon fiber (DITF) pathology finding task in a musculoskeletal ultrasound (MSK) dataset, and the conclusions are presented in Section 6.", "main_content": "2.1 Semantic Segmentation Models Semantic segmentation aims to assign a label or a class to each pixel in an image. Unlike image classification, which assigns a single label to the entire image, semantic segmentation provides a more detailed understanding of the visual scene by segmenting it into distinct regions corresponding to objects or classes. This is an essential technique for applications such as autonomous vehicles, medical image analysis, and scene understanding in robotics. Like other computer vision tasks, deep learning has demonstrated state-of-the-art results in the semantic segmentation of medical images. The semantic segmentation problem can be formulated as follows: given an image $I \in \mathbb{R}^{C \times H \times W}$, our goal is to train a deep neural network to predict the pixel-wise probability map $S \in \mathbb{R}^{N \times H \times W}$ over the classes in the dataset, where $N$ is the number of classes. DeepLabV3 [9] represents a distinctive approach in semantic image segmentation. Utilizing dilated convolutions, the model strategically enlarges the receptive field and manages the balance between global and local features through the atrous (dilation) rates. Notably, the spatial pyramid pooling module proposed by the authors aggregates features from dilated convolutions at various scales, enhancing contextual information. Distinct from encoder-decoder architectures such as the U-Net [10], it is built upon a robust pre-trained encoder, contributing to its success in generating accurate and detailed segmentation masks across diverse applications. Since DeepLabV3 remains a staple choice for a performant semantic segmentation model, we adopt it as our method's coarse segmentor. 2.2 Semantic Segmentation Foundation Models Foundation models are trained on broad data at a huge scale and are adaptable to a wide range of downstream tasks [11, 12, 13]. The Segment Anything Model (SAM) [5] emerged as a versatile foundation model for natural image segmentation. Trained on a dataset of over 11 million images and 1B masks, it demonstrates impressive zero-shot generalization in segmenting natural images using an interactive and prompt-driven approach. Prompt types include foreground/background points, bounding boxes, masks, and text prompts. However, SAM achieves subpar generalization on medical images due to substantial domain gaps between natural and medical images [14, 15, 16, 17, 18]. Moreover, SAM obtains the poorest results on ultrasound compared to other medical imaging modalities [15]. These results are attributed to ultrasound characteristics, e.g., the scan cone, poor image quality, and unique speckled texture. A common methodology to overcome this generalization difficulty is to fine-tune a foundation model on a target dataset [19]. An efficient fine-tuning strategy is Low-Rank Adaptation (LoRA) [20], which has been adopted in fine-tuning SAM to relatively small medical imaging datasets [21, 22, 23]. SonoSAM [6] demonstrates state-of-the-art generalization in segmenting ultrasound images. Fine-tuned on a rich and diverse set of ultrasound image-mask pairs, it has emerged as a prompt-able foundational model for ultrasound image segmentation.
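To make the prompt-driven interface concrete, below is a minimal sketch of foreground/background point prompting with the publicly released segment-anything package. The checkpoint path, image file, and point coordinates are illustrative placeholders, and SonoSAM is assumed to expose a similar point-prompt interface; this is not the paper's code.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone and wrap it with the point-prompt predictor.
# The ViT variant and checkpoint path are placeholders.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

# Ultrasound frame converted to 3-channel uint8, since SAM expects an RGB image.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
image = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)
predictor.set_image(image)

# Point prompts: label 1 marks foreground (e.g., pathology), label 0 marks background (context anatomy).
point_coords = np.array([[256, 200], [260, 230], [100, 400]])  # (x, y) pixels, illustrative only
point_labels = np.array([1, 1, 0])

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,
)
refined_mask = masks[0]  # boolean H x W zero-shot mask for the prompted region
```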
Notably, adapting prompt-based models to medical image segmentation is difficult due to the conundrum of crafting high-quality prompts [15]. Manually selecting prompts is time-consuming and requires domain expertise. Methods that extract prompts from ground-truth masks [23] cannot be applied during inference, as they rely on full supervision. Auto-prompting techniques rely on the strong semantic representation capabilities of the Vision Transformer (ViT-H) image encoder [24] and suggest generating a segmentation prompt based on SAM's image encoder embedding [18, 25]. Other strategies suggest replacing the mask decoder with a prediction head that requires no prompts [16]. Nevertheless, SAM's zero-shot prediction accuracy is typically lower than that of segmentation models trained with fully supervised methods [26]. Motivated by the generalization abilities of segmentation foundation models, we devise a point selection algorithm operating on coarse segmentation masks that allows harnessing prompt-based models for ultrasound segmentation in a zero-shot setting. 3 Method In this section, we present our method for refining a coarse pathology segmentation mask with zero-shot foundation models. The method can be adapted to natural images as well as to the medical imaging domain. Herein, we validate it on a specific challenging task: segmenting the discontinuity in tendon fiber finding (Sec. 4.1), which is the main ultrasound finding of a tendon partial-tear pathology. Our key intuition is that although the performance of segmentation models decreases significantly in low-data regimes, even such coarse segmentation masks can be utilized for extracting high-quality prompts that harness segmentation foundation models' capabilities. Importantly, we use the publicly available pre-trained foundation models without further modification. The flexibility of our method allows for incorporating either SonoSAM or SAM. Though the above-mentioned foundation models allow several types of prompts, we focus on foreground (positive) and background (negative) prompt points. Our method makes use of the ground-truth tendon segmentation, denoted $T^{gt}$. Since the tendon in the context of the DITF pathology is usually easy to segment, due to its typical geometry and position and relatively simple data acquisition and labeling, we assume that strong segmentation models exist for this task and that their output can be used in lieu of the ground-truth segmentation. With that, we introduce our two-stage method, summarized in Algorithm 1. First, a segmentation model [9] is trained on a random subset of the training data. A coarse semantic segmentation is then predicted for a given test image. Then, k positive and k negative prompt points are selected to prompt a segmentation foundation model. We next describe our prompt points selection algorithm in greater detail.
Algorithm 1 The Semantic Segmentation Refiner Method. Input: an input image $I$; the ground-truth tendon mask $T^{gt}$; a frozen SonoSAM model; a pre-trained segmentation model $S$. Output: a refined pathology segmentation mask $O$.
1: Coarse segmentation mask $\tilde{O} \leftarrow S(I)$
2: Positive points selection $pts^{pos} \leftarrow \text{k-medoids}(\tilde{O})$
3: Modified ground-truth tendon mask $\tilde{T}^{gt} \leftarrow T^{gt} \setminus \tilde{O}$
4: Initialize the complementary problem:
5: $\overline{pts}^{neg} \leftarrow pts^{pos}$, $\overline{pts}^{pos} \leftarrow$ random points from $\tilde{T}^{gt}$
6: for $t = 1, \dots, T$ do
7: Optimize $\overline{pts}^{pos}$ as parameters of
8: $\ell_{ce}(\overline{pts}, \tilde{T}^{gt}) = -\tilde{T}^{gt} \log(\mathrm{SonoSAM}(I, \overline{pts}))$
9: Update $\overline{pts}^{pos}$ with a gradient step
10: end for
11: Flip: $pts^{neg} \leftarrow \overline{pts}^{pos}$
12: Output $O \leftarrow \mathrm{SonoSAM}(I, pts)$
3.1 Positive Points Selection We aim to select the points that are most representative of the coarse pathology segmentation mask as the positive prompt points. This selection objective translates to the partitioning-around-medoids approach. This approach is preferable to a selection based on minimizing the sum of squared distances (i.e., k-means) in the case of multiple pathology blobs, since the latter might select centroids in between pathology blobs. Thus, k mass centers of the coarse pathology segmentation mask are selected as positive points using the k-medoids clustering algorithm [27]. To reduce the probability of selecting false-positive points, a threshold is applied to the coarse pathology segmentation mask before selection. We denote the selected positive points as $pts^{pos} = \{pts^{pos}_i\}_{i=1}^{k}$. This process is illustrated in Fig. 2. Figure 2: An illustration of our positive (foreground) points selection module, depicted in red. A threshold is applied to the coarse segmentation prediction. A k-medoids clustering algorithm is applied to select k positive pathology points.
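The positive point selection can be sketched as follows. This is a minimal illustration that assumes the KMedoids implementation from scikit-learn-extra; the concrete threshold values and the k range are taken from the settings reported in Sec. 4.4, and the double-threshold reading in the comments is one plausible interpretation rather than the authors' exact code.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids

def select_positive_points(coarse_prob: np.ndarray, t_min: float = 0.15,
                           k_range=range(4, 7)) -> np.ndarray:
    """Pick k medoid pixels of the thresholded coarse pathology probability map.

    coarse_prob: H x W probability map of the pathology (foreground) channel.
    Returns an array of (x, y) prompt coordinates, or an empty array if no pixel survives.
    """
    # Double thresholding (one plausible reading of Eqs. (2)-(3) in Sec. 4.4):
    # drop globally uncertain pixels, then keep pixels above 40% of the surviving maximum.
    masked = np.where(coarse_prob > t_min, coarse_prob, 0.0)
    if masked.max() <= 0:
        return np.empty((0, 2), dtype=int)
    binary = masked > 0.4 * masked.max()

    ys, xs = np.nonzero(binary)
    coords = np.stack([xs, ys], axis=1)

    # Choose k in the allowed range by minimal inertia, with k-medoids++ initialization.
    best = None
    for k in k_range:
        if k > len(coords):
            break
        km = KMedoids(n_clusters=k, init="k-medoids++", random_state=0).fit(coords)
        if best is None or km.inertia_ < best.inertia_:
            best = km
    if best is None:  # fewer foreground pixels than the smallest allowed k
        return coords
    return best.cluster_centers_.astype(int)  # k medoid pixels, usable as positive prompts
```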
3.2 Negative Points Refinement We take inspiration from the hard negative selection literature [28, 29, 30] and aim to select the most informative negative points w.r.t. the foreground object. To that end, we formulate a complementary prompt points selection problem w.r.t. the background, given the k selected foreground points (Sec. 3.1), $\overline{pts} = \{\overline{pts}^{pos}, \overline{pts}^{neg}\}$. When the foreground is the pathology, the background is the context anatomy; herein, the background is the tendon anatomy. The complementary prompt points selection is optimized to decrease the binary cross-entropy (BCE) loss between the foundation model's zero-shot tendon segmentation mask prompted on these points and a modified ground-truth tendon mask, denoted $\tilde{T}^{gt}$. To avoid predicting tendon points within the foreground pathology, the values of the ground-truth tendon mask overlapping with the coarse pathology detection are set to zero. As the points initialization for this complementary problem, we flip the labels of $pts^{pos}$ such that they correspond to negative points, $\overline{pts}^{neg} \leftarrow pts^{pos}$. Further, k points are selected at random from $\tilde{T}^{gt}$, denoted $\overline{pts}^{pos}$. While freezing the foundation model, the point prompt optimization is performed for a maximum of 100 steps or until convergence. The optimization is performed such that the selected points are optimal w.r.t. the complementary problem of the tendon segmentation given the foreground pathology predicted by the coarse segmentor. Denote an input image as $I$, and SonoSAM's zero-shot tendon segmentation given input $I$ and its corresponding optimized prompt points $\overline{pts}$ as $\mathrm{SonoSAM}(I, \overline{pts})$. Then, the BCE loss of the complementary problem is: $\ell_{ce}(\overline{pts}, \tilde{T}^{gt}) = -\tilde{T}^{gt} \log(\mathrm{SonoSAM}(I, \overline{pts}))$. (1) We used the AdamW [31] optimizer, with a learning rate of 4e-3 and standard betas, to optimize the positive points $\overline{pts}^{pos}$. The optimized positive tendon points selected by this model serve as k negative prompt points, $pts^{neg} \leftarrow \overline{pts}^{pos}$, towards the foreground pathology segmentation. This process is illustrated in Fig. 3. Figure 3: An illustration of our negative (background) points selection module. In addition to the positive points selected in Sec. 3.1, negative points are selected randomly from the modified ground-truth tendon mask. The points are flipped to initialize the complementary tendon segmentation problem. Our points optimization model optimizes the prompt points selection w.r.t. the complementary tendon zero-shot segmentation problem (Sec. 3.2). Finally, the prompt points are flipped again to yield positive and negative prompt points for the pathology segmentation.
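A minimal sketch of this refinement loop is given below. SonoSAM's programming interface is not specified in the paper, so the model argument is an assumed differentiable callable mapping an image and labeled prompt points to mask probabilities; the loss follows Eq. (1), and the optimizer settings (AdamW, learning rate 4e-3, up to 100 steps) follow the text. This is an illustrative sketch, not the authors' implementation.

```python
import torch

def refine_negative_points(model, image: torch.Tensor, pos_init: torch.Tensor,
                           neg_init: torch.Tensor, tendon_mask: torch.Tensor,
                           steps: int = 100, lr: float = 4e-3) -> torch.Tensor:
    """Optimize the complementary (tendon) positive points of Sec. 3.2.

    model: assumed callable model(image, points, labels) -> H x W mask probabilities,
           differentiable w.r.t. the point coordinates (the model itself stays frozen).
    pos_init: k x 2 random points drawn from the modified tendon mask (trainable).
    neg_init: k x 2 positive pathology points from Sec. 3.1, flipped to negatives (fixed).
    tendon_mask: H x W modified ground-truth tendon mask, zeroed inside the coarse pathology.
    Returns the optimized tendon points, to be flipped into negative pathology prompts.
    """
    pos = pos_init.clone().float().requires_grad_(True)   # parameters of the optimization
    neg = neg_init.float()
    labels = torch.cat([torch.ones(len(pos_init)), torch.zeros(len(neg_init))])
    opt = torch.optim.AdamW([pos], lr=lr)  # standard betas, as in the paper

    for _ in range(steps):
        opt.zero_grad()
        points = torch.cat([pos, neg], dim=0)
        pred = model(image, points, labels).clamp_min(1e-6)  # zero-shot tendon mask
        loss = -(tendon_mask * torch.log(pred)).mean()       # Eq. (1)
        loss.backward()
        opt.step()

    return pos.detach()  # flip: these become negative prompts for the pathology
```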
4 Experiments 4.1 Dataset The data used for this study is ultrasound images of tendons around the shoulder joint. Specifically, we acquired images of the supraspinatus, infraspinatus, and subscapularis tendons. The images are acquired from both the short-axis and the long-axis views. The main parameters of our data are summarized in Table 1. In this work, we aim to segment the partial-tear pathology within the tendon, thus our data consists of images paired with the corresponding segmentation masks of anatomies and pathologies. Our data includes semantic labeling of the following classes: DITF, bone, tendon, and muscle. Table 2 summarizes the semantic labeling statistics. In total, our dataset includes 388 images from 124 subjects, 80% of which are used for training, and the remaining 20% are used for validation. The test set comprises 40 images. To prevent data leakage, the test set images are collected from subjects that do not appear in the train data. All images are resized to a constant resolution of 512x512 pixels. All data comply with the Institutional Review Board (IRB) data sharing agreement.
Table 1: Summary of MSK pathology segmentation dataset main parameters.
Total frames: 388
Original frame size: 1536 x 796 or 1044 x 646 pixels
Subjects: 90 (52.82% males, 47.18% females)
Average BMI: 24.69 ± 8.92
Vendor: GE Healthcare™
Ultrasound system: Logiq S8™, Eagle™, LogiqE10™
Data collection: Linear
Collection sites: USA, Israel
4.2 Evaluation Metric We use the Dice similarity coefficient [32] evaluation metric, commonly used in medical image segmentation research to measure the overlap between prediction and ground-truth masks. The Dice similarity coefficient is defined as $\frac{2|A \cap B|}{|A| + |B|}$, where $A$ and $B$ are the pixels of the prediction and the ground truth, respectively.
4.3 A Segmentation Model In Low-Data Regimes In this experiment, we investigate the performance and properties of a state-of-the-art semantic segmentation model with a limited training set size of MSK ultrasound images. Our goal is two-fold: (i) to validate our conjecture that high-quality prompts can be extracted even from a coarse semantic segmentation prediction, and (ii) to measure the performance degradation in increasingly low-data regimes. These properties are the basis of our two-stage method for exploiting the advantages of a prompt-able foundation segmentation model. Concretely, for an input image $I \in \mathbb{R}^{512 \times 512}$ the segmentation model prediction $S \in \mathbb{R}^{7 \times 512 \times 512}$ corresponds to a semantic segmentation for each class as detailed in Table 2.
Table 2: Semantic labeling statistics at the 512x512 patch level. M: million. Columns: class, type, number of images (% of total), total area (pixels), mean fraction of total patch area.
Discontinuity in tendon fiber | Pathology | 179 (46.13%) | 1.11M | 1.09%
Bone | Anatomy | 288 (74.22%) | 2.75M | 2.7%
Tendon | Anatomy | 388 (100%) | 10.64M | 10.46%
Muscle | Anatomy | 388 (100%) | 28.13M | 27.65%
Figure 4: Positive pathology points retainment in increasingly coarse segmentation mask predictions, and our method's results. Panels (a)-(e) correspond to training on 100%, 35%, 15%, 8%, and 5% of the train set. Top row: pathology segmentation mask predicted with a DeepLabV3 model trained on a varying percent of the training set. Middle row: positive points selected on the binary pathology mask by our positive points selection module. Bottom row: an illustration of our method's pathology segmentation output, highlighted in red, compared to the ground-truth segmentation, highlighted in green. The tendon area is shown in the bottom-left image for reference. For this test image, our method achieves a Dice similarity coefficient of 0.89, 0.71, 0.73, 0.72, and 0.50 when the coarse segmentor is trained on 100%, 35%, 15%, 8%, and 5% of the train set, respectively.
4.4 Segmentation Refinement With Zero-Shot Foundation Models Positive Points Selection A combination of a constant and an adaptive threshold is applied to the coarse segmentation prediction prior to positive point selection. Denote by $c_0$ the coarse segmentation mask prediction at the foreground channel (DITF in our case). We apply a double thresholding mechanism to disregard the noise in the prediction: $\tilde{c} = c_0 > t_{min}$ (2) and $c = \tilde{c} > 0.4 \cdot \max(\tilde{c})$ (3). The initial threshold screens predictions that lack sufficient global (cross-class) certainty, where the minimum threshold is set to $t_{min} = 0.15$. The second thresholding term adaptively screens all predictions that lack sufficient local (class-wise) certainty. Further, we use the k-medoids++ medoid initialization method [33], which selects more separated initial medoids than those selected by other methods. The hyper-parameter k is adaptively set such that the sum of distances of samples to their closest cluster center (the inertia) is minimized, $k \in [4, 6]$. Negative Points Refinement We deploy the SonoSAM semantic segmentation foundation model in our experiments since it is expected to generalize better to zero-shot segmentation of ultrasound images than SAM. Due to the randomness in the initialization of the complementary positive points $\overline{pts}^{pos}$ selection problem, evaluation is performed over 10 random initializations.
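For reference, the Dice similarity coefficient of Sec. 4.2, which also appears as part of the training loss in Sec. 4.5 below, can be computed in a few lines; this is a minimal sketch for binary masks, not tied to the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for binary H x W masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# e.g. dice_coefficient(refined_mask, ground_truth_mask)
```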
4.5 Training Procedure Our coarse segmentor is DeepLabV3 [9], a state-of-the-art convolutional approach for handling objects of varying scales in images, with a ResNet-50 backbone [34]. As our complete dataset consists of only 275 training images, the model is pre-trained on the ImageNet dataset [35]. To evaluate our method across different data regimes, we trained our coarse segmentor on a varying percentage $n$ of the training data, $n \in \{5, 8, 12, 20, 35, 60, 100\}$, sub-sampled at random. The model is trained with an equally weighted BCE loss and a Dice similarity coefficient loss between the predicted and ground-truth segmentation for each class. Each such experiment is trained for 100 epochs, and the weights achieving the best validation loss are selected for testing. We used the AdamW [31] optimizer with no learning rate scheduler, parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$, and a learning rate of 4e-3. The test set remains constant across the different training experiments. The model training and evaluation code is implemented with the PyTorch [36] framework.
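The training objective just described, an equally weighted BCE and Dice loss over the per-class probability maps, can be sketched as follows; this is a minimal PyTorch illustration in which the tensor shapes and the exact reductions are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def coarse_segmentor_loss(logits: torch.Tensor, target: torch.Tensor,
                          eps: float = 1e-7) -> torch.Tensor:
    """Equally weighted BCE + (1 - Dice) loss over per-class maps.

    logits: B x N x H x W raw class scores from the coarse segmentor.
    target: B x N x H x W binary ground-truth masks, one channel per class.
    """
    probs = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target.float())

    dims = (0, 2, 3)  # reduce over batch and spatial dimensions, per class
    intersection = (probs * target).sum(dims)
    dice = (2.0 * intersection + eps) / (probs.sum(dims) + target.sum(dims) + eps)
    dice_loss = 1.0 - dice.mean()

    return 0.5 * bce + 0.5 * dice_loss  # equal weighting, as stated in Sec. 4.5
```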
5 Results 5.1 Semantic Segmentation Model In Low-Data Regimes The results of this experiment validate our conjecture that positive pathology points are consistently selected in increasingly coarse segmentation mask predictions. As the segmentation model is trained on an increasingly smaller training set, the segmentation mask prediction becomes coarser: the pathology segmentation boundaries become less defined and its prediction probability decreases (Fig. 4, top row). Nevertheless, the positive pathology points selected by our method remain generally consistent (Fig. 4, middle row). Consistent with these results, we find that the average Dice similarity coefficient of the segmentation model decreases rapidly when the model is trained on increasingly smaller training set sizes (Fig. 5, 'Segmentation Model'). These results validate our method's motivation and approach. 5.2 Semantic Segmentation Refinement With Zero-Shot Foundation Model Fig. 5 summarizes the results of our method in comparison with those of the baseline segmentation model at various training set sizes. Our method's average Dice is higher than the baseline's for every training set size. Moreover, our method's performance gain grows as the training set size decreases (~10% average Dice increase at the 5% and 8% training set sizes), substantiating the advantage of our method in low-data regimes. Our method's pathology segmentation output at varying training set sizes, compared to the ground-truth segmentation, is illustrated in Fig. 4, bottom row. Figure 5: A summary of the average DITF Dice similarity coefficient of the methods at training set sizes ranging from 100% down to 5%. Depicted are the results of the baseline segmentation model [9] and our segmentation refinement with the zero-shot SonoSAM foundation model. Error bars depict the standard deviation of our method's statistics. To analyze the stochasticity effect of our method's negative points random initialization (Sec. 3.2), we compare our method's DITF Dice score statistics over ten random initializations against the baseline segmentation model's average DITF Dice similarity coefficient. Results show that our method's performance is robust, exhibiting a relatively low standard deviation at all train set sizes (Fig. 5). Additionally, our method's mean DITF Dice surpasses that of the baseline in all but one train set size, and is higher by 4% on average than the baseline. 5.3 Ablation Studies In this section, we present ablation studies substantiating the effectiveness of our negative prompt points refinement (NPPR) model, as well as examining our method's performance when replacing the SonoSAM foundation model with SAM. 5.3.1 SAM vs. SonoSAM as a segmentation foundation model In this study, we investigate the impact of replacing SonoSAM with SAM as the zero-shot semantic segmentation foundation model in our method. Table 3 shows that harnessing SonoSAM's generalizability for MSK ultrasound images is preferable to SAM in low-data regimes and on par with SAM otherwise. 5.3.2 Random negative prompt points selection In this experiment, we investigate the effectiveness of our negative prompt points refinement model by comparing it to a random negative prompt points selection algorithm. Concretely, k negative prompt points are randomly selected from the modified ground-truth tendon mask, $\tilde{T}^{gt}$. Our positive points selection approach remains unchanged. Results in Table 3 demonstrate that this naive selection algorithm achieves subpar average Dice scores across almost all train set sizes, especially in low-data regimes. These results establish the advantage of our negative points optimization algorithm.
Table 3: Ablation studies: quantitative segmentation test results of the mean DITF Dice similarity coefficient (DSC) for different approaches over 10 run cycles. Our method uses the zero-shot SonoSAM [6] foundation model. A higher DSC is better, with the best scores marked in bold. NPPR: Negative Prompt Points Refinement.
Percent of the training set: 100% / 60% / 35% / 20% / 15% / 12% / 8% / 5%
Ours without NPPR: 44.6% / 40.0% / 34.2% / 27.8% / 30.3% / 27.5% / 20.7% / 16.6%
Ours with SAM: 45.5% / 41.6% / 39.7% / 29.3% / 32.9% / 28.3% / 27.6% / 23.0%
Ours: 46.3% / 39.3% / 39.6% / 31.9% / 32.8% / 31.8% / 32.0% / 24.6%
6 Conclusions In this paper, we address the performance degradation of a state-of-the-art semantic segmentation model in low-data regimes. A novel prompt points selection algorithm, optimized against a zero-shot segmentation foundation model, was presented as a means of refining a coarse pathology segmentation. Our method's advantages are brought to light in varying degrees of low-data regime experiments, demonstrating a larger performance gain compared to the baseline segmentation model as the training set size decreases (Fig. 5). Further, we validate our method's robustness to negative point initialization stochasticity and study the effectiveness of our prompt points refinement model (Section 5.3.2). Results demonstrate that the generalization of SonoSAM in extremely low data regimes is better than SAM's (Section 5.3.1). Our approach can be used for other ultrasound-based medical diagnostics tasks.
An inherent limitation of our two-stage method is that its latency is higher than that of the core segmentation model alone.", "additional_graph_info": { "graph": [ [ "Hedda Cohen Indelman", "Nati Daniel" ], [ "Nati Daniel", "Eliel Aknin" ], [ "Nati Daniel", "Ariel Larey" ] ], "node_feat": { "Hedda Cohen Indelman": [ { "url": "http://arxiv.org/abs/2404.16325v1", "title": "Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models", "authors": "Hedda Cohen Indelman, Elay Dahan, Angeles M. Perez-Agosto, Carmit Shiran, Doron Shaked, Nati Daniel", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI" ] },
For illustration purposes, the muscle is highlighted in purple, the tendon in yellow, and the bone in green. The freeze symbol indicates preventing gradients from being propagated to the model weights. Recently, new segmentation models have emerged. Trained on data at huge scales, these foundation models aim to be more generic rather than tailored to specific datasets. The Segment Anything Model (SAM) [5] is a foundational model demonstrating zero-shot generalization in segmenting natural images using a prompt-driven approach. The SonoSAM [6] foundational model adapts SAM to ultrasound images by fine-tuning the prompt and mask decoder [6]. Although fine-tuning methods often improve the results on target datasets [7] they essentially downgrade the generalization capabilities of the foundation model. Further, a significant domain gap between natural and medical images, ultrasound images in particular[8], hinders fine-tuning models trained on natural images to the task at hand [7]. In this work, we address the performance degradation of segmentation models in low-data regimes and derive a novel method for harnessing segmentation foundation models\u2019 ability to segment arbitrary regions. Our semantic segmentation refinement method comprises two stages: First, a coarse segmentation is predicted by a model trained on a small subset of the training data. In the second stage, our novel points generation from a coarse pathology segmentation algorithm is used to prompt a segmentation foundation model. Positive prompt points are selected using a partition around medoids method as the most representative pathology points. Negative prompt points are selected by a prompt selection optimization algorithm that identify the context anatomy. Importantly, we do not fine-tune the foundation model to our dataset, i.e., it produces a zero-shot segmentation. The end-to-end pipeline is illustrated in Fig. 1. The method\u2019s advantages are brought to light on varying degrees of low-data regimes experiments on a small-scale images dataset, yielding a larger performance gain compared to a state-of-the-art segmentation model [9] as the training set size decreases. Further, ablation studies validate the effectiveness of our semantic segmentation refinement model. Our approach applies to other ultrasound-based medical diagnostics tasks. The paper is organized as follows: Section 2 presents the semantic segmentation task and leading approaches. Our method is presented in Section 3, and the experimental setup is presented in Section 4. Section 5 presents the results and ablation studies on a discontinuity in tendon fiber (DITF) pathology finding task in a musculoskeletal ultrasound (MSK) dataset, and the conclusions are presented in Section 6." }, { "url": "http://arxiv.org/abs/2007.05724v2", "title": "Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization", "abstract": "Direct loss minimization is a popular approach for learning predictors over\nstructured label spaces. This approach is computationally appealing as it\nreplaces integration with optimization and allows to propagate gradients in a\ndeep net using loss-perturbed prediction. Recently, this technique was extended\nto generative models, while introducing a randomized predictor that samples a\nstructure from a randomly perturbed score function. 
In this work, we learn the\nvariance of these randomized structured predictors and show that it balances\nbetter between the learned score function and the randomized noise in\nstructured prediction. We demonstrate empirically the effectiveness of learning\nthe balance between the signal and the random noise in structured discrete\nspaces.", "authors": "Hedda Cohen Indelman, Tamir Hazan", "published": "2020-07-11", "updated": "2021-06-14", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG" ], "main_content": "Effective structured learning and inference over discrete, combinatorial models is challenging and has been addressed by different approaches. Direct loss minimization is an effective approach in discriminative learning that was devised to optimize non-convex and non-smooth loss functions for linear structured predictors (Hazan et al., 2010). Later it was extended to non-linear models, including hidden Markov models and deep learners (Keshet et al., 2011; Song et al., 2016). Our work extends direct loss minimization by adding random noise to its structured predictor and learning its variance. Recently, the idea of optimization that replaces sampling was extended to generative learning and reinforcement learning (Lorberbom et al., 2018; 2019). Similar to our work, these works also add random Gumbel perturbation and learn the mean of their structured predictor. In contrast, our work also learns the variance of the predictor, and our experimental validation shows it contributes to the performance of the predictor. Also, our theoretical contribution sets the framework to handle any structured predictor. Closely related is a method of differentiating through marginal inference (Domke, 2010), which shows that the gradient of the loss with respect to the parameters can be computed based on inference over the original parameters , and one over the parameters pertubed in the direction of the loss derivative w.r.t. to the marginals. Another line of work considers continuous relaxations of the discrete structures. Paulus et al. (2020) have suggested a unified framework for constructing structured relaxations of combinatorial distributions, and have demonstrated it as a generalization of the Gumbel-Softmax trick. Their method builds upon differentiating through a convex program and induces solutions found in the interior of the polytope rather than on its faces, as a function of temperature-controlled approximation. An efficient extension for sorting and ranking differential operators has been suggested lately (Blondel et al., 2020). SparseMAP (Niculae et al., 2018) is a sparse structured inference framework which offers a continuous relaxation. It finds sparse MAP solutions on the faces of the marginal polytope. Recently, Berthet et al. (2020) suggested stochastic smoothing to allow differentiation through perturbed maximizers. In contrast, we do not use convex smoothing techniques of the structured label for differentiation. Blackbox optimization (Pogancic et al., 2020) is a new scheme to differentiate through argmax, which allows backward pass through blackbox implementations of combinatorial solvers with linear objective functions. Our work considers two popular structured prediction problems: bipartite matching and k-nearest neighbors. Learning matchings in bipartite graphs has been extensively researched. 
When the bipartite graph is balanced, a matching can be represented by a permutation, which is an extreme point of the Birkhoff polytope, i.e., the set of all doubly stochastic matrices. Many works have built upon Sinkhorn normalization, an algorithm that maps a square matrix to a doubly-stochastic matrix. The Sinkhorn normalization has been incorporated in end-to-end learning algorithms in order to obtain relaxed gradients for learning to rank (Adams and Zemel, 2011), bipartite matching (Mena et al., 2018), visual permutation learning (Santa Cruz et al., 2019), and latent permutation inference (Linderman et al., 2018). This continuous relaxation is inspired by the Gumbel-Softmax trick (Jang et al., 2016; Maddison et al., 2017). Andriyash et al. Andriyash et al. (2018) have later showed that the Gumbel-Softmax estimator is biased and proposed a method to reduce its bias. We also consider the problem of stochastic maximization over the set of possible latent permutations. However, we do not relax the use of bipartite matchings. Instead, we directly optimize the bipartite matching predictor and propagate gradients using the direct optimization approach. Our work also considers learning k-nearest neighbors, i.e., learning an embedding of points that encourages the k closest points to the test point to have the correct label. The body of work on sorting and specifically top-k operators in an end-to-end learning framework is extensive. Grover et al. (2019) have suggested a continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct maximal argument. Pl\u00f6tz and Roth (2018) developed a continuous deterministic relaxation that maintains differentiability with respect to pairwise distances, but retains the original k-nearest neighbors as the limit of a temperature parameter approaching zero. Other approaches are based on top-k subset sampling (Xie and Ermon, 2019; Kool et al., 2019). Berrada et al. (2018) have introduce a family of smoothed, temperature controlled loss functions that are suited to top-k optimization. In contrast, our work does not relax the objective but rather directly optimize the top-k neighbors. Xie et al. (2020) have proposed a smoothed approximation to the top-k operator as the solution of an Entropic Optimal Transport problem. 3. Background Learning to predict structured labels y \u2208Y of data instances x \u2208X covers a wide range of problems. The structure is incorporated into the label y = (y1, ..., yn) which may refer to matchings, permutations, sequences, or other highdimensional objects. For any data instance x, its different structures are scored by a parametrized function \u00b5w(x, y). Discriminative learning aims to find a mapping from training data S = {(x1, y1), ..., (xm, ym)} to parameters w for Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization which \u00b5w(x, y) assign high scores to structures y that describe well the data instance x. The parameters w are \ufb01tted to minimize the loss \u2113(\u00b7, \u00b7) of the instance-label pairs (x, y) \u2208S between the label y and the highest scoring structure of \u00b5w(x, y). While gradient methods are the most popular methods to learn the parameters w, they are notoriously inef\ufb01cient for learning discrete predictions. 
When considering discrete labels, the maximal argument of \u00b5w(x, y) is a piecewise constant function of w, and its gradient with respect to w is zero for almost any w. Consequently, various smoothing techniques were proposed to propagate gradients while learning to \ufb01t discrete structures. Direct loss minimization approach aims at minimizing the expected loss minw E(x,y)\u223cD\u2113(y\u2217 w, y) that incurs when the training label y is different than the predicted label (Hazan et al., 2010; Keshet et al., 2011; Song et al., 2016) y\u2217 w \u225carg max \u02c6 y \u00b5w(x, \u02c6 y) (1) Direct loss minimization relies on a loss-perturbed prediction y\u2217 w(\u03f5) \u225carg max \u02c6 y {\u00b5w(x, \u02c6 y) + \u03f5\u2113(y, \u02c6 y)}. (2) It introduces an optimization-based gradient step for the expected loss, namely \u2207E(x,y)\u223cD\u2113(y\u2217 w, y) = lim \u03f5\u21920 1 \u03f5 \u0010 E(x,y)\u223cD[\u2207w\u00b5w(x, y\u2217 w(\u03f5)) \u2212\u2207w\u00b5w(x, y\u2217 w)] \u0011 . (3) Unfortunately, the above gradient step does not hold for any w, cf. (Hazan et al., 2010) Section 3.1. For example, when w = 0 the gradient estimator in Equation (3) may be zero for any (x, y) \u223cD regardless of the value of \u2207E(x,y)\u223cD[\u2113(y\u2217 w, y)]. In Section 4.1 we de\ufb01ne the mathematical condition for which Equation (3) represents the gradient. Recently, the direct loss minimization technique was applied to generative learning. In this setting, a random perturbation \u03b3(y) is added to each con\ufb01guration, (Lorberbom et al., 2018). The technique allows to randomly generate structures y for any given x from a generative distribution q(y|x) \u221de\u00b5w(x,y). The generative learning approach relies on the connection between q(y|x) and the Gumbel-max trick, namely P\u03b3\u223cg[y\u2217 w,\u03b3 = y] \u221de\u00b5w(x,y), when y\u2217 w,\u03b3 = arg max\u02c6 y{\u00b5w(x, \u02c6 y) + \u03b3(\u02c6 y)} and \u03b3(y) are i.i.d. random variables that follow the zero mean Gumbel distribution law, which we denote by G. The corresponding gradient step, in discriminative learning setting, takes the form: \u2207E\u03b3\u223cG[\u2113(y\u2217 w,\u03b3, y)] = lim \u03f5\u21920 1 \u03f5 \u0010 E\u03b3\u223cG[\u2207\u00b5w(x, y\u2217 w,\u03b3(\u03f5)) \u2212\u2207\u00b5w(x, y\u2217 w,\u03b3)] \u0011 . (4) Here, y\u2217 w,\u03b3(\u03f5) = arg max\u02c6 y{\u00b5w(x, \u02c6 y) + \u03b3(\u02c6 y) + \u03f5\u2113(y, \u02c6 y)}. The advantage of using this framework in this setting is that it effortlessly elevates the mathematical dif\ufb01culties in de\ufb01ning the gradient of the expected loss that exists in the direct loss minimization framework. Unfortunately, the random noise \u03b3(y) that is injected to the optimization may mask the signal \u00b5w(x, y) and thus get sub-optimal results in discriminative learning. To enjoy the best of both worlds, we propose to learn the proper amount of randomness to add to the discriminative learner. In this work we focus on learning discrete structured labels y = (y1, ..., yn). A general score function \u00b5w(x, y) cannot be computed ef\ufb01ciently for discrete structured labels y = (y1, ..., yn) since the number of possible labels is exponential in n and a general score function \u00b5w(x, y) may assign a different value for each structure. Typically, such score functions are decomposed to localized score functions over small subsets \u03b1 \u2282{1, ..., n} of variables where y\u03b1 = (yi)i\u2208\u03b1. 
The score function takes the form: \u00b5w(x, y) = P \u03b1\u2208A \u00b5w,\u03b1(x, y\u03b1). The correspondence between the exponential family of distributions e\u00b5w(x,y) and the Gumbel-max trick requires an independent random variable \u03b3(y) for each of the exponentially many structures y = (y1, ., , , .yn). However, since we are focusing on discriminative learning we are not limited by the Gumbel-max trick. Instead, we can use fewer random variables in order to learn the minimal amount of randomness to add. We limit our predictors to low-dimensional independent random variables \u03b3(y) = Pn i=1 \u03b3i(yi), where \u03b3i(yi) are independent random variables for each index i = 1, ..., n and each yi. In this setting, the number of random variables we are using is linear in n, compared to exponential many random variables in the Gubeml-max setting. 4. Learning Structured Predictors In the following we present our main technical concept that derives the gradient of an expected loss using two structured predictions. In Section 4.1 we prove the gradient step of an expected loss in the direct loss minimization framework. We also deduce that it holds whenever y\u2217 w(\u03f5) is unique. Subsequently, in Section 4.2, we show that low dimensional random perturbations \u03b3i(yi) are able to implicitly enforce uniqueness of the maximizing structure with probability one. In Section 4.3, we present our approach that learns the variance of the random perturbation, to ensure that the random noise \u03b3i(yi) does not mask the signal \u00b5w(x, y). 4.1. Direct Loss Minimization We rely on the expected max-value that is perturbed by the loss function. This is the \u201cprediction generating function\" in Lorberbom et al. (2018). In the direct loss minimization setting, as de\ufb01ned in Equation (3), this function takes the Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization form: G(w, \u03f5) = E(x,y)\u223cD h max \u02c6 y\u2208Y \b \u00b5w(x, \u02c6 y) + \u03f5\u2113(y, \u02c6 y) \ti (5) The proof technique relies on the existence of the Hessian of G(w, \u03f5) and the main challenge is to show that G(w, \u03f5) is differentiable, i.e., there exists a vector \u2207\u00b5w(x, y\u2217(\u03f5)) such that for any direction u, its corresponding directional derivative limh\u21920 G(w+hu,\u03f5)\u2212G(w,\u03f5) h equals E(x,y)\u223cD[\u2207w\u00b5w(x, y\u2217(\u03f5))\u22a4u]. The proof builds a sequence of functions {gn(u)}\u221e n=1 that satis\ufb01es lim h\u21920 G(w + hu, \u03f5) \u2212G(w, \u03f5) h = lim n\u2192\u221eE(x,y)\u223cD[gn(u)] (6) E(x,y)\u223cD[ lim n\u2192\u221egn(u)] = E(x,y)\u223cD[\u2207w\u00b5w(x, y\u2217(\u03f5))\u22a4u]. (7) The functions gn(u) correspond to the loss perturbed prediction y\u2217 w(\u03f5) through the quantity \u00b5w+ 1 n u(x, \u02c6 y) + \u03f5\u2113(y, \u02c6 y). The key idea we are exploiting is that there exists n0 such that for any n \u2265n0 the maximal argument y\u2217 w+ 1 n u(\u03f5) does not change. Lemma 1. Assume \u00b5w(x, y) are continuous functions of w and that their loss-perturbed maximal argument y\u2217 w+ 1 n u(\u03f5), which is de\ufb01ned in Equation (2), is unique for any u and n. Then there exists n0 such that for n \u2265n0 there holds y\u2217 w+ 1 n u(\u03f5) = y\u2217 w(\u03f5). Proof. Let fn(y) = \u00b5w+ 1 n u(x, y) + \u03f5\u2113(y, \u02c6 y) so that y\u2217 w+ 1 n u(\u03f5) = arg maxy fn(y). 
Also, let f\u221e(y) = \u00b5w(x, y)+\u03f5\u2113(y, \u02c6 y) so that y\u2217 w(\u03f5) = arg maxy f\u221e(y). Since fn is a continuous function of then maxy fn(y) is also a continuous function and limn\u2192\u221emaxy fn(y) = maxy f\u221e(y). Since maxy fn(y) = fn(y\u2217 w+ 1 n u(\u03f5)) is arbitrarily close to maxy f\u221e(y) = f\u221e(y\u2217 w(\u03f5)), and y\u2217 w(\u03f5), y\u2217 w+ 1 n u(\u03f5) are unique then for any n \u2265n0 these two arguments must be the same, otherwise there is a \u03b4 > 0 for which |f\u221e(y\u2217 w(\u03f5)) \u2212fn(y\u2217 w+ 1 n u(\u03f5))| \u2265\u03b4. This lemma relies on the discrete nature of the label space, ensuring that the optimal label does not change in the vicinity of y\u2217 w(\u03f5). This phenomena distinguishes the discrete label setting from the continuous relaxations of the label space (Domke, 2010; Berthet et al., 2020; Paulus et al., 2020). These relaxations of the label space utilize their continuities to differentiate through the label. In direct loss minimization, one works directly with the discrete label space which allows to control the maximal argument in in\ufb01nitesimal interval. Theorem 1. Assume \u00b5w(x, y) is a smooth function of w and that E(x,y)\u223cD\u2225\u2207w\u00b5w(x, y)\u2225\u2264\u221e. If the conditions of Lemma 1 hold then the prediction generating function G(w, \u03f5), as de\ufb01ned in Equation (5), is differentiable and \u2202G(w, \u03f5) \u2202\u03f5 = E(x,y)\u223cD[\u2113(y, y\u2217 w)]. (8) \u2202G(w, \u03f5) \u2202w = E(x,y)\u223cD h \u2207\u00b5w(x, y\u2217(\u03f5)) i . (9) Proof. Let fn(y) = \u00b5w+ 1 n u(x, y) + \u03f5\u2113(y, \u02c6 y) as in Lemma 1 and let gn(u) \u225cmax\u02c6 y\u2208Y fn(\u02c6 y) \u2212max\u02c6 y\u2208Y f\u221e(\u02c6 y) 1/n (10) We apply the dominated convergence theorem on gn(u), so that limn\u2192\u221eE(x,y)\u223cD[gn(u)] = E(x,y)\u223cD[limn\u2192\u221egn(u)] in order to prove Equations (6,7). We note that we may apply the dominated convergence theorem, since the conditions E(x,y)\u223cD\u2225\u2207w\u00b5w(x, y)\u2225\u2264\u221e imply that the expected value of gn is \ufb01nite (We recall that fn is a measurable function, and note that since \u02c6 y \u2208Y is an element from a discrete set Y , then gn is also a measurable function.). From Lemma 1, the terms \u2113(y, y\u2217(\u03f5)) are identical in both max\u02c6 y\u2208Y fn(\u02c6 y) and max\u02c6 y\u2208Y f\u221e(\u02c6 y). Therefore, they cancel out when computing the difference max\u02c6 y\u2208Y fn(\u02c6 y) \u2212 max\u02c6 y\u2208Y f\u221e(\u02c6 y). Then, for n \u2265n0: max \u02c6 y\u2208Y fn(\u02c6 y)\u2212max \u02c6 y\u2208Y f\u221e(\u02c6 y) = \u00b5w+ 1 n u(x, y\u2217(\u03f5))\u2212\u00b5w(x, y\u2217(\u03f5)) and Equation (10) becomes: gn(u) = \u00b5w+ 1 n u(x, y\u2217(\u03f5)) \u2212\u00b5w(x, y\u2217(\u03f5)) 1/n . (11) Since \u00b5w(x, y\u2217(\u03f5)) is smooth, then limn\u2192\u221egn(u) is composed of the derivatives of \u00b5w(x, y\u2217(\u03f5)) in direction u, namely, limn\u2192\u221egn(u) = \u2207w\u00b5w(x, y\u2217(\u03f5))\u22a4u. In the above theorem we assume that \u00b5w(x, y) is smooth, namely it is in\ufb01nitely differentiable. It suf\ufb01ces to assume that \u00b5w(x, y) is twice differentiable, to ensure that G(w, \u03f5) is twice differentiable and hence its Hessian exists. Corollary 1. Under the conditions of Theorem 1, E(x,y)\u223cD[\u2113(y, y\u2217 w)] is differentiable and its derivative is de\ufb01ned in Equation (3). Proof. 
Since Theorem 1 holds for every direction u: \u2202G(w, \u03f5) \u2202w = E(x,y)\u223cD h \u2207w\u00b5w(x, y\u2217(\u03f5)) i . Adding a derivative with respect to \u03f5 we get: \u2202 \u2202\u03f5 \u2202G(w, 0) \u2202w = lim \u03f5\u21920 1 \u03f5 E(x,y)\u223cD h \u2207w\u00b5w(x, y\u2217(\u03f5)) \u2212\u2207w\u00b5w(x, y\u2217) i Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization The proof follows by showing that the gradient computation is apparent in the Hessian, namely Equation (3) is attained by the identity \u2202w\u2202\u03f5G(w, 0) = \u2202\u03f5\u2202wG(w, 0). Now we turn to show that \u2202w\u2202\u03f5G(w, 0) = \u2207wE(x,y)\u223cD[\u2113(y, y\u2217 w)]. Since \u03f5 is a real valued number rather than a vector, we do not need to consider the directional derivative, which greatly simpli\ufb01es the mathematical derivations. We de\ufb01ne fn(\u02c6 y) \u225c \u00b5w(x, \u02c6 y)+ 1 n\u2113(y, \u02c6 y) and follow the same derivation as above to show that \u2202\u03f5G(w, 0) = E(x,y)\u223cD[\u2113(y, y\u2217 w)]. Therefore \u2202w\u2202\u03f5G(w, 0) = \u2207wE(x,y)\u223cD[\u2113(y, y\u2217 w)]. We note the strong conditions that require the theorem to hold: the loss-perturbed maximal argument y\u2217 w+ 1 n u(\u03f5), which is de\ufb01ned in Equation (2), is unique for any u and n. Unfortunately, this condition does not hold in some cases, e.g., when w = 0. Next we show that with added random perturbation we can ensure this holds with probability one. 4.2. Randomly Perturbing Structured Predictors We turn to show that randomly perturbing the structured signal \u00b5w(x, y) = P \u03b1\u2208A \u00b5\u03b1(x, y\u03b1) with smooth random noise \u03b3i(yi) allows us to implicitly enforce the uniqueness condition. To account for the structured signal and the low-dimensional random perturbation we de\ufb01ne the set y\u2217 w,\u03b3(\u03f5) = arg max \u02c6 y n X \u03b1\u2208A \u00b5w,\u03b1(x, \u02c6 y\u03b1) + n X i=1 \u03b3i(\u02c6 yi) + \u03f5\u2113(y, \u02c6 y) o . (12) To reason about the set of maximal structures of y\u2217 w,\u03b3(\u03f5), we introduce the set of random perturbation \u0393\u03f5(y\u2032) which consists of all random values \u03b3i(yi) for which y\u2032 is their maximal structure: \u0393\u03f5(y\u2032) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u03b3 : X \u03b1\u2208A \u00b5w,\u03b1(x, \u02c6 y\u2032 \u03b1) + n X i=1 \u03b3i(\u02c6 y\u2032 i) + \u03f5\u2113(y, \u02c6 y\u2032) \u2265 \u2200\u02c6 y X \u03b1\u2208A \u00b5w,\u03b1(x, \u02c6 y\u03b1) + n X i=1 \u03b3i(\u02c6 yi) + \u03f5\u2113(y, \u02c6 y) \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8fe (13) Whenever the set in Equation (12) consists of more than a single structure, say y\u2032 and y\u2032\u2032, their corresponding sets \u0393\u03f5(y\u2032) and \u0393\u03f5(y\u2032\u2032) intersect. We now prove that this happens with zero probability whenever \u03b3i(yi) are i.i.d. and with a smooth probability density function. Theorem 2. Let \u03b3i(yi) be i.i.d. random variables with a smooth probability density function. Then the set of maximal arguments in Equation (12) consists of a single structure with probability one. Proof. Consider there is an event (a set) of \u03b3 for which the set of maximal arguments consists of more than one structure, e.g., y\u2032 and y\u2032\u2032 and denote it by \u2126. 
Clearly, \u2126\u2282\u0393\u03f5(y\u2032) \u2229\u0393\u03f5(y\u2032\u2032). Let \u03b2 be the set of indexes for which y\u2032 i \u0338= y\u2032\u2032 i . Since for any \u03b3 \u2208\u2126it holds that P \u03b1\u2208A \u00b5w,\u03b1(x, \u02c6 y\u2032 \u03b1) + Pn i=1 \u03b3i(\u02c6 y\u2032 i) + \u03f5\u2113(y, \u02c6 y\u2032) = P \u03b1\u2208A \u00b5w,\u03b1(x, \u02c6 y\u2032\u2032 \u03b1) + Pn i=1 \u03b3i(\u02c6 y\u2032\u2032 i ) + \u03f5\u2113(y, \u02c6 y\u2032\u2032), then \u2126\u2282 {\u03b3 : P i\u2208\u03b2 \u03b3i(\u02c6 y\u2032 i) \u2212\u03b3i(\u02c6 y\u2032\u2032 i ) = c} for the constant c = \u00b5w,\u03b1(x, \u02c6 y\u2032\u2032 \u03b1) \u2212\u00b5w,\u03b1(x, \u02c6 y\u2032 \u03b1) + \u03f5\u2113(y, \u02c6 y\u2032\u2032) \u2212\u03f5\u2113(y, \u02c6 y\u2032). Since \u03b3i(\u02c6 y\u2032 i) \u2212\u03b3i(\u02c6 y\u2032\u2032 i ) are independent random variables with smooth probability density function, then their sum also has a smooth probability density function. Consequently the probability that P i\u2208\u03b2 \u03b3i(\u02c6 y\u2032 i) \u2212\u03b3i(\u02c6 y\u2032\u2032 i ) = c is zero, and thus P\u03b3[\u2126] = 0. We note that the uniqueness of the maximal structure of y\u2217 w,\u03b3 can be proved by Theorem 2 as well, in which case the constant c = \u00b5w,\u03b1(x, \u02c6 y\u2032\u2032 \u03b1) \u2212\u00b5w,\u03b1(x, \u02c6 y\u2032 \u03b1). It follows from the above theorem that adding random perturbations solves the uniqueness problem in direct loss gradient rule. Unfortunately, as we show in our experimental evaluation, the random perturbation that smooths the objective can also serve as noise that masks the signal \u00b5w(x, y). To address this caveat, we propose to learn the magnitude, i.e., the variance, of this noise explicitly. 4.3. Learning The Variance Of Randomly Perturbed Structured Predictors We propose to learn the magnitude of the random perturbation \u03b3i(yi). In our setting it translates to the prediction y\u2217 w,\u03b3 = arg max \u02c6 y n X \u03b1\u2208A \u00b5u,\u03b1(x, \u02c6 y\u03b1) + n X i=1 \u03c3v(x)\u03b3i(\u02c6 yi) o (14) w = (u, v) are the learned parameters. In this case we treat P \u03b1\u2208A \u00b5u,\u03b1(x, \u02c6 y\u03b1) + Pn i=1 \u03c3v(x)\u03b3i(\u02c6 yi) as a random variable whose mean is learned using \u00b5u,\u03b1(x, \u02c6 y\u03b1) and its variance is learned using \u03c3v(x). As such, we consider a strictly positive \u03c3v(x) both theoretically and practically. We are learning the same variance \u03c3v(x) for all random assignments \u03b3i(yi). We do so to learn to balance the overall noise Pn i=1 \u03b3i(yi) with the signal P \u03b1 \u00b5u,\u03b1(x, \u02c6 y\u03b1). The learned variance \u03c3v(x) allows us to interpolate between the original direct loss setting, where \u03c3v(x) = 0, to the generative learning setting, where \u03c3v(x) = 1. Corollary 2. Assume \u00b5u(x, y), \u03c3v(x) are smooth functions of w = (u, v). Let \u03b3i(yi) be i.i.d. random variables with a smooth probability density function. Let G(w, \u03f5) = E\u03b3\u223cG h max \u02c6 y\u2208Y n X \u03b1\u2208A \u00b5u,\u03b1(x, \u02c6 y\u03b1)+ n X i=1 \u03c3v(x)\u03b3i(\u02c6 yi)+\u03f5\u2113(y, \u02c6 y) oi . (15) Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization (a) A randomly perturbed structured prediction illustration (b) The max predictor probability distribution for a low \u03c3(x). (c) The max predictor probability distribution for a high \u03c3(x). Figure 1. 
The randomized predictor y\u2217 w,\u03b3 is the structure that maximizes the randomly perturbed scoring function among all possible structures in Y (Figure 1a). As \u03c3(x) decreases, the expected max predictor approaches the expected value of a categorical random variable (Figure 1b). And vice versa, as \u03c3(x) increases, the expected max predictor converges to a uniform distribution over the discrete structures (Figure 1c). Then G(w, \u03f5) is a smooth function and \u2202 \u2202uE\u03b3[\u2113(y, y\u2217 w,\u03b3)] = lim \u03f5\u21920 1 \u03f5 E\u03b3 h X \u03b1\u2208A (\u2207\u00b5u,\u03b1(x, y\u2217 \u03b1(\u03f5)) \u2212\u2207\u00b5u,\u03b1(x, y\u2217 \u03b1)) i (16) and \u2202 \u2202vE\u03b3[\u2113(y, y\u2217 w,\u03b3)] = lim \u03f5\u21920 1 \u03f5 E\u03b3 h n X i=1 \u2207\u03c3v(x) \u0010 \u03b3i(y\u2217 i (\u03f5)) \u2212\u03b3i(y\u2217 i ) \u0011i . (17) We prove Corollary 2 in the supplementary material. The random perturbation induces a probability distribution over structures y. As \u03c3 increases, the expected max predictor tends to a uniform distribution over the discrete structures. Similarly, as \u03c3 decreases, the expected max predictor approaches a deterministic decision over the discrete structures. This idea is illustrated in Figure 1. Interestingly, whenever the random variables \u03b3(y) follow the zero mean Gumbel distribution law, the random variable \u00b5u(x, \u02c6 y) + \u03c3v(x)\u03b3(\u02c6 y) follows the Gumbel distribution law with mean \u00b5u(x, y) and variance \u03c32\u03c02/6. In this case, the variance turns to be the temperature of the corresponding Gibbs distribution: P\u03b3\u223cG[arg max\u02c6 y{\u00b5u(x, \u02c6 y) + \u03c3v(x)\u03b3(\u02c6 y)} = y] \u221de\u00b5u(x,y)/\u03c3v(x), see proof in the supplementary material. Our framework thus also allows to learn the temperature of the Gumbel-max trick instead of tuning it as a hyper-parameter. 5. Experimental Validation In the following we validate the advantage of our approach (referred to as \u2018Direct Stochastic Learning\u2019) in two popular structured prediction problems: bipartite matching and knearest neighbors. We compare to direct loss minimization (Hazan et al., 2010), which can be interpreted as setting the noise variance to zero (referred to as Direct \u00af \u03c3 = 0), as well as to Lorberbom et al. (2018), in which the noise variance is set to one (referred to as Direct \u00af \u03c3 = 1). Additionally, we compare to state-of-the-art in neural sorting (Grover et al., 2019; Xie and Ermon, 2019) and bipartite matching (Mena et al., 2018). Further architectural and training details are described in the supplementary material. In all direct loss based experiments we set a negative \u03f5. When \u03f5 > 0 the loss-pertubed label y\u2217 w(\u03f5) chooses a con\ufb01guration with a higher loss and performs a gradient descent step on \u2207w\u00b5w(x, y\u2217 w(\u03f5)), i.e., it moves the parameters w to reduce the score function for the high-loss label \u00b5w(x, y\u2217 w(\u03f5)). When \u03f5 < 0 the loss-perturbed label y\u2217 w(\u03f5) chooses a con\ufb01guration with a lower loss and performs a gradient descent step on \u2212\u2207w\u00b5w(x, y\u2217 w(\u03f5)), i.e., it increases the score function for the low-loss label \u00b5w(x, y\u2217 w(\u03f5)). This choice is especially important in structured prediction, when there might be exponentially many structures with high loss and only few structures with low loss. 
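To make this update rule concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of a single direct-loss step for a toy, unstructured label set: the randomized predictor of Equation (14), its loss-perturbed counterpart with a negative ϵ, and the two-prediction gradient estimates of Equations (16) and (17). All names, the softplus parametrization of σ, and the toy 0/1 loss are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n_labels, eps = 5, -0.2                     # toy label set; negative ("towards-best") epsilon

# Toy parametrization: u holds the scores mu_u(x, y) directly, v parametrizes sigma_v(x).
u = torch.randn(n_labels)
v = torch.tensor(0.0)
sigma = torch.nn.functional.softplus(v)     # strictly positive learned variance

y_true = 2
loss = torch.ones(n_labels)
loss[y_true] = 0.0                          # toy 0/1 loss l(y, y_hat)

gamma = -torch.log(-torch.log(torch.rand(n_labels)))   # i.i.d. Gumbel noise gamma(y_hat)

# Randomized prediction (Eq. 14) and its loss-perturbed counterpart.
y_star = torch.argmax(u + sigma * gamma)
y_star_eps = torch.argmax(u + sigma * gamma + eps * loss)

# Two-prediction gradient estimates. With this toy parametrization, d mu / d u is a one-hot
# vector at the predicted label, so Eq. (16) reduces to a difference of one-hot vectors,
# and Eq. (17) reduces to the difference of the selected Gumbel values times d sigma / d v.
one_hot = torch.nn.functional.one_hot
grad_u = (one_hot(y_star_eps, n_labels) - one_hot(y_star, n_labels)).float() / eps   # Eq. (16)
grad_v = (gamma[y_star_eps] - gamma[y_star]) * torch.sigmoid(v) / eps                # Eq. (17)

print(y_star.item(), y_star_eps.item(), grad_u, grad_v.item())
```

In practice such estimates would be averaged over noise draws and mini-batches; the key point is that each estimate needs only two argmax calls, and that a negative ϵ perturbs the prediction towards a lower-loss configuration.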
This observation already appears in the original direct loss work (Hazan et al., 2010) (see last paragraph of Section 2). 5.1. Bipartite Matchings We follow the problem setting, architecture \u00b5u,\u03b1(x, y\u03b1) and loss function \u2113(y, y\u2217) of Mena et al. (2018) for learning bipartite matching, and replace the Gumbel-Sinkhorn operation with our gradient step, see Figure 2. In this experiment each training example (x, y) \u2208S consists of an input vector x \u2208Rd of d numbers drawn independently from the uniform distribution over the [0, 1] interval. The structured label y, y \u2208{0, 1}d2, is a bipartite matching between the elements of x to the elements of the sorted vector of x. Formally, yij = 1 if xi = sort(x)j and zero otherwise. Here we set \u03b1 to be the pair of indexes i, j = 1, . . . , d that corresponds to the desired bipartite matching. The network learns a real valued number for each (i, j)-th entry, namely, \u00b5u,ij(x, yij) and our gradient update rule in Equation (16) replaces the Gumbel-Sinkhorn operator of Mena et al. (2018). We note that y\u2217 w,\u03b3 can be computed ef\ufb01ciently using any max-matching algorithm, Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization X [batch,d] X [batch*d,1] [batch*d,32] [batch*d,d] \ud835\udf07\ud835\udc62(x,y) [batch,d,d] y \u2217w, \u03b3 E \u03b3[\u2113(\ud835\udc66, \u0ddc \ud835\udc66)] [batch,1] \u03c3\ud835\udc63(x) \u2202uE \u03b3 [ \u2113(\ud835\udc66, \u0ddc \ud835\udc66)] \u2202vE \u03b3 [ \u2113(\ud835\udc66, \u0ddc \ud835\udc66)] Figure 2. Architecture for learning bipartite matchings: The expectancy over Gumbel noise of the loss is derived w.r.t. the parameters u of the signal and w.r.t. the parameters v of the variance controller \u03c3 directly (Equations 16,17 respectively). The network \u00b5 has a \ufb01rst fully connected layer that links the sets of samples to an intermediate representation (with 32 neurons), and a second (fully connected) layer that turns those representations into batches of latent permutation matrices of dimension d by d each. It has the same architecture as the equivalent experiment by Mena et al. (2018). The network \u03c3 has a single layer connecting input sample sequences to a single output which is then activated by a softplus activation. We chose such an activation to enforce a positive \u03c3 value. which maximizes a linear function over the set of possible matching Y : y\u2217 w,\u03b3 = arg max \u02c6 y\u2208Y d X ij=1 \u00b5u,ij(x, \u02c6 yij) + d X ij=1 \u03c3v(x)\u03b3ij(\u02c6 yij) (18) Our gradient computation also requires the loss perturbed predictor y\u2217 w,\u03b3(\u03f5), which takes into account the quadratic loss function: \u2113(y, \u02c6 y) = d X i=1 \uf8eb \uf8ed d X j=1 xjyij \u2212 d X j=1 xj \u02c6 yij \uf8f6 \uf8f8 2 Note that this loss function is not smooth, as it is a function of binary elements, i.e. the squared differences between one hot vectors. Seemingly, the quadratic loss function does not decompose along the score structure \u00b5u,ij(x, yij), therefore it is challenging to recover the loss-perturbed prediction ef\ufb01ciently. Instead, we use the fact that y2 ij = yij for yij \u2208 {0, 1} and represent the loss as a linear function over the set of all matchings: \u2113(y, \u02c6 y) = t + Pd j=1 tij \u02c6 yij, with t = Pd ij=1 x2 jyij and tij = Pd i=1 x2 j(1\u22122yij). (Further details in the supplementary material). 
With this, we are able to recover the loss-perturbed predictor y\u2217 w,\u03b3(\u03f5) with the same computational complexity as y\u2217 w,\u03b3, i.e., using linear solver over maximum matching: y\u2217 w,\u03b3(\u03f5) = arg max \u02c6 y\u2208Y d X ij=1 \u00b5u,ij(x, \u02c6 yij) (19) + d X ij=1 \u03c3v(x)\u03b3ij(\u02c6 yij) + \u03f5 d X ij=1 tij \u02c6 yij. Note that we may omit t from the optimization since it does not impact the maximal argument. In our experimental validation we found that negative \u03f5 works the best. In this case, when yij = 0 the corresponding embedding potential \u00b5u,ij(x, yij) is perturbed by \u2212|\u03f5|x2 j, while when yij = 1 it is increased by |\u03f5|x2 j. Doing so incrementally pushes y\u2217 w,\u03b3(\u03f5) towards predicting the ground truth permutation, which is aligned with our intuition of the towards-best direct loss minimization. The dynamics of our method as a function of matching dimension under a positive versus a negative \u03f5 is illustrated in Figure 3. In Figure 3a we plot the loss as a function of training epochs with varying size of matching dimension d. While the loss is similar for \u03f5 > 0 and \u03f5 < 0 when d = 10, this changes for d = 100. As such, the percentage of correctly sorted input entries of y\u2217 w and y\u2217(\u03f5) greatly differs when d = 100 for different \u03f5. Importantly, when learning with \u03f5 > 0 there are less than 40% correct entries in y\u2217 w (Figure 3c), while when learning with \u03f5 < 0 there are at least 90% correct entries in y\u2217 w (Figure 3b). Mena et al. (2018) have introduced two evaluation measures: the proportion of sequences where there was at least one error (Prop. Any Wrong), and the overall proportion of samples assigned to a wrong position (Prop. Wrong). They report the best achieved Prop. Any Wrong measure over an unspeci\ufb01ed number of trials. To indicate robustness, we extend these measures to the following: Percentage of zero Prop. Any Wrong sequences, as well as Average and STD of Prop. Wrong, which are calculated over a number of training and testing repetitions. We follow the Sorting Numbers experiment protocol of Mena et al. (2018) and use the code released by the authors, to perform 20 Sinkhorn iterations and 10 different reconstruction for each batch sample. Also, the training set consists of 10 random sequences of length d and a test set that consists of a single sequence of the same length d. At test time, random noise is not added to the learned sigLearning Randomly Perturbed Structured Predictors for Direct Loss Minimization (a) loss with negative and positive \u03f5 (b) y\u2217and y\u2217(\u03f5) with negative \u03f5 (c) y\u2217and y\u2217(\u03f5) with positive \u03f5 Figure 3. The effect of the sign of \u03f5, as a function of dimension during training. We plot the percentage of correctly sorted input entries of the predictor (y\u2217) and the loss-augmented predictor (y\u2217(\u03f5)) as well as the loss. While the loss is similar for negative and positive \u03f5 when d = 10, this changes for d = 100 (Figure 3a). As such, the percentage of correctly sorted input entries of y\u2217 w and y\u2217(\u03f5) greatly differs when d = 100 for different \u03f5. Importantly, when learning with \u03f5 > 0 there are less than 40% correct entries in y\u2217 w (Figure 3c), while when learning with \u03f5 < 0 there are at least 90% correct entries in y\u2217 w (Figure 3b). nal \u00b5u,ij(x, yij). 
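Putting the matching pieces together, the following is a minimal sketch of the perturbed matching predictor of Equation (18) and its loss-perturbed counterpart of Equation (19). It assumes SciPy's linear_sum_assignment as the max-matching solver (the text only requires some max-matching algorithm), and the linear loss coefficients are derived by expanding the quadratic loss for permutation matrices, which may differ cosmetically from the notation above; all array names and toy values are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d, eps = 8, -0.1                                   # matching dimension; negative (towards-best) eps

x = rng.random(d)                                  # numbers to sort
y_gt = np.zeros((d, d))
y_gt[np.arange(d), np.argsort(np.argsort(x))] = 1.0   # y_ij = 1 iff x_i = sort(x)_j

mu = rng.normal(size=(d, d))                       # stand-in for learned scores mu_{u,ij}(x, .)
sigma = 0.5                                        # stand-in for the learned variance sigma_v(x)
gamma = rng.gumbel(size=(d, d))                    # i.i.d. Gumbel perturbations gamma_ij

# Linear-in-y_hat loss coefficients from expanding the quadratic loss over permutation
# matrices (the constant term is dropped since it does not change the argmax).
a = y_gt @ x                                       # a_i = sum_j x_j y_ij
t_coef = x[None, :] ** 2 - 2.0 * a[:, None] * x[None, :]

def max_matching(score):
    """argmax over permutation matrices of <score, y_hat>, via the Hungarian algorithm."""
    rows, cols = linear_sum_assignment(-score)     # SciPy minimizes, hence the sign flip
    y = np.zeros_like(score)
    y[rows, cols] = 1.0
    return y

y_star = max_matching(mu + sigma * gamma)                      # Eq. (18)
y_star_eps = max_matching(mu + sigma * gamma + eps * t_coef)   # Eq. (19)

# Two-prediction estimates: Eq. (16) w.r.t. the entries of mu (indicator gradients), and the
# chain-rule core of Eq. (17) (to be multiplied by d sigma_v / d v of the variance network).
grad_mu = (y_star_eps - y_star) / eps
grad_sigma_core = ((gamma * y_star_eps).sum() - (gamma * y_star).sum()) / eps
print(grad_mu.sum(), grad_sigma_core)
```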
The results in Table 1 show the measures calculated over 200 repetitions of training and testing. One can see that direct loss minimization performs better than Gumbel-Sinkhorn, and the gap is larger for longer sequences. One can also see that learning the variance of the noise improves the performance of the structured predictor in all three measures, when compared to direct loss minimization (Hazan et al., 2010), in which the variance is set to zero, as well as to (Lorberbom et al., 2018), in which the noise variance is set to one. Running time comparison is given in Table 2. Our code may be found in https:// github.com/HeddaCohenIndelman/ PerturbedStructuredPredictorsDirect. 5.2. k-Nearest Neighbors For Image Classi\ufb01cation We follow the problem setting and architecture \u00b5u,\u03b1(x, y\u03b1) and loss function \u2113(y, y\u2217) of Grover et al. (2019) for learning the k-nearest neighbors (kNN) classier. We replace the unimodal row stochastic matrix operation with our gradient step to directly minimize the distance to the closest k candidates, see Figure 4. In this experiment each training example (x, y) \u2208S consists of an input vector x = (x1, ..., xn, xq) of n candidate images x1, ..., xn and a single query image xq; its corresponding structured label y = (y1, ..., yn) . The structured label y \u2208{0, 1}n points to the k candidate images with minimum Euclidean distance to the query image, i.e., Pn i=1 yi = k. Here we set \u03b1 to be the index i = 1, . . . , n that correspond to the top k candidate images. \u00b5u,i(x, yi) is the negative distance between the embedding hu(\u00b7) of the i-th candidate image and that of the query image: \u00b5u,i(x, yi) = \u2212\u2225hu(xi) \u2212hu(xq)\u2225. Our prediction y\u2217 u,\u03b3 yields the top-k images having the minimum Euclidean distance in embedding space (equivalently, the maximum negative Euclidean Table 1. Bipartite Matching Evaluation Measures. Results show Percentage of zero Prop. Any Wrong sequences of test set (i.e perfect sorting). Average and STD of Prop. Wrong in parenthesis. We show the effect of learning signal-to-noise ratio method \u2018Direct Stochastic Learning\u2019 in comparison with \u2018Direct \u00af \u03c3 = 0\u2019 referring to direct loss minimization (Hazan et al., 2010), which can be interpreted as setting the noise variance to zero, \u2019Direct \u00af \u03c3 = 1\u2019 referring to (Lorberbom et al., 2018), in which the noise variance is set to one, and \u2018Gumbel-Sinkhorn\u2019 referring to (Mena et al., 2018). Training set setting of 10 random sequences of length d and a test set of a single sequence of length d. Results are calculated from 200 training and testing repetitions. d Direct \u00af \u03c3 = 0 Direct \u00af \u03c3 = 1 Direct Stochastic Learning Gumbel-Sinkhorn 5 98.5% (0.6%\u00b14.9%) 100% (0%\u00b10%) 100% (0%\u00b10%) 100% (0%\u00b10%) 10 97% (0.6%\u00b13.4%) 100% (0%\u00b10%) 100% (0%\u00b10%) 100% (0%\u00b10%) 25 89.5% (0.9%\u00b12.8%) 97.5% (0.3%\u00b11.7%) 97.5% (0.3%\u00b11.6%) 87.5% (1%\u00b13%) 40 84.5% (1.2%\u00b14.5%) 90.5% (0.6%\u00b12.2%) 91.6% (0.5%\u00b11.6%) 83.5% (1%\u00b15%) 60 82% (0.9%\u00b12.6%) 80.0% (0.9%\u00b12.2%) 83.3% (0.7%\u00b11.8%) 21% (5%\u00b19%) 100 74.9% (1.4%\u00b16.9%) 68.5% (1.2%\u00b12.4%) 76.8% (0.9%\u00b12.1%) 0% (11.3%\u00b111.2%) Table 2. 
Comparison of average epoch time (seconds) of the bipartite matching experiment, per selected d d Direct Stochastic Learning Gumbel-Sinkhorn 10 0.247 0.288 40 0.252 0.294 100 0.304 0.306 distance) y\u2217 u,k,\u03b3 = arg max\u02c6 y\u2208Y {\u00b5u(x, \u02c6 y) + \u03b3(\u02c6 y)}. Here, we set Y to be the set of all structures y \u2208{0, 1}n satisfying Pn i=1 yi = k. The loss function is a linear function of its labels: \u2113(y, \u02c6 y) = \u2212Pn i=1 \u2225xi\u2212xq\u2225yi\u02c6 yi. Our gradient update rule in Equation (16) replaces the unimodal row stochastic construction operator of Grover et al. (2019). We note that Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization Xq X1 XN X2 \u2026 exi =hu(xi) \ud835\udf07\ud835\udc62(x,y) y*k,w,\ud835\udefe E \u03b3 [\u2113 knn (\ud835\udc66, \u0ddc \ud835\udc66)] \u2202uE \u03b3[\u2113knn (\ud835\udc66, \u0ddc \ud835\udc66)] \u03c3\ud835\udc63(x) \u2202vE \u03b3 [\u2113knn (\ud835\udc66, \u0ddc \ud835\udc66)] Figure 4. k-nn schematic architecture. The expectancy over Gumbel noise of the loss is derived w.r.t. the parameters u of the signal and w.r.t. the parameters v of the variance controller \u03c3 directly (Equations 16,17 respectively). We deployed the same distance embedding networks as the ones deployed by Grover et al. (2019) (Details in the supplementary material). Our prediction y\u2217 u,\u03b3 yields the top-k images having the minimum Euclidean distance in embedding space (equivalently, the maximum negative Euclidean distance). The network \u03c3 output is activated by a softplus activation in order to enforce a positive \u03c3 value. y\u2217 w,\u03b3 and y\u2217 w,\u03b3(\u03f5) can be computed ef\ufb01ciently by extracting the top k elements over n elements. Table 3. Test-Set Classi\ufb01cation Average Accuracy, Per k. We show the effect of our \u2018Direct Stochastic Learning\u2019 method for learning signal-to-noise ratio in comparison with: \u2018Direct \u00af \u03c3 = 0\u2019 referring to direct loss minimization without random noise (Hazan et al., 2010), \u2018Direct \u00af \u03c3 = 1\u2019 referring to Lorberbom et al. (2018), in which the noise variance is set to one, \u2018NeuralSort\u2019 referring to Grover et al. (2019), and \u2018RelaxSubSample\u2019 referring to Xie and Ermon (2019) who quote results for k = 5 only. MNIST k=1 k=3 k=5 k=9 Direct \u00af \u03c3 = 0 99.1% 99.2% 99.3% 99.2% Direct \u00af \u03c3 = 1 16% 53.84% 14.25% 41.53% Direct Stochastic Learning 99.34% 99.4% 99.4% 99.34% NeuralSort deterministic 99.2% 99.5% 99.3% 99.3% NeuralSort stochastic 99.1% 99.3% 99.4% 99.4% RelaxSubSample 99.3% Fashion-MNIST k=1 k=3 k=5 k=9 Direct \u00af \u03c3 = 0 89.8% 93.2% 93.5% 93.7% Direct \u00af \u03c3 = 1 92.5% 93.4% 93.3% 93.2% Direct Stochastic Learning 92.6% 93.3% 94% 93.7% NeuralSort deterministic 92.6% 93.2% 93.5% 93% NeuralSort stochastic 92.2% 93.1% 93.3% 93.4% RelaxSubSample 93.6% CIFAR-10 k=1 k=3 k=5 k=9 Direct \u00af \u03c3 = 0 24.9% 27% 39.6% 39.9% Direct \u00af \u03c3 = 1 23.1% 89.95% 90.85% 91.6% Direct Stochastic Learning 29.6% 90.7% 91.25% 91.7% NeuralSort deterministic 88.7% 90.0% 90.2% 90.7% NeuralSort stochastic 85.1% 87.8% 88.0% 89.5% RelaxSubSample 90.1% We report the classi\ufb01cation accuracies on the standard test sets in Table 3. For MNIST and Fashion-MNIST, our method matched or outperformed \u2018NeuralSort\u2019 (Grover et al., 2019) and \u2018RelaxSubSample\u2019 (Xie and Ermon, 2019), in all except k = 3, 9 in MNIST. 
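Before continuing with the per-dataset comparison, a brief illustrative sketch of the top-k predictor described above and its loss-perturbed counterpart (not the authors' code; the embedding distances, names, and toy values are assumptions):

```python
import torch

torch.manual_seed(0)
n, k, eps = 20, 5, -0.05

dist_embed = torch.rand(n)            # stand-in for ||h_u(x_i) - h_u(x_q)|| (learned embedding)
dist_pixel = torch.rand(n)            # stand-in for ||x_i - x_q|| used by the loss
y_gt = torch.zeros(n)
y_gt[torch.topk(-dist_pixel, k).indices] = 1.0     # ground-truth k nearest neighbours

mu = -dist_embed                       # mu_{u,i}(x, y_i) = -||h_u(x_i) - h_u(x_q)||
sigma = 0.3                            # stand-in for the learned sigma_v(x)
gamma = -torch.log(-torch.log(torch.rand(n)))      # i.i.d. Gumbel noise

def top_k_structure(score, k):
    """argmax over y in {0,1}^n with sum(y) = k of <score, y>: keep the k largest scores."""
    y = torch.zeros_like(score)
    y[torch.topk(score, k).indices] = 1.0
    return y

# The loss l(y, y_hat) = -sum_i ||x_i - x_q|| y_i y_hat_i is linear in y_hat,
# with per-candidate coefficients -dist_pixel * y_gt.
loss_coef = -dist_pixel * y_gt

y_star = top_k_structure(mu + sigma * gamma, k)                      # perturbed top-k prediction
y_star_eps = top_k_structure(mu + sigma * gamma + eps * loss_coef, k)  # loss-perturbed prediction

grad_mu = (y_star_eps - y_star) / eps            # Eq. (16)-style two-prediction estimate
print(y_star.nonzero().squeeze(), y_star_eps.nonzero().squeeze(), grad_mu.abs().sum())
```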
For CIFAR-10, our method outperformed \u2018NeuralSort\u2019 and \u2018RelaxSubSample\u2019, in all except k=1, for which disappointing results are attained by all direct loss based methods. We note that \u2018Direct \u00af \u03c3 = 0\u2019 seems to suffer from very low average accuracy on CIFAR10 dataset. Additionally, \u2018Direct \u00af \u03c3 = 1\u2019 suffer from very low average accuracy on MNIST dataset. It is evident that our method stabilizes the performance on all datasets. Running time comparison for k = 3 is given in Table 4. Performance is robust to k in both methods. Table 4. Comparison of average epoch running time (seconds) of the k-nn experiment. Results are for k = 3. Direct Stochastic Learning NeuralSort Stochastic MNIST 28.3 14.4 Fashion-MNIST 198.6 328.6 CIFAR-10 220. 337.6 6. Discussion And Future Work In this work, we learn the mean and the variance of structured predictors, while directly minimizing their loss. Our work extends direct loss minimization as it explicitly adds random perturbation to the prediction process to better control the relation between data instance and its exponentially many possible structures. Our work also extends direct optimization through the arg max in generative learning as it adds a variance term to better balance the learned signal with the perturbed noise. The experiments validate the bene\ufb01t of our approach. The structured distributions that are implied from our method are different than the standard Gibbs distribution, when the localized score functions are over subsets of variables. The exact relation between these distributions and the role of the Gumbel distribution law in the structured setting is an open problem. There are also optimization-related questions that arise from our work, such as exploring the role of \u03f5 and its impact on the convergence of the algorithm. Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization", "introduction": "Learning and inference in high-dimensional structured mod- els drives much of the research in machine learning appli- cations, from computer vision, natural language process- ing, to computational chemistry. Examples include scene understanding (Kendall et al., 2017) machine translation (Wiseman and Rush, 2016) and molecular synthesis (Jin et al., 2020). The learning process optimizes a score for each of the exponentially many structures in order to best \ufb01t the mapping between input and output in the training data. While it is often computationally infeasible to evaluate the loss of all exponentially many structures simultaneously through sampling, it is often feasible to predict the highest scoring structure ef\ufb01ciently in many structured settings. Direct loss minimization is an appealing approach in dis- criminative learning that allows to learn a structured model by predicting the highest scoring structure (Hazan et al., 2010; Keshet et al., 2011; Song et al., 2016). It allows to im- prove the loss of the structured predictor by considering the 1Technion. Correspondence to: Hedda Cohen Indelman . Proceedings of the 38 th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s). gradients of two predicted structures: over the original loss function and over a perturbed loss function. This approach implicitly uses the data distribution to smooth the loss func- tion between a training structure and a predicted structure, thus propagating gradients through the maximal argument of the predicted structure. 
Unfortunately, our access to the data distribution is limited and we cannot reliably repre- sent the intricate relation between a training instance and its exponentially many structures. Recently, this framework was extended to generative learning, where a random per- turbation that follows the Gumbel distribution law allows to sample from all possible structures (Lorberbom et al., 2018). However, one cannot apply this generative learn- ing approach effectively to discriminative learning, since the random noise that is added in the generation process interferes in predicting the best scoring structure. In this work we combine these two approaches: we explic- itly add random perturbation to each of the structures, in order to reliably represent the intricate relation between the a training instance and its exponentially many structures. To balance between the learned score function and the added random perturbation, we treat the score function as the mean of the random perturbation, and learn its variance. This way we are able to control the ratio between the signal (the score) and the noise (the random perturbation) in discriminative learning. In summary, we make the following contributions: 1. We show that the uniqueness assumption of the pre- dicted structure is a key element in the gradient step of direct loss minimization, thus mathematically de\ufb01ning its general position assumption. 2. We prove that random perturbation ensures unique maximizers with probability one. 3. We identify that random perturbation might also serve as noise that masks the score function signal. Hence, we introduce a method for learning both the mean and the variance of randomized predictors in the high- dimensional structured label setting. 4. We show empirically the bene\ufb01t of our approach in two structured prediction problems. arXiv:2007.05724v2 [stat.ML] 14 Jun 2021 Learning Randomly Perturbed Structured Predictors for Direct Loss Minimization" } ], "Nati Daniel": [ { "url": "http://arxiv.org/abs/2302.06549v1", "title": "Between Generating Noise and Generating Images: Noise in the Correct Frequency Improves the Quality of Synthetic Histopathology Images for Digital Pathology", "abstract": "Artificial intelligence and machine learning techniques have the promise to\nrevolutionize the field of digital pathology. However, these models demand\nconsiderable amounts of data, while the availability of unbiased training data\nis limited. Synthetic images can augment existing datasets, to improve and\nvalidate AI algorithms. Yet, controlling the exact distribution of cellular\nfeatures within them is still challenging. One of the solutions is harnessing\nconditional generative adversarial networks that take a semantic mask as an\ninput rather than a random noise. Unlike other domains, outlining the exact\ncellular structure of tissues is hard, and most of the input masks depict\nregions of cell types. However, using polygon-based masks introduce inherent\nartifacts within the synthetic images - due to the mismatch between the polygon\nsize and the single-cell size. In this work, we show that introducing random\nsingle-pixel noise with the appropriate spatial frequency into a polygon\nsemantic mask can dramatically improve the quality of the synthetic images. We\nused our platform to generate synthetic images of immunohistochemistry-treated\nlung biopsies. We test the quality of the images using a three-fold validation\nprocedure. 
First, we show that adding the appropriate noise frequency yields\n87% of the similarity metrics improvement that is obtained by adding the actual\nsingle-cell features. Second, we show that the synthetic images pass the Turing\ntest. Finally, we show that adding these synthetic images to the train set\nimproves AI performance in terms of PD-L1 semantic segmentation performances.\nOur work suggests a simple and powerful approach for generating synthetic data\non demand to unbias limited datasets to improve the algorithms' accuracy and\nvalidate their robustness.", "authors": "Nati Daniel, Eliel Aknin, Ariel Larey, Yoni Peretz, Guy Sela, Yael Fisher, Yonatan Savir", "published": "2023-02-13", "updated": "2023-02-13", "primary_cat": "eess.IV", "cats": [ "eess.IV", "cs.CV", "cs.LG" ], "main_content": "2.1 Medical Image Synthesis Generative adversarial networks (GANs) [3] aim to model the distribution of real images given the input noise via a minimax game between a generator, G, and a discriminator, D. Where the G tries to generate synthetic images as close to the real images as possible whereas D tries to distinguish them apart. Conditional Generative adversarial networks (CGAN) [4] is a type of GAN that allows controlling the spatial distribution of the generated data by providing conditioning information. Medical AI researchers have leveraged these networks, whose goal is to generate large and diverse datasets for training and evaluating deep networks [5]. Hence, it enables a wide variety of applications in medical imaging, and in digital pathology in particular, such as image generation (Breast cancer [6], Glioblastoma [7], Colon cancer [8], CT scans from MRI [9], Skin lesion [10], Retinal fundi images [11]), image adaptation [12], image enhancement [13], and representation learning [14]. Yet, there are several challenges that need to be addressed in the generation of synthetic medical images. First, the images must have realistic texture, and second, they must be representative of a wide variety of tissue types and pathological conditions, such as the amount and location of cancer or immune cells. 2.2 Image Translation The image-to-image translation is a set of tasks that translate the source domain of images to the target domain, either given input-output image training pairs, such as pix2pix [15], pix2pixHD [16] or without any explicit correspondence between the images in the two sets such as CycleGAN [17]. These deep learning models are typically using a variant of image-conditional GANs [4]. One of these kinds of tasks is to insert a semantic map and translate it to an image based on the additional information, such as class labels, passed together with the image to the network during the training phase. Thus generating images from labels in a wide variety of applications and domains in an effective manner. 3 Between Generating Noise and Generating Images DANIEL N ET AL. 2.3 AI-assisted NSCLC diagnosis Non-small cell lung cancer (NSCLC) is the most common type of lung cancer, accounting for 85% of all lung cancer cases [18]. While recent development in Immunotherapy has shown promising results in treating NSCLC [19]. One of the most common ways to assess cancer\u2019s stage and characteristics is based on tissue scans. Those are mostly being decoded by pathologists. According to their diagnosis, the treatment plan is determined. For example, some patients can be treated with immunotherapy methods. 
These immunotherapy methods can be tremendously helpful for some patients but harmful to others, and they are also very expensive. Hence, there is an urgent need to identify responders and non-responders at an early stage [20]. Pathologists usually use IHC staining [21] to decide whether this treatment is beneficial or not. IHC slides emphasize the expression of Programmed Death-Ligand 1 (PD-L1), which is typically overexpressed by cancer cells. PD-L1 neutralizes white blood cells' activity, thereby causing the immune system to ignore the cancerous cells. Cancer cells therefore either express PD-L1 or do not, yielding two classes: NSCLC PD-L1 positive and NSCLC PD-L1 negative. The fraction of PD-L1 positive cells out of the total cancer cells is measured as the tumor proportion score (TPS) [22]. Its value divides the patients into 3 classes: (0%-1%, 1%-50%, 50%-100%). Pathologists estimate this value by looking at the WSI themselves [23, 24]. This estimation can be reliable when the case is clear-cut, but assessments near the clinical decision thresholds (around TPS = 1% and TPS = 50%) are considerably less consistent. AI methods can therefore provide easier and more robust assessments in a wide variety of applications, such as PD-L1 image classification [25], PD-L1 image segmentation [26], TPS severity classification [27], and other realizations in digital pathology [28, 29]. 3 MATERIALS AND METHODS 3.1 Study population and dataset 22 whole slide images (WSIs) from 19 patients were stained using the anti-PD-L1 antibody clone 22C3 (Dako) on a Ventana immunostainer following a harmonization procedure. The slides were scanned using a PANNORAMIC 250 Flash III (3DHISTECH) at 40X. All procedures performed in this study and involving human participants were in accordance with the ethical standards of the Rambam Medical Center institutional research committee, approval 0522-10-RMB, and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. For our analysis, we cropped a small set of 512 images out of 22 WSIs of NSCLC tissue samples, each of size 512×1024 pixels, for semantic labeling. These images were manually annotated by 4 trained and experienced researchers and were validated by an expert pathologist. Each pixel was assigned to one of four classes: NSCLC with PD-L1 expression (PD-L1 positive; n = 1281 polygons, 39.5M pixels), NSCLC without PD-L1 expression (PD-L1 negative; n = 871 polygons, 20.6M pixels), inflammation (n = 1209 polygons, 29.5M pixels), and Other (healthy tissue and air; n = 172 polygons, 134.6M pixels). In terms of the TPS distribution, this dataset is imbalanced: the images follow a bimodal distribution, with many images having a TPS of 0 (about 26.4% of the dataset) and many having a TPS of 1 (about 36.8% of the dataset). These images contain mostly NSCLC PD-L1 negative or mostly NSCLC PD-L1 positive cells, whereas images around the clinical decision thresholds (TPS = 0.01 and TPS = 0.5) are scarce (about 4% of the dataset). To avoid training bias, the images were manually split to build a non-biased training set (n = 360 images) and test set (n = 152 images). We used this dataset to model the generation of synthetic biopsy images.
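As a small illustration of how TPS can be derived from such pixel-level annotations, the following sketch computes a pixel-area proxy for the cell-based TPS definition and maps it to the three clinical classes. The integer class IDs and function names are assumptions for illustration only, not the paper's code:

```python
import numpy as np

POS, NEG, INFLAMMATION, OTHER = 0, 1, 2, 3      # hypothetical label IDs for the four classes

def tumor_proportion_score(mask: np.ndarray) -> float:
    """Pixel-area proxy for TPS: PD-L1 positive NSCLC area / total NSCLC area (positive + negative)."""
    n_pos = np.count_nonzero(mask == POS)
    n_neg = np.count_nonzero(mask == NEG)
    total = n_pos + n_neg
    return n_pos / total if total > 0 else 0.0

def tps_category(tps: float) -> str:
    """Map a TPS value to the three clinical classes used in the text."""
    if tps < 0.01:
        return "0%-1%"
    if tps < 0.5:
        return "1%-50%"
    return "50%-100%"

# Example on a random toy mask of the dataset's image size (512 x 1024 pixels).
rng = np.random.default_rng(0)
mask = rng.integers(0, 4, size=(512, 1024))
tps = tumor_proportion_score(mask)
print(f"TPS = {tps:.3f} -> class {tps_category(tps)}")
```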
3.2 Semantic segmentation metrics
To estimate the UNet++ [30] segmentation performances, we used the following metrics,

mIoU = \frac{1}{I \cdot C} \sum_i \sum_c \frac{TP_{i,c}}{TP_{i,c} + FP_{i,c} + FN_{i,c}} \quad (1)

wIoU = \frac{1}{I \cdot C \cdot S} \sum_i \sum_c \frac{s_c \cdot TP_{i,c}}{TP_{i,c} + FP_{i,c} + FN_{i,c}} \quad (2)

wPrecision = \frac{1}{I \cdot C \cdot S} \sum_i \sum_c \frac{s_c \cdot TP_{i,c}}{TP_{i,c} + FP_{i,c}} \quad (3)

wRecall = \frac{1}{I \cdot C \cdot S} \sum_i \sum_c \frac{s_c \cdot TP_{i,c}}{TP_{i,c} + FN_{i,c}} \quad (4)

tObjective = \frac{1}{\frac{1}{C} \sum_c \frac{2 \cdot TP_c}{2 \cdot TP_c + FP_c + FN_c}} - \frac{1}{4} \cdot \sum_c y_{o,c} \log(pr_{o,c}) \quad (5)

where the c index iterates over the different classes in the image, and the i index iterates over the different images in the dataset. p_{ct} denotes the number of pixels of class c classified as class t. s_c = \sum_t p_{ct} is the total number of pixels belonging to class c, and S = \sum_c s_c denotes the number of all pixels. pr_{o,c} denotes the predicted probability that observation o is of class c, and y is a binary indicator (0 or 1) of whether class label c is the correct classification for observation o. C is the total number of classes, and I is the total number of images. TP, TN, FP, and FN are classification elements that denote the true positive, true negative, false positive, and false negative areas of each image, respectively.

3.3 Image quality assessment metric
To estimate the image synthesis performance of pix2pixHD [16], we used the FID (Fréchet inception distance) similarity metric, which is considered the gold-standard metric to date. FID is a visual quality discriminator for comparing the quality of generated images to real images by comparing the feature vectors of the images in the feature space of a pre-trained Inception network [31]. It is based on the Fréchet distance between the two distributions of feature vectors, which measures how similar the two distributions are [32]. A lower FID score indicates that the generated images are more similar to the real images.

3.4 Training procedure
The updated model was trained and optimized using the Pytorch [33] framework on a single NVIDIA GeForce RTX A6000 GPU with 48GB GPU memory. During the training, different hyper-parameters were examined using the Adam solver [34] with beta1=0.5 and beta2=0.999, a minibatch of size 1, and a learning rate of 2e-4, keeping the same learning rate for the first 500 epochs and linearly decaying the rate to zero over the next 200 epochs. Weights were initialized from a Gaussian distribution with a mean of 0 and a standard deviation of 0.02. The optimization loss function contains two terms. The first, for the discriminator, is the average mean square error (MSE) of the discriminator's predictions between synthetic and real images. The second, for the generator, consists of the classic adversarial loss based on binary cross-entropy (BCE) and two feature-matching losses that force the output synthetic image to resemble the specific real image and thus keep the conditional features of the images. All the loss function elements were weighted with values of one.

3.5 The pix2pixHD formulation
In this work, we used pix2pixHD [16], which is a conditional GAN framework for image-to-image translation, to generate synthetic pathological images. The pix2pixHD is an extension of the pix2pix model [15] and generates high-resolution images with better visual quality.
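For concreteness, below is a minimal sketch of how the per-image, per-class terms in Eqs. (1)-(2) of Section 3.2 could be accumulated from predicted and ground-truth label maps. The function and variable names are illustrative, and the class weights s_c are computed here over the whole ground-truth set, which is one possible reading of the definitions above.

```python
import numpy as np

def iou_metrics(preds, gts, num_classes):
    """Accumulate mIoU and wIoU as in Eqs. (1)-(2) above.

    `preds` and `gts` are equally long lists of (H, W) integer label maps.
    """
    s_c = np.zeros(num_classes)                       # pixels per class
    for gt in gts:
        s_c += np.bincount(gt.ravel(), minlength=num_classes)
    S = s_c.sum()

    miou_sum, wiou_sum = 0.0, 0.0
    for pred, gt in zip(preds, gts):
        for c in range(num_classes):
            tp = np.count_nonzero((pred == c) & (gt == c))
            fp = np.count_nonzero((pred == c) & (gt != c))
            fn = np.count_nonzero((pred != c) & (gt == c))
            denom = tp + fp + fn
            iou = tp / denom if denom > 0 else 0.0    # class absent in both maps
            miou_sum += iou
            wiou_sum += s_c[c] * iou

    I, C = len(preds), num_classes
    return miou_sum / (I * C), wiou_sum / (I * C * S)
```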
This network has novel multiscale generators and discriminators, which contribute towards the stabilization and optimization of the training of conditional GANs [4] on high-resolution images, and thus aims to achieve state-of-the-art results in fine geometric image details and realistic textures. Particularly, for the generator G architecture, we used only a single G1, which focuses mainly on producing low-resolution images of size 512X1024 pixels based on global information, out of the decomposition of multiscale generators (G1 and G2). For the discriminator D architecture, we used two multiscale discriminators (D1 and D2) with the same architecture but working on different image scales, out of the decomposition of three discriminators (D1, D2, and D3). Hence, D aims to distinguish between a real and a synthetic image not only over the entire image, but also in the fine details and the different textures. As a result, G is forced to learn the true distribution of information on all scales, thus obtaining higher-quality images even in the smallest details. Hence, the objective of the pix2pixHD model is expressed as:

\min_G \left( \left( \max_{D_1,\dots,D_K} \sum_{k=1}^{K} L_{GAN}(G, D_k) \right) + \lambda \cdot \sum_{k=1}^{K} L_{FM}(G, D_k) \right) \quad (6)

where \lambda is a regularization parameter, and K is the number of discriminators that have an identical deep network structure but operate at different image scales. In our study, we used K = 2, which refers to the discriminators D1 and D2. L_{GAN}(G, D) is the conditional GAN loss and L_{FM}(G, D) is a feature matching loss; both are described in (7) and (8), respectively.

\min_G \max_D \; E_{s,x}[\log D(s, x)] + E_s[\log (1 - D(s, G(s)))] \quad (7)

where G is a generator and D is a discriminator. s represents the semantic label map, x is the real image, and G(s) is the generated image given the prior s. In the first term, the expectation E_{s,x} is over both the real pairs of semantic priors and images, and in the second term, E_s is over the semantic priors alone.

E_{s,x} \sum_{i=1}^{T} \frac{1}{N_i} \left[ \| D_i(s, x) - D_i(s, G(s)) \|_1 \right] \quad (8)

where D_i denotes the i-th-layer feature extractor of discriminator D, T is the total number of layers, and N_i denotes the number of elements in each layer.

3.6 Semantic labeling resolutions for Image Synthesis
In this work, we compared three different resolutions of the semantic labeling for the generation of synthetic images of IHC-treated lung biopsies. All the resolutions are based on the pix2pixHD model described in subsection 3.5. The only difference between them is the input mask that contains the conditions to generate the synthetic images. The three approaches considered in this work are the following:
• Polygons' mask is a typical mask of histology images containing only regional data.
• Polygons + Noise mask is a noisy mask of histology images containing regional data with random Gaussian noise.
• Polygons + Air + Cells mask is a single-cell mask of histology images containing air (non-tissue regions), single cells, and NSCLC feature region data.
Polygons mask creation only needs the manually annotated NSCLC feature classes (PD-L1 positive, PD-L1 negative, Inflammation, Other) to obtain the corresponding input mask. Polygons + Noise masks are superpositions of the Polygons' mask with random Gaussian noise, which is easy to generate automatically with a basic array programming library.
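As an illustration of this mask-generation step, the following is a minimal NumPy sketch of superposing sparse single-pixel noise on a polygon label mask. The function name, the extra `noise_label` class, and the use of uniformly sampled noise sites (rather than the exact Gaussian scheme used in the paper) are assumptions made for brevity; the mean spacing between noise pixels corresponds to the noise-frequency hyperparameter discussed in Sections 3.7 and 4.2.

```python
import numpy as np

def add_spatial_noise(polygon_mask: np.ndarray,
                      mean_spacing: int = 15,
                      noise_label: int = 4,
                      seed: int = 0) -> np.ndarray:
    """Superpose sparse single-pixel noise on a polygon label mask.

    Each pixel is switched to `noise_label` (an extra class not used by the
    polygons) with probability 1 / mean_spacing**2, so on average there is
    roughly one noise pixel per mean_spacing x mean_spacing block.
    """
    rng = np.random.default_rng(seed)
    noisy = polygon_mask.copy()
    noise_sites = rng.random(polygon_mask.shape) < 1.0 / mean_spacing ** 2
    noisy[noise_sites] = noise_label
    return noisy

# Example: a 512x1024 polygon mask with class labels in {0, 1, 2, 3}
polygon_mask = np.zeros((512, 1024), dtype=np.uint8)
noisy_mask = add_spatial_noise(polygon_mask, mean_spacing=15)
```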
Polygons + Air + Cells masks need in addition to Polygons annotations, the original RGB image information to extract the air and cells that forces the tissue mask structure to be similar to the original image. To extract air and cells from tissue images, we used classical computer vision methods to convert the images to grayscale and apply thresholds to extract air and cell pixels to distinguish between air and intracellular pixels. 3.7 Pipeline Architecture Our pipeline for generating synthetic biopsy Images builds upon the pix2pixHD model. While [16] uses instance-wise features in addition to labels as an input to image generation network G, we use the Gaussian random noise in addition to labels. Since NSCLC PD-L1 semantic label maps have a small number of classes and contain typically large and uniform polygons, random noise addition enables to challenge of the image generation process by avoiding repetitive texture effects, thereby achieving better image quality. 4 RESULTS 4.1 Comparison of different approaches To test the effect of adding noise to the semantic masks, we compared several image translation approaches for visual inspection of the generated histology synthetic images. The approaches included CycleGAN [17], pix2pix [15], and pix2pixHD [16] models. We compared three types of semantic masks: 1) with Polygons, 2) Polygons + Noise, and 3) Polygons + Air + Cells. Generated tissue image based on Polygons contains blur and repetitive artifacts due to the large smooth areas and can be explained by pix2pixHD fractionally-stride convolution architecture. When using Polygons + Air + Cells masks, masks that carry a lot of prior information, images have high similarity to the original images, therefore are more photorealistic, but not scalable for improving algorithms and existing AI models. 6 Between Generating Noise and Generating Images DANIEL N ET AL. Figure 3: IHC-stained of NSCLC synthetic images from semantic layouts of 512X1024 pixels. Visual comparison of three types of conditional Image-to-Image translation approaches, which were used for producing synthetic images, show that pix2pixHD outperforms CycleGAN and pix2pix models. On the other hand, introducing random noise as an additional label which is spread in the entire image spatially eliminates blur and repetitive artifacts and improves the synthetic tissue \ufb01ne details compared to the base polygons\u2019 image resolution, similar to the level of adding single-cell resolution labeling. Hence, in the context of the quality-scalability trade-off, Polygons + Noise masks help to provide not only high-quality images with tissue \ufb01ne details similar to the level of single-cell resolution labeling (Polygons + Air + Cells), but also add more control over the image. Therefore, we can conclude that Polygons + Noise masks allow for generating an easily more diverse set of high-quality images, and avoiding the time-consuming of manual image annotation. 4.2 Random noise frequency optimization To test the effect of noise frequency, eight different pix2pixHD models were trained with different mean distances between noise pixels. To test the similarity of the synthetic images we used InceptionV3 [31] and ResNet50 [35], and evaluated the performance of n = 152 synthetic images using a visual quality discriminator, based on Frechet Distance (FD) [36]. 
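The Fréchet distance computation used in this similarity comparison can be sketched as below. This is an illustrative implementation, assuming the (N, D) feature embeddings from a pre-trained backbone such as InceptionV3 or ResNet50 have already been extracted for the real and synthetic image sets.

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two sets of embeddings."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    diff = mu_r - mu_f
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):      # discard small imaginary numerical residue
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```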
Analyzing the results, we can observe that a mean length of 15 pixels between two noise pixels yields the highest similarity comparison of the synthetic images to real images on both architectures (shown in Fig. 4A). A mean length of 15 pixels is within the range of the characteristic frequency of the healthy cell to the cancer cell in the realization of NSCLC. Fig. 4B presents a similarity comparison of generated synthetic images from different semantic labeling resolutions (in terms of FID [32]). It can be shown that adding random noise to polygon-based masks is closer to the result based on a single-cell structure, than the synthetic images generated by polygon-based masks by a factor of 1.76. 7 Between Generating Noise and Generating Images DANIEL N ET AL. Figure 4: The effect of noise on image similarity (A) Eights different pix2pixHD models were trained with different spatial noise to determine the optimal noise frequency. Applying a visual quality discriminator, such as Fr\u00e9chet Distance (FD) based on ResNet50 and InceptionV3 deep architectures, on the same set of (n= 152) test real images, shows that a mean 15 pixels between noise pixels yield the highest similarity comparison of the synthetic images to real images. The horizontal bars illustrate the typical size of healthy (green) and cancer cells (yellow). (B) Adding random spatial noise with the optimal frequency of 15 compared to the ideal case of single-cell resolution (i.e. Polygons + Air + Cells), while the reference line represents the best similarity score of 12.62 can be achieved on the same test set based on control real images. The marginal improvement of adding noise is almost the same as in the case of adding real single-cell features. 4.3 Algorithmic Improvement and Turing Test State-of-the-art segmentation architecture, UNet++ [30], was trained on 100 real images followed by the [37] hyperparameters, as a baseline model to distinguish between the four types of tissue cells, such as NSCLC PD-L1 positive, NSCLC PD-L1 negative, In\ufb02ammation cells, and other cells. We show in Fig. 5A, that adding (n = 152) synthetic images to the training set, improved the unseen test sample (n = 100) performance with respect to all segmentation metrics, described by (1)-(5). For instance, the network, fed by synthetic images, achieves better segmentation by a factor of 36.8% and 17.8% than the baseline model, in terms of mIoU and wPrecision, respectively. In addition, we performed a Turing test where trained and experienced researchers were presented with both real and synthetic images, and Fig. 5B presents through the confusion matrices that our pipeline produces synthetic images reliable enough and with high quality similar to real images. This performance analysis is critical to the ability of the model network to be more robust and provide a reliable TPS for patient severity diagnosis. 4.4 Unbiasing of IHC-treated lung datasets One of the main challenges in harnessing AI for TPS predictions is the bias within the data. Fig. 6 demonstrates our ability to create synthetic images of any desired TPS. 5 CONCLUSION Arti\ufb01cial Intelligence has the potential to revolutionize digital pathology by automating certain tasks and increasing the speed and accuracy of diagnosis. The use of synthetic data is becoming increasingly important for the development and training of AI algorithms in digital pathology. 
This type of data can be used to train AI algorithms in a controlled and efficient manner, without the need for real patient data. This is particularly beneficial in the field of digital pathology, where access to high-quality, annotated data can be limited. With synthetic data, researchers can generate large amounts of data that can be used to train AI algorithms and evaluate their performance. Additionally, synthetic data can be used to test the robustness of AI algorithms and identify potential issues before they are deployed in a clinical setting.

Figure 5: AI validation and Turing test of the NSCLC synthetic images. (A) To test the effect of synthetic images on AI performance, a state-of-the-art architecture (UNet++) was trained on 100 real images and validated on a different set of 100 real images. Adding 152 synthetic images based on Polygons + Noise masks to the real training set (red) improves the semantic segmentation accuracy by over 17% compared to baseline test results (blue), and slightly less than adding the same amount of a control / different set of real images (yellow). (B) Two trained, experienced researchers were presented with both real and synthetic images. This figure presents the Turing test results. P1, Expert #1; P2, Expert #2; IoU, intersection over union; m, mean; w, weighted; t, train.

Figure 6: Controlling the tumor proportion score (TPS) of synthetic images of immunohistochemistry-treated lung biopsies. (A) NSCLC healthy tissue image (TPS of 0). (B) NSCLC inflammatory tissue image (TPS of 0). (C) NSCLC with tumor markers (TPS of 1%-50%). (D) NSCLC with tumor markers (TPS of 100%).

One of the main limitations in debiasing histological datasets using synthetic images is the ability to control the feature distribution in a precise manner. Conditional GANs, and in particular paired GANs, can provide such control, but unlike other domains, such as autonomous vehicles or face recognition, in the case of tissues generating the 'scene' is a challenge by itself. One way to control the features of the synthetic images is to define regions of cell types by using polygon-based semantic masks as inputs. However, this approach can lead to inherent artifacts that are the result of generating a pattern with a small scale (that is, the single-cell scale) from a smooth area input (the polygon that marks the area of the cells). In this work, we show that introducing single-pixel random noise with a mean distance that is within the typical scale of cells can remove these artifacts. We demonstrate that adding random noise is almost equivalent to adding the actual single-cell information itself. Therefore, our approach can use polygon semantic masks and noise to create images with any desired tumor proportion score. Moreover, these images are not only similar to the real ones in terms of similarity metrics but also to human experts, and can be used to improve AI performance. Our results demonstrate the ability to overcome the problem of biased datasets, such as the frequency of rare disease cases and cases that are at the critical thresholds of clinical decisions. In addition, our approach facilitates digital pathology AI development for histopathology diagnosis by improving AI models' performance and robustness and by helping to understand their failure cases.
ACKNOWLEDGMENT The authors would like to thank Tanya Wasserman, Tal Ben-Yaakov, Yair Davidson, and Yael Abuhatsera for their technical support and valuable discussions.", "introduction": "Synthetic images of tissues have great potential in facilitating Arti\ufb01cial Intelligence (AI) and machine learning for computational pathology and biomedical applications in general. The ability to control the distribution of scenarios and, by that, debiasing the dataset, allows a better diversity and representation of the training set and validation set [1, 2]. Hence, it enables the development of more accurate and reliable AI models to extract useful information for better diagnoses, clinical outcomes, and treatment decisions, especially in rare disease conditions. \u2217Corresponding author, e-mail: yoni.savir@technion.ac.il. 1Department of Physiology, Biophysics and System Biology, Faculty of Medicine, Technion Israel Institute of Technology, Haifa, Israel. 2Faculty of Industrial Engineering, Technion Israel Institute of Technology, Haifa, Israel. 3Faculty of Computer Science, Technion Israel Institute of Technology, Haifa, Israel. 4Faculty of Electrical Engineering, Technion Israel Institute of Technology, Haifa, Israel. 5Division of Pathology, Rambam Health Care Campus, Haifa, Israel. arXiv:2302.06549v1 [eess.IV] 13 Feb 2023 Between Generating Noise and Generating Images DANIEL N ET AL. Currently, there are three primary approaches exist for generating synthetic images. The \ufb01rst is the classic approach using a generative adversarial network, coined Vanilla GAN, [3] in which random noise is processed by the generator to yield synthetic histology images without any prior on the generated images. This approach requires real histology images to be processed by the discriminator during training. The second and third are related to images translation approaches, also known as unpaired and paired image-to-image translation approaches, using conditional generative adversarial network (CGAN) [4] in which they use prior knowledge by converting discrete semantic label map or other properties into RGB photo-realistic histology images. These approaches also require semantic label maps or other image information during training in addition to the real histology images. Fig. 1 illustrates both image translation approaches and the classic image generative one. These approaches result in quality-scalability tradeoffs. The unpaired approach allows for generating a large number of images but lacks the ability to control the cellular features of the image in a precise manner. Figure 1: Illustration of three types of generative approaches for producing synthetic images. (Top) Vanilla GAN, random noise is processed by the generator to yield synthetic histology images. On the other hand, image translation approaches (middle and bottom) use prior knowledge by converting discrete semantic labels into RGB photorealistic histology images. The unpaired image translation approach (middle), requires a batch of real images and a batch of semantic masks for the training procedure. While the paired image translation approach (bottom), constrains the images and semantic masks to be paired where each semantic mask is extracted from its corresponding histology image. In the case of tissues, both approaches have been harnessed with limited success. 
While the classic approach can result in photo-realistic images, the ability to control the distribution of objects within the images themselves (such as the location of the cell, blood vessels, etc.) is limited. When producing the masks that would be used as input, the resolution of the semantic labeling is critical (Fig. 2A). Typical masks of histology images contain only regional data (i.e. polygons that engulf regions of some cell types). The reason for that is that generating input masks that contain full single-cell information is challenging. Generating polygon input masks allows control over the cell types in the synthetic images, and allows scalability. However, polygon masks containing large smooth areas can result in repetitive artifacts and hinder the photorealism of the synthetic image. In this work, we show that introducing random noise in particular frequencies into polygon-based masks can improve dramatically the quality of synthetic images, resulting in an image quality that is almost as good as providing the single-cell structure (Fig. 2B). We test our pipeline on Immunohistochemistry (IHC)-treated lung biopsies Lung cohort using a three-fold validation approach: Image similarity, Turing test, and AI improvements. Our work demonstrates how synthetic images can be easily created from masks that contain only regional data. These results pave the way for an automated diagnosis of Non-Small Cell Lung Cancer (NSCLC) and can be utilized for other conditions with similar challenges. 2 Between Generating Noise and Generating Images DANIEL N ET AL. Figure 2: (A) In the case of Non-small cell lung carcinoma (NSCLC) the treatment choice is determined by the relative areas of three types of cell types that are annotated by pathologists manually: PD-L1 positive, PD-L1 negative, and In\ufb02ammation. Using these types of polygons maps as input for synthetic images allows control over the cellular features in mass. However, these smooth polygon regions pose a challenge for creating non-repetitive synthetic images. Adding detailed semantic masks can break the repetitive artifacts of the generated image. Yet, this fragmentation requires detailed knowledge of the actual cell distribution in the real images and therefore is not scalable. (B) Our pipeline uses an image translation methodology by generating a histology photorealistic image from a given semantic label mask. As an intermediate step, we add a random noise with different frequencies to the semantic mask, where the spatial noise frequency is a hyperparameter of the model. Next, we apply the semantic mask with the additional label to the image translation generator to produce the synthetic histology image." } ], "Eliel Aknin": [ { "url": "http://arxiv.org/abs/2304.07787v1", "title": "Harnessing Digital Pathology And Causal Learning To Improve Eosinophilic Esophagitis Dietary Treatment Assignment", "abstract": "Eosinophilic esophagitis (EoE) is a chronic, food antigen-driven, allergic\ninflammatory condition of the esophagus associated with elevated esophageal\neosinophils. EoE is a top cause of chronic dysphagia after GERD. Diagnosis of\nEoE relies on counting eosinophils in histological slides, a manual and\ntime-consuming task that limits the ability to extract complex\npatient-dependent features. The treatment of EoE includes medication and food\nelimination. A personalized food elimination plan is crucial for engagement and\nefficiency, but previous attempts failed to produce significant results. 
In\nthis work, on the one hand, we utilize AI for inferring histological features\nfrom the entire biopsy slide, features that cannot be extracted manually. On\nthe other hand, we develop causal learning models that can process this wealth\nof data. We applied our approach to the 'Six-Food vs. One-Food Eosinophilic\nEsophagitis Diet Study', where 112 symptomatic adults aged 18-60 years with\nactive EoE were assigned to either a six-food elimination diet (6FED) or a\none-food elimination diet (1FED) for six weeks. Our results show that the\naverage treatment effect (ATE) of the 6FED treatment compared with the 1FED\ntreatment is not significant, that is, neither diet was superior to the other.\nWe examined several causal models and show that the best treatment strategy was\nobtained using T-learner with two XGBoost modules. While 1FED only and 6FED\nonly provide improvement for 35%-38% of the patients, which is not\nsignificantly different from a random treatment assignment, our causal model\nyields a significantly better improvement rate of 58.4%. This study illustrates\nthe significance of AI in enhancing treatment planning by analyzing molecular\nfeatures' distribution in histological slides through causal learning. Our\napproach can be harnessed for other conditions that rely on histology for\ndiagnosis and treatment.", "authors": "Eliel Aknin, Ariel Larey, Julie M. Caldwell, Margaret H. Collins, Juan P. Abonia, Seema S. Aceves, Nicoleta C. Arva, Mirna Chehade, Evan S. Dellon, Nirmala Gonsalves, Sandeep K. Gupta, John Leung, Kathryn A. Peterson, Tetsuo Shoda, Jonathan M. Spergel, Marc E. Rothenberg, Yonatan Savir", "published": "2023-04-16", "updated": "2023-04-16", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "main_content": "2.1 Six-Food vs. One-Food Eosinophilic Esophagitis Diet Study (SOFEED) Trial The \u201dSix-Food vs. One-Food Eosinophilic Esophagitis Diet Study\u201d (SOFEED) was a multicenter, randomized, openlabel trial that consisted of two phases [19, 20]. The first phase included the randomization of patients with active EoE to one of two diets (one-food elimination diet [1FED] or six-food elimination diet [6FED]). Patients who continued to the second phase were assigned to 6FED or topical swallowed steroid treatment. In this work, we focus only on the first phase of the trial. Patients with active EoE (PEC \u226515) were randomly assigned to one of the two food elimination diets: 1FED or 6FED. The 1FED excludes only animal milk, and the 6FED excludes animal milk, wheat, egg, soy, fish and shellfish, and peanut and tree nuts. This phase of the study lasted approximately six weeks. All participants who completed the first phase underwent an endoscopy to determine their disease status. Patients with PEC less than 15 were considered to be in remission, whereas patients having a PEC that is greater than or equal to 15 were considered to have active EoE and were assigned to a stricter treatment in the second phase (which is excluded from this study). Information from each patient used in this study was collected at randomization and six weeks. In the SOFEED study, 129 patients started the trial; 67 patients were assigned to the 1FED, and 62 patients were assigned to the 6FED. Due to missing information about patients because of withdrawal or missing biopsies, we excluded 17 patients from the dataset (8 that were assigned to 1FED and 9 that were assigned to 6FED). An illustration of the first phase of the trial process is described in Fig. 1. 
2.2 Data Preparation
The dataset for this study has four sources: endoscopic observations score (EREFS), manual assessment of histology (PEC, EoEHSS), AI prediction of histology (38 histology parameters), and Eosinophilic Esophagitis Activity Index (EEsAI) patient-reported outcome (PRO) scores (symptoms questionnaire).

2.2.1 Endoscopy
Each subject underwent endoscopy at the beginning of the trial and at the end of the first phase of the trial (six weeks). The presence and severity of the endoscopic findings of esophageal edema, rings, exudates, furrows, and stricture were assessed and reported as EoE endoscopic reference scores (EREFS) [21].

2.2.2 Histology – Manual Assessment
Esophageal biopsies were procured from up to three locations (distal, proximal, middle) in the esophagus during the endoscopy; the biopsies were processed, embedded, sectioned, and H&E-stained. Pathologists evaluated the slides, quantified the peak eosinophil count (PEC), and performed the EoEHSS to score the severity (grade) and extent (stage) of each of seven features [4]. In each endoscopy, more than one biopsy could be sampled (between one to three) from different locations in the esophagus. In this study, we utilize the information from the location that exhibited the most severe features to represent the patient's histological features.

2.2.3 Histology – AI Prediction
In recent work, we implemented an AI model that segments and evaluates different EoE features from the esophageal biopsy whole slide image [5, 7]. We applied this system to extract additional 38 histological parameters. Several features aim to encapsulate the pathologists' clinical methodologies (e.g., PEC), and several are novel metrics that we developed such as Spatial Eosinophil Count (SEC), Peak Basal Zone (PBZ), and Spatial Basal Zone (SBZ). For each endoscopy, the biopsy with the most severe quantification of a given feature was chosen to represent each patient's histological features for downstream analyses.

2.2.4 Symptoms
The Eosinophilic Esophagitis Activity Index (EEsAI) patient-reported outcome (PRO) instrument, which is a validated symptom diagnosis questionnaire for EoE patients, was performed by the patients at randomization and at the six-week timepoint, and the results were added to our dataset [22]. The features contained the response to questions regarding difficulties during eating or while swallowing food.

2.2.5 Data Features Properties
In total, 59 patients were assigned to 1FED and 53 patients to 6FED. Each patient's data sample contains 17 pathology features from the manual assessment, 38 AI-based features, 5 endoscopy features, and 6 symptom features (66 features in total). All features are severity features, which means that when their value is higher, the patient is experiencing a more severe condition or more extensive disease.

2.2.6 Data Pre-Process
The clinical outcomes are defined as the difference between the features at the end of the trial (X_start) and the initial features (X_start) before receiving the treatment. Formally:

Y = X_{end} - X_{start} \quad (1)

To estimate the causal effects of the different outcomes, we use a common standardization (Z-score) approach:

z_{i,f} = \frac{y_{i,f} - \mathrm{mean}(y_f)}{\mathrm{std}(y_f)} \quad (2)

where z_{i,f} is the Z-score outcome of patient i taken from feature f, and y_{i,f} is the original outcome of patient i taken from feature f.
mean(y_f) and std(y_f) are the average and standard deviation of feature f's outcomes, calculated over all patients' samples, respectively.

Figure 2: Illustration of the two types of learners examined. The T-learner is based on two different models, where each model is dedicated to a different type of treatment and is trained separately, with its corresponding data, to predict the outcome. During inference, the patient's features are plugged into both models, and the treatment assignment is determined based on the model with the superior outcome. In the case of an S-learner, there is only one model that receives, in addition to the patients' data, the type of treatment that was given. During inference, the same model receives the treatment type as input. The treatment that results in a better outcome is the preferred one.

2.3 Average Treatment Effects (ATE)
In our case, as the treatment assignments are random, one can assume "Strong Ignorability". That is, there are no confounders that influence treatment assignment; thus, the potential outcomes are independent of treatment assignment. Furthermore, the "Stable Unit Treatment Value Assumption" (SUTVA) holds because an individual's treatment does not influence another patient's outcome. Moreover, because every patient has a probability of being assigned to every type of treatment (due to the random assignment), "Common Support" holds too. The average treatment effect (ATE) [23] definition is:

ATE_f = E[z_{i,f,T=1} - z_{i,f,T=0}] \quad (3)

where the operator E denotes expectation over all patients, and z_{i,f,T=j} is the outcome of feature f for patient i given treatment j. In our case, the treatment is 1FED or 6FED. As a patient does not receive both treatments at the same time, and because the causal identification assumptions hold, we separate expectations and calculate ATE over the different outcomes via the next equation:

ATE_f = E[z_{i,f} \mid T = 1] - E[z_{i,f} \mid T = 0] \quad (4)

For convenience, we use the following notation: T=1 refers to the 6FED treatment and T=0 refers to the 1FED treatment.

2.4 Treatment Assignment Policy
ATE represents the overall causal effects of the two treatments in different groups. This approach assumed the naïve policy for treatment assignment, where all patients were assigned the same treatment. To determine the best treatment strategy, we implemented a policy where individuals may be assigned to different treatments based on machine-learning model predictions. Particularly, for a given set of features X_start (before trial), a model is trained to predict the outcomes for the two types of treatment assignments, where the real outcomes z_{i,f} serve as the Ground Truth (GT). During inference, for a given patient's X_start features, the treatment is determined by the outcome with the superior effect. We examined two techniques [24] for this policy prediction (Fig. 2):
• S-learner: Train a single model using the real treatment assignments to predict the outcomes, where the treatment T is also an input to the model in addition to the features X_start. To predict treatment assignment, infer the trained model twice, once with treatment T1 as input and once with T6, and assign treatment based on the prediction with the better effect.
\u2022 T-learner: train two models; one is dedicated to T1 and is trained only with the data of patients who were assigned to 1FED during the randomized trial, and the second model is dedicated to T6 and was trained only with the data of patients who were assigned to 6FED. Each model is trained individually to predict the outcome given the features Xstart. To predict treatment assignment, we infer T1 model and T6 model with the same patient\u2019s Xstart features as an input and assign treatment based on the prediction with the superior effect. Figure 3: Policy Value Calculation. Each patient is assigned to a treatment based on a given policy. The policy could be de\ufb01ned by a causal learner using the patient\u2019s initial features, or it could be a naive policy that does not take the initial status of the patient before the trial into account. To evaluate the policy, we calculate the policy value by averaging the outcomes based on the samples that contain this information. More speci\ufb01cally, the outcomes that stem from the actual treatment assignments are equivalent to the given policy\u2019s treatment assignments. For both learners, we trained various machine learning models. The \ufb01rst two are Decision Tree (DT) [25] and XGBoost [26]. Both tree-based models can be trained even when the data have missing samples (i.e., some feature information is none), as occurs in our case. In addition, we trained Multi-Layer-Perceptron (MLP) [27] and Support vector machine (SVM) [28] as well. In these cases, the missing samples were \ufb01lled with the average value of the corresponding feature in the training set. We trained both learners with different architectures and various hyperparameters and reported the best results. Each training process was performed with a 6-fold cross-validation (CV) [29], wherein each CV iteration we train the model for policy prediction using a training set and assign treatment on the validation set based on the trained model. By this technique, we eventually yield a treatment assignment prediction to the model over the entire dataset, while each prediction was achieved by a model that was not trained on the observed sample. Table 1: Average treatment effects (ATE) results for peak eosinophil count (PEC) and EoE-Active outcomes. PEC-AI PEC-Manual EoE Active-AI EoE Active-Manual ATE -13.55 -9.03 0.05 0.02 CI 95% [-46.45, 15.11] [-27.09, 9.92] [-0.16, 0.25] [-0.16, 0.19] P-value 0.202 0.168 0.318 0.405 2.5 Policy Value The policy value is the average outcome calculated over the patients that were assigned to treatment based on a given policy. Since the subject received only one of two treatments, we calculate the policy value by referring only to the subjects whose treatment assignment was identical to the actual randomized trial assignment, enabling access to the 6 EoE Dietary Treatment Assignment using Causal Learning AKNIN ET AL. corresponding outcome values (Fig. 3). We calculated the policy value for different treatment assignment policies and compared them: \u2022 ML Policy: A policy that is predicted by a learner and is based on machine learning models. We trained S-learner and T-learner over different models and evaluated them based on their policy values. \u2022 PEC Policy: A naive policy that is based on PEC only, which is the gold-standard histologic metric for diagnosing EoE. 
We examined all possible different thresholds of PEC values at the randomization timepoint of the trial (Xstart) and determined the treatment assignment for each side of the threshold. \u2022 6FED Policy: A baseline policy where all patients receive the 6FED treatment regardless of their initial condition. \u2022 1FED Policy: A baseline policy where all patients receive the 1FED treatment regardless of their initial condition. \u2022 Random Policy: A baseline policy where each patient treatment is assigned randomly. We performed this policy assignment 1,000 times with different random seeds and reported the policy value statistics. Table 2: Diverse Policy Values for PEC Reduction and Effectiveness. Method PEC-AI Decrease PEC-Manual Decrease Effectiveness-AI Effectiveness-Manual 1FED Policy 27.2 25.1 27.1% 35.6% 6FED Policy 14.8 15.9 32.1% 37.7% Random Policy 21.4 20.4 29.4% 36.7% CI 95% [9.1, 34.7] [12.5, 28.6] [20.7%, 38.2%] [28.5%, 44.2%] PEC-AI Baseline 22.3 24.7 30.4% 36.8% PEC-Manual Baseline 24.5 18.2 30.9% 38.3% S-Learner Policy: MLP 38.3 32.5 41.4% 49% XGBoost 35.5 29.7 32.1% 41.3% T-Learner Policy: MLP 45.2 36.5 46.4% 50.1% XGBoost 45.9 39.9 52% 58.4% 3 Results 3.1 Overall Treatment Assignment This study considers two types of outcomes. The \ufb01rst outcome metric is the decrease in the PEC score due to the treatment, which is a continuous metric. The second outcome metric is treatment effectiveness the change in the patient activity (that is whether PEC is lower or above 15), which is a binary metric. The average treatment effectiveness over all the patients is the percentage of patients that responded to the treatment. For example, a treatment that reduces the PEC score from 80 eosinophils to 60 eosinophils would have a PEC decrease of 20 but without a change in activity. In comparison, a treatment that reduces the PEC score from 20 eosinophils to 10 eosinophils would have a PEC decrease of 10 eosinophils and a change in activity. We use two methods to assess the PEC score of the patient (which determines both outcome metrics). The \ufb01rst is using the pathologist\u2019s manual assessment and the second is using the AI-based automated assessment [5, 7]. Overall, we consider four metrics for the treatment outcome: PEC-AI decrease, PEC-manual decrease, Effectiveness-AI, and Effectiveness-manual We calculated the ATE for these four outcomes (Table 1). The results show that the treatments reduce the EoE activity but in all cases, there is no signi\ufb01cant advantage in assigning one treatment over the other. That is, assigning 6FED all the time has no advantage over assigning 1FED all the time (and vice versa), in terms of ATE. 3.2 Personalized Treatment Assignment Next, we trained the different ML models, performed the learners, and evaluated them by calculating their policy values. Our results show that the T-learner model (which consists of two XGBoost sub-modules) has the best performance in all four types of outcomes (Table 2). To evaluate the signi\ufb01cance of any policy, it is crucial to estimate its performance with the effect of a random policy. Assigning treatments in a random fashion results in treatment effectiveness distribution with 95% con\ufb01dence intervals of [20.7% 38.2%] and [28.5% 44.2%], using AI-based PEC or manual PEC, respectively 7 EoE Dietary Treatment Assignment using Causal Learning AKNIN ET AL. Figure 4: The treatment effectiveness, the percentage of patients that respond to the treatment for different policies. 
The graph compares four different policies. 1FED policy (blue) and 6FED policy (orange) are the naive approaches where all patients are assigned to the same treatment. The PEC policy (grey) is based only on the initial PEC (at the beginning of the trial), where patients with high PEC were assigned 6FED and the rest by 1FED. The causal (ML) policy (yellow) is the trained T-learner model. The PEC that characterizes the patients\u2019 outcome can be determined using manual assessment or using an automated AI platform (right and left, respectively). The green-shaded area is the distribution (95% CI) of the effectiveness of a random policy where 1FED or 6FED are assigned randomly. All policies overlap with the random policy con\ufb01dence interval, except for our T-learner which achieved signi\ufb01cantly better (pval<0.001) performances. (Table 2, Fig. 4). 1FED only, 6FED only, and PEC-based policy provide treatment effectiveness that is not signi\ufb01cantly better that a random assignment (Fig. 4). However, our ML policy provides treatment effectiveness of 52%, in the case of AI-based PEC, and 58.4% in the case of manual-based PEC. That is, our results provide a policy strategy that yields a signi\ufb01cantly better (p-val<0.001) treatment effectiveness. 3.3 Important Features The best casual model is the T-learner, which consists of two models; one is dedicated to the 1FED outcome, and the second to the 6FED. We found that the best models were based on XGBoost (Table 2), which consists of a features attention mechanism that predicts the input features gain and indicates their importance. To gain insight into the histological markers that are signi\ufb01cant for the treatment assignment, we compared the signi\ufb01cant features of the 1FED and 6FED models, whether the PEC is assessed manually or is AI-based (Fig. 5). The only feature that is signi\ufb01cant in all the cases is a histological feature that is a proxy to the spatial distribution of regions with high eosinophil not-intact count (\"PENIC spatial\"). Another common feature measures the density of the basal zone region in the whole slide image (SBZ). Interestingly, previous studies demonstrated a signi\ufb01cant correlation between PEC and SBZ, and that SBZ is informative in predicting remission [7]. 8 EoE Dietary Treatment Assignment using Causal Learning AKNIN ET AL. Figure 5: Top-5 signi\ufb01cant features of the T-learner modules. The main features that are signi\ufb01cant in all cases are the spatial distribution of regions with high eosinophil not-intact count (\"PENIC spatial\"), and the density of basal zone region in the whole slide image (SBZ). Abbreviation legend: PEC Peak Eosinophil Count, SEC Spatial Eosinophil Count, PENIC Peak Eosinophil Not-Intact Count, SBZ Spatial Bazal Zone, CR Connected Region, EI Esophageal In\ufb02ammation. See results section (C) for a detailed description. Other histological features had high attention in the ML model prediction: Spatial Eosinophil Count (SEC) is a score that represents the distribution of eosinophils\u2019 appearance within the slide, whereas peak eosinophil count (PEC) is a local representation of the densest region. PEC-Mean and BZH-Mean are the average scores of the high-power \ufb01elds (HPFs) examined over the biopsy slide regarding eosinophils and basal cells respectively. A connected region (CR) is a group of neighboring HPFs that each one of which contains eosinophils. Each CR size is measured as the number of HPFs within it. 
PEC-Mean-CR-size and PEC-Max-CR-Size represent the average size of the CRs in the slide and their maximal size respectively. Not all important features were based on AI; a few metrics assessed manually were important as well, such as EI-Stage and EREFS-Furrows. The EoEHSS score of EI-Stage is assessed manually by the pathologist and represents the distribution of the eosinophils within the slide. EREFS-Furrows is a score that is based on the appearance of vertical lines in the esophagus and is measured during the endoscopy. 4 Conclusions Personalized treatment planning requires comprehensive patient information and the tools to analyze and gain insights from the data. EoE is one of many conditions, such as many types of cancer, that its diagnosis and treatment planning relies heavily on histological examination. Traditional manual assessment of whole histological slides is a laborious, time-consuming, and somewhat subjective task and as a result, the number of features that determine the diagnosis is often limited. This is also the case for EoE where the gold standard for EoE diagnosis is the peak eosinophil count. One of the promises of AI in digital pathology, besides automating manual tasks, is the ability to process the entire WSI and infer a large number of histological markers that can provide the \"histological \ufb01ngerprint\" that can facilitate personalized decision-making. While in previous studies we developed the AI platform that extracts histological biomarker that improves diagnosis [5, 7], here we show that using these features and causal learning improve treatment planning. Diet is an important part of the treatment plan for EoE, as many patients have food triggers that can exacerbate symptoms. Choosing how strict the elimination should be for each patient is critical to treatment success. In this study, we examined the effect of two types of diet treatments on EoE patients. We used the data from the SOFEED trial that included endoscopy scores, manual histological scores, and symptom information. We further enlarged the dataset by applying the tissue WSIs to our AI system, achieving spatial and local information on different EoE features within the tissue. The average treatment effect (ATE) analysis of the two different diets provided in the trial did not show any signi\ufb01cant preference for one treatment over the other and is consistent with previous \ufb01ndings [20]. Assigning the same treatment to all patients yields effectiveness, that is the percentage of the patients that respond to the treatment of 35.6% and 37.7% for 1FED and 6FED treatments, respectively. The policy where the treatment assignment is based on PEC only 9 EoE Dietary Treatment Assignment using Causal Learning AKNIN ET AL. gives an effectiveness of 38.3%. Importantly, all these three approaches do not provide signi\ufb01cantly better results compared to a random diet assessment. A random diet treatment results in an average effectiveness of 36.7% with 95% con\ufb01dence intervals of [28.5% 44.2%]. Our T-learner provides a signi\ufb01cant improvement in treatment with an effectiveness of 58.4% Our results show that the T-learner had better performance than S-learner in all outcomes. It emphasizes the importance of training two different models dedicated to each treatment individually, rather than training one model dedicated to both treatments and distinguished only by a negligible treatment feature as an input. 
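As a concrete illustration of this point, the following is a minimal sketch of a T-learner built from two XGBoost regressors, together with the matched-sample policy-value evaluation described in Sections 2.4-2.5. The function names and the assumption of a "larger is better" outcome (e.g., PEC decrease) are illustrative, not the study's actual code; XGBoost is used here partly because its tree models natively handle missing feature values.

```python
import numpy as np
from xgboost import XGBRegressor

def fit_t_learner(X, y, t):
    """Fit one outcome model per diet (t == 0: 1FED, t == 1: 6FED)."""
    model_1fed = XGBRegressor().fit(X[t == 0], y[t == 0])
    model_6fed = XGBRegressor().fit(X[t == 1], y[t == 1])
    return model_1fed, model_6fed

def assign_treatment(model_1fed, model_6fed, X):
    """Assign each patient the diet with the superior predicted outcome."""
    return (model_6fed.predict(X) > model_1fed.predict(X)).astype(int)

def policy_value(policy_t, actual_t, y):
    """Average outcome over patients whose trial assignment matches the policy."""
    matched = policy_t == actual_t
    return float(y[matched].mean())
```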
Moreover, XGBoost showed better results than the other machine learning models regarding all four clinical outcomes as well. When studying our best model we can identify the important features that were taken into account to train and learn the models. The most signi\ufb01cant features for the treatment assignment are not manual histological features but rather features that were extracted from our digital pathology AI system. The most common one that was relevant in all models was based on the spatial distribution of not-intact eosinophils and was extracted from AI as well. An additional important feature extracted by the AI tool is the spatial distribution of basal cells, this feature\u2019s importance has been previously demonstrated in previous work [7]. Interestingly, the most important features involve measures of density over the entire WSI rather than local features which is consistent with the observation that their global, spatial, and histological features contribute to the EoE diagnosis [6]. This work highlights the importance of systematically analyzing the distribution of biopsy features over the entire slide image and integrating them together with casual learning to provide better treatment planning. Our approach can be used for other conditions that rely on histology for diagnosis and treatment. Acknowledgment We thank Tanya Wasserman for fruitful discussions. This work is supported by the Cincinnati Children\u2019s-Technion Bridge to Next Gen Medicine Grant.", "introduction": "Eosinophilic esophagitis (EoE) is characterized by eosinophil accumulation in the esophagus and is essentially caused as a result of food sensitivity. EoE is a clinicopathologic disorder, and its diagnosis relies on the assessment of patient symptoms by a physician which includes chest pain and dysphagia in adults, vomiting, failure to thrive, and abdominal pain in children [1]. In addition, patients undergo an esophagogastroduodenoscopy (EGD) during which several esophageal biopsies are procured. The tissue is \ufb01xed in formalin, processed and embedded in paraf\ufb01n, sectioned onto slides, and subjected to hematoxylin and eosin (H&E) staining [2]. A pathologist analyzes the biopsies to assess eosinophil in\ufb01ltration and other histopathologic features such as abnormalities in the esophageal epithelium, lamina propria, and if present, muscularis mucosae. The gold standard for active EoE diagnosis requires that the individual\u2019s esophageal biopsy exhibit a peak eosinophil count (PEC) of greater than or equal to 15 eosinophils per 400X high-power \ufb01eld (HFP) in the esophageal epithelium [3]. The PEC is identi\ufb01ed by searching for the HPF in the whole slide image (WSI) that has the greatest number of eosinophils. Other microscopic features of EoE are assessed and quanti\ufb01ed during pathology diagnosis by the EoE histology scoring system (EoEHSS) [4], which assesses the severity and extent of eight features within the tissue. In recent works, we introduced an arti\ufb01cial intelligence (AI) system that predicts EoE activity by analyzing the eosinophil distribution within the entire WSI [5, 6]. An improved AI system predicts other EoE features and extracts more detailed information from the entire WSI [7]. Currently, the main treatments are used to reduce the many discomforts that patients experience daily. They include diet, drugs, and esophageal dilation [8]. 
Dietary therapy consists of at least three different possibilities: elemental diet, allergy test-based diet, and empiric elimination diet [9]. The elemental diet involves the use of amino-acid-based liquid formulas. Although this diet is highly effective (90% response in children and 70% in adults), it is infrequently accepted by patients because of its dif\ufb01culty to implement. The allergy test-based diet uses a combination of skin prick and patch tests to identify trigger foods, followed by elimination of the identi\ufb01ed foods from the diet; this diet shows a 70% response rate for children but a low response rate for adults. Finally, empiric elimination diets may exclude up to six of the most allergenic foods, followed by gradual reintroduction of each food. Such diets show a reasonable response rate in adults [10]. Having an improved method to identify the causative food(s) for an individual\u2019s EoE would be greatly helpful because the success of a particular diet in inducing disease remission can only be assessed by endoscopy and biopsy, which has risks due to general anesthesia. Drug treatment for EoE patients includes corticosteroids such as prednisolone or methylprednisolone. Studies demon- strated a high response rate for this treatment, but the recurrence of symptoms and eosinophilic in\ufb01ltration is usually observed. Almost 40% of patients with clinical EoE features respond to proton pump inhibitor (PPI) therapy, yet upon cessation of treatment, the disease recurs [11]. EoE patients usually develop esophageal strictures; therefore, endoscopic dilation can reduce pain and thus improve patients\u2019 daily life, although the treatment does not reduce in\ufb02ammation [12]. Causal learning has become increasingly important in personalized treatment planning due to its ability to identify the causal relationships between different factors that contribute to a person\u2019s health outcomes. By understanding the causal mechanisms underlying a patient\u2019s condition, clinicians can develop personalized treatment plans that are tailored to the speci\ufb01c needs and circumstances of that individual [13, 14, 15]. For example, comprehensive geriatric assessment (CGA) has been shown to improve the quality of life for older people with cancer who are starting systemic anti-cancer treatment [16]. Another example is a method that aims to estimate the conditional average treatment effect (CATE) on disability progression in individuals with multiple sclerosis using a deep learning model [17]. The model accurately predicts responders and non-responders to anti-CD20 antibodies, making it possible to adapt the treatment personally. In another paper, personalized treatment was used to reduce the incidence of obesity and obesity-related diseases [18]. The researchers used meta-algorithms to estimate the personalized optimal decision on alcohol, vegetable, high-caloric food, and daily water intake respectively for each individual. The results showed that personalized treatment based on the meta-algorithms has better effectiveness to reduce obesity levels. In this paper, our aim is to determine the diet treatment assignment most likely to induce disease remission for each individual EoE patient using information derived from their condition prior to the implementation of any diet. We use information from the \u201dSix-Food vs. One-Food Eosinophilic Esophagitis Diet Study\u201d (SOFEED) randomized, open-label trial (Fig. 
1) to identify which diet is most suitable for each patient in order to increase the chance of inducing disease remission. Speci\ufb01cally, we examine the causal effect of these treatments on the clinical outcomes of PEC levels and EoE disease activity, each as determined by either pathologists or an AI system. Additionally, we utilize other EoE 2 EoE Dietary Treatment Assignment using Causal Learning AKNIN ET AL. attributes, such as EoEHSS features, AI features, EEsAI symptoms questionnaire, and endoscopic reference score, as prior (pre-diet) information about the patients. We calculate the treatment effect of each diet when applied to all patients equally using the standard causal method of average treatment effect (ATE). Furthermore, we show how personalizing the treatments achieves an improved effect. The study was done using different methods and models and indicates that using machine learning models provides better performance than random assignment. Figure 1: High-level trial pro\ufb01le. In this work, we focus on the \ufb01rst phase of the Six-Food vs. One-Food Eosinophilic Esophagitis Diet Study (SOFEED) trial. This study explored the effectiveness of two types of food elimination diets in treating eosinophilic esophagitis (EoE): a one-food elimination diet (1FED) and a six-food elimination diet (6FED). Only patients that completed the trial are included in our study. 112 patients were assigned randomly to one of the two treatments, where 59 patients were assigned to the 1FED, and the remaining 53 were assigned to 6FED. We utilize the information that was gathered from all patients at the beginning of the trial to predict treatment outcomes. This information included 66 features that were extracted from a patient-reported outcomes instrument (Eosinophilic Esophagitis Activity Index, EEsAI), endoscopic observations (EoE endoscopic reference score, EREFS), and histologic features of esophageal biopsy samples." } ], "Ariel Larey": [ { "url": "http://arxiv.org/abs/2306.12188v1", "title": "Facial Expression Re-targeting from a Single Character", "abstract": "Video retargeting for digital face animation is used in virtual reality,\nsocial media, gaming, movies, and video conference, aiming to animate avatars'\nfacial expressions based on videos of human faces. The standard method to\nrepresent facial expressions for 3D characters is by blendshapes, a vector of\nweights representing the avatar's neutral shape and its variations under facial\nexpressions, e.g., smile, puff, blinking. Datasets of paired frames with\nblendshape vectors are rare, and labeling can be laborious, time-consuming, and\nsubjective. In this work, we developed an approach that handles the lack of\nappropriate datasets. Instead, we used a synthetic dataset of only one\ncharacter. To generalize various characters, we re-represented each frame to\nface landmarks. We developed a unique deep-learning architecture that groups\nlandmarks for each facial organ and connects them to relevant blendshape\nweights. Additionally, we incorporated complementary methods for facial\nexpressions that landmarks did not represent well and gave special attention to\neye expressions. We have demonstrated the superiority of our approach to\nprevious research in qualitative and quantitative metrics. 
Our approach\nachieved a higher MOS of 68% and a lower MSE of 44.2% when tested on videos\nwith various users and expressions.", "authors": "Ariel Larey, Omri Asraf, Adam Kelder, Itzik Wilf, Ofer Kruzel, Nati Daniel", "published": "2023-06-21", "updated": "2023-06-21", "primary_cat": "cs.GR", "cats": [ "cs.GR", "cs.AI", "cs.CV" ], "main_content": "Video-driven facial animation, particularly in the context of video retargeting / motion capture, is a challenging task that has gained significant attention in recent years. One of the primary objectives of video-driven facial animation is to automatically transfer facial expressions from videos onto 3D characters. Hence, it enables a wide variety of product applications in the virtual reality and augmented reality eras. In particular, in social media, gaming, movies, music clips, and video conference scenarios, driving a 3D avatar gives the illusion of the real world and improves the viewer and user experience. Several techniques and approaches have been proposed to realistically address the complexities of capturing and transferring facial expressions. In this section, we present a review of the relevant literature and highlight the key contributions and advancements in the field. 2.1 Facial Expressions Animation Transfer The emergence of deep learning has revolutionized many areas of computer vision and computer graphics, including facial animation transfer. Many researchers have leveraged deep neural networks, generative adversarial networks, and 2D/3D facial landmarks for facial animation retargeting, whose goal is typically to capture the facial performance of the source actor and then transfer the expression to a target character. [2] suggests a system that learns to regress blendshape values based on 3D facial landmarks. This approach used a paired database of 2D images and user-specific blendshapes for the training process. [3] developed a novel deep model based on a generative adversarial network (GAN) [4] that transfers the captured source facial expressions to a different actor, thus allowing for personalized retargeting. Furthermore, [5] suggests a deep learning solution that regresses a displacement map to predict dynamic expression based on inferring accurate 2D facial landmarks and the geometry displacements from an actor video. [6] derived a system that learns the dynamic rigidity of prior images from 2D facial landmarks and motion-guided correctives [7]. In addition, [8] developed a method for 2D-3D facial expression transfer by estimating the rigid head pose and non-rigid face expression from 2D actor facial landmarks using an energy-based optimization solved as a non-linear least-squares problem. [9] introduced a method that combines optical-flow estimation with a mesh deformation model to establish correspondences between the given actor video and the target 3D character. Besides, other approaches have recently used domain transfer methods such as [10, 11] to animate any 3D character from human videos. Specifically, the method of [11] learns to predict the facial expressions in a geometrically consistent manner and relies on the 3D rig parameters. This requires a large database of synthetic 2D character images aligned to a 3D facial rig [12, 13, 14, 15]. In contrast, the method of [10] learns to transfer animations between distinct 3D characters without consistent rig parameters or any engineered geometric priors.
2.2 Facial Expressions Animation Synthesis Recently, there has been a shift towards 3D-based approaches for facial expression synthesis. With the availability of depth sensors and advanced 3D modeling techniques, researchers have explored methods to capture and represent facial expressions in three dimensions. This has led to the development of more accurate and detailed facial expression models, such as 3D Morphable Models (3DMMs) and blendshape-based models, which enable more realistic retargeting of facial expressions onto different actors [16, 17, 18, 19]. 3 Method 3.1 Preliminaries Our method relies on datasets with reasonable facial expressions and natural head poses, similar to those achieved by our approach. The uniqueness of this process is that the main training is performed via only one character, which is required to be a polygonal mesh of fixed topology that deforms to different blendshape targets linearly. Moreover, we assume the character is a realistic avatar with human facial geometry to reduce the domain gap between the training character and the inferred real actors. The blendshape coefficients predicted during inference could also be applied to other characters, even to stylized avatars with unnatural geometries. Yet, we assume these characters support the same blendshape targets as the original character used for training. In addition, we assume these blendshape targets are constructed with a semantic meaning. Furthermore, our methodology does not require any temporal information in either domain. On the one hand, we do not use any information regarding the character's animation or continuously rendered frames for training. On the other hand, our pipeline can work during inference on actors' frames that lack temporal relation. Figure 2: Overview of the Facial Expression Re-targeting inference pipeline. The full solution includes sub-modules applied to real footage, such as face detection on the given video frame and extraction of its face landmarks as a pre-processing phase. Then, both face images and landmarks go through an alignment process. The aligned face and landmarks are the input for deep convolutional neural networks (the landmarks-to-blendshape-weights network and complementary networks) that predict the corresponding blendshape weights of the given character model. The predicted weights are post-processed with the corrective expressions method (Reasonable Combinations) and stabilized based on previous frames' information (exponential moving average). Finally, a character is rendered from the real-footage head pose and the predicted blendshape weights to demonstrate the animated 3D character. 3.2 Pipeline architecture Our pipeline comprises several building blocks that process a facial image and predict its corresponding blendshape weights. The main route starts by detecting the face boundaries using a face detection model [20] and cropping the image based on the bounding box. Next, facial landmarks are extracted by a pre-trained model as well [20]. As a final pre-process step, the landmarks are aligned into a frontal position at a resolution of 128×128 pixels by an sRT transformation. The aligned landmarks are used as the input to a dedicated deep model that predicts the blendshape weights, which serve as the coefficients of the blendshape linear combination. As a post-process, we fine-tune the predicted weights to verify that they are plausibly visible.
To accomplish that, we constructed in advance an array (coined the Reasonable-Array) that covers all blendshape target pairs. The Reasonable-Array provides a binary indication of whether each pair of targets may be enabled together. When a pair of targets is not reasonable, the weight prediction of the smaller target is zeroed. Furthermore, when the input frames are part of a video sequence, they are processed by an exponential moving average operation to smooth the temporal dynamics. Moreover, the head pose of the actor is predicted via Hope-net [21] to provide rendering orientation knowledge. Yet, some targets are prone to errors when predicted using only landmarks. Detecting eye expressions from landmarks is a challenging task in which the landmark prediction errors propagate to the final predictions. Moreover, for some blendshape targets, such as the Puff and Sneer expressions, the 68 standard landmarks fail to cover the relevant facial regions. Subsequently, the model mispredicts the corresponding facial expressions. Thus, to achieve weight prediction over the entire set of blendshapes, complementary modules are used in the challenging cases where the landmarks model fails. In these special scenarios, the cropped images are aligned to a 128×128-pixel resolution and serve as the input to the complementary modules. Finally, the predicted blendshape weights, face bounding box, and head pose are applied to a 3D-mesh rendering procedure. Fig. 2 illustrates the overall pipeline for inferring a 2D image, predicting its blendshape weights, and re-targeting them to a 3D mesh. Figure 3: Landmarks-to-blendshape-weights network architecture. The architecture stages: (1) separating the landmarks into facial regions, (2) feature extraction for each region by 1D-convolution layers, and (3) grouping and connecting the regions to the relevant blendshape weights. 3.3 Data Preparation The main deep learning model was trained using a single 3D character. The objective of the data preparation phase is to create 2D landmarks of the character in various head poses and reasonable expressions while maintaining its blendshape weights as the ground truth for the training procedure. 3.3.1 Synthetic Data Pre-process First, we learn the distribution of natural head poses by predicting the orientation of real-life video frames that reflect the use-case scenario. The head pose prediction is performed by Hope-net [21], and the pose of each frame is stored in a collection of realistic head poses. Furthermore, the procedure requires manually creating in advance a Reasonable-Array that is adjusted to the character and defines which blendshape pairs may be enabled simultaneously. 3.3.2 Synthetic Data Creation To create pairs of landmarks and ground-truth blendshape weights for the training procedure, we start by rendering the 3D character into 2D images in various poses and expressions. The head pose is randomly sampled from the realistic head poses collected in the pre-processing phase. The expression of each frame is generated randomly, with the following limitations (a minimal sketch of this sampling is given after this subsection): (1) no more than five blendshapes are active, where active weights are weights with values larger than zero; (2) each active blendshape weight ranges in (0,1]; (3) all active weight combinations are reasonable, based on the Reasonable-Array. Next, we process the high-resolution rendered images, similar to the process done in the inference pipeline. Still, in this case, it is performed over the rendered character images and not on real actor images. First, we obtain the bounding box of the character's face, extract its facial landmarks, and finally align the landmarks into a frontal 128×128-pixel resolution.
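The random expression sampling described above can be illustrated with a short sketch. The function name, the rejection-style selection loop, and the use of NumPy are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def sample_expression(reasonable, n_blendshapes=62, max_active=5, rng=None):
    """Draw a random 'reasonable' blendshape vector for rendering one training frame.

    reasonable: (n_blendshapes, n_blendshapes) boolean array, the Reasonable-Array;
    reasonable[i, j] is True when targets i and j may be active together.
    """
    rng = rng or np.random.default_rng()
    weights = np.zeros(n_blendshapes)
    n_active = rng.integers(1, max_active + 1)           # limitation (1): at most five active targets
    active, candidates = [], list(range(n_blendshapes))
    rng.shuffle(candidates)
    for idx in candidates:
        if len(active) == n_active:
            break
        if all(reasonable[idx, j] for j in active):      # limitation (3): reasonable combinations only
            active.append(idx)
            weights[idx] = 1.0 - rng.uniform(0.0, 1.0)   # limitation (2): active weight in (0, 1]
    return weights

# Example with a permissive Reasonable-Array (all pairs allowed).
weights = sample_expression(np.ones((62, 62), dtype=bool))
```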
3.3.3 Real Footage Data Creation Based on the real-actor images dataset, specific blendshape targets not accurately reflected by the standard landmarks are predicted differently. The two eye-blinking weights are predicted by a dedicated model. In this case, we capture videos from different actors, where the actors must perform the same eye expression during the entire video, while other facial expressions, the head pose, and the distance from the camera vary. This technique enables a simple and convenient labeling procedure, where all decoded frames from the same video are labeled the same. Each video contains one of the following eye expressions: (1) both eyes closed, (2) naturally open eyes, (3) wink, (4) partly closed eyes. All videos are decoded into frames, where each frame is assigned the label of the entire video. In addition to blinking, a dedicated model is trained for the 'Puff' and 'Sneer' facial expressions using the same data-collection technique. 3.4 Landmarks-to-Weights This main module aims to translate the knowledge represented by the facial landmarks into the facial expressions and Visemes reflected as the blendshape weights. Thus, the inputs to the model are 68 aligned landmarks, each represented by its two horizontal and vertical coordinates (a 68×2 shape), and the model outputs 62 blendshape weights that indicate the coefficients of the linear combination of the blendshape targets. Simple approaches that train regressors directly from the source landmarks domain to the target blendshape weights domain did not converge well. However, separating the landmarks into facial regions and propagating the information along a hierarchical route showed high performance. We pre-defined eight landmark regions: eyebrow-left, eyebrow-right, eye-left, eye-right, nose, nostril, teeth, and lips. 1D-convolution layers process each region of 2-dimensional landmarks into a 1-dimensional hidden layer representing the region's extracted knowledge. The next step depends on the behavior of the blendshape targets. A grouping layer is required when blendshape targets are influenced by more than one landmark region. In this case, the relevant regions' hidden layers are concatenated and processed by an additional MLP. Finally, each blendshape target has its own dedicated MLP that outputs a scalar value in the range [0,1] representing the blendshape weight's value. For example, as demonstrated in Fig. 3, the target representing the 'AA' Viseme depends on the 'Teeth' and 'Lips' regions of landmarks; thus, the 'Lips' and 'Teeth' landmark regions are grouped into the 'Mouth' group in advance. On the other hand, when the blendshape target is affected directly by only one region of landmarks, the region's hidden layer is regressed directly to predict the corresponding blendshape weight. For example, the blendshape target representing the left lowered eyebrow depends only on the left-eyebrow region of landmarks (Fig. 3).
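The following PyTorch sketch illustrates the hierarchical grouping described above. The region index ranges, hidden sizes, and the example blendshape-to-region mapping are assumptions made for the illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative partition of the 68 landmarks into regions (index ranges are assumptions).
REGIONS = {
    "eyebrow_left": range(17, 22), "eyebrow_right": range(22, 27),
    "eye_left": range(36, 42), "eye_right": range(42, 48),
    "nose": range(27, 31), "nostril": range(31, 36),
    "lips": range(48, 60), "teeth": range(60, 68),
}

class RegionEncoder(nn.Module):
    """1D convolutions over a region's (x, y) landmark sequence, producing a feature vector."""
    def __init__(self, n_points, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=3, padding=1), nn.LeakyReLU(0.01),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.LeakyReLU(0.01),
        )
        self.out_dim = hidden * n_points

    def forward(self, pts):                         # pts: (B, n_points, 2)
        x = self.conv(pts.transpose(1, 2))          # (B, hidden, n_points)
        return x.flatten(1)

class GroupedLandmarksToWeights(nn.Module):
    """Hierarchical grouping: regions -> groups -> one small MLP head per blendshape."""
    def __init__(self, blendshape_to_regions):
        super().__init__()
        self.encoders = nn.ModuleDict({r: RegionEncoder(len(REGIONS[r])) for r in REGIONS})
        self.blendshape_to_regions = blendshape_to_regions
        self.heads = nn.ModuleDict()
        for name, regions in blendshape_to_regions.items():
            in_dim = sum(self.encoders[r].out_dim for r in regions)
            self.heads[name] = nn.Sequential(
                nn.Linear(in_dim, 64), nn.LeakyReLU(0.01),
                nn.Linear(64, 1), nn.Sigmoid(),      # each weight lies in [0, 1]
            )

    def forward(self, landmarks):                   # landmarks: (B, 68, 2), already aligned
        feats = {r: self.encoders[r](landmarks[:, list(REGIONS[r]), :]) for r in REGIONS}
        out = {}
        for name, regions in self.blendshape_to_regions.items():
            grouped = torch.cat([feats[r] for r in regions], dim=1)
            out[name] = self.heads[name](grouped).squeeze(1)
        return out

# Example: a mouth-driven viseme uses two regions, while a brow target uses one.
model = GroupedLandmarksToWeights({
    "viseme_AA": ["lips", "teeth"],
    "brow_down_left": ["eyebrow_left"],
})
weights = model(torch.randn(4, 68, 2))
```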
3.5 Complementary modules 3.5.1 Blink Detection Predicting blinking targets given facial landmarks is challenging due to the high diversity of eye structures and surrounding textures, which causes landmark extractor failures. These errors propagate into the landmarks-based model when predicting blendshape weights. Thus, we train a dedicated model to directly predict the two blinking blendshape weights from the given image using the real-footage dataset. The input images are aligned and cropped around the eyes to a resolution of 16×40 pixels and applied to a ResNet18 model [22]. Yet, the predictions do not account for different eye geometries; i.e., eyes with a narrow geometry could be interpreted as partially closed. Therefore, we adjust the prediction range of values to the individual actor's eye geometry. For each frame, the distance between the lower eyelid and the upper eyelid is calculated for both eyes and is then classified by online K-means into one of two classes: 'opened eyes' and 'closed eyes'. The K-means averaged values of the two classes are updated as the video progresses using the distances calculated per frame. The 'opened eyes' class value reflects the actor's natural eyelid distance and is converted by a linear transformation to a threshold between 0 and 1, which serves as the new lower edge of the blinking prediction range. 3.5.2 Gaze Detection Herein, we derived a practical approach for accurately determining and monitoring the direction of an actor's gaze. This involves identifying whether the actor's gaze is directed straight ahead (the primary position) or in one of the secondary positions, namely up, down, right, or left. Our method relies on comprehensive facial eye landmarks, including the iris, inner corner keypoints, and outer corner keypoints. Specifically, the calculations are based on the distance between the iris and the inner and outer corner keypoints. To predict the coefficients for detecting the direction of the eyes' gaze, we outline the following three steps: 1. Horizontal Eye Line Calculation: We begin by calculating the properties of the horizontal eye line using the key points of the eye corners. These properties include the mid-point, line slope, bias, and the L2 distance of the horizontal eye. 2. Intersection Point Determination: Next, we determine the intersection point between the iris projection and the horizontal eye line. 3. Secondary Positions Detection: By analyzing the obtained intersection point, we identify the secondary positions of the gaze: (a) Left and Right Gaze: We measure the distance of the intersection point relative to the mid-point. This distance is normalized by the individual's horizontal eye L2 distance, yielding a value adapted to each actor. The direction of the eye is correlated with the position of the intersection point relative to the mid-point. (b) Up and Down Gaze: We measure the distance between the intersection and iris points to detect an upward or downward gaze. The eye's direction is correlated with the position of the iris point relative to the horizontal eye line. In summary, leveraging comprehensive facial eye landmarks and employing these geometric calculations can predict the gaze blendshape coefficients for various eye positions while ensuring smooth and reliable results.
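A minimal sketch of the gaze geometry described in the three steps above follows, assuming per-eye landmarks for the two corners and the iris center. The normalization and sign conventions are illustrative, and mapping the resulting offsets to gaze blendshape coefficients would require an additional calibration that is not shown here.

```python
import numpy as np

def gaze_offsets(inner, outer, iris):
    """Estimate normalized horizontal/vertical gaze offsets for one eye.

    inner, outer, iris: (x, y) coordinates of the inner corner, outer corner and
    iris center. Returns (horizontal, vertical) offsets normalized by the eye length,
    positive meaning toward the outer corner / above the eye line.
    """
    inner, outer, iris = map(np.asarray, (inner, outer, iris))
    # Step 1: horizontal eye line: mid-point, direction and length of the corner-to-corner segment.
    mid = (inner + outer) / 2.0
    axis = outer - inner
    eye_len = np.linalg.norm(axis) + 1e-8
    axis_unit = axis / eye_len

    # Step 2: project the iris onto the eye line to get the intersection point.
    intersection = inner + np.dot(iris - inner, axis_unit) * axis_unit

    # Step 3a: left/right gaze as the signed offset of the intersection from the mid-point.
    horizontal = np.dot(intersection - mid, axis_unit) / eye_len

    # Step 3b: up/down gaze as the signed perpendicular offset of the iris from the eye line.
    normal = np.array([-axis_unit[1], axis_unit[0]])
    vertical = np.dot(iris - intersection, normal) / eye_len
    return horizontal, vertical

# Example with synthetic landmarks: iris slightly toward the outer corner and above the line.
h, v = gaze_offsets(inner=(0.0, 0.0), outer=(30.0, 2.0), iris=(20.0, 3.0))
```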
3.5.3 Special Expressions Detection The 'Puff' and 'Sneer' expressions are part of the Facial Action Coding System (FACS) and refer to facial muscle movements. The standard landmarks are not sufficient to capture these expressions. Thus, to detect the corresponding blendshape weights, we train a ResNet18 model [22] using the real-footage dataset. 4 Experiments We used three datasets encompassing various facial expressions and video sequences to conduct our experiments. In addition, we demonstrated our study's methodological advancement through reproducibility, transparency, and comparability. Finally, we evaluated our approach's performance using qualitative and quantitative measures. Assessing the quality and performance of a facial expression retargeting algorithm is a crucial aspect of research in this domain. Therefore, various evaluation metrics have been proposed to measure the realism, accuracy, and perceptual quality of retargeted facial expressions, poses, and identities. 4.1 Datasets 4.1.1 Synthetic Character Landmarks Dataset We used a 3D character consisting of 12.8K vertices for each of its 62 blendshape targets to train the landmarks-based model. The blendshapes included various facial expressions and mimics in addition to 14 Visemes (Attribution 4.0 International license [23]). We manually created in advance a Reasonable-Array adjusted to this specific set of blendshapes and performed the data preparation as described above. We used the Blender tool to render 30,000 images of the character, saved their corresponding blendshape weights as ground truth, and extracted their aligned facial landmarks. The head pose of each frame's scene was sampled from a natural head pose distribution. This distribution was obtained by detecting head pose information from relevant YouTube and Denver Intensity of Spontaneous Facial Action (DISFA) videos [24]. 4.1.2 Real Footage Labeled Dataset We collected 200 videos from 40 actors targeting the blinking, Puff, and Sneer expressions. The videos were decoded at 30 fps, yielding 17101 real-footage frames and their corresponding labeled blendshape weights. This dataset was dedicated to supervised training of the special expressions models. 4.1.3 Real Footage Unlabeled Dataset To evaluate the performance of our pipeline, we captured additional real-footage videos from 20 identities. The actors were requested to perform various FACS expressions and Visemes. The videos were decoded at 30 fps, resulting in 25075 real-footage frames. 4.2 Training Procedure The model was trained and optimized using the PyTorch [25] framework on a single NVIDIA GeForce GTX 1080 with 24GB GPU memory. The hyperparameters that optimize convergence were examined using the Adam solver [26] with beta1=0.5 and beta2=0.999, an eps of 1e-10, a weight decay of 1e-7, a minibatch of size 16, and a learning rate of 5e-5; the learning rate is decayed by gamma=0.5 every three epochs. Weights were initialized from the uniform distribution described in [27]. We use Leaky ReLUs with slope 1e-2 for the convolutional layers and the fully connected layers, in addition to a Sigmoid activation at the end of the final fully connected layer. The optimization loss function contains two terms: first, a mean square error (MSE) between all ground-truth and predicted blendshape weights; second, an MSE between only the active ground-truth and predicted blendshape weights. The two loss terms were weighted with values of 1 and 0.1, respectively.
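A minimal sketch of this two-term objective follows, assuming pred and target are tensors of the 62 predicted and ground-truth blendshape weights; the function name and the exact reduction are illustrative rather than the authors' implementation.

```python
import torch

def blendshape_loss(pred, target, active_weight=0.1):
    """Two-term objective: MSE over all weights plus MSE over the active (nonzero) targets only."""
    mse_all = torch.mean((pred - target) ** 2)
    active = target > 0                                  # active blendshapes in the ground truth
    if active.any():
        mse_active = torch.mean((pred[active] - target[active]) ** 2)
    else:
        mse_active = torch.zeros((), device=pred.device)
    return 1.0 * mse_all + active_weight * mse_active
```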
Figure 4: Examples of prediction performance on a variety of Visemes. For each Viseme, from left to right: (1) input image, (2) our output expressed on the 3D character. 4.3 Results 4.3.1 Facial Expressions and Visemes We examined our method using the Real Footage Unlabeled Dataset. The video frames were subjected to our pipeline, producing blendshape weights, which serve as the coefficients of the linear combination of the mesh geometry targets. Fig. 4 and Fig. 5 show examples of real-footage frames and their corresponding rendered meshes with the translated facial expression. The former presents different Visemes as part of a spoken video sequence, while the latter presents other common facial expression frames. Figure 5: Examples of prediction performance on a variety of expressions. For each expression, from left to right: (1) input image, (2) our output expressed on the 3D character. 4.3.2 Multiple Characters The uniqueness of our method stems from the single-character training procedure, where only one 3D character is enough to obtain sufficient results. Yet, we can drive any desired character during inference if it supports the same geometrical space and can be animated using the same semantic blendshapes. Fig. 6 shows the robustness of our model over multiple actor identities and multiple 3D characters constructed with their own unique geometries and textures. These characters were not part of the training but are represented in the same semantic blendshape space as the single mesh used for training [23]. Figure 6: Examples of performance when retargeting to multiple 3D characters from different actors. For each actor, from top to bottom: (1) input image, (2) our results expressed on three different characters. 4.3.3 Competitive comparison As a baseline to the single-character video retargeting pipeline, we implemented the algorithm proposed in [10], where a semi-supervised approach that includes an image translation technique was introduced. We inferred the baseline model over the Real Footage Unlabeled Dataset as well. Examples of our method's results compared to the method of [10] are presented in Fig. 7. Figure 7: Video retargeting comparison. Examples of ours vs. the Moser et al. pipeline results on different facial expressions for two real actors. For qualitative evaluation, we conducted a subjective user study in which 80 individuals were required to rate the degree of compatibility between pairs of actors and their corresponding rendered 3D character images in terms of facial expressions. The user study contained 34 pairs, where each actor frame appeared twice, once with a prediction obtained by our method and once with a prediction obtained by the method of [10] (each time in random order). The subjects rated each pair on a Likert scale (scores between one and five), and we report their Mean Opinion Score (MOS) separately for Viseme-representative frames and other facial-expression-representative frames. For quantitative evaluation, we introduce the landmarks similarity metric. Herein, the source actor frame and the corresponding rendered 3D character image are compared based on facial landmarks. First, both images are cropped around the face.
Next, facial landmarks are extracted from both crops and are processed by a landmark alignment procedure to the same template using an sRT transformation. A mean square error (MSE) is calculated between the aligned landmarks, representing the distance between the source actor's facial expression and the translated expression over the 3D character. Table 1 shows that our method outperforms the method of [10] in both the qualitative and quantitative metrics.

Evaluation metric        FACS (Moser et al. / Ours)   Visemes (Moser et al. / Ours)   Overall (Moser et al. / Ours)
Quantitative (MSE) ↓     40.43 / 29.95                42.73 / 22.59                   40.92 / 28.37
Qualitative (MOS) ↑      2.43 / 4.28                  2.38 / 3.81                     2.41 / 4.05

Table 1: Quantitative and qualitative performance of our approach and the Moser et al. approach. The quantitative evaluation reports the MSE of the landmarks similarity on 39 videos. The qualitative evaluation reports the subjective results (MOS) of a survey of 80 participants. MSE, mean square error, a quantitative measure; MOS, Mean Opinion Score, a qualitative measure. The down arrow indicates lower is better; the up arrow indicates higher is better. 4.3.4 Ablation Study In this section, we provide ablation experiments substantiating the need for the different grouping layers of the landmarks-to-blendshape-weights network. The No-Grouping network replaces all layers with a simple MLP model that directly regresses the blendshape weights from the facial landmarks. The Conv-Grouping model uses our grouping method only for the 1D convolution layers (i.e., each facial region has its own convolution weights); the convolution features are propagated to the blendshape weights directly via an MLP network, eliminating the last grouping layers. The Full-Grouping model contains all grouping layer components. In Table 2, all these effects are reported by calculating the MSE between the facial landmarks of the actors and those of the 3D characters on the test dataset. One can observe an improvement in accuracy with the introduction of every proposed component, with a final reduction of 10% in the error over the baseline.

Evaluation metric   Moser et al.   Ours (No-Grouping)   Ours (Conv-Grouping)   Ours (Full-Grouping)
MSE ↓               40.92          31.52                28.48                  28.37

Table 2: Ablation study of the different grouping layers of the landmarks-to-blendshape-weights network. MSE, mean square error, a quantitative measure. The down arrow indicates lower is better. 5 Conclusion One of the main challenges of establishing a video retargeting system is acquiring an optimal dataset for the supervised learning approach. Labeling each video frame manually with its corresponding blendshape weights (62 blendshapes in our case) can be laborious, time-consuming, and subjective. A reasonable solution could be training an AI model via a synthetic dataset. Yet, this approach requires an expensive dataset that contains a diverse range of 3D characters rendered in high-quality photo-realistic scenes. In this work, we describe a technique that overcomes these challenges to produce a facial animation model trained with only a single 3D character. Here, we introduce a full pipeline that benefits from facial landmarks to reduce the domain gap between the synthetic 3D character encountered during training and the real-footage actors seen at inference. This approach eliminates the need for a large-scale, expensive dataset and enables us to achieve sufficient performance with only one 3D character.
Through this pipeline, we translate the facial landmarks information into blendshape weights via a unique grouping approach, where each spatial region of landmarks is grouped and the knowledge propagates hierarchically until it reaches the corresponding target shape. We further demonstrate a technique to complete the expression range by implementing target-shape-specific sub-modules. We show the effectiveness of our method in various aspects. The proposed pipeline captures the real footage of actors' facial expressions in both Viseme and FACS frames. We demonstrate the robustness of our method by capturing multiple actors and applying their frames to the pipeline over multiple 3D characters that were excluded from the training procedure. We further compared our results to Moser et al. qualitatively and quantitatively, achieving a 68% higher mean opinion score (MOS) and a 30.7% lower mean squared error (MSE). Overall, our work provides a state-of-the-art solution for video retargeting using a single 3D character in a high-level pipeline and a low-level deep architecture. Acknowledgments The authors thank all the human actors and actresses who participated in the work experiments.", "introduction": "Various applications use video retargeting for digital face animation. These domains include social media, gaming, movies, and video conferences. Video retargeting systems aim to translate human facial expressions into 3D characters, eventually mimicking the real-footage expressions. A common method to represent the facial expressions of 3D characters is by blendshapes. In this method, different mesh shapes serve as targets, where each mesh target has a corresponding blendshape weight that determines its significance in the desired expression. Mathematically, the mesh targets serve as eigenvectors and the blendshape weights as linear combination coefficients, resulting in an interpolated expressed mesh. In the case of video retargeting, the objective is to translate expressions from real-footage videos into sequences of blendshape weights that control the 3D characters' facial animation. Particularly in this study, we used a 3D character with 62 mesh targets with semantic meanings such as smile, puff, and blink, in addition to multiple Visemes. The large variation of facial expressions and character shapes poses a significant obstacle in training efficient models: creating a large, representative dataset of video frame–blendshape weights pairs. This dataset type can be created manually by labeling each video frame with its blendshape weights. †Authors contributed equally to this work. ∗Corresponding author, e-mail: snatidaniel@gmail.com. arXiv:2306.12188v1 [cs.GR] 21 Jun 2023 Yet, such a process is labor-intensive, time-consuming, and highly subjective, since individual observers can interpret the same image differently, causing a nondeterministic labeling process. A reasonable solution is training the machine learning model using a synthetic dataset. Figure 1: Results of our method on two video frames. From left to right: input image, cropped input image, extracted facial landmarks, our output expressed on the 3D character. In this approach, realistic 3D characters deform into numerous expressions based on pre-defined blendshape weights, which serve as the dataset's deterministic ground truth.
Next, each mesh is rendered into realistic scenes used as input images during the training procedure. However, a significant challenge in this approach is overcoming the domain gap between the synthetic scenes the model encounters during training and the real scenes it encounters during inference. Producing a diverse synthetic dataset using high-poly realistic 3D characters that sufficiently represent photorealistic scenes can, on the one hand, narrow the domain gap but, on the other hand, be very expensive. We combat that multiplicative growth of effort by training the video-to-blendshape-weights conversion on a single character. To narrow the domain gap between the trained single character and realistic human facial scenes, we use a well-known representation of face structure that captures expressions quite well: facial landmarks, particularly the standard set of 68 landmarks [1]. The conversion of a face image, depicting an actual person or a human-like (\"realistic\") 3D character, to a set of landmarks performs data reduction into a \"symbolic\" representation. We train a blendshape translation network in that space using a single character. Furthermore, the local nature of both blendshape coefficients and face landmarks allows partitioning the problem into subsets, e.g., eye landmarks versus eye blendshapes. Moreover, working with blendshapes provides an additional advantage: the ability to apply the predicted blendshape coefficients to characters of identical topology but different geometry, thus generalizing from our single character to a wide range of avatars. Section 2 reviews related work, highlighting specific challenges. Our method is described in Section 3, starting with our data preparation. We generate synthetic data according to a real-world distribution of head poses and by modifying blendshape coefficients to generate plausible expressions from a single 3D character that can be matched to video frames of real actors. We further apply landmark detection to the synthetic images and describe how to regress blendshape weights locally from landmarks. While the landmark networks reliably present many expressions, they fail to replicate eye blinks and eye gaze directions. Other expressions, like puff and sneer, occur in face regions poorly represented by landmarks. Thus, we added complementary methods for these regions and expressions. Finally, our results in Section 4 show that our approach scales well to videos with different users and various expressions, outperforming previous research in qualitative and quantitative metrics (Figure 1)." }, { "url": "http://arxiv.org/abs/2302.06513v1", "title": "DEPAS: De-novo Pathology Semantic Masks using a Generative Model", "abstract": "The integration of artificial intelligence into digital pathology has the\npotential to automate and improve various tasks, such as image analysis and\ndiagnostic decision-making. Yet, the inherent variability of tissues, together\nwith the need for image labeling, lead to biased datasets that limit the\ngeneralizability of algorithms trained on them. One of the emerging solutions\nfor this challenge is synthetic histological images. However, debiasing real\ndatasets require not only generating photorealistic images but also the ability\nto control the features within them.
A common approach is to use generative\nmethods that perform image translation between semantic masks that reflect\nprior knowledge of the tissue and a histological image. However, unlike other\nimage domains, the complex structure of the tissue prevents a simple creation\nof histology semantic masks that are required as input to the image translation\nmodel, while semantic masks extracted from real images reduce the process's\nscalability. In this work, we introduce a scalable generative model, coined as\nDEPAS, that captures tissue structure and generates high-resolution semantic\nmasks with state-of-the-art quality. We demonstrate the ability of DEPAS to\ngenerate realistic semantic maps of tissue for three types of organs: skin,\nprostate, and lung. Moreover, we show that these masks can be processed using a\ngenerative image translation model to produce photorealistic histology images\nof two types of cancer with two different types of staining techniques.\nFinally, we harness DEPAS to generate multi-label semantic masks that capture\ndifferent cell types distributions and use them to produce histological images\nwith on-demand cellular features. Overall, our work provides a state-of-the-art\nsolution for the challenging task of generating synthetic histological images\nwhile controlling their semantic information in a scalable way.", "authors": "Ariel Larey, Nati Daniel, Eliel Aknin, Yael Fisher, Yonatan Savir", "published": "2023-02-13", "updated": "2023-02-13", "primary_cat": "eess.IV", "cats": [ "eess.IV", "cs.CV", "cs.LG" ], "main_content": "2.1 Medical Synthetic Images The use of generative models to produce synthetic images has been explored in numerous works in the medical field. Image translation frameworks are widely used, such as models that generate endoscopy images given binary semantic masks [8], transform between radiological images [9], and convert between histology staining types [10]. Another image translation work is DeepLIIF [11], which provides, for a given IHC image, several outputs including stain deconvolution, segmentation masks, and different marker images. Other types of generative frameworks are common as well. The DCGAN framework generates synthetic images from a sampled noise input and processes it through a convolutional-layers architecture. It was used in several applications that generate medical images, such as MR images [12], eye disease images [13], X-ray images [14], and breast cancer histological images [15]. PathologyGAN [16] introduced a novel framework that generates high-quality pathology images at a size of 224×224 pixels. [17] introduced a two-step pipeline that is similar to ours: in the first step, they generate binary vessel segmentation masks using DCGAN, and then they generate the RGB retinal image. Their pipeline provides synthetic images at a size of 512×512 pixels. In contrast, our pipeline provides higher-resolution (×2) synthetic images in the challenging field of histology. We focus on the first phase of learning the complex geometric structure that is reflected in the semantic label mask. We show that DCGAN is not sufficient for this task and introduce DEPAS as an improved architecture to overcome the challenges of the high-resolution histology domain. 2.2 Discrete Predictions The first step of our pipeline requires predicting discrete semantic masks. In this work, we focus on the binary scenario where there are two labels in the semantic mask: tissue and air.
However, the binary output of the generator should be obtained by a step function, but this non-differentiable operation can break the backpropagation of the optimization objective's gradients through the discriminator to the generator. A reasonable solution is to replace the discrete output operations with continuous relaxations, such as a Sigmoid, during training, and to apply the discrete operation only at test time [18]. [19] proposed using binary neurons in machine learning models via straight-through estimators, where the binary operator is applied during training in the forward pass but is ignored and treated as an identity function in the backward pass. [20] explored the generative use of a deterministic binary neuron and a stochastic binary neuron and introduced BinaryGAN. We investigated the different approaches and found that in the high-resolution histology domain, the best performance was achieved by the Annealing-Sigmoid. In this approach, the last layer of DEPAS's generator is a Sigmoid whose slope is increased gradually during training toward the step function [21]. 3 METHODS Our pipeline includes two main phases. The main focus of this study is on the first phase, where we learn the internal geometry of the digitized histology tissue. For this task, we designed a generative architecture coined DEPAS that captures the tissue's morphology and expresses it as a semantic mask. The second phase is an image translation task that transfers the discrete semantic mask to a photorealistic RGB image of the tissue. 3.1 DEPAS Architecture To enhance scalability, the generative process of producing synthetic tissue masks is initialized by sampling noise from a given distribution and applying it to the model (Vanilla GAN). The mechanism is based on the DCGAN architecture that was used by [17] and consists of multiple convolution blocks in its generator and discriminator. In DEPAS, we adjusted the DCGAN layers to a high-resolution output size of 512×1024 pixels and included three main extensions. (1) Discrete Adaptive Block. In our case, where the discriminator should receive a binary mask during training, we require a binary output from the generator in which every pixel indicates one of the two classes, Tissue or Air. Thus, we replaced DCGAN's last block with this module (Fig. 2A). Instead of the non-differentiable step function, we use a Sigmoid activation with a high slope to mimic it and yield a pseudo-binary differentiable output. For optimal convergence, we initialize the Sigmoid with its base slope of 1 and increase it gradually during training (Annealing-Sigmoid, AKA AS). That is, at every iteration, the generator produces a Bernoulli probability that becomes more deterministic during training in differentiating the two classes. Formally, at iteration t, the AS is:

$AS_t(x) = \frac{1}{1 + e^{-\delta_t \cdot x}}$   (1)

Figure 2: Architecture of DEPAS. (A) The generator decodes semantic masks from latent noise. It consists of five transpose convolution layers, each of them followed by Batch Normalization and a ReLU activation. Another element of stochasticity is added to the hidden layers in the spatial dimension after being scaled. Finally, the last feature maps are processed by the Discrete Adaptive block that outputs a semantic mask in the two-label case, or multiple masks in the multi-label case.
(B) For training, we use three discriminators that support different image resolutions. Each of them encodes the corresponding image into a scalar that represents the probability that the image is real. The encoding is processed by convolution layers, each of them followed by Batch Normalization and a LeakyReLU activation. In Eq. (1), x is the input of the element-wise operation, and $\delta_t$ determines the Sigmoid's slope at iteration t. To increase the slope, we require that $\delta_{t+1} > \delta_t$, and for initialization with the basic Sigmoid, we define $\delta_{t=0} = 1$. Furthermore, we extend this approach to cases where there are more than two labels in the desired semantic mask. For example, in the case where the tissue itself has several types of morphology (e.g., tumor tissue, non-tumor tissue, and air), we use the multi-label approach. In this scenario, we generalize the binary distribution to the multinomial distribution by designing the Discrete Adaptive Block to produce a multi-channel feature map. The feature maps are then applied to an Annealing-Softmax-Temperature (AST) activation. Instead of the non-differentiable Argmax function, we use a channel-wise Softmax layer with a low temperature to mimic a deterministic decision of the generated class for each pixel. Similarly to the binary situation, we initialize the Softmax temperature with its base value of 1 and decrease it gradually during training. That is, at every iteration, the generator produces for each pixel its class probabilities, which become more deterministic during training. Formally, at iteration t, the probability of class c provided by the AST is:

$AST_{t,c}(x) = \frac{e^{x_c / T_t}}{\sum_j e^{x_j / T_t}}$   (2)

where $x_i$ is the input of the element-wise operation for the channel that corresponds to class i, and $T_t$ determines the Softmax's temperature at iteration t. To increase determinism, we require that $T_{t+1} < T_t$, and for initialization with the standard Softmax, we define $T_{t=0} = 1$. Both the binary and multi-label scenarios use the step-function and argmax operations, respectively, for inference. However, for training, where gradients should backpropagate through these layers, the non-differentiable operations are replaced by differentiable operations that gradually adopt the former's attributes (a minimal sketch of both annealing activations is given below).
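The following is a minimal PyTorch sketch of the two annealing activations in Eqs. (1) and (2). The module interface, the schedule method names, and the hard thresholds used at inference are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class AnnealingSigmoid(nn.Module):
    """Eq. (1): a sigmoid whose slope delta_t grows during training toward a step function."""
    def __init__(self, delta_init=1.0):
        super().__init__()
        self.delta = delta_init

    def step_schedule(self, delta_increment=1.0):
        # Called periodically (e.g., every 10 epochs in the paper) to sharpen the slope.
        self.delta += delta_increment

    def forward(self, x):
        if self.training:
            return torch.sigmoid(self.delta * x)          # pseudo-binary, differentiable
        return (x > 0).float()                             # hard step function at inference

class AnnealingSoftmax(nn.Module):
    """Eq. (2): a channel-wise softmax whose temperature T_t shrinks toward an argmax."""
    def __init__(self, temperature_init=1.0):
        super().__init__()
        self.temperature = temperature_init

    def step_schedule(self, decay=1.25):
        # Called periodically (e.g., divided by 1.25 every 10 epochs in the multi-label setup).
        self.temperature /= decay

    def forward(self, x):                                  # x: (B, C, H, W) class logits
        if self.training:
            return torch.softmax(x / self.temperature, dim=1)
        hard = torch.argmax(x, dim=1)                      # discrete per-pixel labels at inference
        return torch.nn.functional.one_hot(hard, x.shape[1]).permute(0, 3, 1, 2).float()
```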
(2) Spatial Noise. In the standard DCGAN implementation, latent vectors z are drawn from a Gaussian distribution as the input to the generator. The sampling is performed channel-wise; that is, a sampled input noise is a one-dimensional latent vector in which each element represents an initial channel with a spatial size of 1×1 pixels, without any noise diversity in the spatial domain. When the feature maps grow spatially in the forward pass (via the transpose convolution layers), they are prone to become repetitive in the spatial aspect. In our case, where DEPAS provides high-resolution semantic masks, this phenomenon is significant. We address it by adding noise in the spatial domain as well: we draw a 2D array from a Gaussian distribution and inject it spatially into the hidden layers of the generator after resolution and scale adjustments (Fig. 2A). (3) Multi-Scale Discriminators. Our pipeline generates high-resolution synthetic images with a size of 512×1024 pixels. For comparison, BinaryGAN [20], PathologyGAN [16], and the retinal vessel dual-phase pipeline [17] generate images of 28×28 pixels, 224×224 pixels, and 512×512 pixels, respectively. Inspired by pix2pixHD [7], we implemented three discriminators, each receiving a different scale of the input mask: 100%, 50%, and 25% (Fig. 2B). This technique helps the discriminator distinguish between real and fake masks from different levels of perspective. Low-resolution masks provide high-level information, such as the general structure of the tissue. In contrast, low-level information, such as intercellular spaces, is obtained from high-resolution masks. The combined objective is provided by the following equation:

$L_{DEPAS} = \sum_{r=1}^{R} \alpha_r \cdot L_{GAN,r}$   (3)

where $\alpha_r$ is the weight of $D_r$, the discriminator that corresponds to image resolution $r \in \{25\%, 50\%, 100\%\}$, and R is the number of discriminators (R = 3). $L_{GAN,r}$ is the Vanilla GAN loss obtained by $D_r$, which receives the real input mask x and the generator's synthetic mask G(z), and is defined as:

$L_{GAN,r} = \log(D_r(x)) + \log(1 - D_r(G(z)))$   (4)

3.2 Paired Image Translation Paired image translation is a set of tasks that translate one domain of images to another domain given input-output image training pairs [6]. One such task is to take a semantic map and translate it into an image, based on additional information, such as class labels, passed together with the image to the network during the training phase. In the second step of our pipeline, we used pix2pixHD, an image translation generative network [7], to produce synthetic pathological images from the given semantic masks. Particularly, pix2pixHD consists of a generator, which is a composition of convolutional residual layers that receives a 512×1024-pixel semantic mask as input and generates a 512×1024-pixel RGB image. In addition, we used two multi-scale discriminators with the same CNN architecture that work on two different image scales. 3.3 Datasets In this study, we perform our methodology on four histology realizations subjected to two different types of staining. The first type is hematoxylin and eosin (H&E), where histology images were collected from three different types of cancer: Prostate Adenocarcinoma (PRAD), Skin Cutaneous Melanoma (SKCM), and Lung Squamous Cell Carcinoma (LUSC). The three datasets are part of the Cancer Genome Atlas (TCGA) research network, where for all data types we used only their imaging information [22]. We performed our methodology on 50 WSIs from each H&E realization. The second staining type is immunohistochemistry (IHC), where histology images were collected from non-small cell lung carcinoma (NSCLC) patients. This data was originally part of a study that involved an immune checkpoint inhibitor therapy, where the detection of programmed death-ligand 1 (PD-L1) in the tissue biopsies was required for determining the course of therapy [23]. 27 WSIs were obtained from patients diagnosed with NSCLC who underwent biopsy at Rambam Health Care Campus. All procedures performed in this study and involving human participants were in accordance with the ethical standards of the Rambam Medical Center institutional research committee, approval 0522-10-RMB, and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
In all realizations, the slides were split into patches with a size of 512×1024 pixels. Patches containing more than 85% background were filtered out. In total, 6000 images were used from each of the H&E datasets and 2012 images from the IHC dataset. For each realization, 85% of the data was used for training and the rest for evaluation. In all realizations, we create the ground truth for the binary tissue semantic masks by converting the patches to grayscale and applying a high threshold to extract air pixels. We found that 204 and 235 were the optimal thresholds (in the range of 0-255) to distinguish between tissue and air pixels in H&E and IHC, respectively. 3.4 Training Procedure For each realization, we trained two models individually: pix2pixHD and DEPAS. Pix2pixHD was trained with pairs of images and their corresponding tissue masks, where the tissue masks served as the input to the generator and the images as the real ground truth for the discriminators. The training was conducted with a batch size of 1 for 400 epochs in the H&E realizations and for 700 epochs in the IHC realization. We linearly decay the learning rate to zero over the last 100 epochs for H&E and over 200 epochs for IHC. DEPAS was trained only with the tissue masks from the same training set used to train pix2pixHD. In this case, the generator generates synthetic tissue masks from a sample drawn from the standard normal distribution. The model was trained with a batch size of 8 for 100 epochs, where every 10 epochs the Annealing-Sigmoid's δ parameter in (1) was increased by one. Each scale objective in the loss term (3) was equally weighted, with α_{100%} = α_{50%} = α_{25%} = 1. Both the DEPAS and pix2pixHD model weights were optimized with the Adam optimizer [24] with a learning rate of 2e-4 and beta coefficients of (0.5, 0.999). They were developed in the PyTorch framework [25] and were trained on a single NVIDIA RTX A6000 GPU with 48GB GPU memory. 4 Results 4.1 Synthetic Semantic Tissue Masks For each realization, we trained, in addition to DEPAS, a standard DCGAN as a baseline, since it was used before to generate semantic masks in the medical field [17]. We generated from both trained models the same number of synthetic tissue masks as in the real test set (1000 for the H&E realizations and 328 for IHC). Examples of the different tissue masks for all four histology realizations are shown in Fig. 3 with their latent-space 2D projections extracted from a pre-trained ResNet model. For quantitative evaluation, we calculated the distance between the DEPAS synthetic tissue masks and the real test tissue masks (which were excluded from training). The first two distance metrics, the Kolmogorov–Smirnov (KS) test and the Kullback–Leibler (KL) divergence, were applied to the TSNE projection of the masks' representations. These representations were extracted from the last hidden layer of a pre-trained ResNet [26] model (size of 2048). For the third metric, we captured representation vectors from the last layer of a pre-trained Inception v3 model [27] (size of 2048) to calculate the Fréchet inception distance (FID), which is the gold-standard metric for evaluating the quality of synthetic images [28] (a minimal sketch of this distance is given below). The results for the different representation metrics are presented in Table 1. They show that for all four realizations, the DEPAS synthetic tissue masks have the smallest distance to the real ones compared to DCGAN, on all metrics.
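To illustrate the last of these metrics, the following sketch computes the standard Fréchet distance between two sets of pre-extracted Inception-v3 feature vectors. It is a generic illustration of the FID formula rather than the authors' evaluation code, and the feature extraction step itself is not shown.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID between two sets of Inception-v3 feature vectors (arrays of shape N x 2048)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; drop tiny imaginary residues.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```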
Particularly, the FID scores of DEPAS were lower (closer to the real images) than the DCGAN baseline by factors of 20.0, 6.9, 30.2, and 17.0 for the PRAD, SKCM, LUSC, and NSCLC tissue masks, respectively. 4.2 Synthetic Photorealistic RGB Images To further evaluate the full pipeline from the photorealistic histology perspective, we applied the synthetic tissue masks to the image translation model and compared their outputs to the real histology images. We performed this over two datasets: the first is SKCM, which represents a realization subjected to H&E staining, and the second is lung cancer (NSCLC) subjected to IHC staining. Examples of the different tissue masks and images for both the H&E and IHC realizations are shown in Fig. 4 with their latent-space 2D projections extracted from a pre-trained ResNet model. Furthermore, we performed the same quantitative evaluation methodology over the images at the RGB level, by calculating the distance between the synthetic images stemming from DEPAS and the real RGB histology images (Table 2). For comparison, we performed the same for three other datasets: (1) synthetic images whose prior tissue masks were generated by DCGAN; (2) real histology images from a different type of pathology as a histology control, i.e., when the evaluation is performed on the H&E realization, we also calculated the distance between the real IHC images and the real H&E images as a histology baseline, and vice versa; (3) the SegTrack dataset [29], used as real-life scenario images for a realistic control taken from the non-pathology field. Figure 3: Examples of tissue masks from four different types of cancer realizations. These realizations include three organs, skin, prostate, and lung, and two types of staining, H&E and PD-L1 IHC. SKCM, Skin Cutaneous Melanoma; PRAD, Prostate Adenocarcinoma; LUSC, Lung Squamous Cell Carcinoma; NSCLC, Non-small cell lung carcinoma. In each realization, we show tissue masks taken from real biopsy slides (with their original RGB image). We compare the tissue masks to the ones produced by DEPAS and by a baseline DCGAN. The different types of tissue mask representations are projected into 2D via TSNE (right). We show that DEPAS provides tissue masks from a distribution that is closer to the real images than DCGAN's outputs (as quantified in Table 1).

Method   Dataset     KS ↓       KL ↓       FID ↓
DEPAS    PRAD (a)    1 (0.3)    1 (0.9)    1 (420.3)
DCGAN    PRAD (a)    ×2.6       ×42.8      ×20.0
DEPAS    SKCM (a)    1 (0.3)    1 (0.7)    1 (1006.6)
DCGAN    SKCM (a)    ×5.0       ×31.3      ×6.9
DEPAS    LUSC (a)    1 (0.2)    1 (0.6)    1 (151.3)
DCGAN    LUSC (a)    ×11.5      ×67.7      ×30.2
DEPAS    NSCLC (b)   1 (0.2)    1 (0.3)    1 (480.0)
DCGAN    NSCLC (b)   ×6.6       ×128.0     ×17.0

Table 1: Similarity metrics between DEPAS synthetic masks, real masks, and the current SOTA baseline. (a) and (b) denote H&E- and IHC-stained tissues, respectively. KS is the Kolmogorov–Smirnov test, KL is the Kullback–Leibler divergence, and FID is the Fréchet inception distance. Values are normalized by the DEPAS results, whose actual raw values are given in parentheses. The down arrow indicates lower is better. Figure 4: Examples of tissue masks and their corresponding histology RGB images for two types of cancers. H&E staining of skin cutaneous melanoma (left) and IHC staining of non-small cell lung carcinoma (right).
For each realization, we show mask-image pairs taken from real biopsy slides. We compare them to the pairs produced by DEPAS and by a standard DCGAN. The different types of RGB image representations are projected into 2D via TSNE (bottom). We show that DEPAS provides images from a distribution that is closer to the real images' distribution than DCGAN's outputs or the negative histology-control outputs taken from real histology RGB images of a different realization.

Method              Dataset     KS ↓       KL ↓       FID ↓
DEPAS               SKCM (a)    1 (0.3)    1 (0.4)    1 (592.5)
DCGAN               SKCM (a)    ×2.3       ×16.5      ×6.4
Pathology Control   SKCM (a)    ×1.8       ×8.8       ×9.5
Realistic Control   SKCM (a)    ×1.1       ×9.2       ×10.0
DEPAS               NSCLC (b)   1 (0.2)    1 (0.4)    1 (219.9)
DCGAN               NSCLC (b)   ×3.0       ×8.2       ×1.4
Pathology Control   NSCLC (b)   ×3.7       ×44.8      ×25.7
Realistic Control   NSCLC (b)   ×2.8       ×111.8     ×40.2

Table 2: Similarity metrics between synthetic images based on DEPAS synthetic masks, real histological images, and various baselines. (a) and (b) denote H&E- and IHC-stained tissues, respectively. KS is the Kolmogorov–Smirnov test, KL is the Kullback–Leibler divergence, and FID is the Fréchet inception distance. Values are normalized by the DEPAS results, whose actual raw values are given in parentheses. The down arrow indicates lower is better. We performed the same evaluation for all control batches at the RGB level. The results in Table 2 show that for both the H&E and IHC images, DEPAS had the best performance over all metrics; DEPAS's FID score for H&E images was better than DCGAN's by a factor of 6.36, and by a factor of 1.42 for IHC images. 4.3 Multi-Label DEPAS We further show the ability of DEPAS to generate synthetic multi-label discrete semantic masks on the IHC realization. This task is performed on the NSCLC dataset as before, but the tissue mask is divided into more detailed labels based also on its PD-L1 attributes. PD-L1 is a molecule expressed by tumor cells that enables them to evade the immune system's attack. Hence, a common immunotherapy treatment uses blocking antibodies that target PD-L1 to increase the immune system's effectiveness against the tumor cells. Evaluating the PD-L1 rate in the patients' biopsies is essential for determining the treatment type and its level. To achieve PD-L1-related labels, all IHC patches were annotated by expert pathologists. For every patch, each tissue pixel was assigned to one of four PD-L1 feature classes: Inflammation, PD-L1-, PD-L1+, or none of them. Two more classes were assigned using computer vision techniques. The Air class was assigned as before, where grayscale pixels with values higher than 235 were considered air. The Cells class was assigned to the darker shades of brown; empirically, these were captured in the RGB image representation where the green and blue channels were smaller than 200 and the red channel was higher than the other channels. After creating the multi-label ground truth, we trained the image translation model and DEPAS using the same methodology and hyperparameters, with only one exception: in the multi-label case, we train DEPAS by adjusting the Discrete Adaptive Block to produce a multi-channel feature map. In this case, we set the initial value of the temperature in (2) to 1 and divide it by 1.25 every 10 epochs. Examples of synthetic semantic masks generated by DEPAS, and their corresponding synthetic histology images, are presented in Fig.
Examples of synthetic semantic masks generated by DEPAS, and their corresponding synthetic histology images, are presented in Fig. 5A and visualized next to real examples. We projected the images' Inception representations via TSNE into a 2D space and show that the real images get mixed up with the synthetic ones. We also present representative pairs of real and synthetic images that had the smallest Euclidean distance in the 2D space (Fig. 5B). Furthermore, in terms of FID scores, the synthetic RGB images generated by the multi-label approach are 14% closer to the real images than the ones generated by the binary approach.

Figure 5: (A) Examples of a multi-label task that consists of PD-L1 tissue attributes as semantic labels. DEPAS's synthetic labels and images are shown alongside real PD-L1 examples as a reference. (B) A TSNE projection of Inception representations taken from both real and synthetic images (after being cropped into 224 X 224 pixels). The Autumn and Winter colormaps represent the synthetic and real image projections, respectively; brighter colors correspond to histology images containing more air. Several pairs of real and synthetic images with the smallest Euclidean distance between them are shown as well. (C) The same TSNE projection, but with a single coloring per data type to emphasize the mixture between the synthetic (red) and real (blue) images.

5 CONCLUSION

One of the main challenges of generating synthetic images of tissues is controlling the distribution of features within them. Paired GANs provide a good way to improve synthetic image quality by introducing semantic masks that account for the spatial structure of the tissue. However, unlike in other domains, such as autonomous vehicles or face recognition, methods for simulating or generating synthetic masks that represent the actual biological complexity of the image with high fidelity are lacking. A reasonable compromise is augmenting semantic information taken from real tissues to generate photorealistic histology images. Yet, this approach suffers from bounded scalability, since a limited amount of real data is required at inference.

Here we introduce an architecture for generating high-resolution binary masks of tissue structure that can be used as semantic prior knowledge for image translation models. Our work copes with the main challenge of generating binary synthetic images by adding noise along the different stages of the decoding blocks and adding annealing-temperature blocks to overcome the non-differentiability associated with binary masks while they are processed by a differentiable pipeline through the generator and discriminator. Our approach allows us not only to generate synthetic binary masks but also to produce multi-label masks, which are critical for many applications that require labeling different cellular regions (such as cancer cells). We show that our synthetic masks indeed capture the real cell distribution and spatial orientation within histology slides of various pathological realizations. Moreover, we show that the synthetic images that result from our masks resemble real histological images better than the baseline for two types of cancer subjected to H&E and IHC staining. Furthermore, we show that performing our pipeline with more detailed tissue information, reflected in the multi-label semantic masks, improves the quality of the synthetic images.
Overall, our work provides a state-of-the-art solution for the challenging task of generating synthetic histological images together with their semantic information, in a form that is both scalable and controllable.

ACKNOWLEDGMENT

The authors would like to thank Tanya Wasserman for her technical support and valuable discussions. The results shown here are in part based upon data generated by the TCGA Research Network: https://www.cancer.gov/tcga.", "introduction": "As the adoption of digitized histopathologic slide images became widespread, the use of Artificial Intelligence (AI) methods in the digital pathology field increased. In particular, computer vision and deep learning methods are used to automate and improve various tasks such as image analysis, diagnostic decision-making, and disease monitoring [1, 2, 3]. However, data limitations pose a major challenge in digital pathology and include issues related to data scarcity, variability, privacy, annotation, bias, quality, and labeling. Data scarcity and variability can make it difficult to train and evaluate computer algorithms for digital pathology, as there may not be enough data available for certain decision thresholds. Data bias is another concern, as digital pathology datasets may be biased toward certain populations, which can limit the generalizability of algorithms trained on them. Data labeling can also be subjective and dependent on the expertise of the labeler, leading to inaccuracies. An emerging solution to these challenges is generating synthetic images.

∗Corresponding author, e-mail: yoni.savir@technion.ac.il. 1Department of Physiology, Biophysics and System Biology, Faculty of Medicine, Technion Israel Institute of Technology, Haifa, Israel. 2Faculty of Computer Science, Technion Israel Institute of Technology, Haifa, Israel. 3Faculty of Industrial Engineering, Technion Israel Institute of Technology, Haifa, Israel. 4Division of Pathology, Rambam Health Care Campus, Haifa, Israel.

The field of generating synthetic images became more popular in recent years after the Generative Adversarial Network (GAN) [4] was introduced. In this approach, a "discriminator" model is designed to discriminate between real and fake data. A different model, coined the "generator", is trained to produce synthetic data that is fed to the discriminator during training. On the one hand, the generator is trained so that the discriminator cannot distinguish between the real data and the generated synthetic data; on the other hand, the discriminator is trained to discern between the two correctly. In effect, the generated data continually challenges the discriminator. In the classic approach to image generation (coined Vanilla GAN), the input to the generator is sampled from a given distribution and is then processed into a synthetic image. More advanced techniques, called Conditional GANs (C-GANs) [5], supply information about the required type of generated data and plug it into the different GAN models to control the type of generated data.
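As a generic illustration of this conditioning idea (not the architecture of any model discussed here), the toy generator below embeds a class label and concatenates it with the noise vector before decoding it into an image; all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy C-GAN-style generator: the class condition is embedded and
    concatenated to the noise vector before being decoded to an image."""
    def __init__(self, n_classes=4, z_dim=64, img_pixels=64 * 64):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 16)
        self.net = nn.Sequential(
            nn.Linear(z_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = self.embed(labels)                 # (B, 16)
        return self.net(torch.cat([z, cond], 1))  # (B, img_pixels)

# g = ConditionalGenerator()
# fake = g(torch.randn(8, 64), torch.randint(0, 4, (8,)))
```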
Some approaches, such as pix2pix [6], took this technique further and supplied the generator with more detailed information at the pixel level. In this approach, the generator receives a semantic label mask as an input, and each pixel is generated to belong to its corresponding label in the given semantic mask. This image translation approach has the advantage of yielding pairs of images and semantic labels that can be used in different tasks requiring such pairs (e.g., segmentation), unlike the classic approach, where the synthetic images lack semantic information. Yet, in some cases, semantic masks are scarce because they are complex to create. A special case is the generation of synthetic histology images, where the semantic masks consist of various tissue types and complicated patterns resulting from the complex nature of the tissue. A naïve solution uses tissue masks extracted from real histology images in the image translation pipeline, but the dependency on a limited set of real images during the generation process limits the number of available semantic masks. Thus, in this case, image translation models will not be scalable, since the semantic masks are an integral part of their pipeline, while Vanilla GANs are scalable because they depend only on a scalable sampling process.

In this study, we show how our dual-phase pipeline overcomes this tradeoff (Fig. 1) and generates pairs of synthetic histology semantic masks and images in a scalable design. We introduce DEPAS, a generative model that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality for three different organs: skin, lung, and prostate. Moreover, we show that these masks can be processed by pix2pixHD [7], a generative image translation model that supports high-resolution images, to produce photorealistic RGB tissue images (Fig. 1). We demonstrate it for two types of staining: H&E and immunohistochemistry. This pipeline, on the one hand, generates pairs of semantic masks and histology images, and on the other hand, is scalable since it does not require real masks during inference.

Figure 1: (A) Illustration of the tradeoff between image translation GANs and Vanilla GANs. The former generate synthetic images based on their semantic labels; in this case, scalability is bounded when the quantity of semantic labels is limited. On the other hand, Vanilla GANs lack semantic information but can produce an unlimited number of synthetic images. (B) Our platform resolves this challenge in the histology domain with a dual-phase generative system. The first step includes generating semantic masks of tissue labels using a novel Vanilla GAN architecture coined DEPAS. Then, the generated masks are processed by a paired image translation GAN (such as pix2pixHD) to produce the synthetic histology RGB image." }, { "url": "http://arxiv.org/abs/2205.13583v1", "title": "Harnessing Artificial Intelligence to Infer Novel Spatial Biomarkers for the Diagnosis of Eosinophilic Esophagitis", "abstract": "Eosinophilic esophagitis (EoE) is a chronic allergic inflammatory condition of the esophagus associated with elevated esophageal eosinophils. Second only to gastroesophageal reflux disease, EoE is one of the leading causes of chronic refractory dysphagia in adults and children. EoE diagnosis requires enumerating the density of esophageal eosinophils in esophageal biopsies, a somewhat subjective task that is time-consuming, thus reducing the ability to process the complex tissue structure.
Previous artificial intelligence (AI) approaches that aimed to improve histology-based diagnosis focused on recapitulating the identification and quantification of the area of maximal eosinophil density. However, this metric does not account for the distribution of eosinophils, or of other histological features, over the whole slide image. Here, we developed an artificial intelligence platform that infers local and spatial biomarkers based on semantic segmentation of intact eosinophils and basal zone distributions. Besides the maximal density of eosinophils (referred to as Peak Eosinophil Count [PEC]) and a maximal basal zone fraction, we identify two additional metrics that reflect the distribution of eosinophils and basal zone fractions. This approach enables a decision support system that predicts EoE activity and classifies the histological severity of EoE patients. We utilized a cohort that includes 1066 biopsy slides from 400 subjects to validate the system's performance and achieved a histological severity classification accuracy of 86.70%, sensitivity of 84.50%, and specificity of 90.09%. Our approach highlights the importance of systematically analyzing the distribution of biopsy features over the entire slide and paves the way towards a personalized decision support system that will assist not only in counting cells but can also potentially improve diagnosis and provide treatment prediction.", "authors": "Ariel Larey, Eliel Aknin, Nati Daniel, Garrett A. Osswald, Julie M. Caldwell, Mark Rochman, Tanya Wasserman, Margaret H. Collins, Nicoleta C. Arva, Guang-Yu Yang, Marc E. Rothenberg, Yonatan Savir", "published": "2022-05-26", "updated": "2022-05-26", "primary_cat": "cs.AI", "cats": [ "cs.AI", "cs.CV", "q-bio.QM" ], "main_content": "

2.1 Dataset and clinical scores

The dataset is part of the Consortium of Eosinophilic Gastrointestinal Disease Researchers (CEGIR) [22], a national collaborative network in the U.S. of 16 academic centers caring for adults and children with eosinophilic gastrointestinal disorders. The institutional review boards of the participating institutions approved this study via a central institutional review board at Cincinnati Children's Hospital Medical Center (CCHMC IRB protocol 2015-3613). Participants provided written informed consent. The dataset contains subjects with a history of EoE undergoing endoscopy (EGD) for standard-of-care purposes (n = 419). Distal, mid, or proximal esophageal biopsies (1-3 per anatomical site) per patient were placed in 10% formalin; the tissue was then processed and embedded in paraffin. Sections (4 µm) were mounted on glass slides and subjected to hematoxylin and eosin (H&E) staining. Slides were scanned on the Aperio scanner at 400X magnification and saved in SVS format. Each slide of esophageal tissue was analyzed by an anatomic pathologist who is a member of the CEGIR central pathology core. In addition to determining the peak eosinophil count per 400X HPF (PEC), the pathologist subjected each slide to eosinophilic esophagitis histological scoring system (EoE HSS) analysis to assess the severity (grade) and extent (stage) of a set of histological abnormalities using a 4-point scale (0 normal; 3 maximum change) [6]. These features included eosinophilic inflammation (EI), basal zone hyperplasia (BZH), dilated intercellular spaces (DIS), eosinophilic abscess (EA), eosinophil surface layering (SL), surface epithelial alteration (SEA), dyskeratotic epithelial cells (DEC), and lamina propria fibrosis (LPF) [6]. The BZH grade score is determined by the amount of the total epithelial thickness occupied by the basal zone, where 0 indicates that BZH is not present, 1 indicates that the basal zone occupies >15% but <33% of the total epithelial thickness, 2 indicates that the basal zone occupies 33-66% of the total epithelial thickness, and 3 indicates that the basal zone occupies >66% of the total epithelial thickness. The BZH stage score indicates the amount of the biopsy that exhibits any degree of BZH, where 0 indicates that BZH is not present, 1 indicates that <33% of the epithelium exhibits any BZH with grade >0, 2 indicates that 33-66% of the epithelium exhibits any BZH with grade >0, and 3 indicates that >66% of the epithelium exhibits any BZH with grade >0 [6].
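The BZH thresholds above can be read as simple scoring rules. The sketch below expresses them as functions of epithelial fractions; the function names and the handling of boundary values are illustrative assumptions, not the CEGIR pathologists' protocol.

```python
def bzh_grade(basal_fraction_of_epithelium):
    """BZH grade from the fraction of total epithelial thickness
    occupied by the basal zone (0..1), following the text's thresholds."""
    f = basal_fraction_of_epithelium
    if f <= 0.15:
        return 0  # BZH considered not present
    if f < 0.33:
        return 1
    if f <= 0.66:
        return 2
    return 3

def bzh_stage(fraction_of_epithelium_with_any_bzh):
    """BZH stage from the fraction of the epithelium showing any BZH
    (grade > 0), following the text's thresholds."""
    f = fraction_of_epithelium_with_any_bzh
    if f == 0:
        return 0
    if f < 0.33:
        return 1
    if f <= 0.66:
        return 2
    return 3

# bzh_grade(0.5) -> 2 ; bzh_stage(0.7) -> 3
```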
2.2 Semantic labeling

To train and validate the models, we labeled 23 patients' whole slide images (WSIs). The dataset consists of large WSIs with a median length and width of 150,000 and 56,000 pixels, respectively. We cropped each WSI into small patches with a size of 1200 X 1200 pixels. Patches with a small amount of tissue (less than 15% of the patch area) were filtered out. A total of n = 10,170 patches was used for semantic labeling. These patches were analyzed and annotated by an expert using VIA [23] and then verified by three different experts. For each patch, the intact eosinophils' centers and the basal zone area were marked. The result was two semantic masks: in the first, the pixels within a circle with a radius of 25 pixels around each intact eosinophil's center were labeled as Eos-Intact [21]; in the second, pixels within the marked basal zone polygons were labeled as BZ. That is, each pixel was classified as BZ, Eos-Intact, both, or none. In total, about 570 million pixels were labeled as BZ and about 78.47 million pixels were labeled as Eos-Intact. BZ was present in 8.6% of the images, covering, on average, 45.45% of the image area. Eos-Intact pixels were found in 22.8% of the images, with an average area fraction of 2.35%.

2.3 Semantic segmentation

We trained two models, one using the Eos-Intact masks and one using the BZ masks. For both models, the annotated patches were divided into two groups: 80% of the data was dedicated to training the segmentation model and the remaining 20% to testing it. The segmentation model was based on the UNet++ architecture [24]. It was developed in the PyTorch framework [25] and trained on a single NVIDIA GeForce RTX 2080 Ti GPU. During the training phase, the 1200 X 1200-pixel patches were divided into 448 X 448-pixel sub-patches with an overlap of 72 pixels between them. Different sub-patch sizes were tested, and this size was optimal in terms of precision and recall (see the segmentation metrics section). In addition, multiple hyperparameters were tested; the optimal configuration was a batch size of 5, a "Cosine Annealing" learning rate scheduler, and a 0.5 softmax threshold. The optimization loss function contains two weighted terms, Dice and binary cross-entropy (BCE). After exploring different weights, we applied weights of 1 and 0.5 to the Dice and BCE terms, respectively.
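A minimal PyTorch sketch of the weighted Dice + BCE objective described above (weights 1 and 0.5) is shown below for a single-class binary mask; it is an illustration under that assumption, not the authors' exact implementation (which uses a softmax output).

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, target, w_dice=1.0, w_bce=0.5, eps=1e-6):
    """Weighted combination of soft Dice loss and BCE.

    logits, target: tensors of shape (B, 1, H, W); target is a binary
    mask given as floats (0.0 / 1.0).
    """
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (union + eps)

    bce = F.binary_cross_entropy_with_logits(
        logits, target, reduction="none").mean(dim=(1, 2, 3))

    return (w_dice * dice + w_bce * bce).mean()

# loss = dice_bce_loss(model(x), y)  # y: binary Eos-Intact or BZ mask
```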
For inference, the test image was cropped into 448 X 448-pixel sub-patches as described above. To reduce segmentation noise, contiguous regions labeled as Eos-Intact or BZ that were smaller than 1800 pixels (for Eos-Intact) or 2007 pixels (1% of the sub-patch size, for BZ) were re-labeled as none.

2.4 Semantic metrics

To estimate the segmentation performance, we used the following metrics:

mIoU = \frac{1}{I \cdot C} \sum_{i} \sum_{c} \frac{TP_{i,c}}{TP_{i,c} + FP_{i,c} + FN_{i,c}}   (1)

mPrecision = \frac{1}{I \cdot C} \sum_{i} \sum_{c} \frac{TP_{i,c}}{TP_{i,c} + FP_{i,c}}   (2)

mRecall = \frac{1}{I \cdot C} \sum_{i} \sum_{c} \frac{TP_{i,c}}{TP_{i,c} + FN_{i,c}}   (3)

mSpecificity = \frac{1}{I \cdot C} \sum_{i} \sum_{c} \frac{TN_{i,c}}{TN_{i,c} + FP_{i,c}}   (4)

where the index c iterates over the different classes in the image and the index i iterates over the different images in the dataset. C is the total number of classes and I is the total number of images. TP, TN, FP, and FN denote the true-positive, true-negative, false-positive, and false-negative areas of each image, respectively.

Metric | Eos-Intact | BZ | Overall
mIoU (Equation 1) | 0.93 | 0.75 | 0.84
mPrecision (Equation 2) | 0.95 | 0.80 | 0.88
mRecall (Equation 3) | 0.97 | 0.94 | 0.95
mSpecificity (Equation 4) | 0.998 | 0.82 | 0.91

Table 1: Four segmentation metrics measured at the pixel level. IoU denotes the intersection over union between the ground truth and the prediction. Recall denotes the fraction of true-positive pixels among the total ground-truth pixels in the image, whereas Precision denotes the fraction of true-positive pixels among the predicted pixels. The fraction of true-negative pixels among the total negative pixels in the image is coined Specificity. mIoU, mRecall, mPrecision, and mSpecificity are obtained by averaging IoU, Recall, Precision, and Specificity, respectively, over the validation set. The metrics are presented for the Eos-Intact and BZ classes separately, in addition to their average per image as the overall score. The compared patch size is the network's input size, 448 X 448 pixels.

2.5 Calculating WSI AI scores

To evaluate the eosinophil and basal zone distribution within each WSI, we use an iterative process to scan the entire slide. At each step, an image the size of an HPF is processed. The area of an HPF corresponds to a size of 2144 X 2144 pixels (548 µm X 548 µm). The stride between consecutive HPFs is 500 pixels. Each HPF is divided into 25 sub-patches (448 X 448 pixels, corresponding to the network input size) with an overlap of 24 pixels. Each sub-patch is segmented, and the HPF segmentation mask is assembled from them; the identity of pixels in the areas overlapping between sub-patches is determined using an OR function. After segmentation, each HPF is assigned two local scores: the number of intact eosinophils [21] and the BZ area rate, which is the ratio of the number of BZ pixels in the HPF mask to the HPF size.
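This sliding-window pass can be sketched as follows: an HPF-sized window (2144 pixels, stride 500) is segmented in network-sized sub-patches (448 pixels, 24-pixel overlap) whose overlapping predictions are merged with a logical OR, and each window position contributes one value to a score map. The window and stride values follow the text; the helper names and the `segment_subpatch` / `score_fn` callables are assumptions.

```python
import numpy as np

HPF = 2144              # HPF window size in pixels (548 um x 548 um)
STRIDE = 500            # stride between consecutive HPFs
SUB = 448               # network input size
SUB_STRIDE = SUB - 24   # 24-pixel overlap between sub-patches

def segment_hpf(hpf_img, segment_subpatch):
    """Assemble an HPF mask from overlapping sub-patch predictions,
    merging overlaps with a logical OR."""
    mask = np.zeros(hpf_img.shape[:2], dtype=bool)
    for y in range(0, HPF - SUB + 1, SUB_STRIDE):
        for x in range(0, HPF - SUB + 1, SUB_STRIDE):
            pred = segment_subpatch(hpf_img[y:y + SUB, x:x + SUB])
            mask[y:y + SUB, x:x + SUB] |= pred.astype(bool)
    return mask

def scan_wsi(wsi, segment_subpatch, score_fn):
    """Slide an HPF-sized window over the WSI and record one local score
    per position (e.g. intact-eosinophil count, or BZ area rate); the
    pipeline runs this once per feature/model."""
    h, w = wsi.shape[:2]
    rows = []
    for y in range(0, h - HPF + 1, STRIDE):
        row = []
        for x in range(0, w - HPF + 1, STRIDE):
            mask = segment_hpf(wsi[y:y + HPF, x:x + HPF], segment_subpatch)
            row.append(score_fn(mask))
        rows.append(row)
    return np.array(rows)  # score map: one value per HPF position
```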
After scanning the entire WSI, we produce score maps for both features, an Intact-Eosinophils map and a BZ map, where every pixel in these maps represents the score of the matching HPF. Based on the score maps, we can produce four WSI scores (Figure 1C):

• Peak Eosinophil Count (PEC): The number of eosinophils in the HPF with the densest area of eosinophils within the WSI. This score is used in the clinic to diagnose active EoE [5, 21]; a patient with a PEC greater than or equal to 15 is considered to have active EoE. The EI grade score is a proxy for this measure.
• Spatial Eosinophil Count (SEC): The ratio of the number of HPFs with an Intact-Eosinophil count greater than or equal to 15 to the total number of HPFs in the feature map. The EI stage score is a proxy for this measure.
• Peak Basal Zone (PBZ): The maximum HPF BZ area rate. This score is the maximal density of basal cells per HPF in the WSI. The BZH grade score is a proxy for this measure.
• Spatial Basal Zone (SBZ): The ratio of the number of HPFs with a local BZ score greater than or equal to 15% to the number of tissue HPFs in the feature map. The BZH stage score is a proxy for this measure.

2.6 Classifying whole slide images

2.6.1 Features-based classification

We previously presented a pipeline for classifying WSIs using only the predicted PEC directly [21]. In this paper, we leverage the spatial information for both eosinophils and basal cells that is revealed by segmenting the entire WSI. We used this information to devise four WSI scores and to predict the histological severity condition of the patient (Figure 1D). We explored different machine learning models: support vector machine (SVM) and linear discriminant analysis (LDA). In addition, various architectures of multi-layer perceptron (MLP) were examined; specifically, all combinations of layer sizes 10, 20, 50, and 100 with up to four hidden layers. We used these types of classifiers because of their better capability to handle tabular data (in contrast to, for example, convolutional neural networks, which are designed for image or sequential data). The cohort contains 1066 WSIs that were not used for the segmentation training. Classifier training was done using 80% of the data, whereas the rest was used for validation. For each model, we repeated the training procedure 20 times with different random seeds for splitting the data and reported the median results.

Figure 2: Examples of our platform's semantic segmentation. (A-I) The size of each image is 1200 X 1200 pixels. Each panel's left-hand side is colored according to the ground truth as annotated by trained experts; the right-hand side is colored with the corresponding network prediction mask. Basal zone (BZ) pixels are colored red, intact eosinophil (Eos-Intact) pixels are colored green, and pixels associated with both (that is, eosinophils within a BZ area) are colored yellow. (A-C) The upper row shows examples with only one label or none. (D) An example of an image that contains both a small number of basal cells and intact eosinophils. (E) An example of an image with a large basal zone and a small number of intact eosinophils. (F) An example that contains a small area of basal zone and a large number of intact eosinophils. (G-I) The bottom row displays examples with large basal zones and also a large number of intact eosinophils.

2.6.2 Multi-classification

To improve the histological severity classification performance, different classifiers were used for regions having different eosinophil densities. We define two regions of PEC scores,

classifier = \begin{cases} C_{in}, & (PEC \ge 15 - \Delta) \text{ and } (PEC \le 15 + \Delta) \\ C_{out}, & (PEC < 15 - \Delta) \text{ or } (PEC > 15 + \Delta) \end{cases}   (5)

where C_{in} and C_{out} denote the classifier inside and outside of the window, respectively. The hyperparameter Δ defines the window size. The training procedure is as described above.
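A small sketch of the routing rule in Equation 5 is given below: the in-window classifier C_in handles slides whose PEC lies within Δ of the clinical cutoff of 15, and C_out handles the rest. The classifier objects are placeholders for fitted estimators; Δ = 9 corresponds to the [6, 24] window reported in the results.

```python
def predict_severity(features, clf_in, clf_out, delta=9, cutoff=15):
    """Route a slide to the in-window or out-of-window classifier based
    on its PEC score (Equation 5). `features` is the vector
    (PEC, SEC, PBZ, SBZ); clf_in / clf_out are fitted classifiers with a
    scikit-learn-style predict() method."""
    pec = features[0]
    clf = clf_in if (cutoff - delta) <= pec <= (cutoff + delta) else clf_out
    return clf.predict([features])[0]
```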
To avoid bias, the contribution of each region to the 80%-20% split is proportional to the region size, ensuring that each region contributes points to both the training and validation sets. We examined Δ values in the range of [1, 12].

Figure 3: Examples of two different WSIs (left) and their corresponding score maps, with the scale for each score defined (middle, right). Each pixel in these maps represents one HPF, and the color of the pixel indicates the respective score. From the Eos-Intact score map (middle), we extracted the peak eosinophil count (PEC) and spatial eosinophil count (SEC). From the basal zone (BZ) score map (right), we computed the peak basal zone (PBZ) and spatial basal zone (SBZ) scores. (A) Example of a WSI of a biopsy obtained from an EoE patient with inactive disease (PEC = 10). (B) Example of a biopsy obtained from a patient with active EoE (PEC = 245).

3 RESULTS

3.1 Local segmentation results

Figure 2 illustrates a few examples of our platform's semantic segmentation compared with ground-truth labeling by a trained researcher. Table 1 summarizes the segmentation metrics (Equations 1-4) over the whole validation set.

3.2 WSI features scores

One of the main advantages of the described approach is that it allows scoring that is based not only on a limited number of regions probed by the pathologist but on the entire whole slide image (Fig. 3). To process the entire whole slide image, we used a convolution-like sliding-window scan of the slide with HPF-sized windows and a stride of about 1/4 of the HPF size (Subsection 2.5). We computed the score maps for 1066 WSIs from 400 patients that were not part of the semantic segmentation training and validation sets. The pipeline produces two feature-score maps for each WSI, one for the Eos-Intact score and the second for the BZ score. Figure 3 shows examples of the two feature score maps computed from two different WSIs. We computed four scores based on the semantic segmentation of the WSI; these included two local ones (peak eosinophil count [PEC] and peak basal zone [PBZ]) and two global ones (spatial eosinophil count [SEC] and spatial basal zone [SBZ]) (Subsection 2.5). We compared the different WSI scores with the relevant HSS scores estimated by the pathologists: we compared PBZ, SBZ, PEC, and SEC with the HSS BZH grade, HSS BZH stage, HSS EI grade, and HSS EI stage, respectively (Subsection 2.5). Our scores showed a significant correlation with the human-estimated metrics (Fig. 4A-D). We then analyzed the relationship between the two types of biomarkers: the number of eosinophils and the area of the basal zone. It has been suggested that these features are correlated [6]. A standard condition for classifying a patient as having active EoE is having a PEC greater than or equal to 15. We show that the PBZ distribution of non-active patients has significantly lower values than the PBZ score distribution of the active patients (Fig. 4E). A similar trend is observed when analyzing the SBZ distribution (Fig. 4F). Yet, there are still patients with high PEC scores and low PBZ / SBZ scores, and vice versa. This raises the question of whether a combination of basal zone-based metrics can better predict the patient's clinical status and treatment outcome.
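For reference, the four WSI scores compared above can be derived from the two HPF-level score maps as sketched below, using the thresholds of Subsection 2.5; this is an illustration, not the authors' code.

```python
import numpy as np

def wsi_scores(eos_map, bz_map, tissue_mask=None):
    """Derive the four WSI-level scores from the HPF score maps.

    eos_map: intact-eosinophil count per HPF position.
    bz_map:  BZ area rate (0..1) per HPF position.
    tissue_mask: optional boolean map marking HPFs that contain tissue.
    """
    if tissue_mask is None:
        tissue_mask = np.ones_like(bz_map, dtype=bool)

    pec = eos_map.max()                          # peak eosinophil count
    sec = (eos_map >= 15).sum() / eos_map.size   # HPFs with count >= 15 / all HPFs
    pbz = bz_map.max()                           # peak basal zone rate
    sbz = (bz_map[tissue_mask] >= 0.15).mean()   # tissue HPFs with BZ >= 15%
    return {"PEC": pec, "SEC": sec, "PBZ": pbz, "SBZ": sbz}
```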
Input (WSI AI feature scores) | Output (classification model results, med / std)
PEC | SEC | PBZ | SBZ | SVM | LDA | MLP
+ | + |  |  | 0.8364 / 0.0247 | 0.75 / 0.0787 | 0.8388 / 0.027
 |  | + | + | 0.7991 / 0.0236 | 0.8061 / 0.0227 | 0.806 / 0.0227
+ | + | + | + | 0.8341 / 0.0233 | 0.8155 / 0.0208 | 0.8505 / 0.0285

PEC, Peak Eosinophil Count; SEC, Spatial Eosinophil Count; PBZ, Peak Basal Zone; SBZ, Spatial Basal Zone; SVM, Support Vector Machine; LDA, Linear Discriminant Analysis; MLP, Multi-Layer Perceptron.

Table 2: Classification results of multiple models (SVM, LDA, and MLP) with different combinations of input features (PEC, SEC, PBZ, and SBZ); a "+" marks the features used as input in each row. Each model was trained and validated 20 times with different train-validation random splits; the median (med) results are reported with the standard deviation (std).

3.3 Histological severity classification

The naive approach for diagnosing a patient's histological severity condition uses only the PEC information: if the patient's PEC is greater than or equal to 15, the patient is considered to have active EoE. Similar criteria are also applied to determine whether a patient who underwent treatment responded and is in remission. Recent studies suggested that using basal zone histological information improves the estimation of the disease's histological severity. For example, it was suggested that patients with low PEC values, i.e., greater than 0 but less than 15, but with basal zone hyperplasia, would not be considered as patients in remission [7]. To test the performance of our pipeline in integrating all four WSI scores, we used as the ground truth (GT) a standard clinical histological severity metric that defines a histologically severe patient as one who is not in histologic remission, i.e., one that has a PEC greater than or equal to 15 or an HSS total score of more than 3 [7]. This metric is more stringent, when examining whether a patient is in remission or not, than taking into account only the PEC score.

First, as a baseline classifier, we calculated the accuracy of the histological severity classification when it was based only on the PEC score. The best accuracy (83.3%) was obtained when the threshold criterion was PEC = 6. We recently showed that when taking only PEC as a metric for classification of the patient state (i.e., active EoE vs. non-active EoE), the AI-based PEC score provides a classification accuracy of 94.75%. Moreover, the optimal PEC threshold that provided the best accuracy in that case was 15 [21], the same as the gold standard threshold [5]. Thus, the current results suggest that, to compensate for the cases in which low PEC values are still considered histologically severe, the system converges to tighter PEC criteria for histological severity classification.

Next, we trained a classifier that takes into account all four metrics we calculated from the WSI score maps (i.e., PEC, SEC, PBZ, and SBZ). We used several training approaches: support vector machine (SVM), linear discriminant analysis (LDA), and multi-layer perceptron (MLP).
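A minimal scikit-learn sketch of this comparison is shown below: each model is fitted on the four WSI scores over repeated random 80%-20% splits and the median validation accuracy is reported. The hyperparameters are illustrative assumptions, except for the (20, 50, 100) MLP, which mirrors the best configuration reported below.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_models(X, y, n_repeats=20):
    """X: (n_slides, 4) array of [PEC, SEC, PBZ, SBZ]; y: severity labels."""
    models = {
        "SVM": lambda: SVC(),
        "LDA": lambda: LinearDiscriminantAnalysis(),
        "MLP": lambda: MLPClassifier(hidden_layer_sizes=(20, 50, 100),
                                     max_iter=2000),
    }
    results = {}
    for name, make in models.items():
        accs = []
        for seed in range(n_repeats):
            X_tr, X_va, y_tr, y_va = train_test_split(
                X, y, test_size=0.2, random_state=seed)
            clf = make().fit(X_tr, y_tr)
            accs.append(accuracy_score(y_va, clf.predict(X_va)))
        results[name] = (np.median(accs), np.std(accs))
    return results
```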
Figure 4: Correlations among the different score types. (A-D) Comparison of the computed scores with the HSS scores. In the HSS scoring method for BZH grade, BZH stage, EI grade, and EI stage, each score is an integer between zero and three. Each panel depicts a violin plot that shows the distribution of the computed WSI scores (vertical axis) for each HSS score that is the appropriate proxy (horizontal axis). The white circle indicates the median value, and the black bar indicates the standard deviation. There is a significant correlation between the computed scores and their HSS counterparts. (E-F) Histograms of the basal zone-related metrics PBZ (E) and SBZ (F) for active (PEC ≥ 15) and non-active (PEC < 15) patients. Both the PBZ and SBZ score distributions of non-active patients have significantly lower values than those of the active patients (Kolmogorov–Smirnov test, P-value << 0.0001).

The best results were obtained using an MLP with three hidden layers of 20, 50, and 100 neurons, respectively. Integrating all the metrics yields an improvement in accuracy to 85.05%. Moreover, the false alarm rate decreased by about 20% compared to the baseline classifier, whereas the miss rate decreased by about 5% (Fig. 5). A possible factor that may impede the prediction performance is the fact that our data contain patients with a large range of eosinophil counts. To further improve the prediction, we took a multi-classification approach in which patients with a PEC level near the decision threshold are classified separately from patients whose PEC level is far from it. The best results were achieved when patients with PEC values within the range [6, 24] were analyzed separately (Subsection 2.6.2). This approach led to an accuracy of 86.70% and a significant reduction in the false-alarm rate to 9.91% (Fig. 5). In this case, the best results were given by an MLP with three hidden layers of size 100, 20, and 100, respectively, for both classifiers.

Figure 5: Classification performance of the different models. We examined a few different classification approaches (left inset): a baseline in which the classification is based only on the PEC score (yellow, ROC curve in purple), a trained classifier that accounts for all four WSI scores (orange), and a multi-classification approach that separates patients close to the decision threshold from those far from it (blue). A spider plot (right inset) depicts the performance of the different models. Accounting for all the AI WSI scores significantly improves the classification performance. The multi-classification approach that separates patients near the decision threshold improves the performance even more.

To gain insight into the role of each of our four WSI scores, we explored the effect of training a classifier with a limited subset of them (Table 2). In all configurations, the best accuracy was obtained by the MLP model. As expected, the highest classification score was achieved when we used all four WSI AI feature scores. Yet, accounting only for the Eos-Intact scores (PEC and SEC) provides better accuracy than using only the BZ scores (PBZ and SBZ).

4 DISCUSSION

Biopsy-based diagnosis often requires the identification of features that are on the single-cell scale. In the case of EoE, the diagnosis procedure involves counting eosinophils and estimating their density. As a typical whole slide image contains at least tens of high-power fields, gold standard scores usually do not account for the entire feature distribution; in the case of EoE, the gold standard score takes into account only the maximal density of cells.
One of the promises of digital pathology, besides automating manual tasks, is the ability to process the entire WSI and infer novel biomarkers that capture the spatial distribution of the relevant features. In this work, we introduce an artificial intelligence system that infers novel local and spatial biomarkers based on semantic segmentation of intact eosinophils and the basal zone. This approach enables a decision support system that takes into account information from the entire WSI and classifies EoE patients' histological severity.

In previous work [21], we introduced a platform that infers the maximal eosinophil density and, based on that, predicts whether a patient has active disease or not with an accuracy of 94.75%. Here, we develop a platform that not only recapitulates the metric used by the pathologists but also provides novel biomarkers. Besides a metric that captures the maximal density of eosinophils (PEC) and the maximal basal zone fraction (PBZ), we suggest two additional metrics that reflect the distribution of eosinophils and basal zone fractions (SEC and SBZ, respectively). To test the platform, we utilized a cohort that includes 1066 biopsy slides from 400 subjects. Whereas the decision of whether EoE is active or not depends on a gold standard cutoff of 15 eosinophils per high-power field, the histological severity score (mainly used to estimate whether a patient was in histologic remission after a treatment) also accounts for the basal zone properties. Indeed, using only a PEC greater than or equal to 15 as a threshold to predict histological severity yields an accuracy of only 78.97%. The PEC cutoff that provides the best accuracy for histological severity, which was 83.3%, is 6 eosinophils/HPF. This reflects the fact that adding the basal zone criteria results in a stronger criterion for the PEC.

To improve the performance, we used a few machine learning approaches that take our metrics as input. We show that taking the eosinophil metrics alone yields an accuracy of 83.4%, whereas taking the basal zone metrics alone gives an accuracy of 80.6%. Putting all the metrics together gives an accuracy of 85.05%. That is, using all the metrics together gives better performance than each of the metrics alone, and also better than the naïve approach of changing the PEC cutoff. Finally, we also constructed a multi-classifier approach that is based on the fact that patients around the PEC = 15 cutoff are more prone to errors. Altogether, our platform yields a classification accuracy of 86.70%, sensitivity of 84.50%, and specificity of 90.09%. Our approach highlights the importance of systematically analyzing the distribution of biopsy features over the entire slide image and putting together metrics based on them. Our platform paves the way towards a personalized decision support system that will assist not only in counting cells but also in providing treatment prediction.

Author Contributions

YS and MER conceived and designed the research. YS and AL designed the pipeline. YS, AL, EA, and ND designed and coded the platform code. AL, EA, ND, and TW analyzed the data and validated it. AL, EA, ND, and YS performed all the mathematical analyses. GO and JC contributed to the pipeline clinical aspects, annotated, and validated the segmentation data. GO and JC organized and analyzed the data. MC, NA, and GY annotated the CEGIR slides. MC supervised the data annotation.
MC, MR, NA, and GY contributed to the pipeline clinical aspects. YS, AL, EA, ND, TW, and JC wrote the draft of the paper, which was reviewed, modified, and approved by all other authors.

Funding

YS was supported by Israel Science Foundation #1619/20, the Rappaport Foundation, and the Prince Center for Neurodegenerative Disorders of the Brain 3828931. M.E.R. was supported by NIH R01 AI045898-21, the CURED Foundation, and the Dave and Denise Bunning Sunshine Foundation. CEGIR (U54 AI117804) is part of the Rare Disease Clinical Research Network (RDCRN), an initiative of the Office of Rare Diseases Research (ORDR), NCATS, and is funded through collaboration between NIAID, NIDDK, and NCATS. CEGIR is also supported by patient advocacy groups including the American Partnership for Eosinophilic Disorders (APFED), the Campaign Urging Research for Eosinophilic Diseases (CURED), and the Eosinophilic Family Coalition (EFC). As a member of the RDCRN, CEGIR is also supported by its Data Management and Coordinating Center (DMCC) (U2CTR002818).", "introduction": "Eosinophilic esophagitis (EoE) is a chronic immune system disease associated with esophageal tissue inflammation and injury, characterized by a large number of eosinophils in the lining of the esophagus, called the esophageal mucosa [1]. EoE is allergy-driven and mainly caused by a reaction to food [2]. The damaged esophageal tissue leads to symptoms such as pain and trouble swallowing [3]. In particular, EoE is becoming a more common cause of dysphagia in adults and of vomiting, failure to thrive, and abdominal pain in children [3]. EoE can be treated by dietary restriction or topical steroids, and in more severe conditions an endoscopic dilation intervention, specifically stricture dilation, is used.

Currently, the diagnosis of EoE relies on performing an upper endoscopy and obtaining esophageal mucosal biopsies. The hematoxylin and eosin (H&E) stained slides [4] are examined by pathologists. The physicians typically manually examine the slide using a microscope, identify the area of the tissue with the greatest eosinophil density, and count the number of intact eosinophils in that high-power field (HPF), i.e., the peak eosinophil count (PEC). The gold standard histologic criterion, to date, is to define patients with EoE as having active disease if their PEC ≥ 15 [5]. Yet, the PEC score captures only the maximal eosinophil count and not other properties, such as the distribution of the eosinophils within the tissue, and it does not account for other cellular features that are captured by the EoE histology scoring system (EoEHSS) [6]. This method includes eight features that are relevant to EoE and accounts not only for the maximal severity of these features but also for their distribution. This includes, for example, quantifying the percentage of HPFs within the slide that exceed the threshold of ≥ 15 eosinophils. However, estimating such a metric visually poses a significant challenge. Another example of the importance of accounting for features in addition to the maximal eosinophil count is the development of a histological severity score that was used to diagnose remission (EoEHRS) [7]. In this case, both PEC < 15/HPF and total grade and stage scores from all EoEHSS features ≤ 3 are required to define remission. Whereas processing the features of the entire whole slide improves diagnostic metrics, current manual approaches limit it.
Counting PEC and scoring EoE histology is time-consuming, requires trained personnel, and can lead to variability between pathologists upon EoE biopsy diagnosis [8, 5, 9]. Hence, in recent years, considerable effort has been dedicated to building a robust and trustworthy process for inferring pathological biomarkers in health and disease. This includes harnessing machine learning in general and deep learning specifically [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. We have recently applied a dual approach towards diagnosing EoE: the first assigns a global label to the pathology images based on the patient's condition [20]; the second is based on segmenting and counting inflammatory cells, such as intact and not-intact eosinophils, for EoE biopsy diagnosis using a deep convolutional neural network (DCNN) [21].

Figure 1: Artificial intelligence pipeline for diagnosing whole slide images (WSIs) and predicting disease activity of patients with eosinophilic esophagitis (EoE). (A) First, we analyze the WSI with a high-power-field (HPF)-sized kernel. (B) For each HPF, we segment intact eosinophils (Eos-Intact) and basal zone (BZ) areas to obtain a local score for both features. (C) Once we have analyzed the entire WSI, we extract four biomarker scores that depend on the spatial distributions of the eosinophils and the basal zone. (D) We use these four biomarkers to predict the histological severity of the patients' conditions.

Here, we developed an artificial intelligence (AI) approach using machine learning for extracting novel biomarkers and used it to predict the histological severity condition (Figure 1). The pipeline has state-of-the-art segmentation performance, with a mean intersection over union (mIoU) score of 83.85% based on the basal zone (BZ) and intact eosinophil (Eos-Intact) features. We show that the derived biomarkers significantly correlate with manually obtained HSS scores. Using a cohort of 1066 biopsy slides from 400 patients, we demonstrate that the AI biomarkers estimate histological severity, achieving an accuracy of 86.70%, sensitivity of 84.50%, and specificity of 90.09%." } ] }, "edge_feat": {} } }