diff --git "a/abs_29K_G/test_abstract_long_2405.02801v2.json" "b/abs_29K_G/test_abstract_long_2405.02801v2.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.02801v2.json" @@ -0,0 +1,432 @@ +{ + "url": "http://arxiv.org/abs/2405.02801v2", + "title": "Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models", + "abstract": "In recent years, AI-Generated Content (AIGC) has witnessed rapid\nadvancements, facilitating the generation of music, images, and other forms of\nartistic expression across various industries. However, researches on general\nmulti-modal music generation model remain scarce. To fill this gap, we propose\na multi-modal music generation framework Mozart's Touch. It could generate\naligned music with the cross-modality inputs, such as images, videos and text.\nMozart's Touch is composed of three main components: Multi-modal Captioning\nModule, Large Language Model (LLM) Understanding & Bridging Module, and Music\nGeneration Module. Unlike traditional approaches, Mozart's Touch requires no\ntraining or fine-tuning pre-trained models, offering efficiency and\ntransparency through clear, interpretable prompts. We also introduce\n\"LLM-Bridge\" method to resolve the heterogeneous representation problems\nbetween descriptive texts of different modalities. We conduct a series of\nobjective and subjective evaluations on the proposed model, and results\nindicate that our model surpasses the performance of current state-of-the-art\nmodels. Our codes and examples is availble at:\nhttps://github.com/WangTooNaive/MozartsTouch", + "authors": "Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, Shuchang Liu", + "published": "2024-05-05", + "updated": "2024-05-07", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "eess.AS" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "In recent years, AI-Generated Content (AIGC) has witnessed rapid\nadvancements, facilitating the generation of music, images, and other forms of\nartistic expression across various industries. However, researches on general\nmulti-modal music generation model remain scarce. To fill this gap, we propose\na multi-modal music generation framework Mozart's Touch. It could generate\naligned music with the cross-modality inputs, such as images, videos and text.\nMozart's Touch is composed of three main components: Multi-modal Captioning\nModule, Large Language Model (LLM) Understanding & Bridging Module, and Music\nGeneration Module. Unlike traditional approaches, Mozart's Touch requires no\ntraining or fine-tuning pre-trained models, offering efficiency and\ntransparency through clear, interpretable prompts. We also introduce\n\"LLM-Bridge\" method to resolve the heterogeneous representation problems\nbetween descriptive texts of different modalities. We conduct a series of\nobjective and subjective evaluations on the proposed model, and results\nindicate that our model surpasses the performance of current state-of-the-art\nmodels. Our codes and examples is availble at:\nhttps://github.com/WangTooNaive/MozartsTouch", + "main_content": "INTRODUCTION In recent years, the intersection of artificial intelligence (AI) and creative arts has witnessed remarkable advancements [2], leading to the emergence of novel techniques and systems capable of producing music[1, 3, 24], images[21\u201323], and other forms of artistic expression[19] in a wide range of industries. 
With the remarkable advancements in AI-Generated Content (AIGC), there is a growing belief that it heralds a new era in AI and will have a substantial influence across the globe. However, current music generation models, when tasked with image-to-music synthesis, encounter notable limitations. These models often struggle to accurately capture the ambiance and underlying emotions conveyed by the visual input. While they may produce music that aligns with the visual elements, the nuanced details and subtle cues present in the image are frequently lost in translation. This shortfall hampers the ability of existing systems to truly evoke the intended atmosphere and sentiment of the imagery, thereby limiting their effectiveness in multi-modal creative endeavors. It is evident that there exists a gap in current state-of-the-art models concerning their proficiency in leveraging visual cues to inform the musical composition process. Natural language serves as a powerful intermediary, demonstrating significant potential in bridging different sensory modalities. Designed to interact directly with humans, large language models (LLMs) typically comprise a vast number of parameters and are trained on extensive datasets, granting them powerful comprehension and reasoning capabilities [8]. Harnessing these advantages, researchers have employed LLMs to achieve semantic understanding across multiple modalities. Despite the significant strides made in AI-driven creativity, a compelling question arises: how can we harness the formidable capabilities of LLMs to empower multi-modal tasks such as image-to-music synthesis? This inquiry serves as the focal point of our investigation, wherein we seek to elucidate the seamless integration of LLMs into the process of generating music inspired by visual content. In this paper, we present Mozart\u2019s Touch, a multi-modal music generation framework that harnesses the power of Large Language Models (LLMs) and pre-trained models to generate music based on visual information. An overview of the architecture is depicted in Figure 1. Mozart\u2019s Touch offers multiple advantages for image-to-music generation: by leveraging the deep understanding and generalizable knowledge of LLMs to interpret visual elements accurately, it differs from previous multi-modal end-to-end music generation methods (e.g., CoDi [26] and M2UGen [10]). Unlike traditional approaches, it requires no training of music generation models or fine-tuning of LLMs, conserving computational resources and ensuring efficiency. Moreover, Mozart\u2019s Touch utilizes clear, interpretable prompts for greater transparency throughout the whole process, which improves overall framework explainability. Our contributions are summarized as follows: \u2022 We introduce the Mozart\u2019s Touch framework, an innovative integration of Large Language Models (LLMs) for multi-modal music generation. Departing from traditional end-to-end paradigms, this framework harnesses the power of LLMs to synthesize music aligned with visual inputs. \u2022 We offer a new perspective on leveraging LLMs for multi-modal generation tasks. 
Our framework showcases a novel application of LLMs in text-to-music generation , demonstrating the potential of LLMs in understanding and bridging different sensory modalities and empowering creative processes. \u2022 We assess Mozart\u2019s Touch on the imageand video-to-audio dataset MUImage and MUVideo [11] , utilizing both objective and subjective metrics. Comparative evaluation results show that our approach outperforms existing state-of-theart methods. This experiment demonstrates the effectiveness of our framework and its potential as a new baseline benchmark for future works in the domain. 2 RELATED WORK 2.1 Multi-modal Large Language Model (MLLM) Due to the prevalence of researches in Large Language Models(LLM), the combination of LLM and models in other modalities has also been a rising research hot spot, leading to the new field of MLLM. According to this survey [27] , the key applications of MLLM includes Multi-modal Instruction Tuning (M-IT), Multi-modal InContext Learning (M-ICL), Multi-modal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning (LAVR). For Mozart\u2019s Touch, we employ Modality Bridging technology, utilizing natural language as an intermediary medium and leveraging LLM to bridge the modality gap. VideoChat-Text [15], for example, is an end-to-end chatcentric video understanding system, which uses pre-trained vision models to extract visual information such as actions and enriches the descriptions using a speech recognition model, which are all represented as textual information as a bridge. 2.2 Image Captioning Image captioning, which is the process of generating descriptive text (captions) that accurately and relevantly capture the content of an image, is a typical multi-modal task requiring both abilities of visual understanding and natural language generation. [25] The field of image captioning has seen significant advancements, such as CLIP [20] and BLIP [14] model. CLIP is developed by OpenAI that has revolutionized the way computers understand images and text, which efficiently learns visual concepts from natural language supervision. The main idea of CLIP is to align texts and images in the feature domain without predetermined labels for specific object categories by training on a large corpus of image-text pairs collected from the Internet. BLIP is another multi-modal framework which transfers flexibly to both vision-language understanding and generation tasks. To pre-train a unified model with both understanding and generation capabilities, they propose multi-modal mixture of encoder-decoder (MED) and achieve great performance across multiple tasks, such as image captioning. 2.3 Multi-Modal Music Generation The advent of Transformer and diffusion models has promoted the development of music generation models. Many impressive works emerged in recent years, such as MusicLM [1], MusicGen [3] , Noise2Music [9] and AudioLDM 2 [17] . MusicLM and MusicGen both consist of autoregressive decoder to generate music. MusicLM can generate high-quality music based on descriptive text such as emotions, styles and instruments. Noise2Music and AudioLDM 2 use diffusion models to generate music based on text that transcends fine-grained semantics and can reach deeper emotions. 
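The descriptive-text-to-music interface that MusicGen exposes, and that Section 3.3 later builds on, can be sketched in a few lines. The snippet below assumes the publicly released audiocraft package and the facebook/musicgen-medium checkpoint named later in the paper; the prompt (borrowed from a Table 2 example output), duration, and file naming are illustrative choices rather than the paper's exact inference setup.

```python
# Hedged sketch of text-to-music generation with the released MusicGen model.
# Prompt text, duration, and output names are illustrative assumptions.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-medium")
model.set_generation_params(duration=10)  # seconds of audio per clip

prompts = [
    "Pop dance track with catchy melodies, tropical percussion, and upbeat rhythms"
]
wav = model.generate(prompts)  # tensor of shape (batch, channels, samples)

# Write each generated clip to disk with loudness normalization.
for i, clip in enumerate(wav):
    audio_write(f"sample_{i}", clip.cpu(), model.sample_rate, strategy="loudness")
```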
However, these works above all take text or audio as input to generate music, ignoring other modality information, such as image \fMozart\u2019s Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models MM\u201924, October 28 November 1, 2024, Melbourne, Australia. and video. Notable exceptions include the CoDi [26] and M2UGen [11], which allow inputs with more modalities. CoDi(Composable Diffusion) can generate output modalities in parallel from any combination of input modalities. It first use individual modality-specific diffusion models for images, videos, audio, and texts respectively to build a shared multimodal space, and then uses Latent Alignment [4] to achieve joint multi-modal generation. M2UGen is an LLMbased multi-modal music understanding and generation framework. It consists of multi-modal feature encoders, multi-model understanding adapters, bridging LLM, and generation modules to process inputs from multiple modalities such as text, images, and videos, and generate corresponding music. 3 MOZART\u2019S TOUCH Mozart\u2019s Touch is a collaborative multi-modal AIGC framework structured into a sequential integration of three core modules: a Multi-modal Captioning Module, a LLM Understanding & Bridging Module based on LLMs and Music Generation Module. The overall architecture is illustrated in Figure 1. 3.1 Multi-modal Captioning Module The Multi-modal Captioning Module is responsible to encode and understand users\u2019 input, providing textual descriptions for multimodality. This module employs state-of-the-art techniques ViT [5] and BLIP [14] model to analyze images and videos and generate descriptive captions. When users input images and videos without prompting, Our framework can also performs well to generate music that aptly complements the theme. However, in consideration of customization, we also permit users to input textual prompts to guide the music generation process. 3.1.1 Image Captioning Process. For image inputs, we leverage the capabilities of Vision Transformer (ViT) and BLIP-base modules, implemented by the clipinterrogator, to analyze and generate descriptions of the images. This process involves interpreting the visual content of an image \ud835\udc3c and converting it into a image caption description \ud835\udc37caption. Given an input image \ud835\udc3c, the framework generates a caption description \ud835\udc37caption : \ud835\udc37caption = \ud835\udc53BLIP(\ud835\udc3c) (1) where \ud835\udc53BLIP denotes the BLIP model to convert images into descriptive texts. The generated image caption description \ud835\udc37caption serves as input for the subsequent process. 3.1.2 Video Process. For video inputs, we employ a two-step process to handle and interpret the content. Initially, Video-BLIP2-Preprocessor tool is used to sample frames from the video \ud835\udc49, generating a set of frames {\ud835\udc39\ud835\udc56}. Each frame \ud835\udc39\ud835\udc56is then processed to generate a textual description \ud835\udc37\ud835\udc56using the BLIP model, similar to the image process. This process can be formulated as: {\ud835\udc37\ud835\udc56} = {\ud835\udc53BLIP(\ud835\udc39\ud835\udc56)} (2) where \ud835\udc53BLIP denotes the BLIP model to convert frames into descriptive texts. 
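A minimal stand-in for f_BLIP in Eqs. (1)-(2) is sketched below using a BLIP-base checkpoint from Hugging Face. The paper itself accesses ViT/BLIP through clip-interrogator and a Video-BLIP2 preprocessor, so the checkpoint name and decoding settings here are assumptions for illustration only.

```python
# Hedged stand-in for f_BLIP in Eqs. (1)-(2): caption one image (or one sampled
# video frame) with a BLIP-base checkpoint from Hugging Face transformers.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def blip_caption(image_path: str) -> str:
    """Return a descriptive caption D_caption for a single image I."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# For a video V, the same function is applied to every sampled frame F_i,
# yielding the set of frame captions {D_i} used in Eq. (2).
```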
Subsequently, to synthesize a video caption description \ud835\udc37caption of the entire video, we aggregate the frame descriptions {\ud835\udc37\ud835\udc56} and process them through Large Language Models (LLMs) to interpret and condense the video\u2019s visual and thematic content into a coherent textual representation. This process can be represented as: \ud835\udc37caption = \ud835\udc53LLM({\ud835\udc37\ud835\udc56}|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc63\ud835\udc56\ud835\udc51\ud835\udc52\ud835\udc5c) (3) where \ud835\udc53LLM denotes the LLM to integrate and interpret the set of frame descriptions into a single video description \ud835\udc37caption . The prompt used in this process is shown in Table 1. Table 1: Prompt template used to integrate the set of frame descriptions into video description. Role Content system You are about to process a sequence of captions, each corresponding to a distinct frame sampled from a video. Your task is to convert these captions into a cohesive, well-structured paragraph. This paragraph should describe the video in a fluid, engaging manner and follows these guidelines: avoiding semantic repetition to the greatest extent, and giving a description in less than 200 characters. This video caption description \ud835\udc37caption then serves as the input for subsequent process, similar to the image captioning process. 3.2 LLM Understanding & Bridging Module LLM Understanding & Bridging Module plays a pivotal role in the transition from visual to auditory art forms. It is tasked with converting the image/video-descriptive caption text, generated by the Multi-modal Captioning Module, into prompts which are useful in musical generation. This conversion leverages the capabilities of Large Language Models (LLMs) to interpret the underlying mood, themes, and elements conveyed in the textual descriptions of images or videos. Why we undertake the step of LLM-Bridge Module? This is because we contend that although multi-modal caption description have already been presented by Multi-modal Captioning Module, the problems of heterogeneous representations among different modalities still remain unsolved. For example, image captioning model (such as BLIP) intend to generate textual representations which lean more towards describing visual attributes (e.g. appearance, shape, etc.) while for music generation models (e.g. MusicGen), input descriptions that describe musical styles, moods and genres can lead to a better generation of music. From this prospective, we propose LLM Understanding & Bridging Module to align the two types of descriptions mentioned above. To enhance the specificity and relevance of the generated music, the module also optimizes the prompts with additional constraints aimed at music generation. This includes specifying the music genre and incorporating several few-shot examples provided by MusicGen. The optimization process ensures that the final musicdescriptive prompt \ud835\udc37music not only reflects the mood and theme indicated by the input visuals but also adheres to the stylistic and genre-specific guidelines necessary for generating contextually \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu appropriate music pieces. 
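Both the frame-caption aggregation of Eq. (3) (with the Table 1 template) and the caption-to-music-prompt bridging formalized below in Eq. (4) (with the Table 2/3 templates) reduce to a single role-prompted LLM call. The sketch below assumes the openai Python client as the backend; the paper does not pin down which LLM or client is used, and the model name is a placeholder.

```python
# Hedged sketch of Eq. (3) (frame-caption aggregation) and Eq. (4) (LLM-Bridge).
# Client and model name are placeholder assumptions; prompts are condensed from
# Tables 1 and 2 of the paper.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

VIDEO_SYSTEM_PROMPT = (  # condensed from Table 1
    "You are about to process a sequence of captions, each corresponding to a distinct "
    "frame sampled from a video. Convert these captions into a cohesive, well-structured "
    "paragraph, avoiding semantic repetition and staying under 200 characters."
)

BRIDGE_SYSTEM_PROMPT = (  # condensed from Table 2
    "Convert in less than 200 characters this image caption to a very concise musical "
    "description with musical terms, so that it can be used as a prompt to generate music "
    "through an AI model, strictly in English. Speculate the mood and specify a music genre."
)

def run_llm(system_prompt: str, user_text: str, model: str = "gpt-3.5-turbo") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

def aggregate_frame_captions(frame_captions: list[str]) -> str:
    """Eq. (3): condense per-frame captions {D_i} into one video caption D_caption."""
    return run_llm(VIDEO_SYSTEM_PROMPT, "\n".join(frame_captions))

def bridge_caption(caption: str) -> str:
    """Eq. (4): map a visual caption D_caption to a music-descriptive prompt D_music.
    Few-shot examples such as those in Tables 2-3 can be appended to the messages."""
    return run_llm(BRIDGE_SYSTEM_PROMPT, caption)
```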
Two type of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52, for image and video input separately, are shown in Table 2 and 3 The process can be formulated as below. Given an visual descriptive caption \ud835\udc37caption, the module generates a corresponding music-descriptive prompt \ud835\udc37music : \ud835\udc37music = \ud835\udc53LLM(\ud835\udc37caption|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52) (4) where \ud835\udc53LLM denotes the LLM to transform the descriptive texts into a coherent musical prompt that encapsulates the intended mood, themes, and potentially, the genre of the music to be generated, with the help of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52. Table 2: Prompt template for image-to-music generation. Role Content system Convert in less than 200 characters this image caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. If user provides prompt, give priority to information provided by user. You need to speculate the mood of the given image caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user a city with a tower and a castle in the background, a detailed matte painting, art nouveau, epic cinematic painting, kingslanding assistant A grand orchestral arrangement with thunderous percussion, epic brass fanfares, and soaring strings, creating a cinematic atmosphere fit for a heroic battle. user a group of people sitting on a beach next to a body of water, tourist destination, hawaii assistant Pop dance track with catchy melodies, tropical percussion, and upbeat rhythms, perfect for the beach By invoking LLMs through API, the model is able to distinguish semantic nuances with high accuracy while ensuring its lightweight nature. This capability not only fosters streamlined processing but also facilitates seamless deployment of model services on servers with constrained computational resources. 3.3 Music Generation Module The Music Generation Module utilizes the pre-trained model MusicGenmedium [3] to generate music pieces based on the music-descriptive prompts provided by the LLM Understanding & Bridging Module. MusicGen is designed to produce high-quality music compositions while accommodating various musical styles and preferences. By integrating MusicGen into the Mozart\u2019s Touch framework, we ensure that the generated music aligns closely with the intended mood and theme extracted from the input visuals. Table 3: Prompt template for video-to-music generation. Role Content system Convert in less than 200 characters this video caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. You need to speculate the mood of the given video caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user Two men playing cellos in a room with a piano and a grand glass window backdrop. 
assistant Classical chamber music piece featuring cello duet, intricate piano accompaniment, the rich harmonies blend seamlessly in an elegant and refined setting, creating a symphonic masterpiece. user A man with guitar in hand, captivates a large audience on stage at a concert. The crowd watches in awe as the performer delivers a stellar musical performance. assistant Rock concert with dynamic guitar riffs, precise drumming, and powerful vocals, creating a captivating and electrifying atmosphere, uniting the audience in excitement and musical euphoria. Given a music-descriptive prompt \ud835\udc37music, the Music Generation Module generates a music piece \ud835\udc40: \ud835\udc40= \ud835\udc53MusicGen(\ud835\udc37music) (5) where \ud835\udc53MusicGen represents the MusicGen model to transform the music prompt into music composition audio. It encapsulates the complex process of interpreting the prompts and translating them into musical elements such as melody, harmony, rhythm, and texture, ensuring that the generated music pieces accurately reflect the intended mood and themes conveyed by the input visuals. 4 EXPERIMENTS In this section, we assess the image-to-music and video-to-music generation capacities of Mozart\u2019s Touch, with the discussion of two evaluation datasets MUImage and MUVideo, and the evaluation metrics utilized. The result of evaluation shows our current state-ofthe-art performance in the task of multi-modal music generation. 4.1 Evaluation Dataset To assess our framework\u2019s performance of image-to-music generation, we utilize the MUImage dataset proposed by M2UGen [10]. MUImage is assembled by obtaining music samples from the AudioSet [6] with corresponding images, which contains 9,966 musicimage pairs in total. We sampled 2,500 music-image pairs randomly from MUImage as our evaluation dataset. \fMozart\u2019s Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models MM\u201924, October 28 November 1, 2024, Melbourne, Australia. Table 4: Objective comparison of models for image-to-music generation. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 M2UGen 9.166 1.870 0.556 CoDi 6.674 1.821 0.525 Mozart\u2019s Touch 4.625 1.169 0.753 For video-to-music generation task, we utilize the MUVideo dataset, which is also proposed by M2UGen. We adopted a construction method similar to that of the image-to-music generation task, yielding a corpus of 2,500 music-video pairs for evaluating video-to-music generation task. 4.2 Evaluation metrics For both tasks, we utilize the Frechet Audio Distance (FAD)[12], Kullback-Leibler divergence (KL) and ImageBind Rank (IB Rank)[7] as the evaluation metrics. FAD is a reference-free evaluation metric for music enhancement algorithms. A low score of FAD indicates a high quality of generated music. KL scores measure the labels between the original and the generated music. When the KL score is low, the generated audios are expected to share similar distributions with the reference music. For these two metrics, we utilize the official implementation in PyTorch, where FAD score is supported by the VGGish model. IB Rank[7] is introduced by M2UGen, to assess the alignment between the image/video modality and the generated music. 
Firstly, we use the Image-Bind model to obtain embeddings for the images/videos and the generated music, then calculate their cosine similarity scores and give them a score based on their ranking. For IB Rank, High score represents a relatively high ranking among the baselines. 4.3 Baselines and Details For both tasks, we compare Mozart\u2019s Touch with two baselines: CoDi[26] and M2UGen[10]. We use open-source CoDi model and M2UGen checkpoint files to run inference. Our framework runs on one NVIDIA RTX 3090 24GB GPU, and two baselines run on one NVIDIA V100 32GB GPU to load the whole models. 4.4 Performance Comparison Table 4 presents the performance of our framework, Mozart\u2019s Touch, and two baseline models in image-to-music generation. The results highlight significant improvements in both the quality and relevance of the music generated by our framework. Moreover, Mozart\u2019s Touch surpasses prior state-of-the-art models despite its simpler architecture. Table 5 shows the results of video-to-music generation. For this task, we observed that Mozart\u2019s Touch still outperforms other models, indicating that our two-step captioning strategy is also highly effective. 4.5 Subjective Evaluation Although we achieve exceptional performance in the objective evaluation, we also believe that quantitative evaluation method Table 5: Objective comparison of models for video-to-music generation. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 M2UGen 9.047 1.878 0.552 CoDi 5.055 1.195 0.494 Mozart\u2019s Touch 4.339 1.048 0.787 Table 6: Subjective comparison of models for image-to-music generation. The best results are made bold. Model OVL\u2191 REL\u2191 CoDi 2.95 3.24 M2UGen 3.77 3.02 Mozart\u2019s Touch 3.74 3.76 Ground Truth\u2217 3.88 4.08 Table 7: Ablation study on image-to-music generation task. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 Mozart\u2019s Touch 4.625 1.170 0.757 w/o LUBM 3.741 1.121 0.743 has great limitations for music generation tasks. The metrics above can effectively measure the quality and relevance of the generated music, but fall short in the understanding of creativity and human feelings, as supported by previous research [18]. Following previous similar works [13, 18], the generated samples are rated based on i) overall quality (OVL); and ii) relevance to the input image (REL). Both OVL and REL metrics have a Likert scale [16] between one and five, where a larger number indicates better performance. In this case, We conduct the subjective evaluation involving 125 participants, taking image-to-music generation as example. Totally 75 questions are created for the subjective evaluation, which are randomly sampled from our evaluation dataset. Each question contains a video with the input image as the visual part and generated (or ground truth) music as the audio. 20 audios are sampled from ground truth, 20 from M2UGen, 20 from Mozart\u2019s Touch, and 15 from CoDi. Each questionnaire comprises ten randomly selected questions. Upon subsequent validation by our team, all 75 questions are covered by the total 125 questionnaires. The subjective evaluation result is presented in Table 6. 
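For reference, the objective scores in Tables 4-5 can be written down compactly: FAD is the Fréchet distance between Gaussians fitted to VGGish embeddings of the reference and generated audio sets, and the ImageBind-based ranking starts from cosine similarity between visual and audio embeddings. The functions below are simplified re-implementations for illustration, not the official evaluation code used in the paper.

```python
# Simplified sketches of the objective metrics; not the official implementations.
import numpy as np
from scipy import linalg

def frechet_audio_distance(ref_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """FAD between two (N, D) sets of VGGish embeddings (reference vs. generated)."""
    mu_r, mu_g = ref_emb.mean(axis=0), gen_emb.mean(axis=0)
    sigma_r = np.cov(ref_emb, rowvar=False)
    sigma_g = np.cov(gen_emb, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

def ib_alignment_score(visual_emb: np.ndarray, audio_emb: np.ndarray) -> float:
    """Cosine similarity between ImageBind embeddings of the input visual and the music,
    which is then converted into a ranking-based score across the compared models."""
    return float(
        visual_emb @ audio_emb / (np.linalg.norm(visual_emb) * np.linalg.norm(audio_emb))
    )
```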
While our method slightly underperforms in terms of the metrics for overall quality (OVL) when compared to M2UGen, the result shows that there is a notable enhancement in the metric of relevance (REL) to input image, which is consistent with our target to generate corresponding music that aligns the image well. 4.6 Ablation Studies To demonstrate the effectiveness of LLM bridging modality, we conducted a further ablation experiment, comparing the performance \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu of the original system with and without (w/o) the LLM Understanding & Bridging Module (LUBM) in the task of iamge-to-music generation. As indicated in the table 7, the framework without LUBM achieves higher scores in the FAD and KL metrics, the two metrics measure the similarity between ground truth and generated audios, rather than the similarity between different modalities. On the other side, the framework with LUBM performs better in IB Rank metric. This metric utilizes the ImageBind model to encode multi-modal information uniformly, thereby evaluating the similarity between input modality information and generated audio, aligning more closely with the objectives of evaluating multi-modal music generation. Therefore, we believe that there is no clear superiority or inferiority between the Mozart\u2019s Touch framework with and without LUBM. This once again emphasizes that quantitative evaluation may not always be the best approach for assessing the multi-modal music generation tasks. 4.7 Case Study In this part, we conduct a case study to analyze how our LLM Understanding & Bridging Module (LUBM) mitigates the problem of heterogeneous representations among different modalities. By showcasing some representative comparative examples in Figure 2, We demonstrate that the absence of the LUBM does indeed have adverse effects on the generation results. The first example illustrates a portrait of Bach. Some keywords in the original image description disturb the generation of corresponding music, as they focus on the attributes of image instead of that of music. The second example illustrates an anime girl from a visual novel game Atri: My Dear Moments. This example shows that insufficiency of music attributions may also mislead the generation of music in a quite different way. 5", + "additional_graph_info": { + "graph": [ + [ + "Tianze Xu", + "Xuesong Chen" + ], + [ + "Tianze Xu", + "Shuchang Liu" + ], + [ + "Xuesong Chen", + "Shaoshuai Shi" + ], + [ + "Xuesong Chen", + "Benjin Zhu" + ], + [ + "Shuchang Liu", + "Qingpeng Cai" + ], + [ + "Shuchang Liu", + "Bowen Sun" + ], + [ + "Shuchang Liu", + "Dong Zheng" + ], + [ + "Shuchang Liu", + "Kun Gai" + ] + ], + "node_feat": { + "Tianze Xu": [ + { + "url": "http://arxiv.org/abs/2405.02801v2", + "title": "Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models", + "abstract": "In recent years, AI-Generated Content (AIGC) has witnessed rapid\nadvancements, facilitating the generation of music, images, and other forms of\nartistic expression across various industries. However, researches on general\nmulti-modal music generation model remain scarce. To fill this gap, we propose\na multi-modal music generation framework Mozart's Touch. 
It could generate\naligned music with the cross-modality inputs, such as images, videos and text.\nMozart's Touch is composed of three main components: Multi-modal Captioning\nModule, Large Language Model (LLM) Understanding & Bridging Module, and Music\nGeneration Module. Unlike traditional approaches, Mozart's Touch requires no\ntraining or fine-tuning pre-trained models, offering efficiency and\ntransparency through clear, interpretable prompts. We also introduce\n\"LLM-Bridge\" method to resolve the heterogeneous representation problems\nbetween descriptive texts of different modalities. We conduct a series of\nobjective and subjective evaluations on the proposed model, and results\nindicate that our model surpasses the performance of current state-of-the-art\nmodels. Our codes and examples is availble at:\nhttps://github.com/WangTooNaive/MozartsTouch", + "authors": "Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, Shuchang Liu", + "published": "2024-05-05", + "updated": "2024-05-07", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "eess.AS" + ], + "main_content": "INTRODUCTION In recent years, the intersection of artificial intelligence (AI) and creative arts has witnessed remarkable advancements [2], leading to the emergence of novel techniques and systems capable of producing music[1, 3, 24], images[21\u201323], and other forms of artistic expression[19] in a wide range of industries. As the remarkable advancements in Artificial Intelligence for Generative Composition (AIGC), there is a growing belief that it heralds a new era in AI and will have a substantial influence across the globe. arXiv:2405.02801v2 [cs.SD] 7 May 2024 \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu However, current music generation models, when tasked with image-to-music synthesis, encounter notable limitations. These models often struggle to accurately capture the ambiance and underlying emotions conveyed by the visual input. While they may produce music that aligns with the visual elements, the nuanced details and subtle cues present in the image are frequently lost in translation. This shortfall hampers the ability of existing systems to truly evoke the intended atmosphere and sentiment of the imagery, thereby limiting their effectiveness in multi-modal creative endeavors. It is evident that there exists a gap in the current stateof-the-art models concerning their proficiency in leveraging visual cues to inform the musical composition process. Natural language serves as a powerful intermediary, demonstrating significant potential in bridging across different sensory modalities. Designed to interact directly with human, Large language models (LLMs) are typically comprised of a vast number of parameters and trained on extensive datasets, granting them powerful comprehension and reasoning capabilities.[8] Harnessing these advantages, researchers have employed LLMs to achieve semantic understanding across multiple modalities. Despite the significant strides made in AI-driven creativity, a compelling question arises: How can we harness the formidable capabilities of LLMs to empower multi-modal tasks such as imageto-music synthesis? This inquiry serves as the focal point of our investigation, wherein we seek to elucidate the seamless integration of LLMs into the process of generating music inspired by visual contents. 
In this paper, we present Mozart\u2019s Touch, a multi-modal music generation framework that harnesses the power of Large Language Models (LLMs) and pre-trained models to generate music based on visual information. An overview of the architecture is depicted in Figure 1. Mozart\u2019s Touch offers multiple advantages for image-to-music generation: By leveraging the deep understanding and generalizable knowledge of Large Language Models (LLMs) to interpret visual elements accurately, it differs from previous multi-modal end-to-end music generation methods (e.g. CoDi [26] and M2UGen [10]). Unlike traditional approaches, it requires no training of music generation models or fine-tuning LLMs, conserving computational resources and ensuring efficiency. Moreover, Mozart\u2019s Touch utilizes clear, interpretable prompts for greater transparency during the whole process, which improves overall framework explainability. Our contributions are summarized as follows: \u2022 We introduce the Mozart\u2019s Touch framework, an innovative integration of Large Language Models (LLMs) for multimodal music generation. Departing from traditional end-toend paradigms, this framework harnesses the power of LLMs to synthesize music aligned with visual inputs. \u2022 We offer a new perspective on leveraging LLMs for multimodal generation tasks. Our framework showcases a novel application of LLMs in text-to-music generation , demonstrating the potential of LLMs in understanding and bridging different sensory modalities and empowering creative processes. \u2022 We assess Mozart\u2019s Touch on the imageand video-to-audio dataset MUImage and MUVideo [11] , utilizing both objective and subjective metrics. Comparative evaluation results show that our approach outperforms existing state-of-theart methods. This experiment demonstrates the effectiveness of our framework and its potential as a new baseline benchmark for future works in the domain. 2 RELATED WORK 2.1 Multi-modal Large Language Model (MLLM) Due to the prevalence of researches in Large Language Models(LLM), the combination of LLM and models in other modalities has also been a rising research hot spot, leading to the new field of MLLM. According to this survey [27] , the key applications of MLLM includes Multi-modal Instruction Tuning (M-IT), Multi-modal InContext Learning (M-ICL), Multi-modal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning (LAVR). For Mozart\u2019s Touch, we employ Modality Bridging technology, utilizing natural language as an intermediary medium and leveraging LLM to bridge the modality gap. VideoChat-Text [15], for example, is an end-to-end chatcentric video understanding system, which uses pre-trained vision models to extract visual information such as actions and enriches the descriptions using a speech recognition model, which are all represented as textual information as a bridge. 2.2 Image Captioning Image captioning, which is the process of generating descriptive text (captions) that accurately and relevantly capture the content of an image, is a typical multi-modal task requiring both abilities of visual understanding and natural language generation. [25] The field of image captioning has seen significant advancements, such as CLIP [20] and BLIP [14] model. CLIP is developed by OpenAI that has revolutionized the way computers understand images and text, which efficiently learns visual concepts from natural language supervision. 
The main idea of CLIP is to align texts and images in the feature domain without predetermined labels for specific object categories by training on a large corpus of image-text pairs collected from the Internet. BLIP is another multi-modal framework which transfers flexibly to both vision-language understanding and generation tasks. To pre-train a unified model with both understanding and generation capabilities, they propose multi-modal mixture of encoder-decoder (MED) and achieve great performance across multiple tasks, such as image captioning. 2.3 Multi-Modal Music Generation The advent of Transformer and diffusion models has promoted the development of music generation models. Many impressive works emerged in recent years, such as MusicLM [1], MusicGen [3] , Noise2Music [9] and AudioLDM 2 [17] . MusicLM and MusicGen both consist of autoregressive decoder to generate music. MusicLM can generate high-quality music based on descriptive text such as emotions, styles and instruments. Noise2Music and AudioLDM 2 use diffusion models to generate music based on text that transcends fine-grained semantics and can reach deeper emotions. However, these works above all take text or audio as input to generate music, ignoring other modality information, such as image \fMozart\u2019s Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models MM\u201924, October 28 November 1, 2024, Melbourne, Australia. and video. Notable exceptions include the CoDi [26] and M2UGen [11], which allow inputs with more modalities. CoDi(Composable Diffusion) can generate output modalities in parallel from any combination of input modalities. It first use individual modality-specific diffusion models for images, videos, audio, and texts respectively to build a shared multimodal space, and then uses Latent Alignment [4] to achieve joint multi-modal generation. M2UGen is an LLMbased multi-modal music understanding and generation framework. It consists of multi-modal feature encoders, multi-model understanding adapters, bridging LLM, and generation modules to process inputs from multiple modalities such as text, images, and videos, and generate corresponding music. 3 MOZART\u2019S TOUCH Mozart\u2019s Touch is a collaborative multi-modal AIGC framework structured into a sequential integration of three core modules: a Multi-modal Captioning Module, a LLM Understanding & Bridging Module based on LLMs and Music Generation Module. The overall architecture is illustrated in Figure 1. 3.1 Multi-modal Captioning Module The Multi-modal Captioning Module is responsible to encode and understand users\u2019 input, providing textual descriptions for multimodality. This module employs state-of-the-art techniques ViT [5] and BLIP [14] model to analyze images and videos and generate descriptive captions. When users input images and videos without prompting, Our framework can also performs well to generate music that aptly complements the theme. However, in consideration of customization, we also permit users to input textual prompts to guide the music generation process. 3.1.1 Image Captioning Process. For image inputs, we leverage the capabilities of Vision Transformer (ViT) and BLIP-base modules, implemented by the clipinterrogator, to analyze and generate descriptions of the images. This process involves interpreting the visual content of an image \ud835\udc3c and converting it into a image caption description \ud835\udc37caption. 
Given an input image \ud835\udc3c, the framework generates a caption description \ud835\udc37caption : \ud835\udc37caption = \ud835\udc53BLIP(\ud835\udc3c) (1) where \ud835\udc53BLIP denotes the BLIP model to convert images into descriptive texts. The generated image caption description \ud835\udc37caption serves as input for the subsequent process. 3.1.2 Video Process. For video inputs, we employ a two-step process to handle and interpret the content. Initially, Video-BLIP2-Preprocessor tool is used to sample frames from the video \ud835\udc49, generating a set of frames {\ud835\udc39\ud835\udc56}. Each frame \ud835\udc39\ud835\udc56is then processed to generate a textual description \ud835\udc37\ud835\udc56using the BLIP model, similar to the image process. This process can be formulated as: {\ud835\udc37\ud835\udc56} = {\ud835\udc53BLIP(\ud835\udc39\ud835\udc56)} (2) where \ud835\udc53BLIP denotes the BLIP model to convert frames into descriptive texts. Subsequently, to synthesize a video caption description \ud835\udc37caption of the entire video, we aggregate the frame descriptions {\ud835\udc37\ud835\udc56} and process them through Large Language Models (LLMs) to interpret and condense the video\u2019s visual and thematic content into a coherent textual representation. This process can be represented as: \ud835\udc37caption = \ud835\udc53LLM({\ud835\udc37\ud835\udc56}|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc63\ud835\udc56\ud835\udc51\ud835\udc52\ud835\udc5c) (3) where \ud835\udc53LLM denotes the LLM to integrate and interpret the set of frame descriptions into a single video description \ud835\udc37caption . The prompt used in this process is shown in Table 1. Table 1: Prompt template used to integrate the set of frame descriptions into video description. Role Content system You are about to process a sequence of captions, each corresponding to a distinct frame sampled from a video. Your task is to convert these captions into a cohesive, well-structured paragraph. This paragraph should describe the video in a fluid, engaging manner and follows these guidelines: avoiding semantic repetition to the greatest extent, and giving a description in less than 200 characters. This video caption description \ud835\udc37caption then serves as the input for subsequent process, similar to the image captioning process. 3.2 LLM Understanding & Bridging Module LLM Understanding & Bridging Module plays a pivotal role in the transition from visual to auditory art forms. It is tasked with converting the image/video-descriptive caption text, generated by the Multi-modal Captioning Module, into prompts which are useful in musical generation. This conversion leverages the capabilities of Large Language Models (LLMs) to interpret the underlying mood, themes, and elements conveyed in the textual descriptions of images or videos. Why we undertake the step of LLM-Bridge Module? This is because we contend that although multi-modal caption description have already been presented by Multi-modal Captioning Module, the problems of heterogeneous representations among different modalities still remain unsolved. For example, image captioning model (such as BLIP) intend to generate textual representations which lean more towards describing visual attributes (e.g. appearance, shape, etc.) while for music generation models (e.g. MusicGen), input descriptions that describe musical styles, moods and genres can lead to a better generation of music. 
From this prospective, we propose LLM Understanding & Bridging Module to align the two types of descriptions mentioned above. To enhance the specificity and relevance of the generated music, the module also optimizes the prompts with additional constraints aimed at music generation. This includes specifying the music genre and incorporating several few-shot examples provided by MusicGen. The optimization process ensures that the final musicdescriptive prompt \ud835\udc37music not only reflects the mood and theme indicated by the input visuals but also adheres to the stylistic and genre-specific guidelines necessary for generating contextually \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu appropriate music pieces. Two type of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52, for image and video input separately, are shown in Table 2 and 3 The process can be formulated as below. Given an visual descriptive caption \ud835\udc37caption, the module generates a corresponding music-descriptive prompt \ud835\udc37music : \ud835\udc37music = \ud835\udc53LLM(\ud835\udc37caption|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52) (4) where \ud835\udc53LLM denotes the LLM to transform the descriptive texts into a coherent musical prompt that encapsulates the intended mood, themes, and potentially, the genre of the music to be generated, with the help of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52. Table 2: Prompt template for image-to-music generation. Role Content system Convert in less than 200 characters this image caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. If user provides prompt, give priority to information provided by user. You need to speculate the mood of the given image caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user a city with a tower and a castle in the background, a detailed matte painting, art nouveau, epic cinematic painting, kingslanding assistant A grand orchestral arrangement with thunderous percussion, epic brass fanfares, and soaring strings, creating a cinematic atmosphere fit for a heroic battle. user a group of people sitting on a beach next to a body of water, tourist destination, hawaii assistant Pop dance track with catchy melodies, tropical percussion, and upbeat rhythms, perfect for the beach By invoking LLMs through API, the model is able to distinguish semantic nuances with high accuracy while ensuring its lightweight nature. This capability not only fosters streamlined processing but also facilitates seamless deployment of model services on servers with constrained computational resources. 3.3 Music Generation Module The Music Generation Module utilizes the pre-trained model MusicGenmedium [3] to generate music pieces based on the music-descriptive prompts provided by the LLM Understanding & Bridging Module. MusicGen is designed to produce high-quality music compositions while accommodating various musical styles and preferences. 
By integrating MusicGen into the Mozart\u2019s Touch framework, we ensure that the generated music aligns closely with the intended mood and theme extracted from the input visuals. Table 3: Prompt template for video-to-music generation. Role Content system Convert in less than 200 characters this video caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. You need to speculate the mood of the given video caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user Two men playing cellos in a room with a piano and a grand glass window backdrop. assistant Classical chamber music piece featuring cello duet, intricate piano accompaniment, the rich harmonies blend seamlessly in an elegant and refined setting, creating a symphonic masterpiece. user A man with guitar in hand, captivates a large audience on stage at a concert. The crowd watches in awe as the performer delivers a stellar musical performance. assistant Rock concert with dynamic guitar riffs, precise drumming, and powerful vocals, creating a captivating and electrifying atmosphere, uniting the audience in excitement and musical euphoria. Given a music-descriptive prompt \ud835\udc37music, the Music Generation Module generates a music piece \ud835\udc40: \ud835\udc40= \ud835\udc53MusicGen(\ud835\udc37music) (5) where \ud835\udc53MusicGen represents the MusicGen model to transform the music prompt into music composition audio. It encapsulates the complex process of interpreting the prompts and translating them into musical elements such as melody, harmony, rhythm, and texture, ensuring that the generated music pieces accurately reflect the intended mood and themes conveyed by the input visuals. 4 EXPERIMENTS In this section, we assess the image-to-music and video-to-music generation capacities of Mozart\u2019s Touch, with the discussion of two evaluation datasets MUImage and MUVideo, and the evaluation metrics utilized. The result of evaluation shows our current state-ofthe-art performance in the task of multi-modal music generation. 4.1 Evaluation Dataset To assess our framework\u2019s performance of image-to-music generation, we utilize the MUImage dataset proposed by M2UGen [10]. MUImage is assembled by obtaining music samples from the AudioSet [6] with corresponding images, which contains 9,966 musicimage pairs in total. We sampled 2,500 music-image pairs randomly from MUImage as our evaluation dataset. \fMozart\u2019s Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models MM\u201924, October 28 November 1, 2024, Melbourne, Australia. Table 4: Objective comparison of models for image-to-music generation. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 M2UGen 9.166 1.870 0.556 CoDi 6.674 1.821 0.525 Mozart\u2019s Touch 4.625 1.169 0.753 For video-to-music generation task, we utilize the MUVideo dataset, which is also proposed by M2UGen. We adopted a construction method similar to that of the image-to-music generation task, yielding a corpus of 2,500 music-video pairs for evaluating video-to-music generation task. 
4.2 Evaluation metrics For both tasks, we utilize the Frechet Audio Distance (FAD)[12], Kullback-Leibler divergence (KL) and ImageBind Rank (IB Rank)[7] as the evaluation metrics. FAD is a reference-free evaluation metric for music enhancement algorithms. A low score of FAD indicates a high quality of generated music. KL scores measure the labels between the original and the generated music. When the KL score is low, the generated audios are expected to share similar distributions with the reference music. For these two metrics, we utilize the official implementation in PyTorch, where FAD score is supported by the VGGish model. IB Rank[7] is introduced by M2UGen, to assess the alignment between the image/video modality and the generated music. Firstly, we use the Image-Bind model to obtain embeddings for the images/videos and the generated music, then calculate their cosine similarity scores and give them a score based on their ranking. For IB Rank, High score represents a relatively high ranking among the baselines. 4.3 Baselines and Details For both tasks, we compare Mozart\u2019s Touch with two baselines: CoDi[26] and M2UGen[10]. We use open-source CoDi model and M2UGen checkpoint files to run inference. Our framework runs on one NVIDIA RTX 3090 24GB GPU, and two baselines run on one NVIDIA V100 32GB GPU to load the whole models. 4.4 Performance Comparison Table 4 presents the performance of our framework, Mozart\u2019s Touch, and two baseline models in image-to-music generation. The results highlight significant improvements in both the quality and relevance of the music generated by our framework. Moreover, Mozart\u2019s Touch surpasses prior state-of-the-art models despite its simpler architecture. Table 5 shows the results of video-to-music generation. For this task, we observed that Mozart\u2019s Touch still outperforms other models, indicating that our two-step captioning strategy is also highly effective. 4.5 Subjective Evaluation Although we achieve exceptional performance in the objective evaluation, we also believe that quantitative evaluation method Table 5: Objective comparison of models for video-to-music generation. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 M2UGen 9.047 1.878 0.552 CoDi 5.055 1.195 0.494 Mozart\u2019s Touch 4.339 1.048 0.787 Table 6: Subjective comparison of models for image-to-music generation. The best results are made bold. Model OVL\u2191 REL\u2191 CoDi 2.95 3.24 M2UGen 3.77 3.02 Mozart\u2019s Touch 3.74 3.76 Ground Truth\u2217 3.88 4.08 Table 7: Ablation study on image-to-music generation task. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 Mozart\u2019s Touch 4.625 1.170 0.757 w/o LUBM 3.741 1.121 0.743 has great limitations for music generation tasks. The metrics above can effectively measure the quality and relevance of the generated music, but fall short in the understanding of creativity and human feelings, as supported by previous research [18]. Following previous similar works [13, 18], the generated samples are rated based on i) overall quality (OVL); and ii) relevance to the input image (REL). Both OVL and REL metrics have a Likert scale [16] between one and five, where a larger number indicates better performance. In this case, We conduct the subjective evaluation involving 125 participants, taking image-to-music generation as example. 
Totally 75 questions are created for the subjective evaluation, which are randomly sampled from our evaluation dataset. Each question contains a video with the input image as the visual part and generated (or ground truth) music as the audio. 20 audios are sampled from ground truth, 20 from M2UGen, 20 from Mozart\u2019s Touch, and 15 from CoDi. Each questionnaire comprises ten randomly selected questions. Upon subsequent validation by our team, all 75 questions are covered by the total 125 questionnaires. The subjective evaluation result is presented in Table 6. While our method slightly underperforms in terms of the metrics for overall quality (OVL) when compared to M2UGen, the result shows that there is a notable enhancement in the metric of relevance (REL) to input image, which is consistent with our target to generate corresponding music that aligns the image well. 4.6 Ablation Studies To demonstrate the effectiveness of LLM bridging modality, we conducted a further ablation experiment, comparing the performance \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu of the original system with and without (w/o) the LLM Understanding & Bridging Module (LUBM) in the task of iamge-to-music generation. As indicated in the table 7, the framework without LUBM achieves higher scores in the FAD and KL metrics, the two metrics measure the similarity between ground truth and generated audios, rather than the similarity between different modalities. On the other side, the framework with LUBM performs better in IB Rank metric. This metric utilizes the ImageBind model to encode multi-modal information uniformly, thereby evaluating the similarity between input modality information and generated audio, aligning more closely with the objectives of evaluating multi-modal music generation. Therefore, we believe that there is no clear superiority or inferiority between the Mozart\u2019s Touch framework with and without LUBM. This once again emphasizes that quantitative evaluation may not always be the best approach for assessing the multi-modal music generation tasks. 4.7 Case Study In this part, we conduct a case study to analyze how our LLM Understanding & Bridging Module (LUBM) mitigates the problem of heterogeneous representations among different modalities. By showcasing some representative comparative examples in Figure 2, We demonstrate that the absence of the LUBM does indeed have adverse effects on the generation results. The first example illustrates a portrait of Bach. Some keywords in the original image description disturb the generation of corresponding music, as they focus on the attributes of image instead of that of music. The second example illustrates an anime girl from a visual novel game Atri: My Dear Moments. This example shows that insufficiency of music attributions may also mislead the generation of music in a quite different way. 5" + } + ], + "Xuesong Chen": [ + { + "url": "http://arxiv.org/abs/2306.05888v2", + "title": "TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses", + "abstract": "3D multi-object tracking (MOT) is vital for many applications including\nautonomous driving vehicles and service robots. With the commonly used\ntracking-by-detection paradigm, 3D MOT has made important progress in recent\nyears. 
However, these methods only use the detection boxes of the current frame\nto obtain trajectory-box association results, which makes it impossible for the\ntracker to recover objects missed by the detector. In this paper, we present\nTrajectoryFormer, a novel point-cloud-based 3D MOT framework. To recover the\nmissed object by detector, we generates multiple trajectory hypotheses with\nhybrid candidate boxes, including temporally predicted boxes and current-frame\ndetection boxes, for trajectory-box association. The predicted boxes can\npropagate object's history trajectory information to the current frame and thus\nthe network can tolerate short-term miss detection of the tracked objects. We\ncombine long-term object motion feature and short-term object appearance\nfeature to create per-hypothesis feature embedding, which reduces the\ncomputational overhead for spatial-temporal encoding. Additionally, we\nintroduce a Global-Local Interaction Module to conduct information interaction\namong all hypotheses and models their spatial relations, leading to accurate\nestimation of hypotheses. Our TrajectoryFormer achieves state-of-the-art\nperformance on the Waymo 3D MOT benchmarks. Code is available at\nhttps://github.com/poodarchu/EFG .", + "authors": "Xuesong Chen, Shaoshuai Shi, Chao Zhang, Benjin Zhu, Qiang Wang, Ka Chun Cheung, Simon See, Hongsheng Li", + "published": "2023-06-09", + "updated": "2023-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction 3D multi-object tracking (MOT) is an essential and critical task in the fields of autonomous driving and robotics. It plays a vital role in enabling systems to accurately perceive their surrounding dynamic environment and make appropriate responses. Among the various sensors used in au*Corresponding authors tonomous driving, LiDAR-based systems have emerged as a popular choice because they can capture accurate and detailed 3D information of the environment, enabling more precise object detection and tracking. Therefore, 3D MOT based on LiDAR point clouds shows great potential to improve the safety and efficiency of autonomous vehicles. Tracking-by-detection is a popular paradigm that has demonstrated excellent performance on the 3D MOT task [34, 13, 27, 1, 15]. Previous methods, such as CenterPoint [34] and SimpleTrack [13], rely on heuristic rules to associate objects across frames. These methods use manually designed affinity metrics such as distance, intersection over union (IoU), and GIoU to match a history trajectory with a current detection box based on their positional relationship. However, these heuristic rules are not robust as they cannot be trained and different categories may prefer different association metrics [13]. Moreover, these methods only consider pair-wise position relationships between boxes, without considering comprehensive global context information, which often results in low-quality trajectories. Other methods have attempted to enhance 3D MOT by modeling the spatial context among different boxes. PolarMOT [8] adopts Graph Neural Network (GNN) to establish the spatial-temporal relationship between trajectories and different boxes, followed by edge classification to conduct association. Similarly, InterTrack [30] employs attention mechanisms to interact between trajectories and all boxes, generating the affinity matrix for association. These methods generally leverage global context information, resulting in improved tracking performance compared to heuristic methods. 
However, they still only rely on detection boxes for associating with existing trajectories, which limits the recall rate when the detector misses objects. Thus, incorporating additional box candidates for association has great potential to improve the recall and performance of 3D MOT. To overcome the limitations of existing approaches, we present TrajectoryFormer, a point-cloud-based 3D MOT 1 arXiv:2306.05888v2 [cs.CV] 18 Aug 2023 \fframework. Our framework generates multiple trajectory hypotheses with hybrid candidate boxes, enabling robust tracking of challenging objects. It employs a per-hypothesis feature encoding module and a cross-hypothesis feature interaction module to learn representative features for selecting the best hypotheses. The per-hypothesis feature encoding module encodes both the appearance and motion information of each hypothesis, whereas the feature interaction module captures the contextual relationship among all hypotheses. By leveraging multiple hypotheses and contextual information, TrajectoryFormer can enhance tracking performance in challenging scenarios with limited overhead. Specifically, our framework first generates multiple trajectory hypotheses for each existing trajectory using two types of association candidate boxes: temporally predicted boxes and current frame detection boxes. Unlike existing approaches that only consider detection boxes at the current frame, we design a small motion prediction network that generate predicted boxes for several future frames of each history trajectory. This allows us to generate multiple trajectory hypotheses for an object by linking its history trajectory with both temporally predicted boxes (generated by its motion prediction at different past time steps) and current frame detection boxes (matched by nearest distance of box centers). Such a strategy enables the network to recover objects missed by the detector at the current moment and provides additional association options that can help correct trajectory errors caused by low-quality detection boxes. After generating multiple trajectory hypotheses, TrajectoryFormer combines long-term object motion feature and short-term object appearance feature to create perhypothesis feature embedding. More specifically, we adopt a PointNet-like [19] neural network to encode the motion feature for each trajectory hypothesis via encoding its longterm sequence boxes, and a small transformer-based neural network on the cropped object points to encode its appearance feature. Note that we only encode the object appearance feature based on short-term point clouds, since it not only requires very limited computational overhead but also avoids handling long-term object point variations. The concatenation of two types of features that capture complementary information for each trajectory hypothesis creates the per-hypothesis feature embedding. This embedding enables the evaluation of each hypothesis quality and facilitates the modeling of relationships among multiple hypotheses. To jointly consider the trajectory association across all objects, we introduce a global-local Interaction module that models spatial relations of all trajectory hypotheses. It uses a transformer-based neural network to alternately conduct scene-level (e.g., all trajectory hypotheses within the scene) and ID-level (e.g., multiple trajectory hypotheses of each object) feature interactions on the hypotheses, leading to more accurate estimation of hypotheses. 
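To make this hypothesis-generation step concrete before the formal description in Sec. 3.1, the following minimal NumPy sketch assembles, for each tracked object, the hybrid candidate boxes (its temporally predicted boxes plus its nearest current-frame detections). The 7-dim box layout, the num_det and max_dist parameters, and the assumption that the motion network's predictions for the current frame are already available are illustrative choices, not the exact implementation.

import numpy as np

def candidate_boxes(history_trajs, predicted_boxes, detections, num_det=1, max_dist=2.0):
    # history_trajs:   list of (Th, 7) arrays, one box sequence per tracked object
    # predicted_boxes: list of (Tf, 7) arrays holding the boxes that the motion network,
    #                  run at the last Tf past steps, predicted for the current frame
    # detections:      (K, 7) current-frame detector boxes, box = (x, y, z, l, w, h, yaw)
    hypotheses = []
    for traj, preds in zip(history_trajs, predicted_boxes):
        cands = list(preds)                                   # temporally predicted boxes
        if len(detections) > 0:
            dist = np.linalg.norm(detections[:, :2] - traj[-1, :2], axis=1)
            for k in np.argsort(dist)[:num_det]:              # nearest current-frame detections
                if dist[k] < max_dist:                        # class-dependent gate in practice (Sec. 4.1)
                    cands.append(detections[k])
        hypotheses.append(np.stack(cands))                    # candidate boxes for this object
    return hypotheses

Each candidate box is then linked to the object's history trajectory to form one trajectory hypothesis, which is scored by the modules described next.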
During inference, TrajectoryFormer selects the hypothesis with the highest confidence as the best association result for each object. The selected hypothesis is then refined using its extracted features to generate a more accurate trajectory. In summary, our contributions are three-fold: 1) We propose TrajectoryFormer, a novel transformer-based 3D MOT tracking framework, which generates multiple trajectory hypotheses that incorporate both predicted and detected boxes to better track challenging objects. 2) To better encode each hypothesis, we incorporate both long-term trajectory motion features and short-term object appearance features. Additionally, the framework employs a globallocal interaction module to model relationships among all hypotheses to adaptively determine the optimal trajectorybox association. 3) We demonstrate the effectiveness of our proposed approach through extensive experiments, and our framework achieves state-of-the-art results for 3D MOT on the challenging Waymo 3D tracking benchmark. 2. Related Work 2.1. 3D Object Detection on Point Clouds Current methods of 3D detection on point cloud can be categorized into three groups: point-based, voxel-based, and point-voxel-based. The point-based methods [21, 16, 33] directly extract information from the original point clouds. These methods leverage operations like set abstraction [17] to capture spatial position features of the irregular 3D point clouds. In contrast, voxel-based approaches convert the irregular points into regular 3D voxels. Therefore, voxel-based works [31, 37, 34] can utilize 3D CNN to directly extract features of each voxel in 3D space. Some high-efficiency methods [32, 9, 28, 5] further reduce the height dimension of voxels, named pillar, and adopt a bird-eye view (BEV) representation to encode 3D features efficiently. Additionally, point-voxelbased methods aim to enhance the performance by leveraging the strengths of both point and voxel representations. By exploiting the two representations, some recent pointvoxle-based works [18, 10, 20] have achieved state-of-theart detection results. On the other hand, some methods aim to exploit the benefit of multi-frame point cloud for better detection performance. Early methods employ a feature-based strategy to aggregate temporal features with 2D CNN [11] or transformer-based architectures [26, 35]. Recent works [6, 25, 34] have shown that a simple concatenation strategy of multi-frame points can significantly outperform the singleframe setting. Furthermore, MPPNet [2] proposes to employ proxy point as a medium to handle information aggregation of long point clouds sequences. 2 \fShort-term Appearance Encoding MLP Self/Cross Attn x 3 Each Traj. Hypo. Long-term Motion Encoding PointNet Each Traj. Hypo. Fuse Cropped Points N Trajs Traj. Hypo. Length Local Hypos. Interaction with Self-Attn Global Hypos. Interaction with Self-Attn Hypos. Classification \u2714 \u2714 x 3 (Highest score) Traj. Hypo. Encodings Point Cloud at Time t Traj. 1 Traj. N Traj. 1 Traj. N Long-term Boxes Figure 1. The overall framework of the proposed TrajectoryFormer. Given N history trajectories and the input point cloud, we first generate multiple trajectory hypotheses for each history trajectory by incorporating both W detected boxes and Tf temporally predicted boxes. Then a long-short hypothesis feature encoding module is used to encode the appearance and motion feature of each hypothesis. 
These hypothesis features are then further encoded via a global-local hypothesis interaction module to propagate information among these hypotheses. Finally, these features are utilized to predict the confidence of each hypothesis for selecting the best trajectory hypothesis. 2.2. 3D Multi-Object Tracking Benefiting from recent advancements in 3D object detection, the state-of-the-art 3D MOT algorithms have adopted the tracking-by-detection paradigm. A notable example is CenterPoint [34], which proposes a simple approach that utilize objects\u2019 center distance as the association metric to link detection boxes across sequential frames. However, CenterPoint employ a constant velocity assumption to compensate for the motion displacement between different frames. This approach may exhibit less resilient to missing detections or curved motion trajectories. Similar to the conception of optical flow [7, 22, 23], several 3D MOT algorithms utilize Kalman Filters to estimate the location of tracked objects. AB3DMOT [29] serves as a foundational approach in this regard, where 3D Intersection-over-Union (IoU) is employed as the association metric for object tracking. Furthermore, Chiu et al. [3] propose an alternative approach by introducing the use of Mahalanobis distance as a replacement for 3D IoU to capture the uncertainty of the trajectories. Meanwhile, SimpleTrack [13] conducts an analysis of the different components of a tracking-by-detection pipeline and and provides suggestions for enhancing each component. ImmortalTracker [27] propose a simple tracking system that maintain tracklets for objects gone dark to solve the ID switch problem in 3D MOT. SpOT [24] introduces a approach by developing the representation of tracked objects as sequences of time-stamped points and bounding boxes over a long temporal history. At each timestamp, SpOT improves the location estimates of tracked objects by utilizing encoded features from the maintained sequence of objects. Some works exploit trajectory prediction to deal with occlusion problems in tracking or detection. FutureDet [14] propose an end-to-end approach for detection and motion forecasting based on LiDAR, which is capable of forecasting multiple-future trajectories via future detection. Quo-Vadis [4] utilizes trajectory prediction to solve longterm occlusions in single-camera tracking. Similarly, PFTrack [12] maintains the object positions and enables reassociation by integrating motion predictions to handle long-term occlusions multi-camera 3D MOT. 3. TrajectoryFormer Existing state-of-the-art 3D MOT approaches [34, 13, 27, 24] generally adopt the tracking-by-detection paradigm, which utilizes the detected boxes at the current frame for trajectory-box association. Although these approaches have achieved excellent tracking performance, they may encounter difficulties when tracking challenging objects, such as occluded or distant objects, due to mis-detections or inaccurate localization caused by sparse object points. To address these limitations, we present an efficient framework, TrajectoryFormer, for 3D MOT in point cloud scenarios. Specifically, as shown in Fig. 1, TrajectoryFormer generates a novel set of multiple trajectory hypotheses, which incorporate both current frame detection boxes and historytrajectory prediction boxes to better cover the potential moving patterns of tracked objects. In Sec. 3.1, we first introduce the generation of multiple trajectory hypotheses. Next, in Sec. 
3.2, we present the feature encoding of each trajectory hypothesis. In Sec. 3.3, we propose the global-local feature interaction module to propagate information among all the trajectory hypotheses and generate the final trajectories. Finally, we introduce the losses of TrajectoryFormer in Sec. 3.4. Figure 2. The illustration of the generation of multiple trajectory hypotheses at frame t for a single history trajectory (history trajectory evolution over time, with hypotheses formed at the current time t). 3.1. Generation of Multiple Trajectory Hypotheses Given N existing history trajectories up to time $t-1$, $\{h_i^t\}_{i=1}^{N}$, state-of-the-art 3D MOT approaches [13, 27] generally associate each history trajectory with its nearest detection boxes at $t$ for extending the trajectories up to the current time $t$. However, this association strategy may fail in tracking some challenging objects if the detector misses the object at time $t$. To address this limitation, TrajectoryFormer is designed to generate multiple trajectory hypotheses for each tracked object $h_i^t$ to better cover the potential moving patterns of each object. Unlike existing approaches that solely consider current-frame detection boxes for trajectory-box association, each history trajectory $h_i^t$ is paired with hybrid candidate boxes at time $t$ to generate trajectory hypotheses, which include not only the current-frame detection boxes but also the temporally predicted boxes based on the motion prediction of the history trajectory $h_i^t$. Motion Prediction of History Trajectories. To achieve this goal, we introduce a motion prediction network that encodes the historical trajectories of tracked objects to predict their future states. Specifically, we first reorganize the history trajectories $\{h_i^t\}_{i=1}^{N}$ as $H^t = \{\hat{h}_i^t \mid \hat{h}_i^t \in \mathbb{R}^{T_h \times S}\}_{i=1}^{N} \in \mathbb{R}^{N \times T_h \times S}$. Note that $\hat{h}_i^t \in \mathbb{R}^{T_h \times S}$ is the cropped history trajectory of the i-th trajectory $h_i^t$ up to time $t-1$ with temporal length $T_h$, and $S$ denotes the number of state attributes at each frame, such as location, heading angle, velocity, and time encoding. We pad all-zero vectors to the history trajectories that are shorter than $T_h$. The motion features of each history trajectory are then encoded using a PointNet-like encoder as $H_g^t = \mathrm{MaxPool}(\mathrm{MLP}(H^t))$, (1) where $\mathrm{MLP}(\cdot)$ is a multi-layer perceptron transforming each S-dimensional state vector of the history trajectory, followed by max-pooling over the temporal dimension to summarize all frames' features into N history trajectory features $H_g^t \in \mathbb{R}^{N \times D}$. The trajectory features are then used as input to an MLP prediction head that predicts each object's future states as $H_p^t = \mathrm{MLP}(H_g^t)$, (2) where $H_p^t \in \mathbb{R}^{N \times T_f \times 3}$ is the set of predicted states at the future $T_f$ frames for each history trajectory $h_i^t$ up to time $t-1$. $H_p^t$ can be reformulated as a set $H_p^t = \{p_i^t\}_{i=1}^{N}$, where $p_i^t \in \mathbb{R}^{T_f \times 3}$ indicates the predicted future states of the i-th trajectory starting at time $t$, and 3 represents the predicted 2D location and the heading angle at each time step. Generation of Multiple Trajectory Hypotheses. 
With the predicted trajectory states from the past time steps, TrajectoryFormer generates multiple trajectory hypotheses for each tracked object by associating each history trajectory to each of its Tf temporally predicted boxes and current-frame detected boxes. Specifically, as shown in Fig. 2, for a given history trajectory hi up to time t \u22121, we collect its predicted states at time t from the motion prediction results at the previous Tf frames of the history trajectory, which can be represented as the set {pt\u22121 i [1], . . . , pt\u2212j i [j], . . . , pt\u2212Tf i [Tf]}, where pt\u2212j i [j] indicates the predicted state at current time t by using the short clip of this history trajectory at time t\u2212j. Note that we only predict the future position and heading angle of each trajectory and assume that the box dimension is unchanged for each history trajectory. For the sake of simplicity, we consider {pt\u2212j i [j]}Tf j=1 as the predicted boxes of the i-th history trajectory at current time t, which are utilized to associate with the i-th history trajectory to generate multiple trajectory hypotheses. We illustrate the generation of multiple trajectory hypotheses with temporally predicted boxes in Fig. 2, where the prediction length Tf = 3 and history length Th = 3. In addition to the boxes temporally predicted from the past, each history trajectory ht i is also associated with the W detection boxes at the current frame, which are generated by a 3D detector and chosen as the nearest boxes to the trajectory. We denote the associated detection boxes of the i-th history trajectory as {dj i}W j=1. Given the generated predicted boxes and associated detection boxes, for each history trajectory ht i, we can obtain a set \u2126consisting of M = N \u00d7 (Tf + W) hypotheses as \u2126p = {ht i \u2295pt\u22121 i [1], . . . , ht i \u2295pt\u2212Tf i [Tf]}, \u2126d = {ht i \u2295d1 i , . . . , ht i \u2295dW i }, (3) \u2126= \u2126p \u222a\u2126d, where \u2295indicates linking the i-th history trajectory with a temporally predicted or detection box to generate a trajectory hypothesis. These proposed multiple trajectory hypotheses strategy provides two key benefits. Firstly, it enables the recovery of objects that were not detected at time t by propagating past times\u2019 temporal prediction boxes to time t, which cre4 \fates trajectory hypotheses better tolerating short-term misdetection of the tracked objects. Secondly, it provides more association candidates and can correct tracking errors in trajectories caused by low-quality detection boxes, since the 3D detector only uses limited temporal information in the point cloud sequence (e.g., 2-3 frames) and may produce low-quality detection boxes for challenging objects. In such cases, temporally predicted boxes might provide better trajectory hypotheses to improve the tracking quality. 3.2. Long Short-Term Hypothesis Feature Encoding After obtaining multiple trajectory hypotheses, TrajectoryFormer adopts a long-short feature encoder to transform the trajectory hypotheses into the feature space, which involves encoding the long-term motion information and the short-term appearance of each trajectory hypothesis. For long-term motion encoding, we employ a PointNetlike neural network that takes M trajectory hypotheses\u2019 box sequence \u2126B \u2208RM\u00d7(Th+1)\u00d78 as input, where 8 means the number of boxes\u2019 properties (e.g. 
7-dim geometry and 1-dim time encoding), and outputs their motion features Em \u2208RM\u00d7D as Em = MaxPool(MLP(\u2126B)). (4) The incorporation of such long-term motion information is crucial in differentiating hypotheses that exhibit similar appearance and location at the current time. To reduce the computational cost and avoid handling long-term object point variations, we only encode the shortterm appearance of each trajectory hypothesis. Specifically, we randomly sample Y points by cropping the input shortterm point cloud within the box at time t of each trajectory hypothesis. We follow MPPNet [2] to encode the box information to each cropped object point by computing the relative differences between each sampled point pi and 9 representative points of the hypothesis box (8 corner and 1 center points). By appending an extra one-dimensional time offset embedding, the final point-wise appearance features of the j-th hypothesis of the i-th tracked object can be further encoded with an MLP network, which can be represented as Oj i \u2208RY \u00d7D. Given these encoded point-wise features, we first utilize the self-attention mechanism to perform information interaction among all points and then adopt a cross-attention layer to obtain the aggregated embedding from Y points as \u02c6 Oj i = SelfAttn(Q(Oj i ), K(Oj i ), V (Oj i )), V j i = CrossAttn(Q(v), K( \u02c6 Oj i ), V ( \u02c6 Oj i )), (5) where i \u2208 {1, \u00b7 \u00b7 \u00b7 , N} and j \u2208 {1, \u00b7 \u00b7 \u00b7 , Tf + W}. Q(\u00b7), K(\u00b7), V (\u00b7) are linear projection layers to generate query, key, value features for the attention layers. v \u2208 R1\u00d7D is zero-initialized learnable parameters to aggregate features from all the subsampled points of the j-th hypothesis of the i-th tracked object, which generates its final appearance feature as V j i \u2208RD. In practice, the self-attention and cross-attention operations are iteratively repeated for multiple rounds to update the query vector v gradually. The final short-term appearance features of all M hypotheses can be denoted as Ea = {V j i }N,Tf +W i=1,j=1 \u2208RM\u00d7D. Given the appearance embedding Ea and motion embedding Em, the long short-term embedding E \u2208RM\u00d7D of all M hypotheses of trajectory i is formed by concatenating the features with a one-hot class vector C to distinguish their target category along the channel dimension as E = MLP(Concat(Ea, Em, C)). (6) 3.3. Global-local Feature Interaction of Multiple Trajectory hypothesis The hypothesis features encode the appearance and historical motion information of each tracked object. However, it fails to consider the relationship between multiple trajectory hypotheses of the same tracked object and the interactions between all tracked objects in the same scene. To properly model inter-hypothesis and inter-trajectory relations, we propose a Global-local Interaction Module to model the spatial relationships among all hypotheses of all tracked objects. Specifically, we design a transformer with self-attention mechanism to propagate information between all trajectory hypotheses. The interaction is performed alternatively between global and local contexts, as depicted in Fig. 1. During global interaction, each hypothesis gathers information from all other hypotheses as G = SelfAttn(Q(E), K(E), V (E))), (7) which forms global-interacted embedding G \u2208RM\u00d7D. On the other hand, local interaction emphasizes the interaction between different hypotheses of the same tracked object. 
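Before detailing this per-object local step, the long short-term encoding that produces the hypothesis embeddings (Eqs. (4)-(6) above) can be summarized in the PyTorch sketch below. It is an illustrative sketch only: the MPPNet-style per-point geometry features are assumed to be precomputed (point_dim is a placeholder), and the layer widths, head counts, and number of attention rounds are assumptions rather than the authors' exact architecture.

import torch
import torch.nn as nn

class LongShortEncoder(nn.Module):
    def __init__(self, d=256, box_dim=8, point_dim=28, num_classes=3, rounds=3, heads=4):
        super().__init__()
        # Eq. (4): PointNet-like motion encoding of the (Th+1)-frame box sequence
        self.motion_mlp = nn.Sequential(nn.Linear(box_dim, d), nn.ReLU(), nn.Linear(d, d))
        # point-wise embedding of the precomputed MPPNet-style geometry features
        self.point_mlp = nn.Sequential(nn.Linear(point_dim, d), nn.ReLU(), nn.Linear(d, d))
        # Eq. (5): repeated self-attention over points plus cross-attention with a learnable query
        self.self_attn = nn.ModuleList([nn.MultiheadAttention(d, heads, batch_first=True) for _ in range(rounds)])
        self.cross_attn = nn.ModuleList([nn.MultiheadAttention(d, heads, batch_first=True) for _ in range(rounds)])
        self.query = nn.Parameter(torch.zeros(1, 1, d))   # zero-initialized aggregation query v
        # Eq. (6): fuse appearance, motion, and the one-hot class vector
        self.fuse = nn.Sequential(nn.Linear(2 * d + num_classes, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, box_seq, point_feats, cls_onehot):
        # box_seq: (M, Th+1, box_dim), point_feats: (M, Y, point_dim), cls_onehot: (M, num_classes)
        e_m = self.motion_mlp(box_seq).max(dim=1).values       # (M, d), Eq. (4)
        o = self.point_mlp(point_feats)                        # (M, Y, d)
        v = self.query.expand(o.size(0), -1, -1)               # (M, 1, d)
        for sa, ca in zip(self.self_attn, self.cross_attn):
            o = sa(o, o, o)[0]                                 # point-wise self-attention
            v = ca(v, o, o)[0]                                 # aggregate the points into the query, Eq. (5)
        e_a = v.squeeze(1)                                     # (M, d) appearance embedding
        return self.fuse(torch.cat([e_a, e_m, cls_onehot], dim=-1))   # (M, d), Eq. (6)

The global and local self-attention stages described in this section then operate on these (M, D) hypothesis embeddings.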
Specifically, we use Gj i represents the globally-interacted j-th hypothesis embedding of the i-th tracked object, where j = {1, . . . , Tf +W}, i = {1, . . . , N}. Therefore, the local interaction of Tf + W hypotheses of the i-th tracked object can be expressed as Lj i = SelfAttn(Q(Gj i), K(Gj i), V (Gj i))), (8) which forms local-interacted embedding L \u2208RM\u00d7D. We alternately conduct the global and local interaction in the transformer for several times, which enables the representations of the hypotheses to incorporate both global and local contexts, allowing each hypothesis to gain a better understanding of the distribution of its neighboring objects. This module leads to improved association outcomes since 5 \fthe embedding of each hypothesis becomes more contextaware. After the interaction process, an MLP head is appended to generate a final probability score for each hypothesis\u2019 confidence evaluation, which is utilized for selecting the best trajectory hypothesis for of each tracked object. 3.4. Losses The overall training loss contains two loss terms: a confidence-score loss Lconf and a bounding-box regression loss Lreg as L = Lconf + Lreg. (9) We adopt the binary cross entropy loss for Lconf, which is introduced to supervise the network to predict confidencescore of all trajectory hypotheses. For Lreg, we employ the same box regression loss in MPPNet [2] to supervise box refinement, that is, to predict the residual of the position, shape and heading angle between hypothesis boxes and ground truth. In addition, the simple motion prediction network is trained separately. We use the L1 loss to supervise the networks\u2019 prediction of future trajectory states, including center location and heading angle. 4. Experiments In this section, we first outline our experimental setup, which includes the datasets, evaluation metrics, implementation details and life cycle management. Subsequently, we present comprehensive comparisons with state-of-theart methods on the Waymo 3D MOT benchmarks. Finally, we provide a range of ablation studies and related analyses to investigate various design choices in our approach. 4.1. Dataset and Implementation Details Waymo Open Dataset. The Waymo Open Dataset comprises 1150 sequences, with 798 training, 202 validation, and 150 testing sequences, and each of which contains 20 seconds of continuous driving data within the range of [75m, 75m]. 3D labels are provided for three classes, including vehicle, pedestrian and cyclist. Nuscenes Dataset. The nuScenes dataset is a large dataets that contains 1000 driving sequences and each sequence spans 20 seconds. LiDAR data in nuScenes is provided at 20Hz but 3D labels are only given at 2Hz. We evaluate on the two most observed classes: car and pedestrian. Evaluation Metrics. We adopt the official evaluation metrics as defined by the Waymo and nuScenes benchmarks for comparison. For Waymo, MOTA is employed as the primary evaluation metric, which involves three types of errors: false positives (FP), missing objects (Miss), and identity switches (IDS) at each timestamp. Furthermore, the evaluation performance is divided into two difficulty levels: LEVEL 1 and LEVEL 2. The former evaluates objects with more than five points, while the latter includes objects with at least one point. We use LEVEL 2 as the default performance setting. For nuScenes, we follow the official tracking protocol and use AMOTA as the main metric. Implementation Details. 
We employ the detection boxes of CenterPoint and MPPNet as inputs for our method. During training, we take 4 hypotheses that includes 2 generated hypotheses (1 predicted box and 1 detection box) and 2 augmented hypotheses derived from the generated ones for diverse hypotheses distribution. For inference, we specify the number of multiple hypotheses for each history trajectory as 6 (5 predicted boxes and 1 detection box) and 2 ( 1 predicted boxes and 1 detection box) for CenterPoint and MPPNet on Waymo, respectively. The detection boxes are associated with the trajectory through a greedy matching algorithm. For history trajectory without the matched current frame detection boxes, we pad all-zero boxes to create a hypothesis. We set a maximum matching distance of 2m, 0.5m and 1m for vehicles, pedestrians and cyclists in Waymo and 2.2m, 2.0m for car and pedestrian in nuScenes, respectively. The track-birth confidence threshold varies by detectors and classes. For CenterPoint, the track-birth confidence threshold is 0.2 for all classes in nuScenes and it is set to 0.72 for pedestrian and 0.8 for vehicle and cyclist in Waymo. For model hyper-parameters, we set the feature dimension D = 256, the number of sampling points Y = 128. We set the number of iteration blocks to 3 for both point feature encoding process and global-local interaction module. For optimization, the network is trained with the ADAM optimizer for 6 epochs with an initial learning rate of 0.001 and a batch size of 4. Life Cycle Management. If the score of a trajectory\u2019s latest predicted hypothesis is below a threshold, we remove the tracked object. For the retained objects, we select the hypothesis with the highest score as the association result. Finally, the new-born objects are generated from detection boxes which remain unassociated with history trajectories and do not overlap with the history trajectories. These boxes that meet the criteria and have score above the track-birth threshold are considered to new-born trajectories. 4.2. Comparison with State-of-the-art 3D MOT Tracker Waymo Validation Set. In Table 1, we compare TrajectoryFormer with other 3D MOT methods on the validation set of Waymo Open dataset, where TrajectoryFormer exhibits superior performance compared to other methods. To be specific, it outperforms the highest reported performance by 3.3%, 0.5%, and 1.5% and exceeds the adopted CenterPoint baseline by 4.6%, 6.1%, and 3.2% in terms of MOTA metric on vehicle, pedestrian, and cyclist, respectively. More specifically, TrajectoryFormer exhibits a significant improvement in the Miss metric compared to the employed baseline, which implies that our method can suc6 \fMethod Vehicle Pedestrian Cyclist MOTA\u2191 FP%\u2193 Miss%\u2193 IDS%\u2193 MOTA\u2191 FP%\u2193 Miss%\u2193 IDS%\u2193 MOTA\u2191 FP%\u2193 Miss%\u2193 IDS%\u2193 AB3DMOT [29] 55.7 0.40 52.2 2.74 CenterPoint [34] 55.1 10.8 33.9 0.26 54.9 10.0 34.0 1.13 57.4 13.7 28.1 0.83 SimpleTrack [13] 56.1 10.4 33.4 0.08 57.8 10.9 30.9 0.42 56.9 11.6 30.9 0.56 ImmotralTrack [27] 56.4 10.2 33.4 0.01 58.2 11.3 30.5 0.26 59.1 11.8 28.9 0.10 SpOT [24] 55.7 11.0 33.2 0.18 60.5 11.3 27.6 0.56 Ours (CenterPoint) 59.7 11.7 28.4 0.19 61.0 8.8 29.8 0.37 60.6 13.0 25.6 0.70 Ours (MPPNet) 61.0 10.9 28.0 0.13 63.4 11.6 24.6 0.40 63.5 8.2 28.0 0.28 Table 1. Tracking performance on the Waymo Open dataset validation split. The employed detector of compared tracking methods, including SimpleTrack, ImmotralTrack and SpOT are all CenterPoint. 
Method Vehicle Pedestrian Cyclist MOTA\u2191 FP%\u2193 Miss%\u2193 IDS%\u2193 MOTA\u2191 FP%\u2193 Miss%\u2193 IDS%\u2193 MOTA\u2191 FP%\u2193 Miss%\u2193 IDS%\u2193 AB3DMOT [29] 40.1 16.4 43.4 0.13 37.7 11.6 50.2 0.47 PVRCNN-KF [20] 57.7 8.4 33.6 0.26 53.8 9.3 36.2 0.73 55.1 8.3 35.8 0.91 AlphaTrack [36] 55.7 9.6 34.3 0.44 56.8 10.7 31.3 1.23 59.6 5.4 33.7 1.23 CenterPoint [34] 59.4 9.4 30.9 0.32 56.6 9.3 33.1 1.07 60.0 11.1 28.1 0.78 SimpleTrack [13] 60.3 8.8 30.9 0.08 60.1 10.7 28.8 0.40 60.1 9.7 29.6 0.67 ImmotralTrack [27] 60.6 8.5 31.0 0.01 60.6 11.0 28.3 0.18 61.6 9.3 29.0 0.10 Ours (CenterPoint) 64.6 8.5 26.7 0.17 62.3 7.6 29.7 0.35 64.6 8.7 26.1 0.64 Ours (MPPNet) 64.9 9.1 25.8 0.21 65.5 9.4 24.7 0.42 64.2 7.2 28.0 0.55 Table 2. Tracking performance on the Waymo Open dataset testing split. cessfully recover objects that were missed by the detector. We attribute this success to our multi-hypothesis tracking strategy, which utilizes multiple trajectory hypotheses to propagate the state information of objects from past frames to the current frame and thus our model can better capture the potential motion of the tracked objects. Besides, this strategy provides extra candidate bounding boxes, which can be associated with objects that the detector failed to detect in the current frame. Moreover, for pedestrians, our method achieves lower False Positive (FP) values compared to other methods, indicating that the boxes in our trajectories have higher quality. Pedestrian trajectories are more complex and crowded compared to other categories, which makes it challenging for the network to generate correct associations. Hence, the lower FP for pedestrians indicates that TrajectoryFormer can handle associations in complex scenarios. When adopt more advanced detector, MPPNet, TrajectoryFormer can achieve higher performance. Waymo Testing Set. As shown in Table 2, TrajectoryFormer also significantly outperforms other methods on the testing set of Waymo Open Dataset. NuScenes Validation Set. We also evaluate TrajectoryFormer on the validation split of the nuScenes dataset, as shown in Table 3. Following SpOT, we conduct experiments on the two main classes, namely car and pedestrian. All compared methods utilizes the detection results of CenterPoint. Our approach surpasses CenterPoint by 1.2% and 5.6% and SpOT by 0.3% and 0.4% in terms of AMOTA for car and pedestrian, respectively. 4.3. Ablation Studies To verify the effectiveness of each component in TrajectoryFormer, We conduct comprehensive ablation studies on the Waymo benchmark. Unless otherwise mentioned, Method Car Pedestrian AMOTA\u2191 MOTA\u2191 AMOTA\u2191 MOTA\u2191 CenterPoint 84.2 71.9 77.3 64.5 SimpleTrack 83.8 70.1 79.4 67.0 ImmotralTracker 84.0 69.8 80.2 68.0 SpOT 85.1 82.5 TrajectoryFormer (ours) 85.4 75.0 82.9 69.9 Table 3. Tracking performance on val split of the nuScenes dataset. All the compared methods utilize CenterPoint as the detector, and the main metric (AMOTA) is highlighted in gray. all ablation experiments of TrajectoryFormer are trained on the vehicle category by taking the detection results of CenterPoint with 3 epoch. We take MOTA (LEVEL 2) as the default metric for comparison. Effects of the multiple hypotheses. Table 4 investigates the impact of different number of hypotheses for each tracked object. 
Firstly, without predicted boxes, TrajectoryFormer's association performance degrades to the same level as the CenterPoint baseline, which employs center distance and a greedy algorithm to perform trajectory-box association for each trajectory. In this scenario, compared to the baseline, the refinement of detection boxes results in a 1.2% performance gain. For the single prediction box setting, we set Tf = 1; in other words, the motion prediction network only predicts the future box of the tracked object at the next frame. The incorporation of even a single prediction box allows the network to transfer past information of tracked objects to the current frame, resulting in a significant 3.5% performance improvement. When employing multiple temporal prediction boxes (e.g., 5), a slight performance improvement of 0.3% is observed compared to the single-frame prediction box setting. Utilizing the predictions from trajectory embeddings at different history moments can provide more diverse candidate boxes, which brings a slight improvement. However, the use of more prediction boxes (i.e., 10) does not provide any additional performance improvements and instead increases computational overhead. Method MOTA↑ FP↓ Miss↓ IDS↓ CenterPoint [34] 55.1 10.8 33.9 0.26 w/o pred. boxes 56.3 10.5 33.0 0.24 1 pred. box 59.5 12.1 28.3 0.21 5 pred. boxes 59.8 11.3 28.7 0.23 10 pred. boxes 59.7 11.6 28.5 0.22 Table 4. Effects of the number of temporally predicted boxes. All experiments use 1 heuristically matched detection box. Category Method MOTA↑ FP↓ Miss↓ IDS↓ Vehicle 1 frame 59.6 11.5 28.7 0.23 3 frame 59.8 11.3 28.7 0.23 5 frame 59.8 11.3 28.7 0.23 Pedestrian 1 frame 59.8 9.7 30.1 0.37 3 frame 60.8 8.9 29.9 0.37 5 frame 61.0 8.8 29.8 0.37 Table 5. Effects of different numbers of point cloud frames for appearance feature encoding. Effects of length of point cloud frames. Table 5 displays the performance of using different numbers of point cloud frames. It should be noted that, in contrast to SpOT [24], which maintains a point cloud sequence with the same length as the trajectory of bounding boxes, we only crop the concatenated multi-frame points of hypothesis boxes at the current time to reduce computation overhead. We scrutinize the effect of different numbers of point cloud frames in appearance feature encoding by keeping the number of randomly sampled points constant. For the vehicle class, the point cloud appearance information of 1, 3, or 5 frames yields comparable performance. Conversely, for the pedestrian class, the utilization of 5-frame point cloud information outperforms single-frame and 3-frame point clouds by 1.2% and 0.2%, respectively. We attribute this to the fact that the pedestrian class has sparser raw LiDAR points in comparison to vehicles. Thus, the concatenated multi-frame points can provide more complete appearance information, which is advantageous for the network to differentiate between various candidate hypotheses. Effects of length of trajectory boxes. We explore the impact of the trajectory box length in our approach, as presented in Table 6. We observe that trajectories that are too short fail to fully leverage the past temporal motion information of the tracked objects, resulting in a 0.5% performance drop. For the Waymo dataset, we find that a history trajectory length of 10 frames can effectively capture the object's past motion states, resulting in the best performance. 
Further increasing the trajectory length does not yield any additional performance benefits, as the motion state of the object may have changed, compared with earlier time steps. However, longer trajectories result in additional computational overhead. Therefore, we employ 10 frame trajectory boxes as the default setting in our approach as a trade-off. Effects of the combination of point embedding and traMethod MOTA\u2191 FP\u2193 Miss\u2193 IDS\u2193 5 frame 59.3 11.7 28.8 0.23 10 frame 59.8 11.3 28.7 0.23 15 frame 59.7 11.5 28.6 0.22 20 frame 59.7 11.5 28.6 0.23 Table 6. Effects of numbers of trajectory length. Method MOTA\u2191 FP\u2193 Miss\u2193 IDS\u2193 Trajectory 50.8 15.4 33.2 0.59 Point 56.5 12.2 31.2 0.17 Point + Trajectory 59.8 11.3 28.7 0.23 Table 7. Effects of different designs of hypothesis embedding. jectory embedding. Table 7 presents the investigation of different hypothesis embedding designs. As we can see, only using the long-term boxes feature will lead to a 9% performance drop, which is reflected by the large value of the Miss and FP indicators. This suggests that a network based solely on the trajectory boxes feature cannot adequately select the best matching boxes for each tracked object, resulting in the retention of low-quality boxes (increasing FP) and the discarding of high-quality boxes (increasing Miss). Meanwhile, utilizing the short-term appearance features of point clouds demonstrates better association ability than trajectory box features, but also decreases performance by 3.3%. In the end, the optimal performance was achieved through the joint utilization of point cloud and trajectory features, emphasizing the significance of integrating both motion and appearance information. 5." + }, + { + "url": "http://arxiv.org/abs/2110.07225v2", + "title": "Web Search via an Efficient and Effective Brain-Machine Interface", + "abstract": "While search technologies have evolved to be robust and ubiquitous, the\nfundamental interaction paradigm has remained relatively stable for decades.\nWith the maturity of the Brain-Machine Interface, we build an efficient and\neffective communication system between human beings and search engines based on\nelectroencephalogram(EEG) signals, called Brain-Machine Search Interface(BMSI)\nsystem. The BMSI system provides functions including query reformulation and\nsearch result interaction. In our system, users can perform search tasks\nwithout having to use the mouse and keyboard. Therefore, it is useful for\napplication scenarios in which hand-based interactions are infeasible, e.g, for\nusers with severe neuromuscular disorders. Besides, based on brain signals\ndecoding, our system can provide abundant and valuable user-side context\ninformation(e.g., real-time satisfaction feedback, extensive context\ninformation, and a clearer description of information needs) to the search\nengine, which is hard to capture in the previous paradigm. In our\nimplementation, the system can decode user satisfaction from brain signals in\nreal-time during the interaction process and re-rank the search results list\nbased on user satisfaction feedback. 
The demo video is available at\nhttp://www.thuir.cn/group/YQLiu/datasets/BMSISystem.mp4.", + "authors": "Xuesong Chen, Ziyi Ye, Xiaohui Xie, Yiqun Liu, Weihang Su, Shuqi Zhu, Min Zhang, Shaoping Ma", + "published": "2021-10-14", + "updated": "2021-10-15", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Adopted in diverse environments and used by billions of users, search engines have changed how humans learn and think. Driven by the diversity of information needs and benefiting from the increase in computing resources, search technology is evolving to become more powerful. However, the fundamental interaction paradigm has been relatively stable for decades. When searching, a user needs to formulate a query, which often consists of a few keywords, according to his information need, and submit it to the search engine. Upon receiving the query, the search engine will retrieve and return a ranked search results list to users. However, there exists several shortcomings of the current search interface: 1) users tend to issue short queries which bring uncertainty and ambiguity. Due to the strong dependence on the formulated query, the information loss of bi-directional transmission between users and search engines has caused a significant performance bottleneck [7]. 2) traditional search systems collect implicit user feedback such as click and dwell time and attempt to connect implicit user feedback with user\u2019s subjective feelings. But this implicit feedback is usually inaccurate and noisy and may not necessarily align with the subjective feelings of real users. 3) current search interface requires users to interact with search engines using the mouse and keyboard, which is impractical for scenarios where hand-based interactions are infeasible. Recently, with the development of BMI, it is possible to change the search interface to circumvent the problems mentioned above. BMI provides a direct communication pathway between an enhanced or wired brain and an external device, which is widely applied in researching, mapping, assisting, augmenting, and repairing brain functions. In the area of non-invasive BMIs, the most popular choice is electroencephalogram (EEG), which has attracted a large amount of theoretical and applied researches in text inputting, Human-Machine interaction, and cognitive activities analysis. Taking information transfer rate and visual theoretical study into consideration, Steady-State Visually Evoked Potential (SSVEP) paradigm is applied to implement the module of query inputting and the interaction between humans and the search engine in our system. This paradigm assigns each target key (alphabet or function key) in the virtual keyboard with different flicker frequencies. When the user gazes at certain target key, the SSVEP signal with the same arXiv:2110.07225v2 [cs.IR] 15 Oct 2021 \fWSDM \u201922, February 21\u201325 2022, Phoenix, AZ, USA Chen, et al. frequency (as well as its harmonics) is elicited in the visual cortex of the brain. Through analyzing this evoked signal, the system will get the target key which the user intends to enter. Besides the ability of system control, brain signals can be decoded as the search context and user feedback to understand and improve the search process. For example, Moshfeghi et al. [8] find that brain signals are related to the occurrence of information need and Gwizdka et al. [5] decode brain signals to the relevance judgment. In this paper, we build a search system based on the EEG device. 
Users can perform search tasks including formulating/submitting queries, interacting with search results. Moreover, we estimate the search state and decode user satisfaction and utilize the inferred feedback for evaluating and re-ranking search results to improve search experience. As far as we know, our implemented system is the first closed-loop system that users can interact with without relying on the mouse or keyboard. Based on collected brain signals, the proposed system can improve the search experience proactively, dynamically and personally. 2 SYSTEM OVERVIEW BMSI System consists of two parts: User Interaction Module and Data Process Module, as shown in Figure 1. The User Interaction Module provides interaction interfaces including the visual speller page, landing pages, and SERPs, on which users can perform search tasks such as formulating queries and examining search results. While the Data Process Module runs in the backend and it can decode the brain signals and provide real-time feedback for the User Interaction Module. In the visual speller page, users can issue queries by gazing at the target key that flickers with a specific frequency, while the Data Process Module will analyze harmonics evoked in the visual cortex (nine parietal and occipital channels, including Pz, PO3, PO5, PO4, PO6, POz, O1, Oz, and O2) to locate the target key chosen by the user. During the query formulation process, BMSI system also records brain signals in other areas of the brain cortex, like the frontal, speech, and reading cortex, which is related to attention, mental status, language understanding, etc. The signals in these areas provide rich contextual information about users\u2019 information needs and search context, such as the formulation difficulty of summarizing information needs into the query, and users\u2019 current status (working, entertainment or exercise). We leave exploring these signals to further boost the search experience as future work. Once the user finishing the query formulation and clicks the search button, a top-ranked page is presented and attempts to satisfy the users\u2019 information need directly, similar to the landing page after the user pressing on the \u201cI\u2019m Feeling Lucky\u201d button in Google. The selection strategy of the top-ranked page could make use of brain signals in query formulation, which is expected to meet user needs instantly. In our demo system, we select the top result in the original Search Engine Result Page (SERP). During the examination process of the top-ranked page, the Data Process Module will use brain signals to decode user satisfaction in real-time. Then the system would re-rank the search results list according to the detected satisfaction feedback and the user can continue to examine more search results on this re-ranked SERP. On the re-ranked SERP, interaction options including clicking search results, scrolling up or down are given. These interaction options are provided using several blocks with different flickering frequencies while these blocks are displayed in the right position of the current viewport. Similar to the interaction paradigm in the visual speller page, search interactions on the re-ranked SERP are also based on the evoked SSVEPs and does not need users to use the mouse or keyboard. 3 APPROACHES 3.1 SSVEP based Keyboard Neurological research suggests that SSVEP signals are natural responses to visual stimulation at specific frequencies. 
When the retina is excited by a visual stimulus ranging from 3.5 Hz to 75 Hz[1], the visual cortex of the brain generates electrical activity at the same (or multiples of) frequency of the visual stimulus. We design a 33-target BMI speller referred to BETA [6] for visual stimulation to evoke SSVEP, and the flicker frequency ranges from 8 to 15.68 Hz. To make sure that users can get used to this system easily, we resemble the conventional QWERTY keyboard to construct the graphical interface in which 33 target keys, including 5 numbers, 26 alphabets, and 2 function signs (Delete and Search), are aligned in five rows. Among them, the 5 numbers are used to select candidate words. A sampled sinusoidal stimulation method [2] is adopted to present the visual flicker on the screen. In general, the stimulus sequence of each flicker can be generated by \ud835\udc60(\ud835\udc53,\ud835\udf19,\ud835\udc56) = 1 2 {1 + sin[2\ud835\udf0b\ud835\udc53( \ud835\udc56 RefreshRate ) + \ud835\udf19]} (1) where \ud835\udc56denotes the frame index in the stimulus sequence, and \ud835\udc53 and \ud835\udf19denote the frequency and phase values of the encoded flicker that uses a joint frequency and phase modulation[3]. The grayscale value of the stimulus sequence ranges from 0 to 1, where 0 indicates dark, and 1 indicates the highest luminance of the screen. For the 33 targets, the tagged frequency and phase values can be respectively obtained by \ud835\udc53\ud835\udc58= \ud835\udc530 + (\ud835\udc58\u22121) \u00b7 \u0394\ud835\udc53 \u03a6\ud835\udc58= \u03a60 + (\ud835\udc58\u22121) \u00b7 \u0394\u03a6 (2) where the frequency interval \u0394\ud835\udc53is 0.24 Hz, the phase interval \u0394\u03a6 is 0.5\ud835\udf0b, and k denotes the target index. In our work, f0 and \u03a60 are set to 8 Hz and 0 \ud835\udf0b, respectively. 3.2 Input Recognition Algorithm By analyzing the evoked SSVEPs, Input Recognition Algorithm could recognize the target key of user intent. In our system, Canonical Correlation Analysis (CCA) is applied to measure the canonical correlation coefficient between the real-time SSVEPs and the reference signals, i.e., the theoretical brain signals evoked by the stimulus flickered at a specific frequency. Specifically, the SSVEPs can be expressed by the following formula: S = \u0000x1, x2, x3, \u00b7 \u00b7 \u00b7 x9 \u0001T (3) \fWeb Search via an Efficient and Effective Brain-Machine Interface WSDM \u201922, February 21\u201325 2022, Phoenix, AZ, USA query formulation top result examination re-ranked SERP examination user interaction data process Query Suggestion Brain signals contains: \u2022 Information need \u2022 search context Selector decode brain signals to: \u2022 satisfaction \u2022 \u2026 Re-Ranker Input Recognition query related web pages original search result page Figure 1: Brain-Machine Search Interface System (BMSI) consists of the User Interaction Module and the Data Process Module. The User Interaction Module provides the interaction interface including the visual speller page, landing pages, and SERPs. The Data Process Module decodes the brain signals and provide real-time feedback for User Interaction Module. and the reference signals are: Rf = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \ud835\udc60\ud835\udc56\ud835\udc5b(2\ud835\udf0b\ud835\udc53\ud835\udc61) \ud835\udc50\ud835\udc5c\ud835\udc60(2\ud835\udf0b\ud835\udc53\ud835\udc61) . . . 
\ud835\udc60\ud835\udc56\ud835\udc5b(2\ud835\udf0b\ud835\udc41\ud835\udc53\ud835\udc61) \ud835\udc50\ud835\udc5c\ud835\udc60(2\ud835\udf0b\ud835\udc41\ud835\udc53\ud835\udc61) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb ,\ud835\udc61= 1 \ud835\udc39\ud835\udc60 , 2 \ud835\udc39\ud835\udc60 \u00b7 \u00b7 \u00b7 \ud835\udc41\ud835\udc60 \ud835\udc39\ud835\udc60 (4) where N is the number of harmonics, \ud835\udc53is the reference frequency(\ud835\udc53= 8.00, 8.24, 8.48 \u00b7 \u00b7 \u00b7 15.68), \ud835\udc39\ud835\udc60is the sampling rate and \ud835\udc41\ud835\udc60is the number of sampling points. For each flicker frequency ranging from 8 to 15.68Hz, we use Eq.(4) to generate the reference signal then calculate the correlation between each reference signal and SSVEP signal. After calculation, the reference signal with the highest correlation is the recognition result, which indicates the target key that the user intends to enter. 3.3 Query Suggestion User search intents are complex. Hence, a single query is hard to fully express their information needs. In that regard, query suggestion techniques can help users to complete their search tasks with less effort in complex search scenarios to some extent. Especially for Chinese users, the alphabet letters are usually not the final expression of queries but are used to input PinYin and then select the right Chinese words as the query string. Under this circumstance, query suggestion is more difficult for Chinese search engines but is strongly necessary to quickly capture users\u2019 information needs. The function of query suggestion in our system is powered by an API provided by Sogou Inc 1, one of the most popular Chinese search engines. The suggestion model integrates the analyses of large-scale heterogeneous data and user behavior on the Internet, such as user clicks, query reformulation, and tracking of hot news, which could effectively bridge the gap between user intent and query candidates. 1www.sogou.com Table 1: The performance of the query suggestion model with different input strategies. Chinese Input Strategies Successful Match Ratio #Keys per Char first letter spelling 0.77 0.65 full letter spelling 0.91 1.16 Query suggestion can provide appropriate query candidates with the incomplete part of a query. To test the efficiency of the query suggestion model, we simulate the input of 2,956 query samples with page views greater or equal to 500 times in the past half-year. Our stimulation adopted the two most popular PinYin input strategies: first letter spelling and full letter spelling. If query candidates are the fine-grained or same intent of the query sample, we define this as a successful match. For example, we regard all these candidates, \u201cyoutube online\u201d, \u201cyoutube download\u201d, and \u201cyoutube\u201d, as successful matches with the query \u201cyoutube\u201d. We show the performance of these two strategies in Table 1, we can observe that inputting queries by the first letter require less effort while using full letter spelling can achieve a higher match ratio. 3.4 Brain Signals Decoding Existing works in multichannel EEG-based prediction usually need effective feature extraction. Several features are proposed throughout literature and among these features, differential entropy (DE) is widely used and performs better than other features including band power, rational asymmetry, and differential asymmetry in an multi-channel EEG-based emotion recognition task [4]. 
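As a concrete reference for the CCA-based target identification of Sec. 3.2 (using the stimulus frequencies of Eq. (2) and reference signals of Eq. (4)), a minimal sketch is given below; a corresponding sketch of the DE-and-GBDT satisfaction decoding appears further below. The number of harmonics, the analysis window, and the use of scikit-learn's CCA as the correlation estimator are assumptions for illustration, not the system's exact implementation.

import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_harmonics, fs, n_samples):
    # Eq. (4): sin/cos pairs at the stimulus frequency and its harmonics
    t = np.arange(1, n_samples + 1) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(refs, axis=1)                      # (n_samples, 2 * n_harmonics)

def recognize_target(eeg, fs, n_harmonics=5, f0=8.0, df=0.24, n_targets=33):
    # eeg: (n_samples, 9) window from the nine parietal/occipital channels
    freqs = f0 + df * np.arange(n_targets)             # Eq. (2): 8.00, 8.24, ..., 15.68 Hz
    scores = []
    for f in freqs:
        ref = reference_signals(f, n_harmonics, fs, eeg.shape[0])
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    k = int(np.argmax(scores))                         # recognized key index
    return k, freqs[k]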
Therefore, we extract DE features using Short Time Fourier Transform over five frequency bands (delta: 0.5-4Hz, theta: 4-8Hz, alpha: 8-13Hz, beta: 14-30Hz, gamma: 30-50Hz) in 62 distinct EEG channels except for two re-reference channels. For classification model, we apply Gradient Boosting Decision Tree (GBDT), which can automatically choose and combine the EEG features and it has shown effective \fWSDM \u201922, February 21\u201325 2022, Phoenix, AZ, USA Chen, et al. in usefulness estimation with multichannel EEG features. To train the prediction model, we use the Search-Brainwave dataset 2. The dataset contains EEG data recorded during the 18 participants doing pre-defined search tasks in a period of 60 minutes as well as the usefulness annotation for each search result. To tune the hyper parameters, including learning rate, estimator number, leaf nodes, and the maximum tree depth, we applied the protocol of leave-one-participant-out. The protocol means that we apply data of each participant for validation and train the classifier with left participants. The parameters are tuned according to the averaged validation Area Under Curve (AUC) of each participant and then we train our final classifier with all data. As a result, the system can achieve an averaged AUC of 0.69 in validation and the whole steps (feature extraction and GBDT-based classification) described above cost averagely 0.2 seconds in our practice. Now that we can predict the satisfaction feedback and understand the perceived usefulness of the search results, our search system can automatically adjust to improve the search process. In our practice, we apply a simple strategy as a first step to utilize brain signals as feedback for a more proactive search system. Before our experiment, each search result involved has been annotated with some subtopics by topic model. On the one hand, when the system notice that the user perceive certain landing page is useful, search results share similar subtopics would be moved to the top. On the other hand, when we detect that the user is unsatisfied with certain landing page, we will re-rank the search result list and the results share the same subtopics will be moved to the back. The certain landing page refers to the top-ranked page as described in Section 2. Note that we focus on the effectiveness of brain signals can be decoded as feedback for a better search performance in a real-time system, the methods of how to combine these feedback with other evaluation framework are left as future work. 4 DEMONSTRATION In this section, we apply two cases to elaborate our BMSI System. More detailed information is shown in our video. In our first case, the user wants to learn more about Cheetah (LieBao in Chinese) Browser, a Web Browser developed by a Chinese company. Then he inputs \u201clb\u201d, the first letters of PinYin to \u201cLieBao\u201d. Our system automatically generates a candidate query list with PinYin completion and query suggestion. In this situation, the first candidate meets his information needs, and he can \u201cpress\u201d the \u201csearch\u201d button without extra selection. Then the system will present the related top-ranked page, which is the official website of Cheetah Browser. During the examination of the top-ranked page, the system decodes the user\u2019s satisfaction with brain signals in real-time, and it infers that the user is satisfied with this page. 
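The satisfaction decoding behind this inference (Sec. 3.4) can be sketched as follows, assuming EEG windows are already segmented and labeled. The band definitions follow the text; the use of scipy's STFT, the Gaussian differential-entropy shortcut DE = 0.5*log(2*pi*e*power), the fs value, and scikit-learn's GradientBoostingClassifier as a stand-in for the tuned GBDT are illustrative assumptions.

import numpy as np
from scipy.signal import stft
from sklearn.ensemble import GradientBoostingClassifier

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (14, 30), "gamma": (30, 50)}

def de_features(eeg, fs, nperseg=256):
    # eeg: (n_samples, n_channels) window; returns (n_channels * 5,) DE features
    feats = []
    for ch in range(eeg.shape[1]):
        f, _, z = stft(eeg[:, ch], fs=fs, nperseg=nperseg)
        psd = (np.abs(z) ** 2).mean(axis=1)            # time-averaged power per frequency bin
        for lo, hi in BANDS.values():
            band_power = psd[(f >= lo) & (f < hi)].sum()
            feats.append(0.5 * np.log(2 * np.pi * np.e * band_power + 1e-12))  # Gaussian DE approximation
    return np.asarray(feats)

def train_satisfaction_model(windows, labels, fs=250):
    # windows: list of (n_samples, n_channels) EEG segments; labels: 0/1 satisfaction annotations
    X = np.stack([de_features(w, fs) for w in windows])
    clf = GradientBoostingClassifier()                 # stand-in for the tuned GBDT in our system
    clf.fit(X, labels)
    return clf

At inference time, clf.predict_proba(de_features(window, fs)[None])[:, 1] would provide the real-time satisfaction score that drives the subtopic-based re-ranking described above.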
Therefore, when the user continue to examine the SERP, as shown on the left of Figure 2, the search results related to the subtopic of Cheetah Browser will be ranked higher than others due to our re-ranking strategy. In our second case, the user plans to download some pictures about the Paris Fashion Show. The user inputs \u201cbl\u201d and selects \u201cBaLi\u201d(Paris in English) as the query. The presented top-ranked page is a wiki page of the city, which does not meet the user\u2019s information 2http://www.thuir.cn/group/YQLiu/datasets/SearchBrainwaveDataset.zip Official Website of Cheetah Browser Download Link of Cheetah Browser Stock of Cheetah Inc. Cheetah Animal Cheetah Car Paris Fashion Show L\u2019Or\u00e9al Paris Niebo nad Paryzem News in Paris Short Videos about Paris Figure 2: The re-ranked SERPs of two queries are shown above. The user is satisfied with the top-ranked page about Cheetah Browser, so on the left page, the search results related to Cheetah Browser are higher than others. While for the right page, the user is unsatisfied with the city introduction page of Paris, more diverse results are shown at the front of the SERP. need, and the satisfaction decoding module perceives this feedback. To find useful information, the user chooses to examine more search results. On the re-ranked SERP, the diverse search results related to Paris, such as fashion shows, cosmetics, movies, etc, are provided, while results related to the subtopic that introducing this city are ranked lower. The corresponding re-ranked SERP is displayed on the right of Figure 2. 5" + } + ], + "Shuchang Liu": [ + { + "url": "http://arxiv.org/abs/2306.02239v4", + "title": "Generative Flow Network for Listwise Recommendation", + "abstract": "Personalized recommender systems fulfill the daily demands of customers and\nboost online businesses. The goal is to learn a policy that can generate a list\nof items that matches the user's demand or interest. While most existing\nmethods learn a pointwise scoring model that predicts the ranking score of each\nindividual item, recent research shows that the listwise approach can further\nimprove the recommendation quality by modeling the intra-list correlations of\nitems that are exposed together. This has motivated the recent list reranking\nand generative recommendation approaches that optimize the overall utility of\nthe entire list. However, it is challenging to explore the combinatorial space\nof list actions and existing methods that use cross-entropy loss may suffer\nfrom low diversity issues. In this work, we aim to learn a policy that can\ngenerate sufficiently diverse item lists for users while maintaining high\nrecommendation quality. The proposed solution, GFN4Rec, is a generative method\nthat takes the insight of the flow network to ensure the alignment between list\ngeneration probability and its reward. The key advantages of our solution are\nthe log scale reward matching loss that intrinsically improves the generation\ndiversity and the autoregressive item selection model that captures the item\nmutual influences while capturing future reward of the list. 
As validation of\nour method's effectiveness and its superior diversity during active\nexploration, we conduct experiments on simulated online environments as well as\nan offline evaluation framework for two real-world datasets.", + "authors": "Shuchang Liu, Qingpeng Cai, Zhankui He, Bowen Sun, Julian McAuley, Dong Zheng, Peng Jiang, Kun Gai", + "published": "2023-06-04", + "updated": "2023-06-09", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Recommender systems present a list of items upon each user\u2019s request to fulfill their personalized demand and interest. And the quality of the recommended list directly impacts the user\u2019s experience and his/her satisfaction with the overall system. Abundant literature has studied various supervised learning approaches [8, 11, 22] that increase the model expressiveness to better capture the patterns in the complex user-recommender interactions. While most existing methods adopt a pointwise or pairwise learning-to-rank paradigm that results in a model that separately scores each individual item for ranking, evidence [5] has shown that optimizing a listwise utility appears to be a superior option since it tends to make better use of the item\u2019s mutual influences in the list. As an intuitive example, adding an item with high click probability may not always produce better list-wise performance, since other items in the list might be too similar causing competition. In contrast, adding an item with low click probability may not always produce worse list performance, since it may emphasize or complement the neighboring items and make them more attractive. Based on this motivation, the list-wise ranking approaches [2, 5] and slate recommendation methods [12, 14] have been proposed. The key challenge of solving the list-wise recommendation problem is how to effectively and efficiently search the combinatorially arXiv:2306.02239v4 [cs.IR] 9 Jun 2023 \fKDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA Shuchang Liu et al. large action space. Existing work could generally be categorized as either learning a list-wise evaluator [9] or learning a list-wise generator [14]. The first approach uses the evaluator to approximate the list-wise utility function to guide the generation of lists. However, this paradigm heavily depends on the accuracy of the evaluator which makes it less promising in recommendation tasks. The latter approach belongs to the generative methods that can model the intra-list patterns and the list utility together in the generative process. Its stochastic generation process could greatly improve the diversity but with a severe trade-off on the recommendation quality (we show evidence in section 4.1). As another challenge of the listwise recommendation problem, an item list typically aggregates the probability of exposing high-quality items during recommendation and is less likely to explore lists with slightly lower utility. This is especially true for standard training with cross-entropy loss, as we will illustrate in section 3.4. To solve the aforementioned challenges, we reformulate the goal into providing sufficiently diverse and high-quality recommendation lists. Intuitively, sufficient recommendation diversity would expand the policy\u2019s knowledge of the action space and improves its efficiency in finding better recommendation. 
On the other hand, we would also want to make sure that the diverse recommendations have a high quality so that the search of item list could become more reasonable and improves the exploration effectiveness on the action space. Thus, in this work, we propose a generative approach based on a new flow-matching learning paradigm [4, 25, 26] which is capable of generating diverse and accurate recommendations. The key insights behind the proposed framework consist of a flow-matching loss that directly aligns the list generation probability with the list\u2019s utility in log scale, and an autoregressive item selection model that iteratively appends an item into the output list. Specifically, the autoregressive item selection process is associated with a generation tree, each possible list corresponds to a root-to-leaf trajectory, and the generative model controls the probability flow on the tree graph. By matching the list-wise probability flow with the utility, the resulting methods tend to align the log-likelihood of an item with log scale rewards (rather than aligning with the original reward as in cross-entropy), which gives a higher chance of exposure for items with slight lower rewards. One challenge during the optimization of our method is that the large action space may induce extremely skewed probability distribution towards zero, so bias factors are introduced to control the scale of the probability aggregation and stabilize the learning of the generative model. We summarize our contributions as follows: \u2022 We propose the GFN4Rec framework for the listwise recommendation problem and discuss its relationships with existing generative and reinforcement learning approaches. \u2022 We build simulated online environments based on two realworld datasets and validate the superiority of GFN4Rec over strong list-wise recommendation methods when training and exploring online, and prove its ability to provide diverse recommendations with high quality. \u2022 We conduct offline training and evaluation on the datasets as well to validate the consistent performance of GFN4Rec and the feasibility of the online environment. 2 BACKGROUND 2.1 Problem Formulation We define a set of user U and a set of item I. Each recommendation request from a user \ud835\udc62\u2208U consists of a set of profile features (e.g. user ID, gender), the most recent history of interactions, and a candidate set C. Note that a multi-stage recommendation process will have C \u2282I and C = I only holds for a one-stage recommendation task. Specifically, we denote the recommendation in the first case (C \u2282I) as a re-ranking scenario where an initial ranker exists, and denote that in the second case (C = I) as a ranking scenario. Goal: Then, the goal is to learn a policy \ud835\udf0b(C,\ud835\udc62;\ud835\udf03) that selects an item list O \u2208C\ud835\udc3efor the given user request and maximizes the listwise reward R(\ud835\udc62,\ud835\udc42). We assume a multi-behavior scenario where the user may provide different types of feedback (e.g. click, like, comment) for each item exposure. Formally, we define the set of user behavior as B, and \ud835\udc66\ud835\udc62,\ud835\udc56,\ud835\udc4fas the user \ud835\udc62\u2019s response of item \ud835\udc56with respect to behavior \ud835\udc4f\u2208B. Then, for a given list O = {\ud835\udc4e1, . . . 
,\ud835\udc4e\ud835\udc3e}, each item \ud835\udc4e\ud835\udc56obtains a multi-behavior response \ud835\udc4c\ud835\udc62,\ud835\udc4e\ud835\udc56= [\ud835\udc66\ud835\udc62,\ud835\udc4e\ud835\udc56,\ud835\udc4f1, . . . ,\ud835\udc66\ud835\udc62,\ud835\udc4e\ud835\udc56,\ud835\udc4f|B|], and the list-wise user response is: \ud835\udc4c\ud835\udc62,O = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \ud835\udc66\ud835\udc62,\ud835\udc4e1,\ud835\udc4f1 . . . \ud835\udc66\ud835\udc62,\ud835\udc4e\ud835\udc3e,\ud835\udc4f1 . . . ... . . . \ud835\udc66\ud835\udc62,\ud835\udc4e1,\ud835\udc4f|B| . . . \ud835\udc66\ud835\udc62,\ud835\udc4e\ud835\udc3e,\ud835\udc4f|B| \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb (1) For simplicity, we define the listwise reward as the average of itemwise reward R(\ud835\udc62,\ud835\udc42) = 1 \ud835\udc3e \u00cd \ud835\udc56\u2208O R(\ud835\udc62,\ud835\udc56), where the item reward is calculated as the weighted sum of different positive user responses R(\ud835\udc62,\ud835\udc56) = \u00cd \ud835\udc4f\ud835\udc64\ud835\udc4f\ud835\udc66\ud835\udc62,\ud835\udc56,\ud835\udc4f. Note that this reward metric is linearly separable by items and linearly separable by behaviors, which can accommodate efficient pointwise/pairwise training. However, it does not reflect the mutual influences of items so independently improving the item-wise reward\ud835\udc64\ud835\udc4f\ud835\udc66\ud835\udc62,\ud835\udc56,\ud835\udc4fof a single item on a single behavior does not necessarily improves the list-wise metric, since the rewards of other items in the list may drop as consequences. We remind readers that there are more advanced reward function designs that aim to improve the overall reward [6, 37] and we consider them as complementary to our solution. Online vs Offline: Additionally, we assume the existence of the online learning loop (data \u2192policy \u2192data) where the observed new interactions between \ud835\udf0band the user environment continuously expand the training data during the optimization of \ud835\udf0b. This indicates that the policy\u2019s exploration ability also determines the knowledge it will learn in the future, which in turn affects the recommendation performance. Note that this is different from the standard reinforcement learning setting in recommendation [1, 12, 20, 36] and conventional session-based recommendation [34] where the recommender needs to consecutively interact with a user for several rounds (one recommendation list in each round) and optimize the multi-round cumulative reward. In our setting, the aforementioned learning goal is a single-list reward optimization goal, and we want to achieve it in a dynamic online environment. 2.2 Related Work Top-K Recommendation and List-wise Recommendation: Standard pointwise and pairwise learning-to-rank methods [8, 11, \fGenerative Flow Network for Listwise Recommendation KDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA 15, 16, 29, 31] aims to learn an item-wise scoring function for a given user request, so they can adopt efficient supervise learning (by formulating the problem as classification task) and their expressiveness mainly comes from the sophisticated design of user request encoder (e.g. DNN [8], Transformer [15]). During inference, items are ranked based on the learned pointwise scoring function, and the top K items are selected as the recommendation. 
Yet, this learning paradigm does not align with real-world recommendation services which present to the user a list of items at a time. In such cases, the way how items are organized also influences how users respond to each item. For example, some users might prefer more diverse recommendations while other users might want to compete for similar items in the same list [17]. Then, the list-wise recommendation problem is defined to emphasize the mutual influences between items in the exposed list [2, 5, 7, 28, 35]. The general idea is to infer and learn from the difference between the inclusion and exclusion of a certain item in the exposed list with respect to the list-wise metric (e.g. NDCG) or the whole list evaluation (for more sophisticated rewards). Some work also shows that in a multi-stage recommendation system, the reranking model can better model the item correlations since the candidate set size is significantly reduced enabling a more powerful neural model [9, 22, 28]. Generative List Recommendation: In recent years, there has been a discussion on the generative perspective of the pointwise recommendation [19, 33] listwise recommendation [14, 21] or slate recommendation [12]. To handle the enormous combinatorial output space of lists, the generative approach models the distribution of recommended lists directly and generates a list as a whole with the use of deep generative models. For example, ListCVAE [14] uses Conditional Variational Autoencoders (CVAE) to capture the item positional biases and item interdependencies in list distribution. Although promising, subsequent research [21] has shown that ListCVAE struggles with accuracy-diversity trade-offs. Such an analysis shows that balancing the exploitation and exploration in existing generative list recommendation models remains challenging. Our method also belongs to the generative approach, but it uses a brand new flow matching paradigm [4] that directly maps the list generation probability with its utility. This learning scheme has the potential to generate high-quality recommendations with sufficient significantly improved diversity, which helps the online exploration and searching for a better recommendation. 2.3 Preliminary on GFlowNet The idea of GFlowNet [4] aroused first in the problem of stochastic object generation from a sequence of actions. For example, constructing and designing a molecular graph for new medicine. And the main insight behind GFlowNet is considering the iterative object generation sequence \ud835\udf0f= {O0 \u2192O1 \u2192\u00b7 \u00b7 \u00b7 \u2192O\ud835\udc47} as a trajectory in a probabilistic flow network, and the learned generative model aims to assign each trajectory a sampling probability proportional to the corresponding reward of the completed object: \ud835\udc43(\ud835\udf0f) \u221d\ud835\udc45(O\ud835\udc47) (2) similar to the energy-based generative model [18]. In order to avoid an expensive MCMC process, the proposed method borrows the idea of temporal difference [32] in reinforcement learning and formulates a flow matching objective \u2200O\ud835\udc61+1 \u2208\ud835\udf0fas in Eq.(3). It ensures that the sum of incoming flow matches the sum of outgoing flow. The reward has R = 0 for intermediate nodes and R > 0 only on leaf nodes, and the transition function \ud835\udc47states a deterministic object transformation based on the given action. 
\u2211\ufe01 O\ud835\udc61,\ud835\udc4e\ud835\udc61: \ud835\udc47(O\ud835\udc61,\ud835\udc4e\ud835\udc61)=O\ud835\udc61+1 F (O\ud835\udc61,\ud835\udc4e\ud835\udc61) = F (O\ud835\udc61+1) = R(O\ud835\udc61+1)+ \u2211\ufe01 \ud835\udc4e\ud835\udc61+1 F (O\ud835\udc61+1,\ud835\udc4e\ud835\udc61+1) (3) The author further derived two variants of this objective that are easy to optimize [23], namely, the Detailed Balance (DB) loss and the Trajectory Balance (TB) loss: LDB(O\ud835\udc61, O\ud835\udc61+1) = \u0012 log F (O\ud835\udc61)\ud835\udc43(O\ud835\udc61+1|O\ud835\udc61;\ud835\udf03) F (O\ud835\udc61+1)\ud835\udc43\ud835\udc35(O\ud835\udc61|O\ud835\udc61+1;\ud835\udf03) \u00132 LTB(\ud835\udf0f) = log \ud835\udc4d\ud835\udf03 \u00ce\ud835\udc47 \ud835\udc61=1 \ud835\udc43(O\ud835\udc61|O\ud835\udc61\u22121;\ud835\udf03) R(O\ud835\udc47) \u00ce\ud835\udc47 \ud835\udc61=1 \ud835\udc43\ud835\udc35(O\ud835\udc61\u22121|O\ud835\udc61;\ud835\udf03) !2 (4) which involves the learning of a flow estimator F (O), a forward probability function \ud835\udc43(O\ud835\udc61|O\ud835\udc61\u22121) that serves as the step-wise stochastic policy that builds up the object, and a backward probability function \ud835\udc43\ud835\udc35(O\ud835\udc61\u22121|O\ud835\udc61) that helps infer the flow from a certain parent. The TB loss minimizes the difference between the trajectory flow and the observed reward, and it reaches the minimum when the forward inference and the backward inference are identical. The DB loss optimizes the flow matching objective for each generation step O\ud835\udc61\u2192O\ud835\udc61+1, and for the leaf node with no child node, the denominator is replaced by the reward R(O\ud835\udc47) In our setting of list recommendation, we found two critical components of GFlowNet that are most helpful in improving recommendation performances: a) The log-scale reward that increases the chance of exploring diverse item lists during online learning; And b) the auto-regressive generation that optimizes a future reward while capturing the mutual influences of items. We will further explain this in the next section. 3 PROPOSED METHOD In this section, we illustrate our proposed framework GFN4Rec. Compared to GFlowNet\u2019s original design, our solution framework adopts several key changes to accommodate the list recommendation problem stated in section 2.1: a) The generation of a recommendation list forms a tree graph rather than a directed acyclic graph, which means that the backward probability is always one; b) The models are conditioned on user request \ud835\udc62so that collaborative learning can be used to alleviate the limited samples per request; c) The action space (i.e. item list) is usually much larger than that in [4] indicating a harder exploration problem, so we add bias terms for the global normalization, the reward scaling, and the forward probability shift to stabilize the training. 3.1 Item Selection Model and Generation Tree We follow an autoregressive generation process that selects one item at a time. During inference, a user request \ud835\udc62comes in and it contains the user information (profile features X\ud835\udc62and recent history H\ud835\udc62), and the initial output list is empty, i.e. O0 = \u2205. 
At each step \ud835\udc61> 0, an item \ud835\udc4e\ud835\udc61\u2208C/O\ud835\udc61\u22121 is selected based on the probabilistic model \ud835\udc4e\ud835\udc61\u223c\ud835\udc43\ud835\udf03(\ud835\udc56|\ud835\udc62, O\ud835\udc61\u22121), noted as the item selection \fKDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA Shuchang Liu et al. User Environment (Evaluator) GFN4Rec List Generation User Request Encoder Figure 1: Example of list generation with \ud835\udc3e= 5 and three types of user responses. model, parameterized by \ud835\udf03. Then the selected item is pushed at the end of the output list, i.e. O\ud835\udc61= O\ud835\udc61\u22121 \u2295{\ud835\udc4e\ud835\udc61} is an ordered list. At the final step \ud835\udc61= \ud835\udc3e, we will have a full recommendation list O\ud835\udc3e= {\ud835\udc4e1, . . . ,\ud835\udc4e\ud835\udc3e} which is then exposed to the user environment in answer to the request. Figure 1 shows an example of this process with \ud835\udc3e= 5 and the item selection model in each step is presented in Figure 3. During online exploration, the item is randomly sampled based on the softmax score, and for greedy strategies, we select the item with the top score. Note that our problem focuses on the list-wise recommendation, and there is no intermediate response signal for an item selection step until the final list is generated. The generation tree: We assume a recommendation list of a fixed size \ud835\udc3e(also known as the slate recommendation). Since we iteratively add items into the list in order, the generation graph of all possible lists forms a \ud835\udc3e-depth tree structure, where the nodes are (intermediate or final) output lists and each edge represents a selected item. Figure 2 shows an example of such a generation tree. In a tree graph, each node O\ud835\udc61has only one possible parent node O\ud835\udc61\u22121 except for the source node that has no parent. And the number of children for a given node O\ud835\udc61is linear to |C| \u2212\ud835\udc61except the leaf nodes that have no child. All leaves have depth \ud835\udc3e, and the total number of leaves (i.e. list-wise search space) is equivalent to the number of size-\ud835\udc3eplacement: \u0000| C| \ud835\udc3e \u0001 \u00d7\ud835\udc3e! = \ud835\udc42(|C|\ud835\udc3e). By sampling according to the autoregressive item selection model \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121), the generator ends up with a trajectory with the observed output list O = O\ud835\udc3e= {\ud835\udc4e1, . . . ,\ud835\udc4e\ud835\udc3e}, and the output list (in leaf node) has a one-to-one correspondence to its generation trajectory. Thus, we can obtain the generation probability of the output list as its unique trajectory\u2019s sampling probability conditioned on \ud835\udc62: \ud835\udc43(O|\ud835\udc62) = \ud835\udc3e \u00d6 \ud835\udc61=1 \ud835\udc43(O\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) = \ud835\udc3e \u00d6 \ud835\udc61=1 \ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) where the choice of item \ud835\udc4e\ud835\udc61determines the output list in the next step, i.e. \ud835\udc43(O\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) = \ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121). 
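A sketch of the autoregressive selection process of Section 3.1: one item is drawn per step from a softmax over the remaining candidates, the choice is appended to the list, and the per-step log-probabilities accumulate into log P(O|u) = Σ_t log P_θ(a_t|u, O_{t-1}). The scoring layout loosely follows a dot product between candidate item kernels and a step state built from the user encoding plus the items selected so far, but the layer sizes, zero "no item" placeholders, and module names are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemSelectionPolicy(nn.Module):
    """One-step selector P_theta(a_t | u, O_{t-1}) plus an autoregressive list sampler."""

    def __init__(self, state_dim, item_dim, k, hidden=128):
        super().__init__()
        self.k, self.item_dim = k, item_dim
        self.step_state = nn.Sequential(          # encodes (user state, items selected so far)
            nn.Linear(state_dim + k * item_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, item_dim))

    def generate(self, user_state, candidate_embs, greedy=False):
        """user_state: (d_s,); candidate_embs: (|C|, d_i). Returns (item indices, log P(O|u))."""
        mask = torch.zeros(candidate_embs.size(0), dtype=torch.bool)
        chosen, chosen_embs, log_prob = [], [], 0.0
        for t in range(self.k):
            pad = torch.zeros((self.k - t) * self.item_dim)        # zero "no item" placeholders
            z = self.step_state(torch.cat([user_state] + chosen_embs + [pad]))
            logits = candidate_embs @ z                            # dot-product score per candidate
            probs = F.softmax(logits.masked_fill(mask, float("-inf")), dim=-1)
            a = probs.argmax() if greedy else torch.multinomial(probs, 1).squeeze()
            log_prob = log_prob + torch.log(probs[a])
            chosen.append(int(a)); mask[a] = True
            chosen_embs.append(candidate_embs[a])
        return chosen, log_prob
```

During online exploration the sampling branch is used, while greedy=True reproduces the deterministic top-score selection used at test time.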
Using the example in Figure 2, the recommendation {\ud835\udc562,\ud835\udc561} has a trajectory probability \ud835\udc43(\ud835\udc562|\ud835\udc62, \u2205)\ud835\udc43(\ud835\udc561|\ud835\udc62, {\ud835\udc562}) = 0.5 \u00d7 0.7 = 0.35. 3.2 Learning Objectives on Network Flow Different from the standard reward maximization goal in most learning-to-rank paradigms, we want to learn a generative model that not only finds the best reward but also favors other high-reward recommendation lists for better exploration. Thus, following Eq.(2), Figure 2: Example of generation tree with K=2, |C| = 3. we aim to learn a trajectory distribution that is proportional to the list-wise rewards for a certain user \ud835\udc62: \ud835\udc43(O|\ud835\udc62) \u221dR(\ud835\udc62, O) (5) As we will discuss in section 3.4, this would enforce the model to match the log scale rewards for items that are less likely to be trapped in local sub-optima and boosts the exploration of lists with slightly lower rewards. One challenge of the optimization under this learning goal is the limited observation per user request (or only one interaction per request in the most extreme case). Fortunately, we can solve this through collaborative training across users. Matching the flow and the reward: Intuitively, users have different behavioral patterns which induce different reward distributions. In order to match these differences, we assign a personalized initial flow estimator F (\ud835\udc62, O0) = F (\ud835\udc62, \u2205) to the source node (the starting step with an empty list), representing the prior of the reward. Then the generation tree will split this initial flow according to the step-wise item selection model and the flow of a leaf node with O is F (\ud835\udc62, O) = F (\ud835\udc62, \u2205)\ud835\udc43(O|\ud835\udc62). Combining with Eq.(5), the user-wise flow distribution will have: \ud835\udc4f\ud835\udc67F (\ud835\udc62, O) = R(\ud835\udc62, O) (6) where \ud835\udc4f\ud835\udc67is a hyperparameter that represents the fixed global normalizing bias for the forward passes compared to observed rewards. Learning the trajectory probability: Based on previous notions, for an observed training sample (\ud835\udc62, O, R(\ud835\udc62, O)), we can derive from Eq.(4) the trajectory balance (TB) objective: LTB = \u0010 log\ud835\udc4f\ud835\udc67+ log F\ud835\udf19(\ud835\udc62, \u2205) \u00ce\ud835\udc3e \ud835\udc61\u22121 \ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) R(\ud835\udc62, O\ud835\udc3e) + \ud835\udc4f\ud835\udc5f \u00112 (7) where\ud835\udc4f\ud835\udc5fis a hyperparameter that represents the global reward bias, and it is introduced to control the smoothness of the loss landscape and avoids division by zero rewards. The learnable parameters include \ud835\udf19of the initial flow estimator F and \ud835\udf03of the item selection model (representing the forward probability function). Note that the backward probability is a constant \ud835\udc43(O\ud835\udc61\u22121|\ud835\udc62, O\ud835\udc61) = 1 since each node has only one parent in a tree graph. From trajectory-wise to step-wise: the TB loss optimizes the overall trajectory as a whole but induces a large variance in the squared error term. 
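The trajectory-balance objective of Eq.(7) thus reduces to a squared error in log space between the initial flow times the list generation probability and the (shifted) list-wise reward. Below is a sketch written against the accumulated list log-probability produced by a generator such as the one sketched earlier; F_phi(u, ∅) is assumed to be a small network predicting the log of the initial flow, and the bias values are placeholders.

```python
import math
import torch

def tb_loss(log_init_flow, list_log_prob, reward, b_z=1.0, b_r=0.1):
    """Eq.(7):  ( log b_z + log F_phi(u, empty) + sum_t log P_theta(a_t | u, O_{t-1})
                  - log( R(u, O_K) + b_r ) )^2
    log_init_flow: log F_phi(u, empty); list_log_prob: accumulated list log-probability;
    reward: list-wise reward R(u, O). All arguments are (possibly batched) tensors."""
    err = math.log(b_z) + log_init_flow + list_log_prob - torch.log(reward + b_r)
    return (err ** 2).mean()
```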
One alternative is to use a more detailed objective (derived from the DB loss of Eq.(4)) on each item generation \fGenerative Flow Network for Listwise Recommendation KDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA step O\ud835\udc61\u22121 \u2192O\ud835\udc61: LDB = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u0010 log F\ud835\udf19(\ud835\udc62, O\ud835\udc3e) \u2212log(R(\ud835\udc62, O\ud835\udc3e) + \ud835\udc4f\ud835\udc5f) \u00112 for leaf node \u0010 log\ud835\udc4f\ud835\udc67 \ud835\udc3e + log F\ud835\udf19(\ud835\udc62,O\ud835\udc61\u22121)\ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62,O\ud835\udc61\u22121) F\ud835\udf19(\ud835\udc62,O\ud835\udc61)\ud835\udc43(O\ud835\udc61\u22121|\ud835\udc62,O\ud835\udc61) \u00112 \ud835\udc61\u2208{1, . . . , \ud835\udc3e} (8) It consists of a reward-matching term for the leaf node and a flowmatching term for each of the intermediate nodes. Here, F\ud835\udf19(\u00b7) represents the flow estimator for any given node (leaf or intermediate), and the reward smooth bias \ud835\udc4f\ud835\udc5fand normalizing bias \ud835\udc4f\ud835\udc67have the same meaning as in LTB. Again, the single-parent property of nodes in a tree graph gives \ud835\udc43(O\ud835\udc61\u22121|\ud835\udc62, O\ud835\udc61) = 1 and we can simplify the second case of LDB to: LDB = \u0010 log\ud835\udc4f\ud835\udc67 \ud835\udc3e + log F\ud835\udf19(\ud835\udc62, O\ud835\udc61\u22121)\ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) F\ud835\udf19(\ud835\udc62, O\ud835\udc61) \u00112 ,\ud835\udc61\u2208{1, . . . , \ud835\udc3e} (9) Note that this learning objective is separable by item which is better suited for parallel training, but it does not directly optimize the trajectory probability, which may be less effective for limited observations or insufficient reward accuracy. Forward probability shifting for better stability During training, we observe that the scale of \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) is usually around 1 |I| which is quite different from the scale of the reward and the learned scale of the flow estimator. This could induce a very large negative value with high variance after taking the log, which could dominate the gradient calculation at the beginning and makes the training process very unstable. As a result, we also include a hyperparameter \ud835\udc4f\ud835\udc53that shifts the forward probability to a value range similar to other components. In other words, the original log term log \ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) is shifted to log(\ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) + \ud835\udc4f\ud835\udc53). As an intuitive example, we can set \ud835\udc4f\ud835\udc53= 1.0 to make log(\ud835\udc43(\u00b7) + \ud835\udc4f\ud835\udc53) \u22650. 3.3 Transformer-based User Request Encoder In our recommendation setting, a user request consists of the user\u2019s profile X\ud835\udc62that maintains the static features of the user as well as the L most recent interaction history H\ud835\udc62= [(\ud835\udc4e1,\ud835\udc4c\ud835\udc4e1), . . . , (\ud835\udc4e\ud835\udc3f,\ud835\udc4c\ud835\udc4e\ud835\udc3f)] that captures the dynamic changes in the user\u2019s interest. The user request encoder will take X\ud835\udc62and H\ud835\udc62as input and outputs a user state embedding \ud835\udc94\ud835\udc62for later list generation phase. 
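Before turning to the encoder details, here is a sketch of the step-wise detailed-balance objective of Eqs.(8)–(9), including the forward-probability shift b_f introduced just above. The flow estimator is assumed to output log-flows for every prefix O_0, ..., O_K of the list; shapes and default bias values are illustrative.

```python
import math
import torch

def db_loss(log_flows, step_probs, reward, b_z=1.0, b_r=0.1, b_f=1.0):
    """log_flows: (K+1,) log F_phi(u, O_t) for t = 0..K, where O_0 is the empty list;
    step_probs: (K,) forward probabilities P_theta(a_t | u, O_{t-1});
    reward: scalar tensor R(u, O_K).
    Intermediate terms follow Eq.(9) with log P shifted by b_f for stability; the
    leaf term matches log F_phi(u, O_K) against log(R + b_r) as in Eq.(8)."""
    K = step_probs.shape[0]
    intermediate = (math.log(b_z) / K
                    + log_flows[:-1] + torch.log(step_probs + b_f)
                    - log_flows[1:]) ** 2
    leaf = (log_flows[-1] - torch.log(reward + b_r)) ** 2
    return intermediate.mean() + leaf
```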
It consists of a transformer-based history encoder and a DNN-based feature extraction module. We present its details in Appendix A.1. And we remind the readers that this request encoder is not specifically designed for our GFN4Rec method and it could accommodate many existing models that require a user encoding module [15] including the baselines in our experiments as described in section 4. 3.4 Relation to Existing Methods Reward vs. Log-scale Reward: In standard learning-to-rank solutions and many list-wise methods that assumes conditional independence of item probabilities, a classification paradigm is adopted, such as binary or multi-class cross-entropy loss [15, 28, 30]. It results in an alignment between the item-wise log probability \ud835\udc43(\ud835\udc56|\ud835\udc62) and the item-wise reward, i.e. log \ud835\udc43(\ud835\udc56|\ud835\udc62) \u2192R(\ud835\udc62,\ud835\udc56). Assuming independent item selection, then this would induce exponential probability aggregation for an item list: \ud835\udc43(O\ud835\udc3e|\ud835\udc62) = \u00ce \ud835\udc4e\ud835\udc61\u2208O\ud835\udc3e\ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc62) \u2192 \u00ce \ud835\udc4e\ud835\udc61\u2208O\ud835\udc3e\ud835\udc52\ud835\udc45(\ud835\udc62,\ud835\udc4e\ud835\udc61), which is sensitive to items with high scores. Thus, Item Kernel Item Kernel ...... Item Kernel Item Kernel ...... no item no item Concatention ...... ...... Item Kernel User Request Encoder softmax Figure 3: Flow estimator \ud835\udf19and item selection model \ud835\udf03in GFN4Rec. We presents details of the user request encoder and item kernel in Appendix A.1. \u2299represents dot product. the generator may quickly distinguish items with top-ranking scores and quickly converge to a local optimum. In contrast, one of the key insights from the GFlowNet is the log scale reward matching paradigm, which aims to directly align the log probability with logscaled reward, i.e. log \ud835\udc43(O|\ud835\udc62) \u2192log R. Adopting the definition of list-wise reward in section 2.1, this log-scale alignment means that the list generation probability will be linear to the linear combination of item-wise reward: \ud835\udc43(O\ud835\udc3e|\ud835\udc62) \u2192R(\ud835\udc62, O) = \u00cd \ud835\udc4e\ud835\udc61\u2208O R(\ud835\udc62,\ud835\udc4e\ud835\udc61). In such a case, items with high scores are less distinguishable than those with lower scores, and items with slightly lower point-wise scores now have a good chance of being selected. Evaluator vs. Generator: As we have discussed in section 2.2, list-wise recommendation approaches can be generally categorized as evaluator-based methods, generator-based methods, and evaluator-generator paradigms. Our GFN4Rec framework is defined as a list generator where the list generation probability is proportional to its reward label. Notably, this property also means that GFN4Rec can be regarded as an evaluator-based method as well since the trajectory probability \ud835\udc43(O|\ud835\udc62) estimated upon generation is also an indicator of the list\u2019s quality (represented by the list-wise reward). This is different from generative methods like CVAE [14] that use the reward label as input upon generation. In general, GFN4Rec as well as any generation model that matches the list generation probability with the reward is simultaneously a generator and an evaluator. 
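The contrast drawn in Section 3.4 between reward matching and log-scale reward matching can be made concrete with a small, made-up example: under cross-entropy-style alignment the per-item preference compounds exponentially over the K items of a list, while under the log-scale matching of GFN4Rec the list probability stays linear in the item rewards, so slightly weaker lists retain a realistic chance of being explored.

```python
import numpy as np

K = 6                                   # list size used in the experiments below
r_a, r_b = 2.0, 1.8                     # hypothetical item-wise rewards of two lists' items

# cross-entropy-style alignment: log P(i|u) -> R(u,i), so per-item P is proportional to exp(R)
ratio_ce = (np.exp(r_a) / np.exp(r_b)) ** K     # list-level preference ratio, about 3.3

# log-scale matching: log P(O|u) -> log R(u,O), so the list probability is proportional to R
ratio_gfn = r_a / r_b                           # about 1.11

print(ratio_ce, ratio_gfn)
```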
Compared to the generator-evaluator learning paradigm [9] that uses a list evaluator to indirectly guide the recommendation policy, GFN4Rec is a more direct approach that is easier to optimize and stabilize. Additionally, the autoregressive generation process of GFN4Rec does not restrict the model design and can accommodate many existing advanced solutions [3, 31], but the main difference lies in the flow matching loss for the entire list rather than learning from a decomposed item-wise signal. 4 EXPERIMENTS To validate the effectiveness of the proposed method, we conduct both offline and online experiments on two real-world datasets. \fKDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA Shuchang Liu et al. Dataset |U| |I| #record |B| Range of R ML1M 6400 3706 1,000,208 3 [0,3] KR1K 1000 69,219 2,597,865 7 [-1,6] Table 1: Dataset Summary. The records are used for offline training of policies and online user environment, but not used for online training of policies. Datasets: we include two real-world datasets ML1M and KR1K. ML1M is the one million version of the MovieLens dataset 1 dataset that consists of users\u2019 rating (original range in {1, . . . , 5}) history for movies, but the rating signals are transformed into clicks (rating \u22653), likes (rating \u22654), and stars (rating \u22655). The KR1K is the 1K-user version of the KuaiRand [10] dataset that consists of users\u2019 interaction histories for short videos, the user feedback include clicks, views, likes, comments, forwards, follows, and hates, and all behavior types are 0/1 signals 2. For both datasets, we filter the records into 20-core data and cut the user history into segments of size \ud835\udc3e= 6, and regard each segment as an observed recommendation list. For simplicity, we set the item-wise reward weight \ud835\udc64\ud835\udc4f= 1 except that the hate signal in KR1K has \ud835\udc64hate = \u22121. As a result, the range of item-wise reward R(\ud835\udc62,\ud835\udc56) \u2208[0, 3] in ML1M and R(\ud835\udc62,\ud835\udc56) \u2208[\u22121, 6] in KR1K. Statistics of the resulting datasets are summarized in Table 1. Models and Baselines: We compare the GFN4Rec model with both ranking and reranking models. We summarize the included models as the following: \u2022 CF [15]: a pointwise model that scores the user-item interaction based on the dot product between the user encoding and the item encoding. \u2022 ListCVAE [14]: a generative model that captures the list distribution based on conditional VAE, and the reward is formulated as the input condition when providing a recommendation. \u2022 PRM [28]: a re-ranking model that uses the CF model as the initial ranker and uses a transformer-based re-ranker to encode the intermediate candidate set. \u2022 GFN4Rec: our proposed GFN4Rec model with trajectory balance loss. Comparison between trajectory balance and detailed balance will be further discussed in section 4.3. As mentioned in section 2.1, the ranking models provide a onestage recommendation with C = I, and the re-ranking model is associated with a pretrained initial ranker that filters the item pool into a smaller candidate set \ud835\udc36\u2282I for the re-ranker. To better control the variables in the model comparison, we use the same user request encoder across all models. We present more model details in Appendix A.1. 
Simulated User Environment: in order to simulate the complex multi-type user behavior in the observed data, we build a stochastic user response model E : U \u00d7 C\ud835\udc3e\u2192B\ud835\udc3ethat predict the probability of a user \ud835\udc62positively engage with item \ud835\udc56by behavior \ud835\udc4f. The base neural model \ud835\udc54(\ud835\udc62, O) outputs the initial behavior 1https://grouplens.org/datasets/movielens/1m/ 2https://kuairand.com/ Algorithm 1 GFN4Rec # Apply current policy in running episodes: 1: procedure Online Inference 2: Initialize replay buffer A. 3: while True, in each running episode do 4: Observe user request \ud835\udc62. 5: Initial O0 \u2190\u2205 6: for \ud835\udc61\u2208{1, . . . , \ud835\udc3e} do 7: Sample item \ud835\udc4e\ud835\udc61\u223c\ud835\udc43\ud835\udf03(\ud835\udc56|\ud835\udc62, O\ud835\udc61\u22121) with current policy. 8: O\ud835\udc61= O\ud835\udc61\u22121 \u2295{\ud835\udc4e\ud835\udc61} 9: end for 10: Obtain user responses \ud835\udc4cO from online environment and calculate R(\ud835\udc62, O). 11: (\ud835\udc62, O, R(\ud835\udc62, O),\ud835\udc4c\ud835\udc62,O) \u2192A 12: end while 13: end procedure # Simultaneous training on the buffer: 14: procedure Training 15: Initialize all trainable parameters in the policy (e.g. \ud835\udf03and \ud835\udf19 in GFN4Rec) 16: Wait until A has stored minimum amount of data points. 17: while Not Converged, in each iteration do 18: Obtain mini-batch sample (\ud835\udc62, O, \ud835\udc45(\ud835\udc62, O),\ud835\udc4c\ud835\udc62,O) \u223cA. 19: Calculate \ud835\udc43\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc62, O\ud835\udc61\u22121) and F\ud835\udf19(O\ud835\udc61) for each generation step \ud835\udc61. 20: Update the policy through one step of gradient descent on LTB or LDB. 21: end while 22: end procedure likelihood, and it consist of a Transformer-based user history encoder similar to the user request encoder, and a state-to-behavior predictor that infers the user response probability for the given recommendation O. We train this base model using binary cross entropy on the ground truth label\ud835\udc66\ud835\udc62,\ud835\udc56,\ud835\udc4fand obtain AUC in [0.7, 0.9] for both datasets across different behaviors. When the pretrained user response model takes effect in the online environment, we also include an item-influence module that suppresses the initial ranking score by each item\u2019s similarity to other items in the list, to simulate the user\u2019s demand for recommendation diversity. We use a significance factor \ud835\udf0c> 0 to ensure the existence of item influence and set \ud835\udf0c= 0.2 for ML1M while \ud835\udf0c= 0.1 for KR1K. The final user response \ud835\udc66\ud835\udc62,\ud835\udc56,\ud835\udc4fis uniformly sampled based on the modified behavior likelihood to simulate the uncertainty of user feedback in the recommendation. For the data sampling strategy of all online learning methods (e.g. In GFN4Rec, Algorithm 1, line 18), half of the mini-batch samples are newly added instances from the online inference procedure, and the other half comes from the uniform sampling over the entire buffer to avoid catastrophic forgetting [27]. 
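A sketch of how the simulated environment above turns the pretrained response model's behavior likelihoods into sampled feedback, including an item-influence term that suppresses each item's scores by its average similarity to the rest of the list with significance factor ρ. The exact functional form of the suppression is not spelled out in the text, so the multiplicative penalty below (and the independent Bernoulli sampling per behavior) is an assumption.

```python
import numpy as np

def simulate_responses(base_probs, item_embs, rho=0.2, rng=np.random.default_rng(0)):
    """base_probs: (K, |B|) behavior likelihoods g(u, O) from the pretrained model;
    item_embs: (K, d) embeddings of the recommended items; rho: significance factor
    (0.2 for ML1M, 0.1 for KR1K). Returns a sampled 0/1 response matrix Y_{u,O}."""
    K = item_embs.shape[0]
    e = item_embs / np.linalg.norm(item_embs, axis=1, keepdims=True)
    sim = e @ e.T
    mean_sim = (sim.sum(axis=1) - 1.0) / (K - 1)          # similarity to the rest of the list
    probs = np.clip(base_probs * (1.0 - rho * mean_sim[:, None]), 0.0, 1.0)
    return (rng.random(probs.shape) < probs).astype(int)  # uniform sampling of feedback
```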
4.1 Online Learning The main purpose of the online learning experiment is to 1) verify the GFN4Rec\u2019s ability to find better recommendation policies that produce higher rewards; 2) validate the more diverse behaviors \fGenerative Flow Network for Listwise Recommendation KDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA Method ML1M KR1K Avg. R Max R Coverage ILD Avg. R Max R Coverage ILD CF 2.073 2.939 13.963 0.529 2.253 4.039 100.969 0.543 ListCVAE 0.940 2.209 262.420 0.796 2.075 4.042 446.100 0.565 PRM 2.156 2.967 18.647 0.559 2.174 3.811 27.520 0.538 GFN4Rec(Explore) 2.047 2.938 87.660 0.617 2.212 3.984 415.515 0.591 GFN4Rec 2.172 2.972 15.693 0.565 2.414 4.054 21.267 0.520 Table 2: Model performances of online learning model. Best values are in bold. Strongest baseline in underline. Method ML1M KR1K Avg. R Max R R-NDCG R-MRR Coverage ILD Avg. R Max R R-NDCG R-MRR Coverage ILD CF 1.675 2.694 0.563 0.0713 12.217 0.729 1.941 3.860 0.390 0.0824 17.275 0.611 ListCVAE 1.896 3.802 0.381 0.0803 343.067 0.657 RerankCF 1.901 2.918 0.632 0.0806 129.823 0.627 1.931 3.990 0.395 0.0823 153.186 0.586 PRM 1.914 2.914 0.636 0.0812 128.626 0.623 1.909 3.966 0.386 0.0808 284.000 0.595 GFN4Rec 1.996 2.908 0.665 0.0848 21.788 0.605 1.962 3.870 0.393 0.0834 32.16 0.630 Table 3: Online simulator performances for offline model. Best values are in bold. Strongest baseline in underline. R-NDCG and R-MRR correspond to the R-NDCG(online) and R-MRR(online) metrics. Method ML1M KR1K R-NDCG(online) R-MRR(online) R-NDCG(test) R-MRR(test) R-NDCG(online) R-MRR(online) R-NDCG(test) R-MRR(test) CF 0.563 0.0713 0.533 0.0824 0.390 0.0824 0.356 0.0420 ListCVAE 0.381 0.0803 0.361 0.0419 RerankCF 0.632 0.0806 0.570 0.0835 0.395 0.0823 0.339 0.0415 PRM 0.636 0.0812 0.578 0.0861 0.386 0.0808 0.352 0.0415 GFN4Rec 0.665 0.0848 0.561 0.0826 0.393 0.0834 0.362 0.0421 Table 4: Online and offline ranking metrics of offline model. Best values are in bold. Strongest baseline in underline. of GFN4Rec during online sampling while keeping high-quality recommendations. 4.1.1 Training framework: we summarize the training procedures of GFN4Rec in algorithm 1. Lines 18-20 correspond to the main optimization step and lines 5-9 are the online sampling steps. During test time, if we aim to find the best output, the action sampling (in line 7) will be turned off and we will adopt greedy selection according to the scores provided by the item selection model. To better illustrate the exploration behavior of our GFN4Rec method, we observe both the test performance under the aforementioned greedy selection and that using sampling (with line 7 turned on), we denote the latter as GFN4Rec(Explore). When training other baselines, the overall online learning framework is similar to algorithm 1 and differs mainly in the loss minimization step (lines 18-20) and the list generation step (lines 5-9). For example, the CF baseline learns a pointwise model \ud835\udc43(\ud835\udc56|\ud835\udc62) which uses the dot product between user request encoding and candidate item kernel encoding as the ranking scores and simply selects the top-\ud835\udc3eas the recommendation, and its objective function is the reward-based binary cross-entropy: LBCE = \u2212R(\ud835\udc62,\ud835\udc56) log \ud835\udc43(\ud835\udc56|\ud835\udc62) + (1 \u2212R(\ud835\udc62,\ud835\udc56)) log(1 \u2212\ud835\udc43(\ud835\udc56|\ud835\udc62)) (10) where the label in the original BCE loss is replaced by the continuous multi-behavior reward. 
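The reward-based binary cross-entropy of Eq.(10), used to train the CF baseline (and the initial rankers of the reranking baselines), simply drops the continuous item-wise reward into the place of the 0/1 label; a sketch, with the leading minus applying to both terms:

```python
import torch

def reward_bce_loss(p, r, eps=1e-8):
    """Eq.(10): L = -[ R(u,i) * log P(i|u) + (1 - R(u,i)) * log(1 - P(i|u)) ].
    p: predicted P(i|u) in (0, 1); r: item-wise rewards in place of binary labels.
    Since r can exceed 1 (up to 3 on ML1M), this is a reward-weighted generalization
    of BCE rather than a proper likelihood."""
    p = p.clamp(eps, 1.0 - eps)
    return -(r * torch.log(p) + (1.0 - r) * torch.log(1.0 - p)).mean()
```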
During training, we fix all experiments with a mini-batch size of 128 and start training after 100 steps of running episodes. For reranking models, we include additional online training steps for the initial ranker before the training of the reranker, its learning objective also uses the aforementioned R-BCE loss. 4.1.2 Evaluation Protocol: For each user request and the recommended list, the online user environment returns the user feedback and we calculate the corresponding listwise reward R(\ud835\udc62, O) (defined in section 2.1). We report both the Average Reward as well as the Max reward across user requests in a mini-batch. For diversity metrics, we include the item Coverage metric that describes the number of distinct items exposed in a mini-batch, and intra-list diversity (ILD) that estimates the embedding-based dissimilarity between items in each recommended list: ILD(O) = 1 \ud835\udc3e(\ud835\udc3e\u22121) \u2211\ufe01 \ud835\udc4e\ud835\udc56\u2208O \u2211\ufe01 \ud835\udc4e\ud835\udc57\u2208O/{\ud835\udc4e\ud835\udc56} (1 \u2212similarity(\ud835\udc4e\ud835\udc56,\ud835\udc4e\ud835\udc57)) (11) As mentioned in [21], the item coverage reflects the cross-list diversity which will help us understand how GFN4Rec generates diverse lists. For each model, we use grid search to find the hyperparameters that yield the best results. Specifically, we check learning rate in {0.001, 0.0001, 0.00001}, L2 regularization in {0.0001, 0.00001, 0}. For ListCVAE, we search the \ud835\udefdcoefficient of the KLD loss in {1.0, 0.1, 0.01, 0.001}. For PRM, we control its PV loss coefficient in {1.0, 0.1, 0.01}. For all GFN4Rec variants we search \ud835\udc4f\ud835\udc5fin {0.1, 0.3, 1.0, 1.5}, \ud835\udc4f\ud835\udc53in {0.1, 0.5, 1.0, 1.5, 2.0}, and \ud835\udc4f\ud835\udc67in {0.1, 0.5, 1.0, 1.5}. We notice that most models converge around episode step 5000 in both ML1M \fKDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA Shuchang Liu et al. and KR1K, and the average result of the last 100 steps is regarded as test samples for evaluation. 4.1.3 Empirical Results: After searching the model parameters, we run each model\u2019s best setting for 5 rounds with different random seeds and report the average test results in Table 2. In both online environments, GFN4Rec achieves the best performance in terms of the reward metrics, and it significantly outperforms the strongest baseline in KR1K by 10% in the average reward. The reranking PRM achieves the same level of reward in ML1M, but it takes advantage of an extra ranking phase. This means that the GFN4Rec model can find a better recommendation policy than other baselines. The online-sampling counterpart GFN4Rec(Explore) also achieves a relatively high reward (the same level as CF) in both environments, but what makes it superior is the significantly improved item coverage and ILD. Specifically, in both ML1M and KR1K, GFN4Rec(Explore) improves the item coverage by 4\u00d7 compared to CF and PRM. ListCVAE could achieve the same level of diversity but suffers from severe accuracy trade-offs, especially in ML1M. On the contrary, GFN4Rec(Explore) achieves almost the same level of diversity as ListCVAE, with a significantly better accuracy performance in terms of rewards. All this evidence proves that GFN4Rec is able to find high-quality recommendations (in terms avg. R and max R) with better diversity as an online learning framework. 
4.2 Offline Learning We include the offline experiments as verification of 1) the consistent performance of GFN4Rec in both offline and online evaluation; and 2) the feasibility of the online simulator (discussed in section 4.3.3). 4.2.1 Training Framework: For offline training, the policy no longer samples the lists online nor collects training samples into the buffer, so GFN4Rec(Explore) is no longer applicable. Instead, it only uses the offline log data (as those in Table 1 that takes the same format (\ud835\udc62, O, R(\ud835\udc62, O),\ud835\udc4c\ud835\udc62,O). Except for this difference in the data iterator, the remaining optimization steps are identical to the training procedure of algorithm 1. To engage in offline test, we split the last \ud835\udc41 interactions of each user\u2019s history as test samples while the remaining as training samples, and we set \ud835\udc41= 1 for ML1M and \ud835\udc41= 4 for KR1K. We train each model with a mini-batch size of 128 and stop the training upon convergence (around 10000 steps in ML1M and 5000 steps in KR1K). We exclude ListCVAE in the comparison of ML1M for its unstable and incomparable performance. 4.2.2 Evaluation Protocol: During the evaluation, for the data points in the test set, we modify the standard offline metrics into the reward-based NDCG (R-NDCG) and the reward-weighted mean reciprocal rank (R-MRR) as illustrated in Eq.(12). where the Rank(\ud835\udc62,\ud835\udc56) is a position-wise rank of items on the same position in the batch data since each position in the recommendation list now corresponds to an item selection step. The R-NDCG metric generalizes the standard NDCG metric where the item-wise reward R(\ud835\udc62,\ud835\udc56) becomes the relevance label, and the IDCG is agnostic to the model being evaluated. The R-MRR metric generalizes the standard MRR metric but replaces the item label with the item-wise reward. For both metrics, a larger value means that the learned policy performs better on the offline data. R-NDCG = 1 \ud835\udc3e \u2211\ufe01 \ud835\udc58\u2208{1,...,\ud835\udc3e} R-NDCG(\ud835\udc58) R-NDCG(\ud835\udc58) = \u00cd \ud835\udc62,\ud835\udc4e\ud835\udc58R(\ud835\udc62,\ud835\udc4e\ud835\udc58)21\u2212Rank(\ud835\udc62,\ud835\udc4e\ud835\udc58) IDCG R-MRR = 1 \ud835\udc3e \u2211\ufe01 \ud835\udc58\u2208{1,...,\ud835\udc3e} R-MRR(\ud835\udc58) R-MRR(\ud835\udc58) = \u2211\ufe01 \ud835\udc62,\ud835\udc4e\ud835\udc58 R(\ud835\udc62,\ud835\udc4e\ud835\udc58) Rank(\ud835\udc62,\ud835\udc4e\ud835\udc58) (12) Additionally, we can still deploy the models to the online environment even though they are trained offline, only that there is no buffer to maintain and no online sampling for exploration. We adopt the same online evaluation protocol in section 4.1 and include both the accuracy metrics (average reward and maximum reward) and the diversity metrics (item Coverage and ILD). Note that we can calculate R-NDCG and R-MRR for both the offline data and the online observed interactions, so we denote the first case as R-NDCG(test) and R-MRR(test), and denote the second case as R-NDCG(online) and R-MRR(online). 4.2.3 Empirical Results: We adopt the same grid search for common hyperparameters as in online learning, and report the best parameter with 5-seed averaged results in Table 3 and Table 4. Specifically, in Table 4, GFN4Rec achieves better results than CF in ML1M and achieves the best results in KR1K in terms of the test set ranking metric R-NDCG(test) and R-MRR(test). 
These offline metrics are almost consistent with the online metrics, but with one exception when comparing the reranking baseline PRM, where GFN4Rec is slighted better on R-NDCG(online) and R-MRR(online) and PRM is slightly better on R-NDCG(test) and R-MRR(test) in ML1M. This might be related to the smaller action space of ML1M, which may improve the chance of the reranking mechanism to finding better intermediate candidates for later reranking. In general, GFN4Rec is effective in finding better rewards than one-stage models when engaging in offline training, and its performance is consistent in both offline and online metrics. Additionally, in Table 3, online ranking metrics (R-NDCG(online) and R-MRR(online)) are consistent with other online accuracy metrics (closest to Avg. R) in terms of model comparison. Combining with the aforementioned consistency between online and offline ranking metrics, this further verifies the feasibility of the evaluation under the online simulator (further explained in section 4.3.3). 4.3 Ablation Study 4.3.1 Trajectory Balance vs. Detailed Balance. As we have discussed in section 3.2, trajectory balance LTB (denote as GFN_TB) directly optimizes the item selection probability of different positions together, while the detailed balance LDB (denote as GFN_DB) separates the learning of each position and only the last step is directly guided by the accurate reward label. Thus, DB loss adopts step-wise learning which would result in lower variance in the squared error term, compared with TB loss. This indicates that DB is potentially more suitable for larger action space (item candidate set). As shown \fGenerative Flow Network for Listwise Recommendation KDD \u201923, August 6\u201310, 2023, Long Beach, CA, USA Method KR1K GFN_DB GFN_TB Avg. R 2.034 1.962 Max R 3.905 3.870 Coverage 2.034 1.962 ILD 0.582 0.630 R-NDCG(online) 0.400 0.393 R-MRR(online) 0.0859 0.0834 R-NDCG(test) 0.363 0.362 R-MRR(test) 0.0423 0.0421 Table 5: TB vs. DB with offline model training. Figure 4: Learning curves of greedy GFN4Rec and GFN4Rec(Explore) in KR1K. in Table 5, GFN_DB achieves better performance than GFN_TB in the practical KR1K dataset. We suggest using DB loss in practice as it is more suitable for large action spaces and more stable in training. 4.3.2 Greedy vs. Exploration. As supported by section 4.1, the GFN4Rec model can achieve high recommendation quality with better diversity than the exploration counterpart. We further illustrate this in Figure 4, where the reward metrics of GFN4Rec(Explore) grow much slower than that of the greedy GFN4Rec (for both DB and TB variants). In contrast, the item coverage and ILD metrics drop much slower in GFN4Rec(Explore). Additionally, we observe that the max reward, though it generally improves over time, appears to be very unstable. GFN4Rec(Explore) exhibits very stable behavior, which indicates that there might exist a large number of slates with high quality while extreme actions could misguide the learning process. 4.3.3 Feasibility of Online Simulator: While offline metrics like NDCG and MRR are widely verified in practice, the feasibility of an online simulator for the recommendation has been an important research topic in recent years [13]. We need a realistic online simulator that follows real-world user behavioral patterns in order to verify the effectiveness of recommendation policies. 
In section 4.2, we use both the online simulator and offline test set for model evaluation and observe that the two sets of metrics are generally consistent across different models. This indicates that our design of the user environment is sufficiently realistic to model the user behaviors and feasible in validating the recommendation models. Still, we remind readers that in theory, there is a feasible region of the pretrained user environment that is close to the offline data, but it does not exclude the chance of mode collapse if we do not regulate the pretraining process [24]. 4.3.4 Offline vs. Online. As many online learning methods pointed out [36], the offline log data does not provide the ground truth user feedback for the counterfactual question \u201cWhat if the policy recommends a different list and how would the user behave\u201d. This limitation restricts the exploration of better data samples and is the main motivation of the aforementioned research on user simulation. In our experiments, we observe evidence of the sub-optimal reward performances in models that are trained offline compared with their online training counterparts. For example, the best model in KR1K is GFN4Rec which achieves 2.414 in average reward, but it only reaches 1.962 on the same online simulator when it is trained offline. This phenomenon is consistent across all variants of GFN4Rec, indicating the effectiveness of engaging exploration in the online environment and the limitation of offline training. 4.3.5 Inconsistent Diversity of Reranking Model: We observe that the reranking baseline PRM achieves significantly higher item coverage when trained with offline data but not so in online learning. We believe this is related to the diversity of the initial ranker. To validate this, we include the RerankCF baseline which consists of a CF-based initial ranker and a deterministic top-K selection as the re-ranker, and present its results in Table 3. We observe that the diversity of RerankCF also achieves significantly higher item coverage than CF and GFN4Rec. This indicates that the existence of the initial ranker could potentially improve the diversity but at a cost of lower accuracy (in online reward and offline metrics). 5" + }, + { + "url": "http://arxiv.org/abs/2302.03431v2", + "title": "Exploration and Regularization of the Latent Action Space in Recommendation", + "abstract": "In recommender systems, reinforcement learning solutions have effectively\nboosted recommendation performance because of their ability to capture\nlong-term user-system interaction. However, the action space of the\nrecommendation policy is a list of items, which could be extremely large with a\ndynamic candidate item pool. To overcome this challenge, we propose a\nhyper-actor and critic learning framework where the policy decomposes the item\nlist generation process into a hyper-action inference step and an effect-action\nselection step. The first step maps the given state space into a vectorized\nhyper-action space, and the second step selects the item list based on the\nhyper-action. In order to regulate the discrepancy between the two action\nspaces, we design an alignment module along with a kernel mapping function for\nitems to ensure inference accuracy and include a supervision module to\nstabilize the learning process. 
We build simulated environments on public\ndatasets and empirically show that our framework is superior in recommendation\ncompared to standard RL baselines.", + "authors": "Shuchang Liu, Qingpeng Cai, Bowen Sun, Yuhao Wang, Ji Jiang, Dong Zheng, Kun Gai, Peng Jiang, Xiangyu Zhao, Yongfeng Zhang", + "published": "2023-02-07", + "updated": "2023-02-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "main_content": "INTRODUCTION Recommender Systems (RS) serve as one of the fundamental components for a wide range of web services including e-commerce, social media, news, and advertising. In recent years, studies have shown that the long-term interactions between users and the RS formulate a Markov Decision Process (MDP), where Reinforcement Learning (RL) methods can be used to further improve the predictive performances [1] compared to traditional learning-to-rank solutions [32]. Rather than optimizing the immediate user response, the key insight behind RL is to maximize the cumulative reward over all the interactions over time. When adopting this formulation in practice, the recommendation problem distinguishes itself from other RL tasks by the challenge of large dynamic discrete action space. For example, in an e-commerce system or video recommendation system, it may need to select for each request a recommendation list from a pool of millions of arXiv:2302.03431v2 [cs.IR] 8 Feb 2023 \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Liu and Cai, et al. a: list of item Hyper-Actor Environment (User Model) s: user state r: immediate reward from user Scoring Function Z: latent hyper-action Figure 1: MDP formulation with latent hyper-action products, and the candidate pool grows every day. This means that it would be unrealistic for tabular-based methods (e.g. Q-Learning, SARSA, Policy Iteration) that favor small action spaces and methods designed for fixed action spaces. Though efforts have been made to alleviate this issue by decomposition of the recommendation list [21] into item-wise sub actions, learning a policy gradient over the entire candidate item pool is still an challenging task. Fortunately, this challenge has already been solved in early-age non-RL models like latent factor models [26] and two-tower collaborative filtering method [50]. They learn a common latent space that can represent both the user requests and items (or item lists), so that the learned latent space can accommodate arbitrary number of items and is agnostic to the dynamic changes of the item pool. In this work, we combine this insight into the RL-methods and focus on the list recommendation problem. The latent representations of recommendation lists are generalized as hyper-actions as shown in Figure 1. In a forward inference, the policy first propose a vectorized hyper-action, then this latent vector will serve as a deterministic function that rank, and finally select a list of items, denoted as effect-action, from the candidate pool. Note that this extra implicit inference step also induces new challenges to the RL solution: Most importantly, it introduces inconsistency between the two actions spaces. Specifically, we want to apply efficient end-to-end training on the hyper-actions but it is the effect-actions that actually interact with the users. On the other hand, the most accurate latent representation of the discrete effect-action may not be exactly the same as that of the proposed hyper-action that selects it, so the inference accuracy is not guaranteed. 
Additionally, it also introduces an extra exploration stage and it is unknown whether one should explore hyper-actions on the latent space or explore the discrete effect-action space. All of these add up to the instability and uncertainty of the learning, so we need a solution that can regulate the two action spaces and stabilize the RL process. To solve the aforementioned challenges, we propose a general Hyper-Actor Critic (HAC) learning framework that contains four components: the user state and hyper-action generator; the scoring function that maps hyper-actions into effect-actions (i.e. recommendation lists); the critic network that evaluates both the hyper-action space and the effect-action space; and an inverse mapping module that infers the hyper-action back based on the effect action. During training, the backbone actor-critic learning paradigm is augmented with an alignment module that ensures consistency between two action spaces and a supervision module that improves the stability and effectiveness of RL. The resulting framework generalizes many existing solutions to recommendation tasks like DDPG and Online/Offline Supervise Learning (SL). We summarize our contribution as follows: \u2022 We propose a practical and efficient RL framework that learns to recommend a list of items from a large item pool through a latent hyper-action. \u2022 We build online simulators based on public datasets and empirically show that the resulting framework achieves better performance compared to standard RL and SL solutions. \u2022 We also point out that providing supervision and regulating the consistency between the hyper-action space and the effect-action space are helpful for improving the sampling efficiency and inference accuracy. 2 METHOD 2.1 Problem Formulation Here we consider the session-based recommendation scenario where the system considers an item pool of size \ud835\udc41, denoted as I, and for each user we observe the static user features \ud835\udc96and the interaction history of a session \ud835\udc651:\ud835\udc61= {\ud835\udc651, . . . ,\ud835\udc65\ud835\udc61} where \ud835\udc65\ud835\udc56\u2208I(1 \u2264\ud835\udc56\u2264\ud835\udc61). And the goal is to interactively recommend lists of items to users that maximizes user cumulative reward over the session of interactions. As we have described in section 1, we emphasize the existence of the latent action space and formulate the recommendation task as a modified Markov Decision Process (MDP) as captured in Figure 1. Then, the MDP components with the latent action space become: \u2022 S: the continuous representation space of user state. \u2022 A: the final effect-action space corresponds to the possible recommendation lists. Without loss of generality, we consider the list of fixed size \ud835\udc58so the action space is A = I\ud835\udc58. \u2022 Z: the latent hyper-action space which is a continuous vector space that encodes how the effect-action will be selected. We assume a many-to-one mapping function \ud835\udc53: Z \u2192A. \u2022 R: The cumulative reward function that estimates the user feedback in the user session, and the immediate reward \ud835\udc5f(\ud835\udc60,\ud835\udc4e) captures the single step reward when taking an action \ud835\udc4e\u2208A on state \ud835\udc60\u2208S. 
Note that there is an implicit transition model P(\ud835\udc60\u2032 \ud835\udc61|\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) that describes the probability of reaching a certain new state \ud835\udc60\u2032 \ud835\udc61after taking action \ud835\udc4e\ud835\udc61on state \ud835\udc60\ud835\udc61. In RS, this transition function is integrated into the user state encoder and is usually modeled by a sequential model that takes the user interaction sequence as input. At each interaction step in a user session, given a user\u2019s current state \ud835\udc60\ud835\udc61\u2208S (e.g. portrait and history interactions), the recommendation policy \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc61|\ud835\udc60\ud835\udc61) first infer a hyper-action representation \ud835\udc4d\ud835\udc61and then generates a list of items as the effect action \ud835\udc4e\ud835\udc61. The user\u2019s feedback along with the updated user state is returned from the user environment and the reward function \ud835\udc5f(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) is assumed as given so is regarded as part of the environment. The goal is to find an optimal recommendation policy \ud835\udf0b\u2217(\ud835\udc4e\ud835\udc61|\ud835\udc60\ud835\udc61) : S \u2192A that maximizes the expected cumulative reward throughout the session: E\ud835\udf0f\u223c\ud835\udf0b[R(\ud835\udf0f)] = E\ud835\udf0f\u223c\ud835\udf0b h |\ud835\udf0f| \u2211\ufe01 \ud835\udc61=0 \ud835\udefe\ud835\udc61\ud835\udc5f(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) i (1) \fExploration and Regularization of the Latent Action Space in Recommendation WWW \u201923, May 1\u20135, 2023, Austin, TX, USA where \ud835\udf0f= [(\ud835\udc600,\ud835\udc4e0,\ud835\udc5f0), (\ud835\udc601,\ud835\udc4e1,\ud835\udc5f1), . . . ] denotes the sampled trajectories, and \ud835\udefe\u2208[0, 1) denotes the discount factor for future reward. 2.2 Overall Framework We present our framework as Figure 2, and denote it as the HyperActor Critic (HAC) learning method. The recommendation policy \ud835\udf0b(\ud835\udc4e\ud835\udc61|\ud835\udc60\ud835\udc61) is decomposed into an hyper-actor network \ud835\udc43(\ud835\udc4d\ud835\udc61|\ud835\udc60\ud835\udc61) that generates a vectorized hyper-action and a ranking scorer \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc4d\ud835\udc61) that select the final recommendation list based on the hyper-action. Then we propose to share the critic network between action spaces so that it can evaluate either the hyper-action or the final effect-action (i.e. the recommendation list). Our framework uses DDPG [30] as foundation, but differently, we address the importance of using different action spaces for actor learning and critic learning. Specifically, we optimize the critic based on effect-actions to guarantee the accuracy of action/state evaluation, and use hyper-actions to optimize the actor so that efficient end-toend training and exploration can be applied. To ensure consistency between the two different action spaces, we also learn an inverse pooling module with item kernel functions to infer the hyper-action back from the effect-action. This means that the evaluation of the two action spaces will share the same critic, and the knowledge learned from the effect-action can be transferred to hyper-actions. To stabilize the learning process, we also include supervision on the effect-action using immediate user responses. 
2.3 User State and Hyper-Actor In the case of RS, the observable information of a user usually includes the up-to-date user interaction history \ud835\udc991:\ud835\udc61and the static demographic features \ud835\udc96. Then one can use any sequential model to encode this information and infer the dynamic user state: \ud835\udc94\ud835\udc61= StateEnc(\u03a6(\ud835\udc651), . . . , \u03a6(\ud835\udc65\ud835\udc61), \ud835\udc96) (2) where the item kernel function \u03a6 maps the items\u2019 raw features into a dense embedding in the kernel space. We also use a user kernel function to map the user features \ud835\udc96into the same kernel space of items, then concatenate the sequence of history items with the user embedding as the new sequence. To encode user features and histories, there exist several feasible sequential models such as [23, 25, 40], and we use the state encoder in SASRec [25] as our backbone since it can capture both the sequential patterns in the dynamic history and the user-item correlations through the self-attention mechanism. With the state encoded, a vectorized representation (i.e. the hyper-action) is inferred by an Multi-Layer Perceptron (MLP) module: \ud835\udc81\ud835\udc61= MLP(\ud835\udc94\ud835\udc61) (3) We assume that the distribution of this hyper-action follows the standard Gaussian N (\ud835\udc81\ud835\udc61, \ud835\udf0e2 \ud835\udc4d) and we can use the reparameterization trick to engage the end-to-end training. 2.4 Scoring Functions and Effect Action Given \ud835\udc81\ud835\udc61that contains sufficient information of user preferences, we can assume that the selection of items only depends on this latent vector. In other words, we have conditional independence: \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc94\ud835\udc61, \ud835\udc81\ud835\udc61) = \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc81\ud835\udc61) (4) Note that this setting also indicates that the inferred \ud835\udc81\ud835\udc61is a hyperaction that can be considered as the parameters of the later item selection module. As a result, the overall recommendation policy follows \ud835\udf0b(\ud835\udc4e\ud835\udc61|\ud835\udc60\ud835\udc61) \u223c\ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc81\ud835\udc61). When generating the effect-action, we select items from the candidate pool according to a scoring function parameterized by \ud835\udc81\ud835\udc61. The scoring function provides a ranking score for each of the candidate items in I, and the final recommendation list is generated through either top-\ud835\udc58selection or categorical sampling. Taking the SASRec model as an example, the scoring function is a dot product between the kernel item embedding and the encoded user state: score(\ud835\udc56|\ud835\udc4d\ud835\udc61) = \u03a6(\ud835\udc56)\u22a4\ud835\udc81\ud835\udc61 (5) Note that the parameters of the item kernel function \u03a6 are not considered as part of the hyper-action \ud835\udc81\ud835\udc61since it is independent of the given user state. In order to engage efficient learning and inference, one can assume that selection of each item is conditionally independent of each other: \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc4d\ud835\udc61) = \u00ce \ud835\udc56\u2208\ud835\udc4e\ud835\udc61\ud835\udc43(\ud835\udc56|\ud835\udc4d\ud835\udc61), similar to the slate decomposition in [21]. 
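Before continuing with the item-selection step, here is a minimal PyTorch sketch of the state encoder and hyper-actor of Eqs. (2)-(3). It substitutes a GRU for the SASRec-style self-attention encoder and uses a fixed noise scale for the reparameterized Gaussian; all names and dimensions are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class HyperActor(nn.Module):
    # State encoder + hyper-action head of Eqs.(2)-(3); a GRU stands in for the
    # SASRec-style self-attention encoder used in the paper.
    def __init__(self, n_items, n_user_feat, dim, sigma=0.1):
        super().__init__()
        self.item_kernel = nn.Embedding(n_items, dim)    # Phi(.)
        self.user_kernel = nn.Linear(n_user_feat, dim)   # maps user features into the same kernel space
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.sigma = sigma                               # std of the hyper-action Gaussian

    def forward(self, history, user_feat):
        # history: (B, T) item ids; user_feat: (B, n_user_feat) dense features
        seq = torch.cat([self.user_kernel(user_feat).unsqueeze(1),
                         self.item_kernel(history)], dim=1)
        _, h = self.encoder(seq)
        state = h[-1]                                        # s_t, Eq.(2)
        z_mean = self.head(state)                            # MLP(s_t), Eq.(3)
        z = z_mean + self.sigma * torch.randn_like(z_mean)   # reparameterized sample
        return state, z
```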
Then, we can define the selection or sampling probability of an item as \ud835\udc43(\ud835\udc56|\ud835\udc4d\ud835\udc61) = softmaxI (score(\ud835\udc56|\ud835\udc4d\ud835\udc61)). 2.5 Shared Critic and The Inverse Module The purpose of the critic is to accurately evaluate the long-term quality of the state-action pair (e.g. Q function in DDPG) or the expected value of the given state (e.g. V function in A2C) so that it can effectively guide the actor learning and the action exploration. Compared to the standard RL framework, the new problem setting allows us to evaluate either the hyper-action with \ud835\udc44(\ud835\udc60\ud835\udc61, \ud835\udc81\ud835\udc61) or the effect-action with \ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) or both. In order to ensure consistent evaluation of the actions from different spaces, we propose to transfer knowledge between \ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) and \ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4d\ud835\udc61) through a shared critic network. As shown in Figure 2, this shared critic is a mapping function \ud835\udc54: S \u00d7 Z \u2192R, that takes the user state \ud835\udc60\ud835\udc61 and the action embedding in the kernel space \ud835\udc4d\ud835\udc61. Or equivalently, \ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4d\ud835\udc61) = \ud835\udc54(\ud835\udc60\ud835\udc61,\ud835\udc4d\ud835\udc61). To evaluate the effect-action, an inverse module \u210eis introduced to infer the hyper-action back: \u02c6 \ud835\udc81\ud835\udc61= \u210e(\ud835\udc4e\ud835\udc61) = pooling(\u03a6(\ud835\udc56)|\ud835\udc56\u2208\ud835\udc4e\ud835\udc61) (6) and the evaluation becomes \ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) = \ud835\udc54(\ud835\udc60\ud835\udc61, \u02c6 \ud835\udc81\ud835\udc61). In practice, we found that the average pooling of the item embedding in the kernel space generates the most stable result, though there are infinitely many latent \ud835\udc81that can generate the same list. Compared to existing solutions that infer the latent action using adjacent states like PGRA [4], we believe that the effect-action in recommendation task has sufficient information to recover the hyper-action. In addition, we use a reconstruction loss to further regulate the consistency between the hyper-action and the effect-action through an alignment loss function. As we will describe in the next section, it ensures that the generated hyper-action \ud835\udc81\ud835\udc61is in the valid region close to the candidate items in the kernel space. 2.6 Overall Learning Framework The overall optimization process is a modified actor-critic learning framework that consists of a critic loss, an actor loss, a hyper-actor loss, and a supervised loss. And a experience replay buffer D will collect the sample records in the form of (\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61,\ud835\udc5f(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61),\ud835\udc60\ud835\udc61+1,\ud835\udc51) where \ud835\udc51is the termination signal indicating whether the user has \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Liu and Cai, et al. Gradient from actor/critic loss TD error Policy Environment Q Maximization Shared Critic Gradient from behavior loss Gradient from hyper-action loss sample sample Forward Shared Critic average pooling Figure 2: Hyper-Actor Critic (HAC) learning framework. \u2299represents the scoring function that selects items from I, left. 
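A short sketch of the two pieces just described: the dot-product scoring and item selection of Eq. (5), and the average-pooling inverse module of Eq. (6). Here `item_kernel` is assumed to be an embedding table playing the role of Φ.

```python
import torch

def select_effect_action(z, item_kernel, k, greedy=True):
    # score(i|z) = Phi(i)^T z over the whole candidate pool, Eq.(5); the slate is
    # either the greedy top-k or a categorical sample without replacement.
    scores = z @ item_kernel.weight.T                     # (B, N)
    if greedy:
        return scores.topk(k, dim=-1).indices             # (B, k) item ids
    probs = torch.softmax(scores, dim=-1)                 # P(i|z), softmax over I
    return torch.multinomial(probs, k, replacement=False)

def inverse_hyper_action(slate, item_kernel):
    # h(a_t), Eq.(6): average-pool the kernel embeddings of the selected items to
    # recover an estimate z_hat of the hyper-action that produced the slate.
    return item_kernel(slate).mean(dim=1)                 # (B, dim)
```

With average pooling, the recovered embedding lives in the same kernel space as the proposed hyper-action, which is what allows a single critic g(s, ·) to score both action spaces.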
The critic loss aims to train an accurate evaluator that captures the patterns in the quality of actions: LTD = ED h (\ud835\udc5f(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) + \ud835\udefe(1 \u2212\ud835\udc51)\ud835\udc44(\ud835\udc60\ud835\udc61+1,\ud835\udc4e\ud835\udc61+1) \u2212\ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61))2i (7) where\ud835\udc4e\ud835\udc61+1 is generated by the recommendation policy using greedy method (equivalent to finding arg max\ud835\udc44(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) when the Q function is accurate). This is a standard TD error and we only calculate Q for the effect-action when learning the critic to ensure the accuracy of evaluation. Note that in DDPG-based methods, each actor and critic is paired with a target network which is used to estimate future state-action pairs \ud835\udc44(\ud835\udc60\ud835\udc61+1,\ud835\udc4e\ud835\udc61+1). These target networks adopt elastic updates from the actor and critic so that their slow optimization can stabilize the learning. Then, with an accurate critic as the evaluator, we can efficiently learn our actor by maximizing the Q-value of the inferred action, which is equivalent to minimizing the following actor loss: LQMax = ED h \ud835\udc44(\ud835\udc94\ud835\udc61, \ud835\udc81\ud835\udc61) i (8) where \ud835\udc81\ud835\udc61is inferred by the hyper-actor as described in section 2.3. Note that the learning of critic and actor uses different action spaces, so we need to align the two spaces to avoid the mode collapse [39] of the generated hyper-action. In our solution, we use the L2 regularizer to ensure this consistency: LHyper = E\ud835\udc37 h \u2225\ud835\udc81\ud835\udc61\u2212\u02c6 \ud835\udc81\ud835\udc61\u22252i (9) where \ud835\udc81\ud835\udc61is produced by the hyper-action based on state \ud835\udc94\ud835\udc61, and \u02c6 \ud835\udc81\ud835\udc61is generated by first greedily select the effect action using \ud835\udc81\ud835\udc61as described in section 2.4 and then reconstruct the hyper-action back using the inverse module as described in section 2.5. Additionally, to better stabilize the RL and exploit the detailed user response signals on each item, we also include a supervised learning objective based on the effect-action. LBCE = E h \u2211\ufe01 \ud835\udc56\u2208\ud835\udc4e\ud835\udc61 \ud835\udc66\ud835\udc61,\ud835\udc56log \ud835\udc43(\ud835\udc56|\ud835\udc81\ud835\udc61) + (1 \u2212\ud835\udc66\ud835\udc61,\ud835\udc56) log(1 \u2212\ud835\udc43(\ud835\udc56|\ud835\udc81\ud835\udc61)) i (10) Algorithm 1 Hyper-Actor Critic Training 1: procedure HAC 2: Initialize all trainable parameters in the actors, critics, and the item kernel function. 3: Initialize replay buffer B. 4: while Not Converged, in each iteration do 5: Apply current policy in running episodes, collect and store samples to B. 6: Sample mini-batch of (\ud835\udc94\ud835\udc61,\ud835\udc4e\ud835\udc61,\ud835\udc5f(\ud835\udc94\ud835\udc61,\ud835\udc4e\ud835\udc61), \ud835\udc94\ud835\udc61+1,\ud835\udc51) \u223cB. 7: Update actor and critic with loss Eq.(8) and Eq.(7). 8: Update actor and the kernel with loss Eq.(9), if any action alignment. 9: Update actor and the kernel with loss Eq.(10), if any supervision. 10: end while which is a binary cross-entropy loss where \ud835\udc66\ud835\udc61,\ud835\udc56is the ground truth user response on the exposed item \ud835\udc56. 
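The four objectives can be gathered into one training step; a hedged sketch follows, assuming a critic callable `critic(s, z)` and leaving out the target-network bookkeeping of Algorithm 1. `z_hat` and `z_hat_next` denote the inverse-pooled embeddings of the taken and greedy next effect-actions.

```python
import torch
import torch.nn.functional as F

def hac_losses(critic, gamma, r, done, s, s_next, z, z_hat, z_hat_next, item_probs, y):
    # z:          hyper-action proposed by the actor for state s
    # z_hat:      inverse-pooled embedding of the effect-action actually taken
    # z_hat_next: inverse-pooled embedding of the greedy next action (from target nets)
    # item_probs: P(i|z) for the k exposed items; y: their binary user responses
    q = critic(s, z_hat)
    with torch.no_grad():
        q_target = r + gamma * (1.0 - done) * critic(s_next, z_hat_next)
    loss_td = F.mse_loss(q, q_target)                     # Eq.(7): critic / TD error
    loss_actor = -critic(s, z).mean()                     # Eq.(8): maximize Q by minimizing -Q
    loss_hyper = ((z - z_hat) ** 2).sum(dim=-1).mean()    # Eq.(9): hyper-action alignment
    loss_bce = F.binary_cross_entropy(item_probs, y)      # Eq.(10): item-wise supervision
    return loss_td, loss_actor, loss_hyper, loss_bce
```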
We remind readers that there are other advanced supervision and regularization methods that can accommodate RL-based models and potentially extend our framework. For example, [14] could supervise the effect-action space for off-policy training, and [49] is well-suited for the distance control on the hyper-action space. As a summary, we present the resulting learning paradigm as algorithm 1. And note that the parameters of the inverse module come from the item kernel, so both Eq.(9) and Eq.(10) update them as in line 8-9. 2.7 Exploration in Hyper-Action Space and Effect-Action Space Readers may notice that the inclusion of latent hyper-action also introduces an extra action sampling step as shown in Figure 2, so the resulting framework allows both the sampling on the hyperactions space (e.g. by adding Gaussian noise) and the sampling on the effect-action space (e.g. categorical sampling of items based \fExploration and Regularization of the Latent Action Space in Recommendation WWW \u201923, May 1\u20135, 2023, Austin, TX, USA 1.0 0.8 0.6 0.8 1.0 1.2 1.4 Optimal Action Sub-Optimal Action (current policy) Efficient Guassian Exploration 0.6 0.8 1.0 1.2 1.4 Q-Contour in effect-action space Q-Contour in hyper-action space Policy Gradient Figure 3: Exploration in Different Action Spaces on ranking scores). Theoretically, this indicates that the sampling probability of effect-actions should be described as the following: \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc94\ud835\udc61) = \u222b \ud835\udc4d\ud835\udc61 \ud835\udc43(\ud835\udc4e\ud835\udc61|\ud835\udc81\ud835\udc61)\ud835\udc43(\ud835\udc81\ud835\udc61|\ud835\udc94\ud835\udc61) (11) When the effect-action generation is deterministic, the exploration only depends on the sampling of \ud835\udc81\ud835\udc61, similar to that in [4]; and if the hyper-actor is deterministic, the exploration only depends on the effect-action sampling as in standard policy gradient methods. Note that the variance \ud835\udf0e2 \ud835\udc4dof the hyper-action controls the uncertainty of the inferred latent action embedding, and it is critical to find an adequate value that can improve the exploration effectiveness in RL. On one hand, giving a variance that is too small will limit the exploration of new actions resulting in sub-optimal results; On the other hand, making the variance too large will induce unstable action exploration that hardly converges. In general, we would like to take advantage of the efficient learning and exploration of the hyper-action space, so it becomes critical to align the distribution of \ud835\udc81\ud835\udc61and the embedded item in the kernel space, as we mentioned in section 2.6. As we showcase in Figure 3, the item kernel function helps increase the expressiveness of the policy by folding the action space where exploration could be more efficient. Though we are skeptical whether there is a guarantee for all RL solutions that explores the latent action space, we will empirically show the effectiveness of encoding both users and items into the same kernel space and regulating the action with the inverse pooling module using Eq.(9) in section 3.4. 
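Reusing the `HyperActor` and `select_effect_action` sketches above, exploration in the two spaces reduces to two knobs: the Gaussian scale σ_Z on the hyper-action and whether the effect-action is taken greedily or sampled categorically. The wrapper below is only illustrative.

```python
def sample_action(actor, item_kernel, history, user_feat, k,
                  hyper_noise=0.1, effect_sampling=False):
    # hyper_noise is sigma_Z: 0.0 gives a deterministic hyper-actor, so exploration
    # then relies entirely on effect-action sampling; effect_sampling=False keeps
    # the greedy top-k selection.
    actor.sigma = hyper_noise
    _, z = actor(history, user_feat)
    return select_effect_action(z, item_kernel, k, greedy=not effect_sampling)
```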
3 EXPERIMENTS 3.1 Experimental Settings 3.1.1 Datasets: We include three public datasets in our experiments: RL4RS1 is a session-based dataset [44] that is first introduced in the BigData Cup 2021 to boost RL research; ML1M2 is the MovieLens data [20] with 1 million records which consists of user\u2019s ratings of movies; KuaiRand1K3 is a recent dataset for sequential short-video recommendation, and we use the 1K version [16] which has irrelevant videos removed. We preprocess all these datasets into a unified format where each record consists of (user features, user 1https://github.com/fuxiAIlab/RL4RS 2https://grouplens.org/datasets/movielens/1m/ 3https://kuairand.com/ Dataset |U| |I| #record \ud835\udc58 RL4RS 283 781,367 9 MovieLens-1M 6400 3706 1,000,208 10 KuaiRand 986 11,643 969,831 10 Table 1: Dataset Summary. \ud835\udc58represents the size of the recommendation list (i.e. the effect-action size). RL4RS dataset provides user profile features instead of user ID so it does not have a count for the user set. history, exposed items, user feedback, and timestamp) in sequential order. The details of this process are summarized in Appendix A.1, and the resulting dataset statistics is provided in Table 1. 3.1.2 Online Environment Simulator. To simulate the online user interaction we train a user response model \u03a8 : S \u00d7 A \u2192R\ud835\udc58 for each of the dataset. The user state is derived based on the static user features and the dynamic history interactions. \u03a8 outputs the probabilities of the user\u2019s positive feedback on each item in recommended \ud835\udc4e\ud835\udc61, and the final response \ud835\udc66\ud835\udc61\u2208{0, 1}\ud835\udc58(e.g. click) is uniformly sampled according to the probabilities. We design the reward \ud835\udc5f(\ud835\udc60\ud835\udc61,\ud835\udc4e\ud835\udc61) \u2208[\u22120.2, 1.0] as the average of the item-wise reward. We provide details of the environment in Appendix A.2. 3.1.3 Models and Baselines. We use SASRec [25] as our backbone actor as described in section 2.3, it consists of a Transformer-based state encoder and hyper-action generator, and the dot-product scorer. We also implemented the following RL baselines using our proposed HAC framework to better showcase how our HAC generalizes existing methods: \u2022 Online SL: the SASRec actor directly learns from immediate user feedback instead of the long-term commutative reward. \u2022 A2C: the synchronized version of the A3C [34] that applies the policy gradient on the effect-action space. \u2022 DDPG: a Deep DPG framework using the hyper-action space for both the actors and critics [30]. This method is equivalent to our HAC model without supervision. \u2022 TD3: improve the DDPG with double Q-learning so the training of critic becomes more stable [15]. \u2022 DDPG-RA: the DDPG framework with the action representation learning as in [4]. This method is closest to our work and it regulates the effect-actions while our HAC model aligns the hyper-actions. To better compare RL and SL methods, we also include the Offline SL that optimizes the policy using Eq.(10) based on the offline data instead of the online environment. The model architectures and specifications of these models are provided in Appendix A.3. 3.1.4 Evaluation. For all datasets, we split them into the first 80% for training and the last 20% for evaluation according to record timestamps. 
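As a concrete reading of the reward design in Section 3.1.2 above, the sketch below samples binary feedback from the response model's probabilities and averages an item-wise reward; the item-level values of 1.0 and -0.2 are an assumption consistent with the stated [-0.2, 1.0] range, with the exact scheme deferred to the paper's appendix.

```python
import numpy as np

def simulate_user_feedback(click_probs, pos_reward=1.0, neg_reward=-0.2):
    # click_probs: per-item positive-feedback probabilities from the response model.
    # Sample binary responses and return the item-averaged list reward, which stays
    # in the stated [-0.2, 1.0] range under the assumed item-level values.
    clicks = (np.random.rand(*click_probs.shape) < click_probs).astype(float)
    item_reward = np.where(clicks > 0, pos_reward, neg_reward)
    return clicks, item_reward.mean(axis=-1)
```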
We then pretrain the online environment on the training set, and pretrain another online environment on the entire dataset for later evaluation. We train our recommendation policies in the first environment and evaluate them in the second. During training, we set the discount of reward as \ud835\udefe= 0.9 and limit the interaction depth to \u226420 for all experiments. We find that most RL-based methods converge and stabilize within 50,000 iterations. For long-term evaluation metrics, we consider the Total Reward \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Liu and Cai, et al. Model RL4RS ML1M KuaiRand Total Reward Depth Total Reward Depth Total Reward Depth Offline SL 6.721 8.163 18.559 18.717 14.394 14.982 Online SL 9.502 10.571 18.629 18.780 13.456 14.147 A2C 7.789 9.140 16.158 16.556 12.460 13.250 DDPG 8.337 9.588 17.205 17.508 11.394 12.313 TD3 8.553 9.791 17.545 17.814 11.777 12.664 PG-RA 8.561 9.728 18.466 18.633 10.859 11.814 HAC 10.059 11.102 18.863 18.988 14.789 15.335 Table 2: Online Performance. The best performances in bold and second best in Underline that represents the summation of the rewards in an entire user session and the Depth represents how many interactions the user, and each session is observed using the simulated online environment that interacts with the learned policies. And for both metrics, a higher value means better performance. To evaluate the stability of the learned policies, we include a Reward Variance metric that estimates how inconsistent a policy would deal with different user states, so a lower value indicates a more stable policy. Note that this metric describes the variance across states not the variance across random seeds. In each experiment, we evaluate all aforementioned metrics and report the average values across different user sessions. 3.2 Effectiveness For all tasks, the goal of the recommender system is to maximize the long-term satisfaction represented by total reward and average depth. For each model, we grid-search the hyper-parameters and pick the setting with the best results to report in Table 2. Main result: We can see that the proposed HAC framework consistently achieves the best performances across all datasets on both long-term metrics: 6% improvement on Rl4RS, 1% on ML1M, and 3% on KuaiRand over the best baselines. This indicates the expressiveness of the proposed hyper-actions and the effectiveness of the learning method. Note that all other RL methods can only achieve better results than offline supervised learning in the RL4RS task, but become worse in ML1M and KuaiRand with larger effectaction spaces. This indicates that our HAC framework can better capture the patterns for larger action spaces. RL-baselines: Among RL solutions, A2C always has the worst performance and appears to be the most unstable learning framework, but the gap between A2C and DDPG tends to be smaller in datasets (ML1M and KuaiRand) with larger action spaces and A2C even achieves better performances than DDPG in KuaiRand with the largest action space. Since A2C directly optimizes the effectaction and DDPG uses the hyper-action, this reduced gap may indicate that it may become harder to learn consistent and accurate hyper-action representations in larger effect-action spaces. This may also proves that ensuring consistency between the two action spaces is critical to achieving effective RL. TD3 slightly improves the performance over DDPG but still behave in a similar way. 
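For reference, the long-term evaluation described above (Total Reward, interaction Depth, and across-session Reward Variance, with sessions capped at 20 steps) can be summarized by a simple rollout loop; a gym-like environment interface is assumed here.

```python
import numpy as np

def evaluate_policy(policy, env, n_sessions=500, max_depth=20):
    # Roll out simulated user sessions and report Total Reward, Depth, and the
    # across-session Reward Variance; env.reset()/env.step() is a gym-like assumption.
    totals, depths = [], []
    for _ in range(n_sessions):
        state, total, depth, done = env.reset(), 0.0, 0, False
        while not done and depth < max_depth:
            state, reward, done = env.step(policy(state))
            total += reward
            depth += 1
        totals.append(total)
        depths.append(depth)
    return float(np.mean(totals)), float(np.mean(depths)), float(np.var(totals))
```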
Action space regularization: In addition to our method, The DDPG-RA method also addresses the alignment of action spaces and has the closest behavior to our method. Differently, it does not regulate the hyper-actions has HAC, instead, it aligns the effect-action Figure 4: Training curves of HAC without supervision on RL4RS. X-axis corresponds to the number of iteration. The four losses are presented in log scales. Performance Reward Variance Figure 5: Effect of supervision loss on KuaiRand. X-axis represents the learning rate of Eq.(10). space that is not directly used in guiding the actor. Additionally, DDPG-RA uses the hyper-action rather than the effect-action when learning critics, so it achieves better results than A2C and DDPG, but does not surpass our method or even supervised methods. 3.3 Learning HAC Supervision and stable learning: To further illustrate the training behavior of the HAC model, we plot the learning curves of HAC in Figure 4 where we compare a \u201cHAC w/o supervision\u201d that has no gradient for the supervised loss Eq.(10). All methods saturate \fExploration and Regularization of the Latent Action Space in Recommendation WWW \u201923, May 1\u20135, 2023, Austin, TX, USA Figure 6: Training curves of HAC on RL4RS. The reward variance correspond to the variance of total reward across different users. The four losses are presented in log scales. \ud835\udf06\u210e represents the magnitude of hyper-action alignment. \ud835\udf06\u210e= 0 does not include this alignment loss so has LHyper = 0. Performance Reward Variance Figure 7: Effect of hyper-action alignment on ML1M. X-axis represents the magnitude of Eq.(9), i.e. \ud835\udf06\u210ein Figure 6. on the performance metric as shown in the penal \u201cTotal Reward\u201d and can successfully reduce all four loss functions mentioned in section 2.6. Note that \u201cHAC w/o supervision\u201d can still successfully reduce the BCE loss on each item, indicating the effectiveness of RL based on the reward that aggregates the item-wise signals. We can also see that including the supervision would help boost the model performance and reduce the actor loss LQMax that helps explore better actions. Note that HAC has a lower critic loss LTD than \u201cHAC w/o supervision\u201d, which indicates a more stable learning process. We can also verify this by observing the variance of the total reward across users since higher reward variance indicates that the learned policy is less capable of providing good actions under different user states. As shown in Figure 5 for the KuaiRand environment, increasing the importance of supervised loss would help improve the recommendation performance and reduce the variance. Yet, assigning the supervision module with a learning rate (0.001 in KuaiRand) that is too large may over-exploit the user\u2019s intention and harm the performance. We observe similar patterns in the other two datasets and provide the results in Appendix B. Hyper-action regulation: We conduct the same ablation experiments when comparing different magnitudes of hyper-action Figure 8: Effect of model components on ML1M. alignment for loss Eq.(9) and plot the learning curves of HAC in Figure 6 In general, including the hyper-action alignment (\ud835\udf06\u210e= 0.1 and \ud835\udf06\u210e= 1) would cause the HAC framework to learn slower than the HAC that only uses supervision (\ud835\udf06\u210e= 0) in terms of the convergence of total reward and reward variance. 
In contrast, the more consistent action space helps the model to learn and explore better action policies. Besides, increasing the importance of this alignment module results in worse LQMax and better LTD, indicating that critic is more accurate in capturing the quality of actions. Note that \ud835\udf06\u210e= 1 is more stable than \ud835\udf06\u210e= 0.1 but may be less effective in exploration. To better verify this We illustrate the evaluation result in Figure 7 where the results exhibit a best point \ud835\udf06\u210e= 0.1 where the recommendation is higher and more stable in reward. 3.4 Ablation Study Model components: To better investigate how different learning modules in our framework work, we compare several alternatives of our method: 1) DDPG: HAC without supervision and action alignment, and it uses hyper-action space for both actor learning and critic learning; 2) HAC w/o LBCE: excluding the supervision of HAC; 3) HAC w/o LHyper: excluding the hyper-action alignment module. We summarize the results in Figure 8 for ML1M dataset. We can see that excluding either the supervision or the alignment module would reduce the performance and increase the reward variance. This indicates that both modules help improve the model\u2019s accuracy and learning stability. Similar results are also observed in other datasets as augmented in Appendix B. Note that DDPG achieves relatively the same performance as HAC w/o LBCE, this indicates that using separating action spaces for actor learning and critic learning as in HAC may reduce the performance and simply including an action alignment module would not fill the gap. In this sense, HAC needs both hyper-action alignment and supervision. This also means that the inconsistency between the two action spaces is smaller than the inconsistency between the aggregated reward and item-wise user response signals. Exploration on different action spaces: In terms of the exploration of HAC model, we can apply exploration on both the hyper-action space and the effect-action space. To compare the effect of different magnitude of hyper-action exploration, we change the variance of the Gaussian noise during learning and fix all other hyperparameters of HAC, and present the results in Figure 9. The comparison of recommendation performance shows an optimal point in the middle of the search space, indicating that one should carefully design the exploration so that the sampling variance is not too small or too large. As we have discussed in section 2.7, small \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Liu and Cai, et al. Performance Reward Variance Figure 9: Effect of HAC\u2019s hyper-action exploration magnitude on RL4RS dataset. X-axis correspond to the Gaussian noise variance for hyper-actions in HAC model. Performance Reward Variance Figure 10: HAC\u2019s effect-action exploration magnitude on RL4RS dataset. X-axis correspond to the rate of greedy top-k selection rather than categorical sampling for effect-actions in HAC model, and we set hyper-action noise to 0. variance may limit the exploration of new actions and large variance may induce unstable exploration that hardly converges. And empirically, we find that sampling on effect-actions is less effective than exploration on hyper-actions. As the example in Figure 10, applying top-\ud835\udc58greedy selection achieves the best result and adding categorical sampling would make the learned policy sub-optimal. 
4 RELATED WORK 4.1 Sequential Recommendation and Session-based Recommendation The session-based recommendation (SBR) problem is closely related to the sequential recommendation (SR) task [45]. An SR task aims to learn a policy that can infer the next recommendation (item or list) based on the given user\u2019s historical interactions. In comparison, the SBR considers the existence of the beginning and termination of an interaction session. We see our setting as somewhere intersects the two notions: by setting the list size to 1, it would be almost identical to SR except for the reward setup; yet, our goal is to optimize the entire future reward of the user session, which is closer to the nextpartial SBR as defined in [45]. Under both problem settings, the most adopted solution uses the Markov Chain assumption to model the dynamic transitions of the user-system interactions. The main challenge of this line of work is how to construct a representative user state based on the long-term history. Early solutions to the recommendation problem adopted the collaborative filtering techniques [12, 27, 29, 36] and later approaches embrace deep neural networks like Recurrent Network [23], Convolution Network [43], Memory Network [10], Self-Attention [25, 40], GCN [5, 46], Machine Reasoning [7, 24, 38] and Foundation Models [19] to improve the model expressiveness, so that it can better capture the abundant and complex information from user/item features and interaction histories. The key insight behind all these methods is to accurately encode the long-range histories, but this paradigm does not optimize the long-term user rewards. 4.2 Reinforcement Learning in Recommendation The RL-based RS [1, 37, 41] also follows the MDP formulation and it emphasizes the importance of optimizing the cumulative reward that represents long-term user satisfaction. In the simplest setting, tabular-based methods [33] are used to optimize an evaluation table but only work for a fixed set of state-action pairs. Then value-based methods [35, 42, 51, 56] and policy gradient methods [6, 8, 17, 18, 28, 47] are proposed to learn to evaluate and optimize the action policy based on the sampled long-term reward. The actor-critic paradigm [48, 53, 54] integrates these two methods by simultaneously learning an action evaluator and an action generator. The main challenges of RL-based RS consist of the large combinatorial state/actions space [13, 21, 31], regulating the unstable learning behavior [2, 9], and finding optimal reward function for heterogeneous user behaviors [3, 11]. Our work focus on the action space representation learning and the stability of RL. And we consider PG-RA [4] as the closest work that also emphasizes the learning of latent action representations. As we have mentioned in section 3.1, PG-RA aims to transfer knowledge through a shared scoring function and applies action alignment on the effect-action space, which is not well suited for the latent-factor decomposition for users and items. We have empirically verified this inferior performance in 3.2. Additionally, we have illustrated the effectiveness of our method through evaluation on different simulated environments, but we remind readers that there is still a chance that the actual online environment is a more complex and dynamic mechanism. In this sense, there are works focusing on building a more realistic online user environment for RS [22, 52, 55] which could complement our work in practice. 
5" + }, + { + "url": "http://arxiv.org/abs/2102.13302v1", + "title": "Variation Control and Evaluation for Generative SlateRecommendations", + "abstract": "Slate recommendation generates a list of items as a whole instead of ranking\neach item individually, so as to better model the intra-list positional biases\nand item relations. In order to deal with the enormous combinatorial space of\nslates, recent work considers a generative solution so that a slate\ndistribution can be directly modeled. However, we observe that such approaches\n-- despite their proved effectiveness in computer vision -- suffer from a\ntrade-off dilemma in recommender systems: when focusing on reconstruction, they\neasily over-fit the data and hardly generate satisfactory recommendations; on\nthe other hand, when focusing on satisfying the user interests, they get\ntrapped in a few items and fail to cover the item variation in slates. In this\npaper, we propose to enhance the accuracy-based evaluation with slate variation\nmetrics to estimate the stochastic behavior of generative models. We illustrate\nthat instead of reaching to one of the two undesirable extreme cases in the\ndilemma, a valid generative solution resides in a narrow \"elbow\" region in\nbetween. And we show that item perturbation can enforce slate variation and\nmitigate the over-concentration of generated slates, which expand the \"elbow\"\nperformance to an easy-to-find region. We further propose to separate a pivot\nselection phase from the generation process so that the model can apply\nperturbation before generation. Empirical results show that this simple\nmodification can provide even better variance with the same level of accuracy\ncompared to post-generation perturbation methods.", + "authors": "Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei, Yongfeng Zhang", + "published": "2021-02-26", + "updated": "2021-02-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "main_content": "INTRODUCTION In most recommender systems, items are naturally exposed to users as a slate, which usually contains a fixed number of items, e.g., a 1-by-5 list of recommended items, or a 2-by-2 block that can fit a mobile phone screen. This leads to the idea of slate recommendation, also known as exact-\ud835\udc58recommendation [18, 42]. The problem is usually formalized as generating a slate of items such that certain expected user behavior (e.g., the number of clicks) is maximized. The challenge of this problem is that the number of possible slates is combinatorially large [44]. For example, for a system with \ud835\udc5bitems, to generate a slate of \ud835\udc58items, the possible number of slates will be \ud835\udc42(\ud835\udc5b\ud835\udc58), which is huge given that many recommender systems work on millions or even billions of items. Traditional ranking-based recommendation models such as learning-to-rank (LTR) [7, 8, 17, 33, 37] first predicts the probability of user engagement on each candidate item, and then selects the topranked ones as the recommendation list. Despite its well-recognized effectiveness and scalability, this ranking and selection process is greedy in essence and neglects the fact that the user behavior on an item may be influenced by other (e.g., complementary or competitive) items exposed in the same list [29, 48], thus resulting in its sub-optimality. 
Furthermore, evidence has shown that one can improve the recommendation performance by taking into account the intra-list item relations in ranking [2, 8, 13, 18, 36, 48]. Recently, researchers have explored the possibility of solving this problem by directly generating the slate as a whole to break the limitation of ranking-based approaches. Many of the approaches are based on generative models such as Variational Auto-Encoders (VAE) [23, 28]. However, these generative models are stochastic in nature and their variational behavior may not produce satisfactory slate recommendations. For example, in the case of VAE-based models, the performance depends on a trade-off coefficient \ud835\udefd[23]\u2014the larger the \ud835\udefd-value during training, the more the model is focused on encoding variation control against the data reconstruction accuracy. In terms of slate recommendation, this phenomenon diverges the generative results into one of the three cases: \u2022 Over-reconstruction: when \ud835\udefdis smaller than some lower threshold \ud835\udefd\u2212, it tends to overfit the slate reconstruction on the training set. Though the resulting generated slates have extremely high variance, the model usually fails to generate satisfactory recommendations. \u2022 Over-concentration: when \ud835\udefdis larger than some upper threshold \ud835\udefd+, the model tends to choose from only a few prototypical slates that achieve satisfactory performance but fails to explore the variety of slates. \u2022 Elbow case: when \ud835\udefdis selected in an appropriate region (i.e., \ud835\udefd\u2208 \u0002 \ud835\udefd\u2212, \ud835\udefd+\u0003 ), it gives intermediate item variety and is arXiv:2102.13302v1 [cs.IR] 26 Feb 2021 \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei, and Yongfeng Zhang able to fulfill certain degree of user interests. We show that this transitional region is the most suitable for slate recommendation task. Unfortunately, this very setting usually lies in a narrow region (e.g. \ud835\udefd+ \u2212\ud835\udefd\u2212\u226a10\u22122) while the search space of \ud835\udefdcan be arbitrarily large. We denote this as the Reconstruction-Concentration Dilemma (RCD) and in this paper we investigate possible solutions that can increase the variety of items under the over-concentration case. To achieve this, one can simply apply post-generation perturbation to enforce item variety, yet this solution ignores the intra-slate features and significantly downgrades the recommendation accuracy. With this in mind, we further derive a modification of the original generation process, so that it can perturb before the final generation while reducing the negative effect of the perturbation. Specifically, when generating a slate, it follows a two-phase procedure: first, a pivot selection model chooses an item for a fixed slate position; then a slate completion model generates the remainder of the slate based on the pivot item along with other constraints. With this framework, we summarize our contributions as below: \u2022 We propose to consider both the slate accuracy metric and the slate variation metric when evaluating models that generate stochastic slates. \u2022 We identify the RCD with these metrics and show that the most desirable recommendation performance appears in a narrow \u201celbow\u201d region. 
\u2022 We conduct experiments on real-world datasets and simulation environments to show that enforcing variation can mitigate over-concentration and extend the elbow\u2019s performance to a wide range of search space. \u2022 We show that the proposed pivot selection phase can provide better control over the slate variation under the overconcentration case of the dilemma. In the following sections, we first list related studies in section 2, then describe how generative slate recommendation is achieved in section 3.1. Further, we explain how to employ variance metrics as complements of accuracy metrics in section 3.2, and then introduce our slate recommendation framework in section 3.3. We present our experimental results on both real-world datasets and simulation environments in section 4 and 5 as the evidence to support our claims. And finally, we discuss some other possible solutions that may also improve the item variety to bridge the gap between generative methods and recommendation systems. 2 RELATED WORK There exist several types of generative modeling approaches to recommender systems. The most studied area is to leverage recurrent neural networks (RNN) [14]. It models the probability of each item conditioned on all previously recommended items \ud835\udc43(\ud835\udc51\ud835\udc56|\ud835\udc51\ud835\udc56\u22121, . . . ,\ud835\udc511) and consecutively make recommendations from \ud835\udc511 to \ud835\udc51\ud835\udc3e. Modeling in this way means that the recommendation of item \ud835\udc51\ud835\udc56does not depend on the items \ud835\udc51\ud835\udc56+1, . . . ,\ud835\udc51\ud835\udc3ethat appear later, which weakens the intra-list relation of the recommendations. This sub-optimality has already been shown in [28]. Another track of research uses auto-encoder for recommendation [32, 38], but they model the user history profiles instead of the distribution of slates. A recent line of research adopts reasoning-based recommendation models [11, 40, 47], which models recommendation as a cognition rather than perception task and adopts neural reasoning rather than neural matching models for better recommendation. In addition to the generative approach represented by [28], there are other efforts that aim to deal with slate recommendation using reinforcement learning (RL) [16, 26, 27, 42]. Like the early attempts [39, 43], this type of methods mostly targets on exploring how to make use of the long term effects of several consecutive recommendations by transforming the slate and its user reaction as \u201cstates\u201d in RL. Though they are suitable for solving the problem of slate recommendation, the essence behind RL and generative methods are mostly complementary, since a generated model can be pretrained and transplanted as the actor in RL frameworks. We can also consider slate recommendation as a type of list recommendation, but the list size is fixed. Except for accuracy measures, there are many list-wise metrics that are proved beneficial to both the recommender systems and its customers, including but not limited to coverage [19] and intra-list diversity [49, 50]. Typically, the solution has to balance between accuracy and diversity, such as Max-Marginal Relevance (MMR) [9], relative benefits [6], \ud835\udefc-NDCG [12], and Determinantal Point Process (DPP) [15]. But as pointed out by Jiang et al. 
[28], it will be unfair to compare these essentially discriminative methods in generative settings, and conversely, it will be unfair for generative methods to compete in traditional LTR settings. In order to show this deviation, we investigate how much discriminative ranking methods are different from generative methods if compared in the same setting in section 5. A relatively unrelated track that considers slate-wise patterns is to re-rank the items based on the expected user interaction on the candidate slate [1, 3, 46]. However, the items available for reranking are often restricted to the candidates given by some base ranking model. Our problem is about directly generating slate recommendations with no restriction on candidate items, which is essentially a different task. One should also distinguish slate recommendation with session-based recommendation [22], which usually consists of user interaction history of arbitrary length, typically with a sequence of sessions, and the major research focus is on the modeling of the user sequential behaviors [14, 41]. 3 GENERATIVE SLATE RECOMMENDATION The corpus of items is denoted as D, and a slate of size \ud835\udc3eis defined as an ordered list of items \ud835\udc94= (\ud835\udc511,\ud835\udc512, . . . ,\ud835\udc51\ud835\udc3e), where \ud835\udc51\ud835\udc58\u2208D and positional index \ud835\udc58\u2208{1, . . . , \ud835\udc3e} represents that the item appeared in the \ud835\udc58-th slot in the slate. A user\u2019s response to a slate \ud835\udc94is denoted as \ud835\udc93= (\ud835\udc5f1,\ud835\udc5f2, . . . ,\ud835\udc5f\ud835\udc3e), where \ud835\udc5f\ud835\udc58is the response on item \ud835\udc51\ud835\udc58, e.g., \ud835\udc5f\ud835\udc58\u2208{0, 1} represents \ud835\udc51\ud835\udc58is clicked or not. Assume that each slate \ud835\udc94has corresponding latent unknown features \ud835\udc9band some known characteristics/constraints \ud835\udc84. Typically, let \ud835\udc84= onehot(\u00cd\ud835\udc3e \ud835\udc58=1 \ud835\udc5f\ud835\udc58) so that the user responses are contained in the constraints. For example, for a slate with 0 click, the corresponding constraint would be [1, 0, 0, 0, 0, 0], while for a slate with 3 clicks, the constraint would be [0, 0, 0, 1, 0, 0]. Unlike discriminative ranking methods that model \ud835\udc45(\ud835\udc93|\ud835\udc94), which is the user response for a given slate, the goal of generative slate recommendation models is to learn the distribution of slates with the given constraints: \ud835\udc43\ud835\udf03(\ud835\udc94|\ud835\udc9b, \ud835\udc84) \fVariation Control and Evaluation for Generative Slate Recommendations WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia where \ud835\udc9bis the latent slate encoding. An optimal slate \ud835\udc94\u2217should maximize the expected number of clicks E[\u00cd\ud835\udc3e \ud835\udc58=1 \ud835\udc5f\ud835\udc58], so during recommendation, one should provide to the inference model with the maximum number of clicks as constraint \ud835\udc84\u2217= [0, 0, 0, 0, 0, 1] (correspond to the ideal all-clicked response \ud835\udc93\u2217= [1, 1, 1, 1, 1]). Different from the setting in [28], we also allow user features, so the constraint vector \ud835\udc84in this case will be the concatenation of extracted user embedding and the aforementioned transformed response. 
As we will discuss in section 5.4, a more fine-grained constraint vector that involves user is more likely to induce a smooth distribution instead of a disjoint manifold in the encoding space \ud835\udc9b. 3.1 Slate Generation Model To find a good generative model \ud835\udc43\ud835\udf03(\ud835\udc94|\ud835\udc9b, \ud835\udc84), a Conditional Variational Auto-Encoder (CVAE) framework learns a set of latent factors \ud835\udc9b\u2208R\ud835\udc5asuch that \ud835\udc9bcan encode sufficient high-level information to reproduce the observed slates with maximum likelihood. As formulated in [30], a variational posterior \ud835\udc44\ud835\udf19(\ud835\udc9b|\ud835\udc94, \ud835\udc84) is used as an approximation to solve the intractable marginal likelihood (which involves integral over latent \ud835\udc9b). The resulting model structure contains an encoder \ud835\udc44\ud835\udf19that learns to encode the input slate \ud835\udc94and constraint \ud835\udc84into a set of variational information (e.g., the mean and variance when Gaussian prior is assumed) of each factor of \ud835\udc9b, and a decoder \ud835\udc43\ud835\udf03, which corresponds to the generative model. When training the model, one can maximize the variational Evidence Lower Bound (ELBO) of the data likelihood [30], which is equivalent to minimizing: L\ud835\udc94= E\ud835\udc44\ud835\udf19(\ud835\udc9b|\ud835\udc94,\ud835\udc84) \u0002 log \ud835\udc43\ud835\udf03(\ud835\udc94|\ud835\udc9b, \ud835\udc84) \u0003 \u2212\ud835\udefdKL \u0002 \ud835\udc44\ud835\udf19(\ud835\udc9b|\ud835\udc94, \ud835\udc84)\u2225\ud835\udc43\ud835\udf03(\ud835\udc9b|\ud835\udc84) \u0003 (1) where \ud835\udc43\ud835\udf03(\ud835\udc9b|\ud835\udc84) is the conditional prior distribution of \ud835\udc9b, KL represents the Kullback-Leibler Divergence (KLD), which restrains the distance measure between two distributions, and \ud835\udefdis the trade-off coefficient as described in section 1. The encoder, decoder, and the conditional prior are all modeled by neural networks to capture complex feature interactions. With the decoder, items of each slate are selected based on the dot product similarity between output embeddings and embeddings of all items in D. During training, in order to avoid overfitting, the reconstruction loss is calculated by the cross entropy over down-sampled items instead of the entire D. At inference time, the slate is generated by passing the ideal condition \ud835\udc84\u2217into the decoder along with a randomly sampled encoding \ud835\udc9b (e.g., from random Gaussian) based on the variational information provided by the conditional prior. In the loss Eq. (1), we can interpret the KL divergence as how well the learned encoding \ud835\udc9bdistribution is regulated by the guiding prior \ud835\udc43\ud835\udf03(\ud835\udc9b|\ud835\udc84), and the other term reveals how well existing slates are reconstructed. According to [23], manipulating the trade-off parameter \ud835\udefdwill push the model to favor one of the terms over the other. For example, if we assume isotropic Gaussian as the prior distribution and set larger \ud835\udefd, the factors in the learned \ud835\udc9bspace will become more disentangled, and thus more meaningful control over the generation, but with a possible downgrade of reconstruction performance resulting in unrealistic generation. 
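A minimal sketch of the training objective in Eq. (1), written as the negative ELBO that is minimized in practice: a reconstruction cross-entropy over candidate items plus β times the KL divergence between the diagonal-Gaussian posterior Q_φ(z|s,c) and the conditional prior P_θ(z|c). The candidate down-sampling described above is omitted for brevity.

```python
import torch.nn.functional as F

def cvae_loss(slate_logits, slate_targets, mu_q, logvar_q, mu_p, logvar_p, beta):
    # Negative ELBO of Eq.(1): reconstruction cross-entropy over candidate items
    # plus beta * KL( Q_phi(z|s,c) || P_theta(z|c) ) for diagonal Gaussians.
    # slate_logits: (B, K, N) scores per position; slate_targets: (B, K) item ids.
    recon = F.cross_entropy(slate_logits.flatten(0, 1), slate_targets.flatten())
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0).sum(dim=-1).mean()
    return recon + beta * kl
```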
Despite its feasibility in many other tasks, as we will discuss in section 5.1, this $\beta$ leads to a reconstruction-concentration trade-off that barely provides satisfactory recommendation results. 3.2 Variance Evaluation of Generated Slates Many generative methods (e.g., VAEs and GANs [20]) are stochastic in terms of their output, but the slate encoding $\mathbf{z}$ may not be obtained through an encoder model, so one cannot simply estimate the slate variance based on $\mathbf{z}$. Thus, we are interested in evaluation metrics that can estimate the variance of slates for a wide range of stochastic models. An evident choice is to directly use the item variance across all possible generated slates. Since items are typically represented by embedding vectors, let $\mathbf{x}_1, \dots, \mathbf{x}_K$ be the vector representations of the generated items. For simplicity, assume conditional independence among the factors of $\mathbf{x}$; then the item variance can be calculated as the variance of each factor and approximated by sampling: $\mathrm{Cov}(\mathbf{x}) = \mathbb{E}_{\mathbf{s} \sim P_\theta}\big[\frac{1}{K}\sum_{i=1}^{K} \|\mathbf{x}_i^{(\mathbf{s})} - \boldsymbol{\mu}\|^2\big] = \lim_{N \to \infty} \frac{1}{NK}\sum_{j=1}^{N}\sum_{i=1}^{K} \|\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}\|^2$ (2), where $N$ is the number of generated slate samples and each slate $\mathbf{s}_j$ is sampled from $P_\theta(\mathbf{s} \mid \cdot)$. Note that $\boldsymbol{\mu}$ is the average of all $NK$ generated items, and it depends on the input constraint. If the generative model is personalized, then the user is included in the input of $P_\theta$; the generation process is first run $N$ times for each user to give a personalized variance estimation, and the estimations are then averaged over all users.
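The sampling-based estimate of Eq. (2) can be written compactly as below (a NumPy sketch with illustrative names; the split into slate-mean and intra-slate components anticipates the decomposition derived next).

```python
import numpy as np

def slate_variance_metrics(sampled_slates):
    """Monte-Carlo estimates of the slate variance quantities of Eq. (2)
    and of its decomposition (Eq. (5) below).

    sampled_slates: (N, K, d) embeddings of the items in N generated slates
                    under the same constraint; for a personalized model this
                    is computed per user and then averaged over users.
    """
    x = np.asarray(sampled_slates, dtype=np.float64)
    n, k, d = x.shape
    mu = x.reshape(n * k, d).mean(axis=0)                # mean of all N*K items
    mu_s = x.mean(axis=1)                                # per-slate mean, shape (N, d)

    total = np.mean(np.sum((x - mu) ** 2, axis=-1))                          # Eq. (2)
    slate_mean_var = np.mean(np.sum((mu_s - mu) ** 2, axis=-1))              # 1st term of Eq. (5)
    intra_slate_var = np.mean(np.sum((x - mu_s[:, None, :]) ** 2, axis=-1))  # 2nd term of Eq. (5)
    # total == slate_mean_var + intra_slate_var (up to floating-point error)
    return total, slate_mean_var, intra_slate_var
```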
Let $\boldsymbol{\mu}(\mathbf{s})$ be the average item of slate $\mathbf{s}$: $\boldsymbol{\mu}(\mathbf{s}) = \frac{1}{K}\sum_{i=1}^{K} \mathbf{x}_i^{(\mathbf{s})}$ (3). Then each slate's variance term in Eq. (2) can be decomposed as $\sum_{i=1}^{K} \|\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}\|^2 = \sum_{i=1}^{K} \|\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}(\mathbf{s}_j) + \boldsymbol{\mu}(\mathbf{s}_j) - \boldsymbol{\mu}\|^2 = \sum_{i=1}^{K} (\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}(\mathbf{s}_j))^\top (\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}(\mathbf{s}_j)) + \sum_{i=1}^{K} (\boldsymbol{\mu}(\mathbf{s}_j) - \boldsymbol{\mu})^\top (\boldsymbol{\mu}(\mathbf{s}_j) - \boldsymbol{\mu}) + 2 (\boldsymbol{\mu}(\mathbf{s}_j) - \boldsymbol{\mu})^\top \sum_{i=1}^{K} (\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}(\mathbf{s}_j))$ (4). Since the last term satisfies $\sum_{i=1}^{K} (\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}(\mathbf{s}_j)) = \mathbf{0}$ (from Eq. (3)), the total item variance simplifies to $\mathrm{Cov}(\mathbf{x}) = \lim_{N \to \infty} \big[\frac{1}{N}\sum_{j=1}^{N} \|\boldsymbol{\mu}(\mathbf{s}_j) - \boldsymbol{\mu}\|^2 + \frac{1}{NK}\sum_{j=1}^{N}\sum_{i=1}^{K} \|\mathbf{x}_i^{(\mathbf{s}_j)} - \boldsymbol{\mu}(\mathbf{s}_j)\|^2\big]$ (5), where the first term describes the slate-mean variance and the second term describes the intra-slate variance. Each of the two terms provides a lower bound for the total item variance, and conversely, the total item variance of Eq. (2) gives an upper bound for either term. A useful conclusion is that a model that does well on one of the two terms in Eq. (5) may not be the one that achieves the best total item variance. On one hand, a model with good intra-slate variance may still produce repeating slates with the same $\boldsymbol{\mu}(\mathbf{s}_j) = \boldsymbol{\mu}$, which results in extremely low slate-mean variance. On the other hand, a model with good coverage of items across slates may still place repeated items inside each slate (in the most extreme case, $\mathbf{x}_i^{(\mathbf{s}_j)} = \boldsymbol{\mu}(\mathbf{s}_j)$ when all items are equal), inducing reduced intra-slate variance. Intuitively, we would like both the slate-mean variance and the intra-slate variance to be sufficiently large in order to support good total variance. Thus, the evaluation protocol should include at least two of the metrics among total item variance, slate-mean variance, and intra-slate variance. 3.3 The Two-Phase Generation Framework We seek to enforce slate variation when the CVAE model provides over-concentrated recommendations (i.e., the large-$\beta$ case of RCD). A straightforward solution is to perturb the generated slate by considering each position as a separate ranking model.
However, this post-generation perturbation is very hard to control and always takes the risk of significant downgrade of recommendation accuracy (detail in Appendix A), due to the large perturbation space and the ignorance of the positional bias and item relations. With this in consideration, we turn to pre-generation perturbation and propose a simple and effective CVAE framework to mitigate the problem. In general, we separate the original generative process into two steps: \ud835\udc43\ud835\udf03(\ud835\udc94|\ud835\udc9b, \ud835\udc84) = \ud835\udc43\ud835\udf03(\ud835\udc511, . . . ,\ud835\udc51\ud835\udc3e|\ud835\udc9b, \ud835\udc84) = \ud835\udc43\ud835\udf03(\ud835\udc512, . . . ,\ud835\udc51\ud835\udc3e|\ud835\udc511, \ud835\udc9b, \ud835\udc84)\ud835\udc43\ud835\udf03(\ud835\udc511|\ud835\udc9b, \ud835\udc84) (6) That is, the framework first uses a pivot selection model \ud835\udc43\ud835\udf03(\ud835\udc511|\ud835\udc9b, \ud835\udc84) to select an adequate pivot item for a fixed slate position (here \ud835\udc511 means we always generate the first appearing item in the slate). Then with this pivot item as additional condition, a slate completion model \ud835\udc43\ud835\udf03(\ud835\udc512, . . . ,\ud835\udc51\ud835\udc3e|\ud835\udc511, \ud835\udc9b, \ud835\udc84) generates the rest of the items for the slate. With this separation, we can avoid RCD by enforcing variation of resulting slates through perturbation in the first stage, and use the second phase to clean up the mess if it has made a bad choice of pivot. As illustrated in Figure 1, the pivot controller is only applied to the generative decoder. Compared to the standard VAE model, little has to be nudged in the encoder \ud835\udc44(\ud835\udc9b|\ud835\udc94, \ud835\udc84) since it already has the potential to encode any intra-slate pattern. Picking Pivot Item for the Slate: \ud835\udc43\ud835\udf03(\ud835\udc511|\ud835\udc9b, \ud835\udc84) will predict an item as the pivot, based on this, the slate completion model will fill in the rest of the slate according to the pivot. In other words, the goal of this part is to find the best item for a certain position in the slate, based on the encoding \ud835\udc9band constraint \ud835\udc84. It first generates an \u201cideal\u201d latent item embedding b \ud835\udc991, and then applies dot product with all item embeddings in \u03a8 to find the closest item as the \ud835\udc511. The minimization of the reconstruction term can be achieved by optimizing the cross entropy with softmax. In practice, we also use down sampling [35] to reduce the computational cost and alleviate over-fitting on the training set. Readers may notice that this part can be treated as a typical ranking model and thus any learning-torank framework is suitable for its training, only that one instead of many items are selected at a time. Similar to sequential modeling, the training of \ud835\udc43\ud835\udf03(\ud835\udc511|\ud835\udc9b, \ud835\udc84) is made independent of the later slate completion model, and in both training and inference, this pivot selection phase allows perturbation which improves the item variation. Yet, perturbation inevitably causes information loss and downgrades the recommendation accuracy. Theoretically, taking the simplest assumption that item interactions are directional and are all binary relations, then there are at most \ud835\udc3e(\ud835\udc3e\u22121) such interactions between items for a slate of size \ud835\udc3e. 
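A compact PyTorch-style sketch of the two-step factorization in Eq. (6) is given below; the module name `PivotDecoder`, the layer sizes, and the argmax/multinomial item matching are illustrative assumptions, and the training losses are omitted.

```python
import torch
import torch.nn as nn

class PivotDecoder(nn.Module):
    """Two-phase generative decoder of Eq. (6): pick a pivot item first,
    then complete the rest of the slate conditioned on it (schematic)."""

    def __init__(self, item_emb, latent_dim, cond_dim, slate_size, hidden=256):
        super().__init__()
        self.item_emb = item_emb                       # (|D|, d) pretrained item embeddings
        d = item_emb.size(1)
        self.pivot_net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(), nn.Linear(hidden, d))
        self.completion_net = nn.Sequential(
            nn.Linear(d + latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, d * (slate_size - 1)))
        self.slate_size = slate_size

    def forward(self, z, c, perturb_pivot=False):
        # Phase 1: P(d_1 | z, c) -- generate an "ideal" latent item embedding and
        # match it to the closest real item via dot-product scores.
        x1_hat = self.pivot_net(torch.cat([z, c], dim=-1))            # (B, d)
        scores = x1_hat @ self.item_emb.t()                           # (B, |D|)
        if perturb_pivot:
            # Pre-generation perturbation: sample the pivot from a multinomial
            # over sigmoid dot-product similarities instead of taking the argmax.
            pivot = torch.multinomial(torch.sigmoid(scores), num_samples=1).squeeze(-1)
        else:
            pivot = scores.argmax(dim=-1)
        # Phase 2: P(d_2, ..., d_K | d_1, z, c) -- fill in the remaining slots.
        x_rest = self.completion_net(torch.cat([self.item_emb[pivot], z, c], dim=-1))
        x_rest = x_rest.view(-1, self.slate_size - 1, self.item_emb.size(1))
        rest = (x_rest @ self.item_emb.t()).argmax(dim=-1)            # (B, K-1)
        return torch.cat([pivot.unsqueeze(-1), rest], dim=-1)         # (B, K)
```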
This separation and the introduction of perturbation mean that our model neglects \ud835\udc3e\u22121 of them (from \ud835\udc3e\u22121 remaining items towards the pivot). Even though, in our experiments, we find that this pre-generation perturbation can improve item variety more significantly with only a minor loss of accuracy compared to post-generation perturbations, which means that the later slate completion model is able to correct the slate according to the perturbed pivot. Additionally, we suggest to pick one pivot instead of more in this phase, since for any 1 < \ud835\udc58\u2032 < \ud835\udc3e(in the binary relation case), when choosing \ud835\udc58\u2032 pivots, the number of missing relations will be (\ud835\udc3e\u2212\ud835\udc58\u2032)\ud835\udc58\u2032 \u2265\ud835\udc3e\u22121, which indicates more loss of information and recommendation accuracy. Slate Completion with a Given Pivot Item: After the selection of the pivot, the goal of the slate completion model \ud835\udc43\ud835\udf03(\ud835\udc512, . . . ,\ud835\udc51\ud835\udc3e|\ud835\udc511, \ud835\udc9b, \ud835\udc84) (7) is to learn to fill up the remaining items that can satisfy the desired constraint \ud835\udc84. A forward pass will take as input the selected pivot b \ud835\udc511, the encoding \ud835\udc9b(which is the output of \ud835\udc44if training, output of the conditional prior \ud835\udc43\ud835\udf03(\ud835\udc9b|\ud835\udc84\u2217) if inference, as in VAE Eq.(1)), and the constraint \ud835\udc84, then output a set of \u201cbest\u201d latent item embeddings b \ud835\udc992, . . . , b \ud835\udc99\ud835\udc3efor each of the remaining slots in the slate. After generating these latent embeddings, it will find for each of the b \ud835\udc99\ud835\udc56the nearest neighbor in the candidate set D through dot product similarity. Similar to that in the pivot selection model, we can again apply cross-entropy loss with softmax and negative sampling during training. Note that this is the final generation stage and it does not employ perturbation. Compared to inference time when the model can only use the inferred b \ud835\udc511 \u223c\ud835\udc5d\ud835\udf03(\ud835\udc511|\ud835\udc9b, \ud835\udc84) from the pivot selection model, during training, there is another valid choice of the pivot the ground truth item in the data. We find that the later choice achieves the same performance but usually exhibits faster convergence. Thus, we adopts the ground truth item\ud835\udc511 for the input of the slate completion model during training in our experiments, and if perturbation, we calculate item similarities based on the ground truth instead of the inferred item embedding. Additionally, when the pivot is perturbed during training, the slate completion model tends to learn a \u201cdenoised\u201d intra-slate patterns which may results in a slate that is more accurate but with less variation, compared to training without perturbation, as we will discuss in section 5.3. 4 EXPERIMENTAL SETTING 4.1 Real-world Datasets We conducted experiments 1 on two real-world datasets. The first is YOOCHOOSE 2 from RecSys 2015 Challenge and we follow the same reprocessing procedure as [28]. The resulting dataset contains around 274K user slate-response pairs. Note that there is no user identifier involved in this dataset, so our second dataset is constructed from the MovieLens 100K3 dataset. 
We split user rating sessions into slates of size 5 and consider a rating of 4-5 as positive feedback (with label 1) and 1-3 as negative feedback (with label 0). (Footnotes: 1 Code link: https://github.com/CharlieFaceButt/PivotCVAE; 2 https://2015.recsyschallenge.com/challenge.html; 3 https://grouplens.org/datasets/movielens/100k/) Figure 1: Structure of the generative framework during training, contrasting the List-CVAE decoder with the Pivot-CVAE decoder (a pivot selection model followed by a slate completion model). $\mathbf{s}$ is the input slate of size $K$, $\mathbf{r}$ is the user response vector of the input slate, $\hat{\mathbf{s}}$ represents the output slate inferred by the decoder, and $\Psi$ and $\Psi^{(u)}$ extract pretrained embeddings for items and users, respectively. The resulting distribution of slate responses (Figure 8 in Appendix C) is similar to that in the Yoochoose dataset. We consider two versions of this dataset, ML (User) and ML (No User), to investigate how the presence of the user affects the generative results. Compared to ML (User), the ML (No User) dataset ignores user IDs like the Yoochoose data. Since both datasets are skewed towards slates with 0 and 3 clicks, we augment the records of 1, 2, 4, and 5 clicks by random repetition until each group has at least half the size of the largest response type. Note that these offline log data have limited feasibility for evaluation since they cannot provide accurate estimations for unseen records. Thus, an additional user response model $R: \mathcal{D}^K \to \{0, 1\}^K$ is trained (with binary cross-entropy loss) to fulfill the role of "ground truth" user feedback. 4.2 Simulation Environment Settings To observe how generative models behave for unseen slates under different environment settings, and to investigate the difference between slate generation metrics and traditional ranking metrics, we employ simulations with plug-ins of positional biases and item interactions, similar to existing works [26, 28]. The primary goal of the simulated environment is to model $R(\mathbf{r} \mid \mathbf{s}, u)$, which predicts the users' true responses given slate $\mathbf{s}$. For each of the simulators described in this section, the final response for each item $d_k$ is sampled from a Bernoulli distribution with click probability $\mathcal{I}(d_k, j)$, which represents user $j$'s interest in $d_k$: $r_k = R(r_{kj} \mid d_k, j) \sim \mathrm{Bernoulli}(\mathcal{I}(d_k, j))$ (8). Thus, the click behavior follows a Poisson binomial distribution, and the expected number of clicks is $\mathbb{E}\big[\sum_{k=1}^{K} r_k\big] = \sum_{d_k \in \mathbf{s}} \mathcal{I}(d_k, j)$ (9). We tune the resulting distribution with a proper setting (details in Appendix D) so that it coincides with that of the real-world datasets. Specifically, each simulation is built on a basic User Response Model (URM), which only considers point-wise user-item responses like the matrix factorization model.
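A minimal NumPy sketch of the Bernoulli response sampling of Eqs. (8)-(9) is shown below; the interest function $\mathcal{I}(d_k, j)$ is left abstract since its exact form (positional bias, item relations) is simulator-specific.

```python
import numpy as np

def simulate_response(interest_scores, rng=None):
    """Sample one user's clicks on a slate per Eq. (8) (schematic).

    interest_scores: length-K array with I(d_k, j), the click probability that
                     the simulator assigns to each slate item for user j.
    """
    rng = rng or np.random.default_rng()
    probs = np.clip(np.asarray(interest_scores, dtype=np.float64), 0.0, 1.0)
    clicks = rng.binomial(1, probs)          # r_k ~ Bernoulli(I(d_k, j))
    expected_clicks = probs.sum()            # Eq. (9): E[sum_k r_k] = sum_k I(d_k, j)
    return clicks, expected_clicks
```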
By adding awareness Table 1: Pivot-CVAE variations Models perturbation of \ud835\udc511 training time inference time Pivot-CVAE (GT-PI) Pivot-CVAE (SGT-PI) \u2713 Pivot-CVAE (GT-SPI) \u2713 Pivot-CVAE (SGT-SPI) \u2713 \u2713 of positional bias and multi-item relations, we obtain URM_P (P stands for positional bias) and URM_P_MR (MR stands for multiitem relations), respectively. The URM_P_MR consists of a coefficient \ud835\udf0cfor the weight of the multi-item relations. As a special case, setting \ud835\udf0c= 0 will tell the simulation to include no item relations and the environment will reduce to URM_P. The details of each simulation environment are given in Appendix D. Simulation Data: We set up three URM_P_MR environments (|D| = 3, 000, |U| = 1, 000) with different values of \ud835\udf0c\u2208{0, 0.5, 5.0}. Note that there is no need to train a response model from the generated dataset like that for real-world datasets. Conversely, we generate a training set of 100,000 slates from each environment. The number of slates for all types of user responses are also balanced similar to that of real-world datasets. The user and item embeddings are assumed explicit and free to use in the training of the recommendation model. Here we expect readers to distinguish these simulations from those used in Reinforcement Learning (RL)based recommendation models, because the generative model does not interact with the simulated environment for rewards during training. In other words, the generative model is training offline and the simulators are only used for evaluation purposes. 4.3 Model Specification We denote our two-step generative process as Pivot-CVAE. For Pivot-CVAE model, perturbation of \ud835\udc511 can be applied either on training phase or inference phase, inducing 4 possible variations: where \u201cGT\u201d represents that the model uses Ground Truth item during training, \u201cPI\u201d represents that the model uses Pivot Item during inference, and \u201cS\u201d means the item applies perturbation. For all perturbation, we adopt sigmoid dot-product between item embeddings as similarity and sample according to multinomial distribution so that it can capture user interests. Baselines: We include the List-CVAE model [28] as an example of VAE and build its non-greedy version (denote as Non-Greedy List-CVAE) that conducts post-generation perturbation. That is, after the generation of the slate, the item \ud835\udc511 (in the same position as the pivot of Pivot-CVAE) is perturbed by sampling from D. Again, we apply sampling based on multinomial distribution of sigmoid dot product similarity. We also include biased MF [31] and NeuMF [21] as representatives of discriminative ranking models. In order to engage generative recommendations that can explore items other than the top items, we extend these discriminative methods into Non-greedy MF/NeuMF by applying the same perturbation method on \ud835\udc511 as that in Non-greedy List-CVAE and Pivot-CVAE. To compare the item variance with intra-slate variance, we include the widely adopted MF-MMR [10] as a representative diversity-aware method. 
It re-ranks the items proposed by the pre-trained biased \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei, and Yongfeng Zhang MF model based on the following modified score: score(\ud835\udc51) = \ud835\udf06sim(\ud835\udc51, \ud835\udc57) + (1 \u2212\ud835\udf06) max \ud835\udc51\ud835\udc56\u2208\ud835\udc94sim(\ud835\udc51\ud835\udc56,\ud835\udc51) where slate \ud835\udc94starts from an empty set and choose the item with the best MMR score in each step until the slate size is \ud835\udc3e. sim(\ud835\udc51, \ud835\udc57) represents the item\u2019s original ranking score given by the base MF model, and sim(\ud835\udc51\ud835\udc56,\ud835\udc51) is the item\u2019s similarity to the \ud835\udc56-th item that has already been added to the list \ud835\udc94. In our experiment, we adopt two-layered network with 256 dimensional hidden size for each encoder, decoder, \ud835\udc43\ud835\udf03(\ud835\udc511|\ud835\udc9b, \ud835\udc84), the slate completion model \ud835\udc43\ud835\udf03(\ud835\udc512, . . . ,\ud835\udc51\ud835\udc3e|\ud835\udc511, \ud835\udc9b, \ud835\udc84), and the MLP component in NeuMF. In terms of the performance of CVAE-based models, we found that it is relatively insignificant to change the width or depth of the encoder and decoder network as long as they are large enough. The user and item embedding size for all datasets and simulations are fixed to 8, and the size of \ud835\udc9bis set to \ud835\udc5a= 16. The slate size is \ud835\udc3e= 5, which means the size of the condition input \ud835\udc84 of CVAE-based model is \ud835\udc3e+ 1 = 6 (without user condition) as described in the first paragraph of section 3. All models are optimized by Adam with stochastic mini-batches (batch size of 64), and we use grid search to find the best learning rate (0.0001 for List-CVAE and Pivot-CVAE, 0.0003 for MF and NeuMF) and weight decay (0.0001 for all models). For MF and NeuMF, we follow the standard LTR paradigm with point-wise binary cross-entropy loss and assign 2 random negative samples of each record to optimize these models until their ranking performance converges in the validation set. For MF-MMR, we use sigmoid dot-product as item similarity and set \ud835\udf06= 0.5. During training of generative models, the softmax function on each slot in a slate is associated with 1000 negative samples for Yoochoose, and 100 negative samples for MovieLens and simulation environments. 4.4 Evaluation Protocol For all datasets, we randomly split them into train, validation, and test sets following the 80-10-10 holdout rule. And we run each experiment five times to obtain the average performances. We consider two major evaluation metrics based on interactive environment \ud835\udc45(\ud835\udc93|\ud835\udc94): slate accuracy and slate variation. And for the illustration of why ranking metrics on test set is invalid for evaluation of generative models, we further include discriminative ranking metrics. 
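For reference, a greedy MMR-style re-ranking loop consistent with the setup above (sigmoid dot-product similarity, lambda = 0.5) could look as follows; note that this sketch uses the standard MMR form in which similarity to already-selected items acts as a penalty, and the candidate-pool handling is an assumption rather than the paper's exact procedure.

```python
import numpy as np

def mmr_rerank(rel_scores, item_embs, candidates, slate_size=5, lam=0.5):
    """Greedy MMR-style re-ranking of a candidate pool (schematic).

    rel_scores: mapping from item id to sim(d, j), the base MF ranking score
                of each candidate for the target user.
    item_embs:  (|D|, d) item embedding matrix; item-item similarity is taken
                as the sigmoid of the dot product, as in the experiments.
    """
    def sim(a, b):
        return 1.0 / (1.0 + np.exp(-np.dot(item_embs[a], item_embs[b])))

    selected, pool = [], list(candidates)
    while len(selected) < slate_size and pool:
        best, best_score = None, -np.inf
        for d in pool:
            redundancy = max((sim(i, d) for i in selected), default=0.0)
            score = lam * rel_scores[d] - (1.0 - lam) * redundancy
            if score > best_score:
                best, best_score = d, score
        selected.append(best)
        pool.remove(best)
    return selected
```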
Slate Accuracy Metric: The primary metric, following the evaluation setting of [28], is the Expected Number of Clicks (ENC) which is calculated as: E \" \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc5f\ud835\udc58 # = \u2211\ufe01 \ud835\udc94\u2208D\ud835\udc3e \ud835\udc43(\ud835\udc94)E \" \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc5f\ud835\udc58|\ud835\udc94 # where \ud835\udc93\ud835\udc58|\ud835\udc94is a random variable modeled by \ud835\udc45(\ud835\udc93|\ud835\udc94), and \ud835\udc43(\ud835\udc94) is the probability of generating \ud835\udc94. Similar to the variation evaluation described in section 3.2, we can approximated this metric by sampling techniques. This metric is exactly the ultimate goal of the optimization and does not involve any test set compared to traditional ranking metrics. For simulation, combining Eq. (9), it becomes: E \" \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc5f\ud835\udc58 # = \u2211\ufe01 \ud835\udc94\u2208D\ud835\udc3e \ud835\udc43(\ud835\udc94) \u2211\ufe01 \ud835\udc51\ud835\udc58\u2208\ud835\udc94 I(\ud835\udc51\ud835\udc58, \ud835\udc57) And for real-world dataset, we train \ud835\udc45(\ud835\udc93|\ud835\udc94) (\ud835\udc45(\ud835\udc93|\ud835\udc94,\ud835\udc62) if user IDs are presented) with point-wise binary cross entropy minimization. Slate Variation: This metric reveals the severance of the \u201cover concentration\u201d in RCD and the generation pitfall of limited slate prototypes. As described in section 3.2, we use total item variance and intra-slate variance metrics in our evaluation. Notably, the variance of \ud835\udc9bdirectly models the slate variance, but it is unique in VAE-based generative models. In order to form comparison with non-VAE models, we use item Coverage [19] as the item variance metric and Intra-List Diversity (ILD) [49, 50] as an approximation of the intra-slate variance. Item coverage estimates the proportion of unique items in D that can appear after several times of generations. Obviously, LTR models are deterministic so will always cover only 5/|D| of the items without perturbation. Intra-list diversity is based on Intra-List Similarity (ILS) [50]: ILD = 1 \u2212ILS(\ud835\udc94) = 1 \u2212 \u2211\ufe01 \ud835\udc51\ud835\udc56\u2208\ud835\udc94 \u2211\ufe01 \ud835\udc51\ud835\udc59\u2208\ud835\udc94 \ud835\udc51\ud835\udc59\u2260\ud835\udc51\ud835\udc56 \ud835\udc54(\ud835\udc97\u22a4 \ud835\udc56\ud835\udc97\ud835\udc59) where the similarity measure \ud835\udc54between \ud835\udc51\ud835\udc56and \ud835\udc51\ud835\udc59in the slate is based on the dot product of their item embeddings. Ranking Metrics: We agree with [28] that it is inadequate to use traditional offline ranking metrics to evaluate generative models, as we will discuss in section D.1, these metrics behave differently on a test set compared to that on a interactive user response environment. Even though, it is still reasonable to compare these metrics among generative models. Specifically, we calculate slate Hit Rate and slate Recall considering each slate as a ranking list. It is considered as a \u201chit\u201d if an item in the ground truth slate with positive feedback is recommended. And the slate recall considers each slate as a user history instead of the combined user history across slates. Note that in Yoochoose and ML, user identifiers are absent, so we assume a universal user for all slates during training. 
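The main metrics above can be estimated from sampled outputs roughly as follows (a NumPy sketch with illustrative names; ILS is computed as the plain sum over ordered item pairs, exactly as written in the definition above).

```python
import numpy as np

def expected_number_of_clicks(click_probs):
    """ENC for a batch of generated slates: click_probs has shape (N, K) and
    holds the environment's click probability for each slot; the metric is
    approximated by averaging the per-slate expected click counts."""
    return np.asarray(click_probs).sum(axis=1).mean()

def item_coverage(generated_slates, num_items):
    """Fraction of the corpus D that appears at least once in the N generated slates."""
    return len(np.unique(np.asarray(generated_slates))) / float(num_items)

def intra_list_diversity(slate_embs):
    """ILD = 1 - ILS for one slate of item embeddings with shape (K, d),
    using the sigmoid of the dot product as the pairwise similarity g."""
    x = np.asarray(slate_embs, dtype=np.float64)
    sims = 1.0 / (1.0 + np.exp(-(x @ x.T)))
    k = x.shape[0]
    ils = sims[~np.eye(k, dtype=bool)].sum()   # sum over ordered pairs i != l
    return 1.0 - ils
```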
In summary, we conduct two types of evaluation: 1) recommendation performance (slate accuracy and variance metric) on \ud835\udc45(\ud835\udc5f|\ud835\udc60) as main evaluation, and 2) ranking metric on the test set. Due to the stochastic nature of generative models (List-CVAE, Pivot-CVAE, and all Non-greedy models), the evaluation of each metric is calculated based on \ud835\udc41sampled outputs (correspond to section 3.2). Note that \ud835\udc41cannot be too small or else it will not provide accurate and stable estimation of the true value. In the meantime, it can neither be too large, otherwise the model would exhibit indistinguishably high item coverage (i.e. it may simply generate all items in D given sufficient number of samples). 5 RESULTS AND DISCUSSIONS 5.1 The Reconstruction-Concentration Dilemma We consider the search space of \ud835\udefd\u2208[0.00001, 30.0] (chosen uniformly on log \ud835\udefdspace) and for each setting we train List-CVAE and all Pivot-CVAE models until convergence of ENC on \ud835\udc45(\ud835\udc93|\ud835\udc94). When evaluation, we generate \ud835\udc41= 500 slates from each trained model and calculate the average as described in section 4.4. In Figure 3, we plot the RCD pattern of List-CVAE on Yoochoose dataset, and we have observed the same pattern in MovieLens 100K and all simulation environments. \fVariation Control and Evaluation for Generative Slate Recommendations WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Figure 2: The slate encoding TSNE plots of List-CVAE on Yoochoose. The first plot correspond to over-reconstruction case, the last corresponds to over concentration case, and the middle plots correspond to the \u201celbow\u201d case. Over Concentration Over Reconstruction Desired Behavior Training Inference Log KLD Reconstruction Loss ENC Slate Variance Figure 3: Training loss behavior (left) and recommendation performance (right) of RCD on the Yoochoose Data. Each point in the left panel represents the average result of slates in one training epoch of a model. Each point in the right panel represents a certain generated slate. Here, we use ILD as slate variation, ENC as accuracy metric. In cases where \ud835\udefdis small, CVAE becomes biased towards learning the reconstruction term of Eq. (1) as illustrated by the yellow dotdashed circle in the left subplot of Figure 3. And because of the subdued regularization from the KL term, the encoding distribution of \ud835\udc9bbecomes less aligned with that of the predefined prior. When setting the prior \ud835\udc43\ud835\udf03(\ud835\udc9b|\ud835\udc84) as isotropic standard Gaussian, we observe that the means of the inferred \ud835\udc9bare often significantly deviated from 0 and the variances var(\ud835\udc9b) are far from 1. Though it successfully learns and generates the slates in the dataset during training, there is no guarantee on the effectiveness of the sampled \ud835\udc9bduring inference. In other words, the distribution of generated slates is close to a random selection on the observed dataset. As shown in the yellow dot-dashed circle in the right subplot of Figure 3, we observe low ENC and high variance during inference. On the contrary, in the over-concentration case where \ud835\udefdis rather large, the KL term plays a more important role in the learning. The slate encoding \ud835\udc9bindeed is more aligned with the prior, ensuring the sampling effectiveness, and consequently able to generate satisfactory slates during inference. 
Yet, it is less capable of encoding information that is necessary to reconstruct the slates. When the model learns that \ud835\udc9bis reluctant to encode corresponding slates, the generator tends to ignore \ud835\udc9band focuses on the condition \ud835\udc84. Since \ud835\udc84alone does not contain any variational information about slates, the model will only be able to output several biased \u201cslate prototypes\u201d (as illustrated in Appendix B, second row of Figure 6). An alternative analysis of the slate encoding \ud835\udc9bof List-CVAE is given in Figure 2. It shows that with large \ud835\udefd, slate encoding becomes disjoint according to the ground truth number of clicks, which means that slate encoding tends to gather around its corresponding prototype given by the prior. This is undesirable since the model cannot infer slates outside the cluster, which results in the lack of variety in recommendations. Besides, we notice that in the training data a lot of repeated clicks appear in the click and/or purchase sessions in Yoochoose data. This makes the RCD problem even worse since the same item is repeatedly recommended even within the same slate, inducing low intra-slate variance. We observed that RCD exists even with \ud835\udefd-annealing [5], disabled condition (reduce CVAE to VAE), and constrained variation (only fix the variation of \ud835\udc67, but not the mean). These observations indicates that RCD problem may exist for a broad range of generative models. 5.2 The Narrow \u201cElbow\u201d of CVAE Though neither of the extremes appears to be a good choice for recommendation, we find that there exists a very narrow region in between, where models can provide feasible outputs. In Figure 4, we show case the results of all metrics on ML(No User) data for generative models across different \ud835\udefd\u2208[0.00001, 10.0]. X-axis represents the setting of \ud835\udefdand note that results for different \ud835\udefds correspond to different models that are separately trained and evaluated. For ENC and ILD metrics, we use box plot to better demonstrate the distribution of generated slates. We summarize three trends of model behavior when increasing the value of \ud835\udefdas follows: \u2022 For model training, the converged reconstruction loss gets worse while the KLD loss gets better; \u2022 When inference, the accuracy measure ENC starts to boost but the variation metric of the generated slates drops; \u2022 \ud835\udc9bstarts to show clustering behavior under the regulating prior and the clusters will become crisper along with the transition as shown in Figure 5. This transitional behavior indicates that models in this intermediate region can to some extent cover the variety of slates in the data while provide moderate accuracy performance. To better show the detailed transitional behavior of the feasible region, we include a more fine-grained search space for \ud835\udefd\u2208[0.001, 0.01] and highlight it with shaded green in Figure 4. 
However, in the experiment of both real-world datasets and all simulation datasets, we found that this transition happens within a very small region (at most 30% of the log \ud835\udefdsearch space or equivalently 2% of the \ud835\udefdsearch space), while the search space in our experiment is \ud835\udefd\u2208[0.00001, 30.0] \fWWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei, and Yongfeng Zhang ENC Coverage ILD HitRate Recall Over-Concentration Over-Reconstruction Elbow Figure 4: Performance on ML (No User) data. \u201clistcvaewithprior\u201d represents the List-CVAE and \u201cng_listcvaewithprior\u201d corresponds to the Non-Greedy List-CVAE. Additionally, we observe that this transitional region consistently gives good test set ranking performance (both hit rate and recall) compared to other choices of \ud835\udefd. The two extreme cases outside the \u201celbow\u201d region do not always reveal a decreasing hit rate and an increasing recall on the test set as in Figure 4, but the best ranking performance usually appears in one of the two sides. Intuitively, the generative model should be able to maximize the likelihood of the test set in addition to the training set. Following the same derivation of Eq.(1), this would require both the ability to reconstruct the slate information and the ability to satisfy the constraint. This can only be observed in this transitional region if the slate variation is not enforced, since the two extremes only possess one of the two characteristics. Note that this ranking performance can only serve as indicators to compare generative models, and it is incomparable between deterministic ranking models and stochastic generative models. As we will discuss in section D.1, the stochastic generation process explores and proposes various good slates in the view of the user \ud835\udc45(\ud835\udc94|\ud835\udc84), and may not necessarily pin-point the data in the test set thus it is typically not favored by this kind of metrics. 5.3 Controlling Slate Variation We present the results of ENC and variance in Table 2. Generative models with \ud835\udefd= 1.0 are chosen as representatives of the large-\ud835\udefd case, since we want to observe the improvement of slate variance when models are over-concentrated. Generative models with small \ud835\udefd(described in section 5.1) and post-perturbation methods that change more than one item cannot provide satisfactory user response, so they are not included in the comparison. We only present results of datasets with user IDs (ML (User) and all simulation environments) so that collaborative filtering models like MF and NeuMF can be compared. The result of each stochastic model (Non-Greedy models, List-CVAE and Pivot-CVAE models) is calculated by the Table 2: Model Performance on User Feedback \ud835\udc45(\ud835\udc5f|\ud835\udc60) of datasets with user IDs. All results are significant (\ud835\udc5d< 0.05) and the overall best are the bold scores while the best among generative models are underlined. 
ML(User) URM_P URM_P_MR (\ud835\udf0c=0.5) URM_P_MR (\ud835\udf0c=5.0) A: Expected Number of Click (ENC) MF 3.246 3.353 3.870 4.961 NeuMF 3.197 3.344 3.810 4.938 MF-MMR 2.400 3.243 3.725 4.617 Non-Greedy MF 2.950 3.315 3.755 4.869 Non-Greedy NeuMF 3.020 3.303 3.730 4.819 List-CVAE 3.579 3.237 3.924 4.971 Non-Greedy List-CVAE 3.285 3.262 3.883 4.777 Pivot-CVAE (SGT-PI) 3.376 3.274 3.934 4.944 Pivot-CVAE (GT-SPI) 3.252 3.226 3.711 4.622 Pivot-CVAE (SGT-SPI) 3.152 3.270 3.816 4.704 B: Item Coverage MF 0.003 0.002 0.002 0.002 NeuMF 0.003 0.002 0.002 0.002 MF-MMR 0.003 0.002 0.002 0.002 Non-Greedy MF 0.142 0.082 0.082 0.080 Non-Greedy NeuMF 0.141 0.082 0.081 0.080 List-CVAE 0.004 0.030 0.011 0.005 Non-Greedy List-CVAE 0.139 0.106 0.088 0.084 Pivot-CVAE (SGT-PI) 0.071 0.065 0.014 0.005 Pivot-CVAE (GT-SPI) 0.250 0.235 0.180 0.227 Pivot-CVAE (SGT-SPI) 0.144 0.097 0.090 0.083 C: Intra-List Diversity (ILD) MF 0.206 0.031 0.035 0.036 NeuMF 0.694 0.300 0.534 0.779 MF-MMR 0.287 0.230 0.193 0.227 Non-Greedy MF 0.545 0.515 0.231 0.126 Non-Greedy NeuMF 0.836 0.576 0.644 0.827 List-CVAE 0.178 0.836 0.407 0.524 Non-Greedy List-CVAE 0.428 0.864 0.572 0.664 Pivot-CVAE (SGT-PI) 0.486 0.869 0.451 0.632 Pivot-CVAE (GT-SPI) 0.725 0.945 0.740 0.814 Pivot-CVAE (SGT-SPI) 0.551 0.856 0.600 0.637 average of all users\u2019 evaluation. Note that when calculating item coverage and diversity, we consider user-wise instead of the systemwise metric for these datasets. The List-CVAE baseline achieves the best ENC on ML(User) and URM_P_MR environments because it is over-concentrated on the optimal slate prototype, and CF models achieves the best ENC on URM_P because of the pointwise environment. All models with item perturbation (Non-greedy List-CVAE, Pivot-CVAE (SGT_PI), Pivot-CVAE (GT_SPI), and Pivot-CVAE (SGT_SPI)) exhibit degraded ENC compared with the original List-CVAE, but significantly improves slate variation (Item Coverage and ILD). Among models using perturbation, the Pivot-CVAE (GT-SPI) model always achieves satisfactory accuracy with the best slate variety. We observe this outstanding performance across all datasets, meaning that sampling the pivot during inference (SPI) will induce more variance and explore more choices of item combinations than sampling \fVariation Control and Evaluation for Generative Slate Recommendations WWW \u201921, April 19\u201323, 2021, Ljubljana, Slovenia User No User #click Figure 5: The slate encoding TSNE plots of List-CVAE on MovieLens datasets. When user identifier is presented, the encoding forms more fine-grained clusters that is no longer disjoint between one another. during training (SGT). Pivot-CVAE (SGT_PI) applies perturbation during training but not inference, this allows the model to give more accurate generation with better ENC, but the improvement of item coverage and ILD becomes limited. Note that it can achieve a similar ILD with Non-Greedy List-CVAE even if there is a huge gap between their item coverages, indicating that SGT_PI seeks to find good slates with sufficient intra-slate variance but tends to be concentrated slate-wise in exchange for good accuracy. When applying perturbation on both training and inference as Pivot-CVAE (SGT-SPI), it has similar performance to Non-Greedy List-CVAE. As shown in Table 2, generative methods consistently outperform MF and NeuMF on variance metrics, and achieves better ENC on all datasets except for URM_P where the environment is pointwise. 
This indicates that the user responses of real-world datasets like ML(User) are closer to URM_P_MR, which contain intra-slate features such as item relations, rather than URM_P. Additionally, Non-Greedy MF/NeuMF can improve the item coverage of these LTR models to the level of Non-Greedy List-CVAE baseline (still worse than Pivot-CVAE (GT-SPI)) and Non-Greedy NeuMF even occasionally achieves better ILD performance than Pivot-CVAE (GT_SPI). However, they achieve this with greater sacrifice on the ENC. On the other hand, MF-MMR is able to increase ILD, but its performance is worse than generative models on all metrics. Moreover, it also shows that a model that improves intra-slate variance does not necessarily improve the total item variance. 5.4 Personalization Improves Variance Different from Yoochoose and MovieLens (No User) Data, the MovieLens (User) and our simulation environments include user ID in the constraints in addition to the ideal response, allowing the model to learn personalized preference of slates. We plot the distribution of \ud835\udc9b(of List-CVAE) in Figure 5 to show their difference in overconcentration case. For generative models trained with large \ud835\udefd, instead of having disjoint slate encoding clusters for each type of user response, the presence of user ID in the constraint will guide the model to learn a set of more fine-grained clusters, each of which corresponds to a user. Note that the same user may have different types of user responses, and a typical user that usually gives a certain type of response also has a higher chance of giving responses of similar types (e.g., a user frequently clicks everything may also frequently click \ud835\udc3e\u22121 items). Consequently, user clusters become closer if they give similar types of response and closer response types become partially mixed with each other because of the common users, thus forming a topologically sorted chain in the space, as shown in the right most panels in the first row of Figure 5. This property will contribute to the total item variation of the overall system across users, but in a personalized view, the concentration of slate still exists. As given in Table 2-A, the user-wise item coverage of List-CVAE is close to that of discriminative ranking models. 6" + } + ], + "Shaoshuai Shi": [ + { + "url": "http://arxiv.org/abs/2209.13508v2", + "title": "Motion Transformer with Global Intention Localization and Local Movement Refinement", + "abstract": "Predicting multimodal future behavior of traffic participants is essential\nfor robotic vehicles to make safe decisions. Existing works explore to directly\npredict future trajectories based on latent features or utilize dense goal\ncandidates to identify agent's destinations, where the former strategy\nconverges slowly since all motion modes are derived from the same feature while\nthe latter strategy has efficiency issue since its performance highly relies on\nthe density of goal candidates. In this paper, we propose Motion TRansformer\n(MTR) framework that models motion prediction as the joint optimization of\nglobal intention localization and local movement refinement. Instead of using\ngoal candidates, MTR incorporates spatial intention priors by adopting a small\nset of learnable motion query pairs. 
Each motion query pair takes charge of\ntrajectory prediction and refinement for a specific motion mode, which\nstabilizes the training process and facilitates better multimodal predictions.\nExperiments show that MTR achieves state-of-the-art performance on both the\nmarginal and joint motion prediction challenges, ranking 1st on the\nleaderboards of Waymo Open Motion Dataset. The source code is available at\nhttps://github.com/sshaoshuai/MTR.", + "authors": "Shaoshuai Shi, Li Jiang, Dengxin Dai, Bernt Schiele", + "published": "2022-09-27", + "updated": "2023-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Motion forecasting is a fundamental task of modern autonomous driving systems. It has been receiving increasing attention in recent years [21, 48, 31, 61, 37] as it is crucial for robotic vehicles to understand driving scenes and make safe decisions. Motion forecasting requires to predict future behaviors of traf\ufb01c participants by jointly considering the observed agent states and road maps, which is challenging due to inherently multimodal behaviors of the agent and complex scene environments. To cover all potential future behaviors of the agent, existing approaches mainly fall into two different lines: the goal-based methods and the direct-regression methods. The goal-based methods [21, 65] adopt dense goal candidates to cover all possible destinations of the agent, predicting the probability of each candidate being a real destination and then completing the full trajectory for each selected candidate. Although these goal candidates alleviate the burden of model optimization by reducing trajectory uncertainty, their density largely affects the performance of these methods: fewer candidates will decrease the performance while more candidates will greatly increase computation and memory cost. Instead of using goal candidates, the direct-regression methods [37, 49] directly predict a set of trajectories based on the encoded agent feature, covering the agent\u2019s future behavior adaptively. Despite the \ufb02exibility in predicting a broad range of agent behaviors, they generally converge slowly as various motion modes are required to be regressed from the same agent feature without utilizing any spatial priors. They also tend to predict the most frequent modes of training data since these frequent modes dominate the optimization of the agent feature. In this paper, we present a uni\ufb01ed framework, namely Motion TRansformer (MTR), which takes the best of both types of methods. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2209.13508v2 [cs.CV] 18 Mar 2023 \fIn our proposed MTR, we adopt a small set of novel motion query pairs to model motion prediction as the joint optimization of two tasks: The \ufb01rst global intention localization task aims to roughly identify agent\u2019s intention for achieving higher ef\ufb01ciency, while the second local movement re\ufb01nement task aims to adaptively re\ufb01ne each intention\u2019s predicted trajectory for achieving better accuracy. Our approach not only stabilizes the training process without depending on dense goal candidates but also enables \ufb02exible and adaptive prediction by enabling local re\ufb01nement for each motion mode. Speci\ufb01cally, each motion query pair consists of two components, i.e., a static intention query and a dynamic searching query. 
The static intention queries are introduced for global intention localization, where we formulate them based on a small set of spatially distributed intention points. Each static intention query is the learnable positional embedding of an intention point for generating trajectory of a speci\ufb01c motion mode, which not only stabilizes the training process by explicitly utilizing different queries for different modes, but also eliminates the dependency on dense goal candidates by requiring each query to take charge of a large region. The dynamic searching queries are utilized for local movement re\ufb01nement, where they are also initialized as the learnable embeddings of the intention points but are responsible for retrieving \ufb01ne-grained local features around each intention point. For this purpose, the dynamic searching queries are dynamically updated according to the predicted trajectories, which can adaptively gather latest trajectory features from a deformable local region for iterative motion re\ufb01nement. These two queries complement each other and have been empirically demonstrated their great effectiveness in predicting multimodal future motion. Besides that, we also propose a dense future prediction module. Existing works generally focus on modeling the agent interaction over past trajectories while ignoring the future trajectories\u2019 interaction. To compensate for such information, we adopt a simple auxiliary regression head to densely predict future trajectory and velocity for each agent, which are encoded as additional future context features to bene\ufb01t future motion prediction of our interested agent. The experiments show that this simple auxiliary task works well and remarkably improves the performance of multimodal motion prediction. Our contributions are three-fold: (1) We propose a novel motion decoder network with a new concept of motion query pair, which adopts two types of queries to model motion prediction as joint optimization of global intention localization and local movement re\ufb01nement. It not only stabilizes the training with mode-speci\ufb01c motion query pairs, but also enables adaptive motion re\ufb01nement by iteratively gathering \ufb01ne-grained trajectory features. (2) We present an auxiliary dense future prediction task to enable the future interactions between our interested agent and other agents. It facilitates our framework to predict more scene-compliant trajectories for the interacting agents. (3) By adopting these techniques, we propose MTR framework that explores transformer encoder-decoder structure for multimodal motion prediction. Our approach achieves state-of-the-art performance on both the marginal and joint motion prediction benchmarks of Waymo Open Motion Dataset (WOMD) [15], outperforming previous best ensemble-free approaches with +8.48% mAP gains for marginal motion prediction and +7.98% mAP gains for joint motion prediction. As of 19 May 2022, our approach ranked 1st on both the marginal and joint motion prediction leaderboards of WOMD. Moreover, our approach with more ensembled variants of MTR also won the champion of Motion Prediction Challenge in Waymo Open Dataset Challenge 2022 [56, 44]. 2 Related Work Motion Prediction for Autonomous Driving. Recently, motion prediction has been extensively studied due to the growing interest in autonomous driving, and it typically takes road map and agent history states as input. 
To encode such scene context, early works [38, 33, 6, 13, 64, 4, 9] typically rasterize them into an image so as to be processed with convolutional neural networks (CNNs). LaneGCN [29] builds a lane graph toscalability capture map topology. VectorNet [17] is widely adopted by recent works [21, 45, 37, 49] due to its ef\ufb01ciency and scalability, where both road maps and agent trajectories are represented as polylines. We also adopt this vector representation, but instead of building global graph of polylines, we propose to adopt transformer encoder on local connected graph, which not only better maintains input locality structure but also is more memory-ef\ufb01cient to enable larger map encoding for long-term motion prediction. Given the encoded scene context features, existing works explore various strategies to model multimodal future motion. Early works [2, 22, 41, 46, 42] propose to generate a set of trajectory samples to approximate the output distribution. Some other works [10, 23, 35, 39, 43] parameterize multimodal predictions with Gaussian Mixture Models (GMMs) to generate compact distribution. HOME series [19, 18] generate trajectories with sampling on a predicted heatmap. IntentNet [8] considers 2 \fTransformer Encoder Transformer Decoder Layer \u2026 Dense Future Prediction \u2026 Dynamic Map Collection GMM Prediction \ud835\udca6\u00d7(2 + 2) \ud835\udca6\u00d71\u00d72 \ud835\udca6\u00d7\ud835\udc37 \ud835\udca6\u00d7(2 + 2) \ud835\udca6\u00d7\ud835\udc47\u00d72 \ud835\udca6\u00d7\ud835\udc37 Motion Query Pair Polyline Encoder Predicted Trajectory Initialization Query Content Initialization Query Updating \u00d7\ud835\udc41 Agent features Map feature Predicted trajectory Trajectory-specific map feature Interested agent Other agent \u2026 Local Graph Query Content Feature Predicted Trajectory Key & Value Query Position Query Content \uff08a\uff09 \uff08b\uff09 \uff08c\uff09 Figure 1: The architecture of MTR framework. (a) indicates the dense future prediction module, which predicts a single trajectory for each agent (e.g., drawn as yellow dashed curves in the above of (a)). (b) indicates the dynamic map collection module, which collects map elements along each predicted trajectory (e.g., drawn as the shadow region along each trajectory in the above part of (b)) to provide trajectory-speci\ufb01c feature for motion decoder network. (c) indicates the motion decoder network, where K is the number of motion query pairs, T is the number of future frames, D is hidden feature dimension and N is the number of transformer decoder layers. The predicted trajectories, motion query pairs, and query content features are the outputs from last decoder layer and will be taken as input to next decoder layer. For the \ufb01rst decoder layer, both two components of motion query pair are initialized as prede\ufb01ned intention points, the predicted trajectories are replaced with the intention points for initial map collection, and query content features are initialized as zeros. intention prediction as a classi\ufb01cation with 8 high level actions, while [31] proposes a region-based training strategy. Goal-based methods [65, 42, 16, 32] are another kinds of models where they \ufb01rst estimate several goal points of the agents and then complete full trajectory for each goal. Recently, the large-scale Waymo Open Motion Dataset (WOMD) [15] is proposed for long-term motion prediction. 
To address this challenge, DenseTNT [21] adopts a goal-based strategy to classify endpoint of trajectory from dense goal points. Other works directly predict the future trajectories based on the encoded agent features [37] or latent anchor embedding [49]. However, the goal-based strategy has the ef\ufb01ciency concern due to a large number of goal candidates, while the directregression strategy converges slowly as the predictions of various motion modes are regressed from the same agent feature. In contrast, our approach adopts a small set of learnable motion query pairs, which not only eliminate the large number of goal candidates but also alleviate the optimization burden by utilizing mode-speci\ufb01c motion query pairs for predicting different motion modes. Some very recent works [47, 25, 24] also achieve top performance on WOMD by exploring Mix-andMatch block [47], a variant of MultiPath++ [25] or heterogeneous graph [24]. However, they generally focus on exploring various structures for encoding scene context, while how to design a better motion decoder for multimodal motion prediction is still underexplored. In contrast, our approach focuses on addressing this challenge with a novel transformer-based motion decoder network. Transformer. Transformer [50] has been widely applied in natural language processing [12, 3] and computer vision [14, 52, 5, 51, 62]. Our approach is inspired by DETR [5] and its follow-up works [67, 34, 60, 27, 30, 11, 63], especially DAB-DETR [30], where the object query is considered as the positional embedding of a spatial anchor box. Motivated by their great success in object detection, we introduce a novel concept of motion query pair to model multimodal motion prediction with prior intention points, where each motion query pair takes charge of predicting a speci\ufb01c motion mode and also enables iterative motion re\ufb01nement by combining with transformer decoders. 3 Motion TRansformer (MTR) We propose Motion TRansformer (MTR), which adopts a novel transformer encoder-decoder structure with iterative motion re\ufb01nement for predicting multimodal future motion. The overall structure is illustrated in Figure 1. In Sec. 3.1, we introduce our encoder network for scene context modeling. In 3 \fSec. 3.2, we present motion decoder network with a novel concept of motion query pair for predicting multimodal trajectories. Finally, in Sec. 3.3, we introduce the optimization process of our framework. 3.1 Transformer Encoder for Scene Context Modeling The future behaviors of the agents highly depend on the agents\u2019 interaction and road map. To encode such scene context, existing approaches have explored various strategies by building global interacting graph [17, 21] or summarizing map features to agent-wise features [37, 49]. We argue that the locality structure is important for encoding scene context, especially for the road map. Hence, we propose a transformer encoder network with local self-attention to better maintain such structure information. Input representation. We follow the vectorized representation [17] to organize both input trajectories and road map as polylines. For the motion prediction of a interested agent, we adopt the agent-centric strategy [65, 21, 49] that normalizes all inputs to the coordinate system centered at this agent. Then, a simple polyline encoder is adopted to encode each polyline as an input token feature for the transformer encoder. 
Speci\ufb01cally, we denote the history state of Na agents as Ain \u2208RNa\u00d7t\u00d7Ca, where t is the number of history frames, Ca is the number of state information (e.g., location, heading angle and velocity), and we pad zeros at the positions of missing frames for trajectories that have less than t frames. The road map is denoted as Min \u2208RNm\u00d7n\u00d7Cm, where Nm is the number of map polylines, n is the number of points in each polyline and Cm is the number of attributes of each point (e.g., location and road type). Both of them are encoded by a PointNet-like [40] polyline encoder as: Ap = \u03c6 (MLP(Ain)) , Mp = \u03c6 (MLP(Min)) , (1) where MLP(\u00b7) is a multilayer perceptron network, and \u03c6 is max-pooling to summarize each polyline features as agent features Ap \u2208RNa\u00d7D and map features Mp \u2208RNm\u00d7D with feature dimension D. Scene context encoding with local transformer encoder. The local structure of scene context is important for motion prediction. For example, the relation of two parallel lanes is important for modelling the motion of changing lanes, but adopting attention on global connected graph equally considers relation of all lanes. In contrast, we introduce such prior knowledge to context encoder by adopting local attention, which better maintains the locality structure and are more memory-ef\ufb01cient. Speci\ufb01cally, the attention module of j-th transformer encoder layer can be formulated as: Gj = MultiHeadAttn \u0000query=Gj\u22121 + PEGj\u22121, key=\u03ba(Gj\u22121) + PE\u03ba(Gj\u22121), value=\u03ba(Gj\u22121) \u0001 , (2) where MultiHeadAttn(\u00b7, \u00b7, \u00b7) is the multi-head attention layer [50], G0 = [Ap, Mp] \u2208R(Na+Nm)\u00d7D concatenating the features of agents and map, and \u03ba(\u00b7) denotes k-nearest neighbor algorithm to \ufb01nd k closest polylines for each query polyline. PE denotes sinusoidal position encoding of input tokens, where we utilize the latest position for each agent and utilize polyline center for each map polyline. Thanks to such local self-attention, our framework can encode a much larger area of scene context. The encoder network \ufb01nally generates both agent features Apast \u2208RNa\u00d7D and map features M \u2208 RNm\u00d7D, which are considered as the scene context inputs of the following decoder network. Dense future prediction for future interactions. Interactions with other agents heavily affect behaviors of our interested agent, and previous works propose to model the multi-agent interactions with hub-host based network [68], dynamic relational reasoning [28], social spatial-temporal network [59], etc. However, most existing works generally focus on learning such interactions over past trajectories while ignoring the interactions of future trajectories. Therefore, considering that the encoded features A have already learned rich context information of all agents, we propose to densely predict both future trajectories and velocities of all agents by adopting a simple regression head on A: S1:T = MLP(Apast), (3) where Si \u2208RNa\u00d74 includes future position and velocity of each agent at time step i, and T is the number of future frames to be predicted. The predicted trajectories S1:T are encoded by adopting the same polyline encoder as Eq. (1) to encode the agents\u2019 future states as features Afuture \u2208RNa\u00d7D, which are then utilized to enhance the above features A by using a feature concatenation and three MLP layers as A = MLP([Apast, Afuture]). 
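A schematic PyTorch sketch of the polyline encoder of Eq. (1) and the dense-future-prediction head of Eq. (3) is given below; hidden sizes, the number of future frames, and module names are placeholders rather than the released MTR implementation.

```python
import torch
import torch.nn as nn

class PolylineEncoder(nn.Module):
    """PointNet-like polyline encoder of Eq. (1): per-point MLP followed by
    max-pooling over the points of each polyline (schematic)."""

    def __init__(self, in_dim, hidden=256, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, polylines):                        # (N, n_points, C)
        return self.mlp(polylines).max(dim=1).values     # (N, D)

class DenseFuturePrediction(nn.Module):
    """Auxiliary head of Eq. (3): regress future position/velocity for every
    agent, re-encode the predicted states, and fuse them back into A."""

    def __init__(self, d_model=256, num_future_frames=80, state_dim=4):
        super().__init__()
        self.reg_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, num_future_frames * state_dim))
        self.future_encoder = PolylineEncoder(state_dim, d_model, d_model)
        self.fusion = nn.Sequential(                     # "three MLP layers"
            nn.Linear(2 * d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.t, self.state_dim = num_future_frames, state_dim

    def forward(self, agent_feats):                      # A_past: (Na, D)
        s_future = self.reg_head(agent_feats).view(-1, self.t, self.state_dim)  # S_1:T
        a_future = self.future_encoder(s_future)                                 # (Na, D)
        a_fused = self.fusion(torch.cat([agent_feats, a_future], dim=-1))        # enhanced A
        return a_fused, s_future
```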
This auxiliary task provides additional future context information to the decoder network, facilitating the model to predict more scene-compliant future trajectories for the interested agent. The experiments in Table 3 demonstrates that this simple and light-weight auxiliary task can effectively improve the performance of multimodal motion prediction. 4 \f3.2 Transformer Decoder with Motion Query Pair Given the scene context features, a transformer-based motion decoder network is adopted for multimodal motion prediction, where we propose motion query pair to model motion prediction as the joint optimization of global intention localization and local movement re\ufb01nement. Each motion query pair contains two types of queries, i.e., static intention query and dynamic searching query, for conducting global intention localization and local movement re\ufb01nement respectively. As shown in Figure 2, our motion decoder network contains stacked transformer decoder layers for iteratively re\ufb01ning the predicted trajectories with motion query pairs. Next, we illustrate the detailed structure. Global intention localization aims to localize agent\u2019s potential motion intentions in an ef\ufb01cient and effective manner. We propose static intention query to narrow down the uncertainty of future trajectory by utilizing different intention queries for different motion modes. Speci\ufb01cally, we generate Multi-Head Attention Multi-Head Self-Attention GMM Prediction \ud835\udc3c: (\ud835\udca6\u00d72) \ud835\udc36!: (\ud835\udca6\u00d7\ud835\udc37) \ud835\udc4c \" !: (\ud835\udca6\u00d72) Sine + MLP Sine + MLP + + Q K V Q K V \ud835\udc3c: (\ud835\udca6\u00d72) \ud835\udc36!#$: (\ud835\udca6\u00d7\ud835\udc37) \ud835\udc4c \" !#$: (\ud835\udca6\u00d72) \ud835\udc4d$:\" !#$: (\ud835\udca6\u00d7\ud835\udc47\u00d76) Motion Query Pair Context Features Position Embedding Static Intention Query Dynamic Searching Query \u00d7\ud835\udc41 Add & Norm Add & Norm Add & Norm FFN Query Content Feature Query Updating Figure 2: The network structure of our motion decoder network with motion query pair. K representative intention points I \u2208RK\u00d72 by adopting k-means clustering algorithm on the endpoints of ground-truth (GT) trajectories, where each intention point represents an implicit motion mode that considers both motion direction and velocity. We model each static intention query as the learnable positional embedding of the intention point as: QI = MLP (PE(I)) , (4) where PE(\u00b7) is the sinusoidal position encoding, and QI \u2208RK\u00d7D. Notably, each intention query takes charge of predicting trajectories for a speci\ufb01c motion mode, which stabilizes the training process and facilitates predicting multimodal trajectories since each motion mode has their own learnable embedding. Thanks to their learnable and adaptive properties, we only need a small number of queries (e.g., 64 queries in our setting) for ef\ufb01cient intention localization, instead of using densely-placed goal candidates [65, 21] to cover the destinations of the agents. Local movement re\ufb01nement aims to complement with global intention localization by iteratively gathering \ufb01ne-grained trajectory features for re\ufb01ning the trajectories. We propose dynamic searching query to adaptively probe trajectory features for each motion mode. 
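The static intention queries of Eq. (4) above can be prototyped with k-means on ground-truth trajectory endpoints plus a sinusoidal position encoding and a small MLP. This is a hedged sketch: the sinusoidal_pe helper, the embedding dimension, and the MLP sizes are our own choices, and the random endpoints stand in for the real training-set statistics.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def sinusoidal_pe(xy, dim=256):
    """One possible realization of PE(.): 2D sinusoidal position encoding."""
    half = dim // 4
    freqs = 10000 ** (-torch.arange(half, dtype=torch.float32) / half)
    vals = xy.unsqueeze(-1) * freqs                       # (K, 2, half)
    pe = torch.cat([vals.sin(), vals.cos()], dim=-1)      # (K, 2, 2*half)
    return pe.flatten(1)                                  # (K, dim)

# 1) Cluster ground-truth trajectory endpoints into K = 64 intention points.
gt_endpoints = np.random.randn(10000, 2) * 30.0           # placeholder GT endpoints (meters)
intention_points = KMeans(n_clusters=64, n_init=10).fit(gt_endpoints).cluster_centers_

# 2) Static intention query: learnable embedding of each point, Q_I = MLP(PE(I)).
I = torch.from_numpy(intention_points).float()            # (64, 2)
query_mlp = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
Q_I = query_mlp(sinusoidal_pe(I))                          # (64, 256)
print(Q_I.shape)
```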
Each dynamic searching query is also the position embedding of a spatial point, which is initialized with its corresponding intention point but will be dynamically updated according to the predicted trajectory in each decoder layer. Speci\ufb01cally, given the predicted future trajectories Y j 1:T = {Y j i \u2208RK\u00d72 | i = 1, \u00b7 \u00b7 \u00b7 , T} in j-th decoder layer, the dynamic searching query of (j + 1)-th decoder layer is updated as follows: Qj+1 S = MLP \u0010 PE(Y j T ) \u0011 . (5) As shown in Figure 3, for each motion query pair, we propose a dynamic map collection module to extract \ufb01ne-grained trajectory features by querying map features from a trajectory-aligned local region, which is implemented by collecting L polylines whose centers are closest to the predicted trajectory. As the agent\u2019s behavior largely depends on road maps, this local movement re\ufb01nement strategy enables to continually focus on latest local context information for iterative motion re\ufb01nement. Attention module with motion query pair. In each decoder layer, static intention query is utilized to propagate information among different motion intentions, while dynamic searching query is utilized to aggregate trajectory-speci\ufb01c features from scene context features. Speci\ufb01cally, we utilize static intention query as the position embedding of self-attention module as follows: Cj sa = MultiHeadAttn(query=Cj\u22121 + QI, key=Cj\u22121 + QI, value=Cj\u22121), (6) where Cj\u22121 \u2208RK\u00d7D is query content features from (j \u22121)-th decoder layer, C0 is initialized to zeros, and Cj sa \u2208RK\u00d7D is the updated query content. Next, we utilize dynamic searching query as query position embedding of cross attention to probe trajectory-speci\ufb01c features from the outputs 5 \fMotion Query 1 Motion Query 2 Motion Query 3 0.3 0.3 0.4 0.5 0.4 0.1 Prediction from layer \ud835\udc57\u22121 Prediction of layer \ud835\udc57 Static Intention Query Dynamic Searching Query Predicted trajectory Trajectory-specific map features Figure 3: The illustration of dynamic map collection module for iterative motion re\ufb01nement. of encoder. Inspired by [34, 30], we concatenate content features and position embedding for both query and key to decouple their contributions to the attention weights. Two cross-attention modules are adopted separately for aggregating features from both agent features A and map features M as: Cj A = MultiHeadAttn(query=[Cj sa, Qj S], key=[A, PEA], value=A), Cj M = MultiHeadAttn(query=[Cj sa, Qj S], key=[\u03b1(M), PE\u03b1(M)], value=\u03b1(M)), (7) Cj = MLP([Cj A, Cj M]) where [\u00b7, \u00b7] indicates feature concatenation, \u03b1(M) is the aforementioned dynamic map collection module to collect L trajectory-aligned map features for motion re\ufb01nement. Note that for simplicity, in Eq. (6) and (7), we omit the residual connection and feed-forward network in transformer layer [50]. Finally, Cj \u2208RK\u00d7D is the updated query content features for each motion query pair in j-th layer. Multimodal motion prediction with Gaussian Mixture Model. For each decoder layer, we append a prediction head to Cj for generating future trajectories. As the behaviors of the agents are highly multimodal, we follow [10, 49] to represent the distribution of predicted trajectories with Gaussian Mixture Model (GMM) at each time step. 
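The searching-query update of Eq. (5) and the dynamic map collection module above can be sketched as follows. The waypoint-to-polyline-center distance used to pick the L closest polylines, and the function names, are our interpretation of the description rather than the authors' code.

```python
import torch

def update_searching_query(pred_traj, pe_fn, mlp):
    """Eq. (5): re-embed each query at the endpoint of its current predicted trajectory.

    pred_traj: (K, T, 2) trajectories predicted by decoder layer j.
    Returns Q_S^{j+1}: (K, D).
    """
    endpoints = pred_traj[:, -1, :]                 # (K, 2)
    return mlp(pe_fn(endpoints))

def dynamic_map_collection(pred_traj, polyline_centers, num_polylines=128):
    """Collect the L map polylines whose centers are closest to the predicted trajectory.

    pred_traj:        (K, T, 2) predicted waypoints per motion query.
    polyline_centers: (N_m, 2) centers of the encoded map polylines.
    Returns indices of shape (K, num_polylines) into the map features M.
    """
    # Distance of every polyline center to every waypoint, then min over waypoints.
    d = torch.cdist(polyline_centers.unsqueeze(0).expand(pred_traj.size(0), -1, -1),
                    pred_traj)                      # (K, N_m, T)
    d_min = d.min(dim=-1).values                    # (K, N_m)
    return d_min.topk(num_polylines, largest=False).indices

# Toy shapes: 64 motion queries, 80 future steps, 768 map polylines.
traj = torch.randn(64, 80, 2)
centers = torch.randn(768, 2)
idx = dynamic_map_collection(traj, centers, num_polylines=128)
print(idx.shape)                                    # torch.Size([64, 128])
```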
Speci\ufb01cally, for each future time step i \u2208{1, \u00b7 \u00b7 \u00b7 , T}, we predict the probability p and parameters (\u00b5x, \u00b5y, \u03c3x, \u03c3y, \u03c1) of each Gaussian component as follows Zj 1:T = MLP(Cj), (8) where Zj i \u2208RK\u00d76 includes K Gaussian components N1:K(\u00b5x, \u03c3x; \u00b5y, \u03c3y; \u03c1) with probability distribution p1:K. The predicted distribution of agent\u2019s position at time step i can be formulated as: P j i (o) = K X k=1 pk \u00b7 Nk(ox \u2212\u00b5x, \u03c3x; oy \u2212\u00b5y, \u03c3y; \u03c1). (9) where P j i (o) is the occurrence probability of the agent at spatial position o \u2208R2. The predicted trajectories Y j 1:T can be generated by simply extracting the predicted centers of Gaussian components. 3.3 Training Losses Our model is trained end-to-end with two training losses. The \ufb01rst auxiliary loss is L1 regression loss to optimize the outputs of Eq. (3). For the second Gaussian regression loss, we adopt negative loglikelihood loss according to Eq. (9) to maximum the likelihood of ground-truth trajectory. Inspired by [10, 49], we adopt a hard-assignment strategy that selects one closest motion query pair as positive Gaussian component for optimization, where the selection is implemented by calculating the distance between each intention point and the endpoint of GT trajectory. The Gaussian regression loss is adopted in each decoder layer, and the \ufb01nal loss is the sum of the auxiliary regression loss and all the Gaussian regression loss with equal loss weights. Please refer to appendix for more loss details. 4 Experiments 4.1 Experimental Setup Dataset and metrics. We evaluate our approach on the large-scale Waymo Open Motion Dataset (WOMD) [15], which mines interesting interactions from real-world traf\ufb01c scenes and is currently 6 \fTable 1: Performance comparison of marginal motion prediction on the validation and test set of Waymo Open Motion Dataset. \u2020: The results are shown in italic for reference since their performance is achieved with model ensemble techniques. We only evaluate our default setting MTR on the test set by submitting to of\ufb01cial test server due to the limitation of submission times of WOMD. Method Reference minADE \u2193 minFDE \u2193 Miss Rate \u2193 mAP \u2191 Test MotionCNN [26] CVPRw 2021 0.7400 1.4936 0.2091 0.2136 ReCoAt [66] CVPRw 2021 0.7703 1.6668 0.2437 0.2711 DenseTNT [21] ICCV 2021 1.0387 1.5514 0.1573 0.3281 SceneTransformer [37] ICLR 2022 0.6117 1.2116 0.1564 0.2788 MTR (Ours) 0.6050 1.2207 0.1351 0.4129 \u2020MultiPath++ [49] ICRA 2022 0.5557 1.1577 0.1340 0.4092 \u2020MTR-Advanced-ens (Ours) 0.5640 1.1344 0.1160 0.4492 Val MTR (Ours) 0.6046 1.2251 0.1366 0.4164 MTR-e2e (Ours) 0.5160 1.0404 0.1234 0.3245 \u2020MTR-ens (Ours) 0.5686 1.1534 0.1240 0.4323 \u2020MTR-Advanced-ens (Ours) 0.5597 1.1299 0.1167 0.4551 Table 2: Performance comparison of joint motion prediction on the interactive validation and test set of Waymo Open Motion Dataset. Method Reference minADE \u2193 minFDE \u2193 Miss Rate \u2193 mAP \u2191 Test Waymo LSTM baseline [15] ICCV 2021 1.9056 5.0278 0.7750 0.0524 HeatIRm4 [36] CVPRw 2021 1.4197 3.2595 0.7224 0.0844 AIR2 [58] CVPRw 2021 1.3165 2.7138 0.6230 0.0963 SceneTransformer [37] ICLR 2022 0.9774 2.1892 0.4942 0.1192 M2I [45] CVPR 2022 1.3506 2.8325 0.5538 0.1239 MTR (Ours) 0.9181 2.0633 0.4411 0.2037 Val MTR (Ours) 0.9132 2.0536 0.4372 0.1992 the most diverse interactive motion dataset. 
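Returning to the GMM output of Eqs. (8)-(9) and the hard-assignment loss above, a simplified single-agent prototype is given below. It assumes a bivariate Gaussian parameterized by (mu_x, mu_y, sigma_x, sigma_y, rho) per mode and time step, with a separate classification term for the mode probability; this is a sketch of the idea, not the authors' exact formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gmm_nll(pred, gt_traj, intention_points):
    """Hard-assigned negative log-likelihood for GMM trajectory outputs.

    pred:             (K, T, 5) per mode/time step: mu_x, mu_y, log_sig_x, log_sig_y, rho_raw
    gt_traj:          (T, 2)    ground-truth future waypoints of the interested agent
    intention_points: (K, 2)    static intention points used for hard assignment
    """
    # Positive mode: the intention point closest to the GT trajectory endpoint.
    k_pos = (intention_points - gt_traj[-1]).norm(dim=-1).argmin()
    mu = pred[k_pos, :, 0:2]
    sig = pred[k_pos, :, 2:4].exp()
    rho = pred[k_pos, :, 4].tanh() * 0.99                       # keep |rho| < 1
    dx = (gt_traj[:, 0] - mu[:, 0]) / sig[:, 0]
    dy = (gt_traj[:, 1] - mu[:, 1]) / sig[:, 1]
    one_m_rho2 = 1.0 - rho ** 2
    log_det = torch.log(2 * math.pi * sig[:, 0] * sig[:, 1] * one_m_rho2.sqrt())
    quad = (dx ** 2 + dy ** 2 - 2 * rho * dx * dy) / (2 * one_m_rho2)
    return (log_det + quad).mean(), k_pos

# Toy usage: an MLP head maps query content features C^j (K, D) to GMM parameters.
K, T, D = 64, 80, 256
head = nn.Sequential(nn.Linear(D, 512), nn.ReLU(), nn.Linear(512, T * 5))
mode_cls = nn.Linear(D, 1)                                      # one probability logit per mode
C = torch.randn(K, D)
params = head(C).view(K, T, 5)
loss_reg, k_pos = gmm_nll(params, torch.randn(T, 2), torch.randn(K, 2))
loss_cls = F.cross_entropy(mode_cls(C).view(1, K), k_pos.unsqueeze(0))
print(float(loss_reg + loss_cls))
```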
There are two tasks in WOMD with separate evaluation metrics: (1) The marginal motion prediction challenge that independently evaluates the predicted motion of each agent (up to 8 agents per scene). (2) The joint motion prediction challenge that needs to predict the joint future positions of 2 interacting agents for evaluation. Both of them provide 1 second of history data and aim to predict 6 marginal or joint trajectories of the agents for 8 seconds into the future. There are totally 487k training scenes, and about 44k validation scenes and 44k testing scenes for each challenge. We utilize the of\ufb01cial evaluation tool to calculate the evaluation metrics, where the mAP and miss rate are the most important ones as in the of\ufb01cial leaderboard[55, 54]. Implementation details. For the context encoding, we stack 6 transformer encoder layers. The road map is represented as multiple polylines, where each polyline contains up to 20 points (about 10m in WOMD). We select Nm = 768 nearest map polylines around the interested agent. The number of neighbors in encoder\u2019s local self-attention is set to 16. The encoder hidden feature dimension is set as D = 256. For the decoder modules, we stack 6 decoder layers. L is set to 128 to collect the closest map polylines from context encoder for motion re\ufb01nement. By default, we utilize 64 motion query pairs where their intention points are generated by conducting k-means clustering algorithm on the training set. To generate 6 future trajectories for evaluation, we use non-maximum suppression (NMS) to select top 6 predictions from 64 predicted trajectories by calculating the distances between their endpoints, and the distance threshold is set as 2.5m. Please refer to Appendix for more details. Training details. Our model is trained in an end-to-end manner by AdamW optimizer with a learning rate of 0.0001 and batch size of 80 scenes. We train the model for 30 epochs with 8 GPUs (NVDIA RTX 8000), and the learning rate is decayed by a factor of 0.5 every 2 epochs from epoch 20. The weight decay is set as 0.01 and we do not use any data augmentation. MTR-e2e for end-to-end motion prediction. We also propose an end-to-end variant of MTR, called MTR-e2e, where only 6 motion query pairs are adopted so as to remove NMS post processing. In the training process, instead of using static intention points for target assignment as in MTR, MTR-e2e selects positive mixture component by calculating the distances between its 6 predicted trajectories and the GT trajectory, since 6 intention points are too sparse to well cover all potential future motions. 4.2 Main Results Performance comparison for marginal motion prediction. Table 1 shows our main results for marginal motion prediction, our MTR outperforms previous ensemble-free approaches [21, 37] with 7 \fTable 3: Effects of different components in MTR framework. All models share the same encoder network. \u201clatent learnable embedding\u201d indicates using 6 latent learnable embeddings as queries of decoder network, and \u201citerative re\ufb01nement\u201d indicates using 6 stacked decoders for motion re\ufb01nement. 
Global Intention Localization Iterative Re\ufb01nement Local Movement Re\ufb01nement Dense Future Prediction minADE \u2193minFDE \u2193Miss Rate \u2193mAP \u2191 Latent learnable embedding \u00d7 \u00d7 \u00d7 0.6829 1.4841 0.2128 0.2633 Static intention query \u00d7 \u00d7 \u00d7 0.7036 1.4651 0.1845 0.3059 Static intention query \u2713 \u00d7 \u00d7 0.6919 1.4217 0.1776 0.3171 Static intention query \u2713 \u2713 \u00d7 0.6833 1.4059 0.1756 0.3234 Static intention query \u2713 \u00d7 \u2713 0.6735 1.3847 0.1706 0.3284 Static intention query \u2713 \u2713 \u2713 0.6697 1.3712 0.1668 0.3437 remarkable margins, increasing the mAP by +8.48% and decreasing the miss rate from 15.64% to 13.51%. In particular, our single-model results of MTR also achieve better mAP than the latest work MultiPath++ [49], where it uses a novel model ensemble strategy that boosts its performance. Table 1 also shows the comparison of MTR variants. MTR-e2e achieves better minADE and minFDE by removing NMS post-processing, while MTR achieves better mAP since it learns explicit meaning of each motion query pair that produces more con\ufb01dent intention predictions. We also propose a simple model ensemble strategy to merge the predictions of MTR and MTR-e2e and utilize NMS to remove redundant predictions (denoted as MTR-ens), and it takes the best of both models and achieves much better mAP. By adopting such ensemble strategy to 7 variants of our framework (e.g., more decoder layers, different number of queries, larger hidden dimension), our advanced ensemble results (denoted as MTR-Advanced-ens) achieve best performance on the test set leaderboard. Performance comparison for joint motion prediction. To evaluate our approach for joint motion prediction, we combine the marginal predictions of two interacting agents into joint prediction as in [7, 15, 45], where we take the top 6 joint predictions from 36 combinations of these two agents. The con\ufb01dence of each combination is the product of marginal probabilities. Table 2 shows that our approach outperforms state-of-the-arts [37, 45] with large margins on all metrics. Particularly, our MTR boosts the mAP from 12.39% to 20.37% and decreases the miss rate from 49.42% to 44.11%. The remarkable performance gains demonstrate the effectiveness of MTR for predicting scene-consistent future trajectories. Besides that, we also provide some qualitative results in Figure 5 to show our predictions in complicated interacting scenarios. As of May 19, 2022, our MTR ranks 1st on the motion prediction leaderboard of WOMD for both two challenges [55, 54]. Our approach with more ensembled variants of MTR (i.e., MTR-Advacnedens) also won the champion of Motion Prediction Challenge in Waymo Open Dataset Challenge 2022 [56, 44]. The signi\ufb01cant improvements manifest the effectiveness of MTR framework. 4.3 Ablation Study We study the effectiveness of each component in MTR. For ef\ufb01ciently conducting ablation experiments, we uniformly sampled 20% frames (about 97k scenes) from the WOMD training set according to their default order, and we empirically \ufb01nd that it has similar distribution with the full training set. All models are evaluated with marginal motion prediction metric on the validation set of WOMD. Effects of the motion decoder network. We study the effectiveness of each component in our decoder network, including global intention localization, iterative re\ufb01nement and local movement re\ufb01nement. 
Table 3 shows that all components contributes remarkably to the \ufb01nal performance in terms of the of\ufb01cial ranking metric mAP. Especially, our proposed static intention queries with intention points achieves much better mAP (i.e., +4.26%) than the latent learnable embeddings thanks to its mode-speci\ufb01c querying strategy, and both the iterative re\ufb01nement and local movement re\ufb01nement strategy continually improve the mAP from 30.59% to 32.34% by aggregating more \ufb01ne-grained trajectory features for motion re\ufb01nement. Effects of dense future prediction. Table 3 shows that our proposed dense future prediction module signi\ufb01cantly improves the quality of predicted trajectories (e.g., +1.78% mAP), which veri\ufb01es that future interactions of the agents\u2019 trajectories are important for motion prediction and our proposed strategy can learn such interactions to predict more reliable trajectories. 8 \f6 16 32 64 100 Number of Motion Query Pairs 0.20 0.22 0.24 0.26 0.28 0.30 0.32 0.34 0.36 0.38 mAP Static Intention Points Predictions 6 16 32 64 100 Number of Motion Query Pairs 0.16 0.18 0.20 0.22 0.24 0.26 0.28 0.30 Miss Rate Static Intention Points Predictions Figure 4: MTR framework with different number of motion query pairs, and two different colored lines demonstrate different strategies for selecting the positive mixture component during training process. Table 4: Effects of local self-attention in transformer encoder. \u201c#polyline\u201d is the number of input map polylines used for context encoding, and a large number of polylines indicate that there is a larger map context around the interested agent. \u201cOOM\u201d indicates running out of memory. Attention #Polyline minADE \u2193minFDE \u2193MR \u2193mAP \u2191 Global 256 0.683 1.4031 0.1717 0.3295 Global 512 0.6783 1.4018 0.1716 0.3280 Global 768 OOM OOM OOM OOM Local 256 0.6724 1.3835 0.1683 0.3372 Local 512 0.6707 1.3749 0.1670 0.3392 Local 768 0.6697 1.3712 0.1668 0.3437 Local 1024 0.6757 1.3782 0.1663 0.3452 2 1 1 2 3 1 2 (a) V2 is passing the intersection to turn left with high speed. Our model predicts multimodal behaviors for V1: turn left or make a U-turn. In any case, V1 is predicted to yield for V2. t+0s t+8s low high (b) P2 is passing the road through the crosswalk while V1 is on the right-turn lane to turn right. Both V1 and V3 are predicted to yield for P2. (c) Our model predicts multimodal behaviors for V1: go straight and turn right, since it still has a distance to the intersection. V2 is predicted to yield for V1 when turning left, since V1 is moving fast towards the intersection. Figure 5: Qualitative results of MTR framework on WOMD. There are two interested agents in each scene (green rectangle), where our model predicts 6 multimodal future trajectories for each of them. For other agents (blue rectangle), a single trajectory is predicted by dense future prediction module. We use gradient color to visualize the trajectory waypoints at different future time step, and trajectory con\ufb01dence is visualized by setting different transparent. Abbreviation: Vehicle (V), Pedestrian (P). Effects of local attention for context encoding. 
Table 4 shows that by taking the same number of map polylines as input, local self-attention in transformer encoder achieves better performance than global attention (i.e., +0.77% mAP for 256 polylines and +1.12% mAP for 512 polylines), which veri\ufb01es that the input local structure is important for motion prediction and introducing such prior knowledge with local attention can bene\ufb01t the performance. More importantly, local attention is more memory-ef\ufb01cient and the performance keeps growing when improving the number of map polylines from 256 to 1,024, while global attention will run out of memory due to its quadratic complexity. Effects of the number of motion query pairs with different training strategies. As mentioned before, during training process, MTR and MTR-e2e adopt two different strategies for assigning positive mixture component, where MTR depends on static intention points (denoted as \u03b1) while MTR-e2e utilizes predicted trajectories (denoted as \u03b2). Figure 4 investigates the effects of the number of motion query pairs under these two strategies, where we have the following observations: (1) When increasing the number of motion query pairs, strategy \u03b1 achieves much better mAP and miss rate than strategy \u03b2. Because intention query points can ensure more stable training process since each intention query points is responsible to a speci\ufb01c motion mode. In contrast, strategy \u03b2 depends on unstable predictions and the positive component may randomly switch among all components, so a large number of motion query pairs are hard to be optimized with strategy \u03b2. (2) The explicit meaning of each intention query point also illustrates the reason that strategy \u03b1 consistently achieves much better mAP than strategy \u03b2, since it can predict trajectories with more con\ufb01dent scores to bene\ufb01t mAP metric. (3) From another side, when decreasing the number of motion query pairs, the miss rate of strategy \u03b1 greatly increases, since a limit number of intention query points can not well cover all potential motions of agents. Conversely, strategy \u03b2 works well for a small number of motion query pairs since its queries are not in charge of speci\ufb01c region and can globally adapt to any region. 9 \f5" + }, + { + "url": "http://arxiv.org/abs/2209.10033v1", + "title": "MTR-A: 1st Place Solution for 2022 Waymo Open Dataset Challenge -- Motion Prediction", + "abstract": "In this report, we present the 1st place solution for motion prediction track\nin 2022 Waymo Open Dataset Challenges. We propose a novel Motion Transformer\nframework for multimodal motion prediction, which introduces a small set of\nnovel motion query pairs for generating better multimodal future trajectories\nby jointly performing the intention localization and iterative motion\nrefinement. A simple model ensemble strategy with non-maximum-suppression is\nadopted to further boost the final performance. Our approach achieves the 1st\nplace on the motion prediction leaderboard of 2022 Waymo Open Dataset\nChallenges, outperforming other methods with remarkable margins. 
Code will be\navailable at https://github.com/sshaoshuai/MTR.", + "authors": "Shaoshuai Shi, Li Jiang, Dengxin Dai, Bernt Schiele", + "published": "2022-09-20", + "updated": "2022-09-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Recently, motion prediction is receiving increasing attention [4, 7, 5, 11, 8, 9, 6] as it is crucial for autonomous vehicles to make safe decisions. It is also a highly challenging task due to its inherently multimodal behaviors of the agent and complex scene environments. To predict accurate future trajectories of the agent, existing approaches mainly follow two different lines. Some approaches [13, 5] adopt the goal-based strategy to localize the agent\u2019s destination with densely sampled goal candidates, which alleviate the burden of model optimization by reducing the trajectory uncertainty. Some other approaches [9, 12] direct predict a set of future trajectories based on the encoded agent feature, which can adaptively cover the agent\u2019s future behavior in a more \ufb02exible manner. However, the goal-based methods suffer from high computation and memory cost since their performance depends on a large number of goal candidates, while the directregression methods generally converge slowly as various motion modes are required to be regressed from the same agent features without any spatial priors. Hence, to address these limitations, we propose a novel framework, namely Motion Transformer (MTR), which takes the best of both worlds. Speci\ufb01cally, our approach adopt a transformer encoderdecoder structure for multimodal motion prediction, where a small set of novel motion query pairs is proposed to model the multimodal future behaviors of the agent. Each motion query pair contains a static intention query and a dynamic searching query, where the static intention query takes charge of predicting the future trajectory for a speci\ufb01c motion mode based on its associated spatial intention point, and the dynamic searching query conducts iterative motion re\ufb01nement by continually aggregating trajectoryspeci\ufb01c features. Thanks to these learnable and modespeci\ufb01c motion query pairs, our framework not only stabilizes the training process by introducing spatial priors based on a small set of intention points, but also enables an adaptive prediction of future trajectory for each motion mode by retrieving their trajectory-speci\ufb01c feature. 2. Method The overall architecture of our approach is shown in Fig. 1, and it consists of a transformer encoder network for scene context encoding and a transformer decoder network for multimodal motion prediction. 2.1. Context Encoding with Transformer Encoders To predict the future behavior of the agent, the \ufb01rst step is to model the interaction of all agents and encode the road environment. For this purpose, we adopt a simple and effective encoder network with stacked transformer encoders. Input representation. We adopt the agent-centric strategy as in [13, 5, 12], where both the agent history trajectories and the road map are normalized to the coordinate system centered at our interested agent. We utilize the vector representation [4] to organize both agent\u2019s history trajectories and the road map as polylines. 
A simple PointNetlike [10] polyline encoder is adopted to encode these two input polyline representations, which produces the agent fea1 arXiv:2209.10033v1 [cs.CV] 20 Sep 2022 \fTransformer Encoder Transformer Decoder Layer \u2026 \u2026 GMM Prediction [\ud835\udc44!, \ud835\udc44\" #] \ud835\udca6\u00d7\ud835\udc47\u00d72 [\ud835\udc44!, \ud835\udc44\" #$%] \ud835\udca6\u00d7\ud835\udc47\u00d72 Motion Query Pair Predicted Trajectories Query Updating \u00d7\ud835\udc41 Input History State of Agents Input Road Map Polylines \ud835\udca6\u00d7\ud835\udc37 \ud835\udca6\u00d7\ud835\udc37 Query Content Feature Polyline Encoder Polyline Encoder \u2026 \u2026 Dynamic Map Collection Figure 1: The architecture of our proposed Motion Transformer framework for multimodal motion prediction. tures A \u2208RNa\u00d7D and the map features M \u2208RNm\u00d7D (Na is the number of agent, Nm is the number of map polylines and D is the feature dimension). Scene context encoding with transformer encoder. Given the encoded polyline features of the agents and the road maps, we adopt a simple encoder network with stacked transformer encoder layers to model the agent interaction and encode the road environment. It takes the agent features A and map features M as input, and a set of self-attention modules are then adopted on A and M to model the interaction of agent and also encode the scene environment features for the following decoder network. 2.2. Multimodal Trajectory Prediction Given the encoded scene context features A and M, a novel transformer-based decoder network is adopted for predicting multimodal future trajectories. Inspired by the concept of object query [1] for object detection, we propose the motion query pair to model motion prediction, which consists of a static intention query and a dynamic searching query, aiming at global intention localization and local movement re\ufb01nement, respectively. Motion query pair for motion prediction. Our motion query pair aims to localize the potential motion intentions of the agents and generate their future trajectories in a modespeci\ufb01c manner. Hence, each motion query pair actually is associated with a speci\ufb01c intention point. Speci\ufb01cally, for each category, we generate K representation intention points (denotaed as I) by using the k-means algorithm on the endpoints of ground-truth trajectories of the training set. Each static intention query is the learnable position embedding of a speci\ufb01c intention point by using a simple multi-layer perceptron (MLP) network, where each static intention query takes charge of predicting future trajectory for a speci\ufb01c motion mode. The static intention query is formulated as QI = MLP(PE(I)), where each static intention query is associated with a \ufb01xed intention point in different decoder layers, and PE(\u00b7) indicates the sinusoidal position encoding. This mode-speci\ufb01c motion prediction greatly stabilizes the training process and also facilitates predicting multimodal trajectories by enabling each motion mode to have their own learnable embedding. To complement with the static intention query for predicting better future trajectories, we further adopt a dynamic searching query for each static intention query, which aims at iteratively re\ufb01ning the predicted trajectory with updated \ufb01ne-grained trajectory feature. 
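As a simplified stand-in for the scene context encoder described above (self-attention over the concatenated agent and map tokens), one could start from the stock nn.TransformerEncoder. The local k-nearest-neighbor attention and the position encodings of the full model are omitted here, so treat this as a rough prototype only.

```python
import torch
import torch.nn as nn

D, num_layers = 256, 6
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=8, dim_feedforward=1024, batch_first=True),
    num_layers=num_layers,
)

A_p = torch.randn(1, 8, D)       # (batch, N_a, D) agent polyline tokens
M_p = torch.randn(1, 768, D)     # (batch, N_m, D) map polyline tokens
tokens = torch.cat([A_p, M_p], dim=1)            # concatenated scene tokens [A_p, M_p]
context = encoder(tokens)                         # (1, N_a + N_m, D)
A, M = context[:, :8], context[:, 8:]             # split back into agent / map features
print(A.shape, M.shape)
```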
Speci\ufb01cally, the dynamic searching query is also initialized as a learnable position embedding of its corresponding intention point, but it will be dynamically updated based on the predicted trajectory in each decoder layer, so as to collect trajectory-speci\ufb01c features for iterative motion re\ufb01nement. Hence, the dynamic searching query of (j + 1)-th layer can be updated as Qj+1 S = MLP(PE(Y j T )), where Y j T is the endpoint of the predicted trajectory in j-th decoder layer, and T is the number of future frames for trajectory prediction. Moveover, we propose a dynamic map collection strategy to extract trajectory-aligned feature based on the predicted trajectory in j-th layer, which is implemented by collecting the closest 128 map polylines along the predicted trajectory (i.e., the map polylines are \ufb01rstly ranked by calculating the smallest distance of its polyline center and all the predicted 80 waypoints of a single trajectory, and then we select the closest 128 map polylines for this trajectory). The collected \ufb01negrained map features are then inputted to the (j + 1)-th decoder layer for re\ufb01ning the predicted trajectory from j-th deocder layer. Note that for the \ufb01rst decoder layer, the dynamic map collection for each dynamic searching query is implemented by collecting map polylines around its intention point. Attention with motion query pair. The transformer decoder takes each motion query pair as the query embedding, and aims to aggregate context information from both the agent features A and the map features M. For each 2 \fTable 1: Top 10 entries on the test leaderboard of motion prediction track of 2022 Waymo Open Dataset Challenge. Our approach is termed as MTR-A, i.e., Motion Transformer Advanced. The Soft mAP is the of\ufb01cial ranking metric while the miss rate is the secondary ranking metric. Method Soft mAP\u2191 mAP \u2191 minADE \u2193 minFDE \u2193 Miss Rate \u2193 MTR-A 1st (Ours) 0.4594 0.4492 0.5640 1.1344 0.1160 golfer 2nd 0.4259 0.4119 0.5533 1.1608 0.1354 HBEns 3rd 0.3797 0.3700 0.6431 1.3405 0.1592 (Null) 0.3777 0.3719 0.6132 1.3218 0.1730 DM 0.3766 0.3710 0.6777 1.3558 0.1646 HDGT(softmAP) 0.3709 0.3577 0.7676 1.1077 0.1325 MAML 0.3445 0.3383 0.6945 1.4652 0.1846 Gnet 0.3367 0.3213 0.6255 1.2432 0.1740 prenet 0.3319 0.3168 0.6063 1.2415 0.1678 HDGT 0.3246 0.2826 0.5703 1.1434 0.1440 Table 2: Per-class performance of our approach on the validation set of Waymo Open Motion Dataset. Setting Category mAP \u2191 minADE \u2193 minFDE \u2193 Miss Rate \u2193 MTR Vehicle 0.4620 0.7559 1.5229 0.1541 Pedestrian 0.429 0.3341 0.6881 0.0706 Cyclist 0.3647 0.7037 1.4119 0.1802 Avg 0.4186 0.5979 1.2076 0.1350 MTR (Ensemble) Vehicle 0.4911 0.6676 1.3331 0.1200 Pedestrian 0.4550 0.3397 0.7077 0.0674 Cyclist 0.4191 0.6718 1.3489 0.1627 Avg 0.4551 0.5597 1.1299 0.1167 transformer decoder layer, we \ufb01rst utilize the static intention query to propagate information among different motion intentions by adopting the self-attention module, which generates the query content features for the following cross attention module. Then, the dynamic searching query is considered as the query embedding for cross attention module, and two separate cross attention modules are adopted for aggregating information from A and M, respectively. These two aggregated features are concatenated as the queried features for each motion query pair, aiming at predicting future trajectory for its corresponding motion mode. Motion prediction head with GMM. 
Given the queried feature for each motion query pair in each layer, we attach a simple prediction head with several MLP layers for predicting the future trajectory according to each queried features. We follow [2, 12] to model the multimodal future motion with Gaussian Mixture Model (GMM) at each time step, where we predict a probability p and GMM parameter N(\u00b5x, \u03c3x; \u00b5y, \u03c3y; \u03c1) for each motion query pair at each future time step. The whole framework is optimized by adopting the negative log-likelihood loss to maximum the likelihood of ground-truth trajectory in each decoder layer. Inspired by the hard-assignment strategy [2, 12], we also select a positive Gaussian component from the predicted GMMs of K motion query pairs, where the selection is based on calculating the distance between each intention point and the endpoint of GT trajectory. The loss at each time step can be formulated as: LG = \u2212log N( \u02c6 Yx \u2212\u00b5x, \u03c3x; \u02c6 Yy \u2212\u00b5y, \u03c3y; \u03c1) (1) where ( \u02c6 Yx, \u02c6 Yy) is a waypoint of the selected ground-truth trajectory at this time step. The \ufb01nal loss is calculated by equally summing the loss of each decoder layer. 2.3. Model Ensemble In order to further boost the performance of our framework, we adopt a model ensemble strategy to combine the results from multiple variants of our framework. Speci\ufb01cally, given Ne well-trained models, we \ufb01rst collect 6 predicted future trajectories from each model, which results in 6Ne multimodal future trajectories for each of our interested agent. Each trajectory has their own predicted con\ufb01dence from their original model. We then select top 6 future trajectories by adopting non-maximum-suppression (NMS) on the endpoints of these predicted trajectories, where the distance threshold \u03b4 is scaled along with the length L of the trajectory that has the highest con\ufb01dence among 6Ne predictions, as follows: \u03b4 = min \u0012 3.5, max \u0012 2.5, L \u221210 50 \u221210 \u00d7 1.5 + 2.5 \u0013\u0013 . (2) This simple model ensemble strategy facilitates taking the best predictions from multiple models, leading to better prediction of multimodal future trajectories. Note that our proposed MTR with this model ensemble strategy is denoted as MTR-A in the following experiment section. 3 \f3. Experiments 3.1. Implementation Details Architecture details. In the default setting of our model, we adopt 6 transformer encoder layers for the context encoding and 6 transformer decoder layers for generating the multimodal future trajectories. The hidden feature dimension is set to 512 to get a large model capacity for such a large-scale Waymo Open Motion Dataset (WOMD) [3]. For the context encoding, the road map is represented as polylines, where each polyline contains up to 20 map points (about 10m in WOMD). For the prediction head, a threelayer MLP head is adopted with feature dimention 512. We do not use any traf\ufb01c light data in our model. For each category, we adopt 64 motion query pairs based on 64 intention points that are generated by k-means clustering algorithm on the training set. During testing, we adopt NMS with distance threshold 2.5m to select top 6 predictions from 64 predicted trajectories. Training details. Our model is trained end-to-end by AdamW optimizer with a learning rate of 0.0001 and batch size of 80 scenes. All models are trained with 60 epochs, and we decay the learning rate by a factor of 0.5 every 5 epochs from epoch 30. 
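The trajectory-level NMS used in the model ensemble above, with the length-adaptive distance threshold of Eq. (2), can be prototyped as below. The greedy suppression on endpoint distances and the way the trajectory length is measured (sum of segment lengths) are our interpretation of the described procedure.

```python
import numpy as np

def adaptive_threshold(traj_len):
    """Eq. (2): scale the NMS threshold with the length of the top-scoring trajectory."""
    return min(3.5, max(2.5, (traj_len - 10.0) / (50.0 - 10.0) * 1.5 + 2.5))

def trajectory_nms(trajs, scores, num_keep=6):
    """Greedy NMS on trajectory endpoints.

    trajs:  (N, T, 2) merged predictions from all ensembled models.
    scores: (N,) confidence of each prediction.
    """
    order = np.argsort(-scores)
    top_len = np.linalg.norm(np.diff(trajs[order[0]], axis=0), axis=-1).sum()
    thresh = adaptive_threshold(top_len)
    keep = []
    for i in order:
        end_i = trajs[i, -1]
        if all(np.linalg.norm(end_i - trajs[j, -1]) > thresh for j in keep):
            keep.append(i)
        if len(keep) == num_keep:
            break
    return keep, thresh

trajs = np.random.randn(7 * 6, 80, 2) * 20       # e.g., 6 trajectories from each of 7 models
scores = np.random.rand(7 * 6)
keep, thr = trajectory_nms(trajs, scores)
print(keep, round(thr, 2))
```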
The weight decay is set as 0.01 and we do not use any data augmentation. We utilize a single model to generate future trajectories for all three categories. Model ensemble details. We trained 7 variants of our model for conducting the model ensemble, where the variables includes the number of decoder layers (e.g., 6, 9), the number of motion queries (e.g., 6, 64, 100), and the hidden feature dimension (e.g., 256, 512). As mentioned in Sec. 2.3, the predicted results of 7 models are \ufb01nally combined with NMS to generate the \ufb01nal results. 3.2. Main Results Table 1 shows the top 10 entries of the \ufb01nal leaderboard of 2022 Waymo Open Dataset Motion Prediction challenge. Our approach ranked 1st place on the leaderbaord, and surpasses all other submissions with remarkable margins in terms of Soft mAP, mAP and the miss rate, which demonstrates that our approach can predict better multimodal future trajectories. Besides that, as shown in Table 2, we also report the per-class performance of our single-model results and the model ensemble results for reference. 4." + }, + { + "url": "http://arxiv.org/abs/2008.12599v1", + "title": "PV-RCNN: The Top-Performing LiDAR-only Solutions for 3D Detection / 3D Tracking / Domain Adaptation of Waymo Open Dataset Challenges", + "abstract": "In this technical report, we present the top-performing LiDAR-only solutions\nfor 3D detection, 3D tracking and domain adaptation three tracks in Waymo Open\nDataset Challenges 2020. Our solutions for the competition are built upon our\nrecent proposed PV-RCNN 3D object detection framework. Several variants of our\nPV-RCNN are explored, including temporal information incorporation, dynamic\nvoxelization, adaptive training sample selection, classification with RoI\nfeatures, etc. A simple model ensemble strategy with non-maximum-suppression\nand box voting is adopted to generate the final results. By using only LiDAR\npoint cloud data, our models finally achieve the 1st place among all LiDAR-only\nmethods, and the 2nd place among all multi-modal methods, on the 3D Detection,\n3D Tracking and Domain Adaptation three tracks of Waymo Open Dataset\nChallenges. Our solutions will be available at\nhttps://github.com/open-mmlab/OpenPCDet", + "authors": "Shaoshuai Shi, Chaoxu Guo, Jihan Yang, Hongsheng Li", + "published": "2020-08-28", + "updated": "2020-08-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The Waymo Open Dataset Challenges at CVPR\u201920 are the highly competitive competition with the largest LiDAR point cloud dataset for autonomous driving. We mainly focus on the 3D detection track, which requires to localize and classify the surrounding objects of ego-vehicle in the 3D LiDAR point cloud scenes. With our proposed powerful 3D detector, we not only achieve 1st place on the 3D detection track among all LiDAR-only methods [1], but also achieve the top performance on both the 3D tracking track and the domain adaptation track [2, 3]. 2. PV-RCNN: Solution to 3D Detection from Point Cloud 3D detection with LiDAR point cloud is challenging due to its sparsity and irregular format. Previous methods generally either transform the point cloud to regular voxels for processing with regular convolution [15, 12, 9], or directly estimate 3D bounding boxes with PointNet [5, 6] from raw point cloud [5, 8]. 
Actually both voxel-based and point-based strategies have their advantages, where voxel-based strategy is generally more ef\ufb01cient and effective while point-based strategy has \ufb02exible receptive \ufb01eld and remains accurate point locations. Hence, we propose the PV-RCNN 3D detection framework to deeply integrate the voxel-based sparse convolution [4] and point-based set abstraction [6] to bring the best from both of them. Our solutions for those competitions of Waymo Challenges are mostly built upon our PV-RCNN 3D detection framework. 2.1. Review of PV-RCNN 3D detection framework We \ufb01rst brie\ufb02y review our PV-RCNN 3D detection framework proposed in our CVPR\u201920 paper [7]. The whole framework is illustrated in Fig. 1, where the framework mainly has two stages, the voxel-to-keypoint scene encoding and the keypoint-to-grid RoI feature abstraction. In the \ufb01rst voxel-to-keypoint scene encoding stage, we propose the Voxel Set Abstraction (VSA) layer to aggregate the multi-scale voxel features to a small set of keypoint features, where the keypoints are set as ball centers to aggregate the surrounding sparse voxel-wise features from multiple scales of 3D sparse-convolution-based backbone network. Hence, the keypoint features integrates the features from both the voxel-based sparse convolution and the pointbased set abstraction, and also remain accurate point locations, which are especially important for the following \ufb01negrained proposal re\ufb01nement. In this stage, the high quality 3D proposals are also generated based on the prede\ufb01ned anchors on bird-view feature maps from the backbone. Since the foreground keypoints should contribute more while background keypoints should contribute less in the following proposal re\ufb01nement stage, we propose the Predicted Keypoint Weighting (PKW) module to further reweight the keypoint features with extra supervision from point cloud segmentation. In the keypoint-to-grid RoI feature abstraction stage, we 1 arXiv:2008.12599v1 [cs.CV] 28 Aug 2020 \fx y z x y z Voxelization FPS To BEV Keypoints Sampling Voxel Set Abstraction Module 3D Sparse Convolution V1: Classification V2: Confidence Box Regression RPN Predicted Keypoint Weighting Module RoI-grid Pooling Module Keypoints with features 3D Box Proposals V1: Confidence V2: Classification Box Refinement FC (256, 256) Raw Point Cloud Figure 1. The illustration of our proposed two variants of PV-RCNN 3D detection framework. propose the RoI-grid pooling module to aggregate the keypoint features to RoI-grid points. In contrast to the previous stage, here the RoI-grid points are set as the ball centers to group the features from surrounding keypoint features. Compared with previous 3D RoI pooling strategies [8, 7], the proposed RoI-grid pooling scheme has larger receptive \ufb01eld and could even group the features of the surrounding foreground keypoints which are outside the 3D bounding box proposals to help to re\ufb01ne the 3D proposals. For more details of PV-RCNN 3D detection framework, please refer to our CVPR\u201920 paper [7]. 2.2. Variants of PV-RCNN To further improve the 3D detection performance of PVRCNN on the Waymo Open Dataset [10], we explored several modi\ufb01cations based on PV-RCNN framework for the following model ensemble. Incorporate last frame to get denser point cloud. The waymo dataset is composed of many temporal sequences where most frames could \ufb01nd the previous consecutive frames to compensate the information of current frame. 
We adopt a simple strategy to incorporate the previous frame to get a denser point cloud as the input of our detection framework. Speci\ufb01cally, denote the points of frame at time step t as P t = ((xt 1, yt 1, zt 1), (xt 2, yt 2, zt 2), \u00b7 \u00b7 \u00b7 , (xt nt, yt nt, zt nt)) and the current frame is at time step t. We combine the points of frame t and frame t \u22121 to get the \ufb01nal input points \u02dc P t as follows: \u02dc P t = \u0000(xt 1, yt 1, zt 1, 0), \u00b7 \u00b7 \u00b7 , (xt nt, yt nt, zt nt, 0), (1) (xt\u22121 1 , yt\u22121 1 , zt\u22121 1 , \u03b4), \u00b7 \u00b7 \u00b7 , (xt\u22121 nt , yt\u22121 nt , zt\u22121 nt , \u03b4) \u0001 where \u03b4 is the time difference between frame t\u22121 and frame t to discriminate these two frames (\u03b4 = 0.1 in waymo competition). This simple strategy are especially bene\ufb01cial for the detection of small objects like pedestrian and cyclist. Testing with dynamic voxelization. Dynamic voxelization is proposed in [14] to avoid information loss during voxelization process. In this competition, we only adopt the dynamic voxelization in the inference process while keeping the training process unchanged, which slightly improved the \ufb01nal detection accuracies (see Table 1). Adaptive training sampling selection. Our previous PVRCNN adopts anchor-based strategy for 3D proposal generation, where we need de\ufb01ne separate anchors and hyperparameters (i.e., IoU threshold of positive / negative samples) for each class. Inspired by [13], we adopt the similar adaptive training sampling selection strategy on our PVRCNN framework to adaptively de\ufb01ne the IoU threshold for each ground-truth object, which effectively removes most of hyper-parameters in anchor assignment. Classi\ufb01cation with RoI-aligned features. Due to the class-variant anchor de\ufb01nition, the object classi\ufb01cation is conducted in the \ufb01rst proposal generation stage in our previous PV-RCNN, and the second stage only estimate the con\ufb01dence of each 3D proposal. With the help of the above class-agnostic anchor de\ufb01nition, we propose another variant of our PV-RCNN framework to conduct the classi\ufb01cation in the second stage with the RoI-aligned features by RoI-grid pooling (see Fig. 1). It results in more accurate classi\ufb01cation accuracy, and the experiments show that this variant of PV-RCNN could achieve higher detection accuracy on the pedestrian and cyclist categories (see Table 1). 2.3. Model ensemble After obtaining multiple different 3D object detectors, we use an ensemble of these detectors for the \ufb01nal submission. In order to preserve all the possible predictions for model ensemble, non-maximum suppression is not applied on individual detector but instead the detection results of all detectors are merged and non-maximum suppression is applied once, which produces the NMS boxes. Then, we utilize the boxes before non-maximum suppression, i.e. original boxes, to re\ufb01ne locations and dimensions of the NMS 2 \fboxes into \ufb01nal boxes. The details are as follows: bfinal,i k = 1 N X j\u2208S boriginal,j k , k \u2208{x, y, z, w, l, h} (2) where S contains original boxes whose IoUs with NMS box bnms,i are higher than a thresh , N is the number of the selected original boxes in S and k \u2208{x, y, z, w, l, h}. The rotation and score of the original boxes are kept unchanged. We named this technique as 3D box voting. Finally the NMS box is replaced with the corresponding \ufb01nal box as the submission results. 
Although the detection performance of Vehicle and Cyclist is improved signi\ufb01cantly by 3D box voting while improvement of Pedestrian is unsatisfactory. Hence, we explore another model ensemble technique to improve the performance of Pedestrian. Instead of merging detection results of all detectors directly, we merge the detectors one by one. Speci\ufb01cally, we \ufb01rstly select two detectors and the scores of detection boxes predicted by each detector are multiplied with a score weight, which is obtained by grid search on Waymo validation split to obtain the best Pedestrian performance. Then the detection boxes of two detectors are merged and non-maximum suppression is applied. The result after non-maximum suppression can be considered as result of a new detector and the procedure above is performed until the performance improvement of Pedestrian is minor. This technique is named as greedy ensemble. By employing 3D box voting and greedy ensemble to ensemble different detectors, the detection performance is improved signi\ufb01cantly. The results are shown in Table 1. 3. Solution to 3D Tracking and Domain Adaptation of Point Cloud 3.1. Solution to 3D tracking challenge The goal of 3D object tracking is to \ufb01nd the correspondence between 3D boxes across frames given lidar and camera sequence. In this report we only focus on the lidar-only 3D object tracking. Considering the high performance of 3D object detection achieved by PV-RCNN, We use PVRCNN as an off-the-shelf 3D object detector to obtain oriented 3D bounding boxes given the LiDAR point cloud. In order to obtain object IDs of the 3D boxes, we borrow the idea from [11], where a combination of 3D Kalman \ufb01lter and Hungarian algorithm is used for state estimation and data association. Although we utilize simple combination of off-the-shelf 3D object detector and tracker, it is extremely ef\ufb01cient and effective and our method rank 1st among all lidar-only methods and rank 2nd among multimodal methods on the 3D object tracking leader board. The results are shown in Table 2. 3.2. Solution to domain adaptation challenge The domain adaptation challenge aims to adapt the 3D detector to the new location and new weather with limited labeled data. In this competition, to tackle this challenge, we adopt a straightforward strategy by directly \ufb01ne-tuning our well-trained 3D detector on a small set of labeled data of target domain. Thanks to our strong 3D detector from source domain, as shown in Table 3, this simple strategy already achieves great performance on the target domain. Note that for the detection of cyclist on target domain, we directly adopt the 3D detector trained on source domain since the target domain has quite small number of cyclist. 4. Experiments Waymo Open Dataset is the largest dataset with LiDAR point cloud for autonomous driving. For the 3D detection and 3D tracking tasks, there are totally 798 training sequences (around 160k samples), 202 validation sequences (around 40k samples) and 150 testing sequences (around 30k samples). Annotations are provided for only the training and validation set. For the domain adaptation task, there are totally 80 labeled training sequences (around 16k samples) and 20 labeled validation sequences (around 4k samples). In this competition, all of our models are only trained or \ufb01ne-tuned on the training sequences. Training details. 
3D detection models are the most important parts in this competition, where we trained all models for 80 epochs from scratch with ADAM optimizer and learning rate 0.01. The cosine annealing learning rate strategy was adopted for decaying the learning rate. The models were trained with batch size 64 and 32 GTX 1080 Ti GPUs on the training set. For training with the proposal re\ufb01nement network, we sample 128 proposals with 1:1 ratio for positive and negative proposals. The detection point cloud range is set to x \u2208[\u221275.2, 75.2]m, y \u2208[\u221275.2, 75.2]m and z \u2208[\u22122, 4]m, while the voxel size is 0.1 \u00d7 0.1 \u00d7 0.15m. We adopt the commonly used data augmentation for 3D object detection, including randomly \ufb02ipping along x and y axes, global scaling with scaling factor sampled from [0.95, 1.05], randomly global rotation along z axis with angle sampled from [\u2212\u03c0 4 , \u03c0 4 ], and the ground-truth sampling augmentation as in [12]. 4.1. Results for 3D detection challenge As mentioned before, we explored several variants of PV-RCNN to improve the 3D detection accuracy. The detailed 3D detection results on the validation set are shown in Table 1. We could see that the performance on the validation set improves constantly by combined with more new features. The \ufb01nal submission is based on the model ensemble of the models of all variants of PV-RCNN framework mentioned before, and the ensemble validation and test results are also shown in Table 1. 4.2. Results for 3D tracking challenge The submission for 3D object tracking is based on the model ensemble results of 3D object detection. To generate the object ID for each 3D detection box, a combination of 3 \fSetting Eval Set Training Set Vehicle Pedestrian Cyclist AP/APH (L1) AP/APH (L2) AP/APH (L1) AP/APH (L2) AP/APH (L1) AP/APH (L2) Baseline (original PV-RCNN) val \u223c80k 74.43/73.84 65.35/64.84 61.40/53.43 53.90/46.72 64.73/63.48 62.03/60.83 + incorporate last frame val \u223c80k 74.65/74.06 65.59/65.07 64.13/59.51 55.12/51.09 60.86/59.85 59.14/58.15 + dynamic voxelization testing val \u223c80k 75.20/74.63 66.17/65.66 64.72/60.40 55.75/51.95 63.87/62.85 61.00/60.02 + 50 epochs with full training data val \u223c160k 75.89/75.37 66.98/66.51 75.54/71.18 67.66/63.52 68.02/67.01 65.22/64.26 + soft-nms val \u223c160k 77.46/76.91 68.71/68.21 77.87/73.26 68.71/64.48 69.81/68.75 67.53/66.50 Classi\ufb01cation with RoI features val \u223c160k 75.81/75.36 66.91/66.50 78.92/75.12 69.84/66.36 73.24/72.12 70.41/69.34 Model ensemble val \u223c160k 78.70/78.13 70.13/69.61 81.72/77.65 72.80/69.02 74.70/73.49 72.06/70.89 Model ensemble test \u223c160k 81.06/80.57 73.69/73.23 80.31/76.28 73.98/70.16 75.10/73.84 72.38/71.16 Table 1. 3D detection performance of variants of PV-RCNN on the validation set and the \ufb01nal results on the test set of Waymo Open Dataset. Note that the baseline PV-RCNN are trained with 30 epochs on half training data. Category Val Set Test Set MOTA MOTP MOTA MOTP Vehicle 57.20/53.58 16.73/16.73 60.97/57.73 16.09/16.14 Pedestrian 55.98/55.23 31.20/31.27 55.32/53.80 31.63/31.63 Cyclist 56.91/56.78 26.75/26.75 55.13/55.07 27.14/27.14 Table 2. Performance of 3D tracking challenge on the validation and test set of Waymo Open Dataset. Category Val Set Test Set AP/APH (L1) AP/APH (L2) AP/APH (L1) AP/APH (L2) Vehicle 71.93/70.88 62.14/61.19 71.40/70.70 59.67/59.08 Pedestrian 55.09/51.80 38.81/36.49 58.40/55.36 48.27/45.74 Cyclist 28.80/27.98 28.31/27.50 Table 3. 
Performance of domain adaptation challenge on validation and test set of Waymo Open Dataset. 3D Kalman \ufb01lter and Hungarian algorithm is used for state estimation and data association. The results of 3D object tracking on validation and test set of Waymo Open Dataset are presented in Table2. 4.3. Results for domain adaptation challenge For the domain adaptation challenge, we \ufb01ne-tuned the models of 3D detection challenge on the labeled data of target domain for 20 epochs. The \ufb01nal submission results are based on the model ensemble of \ufb01ne-tuned models, except for the cyclist which we directly tested with the source models. The \ufb01nal domain adaption results are shown in Table 3." + }, + { + "url": "http://arxiv.org/abs/1912.13192v2", + "title": "PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection", + "abstract": "We present a novel and high-performance 3D object detection framework, named\nPointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds.\nOur proposed method deeply integrates both 3D voxel Convolutional Neural\nNetwork (CNN) and PointNet-based set abstraction to learn more discriminative\npoint cloud features. It takes advantages of efficient learning and\nhigh-quality proposals of the 3D voxel CNN and the flexible receptive fields of\nthe PointNet-based networks. Specifically, the proposed framework summarizes\nthe 3D scene with a 3D voxel CNN into a small set of keypoints via a novel\nvoxel set abstraction module to save follow-up computations and also to encode\nrepresentative scene features. Given the high-quality 3D proposals generated by\nthe voxel CNN, the RoI-grid pooling is proposed to abstract proposal-specific\nfeatures from the keypoints to the RoI-grid points via keypoint set abstraction\nwith multiple receptive fields. Compared with conventional pooling operations,\nthe RoI-grid feature points encode much richer context information for\naccurately estimating object confidences and locations. Extensive experiments\non both the KITTI dataset and the Waymo Open dataset show that our proposed\nPV-RCNN surpasses state-of-the-art 3D detection methods with remarkable margins\nby using only point clouds. Code is available at\nhttps://github.com/open-mmlab/OpenPCDet.", + "authors": "Shaoshuai Shi, Chaoxu Guo, Li Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, Hongsheng Li", + "published": "2019-12-31", + "updated": "2021-04-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "eess.IV" + ], + "main_content": "Introduction 3D object detection has been receiving increasing attention from both industry and academia thanks to its wide applications in various \ufb01elds such as autonomous driving and robotics. LiDAR sensors are widely adopted in autonomous driving vehicles and robots for capturing 3D scene information as sparse and irregular point clouds, which provide vital cues for 3D scene perception and understanding. In this paper, we propose to achieve high performance 3D object detection by designing novel point-voxel integrated networks E-mail: {ssshi, hsli}@ee.cuhk.edu.hk RoI Grid Point Refine Keypoint z x y Raw points 3D Voxel Figure 1. Our proposed PV-RCNN framework deeply integrates both the voxel-based and the PointNet-based networks via a twostep strategy including the voxel-to-keypoint 3D scene encoding and the keypoint-to-grid RoI feature abstraction for improving the performance of 3D object detection. to learn better 3D features from irregular point clouds. 
Most existing 3D detection methods could be classi\ufb01ed into two categories in terms of point cloud representations, i.e., the grid-based methods and the point-based methods. The grid-based methods generally transform the irregular point clouds to regular representations such as 3D voxels [27, 41, 34, 2, 26] or 2D bird-view maps [1, 11, 36, 17, 35, 12, 16], which could be ef\ufb01ciently processed by 3D or 2D Convolutional Neural Networks (CNN) to learn point features for 3D detection. Powered by the pioneer work, PointNet and its variants [23, 24], the pointbased methods [22, 25, 32, 37] directly extract discriminative features from raw point clouds for 3D detection. Generally, the grid-based methods are more computationally ef\ufb01cient but the inevitable information loss degrades the \ufb01negrained localization accuracy, while the point-based methods have higher computation cost but could easily achieve larger receptive \ufb01eld by the point set abstraction [24]. However, we show that a uni\ufb01ed framework could integrate the best of the two types of methods, and surpass the prior stateof-the-art 3D detection methods with remarkable margins. We propose a novel 3D object detection framework, PVRCNN (Illustrated in Fig. 1), which boosts the 3D detection performance by incorporating the advantages from both the Point-based and Voxel-based feature learning methods. The principle of PV-RCNN lies in the fact that the voxel-based operation ef\ufb01ciently encodes multi-scale feature representations and can generate high-quality 3D pro1 arXiv:1912.13192v2 [cs.CV] 9 Apr 2021 \fposals, while the PointNet-based set abstraction operation preserves accurate location information with \ufb02exible receptive \ufb01elds. We argue that the integration of these two types of feature learning frameworks can help learn more discriminative features for accurate \ufb01ne-grained box re\ufb01nement. The main challenge would be how to effectively combine the two types of feature learning schemes, speci\ufb01cally the 3D voxel CNN with sparse convolutions [6, 5] and the PointNet-based set abstraction [24], into a uni\ufb01ed framework. An intuitive solution would be uniformly sampling several grid points within each 3D proposal, and adopt the set abstraction to aggregate 3D voxel-wise features surrounding these grid points for proposal re\ufb01nement. However, this strategy is highly memory-intensive since both the number of voxels and the number of grid points could be quite large to achieve satisfactory performance. Therefore, to better integrate these two types of point cloud feature learning networks, we propose a two-step strategy with the \ufb01rst voxel-to-keypoint scene encoding step and the second keypoint-to-grid RoI feature abstraction step. Speci\ufb01cally, a voxel CNN with 3D sparse convolution is adopted for voxel-wise feature learning and accurate proposal generation. To mitigate the above mentioned issue of requiring too many voxels for encoding the whole scene, a small set of keypoints are selected by the furtherest point sampling (FPS) to summarize the overall 3D information from the voxel-wise features. The features of each keypoint is aggregated by grouping the neighboring voxel-wise features via PointNet-based set abstraction for summarizing multi-scale point cloud information. In this way, the overall scene can be effectively and ef\ufb01ciently encoded by a small number of keypoints with associated multi-scale features. 
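Since furthest point sampling (FPS) is the mechanism that produces this small keypoint set, a minimal NumPy sketch of greedy FPS may help; this is an illustrative CPU version, not the CUDA routine used in the released code, and the keypoint count 2,048 is simply the KITTI setting quoted later in the paper.

```python
import numpy as np

def furthest_point_sampling(points: np.ndarray, n_keypoints: int) -> np.ndarray:
    """Greedy FPS: iteratively pick the point furthest from the already-chosen set.

    points: (N, 3) array of XYZ coordinates.
    Returns the indices of the selected keypoints, shape (n_keypoints,).
    """
    n = points.shape[0]
    selected = np.zeros(n_keypoints, dtype=np.int64)
    min_dist = np.full(n, np.inf)      # distance of every point to the chosen set
    selected[0] = 0                    # start from an arbitrary point
    for i in range(1, n_keypoints):
        # Update distances with the most recently added keypoint.
        diff = points - points[selected[i - 1]]
        dist = np.einsum("ij,ij->i", diff, diff)
        min_dist = np.minimum(min_dist, dist)
        # Next keypoint: the point furthest from all selected ones.
        selected[i] = int(np.argmax(min_dist))
    return selected

# Example: summarize a random 20k-point scene with 2,048 keypoints.
scene = (np.random.randn(20000, 3) * 30.0).astype(np.float32)
keypoints = scene[furthest_point_sampling(scene, 2048)]
```

Because each new keypoint maximizes its distance to the current set, the selected points spread roughly uniformly over the occupied part of the scene, which is exactly the property the paragraph above relies on.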
For the second keypoint-to-grid RoI feature abstraction step, given each box proposal with its grid point locations, a RoI-grid pooling module is proposed, where a keypoint set abstraction layer with multiple radii is adopted for each grid point to aggregate the features from the keypoints with multi-scale context. All grid points\u2019 aggregated features can then be jointly used for the succeeding proposal re\ufb01nement. Our proposed PV-RCNN effectively takes advantages of both point-based and voxel-based networks to encode discriminative features at each box proposal for accurate con\ufb01dence prediction and \ufb01ne-grained box re\ufb01nement. Our contributions can be summarized into four-fold. (1) We propose PV-RCNN framework which effectively takes advantages of both the voxel-based and point-based methods for 3D point-cloud feature learning, leading to improved performance of 3D object detection with manageable memory consumption. (2) We propose the voxelto-keypoint scene encoding scheme, which encodes multiscale voxel features of the whole scene to a small set of keypoints by the voxel set abstraction layer. These keypoint features not only preserve accurate location but also encode rich scene context, which boosts the 3D detection performance signi\ufb01cantly. (3) We propose a multi-scale RoI feature abstraction layer for grid points in each proposal, which aggregates richer context information from the scene with multiple receptive \ufb01elds for accurate box re\ufb01nement and con\ufb01dence prediction. (4) Our proposed method PV-RCNN outperforms all previous methods with remarkable margins and ranks 1st on the highly competitive KITTI 3D detection benchmark [10], ans also surpasses previous methods on the large-scale Waymo Open dataset with a large margin. 2. Related Work 3D Object Detection with Grid-based Methods. To tackle the irregular data format of point clouds, most existing works project the point clouds to regular grids to be processed by 2D or 3D CNN. The pioneer work MV3D [1] projects the point clouds to 2D bird view grids and places lots of prede\ufb01ned 3D anchors for generating 3D bounding boxes, and the following works [11, 17, 16] develop better strategies for multi-sensor fusion while [36, 35, 12] propose more ef\ufb01cient frameworks with bird view representation. Some other works [27, 41] divide the point clouds into 3D voxels to be processed by 3D CNN, and 3D sparse convolution [5] is introduced [34] for ef\ufb01cient 3D voxel processing. [30, 42] utilizes multiple detection heads while [26] explores the object part locations for improving the performance. These grid-based methods are generally ef\ufb01cient for accurate 3D proposal generation but the receptive \ufb01elds are constraint by the kernel size of 2D/3D convolutions. 3D Object Detection with Point-based Methods. FPointNet [22] \ufb01rst proposes to apply PointNet [23, 24] for 3D detection from the cropped point clouds based on the 2D image bounding boxes. PointRCNN [25] generates 3D proposals directly from the whole point clouds instead of 2D images for 3D detection with point clouds only, and the following work STD [37] proposes the sparse to dense strategy for better proposal re\ufb01nement. [21] proposes the hough voting strategy for better object feature grouping. These pointbased methods are mostly based on the PointNet series, especially the set abstraction operation [24], which enables \ufb02exible receptive \ufb01elds for point cloud feature learning. 
Representation Learning on Point Clouds. Recently representation learning on point clouds has drawn lots of attention for improving the performance of point cloud classi\ufb01cation and segmentation [23, 24, 41, 31, 7, 38, 15, 28, 33, 8, 29, 3]. In terms of 3D detection, previous methods generally project the point clouds to regular bird view grids [1, 36] or 3D voxels [41, 2] for processing point clouds with 2D/3D CNN. 3D sparse convolution [6, 5] are adopted in [34, 26] to effectively learn sparse voxel-wise features from the point clouds. Qi et al. [23, 24] proposes the PointNet to directly learn point-wise features from the raw point clouds, where set abstraction operation enables \ufb02exible receptive \ufb01elds by 2 \fsetting different search radii. [19] combines both voxelbased CNN and point-based SharedMLP for ef\ufb01cient point cloud feature learning. In comparison, our proposed PVRCNN takes advantages from both the voxel-based feature learning (i.e., 3D sparse convolution) and PointNet-based feature learning (i.e., set abstraction operation) to enable both high-quality 3D proposal generation and \ufb02exible receptive \ufb01elds for improving the 3D detection performance. 3. PV-RCNN for Point Cloud Object Detection In this paper, we propose the PointVoxel-RCNN (PVRCNN), which is a two-stage 3D detection framework aiming at more accurate 3D object detection from point clouds. State-of-the-art 3D detection approaches are based on either 3D voxel CNN with sparse convolution or PointNet-based networks as the backbone. Generally, the 3D voxel CNNs with sparse convolution are more ef\ufb01cient [34, 26] and are able to generate high-quality 3D object proposals, while the PointNet-based methods can capture more accurate contextual information with \ufb02exible receptive \ufb01elds. Our PV-RCNN deeply integrates the advantages of two types of networks. As illustrated in Fig. 2, the PV-RCNN consists of a 3D voxel CNN with sparse convolution as the backbone for ef\ufb01cient feature encoding and proposal generation. Given each 3D object proposal, to effectively pool its corresponding features from the scene, we propose two novel operations: the voxel-to-keypoint scene encoding, which summarizes all the voxels of the overall scene feature volumes into a small number of feature keypoints, and the point-to-grid RoI feature abstraction, which effectively aggregates the scene keypoint features to RoI grids for proposal con\ufb01dence prediction and location re\ufb01nement. 3.1. 3D Voxel CNN for Ef\ufb01cient Feature Encoding and Proposal Generation Voxel CNN with 3D sparse convolution [6, 5, 34, 26] is a popular choice by state-of-the-art 3D detectors for ef\ufb01ciently converting the point clouds into sparse 3D feature volumes. Because of its high ef\ufb01ciency and accuracy, we adopt it as the backbone of our framework for feature encoding and 3D proposal generation. 3D voxel CNN. The input points P are \ufb01rst divided into small voxels with spatial resolution of L \u00d7 W \u00d7 H, where the features of the non-empty voxels are directly calculated as the mean of point-wise features of all inside points. The commonly used features are the 3D coordinates and re\ufb02ectance intensities. The network utilizes a series of 3 \u00d7 3 \u00d7 3 3D sparse convolution to gradually convert the point clouds into feature volumes with 1\u00d7, 2\u00d7, 4\u00d7, 8\u00d7 downsampled sizes. Such sparse feature volumes at each level could be viewed as a set of voxel-wise feature vectors. 
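As a rough illustration of the voxelization step just described (each non-empty voxel's feature is the mean of the features of the points inside it), here is a small NumPy sketch; the detection range and voxel size are the Waymo values quoted later in the experimental setup, and the sparse 3D CNN that consumes these voxels is not shown.

```python
import numpy as np

def voxelize_mean(points, point_range, voxel_size):
    """Average point features inside each occupied voxel.

    points: (N, C) array whose first 3 columns are x, y, z (extra columns,
            e.g. reflectance intensity, are averaged as well).
    point_range: (x_min, y_min, z_min, x_max, y_max, z_max).
    voxel_size: (dx, dy, dz).
    Returns (voxel_coords, voxel_features) for the non-empty voxels only.
    """
    pmin, pmax = np.array(point_range[:3]), np.array(point_range[3:])
    vsize = np.array(voxel_size)

    # Keep only points inside the detection range.
    mask = np.all((points[:, :3] >= pmin) & (points[:, :3] < pmax), axis=1)
    pts = points[mask]

    # Integer voxel index of every point.
    coords = np.floor((pts[:, :3] - pmin) / vsize).astype(np.int64)

    # Group points by voxel and average their features (a simple scatter-mean).
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    feats = np.zeros((len(uniq), pts.shape[1]))
    np.add.at(feats, inverse, pts)
    feats /= np.bincount(inverse, minlength=len(uniq))[:, None]
    return uniq, feats

# Example with the Waymo-style range and voxel size used in this work.
cloud = np.random.rand(100000, 4) * np.array([150.4, 150.4, 6.0, 1.0]) - np.array([75.2, 75.2, 2.0, 0.0])
coords, feats = voxelize_mean(cloud, (-75.2, -75.2, -2.0, 75.2, 75.2, 4.0), (0.1, 0.1, 0.15))
```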
3D proposal generation. By converting the encoded 8\u00d7 downsampled 3D feature volumes into 2D bird-view feature maps, high-quality 3D proposals are generated following the anchor-based approaches [34, 12]. Speci\ufb01cally, we stack the 3D feature volume along the Z axis to obtain the L 8 \u00d7 W 8 bird-view feature maps. Each class has 2 \u00d7 L 8 \u00d7 W 8 3D anchor boxes which adopt the average 3D object sizes of this class, and two anchors of 0\u25e6, 90\u25e6orientations are evaluated for each pixel of the bird-view feature maps. As shown in Table 4, the adopted 3D voxel CNN backbone with anchor-based scheme achieves higher recall performance than the PointNet-based approaches [25, 37]. Discussions. State-of-the-art detectors mostly adopt two-stage frameworks. They require pooling RoI speci\ufb01c features from the resulting 3D feature volumes or 2D maps for further proposal re\ufb01nement. However, these 3D feature volumes from the 3D voxel CNN have major limitations in the following aspects. (i) These feature volumes are generally of low spatial resolution as they are downsampled by up to 8 times, which hinders accurate localization of objects in the input scene. (ii) Even if one can upsample to obtain feature volumes/maps of larger spatial sizes, they are generally still quite sparse. The commonly used trilinear or bilinear interpolation in the RoIPooling/RoIAlign operations can only extract features from very small neighborhoods (i.e., 4 and 8 nearest neighbors for bilinear and trilinear interpolation respectively). The conventional pooling approaches would therefore obtain features with mostly zeros and waste much computation and memory for stage-2 re\ufb01nement. On the other hand, the set abstraction operation proposed in the variants of PointNet [23, 24] has shown the strong capability of encoding feature points from a neighborhood of an arbitrary size. We therefore propose to integrate a 3D voxel CNN with a series of set abstraction operations for conducting accurate and robust stage-2 proposal re\ufb01nement. A naive solution of using the set abstraction operation for pooling the scene feature voxels would be directly aggregating the multi-scale feature volume in a scene to the RoI grids. However, this intuitive strategy simply occupies much memory and is inef\ufb01cient to be used in practice. For instance, a common scene from the KITTI dataset might result in 18, 000 voxels in the 4\u00d7 downsampled feature volumes. If one uses 100 box proposal for each scene and each box proposal has 3 \u00d7 3 \u00d7 3 grids. The 2, 700 \u00d7 18, 000 pairwise distances and feature aggregations cannot be ef\ufb01ciently computed, even after distance thresholding. To tackle this issue, we propose a two-step approach to \ufb01rst encode voxels at different neural layers of the entire scene into a small number of keypoints and then aggregate keypoint features to RoI grids for box proposal re\ufb01nement. 3.2. Voxel-to-keypoint Scene Encoding via Voxel Set Abstraction Our proposed framework \ufb01rst aggregates the voxels at the multiple neural layers representing the entire scene into 3 \fx y z x y z Voxelization FPS To BEV Keypoints Sampling Voxel Set Abstraction Module 3D Sparse Convolution Classification Box Regression RPN Predicted Keypoint Weighting Module RoI-grid Pooling Module Keypoints with features 3D Box Proposals Confidence Box Refinement FC (256, 256) Raw Point Cloud Figure 2. The overall architecture of our proposed PV-RCNN. 
The raw point clouds are first voxelized to feed into the 3D sparse-convolution-based encoder to learn multi-scale semantic features and generate 3D object proposals. Then the learned voxel-wise feature volumes at multiple neural layers are summarized into a small set of key points via the novel voxel set abstraction module. Finally the keypoint features are aggregated to the RoI-grid points to learn proposal-specific features for fine-grained proposal refinement and confidence prediction.
a small number of keypoints, which serve as a bridge between the 3D voxel CNN feature encoder and the proposal refinement network.

Keypoints Sampling. Specifically, we adopt the Furthest-Point-Sampling (FPS) algorithm to sample a small number of n keypoints K = {p_1, ..., p_n} from the point clouds P, where n = 2,048 for the KITTI dataset and n = 4,096 for the Waymo dataset. Such a strategy encourages that the keypoints are uniformly distributed around non-empty voxels and can be representative of the overall scene.

Voxel Set Abstraction Module. We propose the Voxel Set Abstraction (VSA) module to encode the multi-scale semantic features from the 3D CNN feature volumes to the keypoints. The set abstraction operation proposed by [24] is adopted for the aggregation of voxel-wise feature volumes. The surrounding points of keypoints are now regular voxels with multi-scale semantic features encoded by the 3D voxel CNN from the multiple levels, instead of the neighboring raw points with features learned from PointNet. Specifically, denote F^{(l_k)} = {f_1^{(l_k)}, ..., f_{N_k}^{(l_k)}} as the set of voxel-wise feature vectors in the k-th level of the 3D voxel CNN, and V^{(l_k)} = {v_1^{(l_k)}, ..., v_{N_k}^{(l_k)}} as their 3D coordinates calculated by the voxel indices and actual voxel sizes of the k-th level, where N_k is the number of non-empty voxels in the k-th level. For each keypoint p_i, we first identify its neighboring non-empty voxels at the k-th level within a radius r_k to retrieve the set of voxel-wise feature vectors as

S_i^{(l_k)} = \{ [ f_j^{(l_k)} ; v_j^{(l_k)} - p_i ]^T \;\big|\; \| v_j^{(l_k)} - p_i \|^2 < r_k, \; \forall v_j^{(l_k)} \in V^{(l_k)}, \; \forall f_j^{(l_k)} \in F^{(l_k)} \},   (1)

where we concatenate the local relative coordinates v_j^{(l_k)} - p_i to indicate the relative location of the semantic voxel feature f_j^{(l_k)}. The voxel-wise features within the neighboring voxel set S_i^{(l_k)} of p_i are then transformed by a PointNet-block [23] to generate the feature for the keypoint p_i as

f_i^{(pv_k)} = \max \{ G( M( S_i^{(l_k)} ) ) \},   (2)

where M(·) denotes randomly sampling at most T_k voxels from the neighboring set S_i^{(l_k)} for saving computations, and G(·) denotes a multi-layer perceptron network to encode the voxel-wise features and relative locations. Although the number of neighboring voxels varies across different keypoints, the along-channel max-pooling operation max(·) maps the diverse number of neighboring voxel feature vectors to a feature vector f_i^{(pv_k)} for the keypoint p_i. Generally, we also set multiple radii r_k at the k-th level to aggregate local voxel-wise features with different receptive fields for capturing richer multi-scale contextual information.
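To make Eqs. (1)-(2) more concrete, the following PyTorch sketch aggregates one level of voxel features onto the keypoints with a radius query, relative-coordinate concatenation, random neighbor subsampling M(·), a shared MLP G(·), and channel-wise max-pooling. It is a dense, loop-based illustration under assumed layer widths, not the sparse-voxel implementation released in OpenPCDet.

```python
import torch

class VoxelSetAbstractionLevel(torch.nn.Module):
    """Aggregate the voxel features of one CNN level onto keypoints (Eqs. 1-2)."""

    def __init__(self, voxel_ch, out_ch, radius, max_neighbors=16):
        super().__init__()
        self.radius, self.max_neighbors = radius, max_neighbors
        # G(.): a shared MLP over the concatenation [voxel feature ; relative xyz].
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(voxel_ch + 3, out_ch), torch.nn.ReLU(),
            torch.nn.Linear(out_ch, out_ch), torch.nn.ReLU(),
        )

    def forward(self, keypoints, voxel_xyz, voxel_feats):
        """keypoints: (n, 3); voxel_xyz: (Nk, 3); voxel_feats: (Nk, C)."""
        out = []
        for p in keypoints:
            # Neighboring non-empty voxels within the radius r_k (Eq. 1).
            d2 = ((voxel_xyz - p) ** 2).sum(dim=1)
            idx = torch.nonzero(d2 < self.radius ** 2).squeeze(1)
            if idx.numel() == 0:                       # no voxel nearby: zero feature
                out.append(voxel_feats.new_zeros(self.mlp[-2].out_features))
                continue
            # M(.): randomly keep at most T_k neighbors to bound the computation.
            if idx.numel() > self.max_neighbors:
                idx = idx[torch.randperm(idx.numel())[: self.max_neighbors]]
            neigh = torch.cat([voxel_feats[idx], voxel_xyz[idx] - p], dim=1)
            # max over neighbors of G([f ; v - p])  (Eq. 2).
            out.append(self.mlp(neigh).max(dim=0).values)
        return torch.stack(out)                        # (n, out_ch)
```

In the full model this aggregation is repeated with several radii per level and across the four CNN levels, and the resulting vectors are concatenated per keypoint.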
The above voxel set abstraction is performed at different levels of the 3D voxel CNN, and the aggregated features from different levels can be concatenated to generate the multi-scale semantic feature for the key point pi f (pv) i = h f (pv1) i , f (pv2) i , f (pv3) i , f (pv4) i i , for i = 1, \u00b7 \u00b7 \u00b7 , n, (3) where the generated feature f (pv) i incorporates both the 3D voxel CNN-based feature learning from voxel-wise feature f (lk) j and the PointNet-based feature learning from voxel set abstraction as Eq. (2). Besides, the 3D coordinate of pi also preserves accurate location information. Extended VSA Module. We extend the VSA module by further enriching the keypoint features from the raw point clouds P and the 8\u00d7 downsampled 2D bird-view feature maps (as described in Sec. 3.1), where the raw point clouds partially make up the quantization loss of the initial pointcloud voxelization while the 2D bird-view maps have larger receptive \ufb01elds along the Z axis. The raw point-cloud feature f (raw) i is also aggregated as in Eq. (2). For the bird view feature maps, we project the keypoint pi to the 2D bird-view coordinate system, and utilize bilinear interpolation to obtain the features f (bev) i from the bird-view feature 4 \fn x 3 Sigmoid FL Label Keypoint Features Keypoint Coordinates 3D GT Boxes n x 1 n x 256 n x C n x C Foreground Point Check Predicted Keypoint Weighting Module Training Part Figure 3. Illustration of Predicted Keypoint Weighting module. maps. Hence, the keypoint feature for pi is further enriched by concatenating all its associated features f (p) i = h f (pv) i , f (raw) i , f (bev) i i , for i = 1, \u00b7 \u00b7 \u00b7 , n, (4) which have the strong capability of preserving 3D structural information of the entire scene and can also boost the \ufb01nal detection performance by large margins. Predicted Keypoint Weighting. After the overall scene is encoded by a small number of keypoints, they would be further utilized by the succeeding stage for conducting proposal re\ufb01nement. The keypoints are chosen by the Further Point Sampling strategy and some of them might only represent the background regions. Intuitively, keypoints belonging to the foreground objects should contribute more to the accurate re\ufb01nement of the proposals, while the ones from the background regions should contribute less. Hence, we propose a Predicted Keypoint Weighting (PKW) module (see Fig. 3) to re-weight the keypoint features with extra supervisions from point-cloud segmentation. The segmentation labels can be directly generated by the 3D detection box annotations, i.e. by checking whether each key point is inside or outside of a ground-truth 3D box since the 3D objects in autonomous driving scenes are naturally separated in 3D space. The predicted feature weighting for each keypoint\u2019s feature \u02dc f (p) i can be formulated as \u02dc f (p) i = A(f (p) i ) \u00b7 f (p) i , (5) where A(\u00b7) is a three-layer MLP network with a sigmoid function to predict foreground con\ufb01dence between [0, 1]. The PKW module is trained by focal loss [18] with default hyper-parameters for handling the unbalanced number of foreground/background points in the training set. 3.3. Keypoint-to-grid RoI Feature Abstraction for Proposal Re\ufb01nement In the previous step, the whole scene is summarized into a small number of keypoints with multi-scale semantic features. 
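Before moving on to the RoI feature abstraction, here is a hedged sketch of the Predicted Keypoint Weighting just described: a three-layer MLP with a sigmoid re-weights the keypoint features (Eq. (5)) and is supervised with focal loss on labels derived from the ground-truth boxes. The hidden widths and the keypoint feature dimension below are illustrative assumptions, not the released configuration.

```python
import torch

class PredictedKeypointWeighting(torch.nn.Module):
    """Re-weight keypoint features by a predicted foreground confidence (Eq. 5)."""

    def __init__(self, feat_ch):
        super().__init__()
        # A(.): three-layer MLP ending in a sigmoid (hidden widths are assumptions).
        self.score_net = torch.nn.Sequential(
            torch.nn.Linear(feat_ch, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1), torch.nn.Sigmoid(),
        )

    def forward(self, keypoint_feats):
        fg_conf = self.score_net(keypoint_feats)           # (n, 1) in [0, 1]
        return keypoint_feats * fg_conf, fg_conf.squeeze(1)

def focal_loss(pred, target, alpha=0.25, gamma=2.0, eps=1e-6):
    """Binary focal loss on the predicted foreground confidence."""
    pred = pred.clamp(eps, 1 - eps)
    p_t = torch.where(target > 0.5, pred, 1 - pred)
    alpha_t = torch.where(target > 0.5, torch.full_like(pred, alpha),
                          torch.full_like(pred, 1 - alpha))
    return (-alpha_t * (1 - p_t) ** gamma * p_t.log()).mean()

# Labels come for free from the 3D boxes: 1 if a keypoint lies inside any GT box.
feats = torch.randn(2048, 640)                 # keypoint features (channel count assumed)
labels = (torch.rand(2048) < 0.1).float()      # placeholder foreground labels
weighted, conf = PredictedKeypointWeighting(640)(feats)
loss = focal_loss(conf, labels)
```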
Given each 3D proposal (RoI) generated by the 3D voxel CNN, the features of each RoI need to be aggregated from the keypoint features \tilde{F} = { \tilde{f}_1^{(p)}, ..., \tilde{f}_n^{(p)} } for accurate and robust proposal refinement. We propose the keypoint-to-grid RoI feature abstraction based on the set abstraction operation for multi-scale RoI feature encoding.

Figure 4. Illustration of RoI-grid pooling module. Rich context information of each 3D RoI is aggregated by the set abstraction operation with multiple receptive fields.

RoI-grid Pooling via Set Abstraction. Given each 3D RoI, as shown in Fig. 4, we propose the RoI-grid pooling module to aggregate the keypoint features to the RoI-grid points with multiple receptive fields. We uniformly sample 6 × 6 × 6 grid points within each 3D proposal, which are denoted as G = {g_1, ..., g_{216}}. The set abstraction operation is adopted to aggregate the features of grid points from the keypoint features. Specifically, we firstly identify the neighboring keypoints of grid point g_i within a radius \tilde{r} as

\tilde{\Psi} = \{ [ \tilde{f}_j^{(p)} ; p_j - g_i ]^T \;\big|\; \| p_j - g_i \|^2 < \tilde{r}, \; \forall p_j \in K, \; \forall \tilde{f}_j^{(p)} \in \tilde{F} \},   (6)

where p_j - g_i is appended to indicate the local relative location of the feature \tilde{f}_j^{(p)} from keypoint p_j. Then a PointNet-block [23] is adopted to aggregate the neighboring keypoint feature set \tilde{\Psi} to generate the feature for grid point g_i as

\tilde{f}_i^{(g)} = \max \{ G( M( \tilde{\Psi} ) ) \},   (7)

where M(·) and G(·) are defined the same as in Eq. (2). We set multiple radii \tilde{r} and aggregate keypoint features with different receptive fields, which are concatenated together for capturing richer multi-scale contextual information. After obtaining each grid's aggregated features from its surrounding keypoints, all RoI-grid features of the same RoI can be vectorized and transformed by a two-layer MLP with 256 feature dimensions to represent the overall proposal. Compared with the point cloud 3D RoI pooling operations in previous works [25, 37, 26], our proposed RoI-grid pooling operation targeting the keypoints is able to capture much richer contextual information with flexible receptive fields, where the receptive fields are even beyond the RoI boundaries for capturing the surrounding keypoint features outside the 3D RoI, while the previous state-of-the-art methods either simply average all point-wise features within the proposal as the RoI feature [25], or pool many uninformative zeros as the RoI features [26, 37].

3D Proposal Refinement and Confidence Prediction. Given the RoI feature of each box proposal, the proposal refinement network learns to predict the size and location (i.e., center, size and orientation) residuals relative to the input 3D proposal. The refinement network adopts a 2-layer MLP and has two branches for confidence prediction and box refinement respectively. For the confidence prediction branch, we follow [14, 9, 26] to adopt the 3D Intersection-over-Union (IoU) between the 3D RoIs and their corresponding ground-truth boxes as the training targets. For the k-th 3D RoI, its confidence training target y_k is normalized to be between [0, 1] as

y_k = \min(1, \max(0, 2\,\mathrm{IoU}_k - 0.5)),   (8)

where IoU_k is the IoU of the k-th RoI w.r.t.
its ground-truth box. Our confidence branch is then trained to minimize the cross-entropy loss on predicting the confidence targets,

L_{iou} = - y_k \log(\hat{y}_k) - (1 - y_k) \log(1 - \hat{y}_k),   (9)

where \hat{y}_k is the score predicted by the network. Our experiments in Table 9 show that this quality-aware confidence prediction strategy achieves better performance than the traditional classification targets. The box regression targets of the box refinement branch are encoded by the traditional residual-based method as in [34, 26] and are optimized with the smooth-L1 loss function.

3.4. Training losses
The proposed PV-RCNN framework is trained end-to-end with the region proposal loss L_{rpn}, the keypoint segmentation loss L_{seg} and the proposal refinement loss L_{rcnn}. (1) We adopt the same region proposal loss L_{rpn} as [34]:

L_{rpn} = L_{cls} + \beta \sum_{r \in \{x, y, z, l, h, w, \theta\}} L_{smooth-L1}( \widehat{\Delta r_a}, \Delta r_a ),   (10)

where the anchor classification loss L_{cls} is calculated with the focal loss [18] with default hyper-parameters, and the smooth-L1 loss is utilized for anchor box regression with the predicted residual \widehat{\Delta r_a} and the regression target \Delta r_a. (2) The keypoint segmentation loss L_{seg} is also calculated with the focal loss, as mentioned in Sec. 3.2. (3) The proposal refinement loss L_{rcnn} includes the IoU-guided confidence prediction loss L_{iou} and the box refinement loss as

L_{rcnn} = L_{iou} + \sum_{r \in \{x, y, z, l, h, w, \theta\}} L_{smooth-L1}( \widehat{\Delta r_p}, \Delta r_p ),   (11)

where \widehat{\Delta r_p} is the predicted box residual and \Delta r_p is the proposal regression target, which is encoded in the same way as \Delta r_a. The overall training loss is then the sum of these three losses with equal loss weights. Further training loss details are provided in the supplementary file.

4. Experiments
In this section, we introduce the implementation details of our PV-RCNN framework (Sec. 4.1) and compare with previous state-of-the-art methods on both the highly competitive KITTI dataset [4] (Sec. 4.2) and the newly introduced large-scale Waymo Open Dataset [20, 40] (Sec. 4.3). In Sec. 4.4, we conduct extensive ablation studies to investigate each component of PV-RCNN and validate our design.

4.1. Experimental Setup
Datasets. The KITTI Dataset [4] is one of the most popular datasets for 3D detection in autonomous driving. There are 7,481 training samples and 7,518 test samples, where the training samples are generally divided into the train split (3,712 samples) and the val split (3,769 samples). We compare PV-RCNN with state-of-the-art methods on both the val split and the test split of the online leaderboard. The Waymo Open Dataset is a recently released and currently the largest dataset for 3D detection in autonomous driving. There are in total 798 training sequences with around 158,361 LiDAR samples, and 202 validation sequences with 40,077 LiDAR samples. It annotates objects in the full 360° field of view instead of the 90° field of the KITTI dataset. We evaluate our model on this large-scale dataset to further validate the effectiveness of our proposed method.

Network Architecture. As shown in Fig. 2, the 3D voxel CNN has four levels with feature dimensions 16, 32, 64, 64, respectively. The two neighboring radii r_k of each level in the VSA module are set as (0.4m, 0.8m), (0.8m, 1.2m), (1.2m, 2.4m) and (2.4m, 4.8m), and the neighborhood radii of set abstraction for the raw points are (0.4m, 0.8m).
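Looking back at the confidence branch, the quality-aware target of Eq. (8) and the cross-entropy loss of Eq. (9) are simple enough to sketch directly; the snippet below is an illustration of that target assignment, not the exact training code.

```python
import torch

def iou_guided_confidence_target(iou):
    """Map each RoI's 3D IoU with its ground-truth box to a [0, 1] target (Eq. 8)."""
    return (2.0 * iou - 0.5).clamp(min=0.0, max=1.0)

def confidence_loss(pred_score, iou):
    """Binary cross-entropy against the IoU-derived soft target (Eq. 9)."""
    target = iou_guided_confidence_target(iou)
    return torch.nn.functional.binary_cross_entropy(pred_score, target)

# Example: an RoI with IoU 0.55 gets target 0.6, and IoU >= 0.75 saturates to 1.0.
ious = torch.tensor([0.10, 0.55, 0.75, 0.90])
print(iou_guided_confidence_target(ious))
scores = torch.sigmoid(torch.randn(4))   # placeholder predicted confidences
print(confidence_loss(scores, ious))
```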
For the proposed RoI-grid pooling operation, we uniformly sample 6 \u00d7 6 \u00d7 6 grid points in each 3D proposal and the two neighboring radii \u02dc r of each grid point are (0.8m, 1.6m). For the KITTI dataset, the detection range is within [0, 70.4]m for the X axis, [\u221240, 40]m for the Y axis and [\u22123, 1]m for the Z axis, which is voxelized with the voxel size (0.05m, 0.05m, 0.1m) in each axis. For the Waymo Open dataset, the detection range is [\u221275.2, 75.2]m for the X and Y axes and [\u22122, 4]m for the Z axis, and we set the voxel size to (0.1m, 0.1m, 0.15m). Training and Inference Details. Our PV-RCNN framework is trained from scratch in an end-to-end manner with the ADAM optimizer. For the KITTI dataset, we train the entire network with the batch size 24, learning rate 0.01 for 80 epochs on 8 GTX 1080 Ti GPUs, which takes around 5 hours. For the Waymo Open Dataset, we train the entire network with batch size 64, learning rate 0.01 for 30 epochs on 32 GTX 1080 Ti GPUs. The cosine annealing learning rate strategy is adopted for the learning rate decay. For the proposal re\ufb01nement stage, we randomly sample 128 proposals with 1:1 ratio for positive and negative proposals, where a proposal is considered as a positive proposal for box re\ufb01nement branch if it has at least 0.55 3D IoU with the ground-truth boxes, otherwise it is treated as a negative proposal. During training, we utilize the widely adopted data augmentation strategy of 3D object detection, including random \ufb02ipping along the X axis, global scaling with a random scaling factor sampled from [0.95, 1.05], global rotation around the Z axis with a random angle sampled from [\u2212\u03c0 4 , \u03c0 4 ]. We also conduct the ground-truth sampling aug6 \fMethod Reference Modality Car 3D Detection Car BEV Detection Cyclist 3D Detection Cyclist BEV Detection Easy Mod. Hard Easy Mod. Hard Easy Mod. Hard Easy Mod. Hard MV3D [1] CVPR 2017 RGB + LiDAR 74.97 63.63 54.00 86.62 78.93 69.80 ContFuse [17] ECCV 2018 RGB + LiDAR 83.68 68.78 61.67 94.07 85.35 75.88 AVOD-FPN [11] IROS 2018 RGB + LiDAR 83.07 71.76 65.73 90.99 84.82 79.62 63.76 50.55 44.93 69.39 57.12 51.09 F-PointNet [22] CVPR 2018 RGB + LiDAR 82.19 69.79 60.59 91.17 84.67 74.77 72.27 56.12 49.01 77.26 61.37 53.78 UberATG-MMF [16] CVPR 2019 RGB + LiDAR 88.40 77.43 70.22 93.67 88.21 81.99 SECOND [34] Sensors 2018 LiDAR only 83.34 72.55 65.82 89.39 83.77 78.59 71.33 52.08 45.83 76.50 56.05 49.45 PointPillars [12] CVPR 2019 LiDAR only 82.58 74.31 68.99 90.07 86.56 82.81 77.10 58.65 51.92 79.90 62.73 55.58 PointRCNN [25] CVPR 2019 LiDAR only 86.96 75.64 70.70 92.13 87.39 82.72 74.96 58.82 52.53 82.56 67.24 60.28 3D IoU Loss [39] 3DV 2019 LiDAR only 86.16 76.50 71.39 91.36 86.22 81.20 Fast Point R-CNN [2] ICCV 2019 LiDAR only 85.29 77.40 70.24 90.87 87.84 80.52 STD [37] ICCV 2019 LiDAR only 87.95 79.71 75.09 94.74 89.19 86.42 78.69 61.59 55.30 81.36 67.23 59.35 Patches [13] Arxiv 2019 LiDAR only 88.67 77.20 71.82 92.72 88.39 83.19 Part-A2-Net [26] TPAMI 2020 LiDAR only 87.81 78.49 73.51 91.70 87.79 84.61 PV-RCNN (Ours) LiDAR only 90.25 81.43 76.82 94.98 90.65 86.14 78.60 63.71 57.65 82.49 68.89 62.41 Improvement +1.58 +1.72 +1.73 +0.24 +1.46 -0.28 -0.06 +2.12 +2.35 -0.07 +1.65 +2.13 Table 1. Performance comparison on the KITTI test set. The results are evaluated by the mean Average Precision with 40 recall positions. 
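The global augmentations listed in the training details above (random flipping along the X axis, global scaling in [0.95, 1.05], rotation around the Z axis in [-π/4, π/4]) can be sketched as follows. The (x, y, z, dx, dy, dz, heading) box layout is an assumed convention chosen for illustration, and the ground-truth sampling augmentation of [34] is omitted.

```python
import numpy as np

def augment_scene(points, boxes):
    """Apply a global flip / scale / rotation to points (N, 3+) and boxes (M, 7).

    Boxes follow an assumed (x, y, z, dx, dy, dz, heading) convention; the key point
    is only that boxes are transformed consistently with the points.
    """
    # Random flip along the X axis: mirror the Y coordinate and negate the heading.
    if np.random.rand() < 0.5:
        points[:, 1] *= -1.0
        boxes[:, 1] *= -1.0
        boxes[:, 6] *= -1.0

    # Global scaling with a factor drawn from [0.95, 1.05].
    scale = np.random.uniform(0.95, 1.05)
    points[:, :3] *= scale
    boxes[:, :6] *= scale

    # Global rotation around the Z axis with an angle from [-pi/4, pi/4].
    angle = np.random.uniform(-np.pi / 4, np.pi / 4)
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    points[:, :2] = points[:, :2] @ rot.T
    boxes[:, :2] = boxes[:, :2] @ rot.T
    boxes[:, 6] += angle
    return points, boxes
```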
Method Reference Modality 3D mAP MV3D [1] CVPR 2017 RGB + LiDAR 62.68 ContFuse[17] ECCV 2018 RGB + LiDAR 73.25 AVOD-FPN [11] IROS 2018 RGB + LiDAR 74.44 F-PointNet [22] CVPR 2018 RGB + LiDAR 70.92 VoxelNet [41] CVPR 2018 LiDAR only 65.46 SECOND [34] Sensors 2018 LiDAR only 76.48 PointRCNN [25] CVPR 2019 LiDAR only 78.63 Fast Point R-CNN [2] ICCV 2019 LiDAR only 79.00 STD [37] ICCV 2019 LiDAR only 79.80 PV-RCNN (Ours) LiDAR only 83.90 Table 2. Performance comparison on the moderate level car class of KITTI val split with mAP calculated by 11 recall positions. mentation [34] to randomly \u201cpaste\u201d some new ground-truth objects from other scenes to the current training scenes, for simulating objects in various environments. For inference, we keep the top-100 proposals generated from the 3D voxel CNN with a 3D IoU threshold of 0.7 for non-maximum-suppression (NMS). These proposals are further re\ufb01ned in the proposal re\ufb01nement stage with aggregated keypoint features. We \ufb01nally use an NMS threshold of 0.01 to remove the redundant boxes. 4.2. 3D Detection on the KITTI Dataset To evaluate the proposed model\u2019s performance on the KITTI val split, we train our model on the train set and report the results on the val set. To conduct evaluation on the test set with the KITTI of\ufb01cial test server, the model is trained with 80% of all available train+val data and the remaining 20% data is used for validation. Evaluation Metric. All results are evaluated by the mean average precision with a rotated IoU threshold 0.7 for cars and 0.5 for cyclists. The mean average precisions on the test set are calculated with 40 recall positions on the of\ufb01cial KITTI test server [10]. The results on the val set in Table 2 are calculated with 11 recall positions to compare with the results by the previous works. Comparison with state-of-the-art methods. Table 1 shows the performance of PV-RCNN on the KITTI test set from the of\ufb01cial online leaderboard as of Nov. 15th, 2019. IoU Thresh. 3D mAP BEV mAP Easy Moderate Hard Easy Moderate Hard 0.7 92.57 84.83 82.69 95.76 91.11 88.93 Table 3. Performance on the KITTI val split set with mAP calculated by 40 recall positions for car class. Method PointRCNN [25] STD [37] PV-RCNN (Ours) Recall (IoU=0.7) 74.8 76.8 85.5 Table 4. Recall of different proposal generation networks on the car class at moderate dif\ufb01culty level of the KITTI val split set. For the most important 3D object detection benchmark of the car class, our method outperforms previous state-of-theart methods with remarkable margins, i.e. increasing the mAP by 1.58%, 1.72%, 1.73% on easy, moderate and hard dif\ufb01culty levels, respectively. For the bird-view detection of the car class, our method also achieves new state-of-theart performance on the easy and moderate dif\ufb01culty levels while dropping slightly on the hard dif\ufb01culty level. For 3D detection and bird-view detection of cyclist, our methods outperforms previous LiDAR-only methods with large margins on the moderate and hard dif\ufb01culty levels while achieving comparable performance on the easy dif\ufb01culty level. Note that we train a single model for both the car and cyclist detection instead of separate models for each class as previous methods [34, 12, 25, 37] do. As of Nov. 
15th, 2019, our method currently ranks 1st on the car 3D detection leaderboard among all methods including both the RGB+LiDAR methods and LiDAR-only methods, and ranks 1st on the cyclist 3D detection leaderboard among all published LiDAR-only methods. The signi\ufb01cant improvements manifest the effectiveness of the PV-RCNN. We also report the performance of the most important car class on the KITTI val split with mAP from R11. Similarly, as shown in Table 2, our method outperforms previous stateof-the-art methods with large margins. The performance with R40 are also provided in Table 3 for reference. 7 \fDif\ufb01culty Method 3D mAP (IoU=0.7) 3D mAPH (IoU=0.7) BEV mAP (IoU=0.7) BEV mAPH (IoU=0.7) Overall 0-30m 30-50m 50m-Inf Overall 0-30m 30-50m 50m-Inf Overall 0-30m 30-50m 50m-Inf Overall 0-30m 30-50m 50m-Inf LEVEL 1 PointPillar [12] 56.62 81.01 51.75 27.94 75.57 92.1 74.06 55.47 MVF [40] 62.93 86.30 60.02 36.02 80.40 93.59 79.21 63.09 PV-RCNN (Ours) 70.30 91.92 69.21 42.17 69.69 91.34 68.53 41.31 82.96 97.35 82.99 64.97 82.06 96.71 82.01 63.15 Improvement +7.37 +5.62 +9.19 +6.15 +2.56 +3.76 +3.78 +1.88 LEVEL 2 PV-RCNN (Ours) 65.36 91.58 65.13 36.46 64.79 91.00 64.49 35.70 77.45 94.64 80.39 55.39 76.60 94.03 79.40 53.82 Table 5. Performance comparison on the Waymo Open Dataset (version 1.0 released in August, 2019) with 202 validation sequences for the vehicle detection. Note that the results of PointPillar [12] on the Waymo Open Dataset are reproduced by [40]. Method Reference Vehicle (LEVEL 1) Vehicle (LEVEL 2) Ped. (LEVEL 1) Ped. (LEVEL 2) Cyc. (LEVEL 1) Cyc. (LEVEL 2) mAP mAPH mAP mAPH mAP mAPH mAP mAPH mAP mAPH mAP mAPH *StarNet [20] NeurIPSw 2019 53.70 66.80 *PointPillar [12] CVPR 2019 56.62 59.25 *MVF [40] CoRL 2019 62.93 65.33 \u2020SECOND [34] Sensors 2018 72.27 71.69 63.85 63.33 68.70 58.18 60.72 51.31 60.62 59.28 58.34 57.05 PV-RCNN (Ours) 77.51 76.89 68.98 68.41 75.01 65.65 66.04 57.61 67.81 66.35 65.39 63.98 Table 6. Performance comparison on the Waymo Open Dataset (version 1.2 released in March 2020) with 202 validation sequences for three categories. \u2020: re-implemented by ourselves with their open source code. \u2217: performance on the version 1.0 of Waymo Open Dataset. 4.3. 3D Detection on the Waymo Open Dataset To further validate the effectiveness of our proposed PVRCNN, we evaluate the performance of PV-RCNN on the newly released large-scale Waymo Open Dataset. Evaluation Metric. We adopt the of\ufb01cial released evaluation tools for evaluating our method, where the mean average precision (mAP) and the mean average precision weighted by heading (mAPH) are used for evaluation. The rotated IoU threshold is set as 0.7 for vehicle detection and 0.5 for pedestrian / cyclist. The test data are split in two ways. The \ufb01rst way is based on objects\u2019 different distances to the sensor: 0 \u221230m, 30 \u221250m and > 50m. The second way is to split the data into two dif\ufb01culty levels, where the LEVEL 1 denotes the ground-truth objects with at least 5 inside points while the LEVEL 2 denotes the ground-truth objects with at least 1 inside points or the ground-truth objects manually marked as LEVEL 2. Comparison with state-of-the-art methods. Table 5 shows that our method outperforms previous state-of-theart [40] signi\ufb01cantly with a 7.37% mAP gain for the 3D object detection and a 2.56% mAP gain for the bird-view object detection. 
The results show that our method achieves remarkably better mAP on all distance ranges of interest, where the maximum gain is 9.19% for the 3D detection in the range of 30 \u221250m, which validates that our proposed multi-level point-voxel integration strategy is able to effectively capture more accurate contextual information for improving the 3D detection performance. As shown in Table 5, our method also achieves superior performance in terms of mAPH, which demonstrates that our model predicted accurate heading direction for the vehicles. The results on the LEVEL 2 dif\ufb01cult level are also reported in Table 5 for reference, and we could see that our method performs well even for the objects with fewer than 5 inside points. The experimental results on the large-scale Waymo Open dataset further validate the generalization ability of Method RPN with 3D Voxel CNN Keypoints Encoding RoI-grid Pooling Easy Mod. Hard RPN Baseline \u2713 90.46 80.87 77.30 Pool from Encoder \u2713 \u2713 91.88 82.86 80.52 PV-RCNN \u2713 \u2713 \u2713 92.57 84.83 82.69 Table 7. Effects of voxel-to-keypoint scene encoding strategy and RoI-grid pooling re\ufb01nement. f (pv1) i f (pv2) i f (pv3) i f (pv4) i f (bev) i f (raw) i Moderate mAP \u2713 81.98 \u2713 83.32 \u2713 83.17 \u2713 \u2713 84.54 \u2713 \u2713 \u2713 84.69 \u2713 \u2713 \u2713 \u2713 84.72 \u2713 \u2713 \u2713 \u2713 \u2713 84.75 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 84.83 Table 8. Effects of different feature components for VSA module. our proposed framework on various datasets. Better performance for multi-class detection with more proposals. To evaluate the performance of our method for multi-class detection, we further conduct experiments on the latest Waymo Open Dataset (version 1.2 released in March 2020). Here the number of proposals is increased from 100 to 500 since we only train a single model for detecting all three categories (e.g., vehicle, pedestrian and cyclist). As shown in Table 6, our method signi\ufb01cantly surpasses previous methods on all dif\ufb01culty levels of these three categories. We hope it could set up a strong baseline on the Waymo Open Dataset for future works. 4.4. Ablation Studies In this section, we conduct extensive ablation experiments to analyze individual components of our proposed method. All models are trained on the train split and evaluated on the val split for the car class of KITTI dataset [4]. Effects of voxel-to-keypoint scene encoding. We validate the effectiveness of voxel-to-keypoint scene encoding strategy by comparing with the native solution that directly 8 \faggregating multi-scale feature volumes of encoder to the RoI-grid points as mentioned in Sec. 3.1. As shown in the 2nd and 3rd rows of Table 7, the voxel-to-keypoint scene encoding strategy contributes signi\ufb01cantly to the performance in all three dif\ufb01culty levels. This bene\ufb01ts from that the keypoints enlarge the receptive \ufb01elds by bridging the 3D voxel CNN and RoI-grid points, and the segmentation supervision of keypoints also enables a better multi-scale feature learning from the 3D voxel CNN. Besides, a small set of keypoints as the intermediate feature representation also decreases the GPU memory usage when compared with the directly pooling strategy. Effects of different features for VSA module. In Table 8, we investigate the importance of each feature component of keypoints in Eq. (3) and Eq. (4). 
The 1st row shows that the performance drops a lot if we only aggregate features from f (raw) i , since the shallow semantic information is not enough for the proposal re\ufb01nement. The high level semantic information from f (pv3) i , f (pv4) i and f (bev) i improves the performance signi\ufb01cantly as shown in 2nd to 5th rows. As shown in last four rows, the additions of relative shallow semantic features f (pv1) i , f (pv2) i , f (raw) i further improves the performance slightly and the best performance is achieved with all the feature components as the keypoint features. Effects of PKW module. We propose the predicted keypoint weighting (PKW) module in Sec. 3.2 to re-weight the point-wise features of keypoint with extra keypoint segmentation supervision. Table 9 (1st and 4th rows) shows that removing the PKW module drops performance a lot, which demonstrates that the PKW module enables better multi-scale feature aggregation by focusing more on the foreground keypoints, since they are more important for the succeeding proposal re\ufb01nement network. Effects of RoI-grid pooling module. We investigate the effects of RoI-grid pooling module by replacing it with the RoI-aware pooling [26] and keeping the other modules consistent. Table 9 shows that the performance drops signi\ufb01cantly when replacing RoI-grid pooling module, which validates that our proposed set abstraction based RoI-grid pooling could learn much richer contextual information, and the pooled features also encode more discriminative RoI features by pooling more effective features with large search radii for each grid point. 1st and 2nd rows of Table 7 also shows that comparing with the 3D voxel RPN, the performance increases a lot after the proposal is re\ufb01ned by the features aggregated from the RoI-grid pooling module. 5." + }, + { + "url": "http://arxiv.org/abs/1812.04244v2", + "title": "PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud", + "abstract": "In this paper, we propose PointRCNN for 3D object detection from raw point\ncloud. The whole framework is composed of two stages: stage-1 for the bottom-up\n3D proposal generation and stage-2 for refining proposals in the canonical\ncoordinates to obtain the final detection results. Instead of generating\nproposals from RGB image or projecting point cloud to bird's view or voxels as\nprevious methods do, our stage-1 sub-network directly generates a small number\nof high-quality 3D proposals from point cloud in a bottom-up manner via\nsegmenting the point cloud of the whole scene into foreground points and\nbackground. The stage-2 sub-network transforms the pooled points of each\nproposal to canonical coordinates to learn better local spatial features, which\nis combined with global semantic features of each point learned in stage-1 for\naccurate box refinement and confidence prediction. Extensive experiments on the\n3D detection benchmark of KITTI dataset show that our proposed architecture\noutperforms state-of-the-art methods with remarkable margins by using only\npoint cloud as input. The code is available at\nhttps://github.com/sshaoshuai/PointRCNN.", + "authors": "Shaoshuai Shi, Xiaogang Wang, Hongsheng Li", + "published": "2018-12-11", + "updated": "2019-05-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Deep learning has achieved remarkable progress on 2D computer vision tasks, including object detection [8, 32, 16] and instance segmentation [6, 10, 20], etc. 
Beyond 2D scene understanding, 3D object detection is crucial and indispensable for many real-world applications, such as autonomous driving and domestic robots. While recent developed 2D detection algorithms are capable of handling large variations of viewpoints and background clutters in images, the detection of 3D objects with point clouds still faces great challenges from the irregular data format and large search space of 6 Degrees-of-Freedom (DoF) of 3D object. In autonomous driving, the most commonly used 3D sensors are the LiDAR sensors, which generate 3D point clouds to capture the 3D structures of the scenes. The dif\ufb01culty of point cloud-based 3D object detection mainly lies in irregularity of the point clouds. State-of-the-art 3D de3D box estimation region to frustum 2D image detector 2D RoIs point cloud point cloud in frustum 3D anchors front view projection & pooling bird view projection & pooling fusion front view projection & pooling bird view projection & pooling 2D CNN 2D CNN 3D RoIs fusion canonical 3D box refinement point cloud network point cloud RoI pooling point cloud segmentation 3D proposal generation point-wise feature vector bottom-up 3D proposal generation Point cloud Bird s view RGB image RGB image 3D Box Predictions 3D Box Predictions 3D Box Predictions b: Frustum-Pointnet a: Aggregate View Object Detection (AVOD) c: Our approach (PointRCNN) ... Figure 1. Comparison with state-of-the-art methods. Instead of generating proposals from fused feature maps of bird\u2019s view and front view [14], or RGB images [25], our method directly generates 3D proposals from raw point cloud in a bottom-up manner. tection methods either leverage the mature 2D detection frameworks by projecting the point clouds into bird\u2019s view [14, 42, 17] (see Fig. 1 (a)), to the frontal view [4, 38], or to the regular 3D voxels [34, 43], which are not optimal and suffer from information loss during the quantization. Instead of transforming point cloud to voxels or other regular data structures for feature learning, Qi et al. [26, 28] proposed PointNet for learning 3D representations directly from point cloud data for point cloud classi\ufb01cation and segmentation. As shown in Fig. 1 (b), their follow-up work [25] applied PointNet in 3D object detection to estimate the 3D bounding boxes based on the cropped frustum point cloud from the 2D RGB detection results. However, the performance of the method heavily relies on the 2D detection performance and cannot take the advantages of 3D information for generating robust bounding box proposals. Unlike object detection from 2D images, 3D objects in autonomous driving scenes are naturally and well separated 1 arXiv:1812.04244v2 [cs.CV] 16 May 2019 \fby annotated 3D bounding boxes. In other words, the training data for 3D object detection directly provides the semantic masks for 3D object segmentation. This is a key difference between 3D detection and 2D detection training data. In 2D object detection, the bounding boxes could only provide weak supervisions for semantic segmentation [5]. Based on this observation, we present a novel two-stage 3D object detection framework, named PointRCNN, which directly operates on 3D point clouds and achieves robust and accurate 3D detection performance (see Fig. 1 (c)). The proposed framework consists of two stages, the \ufb01rst stage aims at generating 3D bounding box proposal in a bottomup scheme. 
By utilizing 3D bounding boxes to generate ground-truth segmentation mask, the \ufb01rst stage segments foreground points and generates a small number of bounding box proposals from the segmented points simultaneously. Such a strategy avoids using the large number of 3D anchor boxes in the whole 3D space as previous methods [43, 14, 4] do and saves much computation. The second stage of PointRCNN conducts canonical 3D box re\ufb01nement. After the 3D proposals are generated, a point cloud region pooling operation is adopted to pool learned point representations from stage-1. Unlike existing 3D methods that directly estimate the global box coordinates, the pooled 3D points are transformed to the canonical coordinates and combined with the pooled point features as well as the segmentation mask from stage-1 for learning relative coordinate re\ufb01nement. This strategy fully utilizes all information provided by our robust stage-1 segmentation and proposal sub-network. To learn more effective coordinate re\ufb01nements, we also propose the full bin-based 3D box regression loss for proposal generation and re\ufb01nement, and the ablation experiments show that it converges faster and achieves higher recall than other 3D box regression loss. Our contributions could be summarized into three-fold. (1) We propose a novel bottom-up point cloud-based 3D bounding box proposal generation algorithm, which generates a small number of high-quality 3D proposals via segmenting the point cloud into foreground objects and background. The learned point representation from segmentation is not only good at proposal generation but is also helpful for the later box re\ufb01nement. (2) The proposed canonical 3D bounding box re\ufb01nement takes advantages of our highrecall box proposals generated from stage-1 and learns to predict box coordinates re\ufb01nements in the canonical coordinates with robust bin-based losses. (3) Our proposed 3D detection framework PointRCNN outperforms state-of-theart methods with remarkable margins and ranks \ufb01rst among all published works as of Nov. 16 2018 on the 3D detection test board of KITTI by using only point clouds as input. 2. Related Work 3D object detection from 2D images. There are existing works on estimating the 3D bounding box from images. [24, 15] leveraged the geometry constraints between 3D and 2D bounding box to recover the 3D object pose. [1, 44, 23] exploited the similarity between 3D objects and the CAD models. Chen et al. [2, 3] formulated the 3D geometric information of objects as an energy function to score the prede\ufb01ned 3D boxes. These works can only generate coarse 3D detection results due to the lack of depth information and can be substantially affected by appearance variations. 3D object detection from point clouds. State-of-the-art 3D object detection methods proposed various ways to learn discriminative features from the sparse 3D point clouds. [4, 14, 42, 17, 41] projected point cloud to bird\u2019s view and utilized 2D CNN to learn the point cloud features for 3D box generation. Song et al. [34] and Zhou et al. [43] grouped the points into voxels and used 3D CNN to learn the features of voxels to generate 3D boxes. However, the bird\u2019s view projection and voxelization suffer from information loss due to the data quantization, and the 3D CNN is both memory and computation inef\ufb01cient. [25, 39] utilized mature 2D detectors to generate 2D proposals from images and reduced the size of 3D points in each cropped image regions. 
PointNet [26, 28] is then used to learn the point cloud features for 3D box estimation. But the 2D imagebased proposal generation might fail on some challenging cases that could only be well observed from 3D space. Such failures could not be recovered by the 3D box estimation step. In contrast, our bottom-to-up 3D proposal generation method directly generates robust 3D proposals from point clouds, which is both ef\ufb01cient and quantization free. Learning point cloud representations. Instead of representing the point cloud as voxels [22, 33, 35] or multi-view formats [27, 36, 37], Qi et al. [26] presented the PointNet architecture to directly learn point features from raw point clouds, which greatly increases the speed and accuracies of point cloud classi\ufb01cation and segmentation. The follow-up works [28, 12] further improve the extracted feature quality by considering the local structures in point clouds. Our work extends the point-based feature extractors to 3D point cloud-based object detection, leading to a novel two-stage 3D detection framework, which directly generate 3D box proposals and detection results from raw point clouds. 3. PointRCNN for Point Cloud 3D Detection In this section, we present our proposed two-stage detection framework, PointRCNN, for detecting 3D objects from irregular point cloud. The overall structure is illustrated in Fig. 2, which consists of the bottom-up 3D proposal generation stage and the canonical bounding box re\ufb01nement stage. 3.1. Bottom-up 3D proposal generation via point cloud segmentation Existing 2D object detection methods could be classi\ufb01ed into one-stage and two-stage methods, where one-stage 2 \fPoint Cloud Decoder ... Bin-based 3D Box Generation Foreground Point Segmentation Point-wise feature vector Generate 3D proposal from each foreground point Semantic Features ... Canonical Transformation Point Cloud Region Pooling Point Cloud Encoder Point Cloud Encoder MLP Bin-based 3D Box Refinement Confidence Prediction Point Coords. Semantic Features Foreground Mask 3D RoIs Merged Features ... a: Bottom-up 3D Proposal Generation b: Canonical 3D Box Refinement Point cloud representation of input scene 3D boxes of detected objects Local Spatial Points ... Figure 2. The PointRCNN architecture for 3D object detection from point cloud. The whole network consists of two parts: (a) for generating 3D proposals from raw point cloud in a bottom-up manner. (b) for re\ufb01ning the 3D proposals in canonical coordinate. methods [19, 21, 31, 30, 29] are generally faster but directly estimate object bounding boxes without re\ufb01nement, while two-stage methods [10, 18, 32, 8] generate proposals \ufb01rstly and further re\ufb01ne the proposals and con\ufb01dences in a second stage. However, direct extension of the two-stage methods from 2D to 3D is non-trivial due to the huge 3D search space and the irregular format of point clouds. AVOD [14] places 80-100k anchor boxes in the 3D space and pool features for each anchor in multiple views for generating proposals. FPointNet [25] generates 2D proposals from 2D images, and estimate 3D boxes based on the 3D points cropped from the 2D regions, which might miss dif\ufb01cult objects that could only be clearly observed from 3D space. We propose an accurate and robust 3D proposal generation algorithm as our stage-1 sub-network based on wholescene point cloud segmentation. We observe that objects in 3D scenes are naturally separated without overlapping each other. 
All 3D objects\u2019 segmentation masks could be directly obtained by their 3D bounding box annotations, i.e., 3D points inside 3D boxes are considered as foreground points. We therefore propose to generate 3D proposals in a bottom-up manner. Speci\ufb01cally, we learn point-wise features to segment the raw point cloud and to generate 3D proposals from the segmented foreground points simultaneously. Based on this bottom-up strategy, our method avoids using a large set of prede\ufb01ned 3D boxes in the 3D space and signi\ufb01cantly constrains the search space for 3D proposal generation. The experiments show that our proposed 3D box proposal method achieves signi\ufb01cantly higher recall than 3D anchor-based proposal generation methods. Learning point cloud representations. To learn discriminative point-wise features for describing the raw point clouds, we utilize the PointNet++ [28] with multi-scale grouping as our backbone network. There are several other alternative point-cloud network structures, such as [26, 13] or VoxelNet [43] with sparse convolutions [9], which could also be adopted as our backbone network. Foreground point segmentation. The foreground points provide rich information on predicting their associated objects\u2019 locations and orientations. By learning to segment the foreground points, the point-cloud network is forced to capture contextual information for making accurate point-wise prediction, which is also bene\ufb01cial for 3D box generation. We design the bottom-up 3D proposal generation method to generate 3D box proposals directly from the foreground points, i.e., the foreground segmentation and 3D box proposal generation are performed simultaneously. Given the point-wise features encoded by the backbone point cloud network, we append one segmentation head for estimating the foreground mask and one box regression head for generating 3D proposals. For point segmentation, the ground-truth segmentation mask is naturally provided by the 3D ground-truth boxes. The number of foreground points is generally much smaller than that of the background points for a large-scale outdoor scene. Thus we use the focal loss [19] to handle the class imbalance problem as Lfocal(pt) = \u2212\u03b1t(1 \u2212pt)\u03b3 log(pt), (1) where pt = ( p for forground point 1 \u2212p otherwise During training point cloud segmentation, we keep the default settings \u03b1t = 0.25 and \u03b3 = 2 as the original paper. 3 \fBin-based 3D bounding box generation. As we mentioned above, a box regression head is also appended for simultaneously generating bottom-up 3D proposals with the foreground point segmentation. During training, we only require the box regression head to regress 3D bounding box locations from foreground points. Note that although boxes are not regressed from the background points, those points also provide supporting information for generating boxes because of the receptive \ufb01eld of the point-cloud network. A 3D bounding box is represented as (x, y, z, h, w, l, \u03b8) in the LiDAR coordinate system, where (x, y, z) is the object center location, (h, w, l) is the object size, and \u03b8 is the object orientation from the bird\u2019s view. To constrain the generated 3D box proposals, we propose bin-based regression losses for estimating 3D bounding boxes of objects. For estimating center location of an object, as shown in Fig. 3, we split the surrounding area of each foreground point into a series of discrete bins along the X and Z axes. 
For the bin-based localization, we set a search range $S$ for each of the X and Z axes of the current foreground point, and each 1D search range is divided into bins of uniform length $\delta$ to represent different object centers $(x, z)$ on the X-Z plane. We observe that using bin-based classification with a cross-entropy loss for the X and Z axes, instead of direct regression with a smooth L1 loss, results in more accurate and robust center localization. The localization loss for the X or Z axis therefore consists of two terms: one for bin classification along each axis, and one for residual regression within the classified bin. For the center location $y$ along the vertical Y axis, we directly use a smooth L1 loss, since most objects' $y$ values lie within a very small range and the L1 loss is sufficient for accurate $y$ estimation. The localization targets are formulated as
$$\text{bin}_x^{(p)} = \left\lfloor \frac{x^p - x^{(p)} + S}{\delta} \right\rfloor, \qquad \text{bin}_z^{(p)} = \left\lfloor \frac{z^p - z^{(p)} + S}{\delta} \right\rfloor,$$
$$\text{res}_u^{(p)} = \frac{1}{C}\left(u^p - u^{(p)} + S - \left(\text{bin}_u^{(p)} \cdot \delta + \frac{\delta}{2}\right)\right),\ u \in \{x, z\}, \qquad \text{res}_y^{(p)} = y^p - y^{(p)}, \quad (2)$$
where $(x^{(p)}, y^{(p)}, z^{(p)})$ are the coordinates of a foreground point of interest, $(x^p, y^p, z^p)$ are the center coordinates of its corresponding object, $\text{bin}_x^{(p)}$ and $\text{bin}_z^{(p)}$ are the ground-truth bin assignments along the X and Z axes, $\text{res}_x^{(p)}$ and $\text{res}_z^{(p)}$ are the ground-truth residuals for further location refinement within the assigned bins, and $C$ is the bin length used for normalization.

The targets for orientation $\theta$ and size $(h, w, l)$ estimation are similar to those in [25]. We divide the orientation range $2\pi$ into $n$ bins, and calculate the bin classification target $\text{bin}_\theta^{(p)}$ and residual regression target $\text{res}_\theta^{(p)}$ in the same way as for the x or z prediction. The object size $(h, w, l)$ is directly regressed by calculating the residuals $(\text{res}_h^{(p)}, \text{res}_w^{(p)}, \text{res}_l^{(p)})$ w.r.t. the average object size of each class over the entire training set.

Figure 3. Illustration of bin-based localization. The surrounding area along the X and Z axes of each foreground point is split into a series of bins to locate the object center.

In the inference stage, for the bin-based predicted parameters $x$, $z$, $\theta$, we first choose the bin center with the highest predicted confidence and add the predicted residual to obtain the refined parameters. For the other directly regressed parameters, including $y$, $h$, $w$, and $l$, we add the predicted residuals to their initial values. The overall 3D bounding box regression loss $L_{\text{reg}}$ for training can then be formulated as
$$L_{\text{bin}}^{(p)} = \sum_{u \in \{x, z, \theta\}} \left( \mathcal{F}_{\text{cls}}\big(\widehat{\text{bin}}_u^{(p)}, \text{bin}_u^{(p)}\big) + \mathcal{F}_{\text{reg}}\big(\widehat{\text{res}}_u^{(p)}, \text{res}_u^{(p)}\big) \right),$$
$$L_{\text{res}}^{(p)} = \sum_{v \in \{y, h, w, l\}} \mathcal{F}_{\text{reg}}\big(\widehat{\text{res}}_v^{(p)}, \text{res}_v^{(p)}\big), \qquad L_{\text{reg}} = \frac{1}{N_{\text{pos}}} \sum_{p \in \text{pos}} \left( L_{\text{bin}}^{(p)} + L_{\text{res}}^{(p)} \right), \quad (3)$$
where $N_{\text{pos}}$ is the number of foreground points, $\widehat{\text{bin}}_u^{(p)}$ and $\widehat{\text{res}}_u^{(p)}$ are the predicted bin assignments and residuals of the foreground point $p$, $\text{bin}_u^{(p)}$ and $\text{res}_u^{(p)}$ are the ground-truth targets calculated as above, $\mathcal{F}_{\text{cls}}$ denotes the cross-entropy classification loss, and $\mathcal{F}_{\text{reg}}$ denotes the smooth L1 loss.

To remove redundant proposals, we conduct non-maximum suppression (NMS) based on the oriented IoU from the bird's view to generate a small number of high-quality proposals. For training, we use a bird's-view IoU threshold of 0.85 and keep the top 300 proposals after NMS for training the stage-2 sub-network.
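The bin-based localization targets of Eq. (2) can be computed with a short routine like the following NumPy sketch. The array names (`fg_points`, `gt_centers`) are illustrative, and the normalization constant $C$ is taken to equal the bin length $\delta$ as stated above; this is a schematic reconstruction, not the released training code.

```python
import numpy as np

# Hyper-parameters used later in the paper: search range S = 3m, bin size delta = 0.5m.
S, DELTA = 3.0, 0.5
NUM_BINS = int(2 * S / DELTA)  # bins cover [-S, S) along each of the X and Z axes

def bin_localization_targets(fg_points, gt_centers):
    """Eq. (2): bin classification + in-bin residual targets for object centers.

    fg_points:  (N, 3) foreground point coordinates (x, y, z)
    gt_centers: (N, 3) center of the GT box each foreground point belongs to
    """
    offsets = gt_centers - fg_points              # object center relative to the point
    shifted = offsets[:, [0, 2]] + S              # shift x/z offsets into [0, 2S)
    bin_xz = np.floor(shifted / DELTA).clip(0, NUM_BINS - 1).astype(np.int64)
    # residual within the assigned bin, normalized by the bin length C = delta
    res_xz = (shifted - (bin_xz * DELTA + DELTA / 2)) / DELTA
    res_y = offsets[:, 1]                         # y is regressed directly with smooth-L1
    return bin_xz, res_xz, res_y

# toy usage: one foreground point 1.2m / -0.7m away from its object center in x / z
bins, res, res_y = bin_localization_targets(
    np.array([[10.0, -1.0, 5.0]]), np.array([[11.2, -0.9, 4.3]]))
print(bins, res, res_y)   # bin indices in [0, 12), normalized residuals in [-0.5, 0.5]
```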
For inference, we use oriented NMS with IoU threshold 0.8, and only top 100 proposals are kept for the re\ufb01nement of stage-2 sub-network. 3.2. Point cloud region pooling After obtaining 3D bounding box proposals, we aim at re\ufb01ning the box locations and orientations based on the previously generated box proposals. To learn more speci\ufb01c local features of each proposal, we propose to pool 3D points and their corresponding point features from stage-1 according to the location of each 3D proposal. For each 3D box proposal, bi = (xi, yi, zi, hi, wi, li, \u03b8i), we slightly enlarge it to create a new 3D box 4 \fY CCS-5 CCS-4 CCS-3 CCS-1 LiDAR Coordinate System X Z Z X Y Z X Y Canonical Transformation Canonical Coordinate System 2 Canonical Coordinate System 5 CCS-2 Figure 4. Illustration of canonical transformation. The pooled points belonged to each proposal are transformed to the corresponding canonical coordinate system for better local spatial feature learning, where CCS denotes Canonical Coordinate System. be i = (xi, yi, zi, hi + \u03b7, wi + \u03b7, li + \u03b7, \u03b8i) to encode the additional information from its context, where \u03b7 is a constant value for enlarging the size of box. For each point p = (x(p), y(p), z(p)), an inside/outside test is performed to determine whether the point p is inside the enlarged bounding box proposal be i. If so, the point and its features would be kept for re\ufb01ning the box bi. The features associated with the inside point p include its 3D point coordinates (x(p), y(p), z(p)) \u2208R3, its laser re\ufb02ection intensity r(p) \u2208R, its predicted segmentation mask m(p) \u2208 {0, 1} from stage-1, and the C-dimensional learned point feature representation f (p) \u2208RC from stage-1. We include the segmentation mask m(p) to differentiate the predicted foreground/background points within the enlarged box be i. The learned point feature f (p) encodes valuable information via learning for segmentation and proposal generation therefore are also included. We eliminate the proposals that have no inside points in the following stage. 3.3. Canonical 3D bounding box re\ufb01nement As illustrated in Fig. 2 (b), the pooled points and their associated features (see Sec. 3.2) for each proposal are fed to our stage-2 sub-network for re\ufb01ning the 3D box locations as well as the foreground object con\ufb01dence. Canonical transformation. To take advantages of our high-recall box proposals from stage-1 and to estimate only the residuals of the box parameters of proposals, we transform the pooled points belonging to each proposal to the canonical coordinate system of the corresponding 3D proposal. As shown in Fig. 4, the canonical coordinate system for one 3D proposal denotes that (1) the origin is located at the center of the box proposal; (2) the local X\u2032 and Z\u2032 axes are approximately parallel to the ground plane with X\u2032 pointing towards the head direction of proposal and the other Z\u2032 axis perpendicular to X\u2032; (3) the Y \u2032 axis remains the same as that of the LiDAR coordinate system. All pooled points\u2019 coordinates p of the box proposal should be transformed to the canonical coordinate system as \u02dc p by proper rotation and translation. Using the proposed canonical coordinate system enables the box re\ufb01nement stage to learn better local spatial features for each proposal. Feature learning for box proposal re\ufb01nement. As we mentioned in Sec. 
3.2, the re\ufb01nement sub-network combines both the transformed local spatial points (features) \u02dc p as well as their global semantic features f (p) from stage-1 for further box and con\ufb01dence re\ufb01nement. Although the canonical transformation enables robust local spatial features learning, it inevitably loses depth information of each object. For instance, the far-away objects generally have much fewer points than nearby objects because of the \ufb01xed angular scanning resolution of the LiDAR sensors. To compensate for the lost depth information, we include the distance to the sensor, i.e., d(p) = p (x(p))2 + (y(p))2 + (z(p))2, into the features of point p. For each proposal, its associated points\u2019 local spatial features \u02dc p and the extra features [r(p), m(p), d(p)] are \ufb01rst concatenated and fed to several fully-connected layers to encode their local features to the same dimension of the global features f (p). Then the local features and global features are concatenated and fed into a network following the structure of [28] to obtain a discriminative feature vector for the following con\ufb01dence classi\ufb01cation and box re\ufb01nement. Losses for box proposal re\ufb01nement. We adopt the similar bin-based regression losses for proposal re\ufb01nement. A ground-truth box is assigned to a 3D box proposal for learning box re\ufb01nement if their 3D IoU is greater than 0.55. Both the 3D proposals and their corresponding 3D ground-truth boxes are transformed into the canonical coordinate systems, which means the 3D proposal bi = (xi, yi, zi, hi, wi, li, \u03b8i) and 3D ground-truth box bgt i = (xgt i , ygt i , zgt i , hgt i , wgt i , lgt i , \u03b8gt i ) would be transformed to \u02dc bi = (0, 0, 0, hi, wi, li, 0), (4) \u02dc bgt i = (xgt i \u2212xi, ygt i \u2212yi, zgt i \u2212zi, hgt i , wgt i , lgt i , \u03b8gt i \u2212\u03b8i) The training targets for the ith box proposal\u2019s center location, (bini \u2206x, bini \u2206z, resi \u2206x, resi \u2206z, resi \u2206y), are set in the same way as Eq. (2) except that we use smaller search range S for re\ufb01ning the locations of 3D proposals. We still directly regress size residual (resi \u2206h, resi \u2206w, resi \u2206l) w.r.t. the average object size of each class in the training set since the pooled sparse points usually could not provide enough information of the proposal size (hi, wi, li). For re\ufb01ning the orientation, we assume that the angular difference w.r.t. the ground-truth orientation, \u03b8gt i \u2212\u03b8i, is within the range [\u2212\u03c0 4 , \u03c0 4 ], based on the fact that the 3D IoU between a proposal and their ground-truth box is at least 0.55. 
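A minimal NumPy sketch of the canonical transformation and the depth feature $d^{(p)}$ described above is given below. The rotation is applied only about the vertical axis, consistent with the X'/Z' definition; the function and argument names are illustrative, and the exact rotation sign convention depends on how the heading angle is defined in the dataset.

```python
import numpy as np

def canonical_transform(points, proposal):
    """Map pooled points into the proposal's canonical coordinate system (Sec. 3.3).

    points:   (N, 3) point coordinates (x, y, z) in the LiDAR frame
    proposal: (7,) box parameters (x, y, z, h, w, l, theta)
    Also returns d(p) = sqrt(x^2 + y^2 + z^2), computed before the transform so the
    depth information removed by the transform can be re-injected as an extra feature.
    """
    cx, cy, cz, _, _, _, theta = proposal
    d = np.linalg.norm(points, axis=1)
    shifted = points - np.array([cx, cy, cz])      # (1) origin at the proposal center
    c, s = np.cos(theta), np.sin(theta)
    # (2) rotate about the vertical Y axis so X' points along the proposal heading;
    # the sign convention here is an assumption for illustration.
    rot_y = np.array([[  c, 0.0,   s],
                      [0.0, 1.0, 0.0],
                      [ -s, 0.0,   c]])
    return shifted @ rot_y, d

# The GT box assigned to a proposal is mapped by the same transform, which yields the
# stage-2 regression targets of Eq. (4):
#   (x_gt - x_i, y_gt - y_i, z_gt - z_i, h_gt, w_gt, l_gt, theta_gt - theta_i).
```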
Given this bounded angular difference, we divide the range $\frac{\pi}{2}$ into discrete bins of size $\omega$ and predict the bin-based orientation targets as
$$\text{bin}_{\Delta\theta}^{i} = \left\lfloor \frac{\theta_i^{gt} - \theta_i + \frac{\pi}{4}}{\omega} \right\rfloor, \qquad \text{res}_{\Delta\theta}^{i} = \frac{2}{\omega}\left(\theta_i^{gt} - \theta_i + \frac{\pi}{4} - \left(\text{bin}_{\Delta\theta}^{i} \cdot \omega + \frac{\omega}{2}\right)\right). \quad (5)$$
The overall loss for the stage-2 sub-network can therefore be formulated as
$$L_{\text{refine}} = \frac{1}{\|\mathcal{B}\|} \sum_{i \in \mathcal{B}} \mathcal{F}_{\text{cls}}(\text{prob}_i, \text{label}_i) + \frac{1}{\|\mathcal{B}_{\text{pos}}\|} \sum_{i \in \mathcal{B}_{\text{pos}}} \left( \tilde{L}_{\text{bin}}^{(i)} + \tilde{L}_{\text{res}}^{(i)} \right), \quad (6)$$
where $\mathcal{B}$ is the set of 3D proposals from stage-1 and $\mathcal{B}_{\text{pos}}$ stores the positive proposals for regression, $\text{prob}_i$ is the estimated confidence of $\tilde{b}_i$ and $\text{label}_i$ is the corresponding label, $\mathcal{F}_{\text{cls}}$ is the cross-entropy loss supervising the predicted confidence, and $\tilde{L}_{\text{bin}}^{(i)}$ and $\tilde{L}_{\text{res}}^{(i)}$ are analogous to $L_{\text{bin}}^{(p)}$ and $L_{\text{res}}^{(p)}$ in Eq. (3), with the new targets calculated from $\tilde{b}_i$ and $\tilde{b}_i^{gt}$ as above. We finally apply oriented NMS with a bird's-view IoU threshold of 0.01 to remove overlapping bounding boxes and generate the final 3D bounding boxes of detected objects.

4. Experiments

PointRCNN is evaluated on the challenging 3D object detection benchmark of the KITTI dataset [7]. We first introduce the implementation details of PointRCNN in Sec. 4.1. In Sec. 4.2, we compare with state-of-the-art 3D detection methods. Finally, we conduct extensive ablation studies to analyze PointRCNN in Sec. 4.3.

4.1. Implementation Details

Network Architecture. For each 3D point-cloud scene in the training set, we subsample 16,384 points as the input. For scenes with fewer than 16,384 points, we randomly repeat points to reach 16,384. For the stage-1 sub-network, we follow the network structure of [28], where four set-abstraction layers with multi-scale grouping subsample the points into groups of sizes 4096, 1024, 256, and 64. Four feature propagation layers are then used to obtain the per-point feature vectors for segmentation and proposal generation. For the box proposal refinement sub-network, we randomly sample 512 points from the pooled region of each proposal as its input. Three set-abstraction layers with single-scale grouping [28] (with group sizes 128, 32, 1) are used to generate a single feature vector for object confidence classification and proposal location refinement.

The training scheme. Here we report the training details for the car category, since it has the majority of samples in the KITTI dataset; the proposed method can easily be extended to other categories (e.g., pedestrian and cyclist) with small modifications of the hyper-parameters. For the stage-1 sub-network, all points inside the 3D ground-truth boxes are considered foreground points and all other points are treated as background. During training, we ignore background points near the object boundaries by enlarging the 3D ground-truth boxes by 0.2m on each side for robust segmentation, since the 3D ground-truth boxes may have small variations. For the bin-based proposal generation, the hyper-parameters are set as search range S = 3m, bin size δ = 0.5m, and orientation bin number n = 12. To train the stage-2 sub-network, we randomly augment the 3D proposals with small variations to increase the diversity of proposals.
For training the box classi\ufb01cation head, a proposal is considered as positive if its maximum 3D IoU with ground-truth boxes is above 0.6, and is treated as negative if its maximum 3D IoU is below 0.45. We use 3D IoU 0.55 as the minimum threshold of proposals for the training of box regression head. For the bin-based proposal re\ufb01nement, search range is S = 1.5m, localization bin size is \u03b4 = 0.5m and orientation bin size is \u03c9 = 10\u25e6. The context length of point cloud pooling is \u03b7 = 1.0m. The two stage sub-networks of PointRCNN are trained separately. The stage-1 sub-network is trained for 200 epochs with batch size 16 and learning rate 0.002, while the stage-2 sub-network is trained for 50 epochs with batch size 256 and learning rate 0.002. During training, we conduct data augmentation of random \ufb02ip, scaling with a scale factor sampled from [0.95, 1.05] and rotation around vertical Y axis between [-10, 10] degrees. Inspired by [40], to simulate objects with various environments, we also put several new ground-truth boxes and their inside points from other scenes to the same locations of current training scene by randomly selecting non-overlapping boxes, and this augmentation is denoted as GT-AUG in the following sections. 4.2. 3D Object Detection on KITTI The 3D object detection benchmark of KITTI contains 7481 training samples and 7518 testing samples (test split). We follow the frequently used train/val split mentioned in [4] to divide the training samples into train split (3712 samples) and val split (3769 samples). We compare PointRCNN with state-of-the-art methods of 3D object detection on both val split and test split of KITTI dataset. All the models are trained on train split and evaluated on test split and val split. Evaluation of 3D object detection. We evaluate our method on the 3D detection benchmark of the KITTI test server, and the results are shown in Tab. 1. For the 3D detection of car and cyclist, our method outperforms previous state-of-the-art methods with remarkable margins on all three dif\ufb01culties and ranks \ufb01rst on the KITTI test board among all published works at the time of submission. Although most of the previous methods use both RGB image and point cloud as input, our method achieves better performance with an ef\ufb01cient architecture by using only the point cloud as input. For the pedestrian detection, compared with previous LiDAR-only methods, our method achieves better or comparable results, but it performs slightly worse than the methods with multiple sensors. We consider it is due 6 \fMethod Modality Car (IoU=0.7) Pedestrian (IoU=0.5) Cyclist (IoU=0.5) Easy Moderate Hard Easy Moderate Hard Easy Moderate Hard MV3D [4] RGB + LiDAR 71.09 62.35 55.12 UberATG-ContFuse [17] RGB + LiDAR 82.54 66.22 64.04 AVOD-FPN [14] RGB + LiDAR 81.94 71.88 66.38 50.80 42.81 40.88 64.00 52.18 46.61 F-PointNet [25] RGB + LiDAR 81.20 70.39 62.19 51.21 44.89 40.23 71.96 56.77 50.39 VoxelNet [43] LiDAR 77.47 65.11 57.73 39.48 33.69 31.51 61.22 48.36 44.37 SECOND [40] LiDAR 83.13 73.66 66.20 51.07 42.56 37.29 70.51 53.85 46.90 Ours LiDAR 85.94 75.76 68.32 49.43 41.78 38.63 73.93 59.60 53.59 Table 1. Performance comparison of 3D object detection with previous methods on KITTI test split by submitting to of\ufb01cial test server. The evaluation metric is Average Precision(AP) with IoU threshold 0.7 for car and 0.5 for pedestrian/cyclist. 
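To illustrate the GT-AUG augmentation described in the training scheme above, here is a simplified NumPy sketch that pastes sampled ground-truth boxes (and their interior points) from other scenes into the current scene while rejecting colliding placements. The axis-aligned BEV overlap test is a deliberate simplification of the oriented-box check, and all names are illustrative rather than taken from the released code.

```python
import numpy as np

def bev_overlaps(box_a, boxes_b):
    """Very rough BEV overlap test using axis-aligned footprints (ignores heading).
    Boxes are (x, y, z, h, w, l, theta); the X-Z plane is treated as the ground plane.
    """
    ax, az, aw, al = box_a[0], box_a[2], box_a[4], box_a[5]
    bx, bz, bw, bl = boxes_b[:, 0], boxes_b[:, 2], boxes_b[:, 4], boxes_b[:, 5]
    return (np.abs(ax - bx) < (aw + bw) / 2) & (np.abs(az - bz) < (al + bl) / 2)

def gt_aug(scene_points, scene_boxes, sampled_boxes, sampled_points_list, max_new=10):
    """GT-AUG: paste non-overlapping GT boxes and their interior points into the scene."""
    boxes = scene_boxes.copy()
    points = [scene_points]
    added = 0
    for box, pts in zip(sampled_boxes, sampled_points_list):
        if added >= max_new or bev_overlaps(box, boxes).any():
            continue                       # reject placements that collide with existing objects
        boxes = np.vstack([boxes, box[None]])
        points.append(pts)                 # points already expressed in the target scene's frame
        added += 1
    return np.concatenate(points, axis=0), boxes
```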
Method AP(IoU=0.7) Easy Moderate Hard MV3D [4] 71.29 62.68 56.56 VoxelNet [43] 81.98 65.46 62.85 SECOND [40] 87.43 76.48 69.10 AVOD-FPN [14] 84.41 74.44 68.65 F-PointNet [25] 83.76 70.92 63.65 Ours (no GT-AUG) 88.45 77.67 76.30 Ours 88.88 78.63 77.38 Table 2. Performance comparison of 3D object detection with previous methods on the car class of KITTI val split set. to the fact that our method only uses sparse point cloud as input but pedestrians have small size and image could capture more details of pedestrians than point cloud to help 3D detection. For the most important car category, we also report the performance of 3D detection result on the val split as shown in Tab. 2. Our method outperforms previous stage-of-the-art methods with large margins on the val split. Especially in the hard dif\ufb01culty, our method has 8.28% AP improvement than the previous best AP, which demonstrates the effectiveness of the proposed PointRCNN. Evaluation of 3D proposal generation. The performance of our bottom-up proposal generation network is evaluated by calculating the recall of 3D bounding box with various number of proposals and 3D IoU threshold. As shown in Tab. 3, our method (without GT-AUG) achieved signi\ufb01cantly higher recall than previous methods. With only 50 proposals, our method obtains 96.01% recall at IoU threshold 0.5 on the moderate dif\ufb01culty of car class, which outperforms recall 91% of AVOD [14] by 5.01% at the same number of proposals, note that the latter method uses both 2D image and point cloud for proposal generation while we only use point cloud as input. When using 300 proposals, our method further achieves 98.21% recall at IoU threshold 0.5. It is meaningless to increase the number of proposals since our method already obtained high recall at IoU threshold 0.5. In contrast, as shown in Tab. 3, we report the recall of 3D bounding box at IoU threshold 0.7 for reference. With 300 proposals, our method achieves 82.29% recall at IoU threshold 0.7. Although the recall of proposals are loosely [11, 8] related to the \ufb01nal 3D object detection performance, RoIs # Recall(IoU=0.5) Recall(IoU=0.7) MV3D AVOD Ours Ours 10 86.00 86.66 29.87 20 91.83 32.55 30 93.31 32.76 40 95.55 40.04 50 91.00 96.01 40.28 100 96.79 74.81 200 98.03 76.29 300 91.00 98.21 82.29 Table 3. Recall of proposal generation network with different number of RoIs and 3D IoU threshold for the car class on the val split at moderate dif\ufb01culty. Note that only MV3D [4] and AVOD [14] of previous methods reported the number of recall. the outstanding recall still suggests the robustness and accuracy of our bottom-up proposal generation network. 4.3. Ablation Study In this section, we conduct extensive ablation experiments to analyze the effectiveness of different components of PointRCNN. All experiments are trained on the train split without GT-AUG and evaluated on the val split with the car class1. Different inputs for the re\ufb01nement sub-network. As mentioned in Sec. 3.3, the inputs of the re\ufb01nement subnetwork consist of the canonically transformed coordinates and pooled features of each pooled point. We analyze the effects of each type of features to the re\ufb01nement sub-network by removing one and keeping all other parts unchanged. All experiments share the same \ufb01xed stage-1 sub-network for fair comparison. The results are shown in Tab. 4. 
Without the proposed canonical transformation, the performance of the re\ufb01nement sub-network dropped signi\ufb01cantly, which shows the transformation into a canonical coordinate system greatly eliminates much rotation and location variations and improve the ef\ufb01ciency of feature learning for the stage-2. We also see that removing the stage-1 features f (p) learned from point cloud segmentation and proposal generation decreases the mAP by 2.71% on the moderate dif\ufb01culty, which demonstrates the 1The KITTI test server only allows 3 submissions in every 30 days. All previous methods conducted ablation studies on the validation set. 7 \fCT RPN features camera depth seg. mask APE APM APH \u00d7 \u2713 \u2713 \u2713 7.64 13.68 13.94 \u2713 \u00d7 \u2713 \u2713 84.75 74.96 74.29 \u2713 \u2713 \u00d7 \u2713 87.34 76.79 75.46 \u2713 \u2713 \u2713 \u00d7 86.25 76.64 75.86 \u2713 \u2713 \u2713 \u2713 88.45 77.67 76.30 Table 4. Performance for different input combinations of re\ufb01nement network. APE, APM, APH denote the average precision for easy, moderate, hard dif\ufb01culty on KITTI val split, respectively. CT denotes canonical transformation. \u03b7 (context width) APE APM APH no context 86.65 75.68 68.92 0.5m 87.87 77.12 75.61 0.8m 88.27 77.40 76.07 1.0m 88.45 77.67 76.30 1.5m 86.82 76.87 75.88 2.0m 86.47 76.61 75.53 Table 5. Performance of adopting different context width \u03b7 of context-aware point cloud pooling. advantages of learning for semantic segmentation in the \ufb01rst stage. Tab. 4 also shows that the camera depth information d(p) and segmentation mask m(p) for 3D points p contribute slightly to the \ufb01nal performance, since the camera depth completes the distance information which is eliminated during the canonical transformation and the segmentation mask indicates the foreground points in the pooled regions. Context-aware point cloud pooling. In Sec. 3.2, we introduce enlarging the proposal boxes bi by a margin \u03b7 to create be i to pool more contextual points for each proposal\u2019s con\ufb01dence estimation and location regression. Tab. 5 shows the effects of different pooled context widths \u03b7. \u03b7 = 1.0m results in the best performance in our proposed framework. We notice that when no contextual information is pooled, the accuracies, especially those at the hard dif\ufb01culty, drops signi\ufb01cantly. The dif\ufb01cult cases often have fewer points in the proposals since the object might be occluded or far away from the sensor, which needs more context information for classi\ufb01cation and proposal re\ufb01nement. As shown in Tab. 5, too large \u03b7 also leads to performance drops since the pooled region of current proposals may include noisy foreground points of other objects. Losses of 3D bounding box regression. In Sec. 3.1, we propose the bin-based localization losses for generating 3D box proposals. In this part, we evaluate the performances when using different types of 3D box regression loss for our stage-1 sub-network, which include the residual-based loss (RB-loss) [43], residual-cos-based loss (RCB-loss), corner loss (CN-loss) [4, 14], partial-bin-based loss (PBB-loss) [25], and our full bin-based loss (BB-loss). Here the residual-cos-based loss encodes \u2206\u03b8 of residualbased loss by (cos(\u2206\u03b8), sin(\u2206\u03b8)) to eliminate the ambiguity of angle regression. 
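For reference, the sketch below contrasts the three angle-target encodings compared in this ablation: plain residual regression, the residual-cos encoding of Δθ as (cos Δθ, sin Δθ), and the bin-based targets of Eq. (5). It is a schematic illustration with function names chosen for clarity, not the exact loss implementations used in the experiments.

```python
import numpy as np

def residual_target(theta_gt, theta_anchor):
    # direct residual regression: ambiguous when the difference wraps around 2*pi
    return theta_gt - theta_anchor

def residual_cos_target(theta_gt, theta_anchor):
    # residual-cos encoding: regress (cos, sin) of the difference, removing the 2*pi ambiguity
    d = theta_gt - theta_anchor
    return np.array([np.cos(d), np.sin(d)])

def bin_based_target(theta_gt, theta_anchor, omega=np.deg2rad(10.0)):
    # Eq. (5): classify the difference (assumed within [-pi/4, pi/4]) into bins of size omega,
    # plus a normalized residual inside the chosen bin
    d = theta_gt - theta_anchor + np.pi / 4
    b = int(np.floor(d / omega))
    res = 2.0 / omega * (d - (b * omega + omega / 2))
    return b, res

print(bin_based_target(0.30, 0.10))   # small positive difference -> a mid-range bin index
```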
0 20 40 60 80 100 120 140 160 180 200 epochs 0.0 0.2 0.4 0.6 0.8 1.0 recall RB-Loss(iou=0.5) RCB-Loss(iou=0.5) CN-loss(iou=0.5) PBB-loss(iou=0.5) BB-loss(iou=0.5) RB-Loss(iou=0.7) RCB-Loss(iou=0.7) CN-loss(iou=0.7) PBB-loss(iou=0.7) BB-loss(iou=0.7) Figure 5. Recall curves of applying different bounding box regression loss function. The \ufb01nal recall (IoU thresholds 0.5 and 0.7) with 100 proposals from stage-1 are used as the evaluation metric, which are shown in Fig. 5. The plot reveals the effectiveness of our full bin-based 3D bounding box regression loss. Speci\ufb01cally, stage-1 sub-network with our full bin-based loss function achieves higher recall and converges much faster than all other loss functions, which bene\ufb01ts from constraining the targets, especially the localization, with prior knowledge. The partial-bin-based loss achieves similar recall but the convergence speed is much slower than ours. Both full and partial bin-based loss have signi\ufb01cantly higher recall than other loss functions, especially at IoU threshold 0.7. The improved residual-cos-based loss also obtains better recall than residual-based loss by improving the angle regression targets. 4.4. Qualitative Results Fig. 6 shows some qualitative results of our proposed PointRCNN on the test split of KITTI [7] dataset. Note that the image is just for better visualization and our PointRCNN takes only the point cloud as input to generation 3D detection results. 5." + } + ], + "Benjin Zhu": [ + { + "url": "http://arxiv.org/abs/2212.07289v1", + "title": "ConQueR: Query Contrast Voxel-DETR for 3D Object Detection", + "abstract": "Although DETR-based 3D detectors can simplify the detection pipeline and\nachieve direct sparse predictions, their performance still lags behind dense\ndetectors with post-processing for 3D object detection from point clouds. DETRs\nusually adopt a larger number of queries than GTs (e.g., 300 queries v.s. 40\nobjects in Waymo) in a scene, which inevitably incur many false positives\nduring inference. In this paper, we propose a simple yet effective sparse 3D\ndetector, named Query Contrast Voxel-DETR (ConQueR), to eliminate the\nchallenging false positives, and achieve more accurate and sparser predictions.\nWe observe that most false positives are highly overlapping in local regions,\ncaused by the lack of explicit supervision to discriminate locally similar\nqueries. We thus propose a Query Contrast mechanism to explicitly enhance\nqueries towards their best-matched GTs over all unmatched query predictions.\nThis is achieved by the construction of positive and negative GT-query pairs\nfor each GT, and a contrastive loss to enhance positive GT-query pairs against\nnegative ones based on feature similarities. ConQueR closes the gap of sparse\nand dense 3D detectors, and reduces up to ~60% false positives. Our\nsingle-frame ConQueR achieves new state-of-the-art (sota) 71.6 mAPH/L2 on the\nchallenging Waymo Open Dataset validation set, outperforming previous sota\nmethods (e.g., PV-RCNN++) by over 2.0 mAPH/L2.", + "authors": "Benjin Zhu, Zhe Wang, Shaoshuai Shi, Hang Xu, Lanqing Hong, Hongsheng Li", + "published": "2022-12-14", + "updated": "2022-12-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "main_content": "Introduction 3D object detection from point clouds has received much attention in recent years [7, 32, 34, 47, 52] as its wide applications in autonomous driving, robots navigation, etc. 
State-of-the-art 3D detectors [7,31,33,53] still adopt dense predictions with post-processing (e.g., NMS [2]) to obtain \ufb01nal sparse detections. This indirect pipeline usually involves many hand-crafted components (e.g., anchors, center masks) based on human experience, which involves much effort for tuning, and prevents dense detectors from being (a) Voxel-DETR (b) ConQueR Figure 1. Comparison of our baseline Voxel-DETR and ConQueR. GTs (green) and predictions (blue) of an example scene in the WOD is visualized. Sparse predictions of Voxel-DETR still contain many highly overlapped false positives (in the red dashed circle), while ConQueR can generate much sparser predictions. optimized end-to-end to achieve optimal performance. Recently, DETR-based 2D detectors [3, 39, 49, 57] show that transformers with direct sparse predictions can greatly simplify the detection pipeline, and lead to better performance. However, although many efforts [1,26,27] have been made towards direct sparse predictions for 3D object detection, because of the different characteristics of images and point clouds (i.e., dense and ordered images v.s. sparse and irregular points clouds), performance of sparse 3D object detectors still largely lags behind state-of-the-art dense detectors. To achieve direct sparse predictions, DETRs usually adopt a set of object queries [1, 3, 27, 39, 49, 57], and resort to the one-to-one Hungarian Matching [17] to assign ground-truths (GTs) to object queries. However, to guarantee a high recall rate, those detectors need to impose much more queries than the actual number of objects in a scene. For example, recent works [1, 27] select top-300 scored query predictions to cover only \u223c40 objects in each scene of Waymo Open Dataset (WOD) [36], while 2D DETR detectors [3,39,49,57] use 10\u00d7 more predictions than the average GT number of MS COCO [22]. As shown in Fig. 1(a), we visualize an example scene by a baseline DETR-based 3D detector, named Voxel-DETR, which shows its top-300 1 arXiv:2212.07289v1 [cs.CV] 14 Dec 2022 \fscored predictions. Objects are generally small and densely populated in autonomous driving scenes, while 3D DETRs adopt the same \ufb01xed top-N scored predictions as 2D DETRs, and lack a mechanism to handle such small and dense objects. Consequently, they tend to generate densely overlapped false positives (in the red-dashed circle), harming both the accuracy and sparsity [29,39] of \ufb01nal predictions. We argue the key reason is that the Hungarian Matching in existing 3D DETRs only assigns each GT to its best matched query, while all other unmatched queries near this GT are not effectively suppressed. For each GT, the oneto-one matching loss solely forces all unmatched queries to predict the same \u201cno-object\u201d label, and the best matched query are supervised without considering its relative ranking to its surrounding unmatched queries. This design causes the detectors to be insuf\ufb01ciently supervised in discriminating similar query predictions for each GT, leading to duplicated false positives for scenes with densely populated objects. To overcome the limitations of current supervision, we introduce a simple yet novel Query Contrast strategy to explicitly suppress predictions of all unmatched queries for each GT, and simultaneously enhance the best matched query to generate more accurate predictions in a contrastive manner. 
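To ground the discussion of one-to-one assignment, the snippet below shows the standard Hungarian matching step used by DETR-style detectors, implemented with SciPy's solver. The cost here combines a classification term and an L1 box distance purely for illustration; the exact cost terms and weights used by any specific detector may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, gt_labels, gt_boxes, w_cls=1.0, w_box=1.0):
    """One-to-one GT-to-query assignment used by DETR-style detectors.

    pred_logits: (Q, C) per-query class logits     pred_boxes: (Q, 7) per-query boxes
    gt_labels:   (M,)   GT class indices           gt_boxes:   (M, 7) GT boxes
    Returns (query_idx, gt_idx); every query not in query_idx is treated as "no-object".
    """
    z = pred_logits - pred_logits.max(axis=-1, keepdims=True)         # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    cost_cls = -probs[:, gt_labels]                                   # (Q, M)
    cost_box = np.abs(pred_boxes[:, None] - gt_boxes[None]).sum(-1)   # (Q, M) L1 box distance
    query_idx, gt_idx = linear_sum_assignment(w_cls * cost_cls + w_box * cost_box)
    return query_idx, gt_idx

# toy usage: 300 queries vs. 4 GT objects -> exactly 4 queries get matched
q_idx, g_idx = hungarian_match(np.random.randn(300, 3), np.random.rand(300, 7),
                               np.array([0, 2, 1, 1]), np.random.rand(4, 7))
print(q_idx, g_idx)
```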
The Query Contrast strategy is integrated into our baseline Voxel-DETR, which consists of a sparse 3D convolution backbone to extract features from voxel grids, and a transformer encoder-decoder architecture with a bipartite matching loss to directly generate sparse predictions. Our Query Contrast mechanism involves the construction of positive and negative GT-query pairs, and the contrastive learning on all GT-query pairs to supervise both matched and unmatched queries with knowledge of the states of their surrounding queries. Such GT-query pairs are directly created by reusing the Hungarian Matching results: each GT and its best matched query form the positive pair, and all other unmatched queries of the same GT then form negative pairs. To quantitively measure the similarities of the GT-query pairs, we formulate the object queries to be the same as GT boxes (i.e., using only box categories, locations, sizes and orientations), such that GTs and object queries can be processed by the same transformer decoder, and embedded into a uni\ufb01ed feature space to properly calculate their similarities. Given the GT-query similarities, we adopt the contrastive learning loss [5, 12, 54] to effectively enhance the positive (matched) query\u2019s prediction for each GT, and suppress those of all its negative queries at the same time. Moreover, to further improve the contrastive supervision, we construct multiple positive GT-query pairs for each GT by adding small random noises to the original GTs, which greatly boost the training ef\ufb01ciency and effectiveness. The resulting sparse 3D detector, named Query Contrast Voxel-DETR (ConQueR), signi\ufb01cantly improves the detection performance and sparsity of \ufb01nal predictions, as shown in Fig. 1(b). Moreover, ConQueR abandons the \ufb01xed top-N prediction scheme and enables to output a vary number of predictions for different scenes. ConQueR reduces up to \u223c60% false positives and sets new records on the challenging Waymo Open Dataset (WOD) [36]. Contributions are summarized as bellow: 1. We introduce a novel Query Contrast strategy into DETR-based 3D detectors to effectively eliminate densely overlapped false positives and achieve more accurate predictions. 2. We propose to construct multi-positive contrastive training, which greatly improve the effectiveness and ef\ufb01ciency of our Query Contrast mechanism. 3. Our proposed sparse 3D detector ConQueR closes the gap between sparse and dense 3D detectors, and sets new records on the challenging WOD benchmark. 2. Related Works End-to-End 2D Object Detection. End-to-end object detection aims to generate \ufb01nal sparse predictions without non-differentiable components like NMS. RelationNet [14] proposes an object relation module and DETR [3] greatly simpli\ufb01es the detection pipeline by removing many handcrafted components like anchors, NMS, etc. DETR introduce a set of object queries and resorts to the Hungarian Matching to associate each GT with the query predictions of minimal matching cost, and selects top-N scored predictions for inference. [39, 42] also reveal that one-to-one matching is the key to achieve sparse predictions. Following works [16, 19, 19, 25, 43, 57] improves DETR in many aspects including query design, convergence speed, and performance, surpassing CNN-based dense detectors [8,51,56] by a large margin. However, they still need to select a \ufb01xed number of predictions as \ufb01nal results, no matter how many objects are there in an image. 
Recently, DINO-DETR [49] introduces a \u201ccontrastive\u201d denoising training strategy. It creates positive and negative GTs conceptually, and supervise these GTs with different targets separately, which has no relation with contrastive learning. 3D Object Detection from Point Clouds. State-of-theart 3D detectors usually adopts voxel-based [31\u201333, 47], range-view [38,40] or point-based [7,45] paradigms to convert raw point clouds into dense feature representations, followed by detection heads to generate dense predictions and resort to NMS to \ufb01lter out low-quality predictions. Many attempts have also been made to incorporate transformer architectures [24, 30, 37, 53] into 3D object detection, but they still rely on post-processing. Others [1,27] make a step 2 \fqueries memory \u2026 Encoder Layer 1 Encoder Layer N Sinusoid Position Encoding point cloud VFE Sparse ResNet FPN 1/4 1/8 1/16 \u2026 Decoder Layer 1 Backbone Transformer Encoder Transformer Decoder 1/8 Class-agnostic FFN Set Matching loss FFN Set Matching loss Query Contrast top-k FFN Decoder Layer N Query Contrast Query Contrast GTs queries Final Predictions Query Contrast Figure 2. Overall pipeline of the proposed ConQueR. It consists of a 3D Sparse ResNet-FPN backbone to extract dense BEV features, and a transformer encoder-decoder architecture with one-to-one matching. Top-k scored object proposals from a class-agnostic FFN form the object queries to input to the transformer decoder. During training, GTs (noised) are concatenated with object queries to input to the transformer decoder to obtain uni\ufb01ed embeddings, which are then used for Query Contrast at each decoder layer. During inference, Top-scored predictions from the last decoder layer are kept as \ufb01nal sparse predictions. \u201cVFE\u201d denotes the voxel feature extractor in [44,47,55]. further to use the one-to-one matching loss to achieve direct sparse 3D predictions. [27] proposes Box-Attention, a variant of deformable attention to better capture local informations and applies it to 3D object detection. [1] introduce image features into a decoder-only architecture to enhance query features. However, their performance still largely lags behind state-of-the-art dense 3D detectors. Contrastive Learning for Object Detection. Contrastive learning aims to learn an embedding space such that similar data pairs stay close while dissimilar ones are far apart. [10] proposes to learn representations by contrasting positive pairs against negative ones. The popular InfoNCE loss [28] uses categorical cross-entropy loss to learn such an embedding space. Following works [4, 5, 12] demonstrate the superiority of contrastive learning on providing pre-trained weights for downstream tasks (e.g., 2D detection). Few works explore the use of contrastive loss in object detection. [18] introduces semantically structured embeddings from knowledge graphs to alleviate misclassi\ufb01cations. [46] conducts contrastive distillation between different feature regions to better capture teacher\u2019s information. As far as we know, we are the \ufb01rst to introduce the contrastive learning process into DETR-based detectors. 3. Query Contrast Voxel-DETR (ConQueR) State-of-the-art 3D detectors usually generate dense object predictions, which require many hand-designed components (e.g., anchors, box masks) based on prior knowledge, and resort to post-processing to \ufb01lter out low-quality and duplicated boxes. 
This indirect pipeline hinders the detectors from being optimized end-to-end and achieving optimal performance. 3D DETRs aim at streamlining these hand-crafted modules, and directly generating sparse predictions via the transformer architecture and one-to-one matching loss, but they still cannot compete with state-ofthe-art dense 3D detectors and face the problem of highly overlapped false positives, as shown in Fig. 1(a). To solve these challenges, we \ufb01rst introduce our competitive DETRbased 3D framework, named Voxel-DETR in Sec. 3.1, and present the Query Contrast strategy to tackle with the duplicated false positives and further improve the detection performance in Sec. 3.2. 3.1. Voxel-DETR As illustrated in Fig. 2, Voxel-DETR consists of a 3D backbone, an encoder-decoder transformer architecture, and a set-matching loss to achieve direct sparse predictions. Backbone. Point cloud is rasterized into sparse voxel grids and fed into a 3D Sparse ResNet [13] backbone network to extract sparse 3D features. These features are transformed into dense Bird Eye View (BEV) feature maps, followed by an FPN [20] to extract multi-scale features. Transformer. The encoder-decoder transformer is similar to the two-stage Deformable-DETR [57]. The 8\u00d7 downscaled BEV features from the FPN are input to the trans3 \fgt feature embeddings query feature embeddings query boxes GT boxes positive pairs negative pairs gradients Contrastive Loss Decoder Layer Decoder Layer EMA Projector GT Queries BEV Feature Map GT Matched Query Unmatched Queries Query Embeddings GT Embeddings Matched (Pos) Pairs Unmatched (Neg) Pairs Figure 3. Illustration of Query Contrast. Given the GT (green), Hungarian Matching gives its best matched (blue) and all other unmatched (gray) object queries. Query embeddings are projected by an extra MLP to align with GT embeddings. The contrastive loss is applied to all positive and negative GT-query pairs based on their feature similarities. former encoder, which consists of 3 encoder layers. Considering the characteristics of 3D detection from point clouds (i.e., all objects are relatively small and densely distributed), we adopt BoxAttention [27], which applies spatial in-box constraints to Deformable Attention [57], to perform local self-attention. A class-agnostic feed-forward network (FFN) head is used to generate initial object proposals from encoder features. Top-k scored box proposals are selected as object queries to input to the 3-layer transformer decoder. Decoder layers conduct inter-query self-attention and crossattention between query and encoder features, followed by prediction heads to perform iterative box re\ufb01nement [57]. Predicted query boxes from the previous decoder layer\u2019s FFN head are transformed by a 3-layer MLP and added with the updated query features (initialized as zero) from the previous decoder layer. Losses. During training, all FFN prediction heads use the Hungarian Matching to assign GTs to object queries. The detection loss Ldet consists of a focal loss [21] for classi\ufb01cation, a smooth L1 loss and a 3D GIoU loss for box regression: Ldet = \u03b1Lfocal + \u03b2Ll1 + \u03b3LGIoU, (1) where \u03b1, \u03b2, \u03b3 are hyper-parameters to balance the loss terms. During inference, top-N scored predictions from the last decoder layer are kept as the \ufb01nal sparse detections. 3.2. 
Query Contrast Although Voxel-DETR already achieves satisfactory performance, its top-N scored predictions still suffer from densely overlapped false positives (as shown in Fig. 1(a)). To tackle this problem, we present a novel Query Contrast mechanism (depicted in Fig. 3) to explicitly enhance each GT\u2019s best matched query over unmatched ones. We \ufb01rst construct positive and negative GT-query pairs for each GT, which are then processed by each decoder layer to generate aligned GT and query embeddings. To promote the positive queries\u2019 similarity towards a GT against negative ones, the contrastive loss is applied at each decoder layer. Construction of positive/negative GT-query pairs. To determine queries to be enhanced or suppressed for each GT, we \ufb01rst construct positive and negative GT-query pairs by reusing the Hungarian Matching results (used for Eq.(1)), which is naturally compatible with our VoxelDETR framework. Given a GT, the query with the minimal matching cost forms a positive pair with the GT, all other queries and this GT then form negative GT-query pairs. These GT-query pairs help to identify the object queries that need to be further enhanced or suppressed in our VoxelDETR. Motivated by the SwAV [4] that incorporates multiple image crops to form multiple positive pairs to boost the training process, we further add small noises of different magnitudes on each GT to generate multiple noised GT copies. The multiple noised GT copies then form additional GT-query pairs with the same positive/negative query partitions as original GTs. In practice, if a noised copy deviates too much from its original GT, the noised GT-query pairs would harm the contrastive training process. However, \ufb01nding proper noise magnitudes is rather laboursome and cannot generalise well across scenarios. We thus add an auxiliary GT de-noising loss similar to that in DN-DETR [19] to obligate the detector to recover the original GT from its noised versions, which ensures that the noised GT copies would not diverge. Note that the \u201cnoising-denoising\u201d step alone only has marginal effects to detection performance, while our multipositive Query Contrast based on the noised GT copies leads to superior detection performance, as shown in our ablation studies. Contrast positive pairs against negative pairs. Before applying supervisions to the positive and negative GT-query pairs, we need to quantitatively measure the similarities of these pairs. However, simple geometric metrics (e.g., IoU) cannot suf\ufb01ciently model the similarities between GTs and queries (i.e., category, appearance, location, size, etc.). We thus propose to embed GTs and queries into a latent space for comprehensive similarity measurement. In our VoxelDETR, object queries are formulated as proposal boxes (i.e. object category, box location, size, and orientation). Therefore, the transformer decoder can naturally be used to encode both GTs and queries into feature embeddings at a chosen layer. We simply select the output layer of the FFN prediction head after each decoder layer (as shown in Fig. 2), followed by a shared MLP for similarity estimation. However, we observe that the distributions of GT objects and query boxes can be quite different: GTs have no overlap with each other and generally distribute following the roadmap layouts, while queries might correspond to densely overlapped boxes and show up at random locations. 
As the transformer decoder utilizes self-attention to capture inter-box relations, these different distributions of GT and query boxes would greatly affect the estimation of their similarities. To mitigate the distribution gap, we adopt an extra MLP to project query features so that they align with the GTs' latent space (the "Projector" in Fig. 3). With the aligned GT and query embeddings, we estimate the similarities of all positive and negative GT-query pairs with the cosine similarity metric, and adopt the InfoNCE loss [28] to encourage the best matched query to generate more accurate predictions towards its assigned GT, while forcing all other unmatched queries to deviate away. Moreover, to obtain more stable GT representations for supervising queries, we adopt an exponential moving average (EMA) copy of each decoder layer to embed GTs, which is shown to be effective in our ablations.

Assume that for the $i$-th GT in a point cloud scene, we add $T$ different noises and denote the noised GT embeddings as $\{b_i^1, b_i^2, \ldots, b_i^T\}$, and denote the $K$ query embeddings as $\{q_1, q_2, \ldots, q_K\}$. Suppose that the Hungarian Matching assigns the $i$-th GT to the $j$-th query; our Query Contrast loss for the $i$-th GT, $L_i^{QC}$, can then be formulated as
$$L_i^{QC} = -\sum_{t=1}^{T} \log\left( \frac{\exp\big(\cos(b_i^t, g(q_j))/\tau\big)}{\sum_{k=1}^{K} \exp\big(\cos(b_i^t, g(q_k))/\tau\big)} \right), \quad (2)$$
where $\tau$ is the temperature coefficient, and $g(\cdot)$ denotes the extra MLP projector that aligns query features to the GTs' latent space. As shown in Fig. 2, the Query Contrast loss is adopted at every decoder layer. During inference, we abandon the widely adopted top-N scored prediction strategy and use a score threshold (e.g., 0.1) to filter out low-quality query predictions. Query Contrast works quite well in suppressing similar query predictions in local neighborhoods, as shown in Fig. 1(b). ConQueR greatly boosts the detection accuracy, and reduces up to ~60% of false positives.

Discussion: Why does Query Contrast improve DETR-based 3D detectors? As discussed in Sec. 1, current detection losses (i.e., focal loss for classification, smooth L1 and GIoU losses for regression) supervise each query without considering its surrounding queries, and thus lack supervision that trains detectors to discriminate similar object queries, especially in local regions. The proposed Query Contrast strategy tackles this issue by constructing a contrastive objective that supervises all queries simultaneously. As suggested by Eq. (2), for each GT object, the detector is instructed to identify the best matched query and is forced to learn to differentiate it from all other unmatched counterparts, even if some of them highly overlap with the best matched query. As a result, all unmatched queries are trained to deviate from the GT, so the duplicated false positives in our baseline Voxel-DETR can be effectively suppressed. Another core design of our Query Contrast is to encode the GTs and queries into a unified learnable latent space. GT objects are encoded to provide better forms of supervision for both matched and unmatched queries. Previous works [11,50] in 2D object detection also show that encoding labels into feature embeddings to serve as extra supervision can perform better than the common hand-designed learning targets (i.e., classification logits and regression offsets), but they generally work in a knowledge distillation (KD) manner, which cannot be utilized to supervise negative queries.
In contrast, our contrastive loss does not force matched queries to approach GTs directly, but encourages them to be \u201ccloser\u201d to their corresponding GT embeddings than other close-by duplicated queries. Note that in our Query Contrast mechanism, GT embeddings are processed in an off-line manner and encoded into a uni\ufb01ed space as queries\u2019, which serve as a type of supervision and force the detector to generate more similar query features as GTs\u2019. According to our experiments, the proposed Query Contrast strategy can not only suppress those duplicated false positives, but also contribute to better detection performance, which are consistent with the above discussions. 4. Experiments ConQueR is mainly evaluated on the Waymo Open Dataset [36] (WOD) benchmark using the of\ufb01cial detection metrics: mAP and mAPH (mAP weighted by heading) for Vehicle (Veh.), Pedestrian (Ped.), and Cyclist (Cyc.). The metrics are further splitted into two dif\ufb01culty levels according to the point numbers in GT boxes: LEVEL 1 (>5) and LEVEL 2 (\u22651). We conduct ablation studies on the validation set, and compare with state-of-the-art detectors on both validation and test set. 4.1. Implementation Details Training. We follow common practice as previous voxelbased methods [31\u201333, 47] to use point cloud range of [\u221275.2m, 75.2m] \u00d7 [\u221275.2m, 75.2m] \u00d7 [\u22122.0m, 4.0m] with voxel size [0.1m, 0.1m, 0.15m] in x, y, and z-axes respectively. The same set of augmentations (i.e., GT-Aug, \ufb02ip, rotation, scaling) are adopted following the previous works [47]. We follow [1,41] to use the \u201cfade-strategy\u201d to drop GT-Aug at the last epoch to avoid over\ufb01tting. Both our baseline Voxel-DETR and ConQueR are trained for 6 epochs unless otherwise speci\ufb01ed. We use the OneCycle [35] learning rate scheduler and AdamW [23] optimizer with maximal learning rate 0.001. Network. For the 3D backbone in Fig. 2, we use the same architecture as ResNet-18 [13] but use sparse 3D convolutions [9] to replace the 2D ones. No pre-trained weights are used. The same FPN structure as RetinaNet [21] is used to obtain multi-scale BEV features. For simplicity, we only use the 8\u00d7 downscaled features as input to the transformer, which adopts 3 encoder layers and 3 decoder layers for computation ef\ufb01ciency. We select top-1000 scored query predictions from the encoder\u2019s class-agnostic prediction head as object queries. 
We adopt top-N (e.g., 300) 5 \fMethods mAP/mAPH L2 Vehicle 3D AP/APH Pedestrian 3D AP/APH Cyclist 3D AP/APH L2 L2 L1 L2 L1 L2 L1 Dense Detectors CenterPointts [47] -/67.4 -/67.9 -/-/65.6 -/-/68.6-/-/PV-RCNN [32] 66.8/63.3 69.0/68.4 77.5/76.9 66.0/57.6 75.0/65.6 65.4/64.0 67.8/66.4 AFDetV2 [15] 71.0/68.8 69.7/69.2 77.6/77.1 72.2/67.0 80.2/74.6 71.0/70.1 73.7/72.7 SST TS [6] -/68.0/67.6 76.2/75.8 72.8/65.9 81.4/74.1 -/-/SWFormer [37] -/69.2/68.8 77.8/77.3 72.5/64.9 80.9/72.7 -/-/PillarNet-34 [31] 71.0/68.5 70.9/70.5 79.1/78.6 72.3/66.2 80.6/74.0 69.7/68.7 72.3/71.2 CenterFormer [53] 71.2/69.0 70.2/69.7 75.2/74.7 73.6/68.3 78.6/73.0 69.8/68.8 72.3/71.3 PV-RCNN++ [33] 71.7/69.5 70.6/70.2 79.3/78.8 73.2/68.0 81.3/76.3 71.2/70.2 73.7/72.7 Sparse Detectors BoxeR-3D -/63.9/63.7 70.4/70.0 61.5/53.7 64.7/53.5 -/50.2/48.9 TransFusion-L -/64.9 -/65.1 -/-/63.7 -/-/65.9 -/Voxel-DETR (ours) 68.8/66.1 67.8/67.2 75.4/74.9 69.7/63.1 77.6/70.5 69.0/67.9 71.7/70.5 ConQueR (ours) 70.3/67.7 68.7/68.2 76.1/75.6 70.9/64.7 79.0/72.3 71.4/70.1 73.9/72.5 ConQueR \u2020(ours) 73.1/70.6 71.0/70.5 78.4/77.9 73.7/68.1 80.9/75.2 74.5/73.3 77.3/76.1 ConQueR \u2021(ours) 74.0/71.6 71.0/70.5 78.4/77.9 75.8/70.1 82.4/76.6 75.2/74.1 77.5/76.4 Table 1. Performances on the WOD validation split. All models take single-frame input with the same range, no pre-training or ensembling is required. \u2020 denotes using the 2\u00d7 wider ResNet [48] with 1/4 downscaled BEV feature map in our backbone. \u2021 denotes conducting NMS on pedestrians and cyclists. Bold denotes the best entries, and underline denotes the second-best entries. ts denotes the two-stage model. scored predictions, or score threshold (e.g., \u22650.1) during inference. We set \u03b1 = 1, \u03b2 = 4, \u03b3 = 2 in Eq. (1). For the proposed Query Contrast, we use \u03c4 = 0.7 in Eq. (2), and adopt T = 3 noising groups with a maximal box noise ratio of 0.4 [19], and label noise ratio of 0.5 [19]. Category labels are simply encoded as one-hot embeddings rather than the learnable embeddings in DN-DETR [19]. 4.2. Main Results For fair comparison, all methods included use the same point cloud input range, do not use any pre-trained weights, test-time augmentation or model ensembling. Performance. As shown in Table 1, state-of-the-art 3D detectors are divided into dense and sparse categories according to whether they can directly generate sparse detections. Our sparse detector ConQueR sets new records on all categories of the WOD validation set. ConQueR with direct sparse predictions (the second-last entry) achieves \u223c1.0 mAPH/L2 higher than the previous best single-frame model PV-RCNN++ [33], and is over 3.0 mAPH/L2 higher than the popular anchor-free CenterPoint [47]. Notably, ConQueR demonstrates overwhelming performance on pedestrians and cyclists, outperforming previous best methods by \u223c2.0 APH/L2, which shows the effectiveness of our Query Contrast strategy especially for densely populated categories. The signi\ufb01cant performance improvements can also be validated on the WOD test set in Table 2. Moreover, ConQueR surpasses previous best sparse detectors TransFusion-L by \u223c6.0 mAPH/L2, closing the performance gap between sparse and dense 3D detectors. When compared with our baseline Voxel-DETR, the proposed Query Contrast mechanism brings over 1.6 mAPH/L2 without any Methods All Veh. Ped. Cyc. 
CenterPoint [47] 69.0 71.9 67.0 68.2 PV-RCNN++ [33] 70.2 73.5 69.0 68.2 AFDetv2 [15] 70.0 72.6 68.6 68.7 PillarNet-34 [31] 69.6 74.7 68.5 65.5 ConQueR (Ours) 72.0 73.3 70.9 71.9 Table 2. Single-frame performance comparisons on the WOD test set. APH/L2 results are reported. extra inference cost. Besides, our baseline Voxel-DETR with only 6 epochs of training outperforms previous sparse 3D detectors, and achieves comparable performance with CenterPoint (36-epoch training) with only 1/6 GPU hours. In addition, ConQueR has an inference latency of 70ms (46ms for CenterPoint)1. Although ConQueR with direct sparse predictions already achieves state-of-the-art performance, we \ufb01nd that applying NMS onto ConQueR\u2019s sparse predictions can further improve small and densely populated categories such as pedestrians, while NMS causes \u223c1.2 APH/L2 performance drop on the well-trained vehicles (as shown in Appendix. A). This is also the case with our baseline VoxelDETR. We speculate this is caused by the learning dif\ufb01culties inherent in the data for extremely similar queries (as shown in Fig. 1(b)) . We thus report ConQueR\u2019s performance after conducting NMS on pedestrians and cyclists (the last entry of Table 1).2 Sparsity. Apart from the performance improvements on the WOD of\ufb01cial metrics, ConQueR shows great poten1Latency is measured with batch size 1 on NVIDIA A100 GPU. 2FSD [7] adopts larger point cloud ranges and requires a point segmentation pre-train weights. It is \u223c1.0 mAPH/L2 lower than our ConQueR. 6 \fMethods Preds/Scene Veh. Ped. Cyc. CenterPointnms 192 66.4 62.9 67.9 TransfusiontopN 300 65.1 63.7 65.9 Voxel-DETRtopN 300 67.1 63.0 67.8 Voxel-DETRscore 222 67.2 63.1 67.9 ConQueRtopN 300 68.0 64.6 70.0 ConQueRscore 131 68.2 64.7 70.1 ConQueRscore \u2020 122 70.5 68.1 73.3 Table 3. Sparsity of \ufb01nal predictions. APH/L2 results are reported on the WOD validation set. The subscripts of each entry denotes the way they obtain \ufb01nal predictions. For example, CenterPointnms uses NMS to \ufb01lter out duplicated boxes, and Voxel-DETRtopN denotes it uses top-N scored proposals as \ufb01nal predictions, while ConQueRscore denotes that using score thresholding to generate \ufb01nal sparse predictions. \u2020 denotes our best model in Table 1. tial in reducing false positives and improving the sparsity of \ufb01nal predictions. We list the average number of predictions per scene for different 3D detectors in Table 3. For the baseline Voxel-DETR, thresholding according to scores helps to reduce \u223c25% predictions per sample with slightly better performance. With the help of Query Contrast, ConQueR further reduces the number of predictions substantially by \u223c60%. Besides, as the performance of ConQueR continually improves (the last two lines), the sparsity of \ufb01nal predictions steadily improve as well. When we adopt the same top-300 predictions as baseline Voxel-DETRtopN for evaluation, ConQueRtopN still improves the detection performance signi\ufb01cantly. This indicates the Query Contrast mechanism contributes to generating more accurate predictions from best matched queries. Furthermore, our ConQueR can achieve much sparser predictions even compared with NMS-based dense detectors such as CenterPoint. 4.3. Ablation Study Components of Query Contrast. We deduce the components of ConQueR to baseline Voxel-DETR by gradually removing multi-positive pairs, auxiliary de-noising loss, and contrastive loss in Table 4. 
Compared to ConQueR (the \ufb01rst row), removing the multiple noised copies of GTs from contrastive learning (the second row) causes over 0.6 mAPH/L2 performance drop. If we further remove the auxiliary denoising loss (the third row), performances of vehicles and pedestrians classes even become slightly better, indicating that the auxiliary denoising loss alone is not the key for performance improvements. Moreover, we can \ufb01nd that Query Contrast with only original GTs (the second last entry) already improves over the baseline (the last entry) dramatically especially on pedestrians and cyclists. Overall, the Query Contrast scheme brings 1.1, 1.7, 2.3 APH/L2 improvements for vehicles, pedestrians and cyclists respectively. InfoNCE Loss Aux DN Multi Pos APH/L2 Veh. Ped. Cyc. \u2713 \u2713 \u2713 68.2 64.7 70.1 \u2713 \u2713 67.4 (-0.8) 64.1 (-0.6) 69.6 (-0.5) \u2713 67.5 (+0.1) 64.2 (+0.1) 69.3 (-0.3) 67.1 (-0.4) 63.0 (-1.2) 67.8 (-1.5) Table 4. Effects of components in Query Contrast. The numbers in brackets denotes the performance drop (red) or increase (blue) for each component. Both the multi-positive contrastive loss (MultiPos) and the InfoNCE loss (Eq. (2)) from only original GTs have deep impact on performance, while the auxiliary denoising loss (Aux-DN) only has marginal effects. Methods Veh. Ped. Cyc. Voxel-DETR 67.1 63.0 67.8 ConQueRKD\u2212MSE 68.1 63.4 68.2 ConQueRQC\u2212GIoU 66.6 63.6 68.4 ConQueRQC\u2212Cos 68.2 64.7 70.1 Table 5. Effects of different supervisions or similarity metrics applied to GT-query pairs. APH/L2 results are reported. QC\u2212Cos denotes our default Query Contrast with the cosine similarity metric, while QC\u2212GIoU denotes using GIoU as the similarity measurement of GT-query pairs. KD\u2212MSE indicates replacing Query Contrast with Knowledge Distillation MSE loss to supervise positive GT-query pairs only. Effects of different supervisions or similarity metrics for GT-query pairs. We demonstrate the effects of different type of supervision or similarity metrics applied to GT-query pairs in Table 5. As discussed in Sec. 3.2, simple geometric relations like GIoU cannot suf\ufb01ciently measure the similarities between GTs and queries because they cannot take the appearance information into account, thus only have marginal effects compared to our baseline VoxelDETR. If we replace Query Contrast with the MSE loss in knowledge distillation (KD) to supervise positive GT-query pairs, performance of vehicles is still comparable with our Query Contrast strategy (the last entry), but it cannot handle densely populated categories like pedestrians and cyclists, indicating the importance of suppressing negative GT-query pairs in our Query Contrast strategy. Number of positive pairs. We present the results of using different numbers of noised GT copies in Table 6. We observe that using 3 groups of noised copies without original GTs (default setting) achieves the best performance. Moreover, incorporating original GT into the multi-positive contrastive loss harms the performance. The \ufb01rst two entries show that using single noised copies of GTs is better than using the original GTs. We conjecture this is caused by the lack of training for original GT boxes. The detector is only trained to recover from noised GTs, while having no idea how to deal with perfectly located original GTs. 
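As a concrete illustration of the objective discussed above, the following is a minimal PyTorch sketch of a multi-positive, InfoNCE-style GT-query contrastive term. It is one possible reading of Eq. (2), not the authors' released code: gt_embeds is assumed to stack each ground truth together with its noised copies (embedded by the EMA decoder), query_embeds are the decoder query features after the extra alignment MLP, and match_idx holds each GT's best-matched query index.

import torch
import torch.nn.functional as F

def query_contrast_loss(gt_embeds, query_embeds, match_idx, tau=0.7):
    """Illustrative multi-positive InfoNCE over GT-query pairs.

    gt_embeds:    (G, V, C) each of G ground truths with V views
                  (the original box plus its noised copies), embedded
                  by the EMA decoder.
    query_embeds: (Q, C) query features after the alignment MLP.
    match_idx:    (G,) long tensor with the matched query index per GT.
    """
    # Cosine similarity between every GT view and every query, scaled by tau.
    g = F.normalize(gt_embeds, dim=-1)              # (G, V, C)
    q = F.normalize(query_embeds, dim=-1)           # (Q, C)
    sim = torch.einsum("gvc,qc->gvq", g, q) / tau   # (G, V, Q)

    # For each GT view, the matched query is the positive and all other
    # queries act as negatives ("enhance-suppress").
    logits = sim.flatten(0, 1)                                    # (G*V, Q)
    targets = match_idx.repeat_interleave(gt_embeds.size(1))      # (G*V,)
    return F.cross_entropy(logits, targets)

Under this reading, removing the noised groups simply removes rows of gt_embeds, which corresponds to the "only original GTs" setting discussed above.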
7 \fOriginal GTs # Noised GT Groups Veh Ped Cyc \u2713 0 67.5 64.2 69.3 1 67.9 64.4 69.6 2 68.2 64.3 69.9 \u2713 2 67.8 64.4 68.8 3 68.2 64.7 70.1 \u2713 3 68.0 64.3 69.9 4 67.7 64.4 70.1 Table 6. Number of positive pairs in the contrastive loss. APH/L2 results are reported on the WOD validation split. \u2713denotes including the original GT group into Eq. (2). Projection Veh. Ped. Cyc. 67.2 64.2 69.3 Q 68.2 64.7 70.1 G&Q 67.3 64.1 68.9 Table 7. Design choices of the asymmetric feature alignment. APH/L2 results are reported. \u2018G\u2019 and \u2018Q\u2019 denotes GT and query embeddings respectively from the selected layer in detector or prediction heads. Layer to Contrast Veh. Ped. Cyc. Lastdecoder 68.1 63.9 69.7 LastFFN 68.2 64.7 70.1 SecondLastFFN 67.4 64.6 69.6 Table 8. Layers to conduct Query Contrast. Results are the APH/L2 reported on the WOD validation split. Lastdecoder and LastFFN denotes the output layer of each decoder layer and FFN prediction head respectively, while SecondLastFFN indicates the second-last layer of each FFN prediction head is chosen to conduct Query Contrast. Query-GT feature alignment. We demonstrate the importance of aligning query embeddings to GTs\u2019 with an extra MLP in Table 7. Removing the MLP for query embeddings alignment (the \ufb01rst row) or applying the MLP alignment for both GT and query embeddings (the last row) causes \u223c1 APH/L2 performance drop, indicating the importance of the asymmetric alignment design to mitigate the distribution gap between GT and query embeddings. Neural Layers for conducting Query Contrast. We compare 3 layer alternatives to conduct Query Contrast in Table 8: the output layer of each decoder layer, the output layer of each FFN prediction head, and the second-last layer of each FFN prediction head. The Query Contrast scheme can bring consistent improvements for all layer choices, and the features from the last layer of FFN prediction head performs the best, indicating that directly regulate the detection outputs via the contrastive loss can achieve the \u201cenhancesuppress\u201d effects onto queries to the utmost. Generalisation ability w.r.t. query numbers. We verify the generalization ability of Query Contrast by varying query numbers in Table 9. By default we adopt top-1000 scored proposals as initial queries to input to the transformer Methods #Query Veh. Ped. Cyc. Voxel-DETR 300 66.3 62.0 66.5 ConQueR 300 67.0 (+0.7) 63.6 (+1.6) 68.9 (+2.4) Voxel-DETR 500 66.9 62.8 67.3 ConQueR 500 67.8 (+0.9) 64.4 (+1.6) 69.0 (+1.7) Voxel-DETR 1000 67.1 63.0 67.8 ConQueR 1000 68.2 (+1.1) 64.7 (+1.7) 70.1 (+2.3) Table 9. Improvements of Query Contrast under different query numbers. APH/L2 results are reported. The blue numbers in brackets indicates the performance gains. Momentum Veh. Ped. Cyc. 0 67.9 64.4 69.0 0.9 67.6 64.3 69.1 0.99 68.0 64.5 69.2 0.999 68.2 64.7 70.1 Table 10. Effects of EMA momentum coef\ufb01cient. \u03c4 Veh. Ped. Cyc. 1.0 67.9 64.2 69.8 0.7 68.2 64.7 70.1 0.5 67.6 64.5 69.7 Table 11. Effects of \u03c4. APH/L2 results are reported. decoder. The performance gain of Query Contrast is relatively stable when we gradually reduce query numbers to 500 and 300. EMA coef\ufb01cients for generating GT embeddings. Here we show results of different momentums of our EMA decoder, which is used to embed GT boxes, in Table 10. 
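Concretely, the EMA decoder here can be understood as a momentum-averaged copy of the online decoder that is used only to embed GT boxes; a minimal sketch of such an update (illustrative names, not the released code):

import copy
import torch

@torch.no_grad()
def ema_update(online_decoder, ema_decoder, momentum=0.999):
    """Exponential moving average of decoder weights, used here to
    produce stable GT-box embeddings for the contrastive branch."""
    for p_online, p_ema in zip(online_decoder.parameters(),
                               ema_decoder.parameters()):
        p_ema.mul_(momentum).add_(p_online, alpha=1.0 - momentum)

# Typical setup: the EMA decoder starts as a frozen copy of the online one.
# ema_decoder = copy.deepcopy(online_decoder).requires_grad_(False)
# ema_update(online_decoder, ema_decoder, momentum=0.999)  # after each step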
The performance of using the same decoder as queries (the \ufb01rst line) already achieves satisfactory results, while introducing a more stable decoder for GT boxes can further improve the performance especially on categories with fewer instances (i.e., cyclists). Temperature coef\ufb01cient in Eq. (2). We shown the effects of different \u03c4 in Table 11. \u03c4 controls the contrastive learning dif\ufb01culty of the GT-query similarities, and we \ufb01nd \u03c4 = 0.7 leads to the best performance. 5." + }, + { + "url": "http://arxiv.org/abs/2007.03496v3", + "title": "AutoAssign: Differentiable Label Assignment for Dense Object Detection", + "abstract": "Determining positive/negative samples for object detection is known as label\nassignment. Here we present an anchor-free detector named AutoAssign. It\nrequires little human knowledge and achieves appearance-aware through a fully\ndifferentiable weighting mechanism. During training, to both satisfy the prior\ndistribution of data and adapt to category characteristics, we present Center\nWeighting to adjust the category-specific prior distributions. To adapt to\nobject appearances, Confidence Weighting is proposed to adjust the specific\nassign strategy of each instance. The two weighting modules are then combined\nto generate positive and negative weights to adjust each location's confidence.\nExtensive experiments on the MS COCO show that our method steadily surpasses\nother best sampling strategies by large margins with various backbones.\nMoreover, our best model achieves 52.1% AP, outperforming all existing\none-stage detectors. Besides, experiments on other datasets, e.g., PASCAL VOC,\nObjects365, and WiderFace, demonstrate the broad applicability of AutoAssign.", + "authors": "Benjin Zhu, Jianfeng Wang, Zhengkai Jiang, Fuhang Zong, Songtao Liu, Zeming Li, Jian Sun", + "published": "2020-07-07", + "updated": "2020-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Current state-of-the-art CNN based object detectors perform a common paradigm of dense prediction. Both twostage (the RPN [16] part) and one-stage detectors [10, 19, 25, 24] predict objects with various scales, aspect ratios, and classes over every CNN feature locations in a regular, dense sampling manner. This dense detection task raises an essential issue of sampling positives and negatives in the spatial locations, which we call label assignment. Moreover, as the modern CNN-based detectors commonly adopt multi-scale features (e.g., FPN [9]) to alleviate scale variance, label assignment requires not only selecting locations among spatial feature maps (spatial assignment) but also choosing the level of features with appropriate scale (scale assignment). As shown in Fig. 1, existing detectors mainly sample the positive and negative locations by human prior knowledge: (1) Anchor-based detectors like RetinaNet [10] preset several anchors of multiple scales and aspect ratios on RetinaNet Negatives Positives FCOS Negatives Positives AutoAssign Positives Negatives 0 1 Ignore Figure 1. Illustration of different label assignment strategies. Compared to \ufb01xed label assignment strategies like RetinaNet and FCOS, AutoAssign do not rely on preset samples and can adapt to object appearance automatically. For better visualization, we stack locations across multiple scales to show the \ufb01nal results. 
each location and resort to the Intersection over Union (IoU) for sampling positives and negatives among spatial and scale-level feature maps. (2) Anchor-free detectors like FCOS [19] sample a \ufb01xed fraction of center area as positive spatial locations for each object, and select certain stages of FPN [9] by the pre-de\ufb01ned scale constraints. These detectors follow the center prior (objects are more likely to located around the center of their bounding box) in data distributions to design their assignment strategies, which are proved to be effective on benchmarks like Pascal VOC [2, 3] and MS COCO [11]. However, appearances of objects vary a lot across categories and scenarios. The above \ufb01xed center sampling strategy may pick locations outside objects (e.g., bananas, umbrellas) as positives, thus cannot cover the diverse distributions of categories. To deal with the diverse data distributions, a few recent works introduce some partially dynamic strategies in label assignment. GuidedAnchoring [20] and MetaAnchor [22] dynamically change the prior of anchor shapes before sampling, while other methods adaptively modify the sampling strategy for each object in the spatial dimension [25, 24, 8] or the scale dimension [27]. The success of these partially dynamic methods demonstrates great potential in making label assignment more adaptive. However, these strategies can only free part of the label assignment to be data-driven. 1 arXiv:2007.03496v3 [cs.CV] 25 Nov 2020 \fThe other parts stay constrained by human designs, preventing label assignment to be further optimized. Intuitively, sampling locations on objects is better than background because they are prone to generate higher quality proposals. Motivated by this, we present AutoAssign, which makes label assignment fully data-dependent and appearance-aware. By dropping the many human knowledge (e.g., anchors, IoU thresholds, and top-k) and proposing a uni\ufb01ed weighting mechanism across spatial and scales, we reach a fully differentiable strategy. We adopt a similar paradigm of anchor-free detectors like FCOS [19] to predict one object proposal at each location directly. Given an object, we initially treat all the locations across FPN scales inside its bounding box as both positive and negative candidates for further optimization. To adapt to the data distribution of different categories, we propose a category-wise Center Weighting module to learn each category\u2019s distribution. To get adapted to each instance\u2019s appearance and scale, we propose a Con\ufb01dence Weighting module to modify the positive and negative con\ufb01dences of the locations in both spatial and scale dimensions. The two weighting modules are combined to generate positive and negative weight maps for all locations inside an object. According to Fig. 1, the assignment results can dynamically adapt to object appearances. The entire process of weighting is differentiable and can be conveniently optimized by back-propagation during training. All of the weighting modules are only used during loss calculation; thus, AutoAssign is inference cost-free. Moreover, the proposed method only requires the center prior knowledge, saving a lot of effort in hyper-parameters tuning, thus can accommodate other data distributions conveniently without any modi\ufb01cation. In summary, the contributions of this study are three-fold as follows: 1. An appearance-aware and fully differentiable weighting mechanism for label assignment is proposed. 
It enables spatial and scale assignment to be optimized in a uni\ufb01ed manner. 2. Two weighting modules (i.e., Center Weighting and Con\ufb01dence Weighting) are proposed to adjust the category-speci\ufb01c prior distribution and the instancespeci\ufb01c sampling strategy in both spatial and scale dimensions. 3. AutoAssign achieves state-of-the-art performance on the challenging MS COCO [11] dataset. Competitive results on datasets from different distributions, such as PASCAL VOC [2, 3], Object365 [18] and WiderFace [21] demonstrate the effectiveness and broad applicability of AutoAssign. 2. Related Work Fixed Label assignment Classical object detectors sample positives and negatives with pre-de\ufb01ned strategies. The RPN in Faster R-CNN [16] preset anchors of different scales and aspect ratios at each location. Given an instance, assignments in both scale and spatial dimensions are guided by the anchor matching IoU. This anchor-based strategy quickly dominates modern detectors and extends to multi-scale outputs (e.g., YOLO [14, 15], SSD [12], and RetinaNet [10]). Recently, attention has been geared toward anchor-free detectors. FCOS [19] and its precursors [6, 23, 13] drop the prior anchor settings and directly assign the spatial positions around bounding box center of each object as positives. In scale dimension, they pre-de\ufb01ne scale ranges of different FPN [9] stages to assign instances of different sizes. Both the anchor-based and anchor-free strategies follow the center prior inherent in data distributions . However, all of these methods only depend on human knowledge to solve spatial and scale assignment separately and cannot adapt to instance appearances. Dynamic Label assignment Recent detectors propose adaptive mechanisms to improve label assignment. GuidedAnchoring [20] leverages semantic features to guide the anchor settings and dynamically change the shape of anchors to \ufb01t various distributions of objects. MetaAnchor [22] randomly samples anchors of any shapes during training to cover different kinds of object boxes. Besides the modi\ufb01cation of anchor prior, some works directly change the sampling for each object. FSAF [27] dynamically assigns each instance to the most suitable FPN feature level with minimal training loss. SAPD [26] re-weights the positive anchors and applies an extra meta-net to select the proper FPN stages. FreeAnchor [25] constructs a bag of top-k anchor candidates based on IoU for every object and uses a Mean-Max function to weight among selected anchors, and NoisyAnchor [8] designs another weighting function to eliminate noisy anchors. ATSS [24] proposes an adaptive training sample selection mechanism by the dynamic IoU threshold according to the statistical characteristics of instances. Concurrent work PAA [7] adaptively separates anchors into positive and negative samples in a probabilistic manner. However, they still rely on hand-crafted anchors, thresholds, or other human knowledge for guiding the assignment, which could prevent label assignment from being further optimized. 3. Methodology Before starting, we need to ask: which part of label assignment is essential? To answer the question, we present existing label assignment strategies from a more holistic perspective in Table 1. 
We organize the components of 2 \fMethod Prior Instance AP scale spatial RetinaNet [10] anchor size & IoU IoU 36.3 FreeAnchor [25] anchor size & IoU top-k weighting, IoU 38.7 ATSS [24] anchor size & IoU top-k, dynamic IoU 39.3 GuidedAnchoring [20] dynamic anchor size & IoU IoU 37.1 FCOS* [19] center range radius 38.7 FSAF [27] anchor & center loss IoU & radius 37.2 AutoAssign (Ours) Center Weighting Con\ufb01dence Weighting 40.5 Table 1. Comparison of label assignment between different typical detectors. Results in terms of AP (%) are reported on the MS COCO 2017 val set, using ResNet-50 [5] as backbone. * denotes improved versions. some representative methods as prior-related and instancerelated. Clearly, apart from the heuristic-based methods like RetinaNet [10] and FCOS [19], all the existing dynamic strategies bene\ufb01t from its dynamic parts. But they only make partial components of label assignment data-driven, and the other components still rely on hand-crafted rules. We can conclude that: (1) All of the existing detectors obey the center prior. Sampling locations near box centers is effective. (2) Both spatial and scale assignments need to be tackled. But existing methods all solve the scale and spatial assignments using two different strategies. Motivated by these observations, our aim becomes making both the prior-related and instance-related components adapt to the category or instance characteristics. In this section, we will \ufb01rst give an overall picture of AutoAssign, then demonstrate how the priorand instance-level tasks are solved. 3.1. Overview As shown in Fig. 2, the upper gray box shows network architecture. We \ufb01rst follow the anchor-free manner like FCOS [19] to drop the pre-designed anchors and directly predict objects on each feature location. The network has three outputs: classi\ufb01cation score, Implicit-Objectness (ImpObj) score (which will be described later), and localization offsets. During training (the bottom green box), we \ufb01rst convert all the network predictions into a joint con\ufb01dence indicator. On top of this, we propose a weighting mechanism, which consists of a Center Weighting module and a Con\ufb01dence Weighting module. The Center Weighting module is designed to both satisfy the inherent center prior property in data and adapt to each category\u2019s speci\ufb01c shape pattern. It starts from the standard center prior and then learns the distribution of each category from data. The Con\ufb01dence Weighting module is for assigning the most appropriate locations of each instance based on its appearance and scale adaptively. For each location i in a ground-truth (gt) box, The two modules are combined together to generate positive and negative weights. Finally, positive and negative classi\ufb01cation loss will be calculated, and label assignment will be optimized jointly with the network. From the label assignment perspective, given an object, AutoAssign can automatically \ufb01nd both its appropriate scales across FPN levels and spatial locations based on the network outputs. As a result, the task of label assignment is solved properly in a uni\ufb01ed, appearance-aware, and differentiable manner. 3.2. Prior-level: Center Weighting The prior distribution is a fundamental element for label assignment, especially in the early stage of training. In general, the distribution of objects is subject to the center prior. However, the objects from different categories, e.g., giraffe, and human, may have distinct distributions. 
Keeping sampling center positions cannot capture the diverse distributions of different categories. Preferably, adaptive center distributions for different categories are more desired. Starting from the center prior, we introduce a categorywise Gaussian-shape weighting function G with learnable parameters. This Center Weighting module guarantees that locations closer to bounding box center have higher location weights than locations far from box center. Moreover, it can automatically adjust its shape according to data distributions of different categories. Here we de\ufb01ne G as: G(\u20d7 d | \u20d7 \u00b5,\u20d7 \u03c3) = e \u2212(\u20d7 d\u2212\u20d7 \u00b5)2 2\u20d7 \u03c32 , (1) where \u20d7 d denotes the offsets of a certain position inside an object to its box center along xand y-axis, which means it can be negative. \u20d7 \u00b5 and \u20d7 \u03c3 are learnable parameters of shape (K, 2). K is the number of categories of a dataset. Each category has two parameters along spatial dimension. As G contributes to the training loss, the parameters can be optimized by back-propagation. At the beginning, \u20d7 \u00b5 is initialized to 0 and \u20d7 \u03c3 to 1. Intuitively, \u20d7 \u00b5 controls center offset of each category from the box center. And \u20d7 \u03c3 measures each location\u2019s importance based on category characteristics. As 3 \f\ud835\udc64!\ud835\udc43! \ud835\udc64\"\ud835\udc43\" Confidence Weighting Confidence Weighting Center Prior P7 P6 P5 P4 P3 FPN Classification (H x W x C) Implicit objectness (H x W x 1) Localization (H x W x 4) Shared weights across FPN levels x 4 x 4 \ud835\udc60\ud835\udc50\ud835\udc5c\ud835\udc5f\ud835\udc52 \ud835\udc3f\ud835\udc47\ud835\udc45\ud835\udc35 box coordinates Center Weighting Inference Train mask gt box \ud835\udc64! concat Joint confidence Positive loss Negative loss \ud835\udc64! \ud835\udc64\" Figure 2. Illustration of AutoAssign. The upper block shows network architecture. The product of classi\ufb01cation and ImpObj is used as \ufb01nal classi\ufb01cation con\ufb01dence. LTRB means the localization offsets are in left-top-right-bottom format. The bottom block presents the label assignment strategy. Given an object, its box coordinates are used for calculating the initial center prior and generating foreground masks to select inbox locations. The indexed locations will be \ufb02attened and concatenated together. For positive candidates, both Con\ufb01dence Weighting and Center Weighting are used. For negative candidates, only Con\ufb01dence Weighting is applied. As a result, positive and negative weight maps are generated. In this process, both spatial and scale assignments are \ufb01nished jointly. shown in Fig. 2, the bounding box will generate a location weight map as demonstrated in \u201cCenter Prior\u201d. Given an object, we calculate the location weights using G on every FPN stage individually, then stack the weighting results together for later usage. Furthermore, to mitigate the interference caused by the different scales of FPN, we normalize the distance \u20d7 d by its downscale ratio. 3.3. Instance-level: Con\ufb01dence Weighting As mentioned above, all locations inside a bounding box across FPN stages will be considered as both positive and negative sample candidates at the beginning. This operation will signi\ufb01cantly increase the background locations in positive candidates and vice versa. 
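Before weighting these candidates, it may help to make the category-wise prior of Eq. (1) concrete; a minimal PyTorch-style sketch (an illustration, not the official implementation, with K categories and center offsets already normalized by the FPN stride):

import torch
import torch.nn as nn

class CenterWeighting(nn.Module):
    """Category-wise Gaussian prior G(d | mu, sigma) of Eq. (1).

    mu and sigma are learnable (K, 2) tensors, one (x, y) pair per
    category, initialized to 0 and 1 as described above.
    """
    def __init__(self, num_classes):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_classes, 2))
        self.sigma = nn.Parameter(torch.ones(num_classes, 2))

    def forward(self, offsets, labels):
        # offsets: (N, 2) signed distances of candidate locations to the
        #          box center, normalized by the FPN downscale ratio.
        # labels:  (N,) category index of the object each location belongs to.
        mu = self.mu[labels]          # (N, 2)
        sigma = self.sigma[labels]    # (N, 2)
        g = torch.exp(-((offsets - mu) ** 2) / (2 * sigma ** 2))
        return g.prod(dim=-1)         # combine the x- and y-axis factors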
This is quite different from all existing label assignment strategies, which only sample a subset of locations as positives before loss calculation. On the other hand, given a location inside a bounding box, to obtain a reasonable weight, all aspects, including classi\ufb01cation and regression, need to be taken into account. Motivated by these aspects, in Con\ufb01dence Weighting, we propose a joint con\ufb01dence indicator of both classi\ufb01cation and localization to guide the weighting strategy in both spatial and scale dimensions. Classi\ufb01cation con\ufb01dence. Generally speaking, selected positive samples of typical detectors imply that these locations have high con\ufb01dence of containing instances. However, in our setting, the initial positives set tends to contain a considerable part of background locations, as an object can hardly \ufb01ll its bounding box completely. Consequently, if a location is, in fact, background, all class predictions in the location should be unreasonable. So taking too many inferior background locations as positives will damage detection performance, which is also the case for the negatives set. To suppress noisy candidates (i.e., backgrounds in positives set, foregrounds in negatives set) from the inferior locations, we introduce a novel Implicit-Objectness (ImpObj) branch, which is shown in Fig. 2. The form of ImpObj is just like the center-ness in FCOS, but here we meet another issue of lacking explicit supervisions. Considering the aim that we need to \ufb01nd and emphasize proper positives and \ufb01lter out noise candidates dynamically, we optimize the ImpObj together with the classi\ufb01cation branch. Speci\ufb01cally, we use the product of ImpObj and classi\ufb01cation score as our recti\ufb01ed classi\ufb01cation con\ufb01dence. ImpObj thus shares supervision with the classi\ufb01cation branch and does not require explicit labels. 4 \fJoint con\ufb01dence indicator. For generating unbiased estimation of each location towards positives/negatives, we should include the localization con\ufb01dence besides classi\ufb01cation. The typical outputs of localization are box offsets, which are hard to measure the regression con\ufb01dence directly. Considering the fact that Binary Cross-Entropy (BCE) loss is commonly adopted for classi\ufb01cation task, we thus convert the localization loss Lloc i into likelihood: Pi(loc) = e\u2212\u03bbLloc i , (2) for being combined with classi\ufb01cation con\ufb01dence conviently, in which \u03bb is a hyper-parameter to balance between classi\ufb01cation and localization. GIoU loss [17] is used as Lloc i . Then we combine classi\ufb01cation and regression likelihood together to get the joint con\ufb01dence Pi. For the positive candidates, we de\ufb01ne positive con\ufb01dence P+ i = Pi(cls)\u00b7Pi(loc), where classi\ufb01cation con\ufb01dence Pi(cls) is the product of classi\ufb01cation score and ImpObj score. For a location candidate in negatives set, considering the fact that only classi\ufb01cation task will be performed on negative locations, thus the negative con\ufb01dence P\u2212 i = Pi(cls), which is the same as locations outside bounding boxes. Therefore, all background locations can be tackled uniformly. Positive weights. If a location has higher con\ufb01dence towards positive samples, we prefer to take it as a foreground. 
Based on the joint con\ufb01dence representation P+ i , we thus propose our con\ufb01dence weighting function C(P+ i ) in an exponential form to emphasize the locations with high con\ufb01dence containing objects as: C(P+ i ) = eP+ i /\u03c4, (3) where \u03c4 is a hyper-parameter to control the contributions of high and low con\ufb01dence locations towards positive losses. Intuitively, given an object i, for all locations inside its bounding box, we should focus on the proper locations with more accurate predictions. However, at the start of the training process, the network parameters are randomly initialized, making its predicted con\ufb01dences unreasonable. Thus guiding information from prior is also critical. For location i \u2208Sn, where Sn denotes all locations inside the bounding box at all the scale levels of object n, we combine the category-speci\ufb01c prior G(\u20d7 di) from center weighting module and the con\ufb01dence weighting module C(P+ i ) together to generate the positive weights w+ i as: w+ i = C(P+ i )G(\u20d7 di) P j\u2208Sn C(P+ j )G(\u20d7 dj) , (4) here for an object n, each w+ i is normalized by sum of location candidates in Sn for the purpose of being used as valid weights. Negative weights. As discussed above, a bounding box usually contains an amount of real-background locations, and we also need weighted negative losses to suppress these locations and eliminate false positives. Moreover, as the locations inside the boxes always tend to predict high con\ufb01dence of positives, we prefer the localization con\ufb01dence to generate the unbiased indicator of false positives. Paradoxically, the negative con\ufb01dence P\u2212has no gradient for the regression task, which means the localization con\ufb01dence Pi(loc) should not be optimized by negative loss. Hence we use IoUs between each position\u2019s predicted proposal and all objects to generate our negative weights w\u2212 i as: w\u2212 i = 1 \u2212f(ioui), (5) in which f(ioui) = 1/(1 \u2212ioui), ioui denotes max IoU between proposal of location i \u2208Sn and all ground truth boxes. To be used as valid weights, we normalize f(ioui) into range [0, 1] by its value range. This transformation sharpens the weight distributions and ensure that the location with highest IoU receives zero negative loss. For all locations outside bounding boxes, w\u2212 i is set to 1 because they are backgrounds for sure. 3.4. Loss function By generating positive and negative weight maps, we achieve the purpose of dynamically assigning more appropriate spatial locations and automatically selecting the proper FPN stages for each instance. As the weight maps contribute to the training loss, AutoAssign tackles the label assignment in a differentiable manner. The \ufb01nal loss function L of AutoAssign is de\ufb01ned as follows: L=\u2212 N X n=1 log( X i\u2208Sn w+ i P+ i ) \u2212 X k\u2208S log(1 \u2212w\u2212 k P\u2212 k ), (6) S denotes all the locations at all the scales on the output feature maps. To ensure at least one location matches object n, we use the weighted sum of all positive weights to get the \ufb01nal positive con\ufb01dence. Thus for a location inside bounding boxes, both positive and negative loss will be calculated with different weights. The positive loss and negative loss are calculated independently. Thus the magnitude of positive and negative weights requires no extra operation. To handle the severe imbalance problem in negative samples, the Focal Loss [10] is applied to the negative loss in Eq. 6. 4. 
Experiments Experiments are mainly evaluated on the MS COCO 2017 [11] benchmark, which contains around 118k images in the train set, 5k in the val set and 20k in the test-dev set. We report analysis and ablation studies on the val set and compare with other methods on the test-dev set. 5 \fStage 3 Stage 4 Stage 5 Stage 6 Stage 7 Center weighting Confidence weighting Positive weights Figure 3. Visualization of center weighting, con\ufb01dence weighting, and positive weights. From the 3rd row, objects of different shapes and sizes are assigned to its appropriate spatial locations and suitable scale stages automatically. 4.1. Implementation Details We use ResNet-50 [5] with FPN [9] as backbone for all experiments if not speci\ufb01cally pointed out. We initialize the backbone with weights pre-trained on ImageNet [1]. Following common practice, all models are trained for 1\u00d7 schedule named in [4], i.e., 90k iterations with an initial learning rate of 0.01, which is then divided by 10 at 60k and 80k iterations, with the weight decay of 0.0001 and the momentum of 0.9. Random horizontal \ufb02ipping is used in data augmentation. For all ablations, we use an image scale of 800 pixels for training and testing, unless otherwise speci\ufb01ed. We set \u03c4 = 1/3 in Eq. 3, and \u03bb = 5.0 in P(loc). Focal Loss with \u03b1 = 0.25 and \u03b3 = 2.0 is applied for negative classi\ufb01cation. NMS with IoU threshold 0.6 is applied to merge the results. 4.2. Ablation Studies Baseline. None of existing label assignment strategies can be used as baseline of our AutoAssign, because we only rely on the center prior, and do not require any other human knowledge like anchors, IoU thresholds, and top-k, which is indispensable for many other detectors. As a result, we build AutoAssign from a very simple and clean start point in Table 2. The 17.7 mAP baseline can be seen as removing w+ and w\u2212from Eq. 6. Other detectors, like RetinaNet, can also be implemented by adding modules to this simple baseline. Overall weighting mechanism. To demonstrate the effectiveness of the two weighting modules, we construct the positives weights w+ i using only Center Weighting or Con\ufb01dence Weighting separately in Table 2, while keeping the negatives weighting unchanged. Center Weighting brings Center Conf AP AP50 AP75 APS APM APL 17.7 30.9 18.1 15.7 24.2 23.3 \u2713 21.5 35.8 22.6 16.6 28.9 36.0 \u2713 37.7 57.4 40.6 20.3 41.4 52.0 \u2713 \u2713 40.5 59.8 43.9 23.1 44.7 52.9 Table 2. Effectiveness of Center Weighting and Con\ufb01dence Weighting. \u201cCenter\u201d means center weighting, and \u201cConf\u201d indicates con\ufb01dence weighting. relatively signi\ufb01cant performance gain, suggesting that the prior distribution is critical for guiding the training. Besides, con\ufb01dence weighting further improves the accuracy as it dynamically changes the strategy for each object in both spatial and scale dimensions according to object appearances. More design choices of the two modules can be found in Supplementary Materials. To better understand how spatial and scale assignment is solved through the weighting mechanism, we visualize the positive weight maps separately in each FPN stage from a well-trained detector. From Fig. 3, the Center Weighting is applied to all FPN stages to achieve a coarse weighting based on the category-speci\ufb01c center prior. Then the Con\ufb01dence Weighting generates weights according to object appearances. The two modules perform spatial and scale assignments of each instance jointly. 
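For reference, the weighting pipeline of Eqs. (2)-(5) can be sketched for a single object as follows. This is a simplified illustration (per-FPN-level bookkeeping and the ImpObj branch wiring are omitted; tensor names are ours), using the τ = 1/3 and λ = 5.0 reported above and taking the center-prior weights G as an input:

import torch

def autoassign_weights(cls_conf, loc_loss, ious, prior_g, lam=5.0, tau=1/3):
    """Illustrative positive/negative weights for one object (Eqs. 2-5).

    cls_conf: (S,) product of classification score and ImpObj score for
              the object's class at every candidate location inside its box.
    loc_loss: (S,) GIoU regression loss of each candidate location.
    ious:     (S,) IoU between each location's predicted box and the GT.
    prior_g:  (S,) center-prior weights G from the category-wise Gaussian.
    """
    p_loc = torch.exp(-lam * loc_loss)        # localization likelihood, Eq. (2)
    p_pos = cls_conf * p_loc                  # joint positive confidence P+
    c = torch.exp(p_pos / tau)                # confidence weighting, Eq. (3)
    w_pos = c * prior_g
    w_pos = w_pos / w_pos.sum()               # normalized positive weights, Eq. (4)

    f = 1.0 / (1.0 - ious.clamp(max=0.99))    # sharpen by IoU
    f = (f - f.min()) / (f.max() - f.min() + 1e-6)
    w_neg = 1.0 - f                           # negative weights, Eq. (5)
    return w_pos, w_neg

# The positive term of Eq. (6) then becomes -log((w_pos * p_pos).sum()) per
# object, while every location also contributes a focal-style negative loss
# scaled by w_neg.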
Center Weighting. To analyze the design of the center weighting, we compare different prior distributions in Table 3. We denote the Gaussian-shape function G without learnable parameters as "fixed", while "shared" means all categories share one group of learnable μ and σ. Compared to the fixed prior, the "shared" prior slightly drops the AP by 0.1%, while our category-wise prior increases the AP by 0.2% on MS COCO. As MS COCO contains 80 categories with a huge amount of data, its overall object distribution generally falls into a normal distribution, so the total improvement of the category-wise prior is not significant. But when we look at some classes with unique distributions, e.g., surfboard and hotdog, the improvements are notable.

Center   | AP   | moto | prk-mtr | bear | surfboard | hotdog
none     | 21.5 | 15.2 | 14.9    | 66.3 | 7.9       | 11.5
fixed    | 40.3 | 42.2 | 41.9    | 71.9 | 32.4      | 33.5
shared   | 40.2 | 41.8 | 40.7    | 69.2 | 33.1      | 32.7
category | 40.5 | 42.9 | 43.3    | 73.6 | 34.8      | 35.8
Table 3. Results of different center weighting choices over the whole categories of MS COCO and the subset. "moto" means motorcycle, "prk-mtr" means parking meter.

Figure 4. Visualization of learned center weighting weights of different categories (motorcycle, parking meter, bear, surfboard, hotdog). All of the objects are visualized on the same scale. Center weighting results of motorcycle and surfboard show that the prior distribution becomes an ellipse (controlled by σ) to accommodate the shape characteristics of these categories. Center offsets (controlled by μ) are actually larger than 10 pixels in the raw image, which means the prior can shift by one or more grids on the output feature maps. Images are best viewed in color.

This can also be evidenced by the visualization of the learned priors for each category in Fig. 4. We mark white points as the centers of bounding boxes and red points as the centers of learned priors. We can see that for the categories of parking meter and hotdog, the learned centers μ shift down, as these categories tend to have more essential clues in the bottom half. Moreover, the category-specific σ also changes for each category: for motorcycle and surfboard, the prior becomes an ellipse to accommodate the shape characteristics of these categories.

Confidence Weighting. We evaluate the effectiveness of the classification confidence P(cls), the localization confidence P(loc), and ImpObj separately in Table 4.

Confidence   | AP   | AP50 | AP75 | APS  | APM  | APL
P(cls)-only  | 38.7 | 59.9 | 41.6 | 22.9 | 42.0 | 49.5
P(loc)-only  | 39.7 | 58.4 | 43.1 | 22.4 | 43.6 | 51.6
no-obj       | 39.4 | 58.7 | 42.5 | 22.4 | 43.5 | 50.7
explicit-obj | 39.5 | 58.8 | 42.3 | 21.6 | 43.4 | 52.2
AutoAssign   | 40.5 | 59.8 | 43.9 | 23.1 | 44.7 | 52.9
Table 4. Comparison of different choices for confidence weighting. "P(cls)-only" means only P(cls) is used for confidence weighting. "no-obj" means ImpObj is not used in P(cls). "explicit-obj" means the objectness branch is given individual supervision, rather than sharing supervision with classification.

In the first two rows, we respectively use the classification confidence P(cls) and the localization confidence P(loc) alone in the confidence weighting. The combination of the two confidences (AutoAssign) achieves higher performance, indicating that the joint confidence indicator is the preferable choice when evaluating a location's quality. In the next two rows, we evaluate the contribution of ImpObj.
\u201cexplicit-obj\u201d means that we explicitly supervise the objectness branch with consistent labels (i.e., 1 for foregrounds and 0 for backgrounds) for all the locations inside the boxes. We \ufb01nd that simply using hard labels for the objectness has no help to performance, while our ImpObj can signi\ufb01cantly boost the performance by \u223c1% AP. Moreover, the performance of objects at all sizes can obtain obvious performance gains. We think the contribution of ImpObj comes from its effect on both \ufb01ltering out the noise candidates and achieving better separation from the background. Visualizations can be found in Supplementary Materials. 4.3. Comparison with State-of-the-art We compare AutoAssign with other detectors on MS COCO test-dev set. We adopt 2\u00d7 schedule following the previous works [19, 25, 24]. Results are shown in Table 5. Under the same training setting, AutoAssign can consistently outperform other counterparts. For example, AutoAssign with ResNet-101 backbone achieves 44.5% AP, and our best model achieves 52.1% AP, which outperforms all existing one-stage detectors. 4.4. Generalization Another bene\ufb01t of using little human knowledge it that huge effort on hyper-parameters tuning when transfer to other datasets. To demonstrate the generalization ability, 7 \fMethod Iteration AP AP50 AP75 APS APM APL ResNet-101 RetinaNet [10] 135k 39.1 59.1 42.3 21.8 42.7 50.2 FCOS [10] 180k 41.5 60.7 45.0 24.4 44.8 51.6 FreeAnchor [25] 180k 43.1 62.2 46.4 24.5 46.1 54.8 SAPD [26] 180k 43.5 63.6 46.5 24.9 46.8 54.6 ATSS [24] 180k 43.6 62.1 47.4 26.1 47.0 53.6 AutoAssign (Ours) 180k 44.5 64.3 48.4 25.9 47.4 55.0 ResNeXt-64x4d-101 FCOS* [19] 180k 44.7 64.1 48.4 27.6 47.5 55.6 FreeAnchor [25] 180k 44.9 64.3 48.5 26.8 48.3 55.9 SAPD [26] 180k 45.4 65.6 48.9 27.3 48.7 56.8 ATSS [24] 180k 45.6 64.6 49.7 28.5 48.9 55.6 AutoAssign (Ours) 180k 46.5 66.5 50.7 28.3 49.7 56.6 ResNeXt-64x4d-101-DCN SAPD [26] 180k 47.4 67.4 51.1 28.1 50.3 61.5 ATSS [24] 180k 47.7 66.5 51.9 29.7 50.8 59.4 AutoAssign (Ours) 180k 48.3 67.4 52.7 29.2 51.0 60.3 AutoAssign (Ours)\u2020 180k 49.5 68.7 54.0 29.9 52.6 62.0 AutoAssign (Ours)\u2020\u2021 180k 52.1 69.6 58.0 33.9 54.0 64.0 Table 5. Performance comparison with state-of-the-art one-stage detectors on MS COCO 2017 test-dev set. All results listed adopt multiscale training. \u2020 indicates multi-scale training with wider range [480, 960] used in [25]. \u2021 indicates multi-scale testing. * indicates improved versions. Method PASCAL VOC Objects365 WiderFace AP AP50 AP75 AP AP50 AP75 AP AP50 AP75 RetinaNet [10] 55.4 81.0 60.1 18.4 28.4 19.6 46.7 83.7 47.1 FCOS* [19] 55.4 80.5 61.1 20.3 29.9 21.9 48.1 87.1 48.4 FreeAnchor [25] 56.8 81.1 62.1 21.4 31.5 22.8 46.3 81.6 47.5 ATSS [24] 56.6 80.7 62.6 20.7 30.0 22.4 48.9 87.1 49.7 AutoAssign (Ours) 57.9 81.6 64.1 21.6 31.7 23.2 49.5 88.2 49.9 Table 6. Performance comparison with typical detectors on PASCAL VOC, Objects365 and WiderFace. * indicates improved versions. we evaluate AutoAssign and several other detectors on different data distributions, including general object detection (PASCAL VOC [2, 3], Objects365 [18]) and face detection (WiderFace [21]). In these experiments, we keep all the hyper-parameters unchanged and only adjust the training settings following the common paradigm of each dataset. Results are shown in Table 6. We \ufb01nd that the performance of other methods with \ufb01xed or partly \ufb01xed assigning strategies are unstable on different datasets. 
Although they may achieve excellent performance on certain datasets, their accuracies on the other dataset may be worse. This proves that the label assignment strategy of these methods has low robustness, thus needs to be adjusted cautiously. In contrast, AutoAssign can automatically adapt to different data distributions and achieve superior performance without any adjustment. 5." + }, + { + "url": "http://arxiv.org/abs/1908.09492v1", + "title": "Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection", + "abstract": "This report presents our method which wins the nuScenes3D Detection Challenge\n[17] held in Workshop on Autonomous Driving(WAD, CVPR 2019). Generally, we\nutilize sparse 3D convolution to extract rich semantic features, which are then\nfed into a class-balanced multi-head network to perform 3D object detection. To\nhandle the severe class imbalance problem inherent in the autonomous driving\nscenarios, we design a class-balanced sampling and augmentation strategy to\ngenerate a more balanced data distribution. Furthermore, we propose a balanced\ngroup-ing head to boost the performance for the categories withsimilar shapes.\nBased on the Challenge results, our methodoutperforms the PointPillars [14]\nbaseline by a large mar-gin across all metrics, achieving state-of-the-art\ndetection performance on the nuScenes dataset. Code will be released at CBGS.", + "authors": "Benjin Zhu, Zhengkai Jiang, Xiangxin Zhou, Zeming Li, Gang Yu", + "published": "2019-08-26", + "updated": "2019-08-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Point cloud 3D object detection has recently received more and more attention and becomes an active research topic in 3D computer vision community since it has great potential for visual applications like autonomous driving and robots navigation. The KITTI dataset [7] is the most widely used dataset in this task. Recently, NuTonomy releases the nuScenes dataset [2], which greatly extends KITTI in dataset size, sensor modalities, categories, and annotation numbers. Compared to the KITTI 3D detection benchmark [8], in which we need to locate and classify objects of 3 categories respectively, the nuScenes 3D Detection Challenge requires to detect 10 categories at the same time. Moreover, we need to estimate a set of attributes and object velocities for each object. Furthermore, the nuScenes dataset [2] suffers from severe class imbalance issues. As shown in Figure 2, instance distribution of categories in the nuScenes dataset is long-tailed, exhibiting an extreme imbalance in the number of examples between common and rare object classes. All the above challenges make the nuScenes 3D Detection Challenge more dif\ufb01cult, yet closer to real-world scenarios. Existing 3D object detection methods have explored several ways to tackle 3D object detection task. Several works [3, 13, 15, 29, 14] convert point cloud into bird-view format and apply 2D CNN to get 3D object detection results. Voxel-based methods [26, 32, 28] convert point cloud into regular 3D voxels then apply 3D CNN or 3D sparse convolution [10, 9, 5] to extract features for 3D object detection. Point-based Methods [19, 27] \ufb01rstly utilize 2D detectors to obtain 2D boxes from the image, and then apply PointNet++ [20, 21] on the cropped point cloud to further estimate location, size and orientation of 3D objects. 
Methods taking advantage of both voxel-based and point-based methods like [25, 30, 24] \ufb01rst use pointnet fashions to acquire highquality proposals, then voxel-based methods is applied to obtain \ufb01nal predictions. However, most of above methods are performed on each single category respectively in order to achieve their highest performance. For example, the previous SOTA method PointPillars [14] can only achieve very low performance on most of the rare categories(e.g., Bicycle). Multi-task Learning is another technique that we use in the challenge because the multi-category joint detection can be taken as a multi-task learning problem. Many works investigate how to adaptively set weights for the different task effectively. For example, MGDA [23] takes multi-task learning as a multi-objective optimization problem. GradNorm [4] uses gradient normalization strategies to balance loss of different tasks adaptively. Bene\ufb01ting from multi1 arXiv:1908.09492v1 [cs.CV] 26 Aug 2019 \fClass Instance Num Sample Num Instance Num After Sample Num After Car 413318 27558 1962556 126811 Truck 72815 20120 394195 104092 Bus 13163 9156 70795 49745 Trailer 20701 7276 125003 45573 Constr. Veh. 11993 6770 82253 46710 Pedestrian 185847 22923 962123 110425 Motocycle 10109 6435 60925 38875 Bicycle 9478 6263 58276 39301 Traf\ufb01c Cone 82362 12336 534692 73070 Barrier 125095 9269 881469 60443 Total 944881 28130 5132287 128100 Table 1: Instance and sample distribution of training split before and after dataset sampling(DS Sampling). Column Instance Num indicates instance number of each category. Column Sample Num indicates total sample numbers that a category appears in the training split. Column Instance Num After indicates instance number of each category after dataset sampling which expands the training set from 28130 to 128100 samples. Column Sample Num After is the same as column Instance Num After. Total number of samples indicates training dataset size, rather than the sum of all categories listed above, considering the fact that multiple categories can appear in the same point cloud sample. task learning, our method performs better when training all categories jointly than training each of them individually. There are 3 tracks in the nuScenes 3D Detection Challenge: Lidar Track, Vision Track, and Open Track. Only lidar input is allowed in Lidar Track. Only camera input is allowed in Vision Track. External data or map data is not allowed in above two tracks. As for Open Track, any input is allowed. Besides, pre-training is allowed in all of the 3 tracks. We participate in the Lidar Track of the challenge. Final leaderboard can be found at [17]. Finally, our contributions in this challenge can be concluded as follows: \u2022 We propose class-balanced sampling strategy to handle extreme imbalance issue in the nuScenes Dataset. \u2022 We design a multi-group head network to make categories of similar shapes or sizes could bene\ufb01t from each other, and categories of different shapes or sizes stop interfere with each other. \u2022 Together with improvements on network architecture, loss function, and training procedure, our method achieves state-of-the-art performance on the challenging nuScenes Dataset [2]. We \ufb01rst introduce our methodology in Section 2. Training details and network settings are presented in Section 3. Results are shown in Section 4. Finally we conduct conclusion in Section 5. 2. 
Methodology Overall network architecture is presented in Figure 3, which is mainly composed of 4 part: Input Module, 3D Feature Extractor, Region Proposal Network, and Multi-group Head network. Together with improvements on data augmentation, loss function, and training procedure, we not only make it perform 10 categories\u2019 3D object detection, velocity and attribute prediction simultaneously, but also achieve better performance than perform each category\u2019s detection respectively. In this section, we \ufb01rst introduce inputs and corresponding data augmentation strategies. Then the 3D Feature Extractor, Region Proposal Network, and Multi-group head network will be explained in detail. Finally, improvements on loss, training procedure as well as other tricks will be introduced. 2.1. Input and Augmentation The nuScenes dataset provides point cloud sweeps in (x, y, z, intensity, ringindex) format, each of them associated with a time-stamp. We follow the fashion of of\ufb01cial nuScenes baseline [2] by accumulating 10 Lidia sweeps to form dense point cloud inputs. Speci\ufb01cally, our input is of (x, y, z, intensity, \u2206t) format. \u2206t is the time lag between each non-keyframe sweep regarding keyframe sweep, and \u2206t ranges from 0s to 0.45s. We use grid size 0.1m, 0.1m, 0.2m in x, y, z axis respectively to convert the raw point cloud into voxel presentation. In each voxel, we take mean of all points in the same voxel to get \ufb01nal inputs to the network. No extra data normalization strategy is applied. As shown in Figure 2, the nuScenes dataset [2] has a se2 \fFigure 1: Examples of ground plane detection result. Points belonging to ground plane are shown in color, which can be formulated by Ax + By + Cz + D = 0. In average, the ground plane is about -1.82 meters along z axis. Open3D [31] is used for visualization. vere class imbalance problem . Blue columns tell the original distribution of training split. To alleviate the severe class imbalance, we propose DS Sampling, which generates a smoother instance distribution as the orange columns indicate. To this end, like the sampling strategy used in the image classi\ufb01cation task, we \ufb01rstly duplicate samples of a category according to its fraction of all samples. The fewer a category\u2019s samples are, more samples of this category are duplicated to form the \ufb01nal training dataset. More specifically, we \ufb01rst count total point cloud sample number that exists a speci\ufb01c category in the training split, then samples of all categories which are summed up to 128106 samples. Note that there exist duplicates because multiple objects of different categories can appear in one point cloud sample. Intuitively, to achieve a class-balanced dataset, all categories should have close proportions in the training split. So we randomly sample 10% of 128106 (12810) point cloud samples for each category from the class-speci\ufb01c samples mentioned above. As a result, we expand the training set from 28130 samples to 128100 samples, which is about 4.5 times larger than the original dataset. To conclude, DS Sampling can be seen as improving the average density of rare classes in the training split. Apparently, DS Sampling could alleviate the imbalance problem effectively, as shown in orange columns in Figure 2. Besides, we use GT-AUG strategy as proposed in SECOND [28] to sample ground truths from an annotation database, which is generated of\ufb02ine, and place those sampled boxes into another point cloud. 
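For clarity, the DS Sampling step described above can be sketched as follows (illustrative, not the released code; it assumes per-category lists of the training-sample indices in which that category appears, uses the 10% ratio mentioned above, and samples with replacement so that rare classes are duplicated):

import random

def ds_sampling(samples_per_class, ratio=0.1, seed=0):
    """Class-balanced dataset duplication (DS Sampling).

    samples_per_class: dict mapping category name -> list of point-cloud
                       sample indices that contain at least one instance of
                       that category (duplicates across categories are
                       expected, since one sample can hold several classes).
    Returns an expanded, roughly class-balanced list of sample indices.
    """
    rng = random.Random(seed)
    total = sum(len(v) for v in samples_per_class.values())  # ~128106 here
    num_per_class = int(total * ratio)                        # ~12810 here
    expanded = []
    for cls, indices in samples_per_class.items():
        # Sample with replacement so rare classes are duplicated more often.
        expanded.extend(rng.choices(indices, k=num_per_class))
    return expanded

With the per-class counts of Table 1, this expands the 28130 keyframes to roughly 128100 training samples, as reported above.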
Note that the ground plane location of point cloud sample needs to be computed before we could place object boxes properly. So we utilize the least square method and RANSAC [6] to estimate each sample\u2019s ground plane, which can be formulated as Ax + By + Cz + D = 0. Examples of our ground plane detection module can be seen in Figure 1. With the help of the above two strategies, we enable the model to perform better in all, especially tail classes, showing an obvious promoting effect on alleviating the problem of class imbalance. 0 50000 200000 250000 300000 350000 400000 450000 Car Pedestrian Barrier Truck Trailer Bus Cons Veh Motorcycle Bicycle Instance Distribution Before and After Data Sampling 500000 150000 100000 After T C Before Figure 2: Class imbalance in the nuScenes Dataset. 50% categories account for only a small fraction of total annotations. Distribution of original Training Split is shown in blue. Distribution of sampled Training Split is shown is orange. 2.2. Network As Shown in Figure 3, we use sparse 3D convolution with skip connections to build a resnet-like architecture for the 3D feature extractor network. For a N \u00d7C \u00d7H \u00d7W input tensor, the feature extractor outputs a N\u00d7l\u00d7 C m\u00d7 H n \u00d7 W n feature map, m, n is the downscale factor of z, x, y dimensions respectively, l is output channel of 3D Feature Extractor\u2019s last layer. To make that 3D feature maps more suitable for the following Region Proposal Network and multi-group head which will be explained in detail in the next subsection, we reshape feature maps to N \u00d7 C\u00d7l m \u00d7 H n \u00d7 W n , then use a region proposal network like VoxelNet [32] to perform regular 2D convolution and deconvolution to further aggregate features and get higher resolution feature maps. Based on these feature maps the multi-group head network is thus able to detect objects of different categories ef\ufb01ciently and effectively. 3 \fInput Input 3D Feature Extractor 3D Feature Extractor Region Proposal Network Region Proposal Network Multi-group Head Multi-group Head Submanifold Sparse 3D Convolution Sparse 3D Convolution Sparse 3D Convolution 2D Convolution 2D Convolution Submanifold Sparse 3D Convolution Sparse 3D Convolution 2D Convolution Stack Deconv Deconv Conv Stack Deconv Deconv Conv Classification Orientation Classification Box Regression Classification Orientation Classification Box Regression Classification Orientation Classification Box Regression Classification Orientation Classification Box Regression Classification Orientation Classification Box Regression Classification Orientation Classification Box Regression Figure 3: Network Architecture. 3D Feature Extractor is composed of submanifold and regular 3D sparse convolutions. Outputs of 3D Feature Extractor are of 16\u00d7 downscale ratio, which are \ufb02atten along output axis and fed into following Region Proposal Network to generate 8\u00d7 feature maps, followed by the multi-group head network to generate \ufb01nal predictions. Number of groups in head is set according to grouping speci\ufb01cation. 2.3. Class-balanced Grouping The intrinsic long-tail property poses a multitude of open challenges for object detection since the models will be largely dominated by those abundant head classes while degraded for many other tail classes. 
As shown in Figure 2, for example, Car accounts for 43.7% of the annotations of the whole dataset, which is 40 times the number of Bicycle, making it difficult for a model to learn features of the tail classes sufficiently. That is, if the instance numbers of classes sharing a common head differ a lot, there is usually little or no data for the tail class most of the time. As a result, the corresponding head, shown as the purple parts in Figure 3, will be dominated by the major classes, resulting in poor performance on rare classes. On the other hand, if we put classes of discrepant shapes or sizes together, the regression targets will have larger inter-class variances, which makes classes of different shapes interfere with each other. That is why the performance when training different shapes jointly is often lower than when training them individually. Our experiments show that classes of similar shape or size are easier to learn within the same task. Intuitively, classes of similar shapes or sizes can contribute to each other's performance when trained jointly, because there are common features among those related categories, so they can compensate for each other and achieve higher detection results together. To this end, we manually divide all categories into several groups following some principles. A particular head in the Multi-group Head module only needs to recognize classes and locate objects belonging to the classes of its group. There are mainly two principles which guide us in splitting the 10 classes into several groups effectively:
• Classes of similar shapes or sizes should be grouped. Classes of similar shapes often share many common attributes. For example, all vehicles look similar because they all have wheels and look roughly like a cube. Motorcycle and bicycle, or traffic cone and pedestrian, have a similar relation. By grouping classes of similar shape or size, we divide classification into two steps logically. First the model recognizes "superclasses", namely groups; then, within each group, different classes share the same head. As a result, different groups learn to model different shape and size patterns, and within a specific group, the network is forced to learn the inter-class differences between similar shapes or sizes.
• Instance numbers of different groups should be balanced properly. We take into account that the instance numbers of different groups should not vary greatly, which would make the learning process dominated by the major classes. So we separate major classes from groups of similar shape or size. For example, Car, Truck and Construction Vehicle have similar shape and size, but Car would dominate the group if we put the three classes together, so we take Car as a single group and put Truck and Construction Vehicle together as a group. In this way, we can control the weights of different groups to further alleviate the imbalance problem.
Guided by the above two principles, in the final setting we split the 10 classes into 6 groups: (Car), (Truck, Construction Vehicle), (Bus, Trailer), (Barrier), (Motorcycle, Bicycle), (Pedestrian, Traffic Cone). According to our ablation study shown in Table 4, the class-balanced grouping contributes the most to the final result.
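This grouping specification maps naturally to a small configuration plus one detection head per group. The following is an illustrative sketch (channel sizes and the exact output layout are assumptions, not the released code); each class keeps one anchor size with two orientations, and boxes are regressed as (x, y, z, l, w, h, yaw, vx, vy):

import torch.nn as nn

# The six head groups used in the final setting (Sec. 2.3).
HEAD_GROUPS = [
    ("car",),
    ("truck", "construction_vehicle"),
    ("bus", "trailer"),
    ("barrier",),
    ("motorcycle", "bicycle"),
    ("pedestrian", "traffic_cone"),
]

class MultiGroupHead(nn.Module):
    """One lightweight head per class group; channel sizes are illustrative."""
    def __init__(self, in_channels, box_dim=9, dir_bins=2):
        super().__init__()
        self.heads = nn.ModuleList()
        for classes in HEAD_GROUPS:
            n_cls = len(classes)
            n_anchor = 2 * n_cls  # two orientations per class
            self.heads.append(nn.ModuleDict({
                "cls": nn.Conv2d(in_channels, n_anchor * n_cls, 1),
                "box": nn.Conv2d(in_channels, n_anchor * box_dim, 1),
                "dir": nn.Conv2d(in_channels, n_anchor * dir_bins, 1),
            }))

    def forward(self, feats):
        # Returns one (cls, box, dir) prediction tuple per group.
        return [(h["cls"](feats), h["box"](feats), h["dir"](feats))
                for h in self.heads]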
Table 2: Overall performance. BRAVE and Tolist are the other top-three teams. Our method achieves the best performance on all but the mAAE metric.
Method | Modality | Map | External | mAP | mATE | mASE | mAOE | mAVE | mAAE | NDS
Point Pillars [14] | Lidar | no | no | 30.5 | 0.517 | 0.290 | 0.500 | 0.316 | 0.368 | 45.3
BRAVE [17] | Lidar | no | no | 32.4 | 0.400 | 0.249 | 0.763 | 0.272 | 0.090 | 48.4
Tolist [17] | Lidar | no | no | 42.0 | 0.364 | 0.255 | 0.438 | 0.270 | 0.319 | 54.5
MEGVII (Ours) | Lidar | no | no | 52.8 | 0.300 | 0.247 | 0.380 | 0.245 | 0.140 | 63.3

Table 3: mAP by category compared to PointPillars. Our method shows more competitive and balanced performance on tail classes. For example, Bicycle is improved by 14 times; Motorcycle, Construction Vehicle (Cons. Veh.), Trailer, and Traffic Cone (TC) are improved by more than 2 times.
Method | Car | Ped | Bus | Barrier | TC | Truck | Trailer | Moto | Cons. Veh. | Bicycle | Mean
Point Pillars [14] | 70.5 | 59.9 | 34.4 | 33.2 | 29.6 | 25.0 | 16.7 | 20.0 | 4.50 | 1.60 | 29.5
MEGVII (Ours) | 81.1 | 80.1 | 54.9 | 65.7 | 70.9 | 48.5 | 42.9 | 51.5 | 10.5 | 22.3 | 52.8

2.4. Loss Function

Apart from the regular classification and bounding box regression branches required by 3D object detection, we add an orientation classification branch as proposed in SECOND [28]. It is important to point out that, according to our statistics, most object boxes are parallel or perpendicular to the LiDAR coordinate axes. If orientation classification is applied exactly as in SECOND, the mAOE turns out to be very high because many predicted bounding boxes' orientations are exactly opposite to the ground truth. We therefore add an offset to the orientation classification targets to remove the orientation ambiguity. As for velocity estimation, regression without normalization achieves the best performance compared to adding extra normalization operations.

We use anchors to reduce learning difficulty by importing prior knowledge. Anchors are configured as in VoxelNet [32]: anchors of different classes have different height and width configurations, determined by the class mean values. There is one size configuration with two different directions per category. For velocities, the anchor is set to 0 in both the x and y axes; objects move along the ground, so we do not need to estimate velocity in the z axis. In each group, we use a weighted focal loss for classification, the smooth-L1 loss for the x, y, z, l, w, h, yaw, vx, vy regression, and a softmax cross-entropy loss for orientation classification. We do not add attribute estimation because its results are not better than simply assigning each category's most common attribute. We further improve attribute estimation by taking velocity into account: for example, most bicycles are without a rider, but if the model predicts a bicycle's velocity above a threshold, there should be a rider, so we change the corresponding bicycle's attribute to "with rider". The multi-group head is treated as a multi-task learning procedure in our experiments, and we use Uniform Scaling to configure the weights of the different branches.

2.5. Other Improvements

Apart from the above improvements, we find that SENet [11] and Weight Standardization [22] can also help the detection task when used properly. Besides, using a heavier head network further improves performance. In our final submission, we ensemble several models of multiple scales to achieve our best performance: mAP 53.2%, NDS 63.78% on the validation split.
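To make Section 2.4 concrete, here is a minimal PyTorch-style sketch (ours, not the released implementation) of the per-group training loss: a sigmoid focal loss for classification, smooth-L1 for the (x, y, z, l, w, h, yaw, vx, vy) regression targets with the velocity dimensions down-weighted as described later, and a cross-entropy term for the orientation classifier. The branch weights are placeholders.

```python
import torch
import torch.nn.functional as F


def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard sigmoid focal loss; targets are {0,1} tensors with the same shape as logits."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()


def group_head_loss(cls_logits, cls_targets,
                    box_preds, box_targets,      # (..., 9): x, y, z, l, w, h, yaw, vx, vy
                    dir_logits, dir_targets,     # orientation (direction) classification
                    box_weights=None,
                    w_cls=1.0, w_box=1.0, w_dir=0.2):
    """Loss for a single head of the multi-group head (illustrative weighting)."""
    if box_weights is None:
        # 1.0 for geometry/yaw, 0.2 for the two velocity targets (as reported in the paper).
        box_weights = torch.tensor([1., 1., 1., 1., 1., 1., 1., 0.2, 0.2])
    cls_loss = sigmoid_focal_loss(cls_logits, cls_targets)
    box_loss = (F.smooth_l1_loss(box_preds, box_targets, reduction="none") * box_weights).sum()
    dir_loss = F.cross_entropy(dir_logits, dir_targets, reduction="sum")
    return w_cls * cls_loss + w_box * box_loss + w_dir * dir_loss
```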
3. Training Details

In this section, we explain the implementation details of the data augmentation, the training procedure, and the method itself. Our method is implemented in PyTorch [18]. All experiments are trained on NVIDIA 2080Ti GPUs in a distributed fashion with synchronized batch normalization. For this task, we consider the point cloud within the range of [-50.4, 50.4] x [-51.2, 51.2] x [-5, 3] meters in the X, Y, Z axes respectively. We choose a voxel size of sx = 0.1, sy = 0.1, sz = 0.2 meters, which leads to a 1008 x 1024 x 40 voxel grid. The maximum number of points allowed in a voxel is set to 10. When using 10 sweeps (1 keyframe + 9 preceding non-keyframes), the maximum number of non-empty voxels is 60000. During training, we apply data augmentation consisting of random flips along the x-axis, scaling with a factor sampled from [0.95, 1.05], rotation around the Z axis between [-0.3925, 0.3925] rad, and translation of range [0.2, 0.2, 0.2] m in all axes. For GT-AUG, we first filter out ground truth boxes with fewer than 5 points inside, then randomly select and paste ground truth boxes of different classes with different magnitudes onto the ground plane, as shown in Table 5.

Table 4: Ablation studies for the different components used in our method on the Validation Split. Database Sampling and Res-Encoder contribute the most to mAP.
GT-AUG | DB Sampling | Multi-head | Res-Encoder | SE | Heavier Head | WS | Hi-res | mAP | NDS
no | no | no | no | no | no | no | no | 35.68 | 45.17
yes | no | no | no | no | no | no | no | 37.69 | 53.66
yes | yes | no | no | no | no | no | no | 42.64 | 56.66
yes | yes | yes | no | no | no | no | no | 44.86 | 58.13
yes | yes | yes | yes | no | no | no | no | 48.64 | 60.08
yes | yes | yes | yes | yes | no | no | no | 48.14 | 59.66
yes | yes | yes | yes | yes | yes | no | no | 49.55 | 60.20
yes | yes | yes | yes | yes | yes | yes | no | 49.43 | 60.56
yes | yes | yes | yes | yes | yes | yes | yes | 51.44 | 62.56

Table 5: GT-AUG magnitudes of the different categories. For each category, the magnitude is the number of instances placed into a point cloud sample.
Category | Car | Truck | Bus | Trailer | Cons. Veh. | Traffic Cone | Barrier | Bicycle | Motorcycle | Pedestrian
Magnitude | 2 | 3 | 7 | 4 | 6 | 2 | 6 | 6 | 2 | 2

3.1. Training Procedure

We use the AdamW [16] optimizer together with the one-cycle policy [1], with a maximum LR of 0.04, a division factor of 10, momentum ranging from 0.95 to 0.85, and a fixed weight decay of 0.01 to achieve super convergence. With batch size 5, the model is trained for 20 epochs. During inference, the top 1000 proposals are kept in each group, then NMS with score threshold 0.1 and IoU threshold 0.2 is applied; the maximum number of boxes allowed in each group after NMS is 80.

3.2. Network Details

For the 3D feature extractor, we use 16, 32, 64, and 128 layers of sparse 3D convolution for the respective blocks. As in [10], submanifold sparse convolution is used when we downsample the feature map; in other cases, regular sparse convolution is applied. For the region proposal module, we use 128 and 256 layers for the 16x and 8x downscale layers respectively. In each head, we apply a 1 x 1 Conv to obtain the final predictions. To achieve a heavier head, we first use one 3 x 3 Conv layer to reduce the channels to 1/8, then use a 1 x 1 Conv layer to obtain the final predictions. Batch Normalization [12] is used for all but the last layer.
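The optimizer settings in Section 3.1 map directly onto PyTorch's built-in one-cycle schedule. The sketch below is only an illustration (the model and the total step count are placeholders, not the actual detector or schedule length).

```python
import torch

model = torch.nn.Linear(128, 10)           # placeholder standing in for the detector
steps_per_epoch, epochs = 1000, 20         # placeholder step budget

optimizer = torch.optim.AdamW(model.parameters(), lr=0.04, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=0.04,            # maximum LR
    div_factor=10,          # initial lr = max_lr / 10
    base_momentum=0.85,     # beta1 cycled between 0.95 and 0.85
    max_momentum=0.95,
    total_steps=steps_per_epoch * epochs,
)

for _ in range(steps_per_epoch * epochs):
    optimizer.zero_grad()
    loss = model(torch.randn(5, 128)).sum()   # stand-in for the detection loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```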
Anchors of different categories are set according to their mean height and width, with different threshold when assigning class labels. For categories of suf\ufb01cient annotations, we set the positive area threshold to 0.6, for those categories with fewer annotations we set the threshold to 0.4. We use the default setting of focal loss in the original paper. For x, y, z, l, w, h, yaw, vx, vy regression, we use 0.2 for velocity prediction and the others are set to 1.0 to achieve a balanced and stable training process. 4. Results In this section we report our results in detail. We also investigate contributions of each module to the \ufb01nal result in Table 4. As shown in Table 2, our method surpasses of\ufb01cial PointPillars [14] baseline by 73.1%. More speci\ufb01cally, our method shows better performance in all categories, especially in long-tail classes like Bicycle, Motorcycle, Bus, and Trailer. Moreover, our method achieves less error in translation(mATE), scale(mASE), orientation(mAOE), velocity(mAVE) and attribute(mAAE). Examples of detection results can be seen in Figure 4, our method generates reliable detection results on all categories. The edge with a line attached in the bounding box indicates the vehicle\u2019s front. 5." + } + ], + "Qingpeng Cai": [ + { + "url": "http://arxiv.org/abs/2302.01724v3", + "title": "Reinforcing User Retention in a Billion Scale Short Video Recommender System", + "abstract": "Recently, short video platforms have achieved rapid user growth by\nrecommending interesting content to users. The objective of the recommendation\nis to optimize user retention, thereby driving the growth of DAU (Daily Active\nUsers). Retention is a long-term feedback after multiple interactions of users\nand the system, and it is hard to decompose retention reward to each item or a\nlist of items. Thus traditional point-wise and list-wise models are not able to\noptimize retention. In this paper, we choose reinforcement learning methods to\noptimize the retention as they are designed to maximize the long-term\nperformance. We formulate the problem as an infinite-horizon request-based\nMarkov Decision Process, and our objective is to minimize the accumulated time\ninterval of multiple sessions, which is equal to improving the app open\nfrequency and user retention. However, current reinforcement learning\nalgorithms can not be directly applied in this setting due to uncertainty,\nbias, and long delay time incurred by the properties of user retention. We\npropose a novel method, dubbed RLUR, to address the aforementioned challenges.\nBoth offline and live experiments show that RLUR can significantly improve user\nretention. RLUR has been fully launched in Kuaishou app for a long time, and\nachieves consistent performance improvement on user retention and DAU.", + "authors": "Qingpeng Cai, Shuchang Liu, Xueliang Wang, Tianyou Zuo, Wentao Xie, Bin Yang, Dong Zheng, Peng Jiang, Kun Gai", + "published": "2023-02-03", + "updated": "2023-02-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR" + ], + "main_content": "INTRODUCTION As newly emerging media and sharing platforms, short video applications like TikTok, YouTube Shorts, and Kuaishou quickly attract billions of users by recommending interesting content to them. There has been an increasing interest in short video recommendation [12, 16, 27, 33] in academia and industry. In the view of the recommender, a user interacts with the system through explicit and implicit feedback (e.g. 
watching time, like, follow, comment, etc.) over multiple requests, and the system returns a list of short videos at each request. The ultimate goal is to improve retention, which reflects user satisfaction [30, 32, 37]. User retention is defined as the ratio of users who return to the system, commonly measured as next-day retention. It directly affects DAU, which is the core value of a short video app.

Current recommender systems deploy point-wise models [6] or list-wise models [23] to recommend items to users. Point-wise models predict the immediate reward of a user-item pair, while list-wise models estimate the combination of rewards of a list of items, considering the positions of and relationships between items. Both aim at predicting a point-wise reward or a combination of point-wise rewards. However, the retention reward is a long-term feedback signal produced after multiple interactions between users and the system. Similar to the game of Go [25], the relationship between the retention reward and the intermediate feedback is not clear, and it is difficult to decompose the retention reward over each item or list of items. Thus traditional models are not able to optimize retention.

Reinforcement learning (RL) [26] methods are designed to learn a policy that maximizes the long-term reward by interacting with the environment, so in this paper we choose RL methods to optimize retention. We model the problem of optimizing user retention in short video systems as an infinite-horizon request-based Markov Decision Process (MDP), where the recommender is the agent and users serve as the environments. A session starts when the user visits the app. At each step (a user request), the agent plays an action (the ranking weight) to ensemble the scores from scoring models that predict various user feedback. The ensemble ranking function takes both the ranking weight and the prediction scores as input and outputs the videos with the highest scores to the user. The user then provides immediate feedback, including watch time and other interactions. The session ends when the user leaves the app; the next session starts when the user opens the app again, and the process repeats. Our objective is to minimize the cumulative returning time (defined as the time gap between the last request of a session and the first request of the next session), which is equivalent to improving the frequency of user visits, i.e., the app open frequency, and the frequency of user visits directly corresponds to retention. Different from previous works on RL for recommender systems [1, 3–5, 9, 11, 17–21, 28, 31, 34–37], which maximize the cumulative immediate feedback, we are, to the best of our knowledge, one of the first works to directly optimize user retention in short video recommender systems. However, current RL algorithms cannot be applied directly, due to the following properties of the retention reward: 1) Uncertainty: retention is not fully determined by the recommendation algorithm and is affected by many uncertain factors outside the system; for example, retention is disturbed by the noise of social events. 2) Bias: retention is biased by different factors, including time and the user's activity level; retention varies between weekdays and weekends, and highly active users inherently have high retention.
3) Long delay time: different from the instant reward signals in games, the retention reward mostly arrives after several hours, which causes instability in the training of the RL policy due to large distribution shifts [14].

For the uncertainty problem, we propose a novel normalization technique that predicts the returning time and uses it to reduce the variance of the retention reward. For the bias problem, we train different policies over user groups to prevent the learning from being dominated by highly active users. For the long-delay-time problem, we propose a novel soft regularization method to better balance sample efficiency and stability. We also take the immediate feedback as heuristic rewards and use intrinsic motivation methods [2, 22] to better optimize user retention in the delayed reward setting. Combining the above techniques, we propose the Reinforcement Learning for User Retention algorithm, named RLUR. We summarize our contributions below:
• We model user retention for short video recommendation as an infinite-horizon request-based MDP, where the aim is to minimize the cumulative returning time.
• We propose novel methods to address the challenges of uncertainty, bias, and long delay time of user retention.
• Experiments in both offline and live environments show that RLUR improves user retention significantly.
• The RLUR algorithm has been fully launched in the Kuaishou app, and it continuously improves user retention and DAU.

2 PROBLEM DEFINITION

We model the problem as an infinite-horizon request-based Markov Decision Process (MDP), where the recommender is the agent and users are the environments. As shown in Figure 1(a), when the user opens the app, a session $i$ starts. At each request (step) $t$ of session $i$, the agent plays an action $a_{i,t}$ given the state $s_{i,t}$ of the user. $n$ deep models predict scores $x_j = (x_{j1}, \dots, x_{jn})$ for each candidate video $j$ in terms of various feedback (watch time, follow, like, etc.). The ranking function $f$ takes the action $a_{i,t}$ and the prediction scores $x_j$ as input and outputs the ranking score $f(a_{i,t}, x_j)$ for each video $j$. The system recommends the top 6 videos with the highest ranking scores to the user, as shown in Figure 1(b). The user then provides immediate feedback $I(s_{i,t}, a_{i,t})$. The session ends when the user leaves the app; when the user returns to the app, the next session $i+1$ starts, and the delayed reward (returning time) of session $i$ is returned. The above process then repeats. We now introduce the details of the MDP. The state $s_{i,t}$ consists of the user profile, the behavior history $u_{i,t}$, the request context, and candidate video features. The action is a continuous vector in $[0, C]^n$, where $n$ is the number of scoring models. We use a linear ranking function, i.e., $f(a_{i,t}, x_j) = \sum_{k=1}^{n} a_{i,t,k} \, x_{j,k}$.
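As an illustration of this ensemble ranking step, the sketch below (our code, with made-up shapes and an example weight range) scores each candidate video by the inner product of the action (ranking weights) with its $n$ predicted scores and returns the six highest-ranked videos.

```python
import numpy as np


def rank_videos(action: np.ndarray, scores: np.ndarray, k: int = 6) -> np.ndarray:
    """action: (n,) ranking weights a_{i,t}; scores: (num_candidates, n) model scores x_j.
    Returns the indices of the top-k candidates under f(a, x_j) = sum_k a_k * x_{j,k}."""
    ranking_scores = scores @ action          # linear ranking function
    return np.argsort(-ranking_scores)[:k]    # indices of the k largest scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    action = rng.uniform(0.0, 4.0, size=8)    # e.g. 8 scoring models, action in [0, C]^n
    scores = rng.random((500, 8))             # 500 candidate videos
    print(rank_videos(action, scores))        # 6 video indices to show the user
```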
The immediate reward $I(s_{i,t}, a_{i,t})$ is defined as the sum of the watch time and the number of interactions at this request. The returning time $T(s_i)$ is the time gap between the last request of session $s_i$ and the first request of session $s_{i+1}$. The returning-time reward $r(s_{i,t}, a_{i,t})$ is $T(s_i)$ for the last request of a session and 0 otherwise. We learn a deterministic policy $\pi(s_{i,t}|\theta)$ that takes the state $s_{i,t}$ as input and outputs the action $a_{i,t}$. The objective is to minimize the cumulative returning time $\sum_{i=1}^{\infty} \gamma^{i-1} T(s_i)$, where $\gamma$ ($0 < \gamma < 1$) is the discount factor.

3 METHOD

In this section we first discuss the learning of retention in the delayed reward setting, and then propose techniques to tackle the challenges incurred by the characteristics of user retention.

3.1 Retention Critic Learning

We learn a critic function $Q_T(s_{i,t}, a_{i,t}|w_T)$, parameterized by $w_T$, to estimate the cumulative returning time from the current state $s_{i,t}$ and action $a_{i,t}$, following the deep deterministic policy gradient method [15]. As shown in Figure 1(d), the loss function $L(w_T)$ for learning the returning time is

$$L(w_T) = \sum_{(s_{i,t}, a_{i,t}) \in D} \Big( Q_T(s_{i,t}, a_{i,t}|w_T) - \big( r(s_{i,t}, a_{i,t}) + \gamma_{i,t} \, Q_T(s_{i,t+1}, \pi(s_{i,t+1}|\theta) \,|\, w_T) \big) \Big)^2, \quad (1)$$

where the discount factor $\gamma_{i,t}$ is 1 for samples that are not the last request of a session and $\gamma$ for last requests, and $D$ is the dataset.
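A minimal PyTorch-style sketch of this retention-critic update (Eq. (1)) is given below. This is our illustration only: `critic` and `actor` stand in for the $Q_T$ network and the policy, and `is_last` marks session-ending requests.

```python
import torch


def retention_critic_loss(critic, actor, batch, gamma=0.95):
    """TD loss of Eq. (1): the per-sample discount is 1 for non-terminal requests and
    gamma only at session ends, so the returning-time signal does not vanish."""
    s, a, r, s_next, is_last = batch          # tensors; r is the returning-time reward
    with torch.no_grad():
        a_next = actor(s_next)
        discount = torch.where(is_last, torch.full_like(r, gamma), torch.ones_like(r))
        target = r + discount * critic(s_next, a_next).squeeze(-1)
    q = critic(s, a).squeeze(-1)
    return ((q - target) ** 2).mean()


if __name__ == "__main__":
    # Toy critic/actor just to show the call signature.
    critic = lambda s, a: s.sum(-1, keepdim=True) + a.sum(-1, keepdim=True)
    actor = lambda s: torch.tanh(s[..., :8])
    batch = (torch.randn(32, 16), torch.rand(32, 8), torch.rand(32),
             torch.randn(32, 16), torch.rand(32) < 0.1)
    print(retention_critic_loss(critic, actor, batch))
```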
[Figure 1: The infinite-horizon request-based MDP and the framework of RLUR. (a) the MDP; (b) inference of RLUR; (c) actor training of RLUR; (d) retention critic learning of RLUR; (e) immediate response critic learning of RLUR.]

The discount factor of non-terminal samples is set to 1 to prevent the returning-time reward from vanishing through the exponential decay mechanism: if the discount factor of every step were set to a number less than 1, the importance of the returning time of future sessions would become extremely small. Optimizing the loss (1) is equivalent to estimating $\sum_{i=1}^{\infty} \gamma^{i-1} T(s_i)$; we omit the proof due to lack of space.

3.2 Methods for Delayed Reward

The returning-time reward only occurs at the end of each session, and it is inefficient to learn an RL policy with such delayed rewards. For better learning of user retention, we adopt heuristic reward methods [13] to enhance policy learning, and intrinsic motivation methods [2, 22] to drive the policy to explore novel states. As the immediate feedback and the returning time are positively correlated, we take the immediate rewards as heuristic rewards to guide the policy toward improving user retention. For better exploration, we choose the Random Network Distillation (RND) [2] method, which is both effective and computationally efficient. The idea is to randomly initialize two networks with the same structure and train one network to fit the output of the other, fixed network. The loss of RND is $L(w_e) = \sum_{u_{i,t} \in D} \| E(u_{i,t}|w_e) - E(u_{i,t}|w_e^*) \|_2^2$, where $E$ is the embedding function of the behavior history $u_{i,t}$, $w_e$ is its parameter, and $w_e^*$ is the fixed parameter. Intuitively, the loss of each state decreases as more samples are observed, so it quantifies the novelty of each state; we use the RND loss of each sample as the intrinsic reward. To reduce the interference of the immediate rewards with the retention reward, we learn a separate critic function, the immediate reward critic $Q_I(s_{i,t}, a_{i,t}|w_I)$, to estimate the sum of the intrinsic rewards and the immediate rewards, $r_I(s_{i,t}, a_{i,t}) = I(s_{i,t}, a_{i,t}) + \| E(u_{i,t}|w_e) - E(u_{i,t}|w_e^*) \|_2^2$. The loss of the immediate reward critic, $L(w_I)$, is similar to Eq. (1), as shown in Figure 1(e).

3.3 Uncertainty

The returning time is highly uncertain and affected by many factors outside the system. To reduce its variance, we propose a normalization technique: we use the ratio of the true returning time over the predicted returning time as the normalized retention reward. For predicting the returning time, we learn a session-level classification model $T'$ that predicts the returning probability.
We first compute the empirical distribution of the returning time, then use its $\beta\%$ percentile $T_\beta$ to determine whether a sample $x$ is positive or negative: the label $y$ of sample $x$ is positive if the returning time is shorter than $T_\beta$, as a shorter returning time is better. The loss function of $T'$ is the binary cross-entropy $-\big(y \log T'(x) + (1-y) \log(1 - T'(x))\big)$, and $T'(x)$ predicts the probability that the returning time is shorter than $T_\beta$. We then obtain a lower bound on the expected returning time via the Markov inequality [29], $E[T(x)] \geq P(T(x) \geq T_\beta) \cdot T_\beta \approx (1 - T'(x)) \cdot T_\beta$, which we use as the predicted returning time. This yields the normalized retention reward $r(s_{i,t}, a_{i,t}) = \mathrm{clip}\big( T(s_i) \,/\, ((1 - T'(x)) \cdot T_\beta),\ 0,\ \alpha \big)$, where $\alpha$ is a positive constant.

3.4 Bias

As the returning time of highly active users is much shorter than that of less active users, and the habits of these groups are quite different, we learn two separate policies $\pi(\cdot|\theta_{\mathrm{high}})$ and $\pi(\cdot|\theta_{\mathrm{low}})$ for the high-activity and low-activity groups respectively. As we want to minimize the returning time and maximize the immediate rewards, the loss of the policy $\pi(\cdot|\theta_{\mathrm{high}})$ is a weighted sum of the retention critic and the immediate critic,

$$L(\theta_{\mathrm{high}}) = \lambda_T \, Q_T(s_{i,t}, \pi(s_{i,t}|\theta_{\mathrm{high}})|w_T) - \lambda_I \, Q_I(s_{i,t}, \pi(s_{i,t}|\theta_{\mathrm{high}})|w_I), \quad (2)$$

where $\lambda_T, \lambda_I$ are positive weights. The loss of the other policy is analogous. The learning of the actor is shown in Figure 1(c).

3.5 Tackling the Unstable Training Problem

Different from games, where the reward returns instantly, the retention reward arrives after several hours to days, which causes a much larger distribution shift between the current policy and the behavior policy and makes the training of RL algorithms unstable in our setting. Regularization methods [7] that add a behavior cloning loss have been proposed to stabilize training in the off-policy setting. However, we find that this approach either fails to stabilize training or limits sample efficiency. To better balance sample efficiency and stability, we now propose a novel soft regularization method.
The actor loss is defined as

$$\exp\Big( \max\big\{ \lambda \cdot \big( \log p(a_{i,t}|s_{i,t}) - \log p_b(a_{i,t}|s_{i,t}) \big),\ 0 \big\} \Big) \, L(\theta),$$

where $\lambda > 0$ is the regularization coefficient, $p(a_{i,t}|s_{i,t})$ is the probability density of the Gaussian distribution of the current policy, and $p_b(a_{i,t}|s_{i,t})$ is the probability density of the Gaussian distribution of the behavior policy. The intuition is that samples with a higher distribution shift get less weight in the learning, which softly regularizes the learning. $\lambda$ controls the regularization degree: a larger $\lambda$ imposes a stronger regularization on the actor loss, and when $\lambda$ is 0 we do not regularize the actor loss at all.

4 OFFLINE EXPERIMENTS

We validate the effectiveness of RLUR on a public short video recommendation dataset, KuaiRand [10], an unbiased dataset that contains logs of the interactions between users and the system over multiple sessions. We build a simulator based on this dataset, which includes three parts: the user immediate feedback module that predicts multiple user behaviors; the leave module that predicts whether the user leaves the session; and the return module that predicts the probability of returning to the app on day $k \in \{1, \dots, K\}$ after each session, with $K = 10$. We compare RLUR with a black-box optimization method, the Cross-Entropy Method (CEM) [24], which is commonly used for ranking parameter search, and a state-of-the-art reinforcement learning method, TD3 [8]. We evaluate each algorithm in terms of the averaged returning day (returning time) and the averaged retention on the 1st day (user retention) across all user sessions. We train each algorithm until convergence and report the averaged performance of the last 50 episodes in Table 1.

Table 1: Offline Results.
Algorithm | Returning time ↓ | User retention ↑
CEM | 2.036 | 0.587
TD3 | 2.009 | 0.592
RLUR (naive, γ = 0) | 2.001 | 0.596
RLUR (naive, γ = 0.9) | 1.961 | 0.601
RLUR | 1.892 | 0.618

For both metrics, TD3 outperforms CEM, which demonstrates the effectiveness of reinforcement learning. RLUR outperforms both TD3 and CEM significantly. We also train a variant of RLUR that only contains the returning-time learning of Section 3.1, called RLUR (naive). RLUR (naive, $\gamma = 0.9$) outperforms RLUR (naive, $\gamma = 0$), which shows that it is more reasonable to minimize the cumulative retention over multiple sessions than over a single session, as $\gamma$ controls the importance of future sessions. RLUR outperforms RLUR (naive, $\gamma = 0.9$) substantially, which validates the effectiveness of the techniques proposed for the retention challenges.

5 LIVE EXPERIMENTS

We test the proposed RLUR algorithm in live experiments on Kuaishou, a billion-scale short video platform. We compare RLUR with a black-box optimization baseline that is widely used for ranking parameter search in recommender systems, CEM [24].
We do not compare with TD3 here, as the training of TD3 is unstable, as discussed in Section 3.5. We randomly split users into two buckets, deploy RLUR in the test bucket, and run CEM in the base bucket. We evaluate the algorithms in terms of app open frequency, DAU, user retention at the 1st day, and user retention at the 7th day. We run the experiment for a long time to obtain convincing results, since DAU and user retention vary across days. We now introduce the details.

Inference and Training. The inference procedure is shown in Figure 1(b). At each user request, the user state is sent to the actor, which returns the mean $\mu$ and variance $\sigma$; the action is then sampled from a Gaussian distribution $N(\mu, \sigma)$. The ranking function computes the linear product of the action with the predicted scores of each video and recommends the 6 videos with the highest scores to the user. The training of the actor is illustrated in Figure 1(c): the actor is trained to minimize the weighted sum of the retention critic and the immediate reward critic, as in Eq. (2).

MDP. The state is a vector of the user profile, the behavior history (user statistics, plus the ids of the videos and the corresponding feedback of the user in the previous 3 requests), the request context, and the candidate video features. The user profile covers age, gender, and location; the user statistics include statistics of various feedback. For the action, we choose an 8-dimensional continuous vector ranging in $[0, 4]^8$, and the policy outputs the parameters for eight scoring models predicting the main feedback (watchtime, shortview, longview, like, follow, forward, comment, and personal-page entering). The immediate reward $I(s_{i,t}, a_{i,t})$ of each request is the sum of the watch time (in seconds) and the interactions (including like, follow, comment, and share) of the 6 videos.

Hyper-parameters. The discount factor is 0.95, which outperforms other values in our system. The weights in the actor loss are set to 1.0 and 1.0. For the session-level retention model, we choose the 60% percentile to determine the label, and the value of the upper bound $\alpha$ is 3. The regularization coefficient $\lambda$ is 1.5.

Figure 2 plots the comparison of RLUR with CEM, where the x-axis is the number of days after deployment and the y-axis is the percentage performance gap between RLUR and CEM. As we can see, the app open frequency, which directly reflects the returning time, increases consistently from Day 0 to Day 80 and sharply from Day 80 to Day 100, converging to 0.450% after Day 100. This validates that the training of RLUR consistently increases the retention reward and converges as expected. User retention at the 1st/7th day and DAU increase slowly during training from Day 0 to Day 80 and sharply from Day 80 to Day 100. After Day 100, the performance gap of DAU converges to 0.2%, user retention at the 1st day converges to 0.053%, and user retention at the 7th day converges to 0.063%. Note that a 0.01% improvement of user retention and a 0.1% improvement of DAU are statistically significant on short video platforms; that is, the performance gain of RLUR is quite significant. Note also that DAU and retention continue to increase.
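Tying Sections 3.3 and 3.5 to the hyper-parameters reported above (60% percentile, $\alpha = 3$, $\lambda = 1.5$), a small sketch of the per-sample normalized retention reward and the soft-regularization weight is shown below. This is our illustration, not the production code; `t_beta` is a placeholder for the percentile threshold.

```python
import numpy as np

ALPHA = 3.0     # upper bound of the normalized retention reward
LAMBDA = 1.5    # soft-regularization coefficient


def normalized_retention_reward(returning_time, p_return, t_beta):
    """r = clip(T(s_i) / ((1 - T'(x)) * t_beta), 0, alpha), where T'(x) is the predicted
    probability of returning within the percentile threshold t_beta (e.g. the 60% percentile)."""
    predicted = (1.0 - p_return) * t_beta
    return np.clip(returning_time / np.maximum(predicted, 1e-6), 0.0, ALPHA)


def soft_regularization_weight(log_p_current, log_p_behavior):
    """exp(max(lambda * (log p - log p_b), 0)): samples where the current policy diverges
    from the behavior policy receive relatively less weight in the actor loss."""
    return np.exp(np.maximum(LAMBDA * (log_p_current - log_p_behavior), 0.0))


if __name__ == "__main__":
    print(normalized_retention_reward(returning_time=5.0, p_return=0.7, t_beta=12.0))
    print(soft_regularization_weight(log_p_current=-1.2, log_p_behavior=-1.0))
```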
6" + }, + { + "url": "http://arxiv.org/abs/2302.01680v3", + "title": "Two-Stage Constrained Actor-Critic for Short Video Recommendation", + "abstract": "The wide popularity of short videos on social media poses new opportunities\nand challenges to optimize recommender systems on the video-sharing platforms.\nUsers sequentially interact with the system and provide complex and\nmulti-faceted responses, including watch time and various types of interactions\nwith multiple videos. One the one hand, the platforms aims at optimizing the\nusers' cumulative watch time (main goal) in long term, which can be effectively\noptimized by Reinforcement Learning. On the other hand, the platforms also\nneeds to satisfy the constraint of accommodating the responses of multiple user\ninteractions (auxiliary goals) such like, follow, share etc. In this paper, we\nformulate the problem of short video recommendation as a Constrained Markov\nDecision Process (CMDP). We find that traditional constrained reinforcement\nlearning algorithms can not work well in this setting. We propose a novel\ntwo-stage constrained actor-critic method: At stage one, we learn individual\npolicies to optimize each auxiliary signal. At stage two, we learn a policy to\n(i) optimize the main signal and (ii) stay close to policies learned at the\nfirst stage, which effectively guarantees the performance of this main policy\non the auxiliaries. Through extensive offline evaluations, we demonstrate\neffectiveness of our method over alternatives in both optimizing the main goal\nas well as balancing the others. We further show the advantage of our method in\nlive experiments of short video recommendations, where it significantly\noutperforms other baselines in terms of both watch time and interactions. Our\napproach has been fully launched in the production system to optimize user\nexperiences on the platform.", + "authors": "Qingpeng Cai, Zhenghai Xue, Chi Zhang, Wanqi Xue, Shuchang Liu, Ruohan Zhan, Xueliang Wang, Tianyou Zuo, Wentao Xie, Dong Zheng, Peng Jiang, Kun Gai", + "published": "2023-02-03", + "updated": "2024-01-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR" + ], + "main_content": "INTRODUCTION The surging popularity of short videos has been changing the status quo of social media. Short video consumption has brought in huge business opportunities for organizations. As a result, there has been an increasing interest in optimizing recommendation strategies [Gong et al. 2022; Lin et al. 2022; Wang et al. 2022a; Zhan et al. arXiv:2302.01680v3 [cs.LG] 9 Jan 2024 \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Cai, et al. HQNNQY NKMG EQOOGPV EQNNGEV UJCTG (a) (b) Figure 1: An example of a popular short video (TikTok, Kuaishou, etc) platform. 2022] for short video platforms. Users interact with the platform by scrolling up and down and watching multiple videos as shown in Figure 1(a). Users provide multi-dimensional responses at each video. As shown in the left part of Figure 1(b), potential responses from a user after consuming a video include WatchTime (the time spent on watching the video), and several types of interactions: Follow (follow the author of the video), Like (Like this video), Comment (provide comments on the video), Collect (Collect this video), Share (share this video with his/her friends), etc. On the one hand, the main goal of the platform is to optimize the cumulative WatchTime of multiple videos, as WatchTime reflects user attention and is highly related to daily active users (DAU). 
Recently, a growing literature has focused on applying reinforcement learning (RL) to recommender systems, due to its ability to improve cumulative reward [Afsar et al. 2021; Chen et al. 2019b, 2018; Gao et al. 2022a; Ge et al. 2021; Liu and Yang 2019; Ma et al. 2020; Nemati et al. 2016; Wang et al. 2022b; Xian et al. 2019; Xin et al. 2022; Zhao et al. 2018, 2017; Zou et al. 2019]. In particular, WatchTime , can be effectively cumulatively maximized to increase user spent time across multiple videos with RL approaches. On the other hand, other responses such as Like/Follow/Share also reflect user satisfaction levels. Thus the platform needs to satisfy the constraints of user interactions. Thereby, established recommender systems that exclusively optimize a single objective (such as gross merchandise volume for e-commerce platforms [Pi et al. 2020]) is no longer sufficient\u2014the applied systems should take all aspects of responses into consideration to optimize user experiences. In this paper, we model the problem of short video recommendation as a Constrained Markov Decision Process: users serve as the environments, and the recommendation algorithm is the agent; at each time step the agent plays an action (recommend a video to the user), the environment sends multiple rewards (responses) to the agent. The objective of the agent is to maximize the cumulative WatchTime (main goal) subject to the constraints of other interaction responses (auxiliary goals). Our aim is different from Pareto optimality that aims to find a Pareto optimal solution [Chen et al. 2021; Lin et al. 2019; Sener and Koltun 2018], which may not prioritize the main goal of the system. The problem of this constrained policy optimization is much more challenging as compared to its unconstrained counterpart. A natural idea would be applying standard constrained reinforcement learning algorithms that maximize the Lagrangian with prespecified multipliers [Tessler et al. 2018]. However, such method can not apply to our setting for the following two reasons: First, it is not sufficient to use a single policy evaluation model to estimate the Lagrangian dual objective due to different types of responses from the user. Such response combination is not adequate, particularly for responses with their own discount factors\u2014the formulation of temporal difference error in value-based models only allows for a single discount value. In scenarios where one discount factor suffices, it can still be difficult for a single value model to evaluate the policy accurately, especially when different responses are observed at various frequencies, as typical for short video recommendations. The WatchTime response is dense and observed from each video view, while the interaction-signal such as Like/Follow/Share is much more sparse and may not be provided within dozens of views. The signal from the sparse responses will be weakened by the dense responses when naively summing them up together. To address this multi-response evaluation difficulty, we separately evaluate each response via its own value model, which allows for response-specific discount factors and mitigates the interference on evaluation from one response on another. Experiments in Section 4.1 validates the effectiveness of this method. Second, different from only one constraint is considered in [Tessler et al. 2018], multiple constraints exist in recommender systems, especially in short video systems. 
We find that it is more difficult for algorithms that maximize the Lagrangian to optimize under multiple constraints, due to the larger search space of the multi-dimensional Lagrangian multipliers, and grid search over the multipliers is time-consuming because training reinforcement learning algorithms takes a long time. On account of this, instead of searching for optimal Lagrangian multipliers, we propose to first learn policies that optimize each auxiliary response and then "softly" regularize the policy of the main response to be close to the others. We theoretically prove the closed form of the optimal solution, and we demonstrate empirically that our approach better maximizes the main response while balancing the other responses in both offline and live experiments. Together, we summarize our contributions as below:
• Constrained Optimization in Short Video Recommendations: We formalize the problem of constrained policy learning in short video recommendations, where different responses may be observed at various frequencies, and the agent maximizes one response under the constraint of balancing the others.
• Two-Stage Constrained Actor-Critic Algorithm: We propose a novel two-stage constrained actor-critic algorithm that effectively tackles this challenge. (1) Multi-Critic Policy Estimation: to better evaluate the policy on multiple responses that may differ in discount factors and observation frequencies, we separately learn a value model for each response. (2) Two-Stage Actor Learning: we propose a two-stage actor learning method that first learns a policy to optimize each auxiliary response and then softly regularizes the policy of the main response to stay close to the others, which we demonstrate to be a more effective way to perform constrained optimization with multiple constraints than the alternatives.
• Significant Gains in Offline and Live Experiments: We demonstrate the effectiveness of our method in both offline and live experiments.
• Deployment in a real-world short video application: We fully launch our method on a popular short video platform.

2 RELATED WORK

Reinforcement Learning for Recommendation. There is a growing literature on applying RL to recommender systems, for its ability to optimize long-term user satisfaction [Afsar et al. 2021]. Value-based approaches estimate the user satisfaction of being recommended an item from the available candidate set and then select the one with the largest predicted satisfaction [Chen et al. 2018; Liu and Yang 2019; Nemati et al. 2016; Zhao et al. 2018]. Policy-based methods directly learn the policy (which item to recommend) and optimize it in the direction of increasing user satisfaction [Chen et al. 2019b,a; Ma et al. 2020; Xian et al. 2019]. Recently, growing attention has been paid to adapting reinforcement learning to more complex recommendation applications beyond optimizing a single objective, such as promoting equal exposure opportunities for content items [Ge et al. 2021], increasing the diversity and novelty of recommendations [Stamenkovic et al. 2021], and characterizing more comprehensive user dynamics with representational reward shaping [Chen et al. 2021]; we view our work as complementary to the third line.
In face of the multi-faceted user responses, the system in real applications often has preferences over different types of user responses, for which we propose the constrained optimization problem, in contrast to pursuing Pareto optimality as proposed in [Chen et al. 2021] and [Ge et al. 2022].

Constrained Reinforcement Learning. Our work is also closely related to the literature on constrained reinforcement learning, where the sequential decision making problem is formulated as a constrained Markov Decision Process [Sutton and Barto 2018] and the policy learning procedure is expected to respect the constraints [Chow et al. 2017; Dalal et al. 2018; García and Fernández 2015; Liu et al. 2021; Tessler et al. 2018]. As an example, [Tessler et al. 2018] propose to update the policy and the Lagrangian multiplier alternately and prove the convergence of their algorithm to a fixed point. This approach, however, models only one constraint and does not scale well to problems with multiple constraints. In contrast, for each auxiliary response we learn a policy to maximize it specifically, and then we "softly" regularize the main policy to be close to the others; we show empirically that this is a more effective way to perform constrained policy learning when dealing with multiple responses in recommender systems. Different from [Nair et al. 2020], which studies offline RL and regularizes the learned policy to stay near a single behavior policy, we softly restrict the policy to stay close to the other policies that maximize the auxiliary responses.

Multi-objective Optimization. We also discuss a relevant line of work on multi-objective optimization. To trade off different objectives, methods in this field can be broadly categorized into two classes: Pareto optimization and joint optimization with pre-specified weights. The goal of Pareto optimization is to find a solution such that no other solution can concurrently improve all objectives, named Pareto optimality [Chen et al. 2021; Ge et al. 2022; Nguyen et al. 2020; Sener and Koltun 2018]. However, a Pareto optimal solution may not prioritize the objective that is most valued in applications. The other class combines the different objectives into a single one via pre-specified weights [Mossalam et al. 2016; White et al. 1980]. However, it is difficult to quantify weights that accurately reflect preferences in real applications [Tessler et al. 2018].

3 CONSTRAINED MARKOV DECISION PROCESS FOR SHORT VIDEO RECOMMENDATION

[Figure 2: The MDP of short video recommendation.]

We start by formulating the problem of short video recommendation, shown in Figure 2. When a user $u$ opens the app, a new session starts. A session consists of multiple requests. At each request $t$, the recommender system (agent) takes an action $a_t$ that recommends a video to the user based on the user's current state. The user then provides multi-faceted responses (such as WatchTime, Like, Share, and Follow) on the shown video, which are received by the agent as a vector-valued reward signal. After the user leaves the app, the session ends. The goal of the recommender system is to optimize the cumulative reward of the main response (e.g., WatchTime), under the constraint of not sacrificing the others much.
We model the above procedure as a Constrained Markov Decision Process (CMDP) [Sutton and Barto 2018] $(S, A, P, R, C, \rho_0, \Gamma)$, where $S$ is the state space of the user's current representation $s_t$; $A$ is the action space (each action $a_t$ corresponds to a recommended video for one request); $P: S \times A \to \Delta(S)$ captures the state transition; $R: S \times A \to \mathbb{R}^m$ defines the vector-valued reward function that yields $m$ different rewards $r(s_t, a_t) = \big(r_1(s_t, a_t), \dots, r_m(s_t, a_t)\big)$; $\rho_0$ is the initial state distribution; and $\Gamma = (\gamma_1, \dots, \gamma_m) \in (0,1)^m$ denotes the vector of discount factors, one per response. $C$ specifies the constraints on the auxiliary responses, i.e., the lower bounds on the cumulative signals of the other objectives. Define the vector-valued discounted cumulative reward $R_t = \sum_{t'=t}^{T} \Gamma^{t'-t} \cdot r(s_{t'}, a_{t'})$, where $T$ is the session length (i.e., the number of requests), $\Gamma^b = (\gamma_1^b, \dots, \gamma_m^b)$, and $x \cdot y$ denotes the pointwise product. Let $V^\pi(s) = \big(V_1^\pi(s), \dots, V_m^\pi(s)\big)$ be the state value $E_\pi[R_t \,|\, s_t = s]$ under actions sampled according to policy $\pi$, and $Q^\pi(s,a) = \big(Q_1^\pi(s,a), \dots, Q_m^\pi(s,a)\big)$ be its state-action value $E_\pi[R_t \,|\, s_t = s, a_t = a]$. Denote by $\rho_\pi$ the state distribution induced by policy $\pi$. Without loss of generality, we set the first response as the main response. The goal is to learn a recommendation policy $\pi(\cdot|s)$ that solves the following optimization problem:

$$\max_\pi \; E_{\rho_\pi}\big[V_1^\pi(s)\big] \quad \text{s.t.} \quad E_{\rho_\pi}\big[V_i^\pi(s)\big] \geq C_i, \;\; i = 2, \dots, m, \quad (1)$$

where $C_i$ is the constraint on auxiliary response $i$.

4 TWO-STAGE CONSTRAINED ACTOR-CRITIC

In this section, we propose a novel two-stage constrained actor-critic method that addresses the learning challenges in the context of short video recommendation:

Multi-Critic Policy Estimation: We propose to estimate the responses separately, to better estimate dense and sparse signals.
Stage One: For each auxiliary response, we learn a policy to optimize its cumulative reward.

Stage Two: For the main response, we learn a policy to optimize its cumulative reward while softly constraining it to stay close to the other policies learned to optimize the auxiliaries.

We first discuss the advantage of evaluating the different responses separately over estimating them jointly. We then elaborate our method in the setting of online learning with stochastic policies in Sections 4.2 and 4.3, and afterwards discuss its extensions to the offline setting and to deterministic policies.

4.1 Multi-Critic Policy Estimation

We showcase the advantage of a separate evaluation for each response over a joint evaluation of the summed response. Specifically, we consider two types of responses from each video view: WatchTime and interactions (an indicator of whether any interaction happens during the view).
• For the joint evaluation, we learn a value model $V_{joint}$ whose reward is the sum of WatchTime and interactions.
• For the separate evaluation, we learn two value models $V_w$ and $V_i$ whose rewards are WatchTime and interactions respectively, and define the value of the separate evaluation as $V_{separate} = V_w + V_i$.
For a fair comparison, we share the same discount factor of 0.95 across all value models and train them on the same data, collected from a popular short video platform over one day. To evaluate the accuracy of the value models in terms of WatchTime and interactions, we compute the correlation between the model values $V_{joint}$ and $V_{separate}$ and the Monte Carlo value of the sum of the corresponding responses in each session. Compared to $V_{joint}$, $V_{separate}$ is more correlated with WatchTime and interactions by 0.19% and 0.14% respectively (a 0.1% improvement on WatchTime and interactions is significant), demonstrating that the separate evaluation learns the different reward responses better than joint learning.

4.2 Stage One: Policy Learning for Auxiliary Responses

At this stage, we learn policies to optimize the cumulative reward of each auxiliary response separately. For completeness, we write out our procedure for stochastic policies [Williams 1992]. Considering response $i$, let the learned actor and critic be parameterized by $\pi_{\theta_i}$ and $V_{\phi_i}$ respectively. At iteration $k$, we observe samples $(s, a, s')$ collected by $\pi_{\theta_i^{(k)}}$, i.e., $s \sim \rho_{\pi_{\theta_i^{(k)}}}$, $a \sim \pi_{\theta_i^{(k)}}(\cdot|s)$, and $s' \sim P(\cdot|s, a)$.
We update the critic to minimize the Bellman error:

$$\phi_i^{(k+1)} \leftarrow \arg\min_\phi \; E_{\pi_{\theta_i^{(k)}}}\Big[ \big( r_i(s,a) + \gamma_i V_{\phi_i^{(k)}}(s') - V_\phi(s) \big)^2 \Big]. \quad (2)$$

We update the actor to maximize the advantage:

$$\theta_i^{(k+1)} \leftarrow \arg\max_\theta \; E_{\pi_{\theta_i^{(k)}}}\Big[ A_i^{(k)} \log \pi_\theta(a|s) \Big], \quad \text{where } A_i^{(k)} = r_i(s,a) + \gamma_i V_{\phi_i^{(k)}}(s') - V_{\phi_i^{(k)}}(s). \quad (3)$$

4.3 Stage Two: Softly Constrained Optimization of the Main Response

After pre-training the policies $\pi_{\theta_2}, \dots, \pi_{\theta_m}$ that optimize the auxiliary responses, we move on to the second stage of learning the policy that optimizes the main response, for which we propose a new constrained policy optimization method with multiple constraints. Let the actor and the critic be $\pi_{\theta_1}$ and $V_{\phi_1}$ respectively. At iteration $k$, we similarly update the critic to minimize the Bellman error:

$$\phi_1^{(k+1)} \leftarrow \arg\min_\phi \; E_{\pi_{\theta_1^{(k)}}}\Big[ \big( r_1(s,a) + \gamma_1 V_{\phi_1^{(k)}}(s') - V_\phi(s) \big)^2 \Big]. \quad (4)$$

The principle for updating the actor is two-fold: (i) maximize the advantage; (ii) restrict the policy to a domain that is not far from the other policies. The optimization is formalized below:

$$\max_\pi \; E_\pi\big[A_1^{(k)}\big] \quad \text{s.t.} \quad D_{KL}\big(\pi \,\|\, \pi_{\theta_i}\big) \leq \epsilon_i, \;\; i = 2, \dots, m, \quad \text{where } A_1^{(k)} = r_1(s,a) + \gamma_1 V_{\phi_1^{(k)}}(s') - V_{\phi_1^{(k)}}(s). \quad (5)$$

We give the closed-form solution of the Lagrangian of Eq. (5) in the following theorem; we omit the proof due to lack of space (please refer to Appendix A).

Theorem 1. The Lagrangian of Eq. (5) has the closed-form solution

$$\pi^*(a|s) \;\propto\; \prod_{i=2}^{m} \big(\pi_{\theta_i}(a|s)\big)^{\frac{\lambda_i}{\sum_{j=2}^{m} \lambda_j}} \exp\Big( \frac{A_1^{(k)}}{\sum_{j=2}^{m} \lambda_j} \Big), \quad (6)$$

where $\lambda_i$, $i = 2, \dots, m$, are the Lagrangian multipliers.
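To illustrate how the closed-form target in Eq. (6) re-weights logged actions before the KL projection of Eq. (7) below, here is a small sketch of the resulting weighted log-likelihood loss. This is our illustration only, with a single shared multiplier across constraints (as in the deployed setting); all tensor names are placeholders.

```python
import torch


def stage_two_actor_loss(log_pi_main, log_pi_aux, log_pi_old, advantage, lam=1.0):
    """Weighted log-likelihood surrogate for the main-response policy.

    log_pi_main: (B,)   log prob of logged actions under the policy being trained.
    log_pi_aux:  (B, m-1) log probs under the frozen auxiliary policies from stage one.
    log_pi_old:  (B,)   log prob under the data-collecting policy pi_{theta_1}^{(k)}.
    advantage:   (B,)   advantage A_1^{(k)} of the main response.
    """
    lam_sum = lam * log_pi_aux.shape[1]          # sum_j lambda_j with a shared lambda
    with torch.no_grad():
        # geometric-mean term: each auxiliary policy weighted by lambda_i / sum_j lambda_j
        aux_term = (lam / lam_sum) * log_pi_aux.sum(dim=1)
        weight = torch.exp(aux_term - log_pi_old + advantage / lam_sum)
    return -(weight * log_pi_main).mean()
```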
4.3 Stage Two: Softly Constrained Optimization of the Main Response

After pre-training the policies $\pi_{\theta_2}, \ldots, \pi_{\theta_m}$ that optimize the auxiliary responses, we move on to the second stage of learning the policy that optimizes the main response. We propose a new constrained policy optimization method with multiple constraints. Let the actor and the critic be $\pi_{\theta_1}$ and $V_{\phi_1}$ respectively. At iteration $k$, we similarly update the critic to minimize the squared Bellman error:

$\phi_1^{(k+1)} \leftarrow \arg\min_{\phi} \mathbb{E}_{\pi_{\theta_1^{(k)}}}\big[\big(r_1(s,a) + \gamma_1 V_{\phi_1^{(k)}}(s') - V_{\phi}(s)\big)^2\big]$. (4)

The principle for updating the actor is two-fold: (i) maximize the advantage; (ii) restrict the policy to a region that is not far from the other policies. The optimization is formalized below:

$\max_{\pi} \mathbb{E}_{\pi}[A_1^{(k)}]$ s.t. $D_{KL}(\pi \,\|\, \pi_{\theta_i}) \le \epsilon_i$, $i = 2, \ldots, m$, where $A_1^{(k)} = r_1(s,a) + \gamma_1 V_{\phi_1^{(k)}}(s') - V_{\phi_1^{(k)}}(s)$. (5)

We give the closed-form solution of the Lagrangian of Eq. (5) in the following theorem. We omit the proof due to lack of space; please refer to Appendix A.

Theorem 1. The Lagrangian of Eq. (5) has the closed-form solution

$\pi^{*}(a|s) \propto \prod_{i=2}^{m} \big(\pi_{\theta_i}(a|s)\big)^{\frac{\lambda_i}{\sum_{j=2}^{m}\lambda_j}} \exp\Big(\frac{A_1^{(k)}}{\sum_{j=2}^{m}\lambda_j}\Big)$, (6)

where $\lambda_i$, $i = 2, \ldots, m$, are Lagrangian multipliers.

Given data collected by $\pi_{\theta_1^{(k)}}$, we learn the policy $\pi_{\theta_1}$ by minimizing its KL divergence from the optimal policy $\pi^{*}$:

$\theta_1^{(k+1)} \leftarrow \arg\min_{\theta} \mathbb{E}_{\pi_{\theta_1^{(k)}}}\big[D_{KL}(\pi^{*}(a|s)\,\|\,\pi_{\theta}(a|s))\big] = \arg\max_{\theta} \mathbb{E}_{\pi_{\theta_1^{(k)}}}\Big[\frac{\prod_{i=2}^{m}\big(\pi_{\theta_i}(a|s)\big)^{\frac{\lambda_i}{\sum_{j=2}^{m}\lambda_j}}}{\pi_{\theta_1^{(k)}}(a|s)} \exp\Big(\frac{A_1^{(k)}}{\sum_{j=2}^{m}\lambda_j}\Big) \log \pi_{\theta}(a|s)\Big]$. (7)

The full procedure of the two-stage constrained actor-critic algorithm is given in Appendix B; we name it TSCAC for short. We provide some intuition behind the actor update in (7). The term $\pi_{\theta_i}(a|s)$ is the probability of the taken action under policy $i$ and serves as an importance weight, which softly regularizes the learned policy $\pi_{\theta_1}$ to stay close to the other policies $\pi_{\theta_i}$. Smaller Lagrangian multipliers $\lambda_i$ indicate weaker constraints, and when $\lambda_i = 0$ the learned policy $\pi_{\theta_1}$ is allowed to be independent of the constraint policy $\pi_{\theta_i}$. Note that we set all multipliers $\lambda_i$ to the same value, which is more practical for the production system. The performance of TSCAC could likely be improved further by tuning a different multiplier for each constraint, but the effectiveness of TSCAC with a shared value of $\lambda$ is validated in both the offline and the live experiments, as we will see in the following sections.
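The following is a minimal sketch of the actor update (7) for a discrete action space: the sample weight multiplies a geometric mean of the frozen auxiliary policies, divided by the policy that collected the data, by the exponentiated scaled advantage. All tensor names and the Categorical parameterization are assumptions of this sketch, not the paper's implementation.

```python
import torch

def tscac_actor_loss(logits_main, logits_aux_list, a, advantage_main, lambdas, logits_behavior):
    """Stage-two actor loss in the spirit of Eq. (7).

    logits_main:     current actor pi_theta_1 being updated
    logits_aux_list: frozen stage-one actors pi_theta_i, i = 2..m
    logits_behavior: the policy that collected the batch (pi_theta_1^(k) online, pi_beta offline)
    """
    lam = torch.tensor(lambdas, dtype=torch.float32)
    lam_sum = lam.sum()
    log_prob = torch.distributions.Categorical(logits=logits_main).log_prob(a)
    with torch.no_grad():
        log_prob_beh = torch.distributions.Categorical(logits=logits_behavior).log_prob(a)
        log_prob_aux = torch.stack([
            torch.distributions.Categorical(logits=l).log_prob(a) for l in logits_aux_list])
        # w = prod_i pi_i(a|s)^(lambda_i / sum) / pi_behavior(a|s) * exp(A_1 / sum)
        w = torch.exp((lam[:, None] / lam_sum * log_prob_aux).sum(0)
                      - log_prob_beh + advantage_main / lam_sum)
    return -(w * log_prob).mean()
```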
Offline Learning

We now discuss adapting our constrained actor-critic method to the offline setting, i.e., learning from a fixed dataset. The main change when moving from online to offline learning is the bias correction of the policy gradient. The actor is no longer updated on data collected by the current policy but on data collected by another behavior policy $\pi_{\beta}$, whose induced data distribution may differ from that of the policy being updated. To address this distribution mismatch when estimating the policy gradient, a common strategy is to apply a bias-correction ratio via importance sampling [Precup 2000; Precup et al. 2001]. Given a trajectory $\tau = (s_1, a_1, s_2, a_2, \ldots)$, the bias-correction ratio on the policy gradient for policy $\pi_{\theta_i}$ is $w(s_t, a_t) = \prod_{t'=1}^{t} \frac{\pi_{\theta_i}(a_{t'}|s_{t'})}{\pi_{\beta}(a_{t'}|s_{t'})}$, which gives an unbiased estimate but can have huge variance. Therefore, we suggest a first-order approximation, using only the current action-selection ratio. When optimizing the actors of the auxiliary responses,

$\theta_i^{(k+1)} \leftarrow \arg\max_{\theta} \mathbb{E}_{\pi_{\beta}}\Big[\frac{\pi_{\theta_i^{(k)}}(a|s)}{\pi_{\beta}(a|s)}\, A_i^{(k)} \log \pi_{\theta}(a|s)\Big]$. (8)

When updating the actor of the main response, we have

$\theta_1^{(k+1)} \leftarrow \arg\max_{\theta} \mathbb{E}_{\pi_{\beta}}\Big[\frac{\prod_{i=2}^{m}\big(\pi_{\theta_i}(a|s)\big)^{\frac{\lambda_i}{\sum_{j=2}^{m}\lambda_j}}}{\pi_{\beta}(a|s)} \exp\Big(\frac{A_1^{(k)}}{\sum_{j=2}^{m}\lambda_j}\Big) \log \pi_{\theta}(a|s)\Big]$. (9)

Deterministic Policies

We now discuss the extension of TSCAC to deterministic policies [Lillicrap et al. 2015], inspired by the actor update of the constrained policy in (7). Similarly, at stage one, for each auxiliary response $i$ we learn a separate critic model $Q_{\phi_i}(s, a)$ and actor model $\pi_{\theta_i}(s)$. At stage two, for the main response, we learn the critic $Q_{\phi_1}(s, a)$ via temporal-difference learning, and the actor $\pi_{\theta_1}(s)$ is updated following the form

$\max_{\theta} \prod_{i=2}^{m} \big(h(\pi_{\theta_i}(s), \pi_{\theta_1}(s))\big)^{\lambda_i}\, Q_{\phi_1}(s, \pi_{\theta}(s))$, (10)

where $h(a_1, a_2)$ scores high when the two actions $a_1, a_2$ are close to each other and low otherwise, so that $h(\pi_{\theta_i}(s), \pi_{\theta_1}(s))$ scores high when the actions selected by $\pi_{\theta_1}$ and $\pi_{\theta_i}$ are close. $\lambda_i \ge 0$ plays a role similar to the constraint Lagrangian multiplier: a larger $\lambda_i$ denotes a stronger constraint. As an example, given an $n$-dimensional action space, one can choose $h(a_1, a_2) = \sum_{d=1}^{n} \exp\big(-\frac{(a_{1d} - a_{2d})^2}{2}\big)$. The deterministic version of TSCAC applies to settings with continuous actions, such as an embedding of the user preference.

5 OFFLINE EXPERIMENTS

In this section, we evaluate our method on a public dataset for short video recommendation via extensive offline learning simulations. We demonstrate the effectiveness of our approach, compared to existing baselines, in both achieving the main goal and balancing the auxiliaries. We also test the versatility of our method on another public recommendation dataset; due to lack of space, please refer to Appendix C.

Table 1: The statistics of KuaiRand.

Dimension    Number        Sparse Ratio
users        26,858
items        10,221,515
samples      68,148,288
click        25,693,008    37.70%
like         1,094,434     1.61%
comment      163,977       0.24%
hate         32,449        0.048%
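Returning to the deterministic variant in Eq. (10) above, the sketch below shows one way the stage-two objective could be written, with the summed Gaussian kernel from the example serving as the closeness score $h$. The actor/critic callables and the multiplier values are placeholders of this sketch, not the paper's implementation.

```python
import torch

def closeness_h(a1, a2):
    """h(a1, a2) from the example after Eq. (10): sum over action dimensions of
    exp(-(a1_d - a2_d)^2 / 2); large when the two action vectors are close."""
    return torch.exp(-((a1 - a2) ** 2) / 2.0).sum(-1)

def deterministic_stage_two_objective(actor_main, critic_main, aux_actors, lambdas, s):
    """Objective of Eq. (10): Q_1(s, pi_1(s)) scaled by the product of closeness scores
    to the frozen stage-one actors, each raised to its multiplier lambda_i."""
    a_main = actor_main(s)
    objective = critic_main(s, a_main).squeeze(-1)
    for lam, aux in zip(lambdas, aux_actors):
        with torch.no_grad():
            a_aux = aux(s)                      # frozen auxiliary action pi_theta_i(s)
        objective = objective * closeness_h(a_aux, a_main) ** lam
    return objective.mean()                     # ascend this w.r.t. actor_main's parameters
```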
5.1 Setup Dataset. We consider a public dataset for short video recommendation named KuaiRand (https://kuairand.com/) [Gao et al. 2022b], which is collected from a famous video-sharing mobile app and suitable for the offline evaluation of RL methods as it is unbiased. This dataset collects not only the overall WatchTime of the videos, but also the interaction behavior of the users including Click, Like, Comment and Hate. The statistics of the dataset are illustrated in Table 1. It shows that Like, Comment, and Hate are sparse signals. Note that Hate is extremely sparse. Logs provided by the same user are concatenated to form a trajectory; we choose top 150 videos that are most frequently viewed. MDP. \u2022 state \ud835\udc60\ud835\udc61: A 1044 dimension vector, which is a concatenation of user features(user property), the last 20 video features viewed by the user(user history) and all the 150 candidate video features(context). \u2022 action \ud835\udc4e\ud835\udc61: the video ID to be recommended currently. \u2022 reward \ud835\udc5f\ud835\udc61: a vector of five scores the user provided for the viewed videos in terms of Click, Like, Comment, Hate, and WatchTime. \u2022 episode: a sequence of users\u2019 video viewing history. \u2022 discount factor \ud835\udefe: 0.99 \u2022 objective: We set the main goal to be maximizing the video WatchTime, and treat others as the auxiliaries. Evaluation. We use the Normalised Capped Importance Sampling (NCIS) approach to evaluate different policies, which is a standard offline evaluation approach for RL methods in recommender systems [Zou et al. 2019]. We also evaluate our method in terms of \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Cai, et al. other metrics, please refer to Appendix D. The NCIS score is defined: \ud835\udc41(\ud835\udf0b) = \u00cd \ud835\udc60,\ud835\udc4e\u2208\ud835\udc37\ud835\udc64(\ud835\udc60,\ud835\udc4e)\ud835\udc5f(\ud835\udc60,\ud835\udc4e) \u00cd \ud835\udc60,\ud835\udc4e\u2208\ud835\udc37\ud835\udc64(\ud835\udc60,\ud835\udc4e) ,\ud835\udc64(\ud835\udc60,\ud835\udc4e) = min{\ud835\udc50, \ud835\udf0b(\ud835\udc4e|\ud835\udc60) \ud835\udf0b\ud835\udefd(\ud835\udc4e|\ud835\udc60) }, (11) where \ud835\udc37is the dataset, \ud835\udc64(\ud835\udc60,\ud835\udc4e) is the clipped importance sampling ratio, \ud835\udf0b\ud835\udefddenotes the behavior policy, \ud835\udc50is a positive constant. Baselines. We compare TSCAC with the following baselines. \u2022 BC: A supervised behavior-cloning policy \ud835\udf0b\ud835\udefdto mimic the recommendation policy in the dataset, which inputs the user state and outputs the video ID. \u2022 Wide&Deep [Cheng et al. 2016]: A supervised model which utilizes wide and deep layers to balance both memorization and generalization, which inputs the user state, outputs the item id, and the weight of each sample is set to be the weighted sum of all responses of this item. \u2022 DeepFM [Guo et al. 2017]: a supervised recommendation model which combines deep neural network and factorization machine, which inputs the user state, outputs the item id, and the weight of each sample is set to be the weighted sum of all responses of this item. \u2022 RCPO [Tessler et al. 2018] : A constrained actor-critic approach called reward-constrained policy optimization which optimizes the policy to maximize the Lagrange dual function of the constrained program. 
Specifically, the reward function is defined as \ud835\udc5f= \ud835\udc5f0 + \u00cd\ud835\udc5b \ud835\udc56=1 \ud835\udf06\ud835\udc56\u2217\ud835\udc5f\ud835\udc56, where \ud835\udc5f0 is main objective, WatchTime and \ud835\udc5f\ud835\udc56denotes other feedback, and \ud835\udf06\ud835\udc56is the Lagrangian Multiplier. \u2022 RCPO-Multi-Critic: We test an improved version of RCPO with multiple critics. We separately learn multiple critic models to evaluate the cumulative rewards of each feedback. Then when optimizing the actor, we maximize a linear combination of critics, weighted by the Lagrangian multipliers. \u2022 Pareto [Chen et al. 2021]: A multi-objective RL algorithm that finds the Pareto optimal solution for recommender systems. \u2022 TSCAC: our two-stage constrained actor-critic algorithm. 5.2 Overall Performance Table 2 presents the performance of different algorithms in terms of five scores. We can see that our TSCAC algorithm significantly outperforms other algorithms including both constrained reinforcement learning and supervised learning methods: for the main goal (WatchTime), TSCAC achieves the highest performance 13.14(2.23%); for the auxiliary goal, TSCAC also ranks highest for 3 out of 4 scores (Click, Like, Comment). Note that TSCAC outperforms BC and RCPO at each dimension. The Pareto algorithm indeed learns a Pareto optimal solution that achieves best performance at Hate, but gets the lowest performance 11.90(\u22127.4%), i.e., it does not satisfy the setting with the main goal to optimize the WatchTime. The RCPO algorithm achieves the second highest performance at WatchTime, 13.07(1.70%), but the score at Hate is the worst as the sparse signals are dominated by dense signals in a single evaluation model. Compared with RCPO, RCPO-Multi-Critic achieves much better score at Hate, which demonstrates the effectiveness of the multi-critic policy estimation method. TSCAC also outperforms RCPO-Multi-Critic at each dimension, which shows that the ability of our two-stage actor learning method to deal with multiple responses. 5.3 Ablation Study We investigate how the value of Lagrangian multiplier affects the performance. As we set the value of \ud835\udf06of all constraints to be the same in the second stage, we vary \ud835\udf06across [1\ud835\udc52\u22121, 1\ud835\udc52\u22122, 1\ud835\udc52\u2212 3, 1\ud835\udc52\u22124, 1\ud835\udc52\u22125] and present performance of TSCAC in terms of all responses. Recall that larger \ud835\udf06denotes stronger constraints of auxiliary responses. Figure 3 shows that with \ud835\udf06increasing, the main goal, WatchTime decreases as the constraints of auxiliary responses become stronger. As shown in Figure 3, the performance of interactions drops with small \ud835\udf061\ud835\udc52\u22125 as the constraints are weak. Interestingly, the performance of interactions also decreases with larger \ud835\udf06, which shows that too strong constraints affect the learning of the policy. The value of 1\ud835\udc52\u22124 achieves the best performance at interactions, and improve WatchTime significantly compared with other baselines. 6 LIVE EXPERIMENTS To demonstrate the effectiveness of our algorithm, we test its performance as well as other alternatives via live experiments in a popular short video platform. 
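For reference, the NCIS score of Eq. (11), which underlies all of the offline comparisons above, can be computed as in the short sketch below; the cap value and the synthetic logged data are placeholder assumptions of this sketch.

```python
import numpy as np

def ncis_score(pi_target, pi_behavior, rewards, cap=2.0):
    """Normalised Capped Importance Sampling (Eq. (11)).

    pi_target, pi_behavior: pi(a|s) and pi_beta(a|s) on the logged (s, a) pairs
    rewards:                the logged response (e.g., WatchTime) for each pair
    cap:                    the positive clipping constant c
    """
    w = np.minimum(cap, pi_target / pi_behavior)
    return np.sum(w * rewards) / np.sum(w)

# toy usage on synthetic logged data
rng = np.random.default_rng(1)
pi_b = rng.uniform(0.01, 0.1, size=1000)
pi_t = pi_b * rng.uniform(0.5, 2.0, size=1000)
r = rng.exponential(10.0, size=1000)
print(ncis_score(pi_t, pi_b, r))
```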
Algorithms are embodied in a candidateranking system used in production at a popular short video platform, that is, when a user arrives, these algorithms are expected to rank the candidate videos, and the system will recommend the top video to the user. We show that the proposed TSCAC algorithm is able to learn a policy that maximizes the main goal while also effectively balancing the auxiliary goal, and in particular, we set the main one as maximizing the WatchTime and the auxiliary one as improving the interactions between users and videos. 6.1 Setup Evaluation metrics. We use online metrics to evaluate policy performance. For the main goal, we look at the total amount of time user spend on the videos, referred to as WatchTime. For the auxiliary goal, users can interact with videos through multiple ways, such as sharing the video to friends, downloading it, or providing comments. Here, we focus on the three online metrics associated with the user-video interactions\u2014the total number of Share, Download, Comment interactions. MDP. Following the formulation in Section 3, we present the details of the Constrained MDP for short video recommendation. \u2022 state \ud835\udc60\ud835\udc61: user historical interactions (the list of items recommended to users at previous rounds and corresponding user feedbacks), user property (such as device and location) and the feature (the embeddings and statistics) of candidate videos at time \ud835\udc61. \u2022 action \ud835\udc4e\ud835\udc61: a vector embedding of algorithm-predicted user preferences on different video topics, which determines the actual recommendation action(the video to be recommended) via a ranking function described below: the ranking function: for each candidate video, this function calculates the dot product between the predicted user \fTwo-Stage Constrained Actor-Critic for Short Video Recommendation WWW \u201923, May 1\u20135, 2023, Austin, TX, USA Table 2: Performance of different algorithms on KuaiRand. Algorithm Click\u2191 Like\u2191(e-2) Comment\u2191(e-3) Hate\u2193(e-4) WatchTime\u2191 BC 0.5338 1.231 3.225 2.304 12.85 Wide&Deep 0.5544 1.244 3.344 2.011 12.84 3.86% 1.07% 3.69% \u221212.7% \u22120.08% DeepFM 0.5549\u2217 1.388\u2217 3.310 2.112 12.92 3.95%\u2217 12.76%\u2217 2.64% \u22128.31% 0.53% RCPO 0.5510 1.386 3.628\u2217 2.951 13.07\u2217 3.23% 12.57% 12.5%\u2217 28.1% 1.70%\u2217 RCPO-Multi-Critic 0.5519 1.367 3.413 2.108 13.00 3.41% 11.04% 5.83% \u22128.49% 1.14% Pareto 0.5438 1.171 3.393 0.9915\u2217 11.90 1.87% \u22124.85% 5.22% \u221256.96%\u2217 \u22127.4% TSCAC 0.5570 1.462 3.728 1.870 13.14 4.35% 18.80% 15.6% \u221218.83% 2.23% The number in the bracket stands for the unit of this column; The number in the first row of each algorithm is the NCIS score. The percentage in the second row means the performance gap between the algorithm and the BC algorithm. The numbers with \u2217denote the best performance among all baseline methods in each response dimension. The last row is marked by bold font when TSCAC achieves the best performance at each response dimension. 1e-1 1e-2 1e-3 1e-4 1e-5 0.550 0.552 0.554 0.556 Click 1e-1 1e-2 1e-3 1e-4 1e-5 1.350 1.375 1.400 1.425 1.450 1e 2 Like 1e-1 1e-2 1e-3 1e-4 1e-5 3.2 3.3 3.4 3.5 3.6 3.7 1e 3 Comment 1e-1 1e-2 1e-3 1e-4 1e-5 2.0 2.5 3.0 3.5 1e 4 Hate 1e-1 1e-2 1e-3 1e-4 1e-5 13.0 13.1 13.2 Watchtime Figure 3: Effect of the value of the Lagrangian multiplier on the performance. Figure 4: The workflow of RL in production system. 
preference vector (\ud835\udc4e\ud835\udc61) and the video embedding (representing its topic and quality) as in [Dulac-Arnold et al. 2015]. Then the video with the largest score is recommended. \u2022 reward \ud835\udc5f\ud835\udc61= (\ud835\udc59\ud835\udc61,\ud835\udc56\ud835\udc61): after each recommendation, the system observes how long the user spent on the video, WatchTime , denoted as \ud835\udc59\ud835\udc61, and whether the user has interacted with the video (Share/Download/Comment), denoted as \ud835\udc56\ud835\udc61. \u2022 episode: a trajectory starts when a user opens the app and ends when the user leaves. \u2022 policy: we choose to learn a Gaussian policy in the live experiments. Specifically, the action \ud835\udc4e\ud835\udc61is sampled from a multivariate Gaussian distribution whose mean and variance are output of the actor model. Workflow. As shown in Figure 4, RL runs as follows: \u2022 Inference When the user comes, the user state are sent to the actor network, the actor network sample action by the Gaussian distribution. Then the ranking function inputs both \fWWW \u201923, May 1\u20135, 2023, Austin, TX, USA Cai, et al. Table 3: Performance comparison of different algorithms with the LTR baseline in live experiments. Algorithm WatchTime Share Download Comment RCPO +0.309% \u22120.707% 0.153% \u22121.313% Interaction-AC +0.117% +5.008% +1.952% \u22120.101% TSCAC +0.379% +3.376% +1.733% \u22120.619% the action and the embedding of candidates, calculates the dot product between the action and the video embeddings as scores, and output the item with the highest score to the user. After that, (state, action, rewards, next state) are saved in the replay buffer. \u2022 Training The actor and the critic networks are trained with a mini-batch (state, action, rewards, next state), sampled from the replay buffer. Compared algorithms. We complement our evaluation with a supervised learning-to-rank (LTR) baseline, which is the default model run on the platform. \u2022 RCPO: Following [Tessler et al. 2018], we define a combined reward \ud835\udc59\ud835\udc61+\ud835\udf06\ud835\udc56\ud835\udc61and learn a policy to maximize the cumulative combined reward with discount factor 0.95, where \ud835\udf06is the Lagrangian multiplier. \u2022 TSCAC: We first learn a policy \ud835\udf0b2 to optimize the auxiliary goal. Then we learn a policy \ud835\udf0b1 to optimize the main goal with the soft constraint that \ud835\udf0b1 is close to \ud835\udf0b2. \u2013 Interaction-AC: At the first stage, we learn a policy \ud835\udf0b2 to maximize the interaction reward, with critic update following (2) and actor update following (3). \u2013 TSCAC At the second stage, we learn a main policy \ud835\udf0b1 to maximize the cumulative reward of WatchTime and softly regularize \ud835\udf0b1 to be close to \ud835\udf0b2, with critic update following (4) and actor update following (7). \u2022 LTR (Baseline): The learning-to-rank model [Liu et al. 2009] that takes user state embedding and video embedding as input and fits the sum of responses. Experimental details. To test different algorithms, we randomly split users on the platform into several buckets. The first bucket runs the baseline LTR model, and the remaining buckets run models RCPO, Interaction-AC, and TSCAC. Models are trained for a couple of days and then are fixed to test performance within one day. 
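To make the inference/training workflow above concrete, here is a minimal sketch of the loop: the actor parameterizes a Gaussian over preference vectors, the sampled action is turned into a recommendation by the dot-product ranking function, and transitions are stored in a replay buffer for mini-batch training. The placeholder actor, buffer size, and batch size are assumptions of this sketch, not details of the production system.

```python
import numpy as np
from collections import deque

class LiveRLWorkflowSketch:
    """Simplified inference/training loop for the Gaussian policy described above."""

    def __init__(self, act_dim, buffer_size=100_000):
        self.act_dim = act_dim
        self.replay = deque(maxlen=buffer_size)

    def actor(self, state):
        # placeholder: the real system uses an actor network to output mean and std
        return np.zeros(self.act_dim), np.ones(self.act_dim)

    def infer(self, state, video_embeddings):
        mean, std = self.actor(state)
        action = np.random.normal(mean, std)       # sample from the Gaussian policy
        scores = video_embeddings @ action          # ranking function: dot products
        return action, int(np.argmax(scores))       # recommend the top-scoring video

    def store(self, state, action, rewards, next_state):
        self.replay.append((state, action, rewards, next_state))

    def sample_batch(self, batch_size=256):
        idx = np.random.choice(len(self.replay),
                               size=min(batch_size, len(self.replay)), replace=False)
        return [self.replay[i] for i in idx]        # fed to the actor/critic updates
```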
6.2 Results Table 3 shows the performance improvement of algorithm comparison with the LTR baseline regarding metrics WatchTime, Share, Download, and Comment. As we can see, RCPO can learn to improve the WatchTime as compared to the baseline; but interactionsignals are too sparse with respect to WatchTime, such that when combining these responses together, it cannot effectively balance the interaction well. Performance of the Interaction-AC algorithm is as expected: with signal from only the interaction reward, it learns to improve the interaction-related metrics (Share, Download, Comment); such interactions between users and videos also improve Day 1 Day 3 Day 5 Day 7 Day 9 Day 11 0.0 0.1 0.2 0.3 % Watch Time Day 1 Day 3 Day 5 Day 7 Day 9 Day 11 0 2 4 % Share Day 1 Day 3 Day 5 Day 7 Day 9 Day 11 0 1 2 3 % Download Day 1 Day 3 Day 5 Day 7 Day 9 Day 11 1 0 1 % Comment Figure 5: Online performance gap of TSCAC over the LTR baseline of each day. the user WatchTime, since more interesting videos with high potential of invoking interactions are recommended, which optimizes user whole experience. Finally, The TSCAC algorithm achieves the best performance: as compared to RCPO, it has better WatchTime and does much better on interaction metrics, thanks to the effective softly regularization during training that it should not be too far from the Interaction-AC policy. Note that 0.1% improvement of WatchTime and 1% improvement of interactions are statistically significant in the short video platform. That is, the performance improvement of our proposed method over baselines is significant. The universal drop of Comment for all RL methods is due to the natural trade-off between WatchTime and Comment. To understand how the TSCAC algorithm learns to balance the main and auxiliary goal, Figure 5 plots the online performance gap of the second stage over the LTR baseline on both WatchTime and interactions. As shown, the algorithm quickly learns to improve the interaction metrics Share and Comment at the beginning, with the constraint of Interaction-AC policy. Then gradually, the model learns to improve WatchTime over time with sacrificing interactions a little. Note that the live performance of TSCAC outperforms RCPO significantly at each dimension, which demonstrates the effectiveness of our method. 7" + }, + { + "url": "http://arxiv.org/abs/2205.13248v1", + "title": "Constrained Reinforcement Learning for Short Video Recommendation", + "abstract": "The wide popularity of short videos on social media poses new opportunities\nand challenges to optimize recommender systems on the video-sharing platforms.\nUsers provide complex and multi-faceted responses towards recommendations,\nincluding watch time and various types of interactions with videos. As a\nresult, established recommendation algorithms that concern a single objective\nare not adequate to meet this new demand of optimizing comprehensive user\nexperiences. In this paper, we formulate the problem of short video\nrecommendation as a constrained Markov Decision Process (MDP), where platforms\nwant to optimize the main goal of user watch time in long term, with the\nconstraint of accommodating the auxiliary responses of user interactions such\nas sharing/downloading videos.\n To solve the constrained MDP, we propose a two-stage reinforcement learning\napproach based on actor-critic framework. At stage one, we learn individual\npolicies to optimize each auxiliary response. 
At stage two, we learn a policy\nto (i) optimize the main response and (ii) stay close to policies learned at\nthe first stage, which effectively guarantees the performance of this main\npolicy on the auxiliaries. Through extensive simulations, we demonstrate\neffectiveness of our approach over alternatives in both optimizing the main\ngoal as well as balancing the others. We further show the advantage of our\napproach in live experiments of short video recommendations, where it\nsignificantly outperforms other baselines in terms of watch time and\ninteractions from video views. Our approach has been fully launched in the\nproduction system to optimize user experiences on the platform.", + "authors": "Qingpeng Cai, Ruohan Zhan, Chi Zhang, Jie Zheng, Guangwei Ding, Pinghua Gong, Dong Zheng, Peng Jiang", + "published": "2022-05-26", + "updated": "2022-05-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR" + ], + "main_content": "Introduction The surging popularity of short videos has been changing the status quo of social media. As of 2021, the monthly active users on TikTok have reached one billion worldwide Tik [[n. d.]]. Such prevalence of short video consumption has brought in huge business opportunities for organizations. As a result, there has been an increasing interest in optimizing recommendation strategies for short video platforms, where user feedback is multifaceted. Potential responses from a user after consuming a video include WatchTime (the time spent on watching the video), and several types of interactions: Share (sharing this video with his/her friends), Download (downloading the video), Comment (providing comments on the video), etc. Thereby, established recommender systems that exclusively optimize a single objective (such as gross merchandise volume for e-commence platforms Pi et al. [2020]) is no longer suf\ufb01cient\u2014the applied systems should take all aspects of responses into consideration to optimize user experiences. In this paper, we present our solution in the context of constrained optimization. As opposed to Pareto optimality that is often applied to study multi-objective strategies Sener and Koltun [2018], Chen et al. [2021], preferences on different objectives are often pre-speci\ufb01ed in real applications. Notably, one main goal for short video platforms is to increase the watch time, which is observed from each video view and widely concerns all users. Besides, watch time re\ufb02ects user attention, which is the scarce resource that companies compete for. Conversely, other responses such as Share/Comment are not mutually exclusive among platforms and thus could be sacri\ufb01ced mildly. On the other hand, platforms have been focusing on optimizing user long-term engagement, which directly drives daily active users (DAU) arXiv:2205.13248v1 [cs.LG] 26 May 2022 \fand thereby the revenue growth. Recently, a growing literature has focused on applying reinforcement learning (RL) to recommender systems, due to its ability to improve cumulative reward Nemati et al. [2016], Zhao et al. [2017, 2018], Chen et al. [2018], Zou et al. [2019], Liu and Yang [2019], Chen et al. [2019b], Xian et al. [2019], Ma et al. [2020], Afsar et al. [2021], Ge et al. [2021]. In particular, watch time, as the dense response, can be effectively cumulatively maximized to increase user spent time across multiple requests with RL approaches Chen et al. [2019a]. 
Thereby, we propose to learn an RL-based agent that optimizes the main goal (WatchTime), with the constraint of compensating other auxiliary responses (Share, Download, and Comment) with reasonable levels. The problem of this constrained policy learning is much more challenging as compared to its unconstrained counterpart. A natural idea would be learning a value-based or policy-based model that maximizes the Lagrangian with pre-speci\ufb01ed multipliers. However, such method is often dif\ufb01cult to be realized in practice via standard RL methods for the following two reasons. First, it is not suf\ufb01cient to use a single policy evaluation model to estimate the Lagrangian dual objective. As discussed, the agent may receive different types of responses from the user. A straightforward approach is to combine them into a single weighted sum using pre-speci\ufb01ed multipliers, and learn a value-based model such as Q-learning Mnih et al. [2013] to optimize it, as proposed in Stamenkovic et al. [2021]. Such response combination is not adequate, particularly for responses with their own discount factors\u2014the formulation of temporal difference error in value-based models only allows for a single discount value. In scenarios where one discount factor suf\ufb01ces, it can still be dif\ufb01cult for a single value model to evaluate the policy accurately, especially when different responses are observed at various frequencies, as typical for short video recommendations. The WatchTime response is dense and observed from each video view, while the interaction-signal such as Share/Comment is much more sparse and may not be provided within dozens of views. Signal from the sparse responses will be weakened by the dense responses when naively summing them up together. To address this multi-response evaluation dif\ufb01culty, we separately evaluate each response via its own value model, which allows for response-speci\ufb01c discount factors and mitigates the interference on evaluation from one response on another, similar to the procedure conducted in Chen et al. [2021], Tajmajer [2018], Hessel et al. [2019]. As an example, we evaluate the behavior policy on a popular short video platform using data collected real time and \ufb01nd that such separate evaluation improves learning on WatchTime and interaction-signal by 0.191% and 0.143% respectively1; Appendix A elaborates the experimental detail. Second, it is hard for a single policy to balance both dense responses and sparse responses. Learning a sparse response itself is well-known to be problematic\u2014it may take the agent undesirably long to learn something meaningful Florensa et al. [2017], Riedmiller et al. [2018]. Coexistence of both dense and sparse responses exacerbates the learning dif\ufb01culty. In most time, the agent only learns to optimize the policy in the direction of optimizing dense responses, which may negatively affect its learning for sparse responses. On account of this, we propose to \ufb01rstly learn a policy to optimize each auxiliary response and then \u201csoftly\u201d regularize the policy of the main response to be in the neighborhood of others. We demonstrate empirically that our approach can better balance different responses in both simulated data and live experiments. 
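As a small illustration of why the responses are evaluated separately, the sketch below builds one-step targets for a dense response and a sparse one with their own discount factors (for example 0.95 for WatchTime and 0 for interactions, the values used later in the live experiments); a single critic trained on the summed reward has no way to express two different discounts.

```python
def td_targets_per_response(r_watch, r_inter, v_watch_next, v_inter_next,
                            gamma_watch=0.95, gamma_inter=0.0):
    """One-step targets, one per response, each with its own discount factor."""
    return (r_watch + gamma_watch * v_watch_next,   # dense WatchTime signal
            r_inter + gamma_inter * v_inter_next)   # sparse interaction signal

# example: a 12-second view with no interaction, bootstrapping from the two critics
watch_target, inter_target = td_targets_per_response(12.0, 0.0, 30.0, 0.2)
```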
Together, we summarize our contributions as below: \u2022 Constrained Optimization in Short Video Recommendations: We formalize the problem of constrained policy learning in short video recommendations, where different aspects of responses may be observed at various frequencies, and the agent learns to optimize one with the constraint of balancing others. \u2022 Multi-Critic Policy Estimation: To better evaluate policy on multiple responses that may differ in discount factors and observation frequencies, we propose to separately learn a value model to evaluate each response. \u2022 Two-Stage Actor-Critic Learning: We propose a two-stage actor-critic framework which \ufb01rstly learns a policy to optimize each auxiliary response and secondly regularizes the policy of the main response to be not far from others, which we demonstrate to be a more effective way in constrained optimization as compared with other alternatives. \u2022 Gains in Live Experiments: We demonstrate the effectiveness of our approach in live experiments, showing the ability of our approach in optimizing the main response of WatchTime as well as balancing other interaction ones. 1In real applications for video recommendations, an improvement around 0.1% on value estimation is already signi\ufb01cant to be re\ufb02ected in production performance. 2 \f2 Related Work Constrained Reinforcement Learning Our work is also closely related to the literature of constrained reinforcement learning, where the sequential decision making problem is formulated into a constrained Markov Decision Process Sutton and Barto [2018], and the policy learning procedure is expected to respect the constraints. There are mainly two categories of constraints: cumulative ones (sum of a given signal should be limited into certain region) and instantaneous ones (constraints should be satis\ufb01ed at each step) Liu et al. [2021], Perkins and Barto [2002], Garc\u0131a and Fern\u00e1ndez [2015]. To deal with cumulative constraints, there is a large body of literature focusing on Lagrangian relaxation Chow et al. [2017, 2019], Tessler et al. [2018], Dalal et al. [2018]. As an example, Tessler et al. [2018] propose to update the policy and the Lagrangian multiplier alternatively and prove the convergence of their algorithm to a \ufb01x point. This approach however does not deal with the dif\ufb01culty of policy learning on rewards with different observation frequencies and thus is dif\ufb01cult to achieve a good balance among multiple responses. In contrast, for each cumulative reward, we learn a policy to maximize it speci\ufb01cally, then we \u201csoftly\u201d regularize the main policy to be in the neighborhood of others. We show empirically that this is a more effective way for constrained policy learning when dealing with both sparse and dense rewards. Different from Nair et al. [2020] that studies in of\ufb02ine RL and regularizes the learned policy to be in the neighborhood of one behavior policy, we softly restrict the policy within other policies maximizing other auxiliary responses and we do not limit to of\ufb02ine settings. Multi-objective Optimization We also discuss a relevant line on multi-objective optimization. To trade off different objectives, methods in this \ufb01eld can be broadly categorized into two classes: the Pareto optimization and the joint optimization with pre-speci\ufb01ed weights. 
The goal of Pareto optimization is to \ufb01nd a solution such that no other solutions can concurrently improve all objectives, named as Pareto optimality Nguyen et al. [2020], Sener and Koltun [2018], Chen et al. [2021], ?. However, a Pareto optimal solution may not prioritize the objective that is most valued in applications. The second method combines different objectives together into a single one via pre-specify the weights White et al. [1980], Mossalam et al. [2016]. However, it is dif\ufb01cult to quantify these weights that can accurately re\ufb02ect preferences in real applications Tessler et al. [2018]. 3 Preliminaries 3.1 Constrained Markov Decision Process We start by formulating the problem of short video recommendation on mobile app services. When a user u opens the app, a new session starts. A session consists of multiple requests. At each request t when the user slides down the app, the recommender system (agent) takes an action at that recommends the user a video based on the user current state, characterized by his/her demographics, historical interactions, etc. Then the user provides multi-faceted responses (such as WatchTime, Share, Download, Comment) on the shown video, which are received by the agent as vector-valued reward signal and used for future planning; let m be the number of types of responses. The goal of the recommender system is to optimize cumulative reward of the main response (e.g., WatchTime), with the constraint of not sacri\ufb01cing others much. We model the above procedure as a Constrained Markov Decision Process(CMDP) Sutton and Barto [2018] (S, A, P, R, C, \u03c10, \u0393), where S is the state space of user current representation st, A is the action space (and each action at corresponds to a recommended video for one request), P : S \u00d7 A \u2192\u2206(S) captures the state transition, R : S \u00d7 A \u2192Rm de\ufb01nes the vector-valued reward function that yields m different rewards r(st, at) = \u0000r1(st, at), . . . , rm(st, at) \u0001 , \u03c10 is the initial state distribution, \u0393 = (\u03b31, . . . , \u03b3m) \u2208(0, 1)m denotes the vector of discount factor for reward of each response, and C speci\ufb01es the constraints on the auxiliary responses. De\ufb01ne the vector-valued discounted cumulative reward Rt as Rt = PT t\u2032=t \u0393t\u2032\u2212t \u00b7 r(st\u2032, at\u2032), where T is the session length (i.e., the number of requests), \u0393b = \u0000\u03b3b 1, . . . , \u03b3s m \u0001 , and x \u00b7 y denotes the pointwise product. Let V \u03c0(s) = \u0000V \u03c0 1 (s), . . . , V \u03c0 m(s) \u0001 be the state value E\u03c0[Rt|st = s] under actions sampled in accordance with policy \u03c0 and Q(s, a) = \u0000Q\u03c0 1(s, a), . . . , Q\u03c0 m(s, a) \u0001 be its state-action value E\u03c0[Rt|st = s, at = a]. Denote \u03c1\u03c0 as the state distribution induced by policy \u03c0. Without loss of generality, we set the \ufb01rst response as our main response. The goal is to learn a recommendation policy \u03c0(\u00b7|s) over the action space to solve the following optimization problem: max \u03c0 E\u03c1\u03c0 \u0002 V \u03c0 1 (s) \u0003 s.t. E\u03c1\u03c0 \u0002 V \u03c0 i (s) \u0003 \u2265Ci, i = 2, . . . , m (1) where Ci is constraint on the auxiliary response i. 
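Since the formulas in the paragraph above suffer from extraction damage, here is a cleaned-up restatement of the vector-valued return and the constrained objective (1), using only the notation already defined in Section 3.1.

```latex
% Vector-valued discounted return and the constrained objective (Eq. (1)),
% restated from the definitions in Section 3.1.
R_t = \sum_{t'=t}^{T} \Gamma^{\,t'-t} \cdot r(s_{t'}, a_{t'}),
\qquad \Gamma^{\,b} = (\gamma_1^{\,b}, \ldots, \gamma_m^{\,b}),
\qquad V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[ R_t \mid s_t = s \right],
\\[4pt]
\max_{\pi}\; \mathbb{E}_{\rho_{\pi}}\!\left[ V^{\pi}_{1}(s) \right]
\quad \text{s.t.} \quad
\mathbb{E}_{\rho_{\pi}}\!\left[ V^{\pi}_{i}(s) \right] \ge C_i,
\qquad i = 2, \ldots, m .
```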
3 \f4 Two-Stage Constrained Actor Critic In this section, we propose our two-stage constrained policy learning based on actor-critic framework, addressing the learning challenges in the context of dense and sparse rewards: Stage One For each auxiliary response, we learn a policy to optimize its cumulative reward. Stage Two For the main response, we learn a policy to optimize its cumulative reward, while limiting it to be close to other policies that are learned to optimize the auxiliary. We \ufb01rst elaborate our framework in the settings of online learning with stochastic policies (such as A2C and A3C Williams [1992], Mnih et al. [2016]) in Sections 4.1 and 4.2(the procedure is summarized in Appendix C). We then discuss its extensions to deterministic policies (such as DDPG and TD3 Lillicrap et al. [2015], Fujimoto et al. [2018]). For of\ufb02ine setting, please refer to Appendix D. 4.1 Stage One: Policy Learning for Auxiliary Responses At this stage, we learn policies to optimize the cumulative reward of each auxiliary response separately. For completeness, we write out our procedure following the standard advantage actor critic approach Williams [1992]. Considering response i, let the learned actor and the critic be parameterized by \u03c0\u03b8i and V\u03c6i respectively. At iteration k, we observe sample (s, a, s\u2032) collected by \u03c0\u03b8(k) i , i.e., s \u223c\u03c1\u03c0 \u03b8(k) i , a \u223c\u03c0\u03b8(k) i (\u00b7|s) and s\u2032 \u223cP(\u00b7|s, a). We update the critic to minimize the Bellman equation: \u03c6(k+1) i \u2190arg min \u03c6 E\u03c0 \u03b8(k) i h\u0000ri(s, a) + \u03b3iV\u03c6(k) i (s\u2032) \u2212V\u03c6(s) \u00012i . (2) We update the actor to maximize the advantage: \u03b8(k+1) i \u2190arg max \u03b8 E\u03c0 \u03b8(k) i h A(k) i log \u0000\u03c0\u03b8(a|s) \u0001i where A(k) i = ri(s, a) + \u03b3iV\u03c6(k) i (s\u2032) \u2212V\u03c6(k) i (s). (3) 4.2 Stage Two: Constrained Optimization of the Main Response After pretraining the policies \u03c0\u03b82, . . . , \u03c0\u03b8m that optimize the auxiliary responses, we now move onto the second stage of learning the policy to optimize the main response. We propose a new constrained advantage actor critic approach. Let the actor and the critic be \u03c0\u03b81 and V\u03c61 respectively. At iteration k, we similarly update the critic to minimize the Bellman equation: \u03c6(k+1) 1 \u2190arg min \u03c6 E\u03c0 \u03b8(k) 1 h\u0000r1(s, a) + \u03b31V\u03c6(k) 1 (s\u2032) \u2212V\u03c6(s) \u00012i . (4) The principle of updating the actor is two-fold: (i) maximizing the advantage; (ii) restricting the policy to the domain that is not far from other policies. The optimization is formalized below: max \u03c0 E\u03c0[A(k) 1 ] s.t. DKL(\u03c0||\u03c0\u03b8i) \u2264\u03f5i, i = 2, . . . , m, where A(k) 1 = r1(s, a) + \u03b31V\u03c6(k) 1 (s\u2032) \u2212V\u03c6(k) 1 (s). (5) Equation (5) has the closed form solution \u03c0\u2217(a|s) \u221d m Y i=2 \u0000\u03c0\u03b8i(a|s) \u0001 \u03bbi Pm j=2 \u03bbj exp \u0012 A(k) 1 Pm j=2 \u03bbj \u0013 , (6) where \u03bbi with i = 2, . . . , m are Lagrangian multipliers for constraints in (5), and the value of \u03bbi controls the degree of constraint. 
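For a discrete action set, the closed-form solution (6) can be normalized directly, as in the small sketch below; the function name and inputs (per-action probabilities from the frozen auxiliary policies and per-action advantages of the main response) are illustrative assumptions.

```python
import numpy as np

def closed_form_target_policy(probs_aux, advantage, lambdas):
    """pi*(a|s) of Eq. (6), normalized over a discrete action set:
    proportional to prod_i pi_i(a|s)^(lambda_i / sum_lambda) * exp(A_1(s,a) / sum_lambda)."""
    lam = np.asarray(lambdas, dtype=float)
    log_unnormalized = (lam[:, None] / lam.sum() * np.log(probs_aux)).sum(axis=0) \
                       + advantage / lam.sum()
    z = np.exp(log_unnormalized - log_unnormalized.max())   # numerically stable softmax
    return z / z.sum()

# toy usage: two auxiliary policies over 3 actions, equal multipliers
pi_star = closed_form_target_policy(
    probs_aux=np.array([[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]),
    advantage=np.array([0.1, -0.2, 0.3]),
    lambdas=[1e-4, 1e-4])
```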
Given data collected by \u03c0\u03b8(k) 1 , we learn the policy \u03c0\u03b81 by minimizing its KL divergence from the optimal policy \u03c0\u2217: \u03b8(k+1) 1 \u2190arg min \u03b8 E\u03c0 \u03b8(k) 1 [DKL(\u03c0\u2217(\u00b7|s)||\u03c0\u03b8(\u00b7|s))] = arg max \u03b8 E\u03c0 \u03b8(k) 1 h m Y i=2 \u0010 \u03c0\u03b8i(a|s) \u03c0\u03b8(k) 1 (a|s) \u0011 \u03bbi Pm j=2 \u03bbj exp \u0012 A(k) 1 Pm j=2 \u03bbj \u0013 log \u03c0\u03b8(a|s) i . (7) 4 \fAppendix B contains the derivation details. We here provide some intuition behind actor updating in (7). The ratio \u03c0\u03b8i(a|s) \u03c0 \u03b8(k) 1 (a|s) suggests that the updating direction of policy \u03c0\u03b81 will be favored when it\u2019s aligned with the constraint policies \u03c0\u03b8i, which effectively regularizes the learned policy \u03c0\u03b81 to be in the neighborhood of other policies \u03c0\u03b8i. Small Lagrangian multipliers \u03bbi indicate weaker constraints, and when \u03bbi = 0, we allow the learned policy \u03c0\u03b81 to be irrelevant of the constraint policy \u03c0\u03b8i. Deterministic Policies We now shed light on adaptation of our framework to deterministic policies such as deep deterministic policy gradient (DDPG) algorithm Lillicrap et al. [2015], inspired by the updating rule for the actor of constrained policy discussed in (7). Similarly, at stage one, for each auxiliary response i, we learn the actor \u03c0\u03b8i(s) and critic Q\u03c6i(s, a) via DDPG algorithm respectively. At stage two, for the main response, we learn critic Q\u03c61(s, a) via temporal learning; and for actor \u03c0\u03b81(s), the updating rule follows the form of max \u03b81 m Y i=2 \u0012h(a, \u03c0\u03b8i(s)) h(a, \u03c0\u03b8i(s)) \u0013 \u03bb1 Pm j=2 \u03bbj f \u0012Q\u03c61(s, \u03c0(s)) Pm j=2 \u03bbj \u0013 , (8) where f is an increasing function which pushes the gradient of \u03c0\u03b81 towards increasing the policy value; h(a1, a2) scores high when two actions a1, a2 are close to each other and scores low vice versa; \u03bbi \u22650 plays the same role as the constraint Lagrangian multiplier in (7)\u2014larger \u03bbi denotes stronger constraint. As a demonstration, one can choose f to be the identity function and h(a1, a2) = exp \u0000\u2212(a1\u2212a2)2 2 \u0001 . Section 5 showcases how this construction of softly constrained DDPG algorithm effectively achieves the main goal as well as balancing the auxiliaries. 5 Of\ufb02ine Experiments In this section, we evaluate our approach on a public dataset via extensive of\ufb02ine learning simulations. We demonstrate the effectiveness of our approach as compared to existing baselines in both achieving the main goal and balancing the auxiliaries. 5.1 Setup Dataset We consider a hotel-review dataset named TripAdvisor, which is a standard dataset for studying policy optimization in recommender system with multiple responses in Chen et al. [2021]. In this data, customers not only provide an overall rating for hotels but also score hotels in multiple aspects including service, business, cleanliness, check-in, value, rooms, and location Alam et al. [2016]. 2 Reviews provided by the same user are concatenated chronologically to form a trajectory; we \ufb01lter trajectories with length smaller than 20. In total, we have 20277 customers, 150 hotels, and 257932 reviews. MDP A trajectory tracks a customer hotel-reviewing history. 
For each review, we have state st: customer ID and the last three reviewed hotel IDs as well as corresponding multi-aspect review scores; action at: currently reviewed hotel ID; reward rt: a vector of eight scores the customer provided for the reviewed hotel in terms of service, business, cleanliness, check-in, value, rooms, location, and overall rating; discount factor \u03b3: 0.99. We set the main goal to be maximizing the cumulative overall rating, and treat others as the auxiliaries. Evaluation We use the Normalised Capped Importance Sampling ((NCIS) approach to evaluate different policies, which is a standard of\ufb02ine evaluation method in literature Swaminathan and Joachims [2015]. Compared algorithms We compare our approach with a range of recommendation algorithms. \u2022 BC: a supervised behavior-cloning policy \u03c0\u03b2 to mimic customer reviewing pattern, with input as the user state and output as the reviewed hotel ID. \u2022 Wide&DeepCheng et al. [2016]: a supervised model which utilizes wide and deep layers to balance both memorization and generalization, with input as the user state, output as the reviewed hotel id, and sample weight as the weighted sum of 8 scores for this review. \u2022 A3CMnih et al. [2016]: an online RL approach with one actor and one critic, where reward is the weighted sum of 8 scores for a given customer-hotel review. 2The dataset consists of both the main objective and other responses, which can also be used to evaluate constrained policy optimization in recommender system. 5 \fAlgorithm BC Wide&Deep A3C DDPG RCPO Pareto Constrained Service 3.38 3.41\u2217 3.37 3.4 3.41 3.36 3.43 Business \u22121.86 \u22121.86 \u22121.78\u2217 \u22121.82 \u22121.82 \u22121.79 -1.82 Cleanliness 3.57 3.62\u2217 3.56 3.61 3.62\u2217 3.57 3.64 Check-in \u22120.73 \u22120.75 \u22120.65 \u22120.71 \u22120.68 \u22120.62\u2217 \u22120.68 Value 3.32 3.36\u2217 3.27 3.34 3.35 3.29 3.37 Rooms 2.92 2.96 2.97\u2217 2.95 2.97\u2217 2.93 3.00 Location 2.93\u2217 2.88 2.88 2.91 2.87 2.86 2.98 Overall Rating 3.92 3.98 3.94 3.97 3.99\u2217 3.95 3.99 Table 1: Performance of different algorithms on an of\ufb02ine dataset. The results with \u2217denote the best performance among all baseline methods in each response dimension, and the data in last column is marked by bold font when our constrained-DDPG achieves the best performance. \u2022 DDPGLillicrap et al. [2015]: an of\ufb02ine RL approach with one actor and one critic, where reward is the weighted sum of 8 scores for a given customer-hotel review. \u2022 RCPO: an of\ufb02ine RL approach that extends the reward-constrained policy optimization of A3C algorithm in Tessler et al. [2018] to DDPG algorithm. Contrary to the standard DDPG, we learn eight critics for the eight scores and use Lagrangian multipliers to sum them up for the actor optimization. \u2022 Pareto: a recommendation model based on DDPG algorithm to \ufb01nd the Pareto optimal solution for multiobjective optimization. \u2022 Constrained (Ours): our constrained actor critic approach based on DDPG algorithm, where the construction follows the discussion in Section 4.2. We note that we use DDPG instead of A3C as the base actor critic model to develop constrained policy optimization (RCPO, Pareto, and Ours), by the nature of of\ufb02ine learning. As a comparison, we also present the performance of A3C, which is for online learning and thus is outperformed by DDPG on this dataset\u2014as we shall see shortly. 
5.2 Overall Performance Table 1 presents the results of different algorithms in terms of eight scores. First note that A3C is outperformed by DDPG in most scores, which is as expected since A3C is an online learning algorithm that does not \ufb01t the of\ufb02ine setup here; this justi\ufb01es our comparison focusing on DDPG-based constrained RL algorithms. We can see that our approach Constrained-DDPG performs the best among all algorithms: for the main goal, Constrained-DDPG achieves the highest overall rating 3.99; for the auxiliary goal, Constrained-DDPG also ranks highest for 5 out of 7 scores (service, cleanliness, value, rooms, location). The Pareto algorithm indeed learns a Pareto-optimal solution that achieves best performance on the check-in score, which however does not satisfy the setting here with the main goal to optimize the overall rating. The RCPO algorithm achieves the same best overall score as our approach, but they sacri\ufb01ce much on the others, and in particular, the location score is even lower than that from the BC algorithm. 5.3 Ablation Study The ablation study contains discussing the effect of the Lagrangian multiplier as well as the effect of the discount factor. Due to lack of space, the latter is attached in Appendix E. We investigate how the Lagrangian multiplier, which controls the strength of constraint, affects our model performance. We vary \u03bb across [1e \u22128, 2.56e \u22126, 1e \u22124, 1.6e \u22123, 1, 1e4] and present performance of our constrained-DDPG in terms of all eight scores. Recall that larger \u03bb denotes stronger constraint that optimizes scores other than the overall one. Figure 1 shows that with \u03bb increasing, our policy performance is also improved on most constraint scores (including service, business, cleanliness, value, rooms, and location), which is as expected since the learned policy becomes closer to the constraint policy that optimizes those scores. 6 Live Experiments The ultimate goal of recommender systems is to improve online user experience. To demonstrate the effectiveness of our algorithm, we test its real-world performance as well as other alternatives via A/B experiments. Algorithms are embodied in a candidate-ranking system used in production at a popular short video platform, that is, when a user arrives, these algorithms are expected to rank the candidate videos, and the system will recommend the top video to the 6 \f1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 3.426 3.428 3.430 3.432 Service 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 1.818 1.817 1.816 1.815 Business 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 3.634 3.636 3.638 Cleanliness 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 0.6775 0.6750 0.6725 0.6700 0.6675 0.6650 Check-in 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 3.366 3.368 3.370 3.372 Value 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 2.992 2.994 2.996 2.998 Rooms 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 2.970 2.972 2.974 2.976 2.978 Location 1e-8 2.56e-6 1e-4 1.6e-3 1 1e4 3.980 3.982 3.984 3.986 3.988 3.990 Overall Figure 1: Effect of Lagrangian multiplier on model performance. The X-axis is the of Lagrangian multiplier and the Y-axis is the score of each response. user. We show that the proposed constrained actor-critic model is able to learn a policy that maximizes the main goal while also effectively balancing the auxiliary goal, and in particular, we set the main one as maximizing the watch time and the auxiliary one as improving the interactions between users and videos. 6.1 Setup Evaluation metrics We use online metrics to evaluate policy performance. 
For the main goal, we look at the total amount of time user spend on the videos, referred to as WatchTime. For the auxiliary goal, users can interact with videos through multiple ways, such as sharing the video to friends, downloading it, or providing comments. Here, we focus on the three online metrics associated with the user-video interactions\u2014the total number of Share, Download, Comment interactions. MDP Following the formulation in Section 3.1, we present the constrained MDP in the context of short video recommendation. A trajectory starts when a user opens the app and ends when the user leaves. At time t, we have \u2022 state st: a vector embedding of user current representation, for which we concatenate embeddings of user historical interactions (encoded by recurrent neural networks) and instantaneous context (such as device and location). \u2022 action at: a vector embedding of algorithm-predicted user preferences on different video topics, which determines the actual recommendation action\u2013the video to be recommended\u2014via a ranking function described below. \u2022 the ranking function: for each candidate video, this function calculates the dot product between the predicted user preference vector (at) and the video embedding (representing its topic and quality). The platform then recommends the video that achieves the largest score. \u2022 reward rt = (lt, it): after each recommendation, the system observes how long the user spent on the video, denoted as lt, and whether the user has interacted with the video (such as sharing/downloading/commenting on it), denoted as it. \u2022 discount factor: we set \u03b3l = 0.95 for the time reward lt and \u03b3i = 0.0 for interaction reward it if not speci\ufb01ed otherwise.3 Compared algorithms We choose A3C Mnih et al. [2016] as the base actor critic model, since algorithms compared are trained online in our live experiment setup, as opposed to the of\ufb02ine learning in Section 5 that uses DDPG as the base actor critic model. Speci\ufb01cally, the action at is sampled from a multivariate Gaussian distribution whose mean and variance are output of the actor model. We also complement our evaluation with a supervised learning-to-rank (LTR) baseline, which is the default model run on the platform. \u2022 A3C: We de\ufb01ne a combined reward mt = lt + \u03bbit and learn a standard A3C Mnih et al. [2016] policy to maximize cumulative mt with discount factor 0.95. 3We \ufb01nd that 0.95 is optimal for optimizing Watch time and 0 is optimal for maximizing the interactions in live experiments. 7 \fAlgorithm WatchTime Share Download Comment A3C +0.309% \u22120.707% 0.153% \u22121.313% RCPO-A3C +0.283% \u22121.075% \u22120.519% \u22120.773% Interaction +0.117% +5.008% +1.952% \u22120.101% Constrained +0.336% +3.324% +1.785% \u22120.618% Table 2: Performance of different algorithms relative to a supervised LTR baseline in a live experiment. \u2022 RCPO-A3C : We separately learn two critic models Vl, Vi to evaluate cumulative time reward lt (with \u03b3l = 0.95) and instant interaction reward it (with \u03b3i = 0). Then when optimizing the actor, we use advantage as a linear combination of advantages calculated from two critic models respectively: At = Al,t + \u03bbAi,t, where Al,t = lt+\u03b3lVl(st+1)\u2212Vl(st), Ai,t = it+\u03b3iVi(st+1)\u2212Vi(st), and \u03bb can be viewed as the Lagrangian multiplier in Tessler et al. [2018] 4. 
\u2022 Two-Stage constrained A3C (Ours): Following Algorithm 1, we \ufb01rst learn a policy \u03c0i to optimize the auxiliary goal. Then we learn a policy \u03c0d to optimize the main goal with the constraint that \u03c0d is in the neighborhood of \u03c0i. \u2013 Interaction: At the \ufb01rst stage, we learn a A3C policy \u03c0i to maximize instant interaction reward it, with critic update following (2) and actor update following (3). \u2013 Constrained At the second stage, we learn a constrained A3C policy \u03c0d which maximizes the cumulative time reward dt in the neighborhood of policy \u03c0i, with critic update following (4) and actor update following (7). \u2022 LTR (Baseline): The learning-to-rank model? that takes user state embedding and video embedding as input and \ufb01ts the sum of responses. Experimental details To test different algorithms, we randomly split users on the platform into \ufb01ve buckets with splitting ratio being 80%, 5%, 5%, 5%, 5%. The \ufb01rst bucket runs the baseline LTR model, and the remaining buckets run models A3C, RCPO-A3C, Interaction-A3C, and Constrained-A3C respectively. Models are pre-trained online for a couple of days and then are \ufb01xed to concurrently test performance within one day. 6.2 Results Table 2 shows the results of algorithm comparison regarding metrics WatchTime, Share, Download, and Comment. As we can see, both A3C with combined reward and RCPO-A3C with combined advantage learn to improve the WatchTimeas compared to the base model; but interaction-signal is too sparse with respect to WatchTime, such that when combining these two responses together\u2013in the form of either reward or advantage\u2013both models cannot effectively balance the interaction well. Performance of the Interaction model is as expected: with signal from only the interaction reward, the model learns to improve the interaction-related metrics (Share, Download, Comment); such interactions between users and videos also improve the user watch time, since more interesting videos with high potential of invoking interactions are recommended, which optimizes user whole experience. Finally, our model achieves the best performance: as compared to A3C and RCPO-A3C, it has slightly better WatchTimeand does much better on interaction metrics, thanks to the effective regularization during training that it should not be too far from the Interaction-A3C policy. To understand how our Constrained-A3C model learns to balance the main and auxiliary goal, Figure 2 plots its online performance\u2013relative to the LTR baseline\u2013during the learning phase on the two live metrics: WatchTimeand Share. As shown, The model quickly learns to improve the Sharemetric by being restricted to the neighborhood of Interaction-A3C policy, demonstrating the effectiveness of our soft constraint. Then gradually, the model learns to improve WatchTime over time. 7" + }, + { + "url": "http://arxiv.org/abs/2108.04526v3", + "title": "A Survey on Deep Reinforcement Learning for Data Processing and Analytics", + "abstract": "Data processing and analytics are fundamental and pervasive. Algorithms play\na vital role in data processing and analytics where many algorithm designs have\nincorporated heuristics and general rules from human knowledge and experience\nto improve their effectiveness. 
Recently, reinforcement learning, deep\nreinforcement learning (DRL) in particular, is increasingly explored and\nexploited in many areas because it can learn better strategies in complicated\nenvironments it is interacting with than statically designed algorithms.\nMotivated by this trend, we provide a comprehensive review of recent works\nfocusing on utilizing DRL to improve data processing and analytics. First, we\npresent an introduction to key concepts, theories, and methods in DRL. Next, we\ndiscuss DRL deployment on database systems, facilitating data processing and\nanalytics in various aspects, including data organization, scheduling, tuning,\nand indexing. Then, we survey the application of DRL in data processing and\nanalytics, ranging from data preparation, natural language processing to\nhealthcare, fintech, etc. Finally, we discuss important open challenges and\nfuture research directions of using DRL in data processing and analytics.", + "authors": "Qingpeng Cai, Can Cui, Yiyuan Xiong, Wei Wang, Zhongle Xie, Meihui Zhang", + "published": "2021-08-10", + "updated": "2022-02-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DB" + ], + "main_content": "Introduction In the age of big data, data processing and analytics are fundamental, ubiquitous, and crucial to many organizations which undertake a digitalization journey to improve and transform their businesses and operations. Data analytics typically entails other key operations such as data acquisition, data cleansing, data integration, modeling, etc., before insights could be extracted. Big data can unleash signi\ufb01cant value creation across many sectors such as healthcare and retail [56]. However, the complexity of data (e.g., high volume, high velocity, and high variety) presents many challenges in data analytics and hence renders the di\ufb03culty in drawing meaningful insights. To tackle the challenge and facilitate the data processing and analytics e\ufb03ciently and e\ufb00ectively, a large number of algorithms and techniques have been designed and numerous learning systems have also been developed by researchers and practitioners such as Spark MLlib [63], and Ra\ufb01ki [106]. To support fast data processing and accurate data analytics, a huge number of algorithms rely on rules that are developed based on human knowledge and experience. For example, shortest-job-\ufb01rst is a scheduling algorithm that chooses the job with the smallest execution time for the next execution. However, without fully exploiting characteristics of the workload, it can achieve inferior performance compared to a learning-based scheduling algorithm [58]. Another example is packet classi\ufb01cation in computer networking which matches a packet to a rule from a set of rules. One solution is to construct the decision \u2217These authors have contributed equally to this work, and M. Zhang is the contact author. 1 arXiv:2108.04526v3 [cs.LG] 4 Feb 2022 \ftree using hand-tuned heuristics for classi\ufb01cation. Speci\ufb01cally, the heuristics are designed for a particular set of rules and thus may not work well for other workloads with different characteristics [47]. We observe three limitations of existing algorithms [97, 46]. First, the algorithms are suboptimal. Useful information such as data distribution could be overlooked or not fully exploited by the rules. Second, the algorithm lacks adaptivity. Algorithms designed for a speci\ufb01c workload cannot perform well in another di\ufb00erent workload. 
Third, algorithm design is a time-consuming process. Developers have to spend much time trying many rules to find one that empirically works. Learning-based algorithms have also been studied for data processing and analytics. Two types of learning methods are often used: supervised learning and reinforcement learning. They achieve better performance by directly optimizing the performance objective. Supervised learning typically requires a rich set of high-quality labeled training data, which can be hard and challenging to acquire. For example, configuration tuning is important to optimize the overall performance of a database management system (DBMS) [44]. There could be hundreds of tuning knobs that are correlated in discrete and continuous space. Furthermore, diverse database instances, query workloads, and hardware characteristics render data collection infeasible, especially in the cloud environment. Compared to supervised learning, reinforcement learning shows good performance because it adopts a trial-and-error search and requires fewer training samples to find a good configuration for cloud databases [123]. Another specific example is query optimization in query processing. Database system optimizers are tasked with finding the best execution plan for a query to reduce its cost. Traditional optimizers typically enumerate many candidate plans and use a cost model to find the plan with minimal cost. The optimization process can be slow and inaccurate [42]. Without relying on an inaccurate cost model, deep reinforcement learning (DRL) methods improve the execution plan (e.g., changing the table join orders) by interacting with the database [61, 37]. Figure 1 provides a typical workflow for query optimization using DRL. When the query is sent to the agent (i.e., the DRL optimizer), it produces a state vector by featurizing essential information, such as the accessed relations and tables. Taking the state as input, the agent employs neural networks to produce a probability distribution over an action set, where the action set could contain all possible join operations as potential actions. Each action denotes a partial join plan on a pair of tables, and the state is updated once an action is taken. After taking possible actions, a complete plan is generated, which is then executed by a DBMS to get the reward. In this query optimization problem, the reward can be calculated from the real latency. During the training process with this reward signal, the agent improves the policy and produces a better join ordering with a higher reward (i.e., lower latency).
[Figure 1: The Workflow of DRL for Query Optimization. A, B, C and D are four tables. The diagram shows a SQL query being featurized into a state, the agent's policy selecting join actions (e.g., A ⋈ B on A.age = B.age) from the action set, and the DBMS engine executing the complete join ordering plan to produce the reward R(s, a) and the transition P(s'|s, a).]
Reinforcement learning (RL) [89] focuses on learning to make intelligent actions in an environment. The RL algorithm works on the basis of exploration and exploitation to improve itself with feedback from the environment.
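For concreteness, the following is a minimal Python sketch of the interaction loop described around Figure 1. It is illustrative only: featurize, policy, and dbms_execute are hypothetical placeholders, not components of any of the cited systems.

def optimize_join_order(query, tables, policy, featurize, dbms_execute):
    # Each element of `remaining` is a (partial) join tree; initially single tables.
    remaining = [(t,) for t in tables]
    plan = []
    while len(remaining) > 1:
        state = featurize(query, remaining)          # encode query info + partial plans
        pairs = [(i, j) for i in range(len(remaining))
                        for j in range(len(remaining)) if i < j]
        i, j = policy(state, pairs)                  # pick which two subtrees to join next
        joined = (remaining[i], remaining[j])
        plan.append(joined)
        remaining = [t for k, t in enumerate(remaining) if k not in (i, j)] + [joined]
    latency = dbms_execute(query, plan)              # run the complete plan in the DBMS
    return plan, -latency                            # lower latency => higher reward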
In the past decades, RL has achieved tremendous improvements in both theoretical and technical aspects [86, 89]. Notably, DRL incorporates deep learning (DL) techniques to handle complex unstructured data and has been designed to learn from historical data and self-exploration to solve notoriously hard and large-scale problems (e.g., AlphaGo[85]). In recent years, researchers from di\ufb00erent communities have proposed DRL solutions to address issues in data processing and analytics[116, 58, 52]. We categorize existing works using DRL from two perspectives: system and application. From the system\u2019s perspective, we focus on fundamental research topics ranging from general ones, such as scheduling, to system-speci\ufb01c ones, such as query optimization in databases. We shall also emphasize how it is formulated in the Markov Decision Process and discuss how the problem can be solved by DRL more e\ufb00ectively compared to traditional methods. Many techniques such as sampling and simulation are adopted to improve DRL training e\ufb03ciency because workload execution and data collection in the real system could be time-consuming [31]. From the application\u2019s perspective, we shall cover various key applications in both data processing and data analytics to provide a comprehensive understanding of the DRL\u2019s usability and adaptivity. Many domains are transformed by the adoption of DRL, which helps to learn domain-speci\ufb01c knowledge about the applications. In this survey, we aim at providing a broad and systematic review of recent advancements in employing DRL in solving data systems, data processing and analytics issues. In Section 3 \f2, we introduce the key concepts, theories, and techniques in RL to lay the foundations. To gain a deeper understanding of DRL, readers could refer to the recently published book [13], which covers selected DRL research topics and applications with detailed illustrations. In Section 3, we review the latest important research works on using DRL for system optimization to support data processing and analytics. We cover fundamental topics such as data organization, scheduling, system tuning, index, query optimization, and cache management. In Section 4, we discuss using DRL for applications in data processing and analytics ranging from data preparation, natural language interaction to various real-world applications such as healthcare, \ufb01ntech, E-commerce, etc. In Section 5, we highlight various open challenges and potential research problems. We conclude in Section 6. This survey focuses on recent advancements in exploring RL for data processing and analytics that spurs great interest, especially in the database and data mining community. There are survey papers discussing DRL for other domains. We refer readers to the survey of DRL for healthcare in [118], communications and networking in [54], and RL explainability in [76]. Another work[107] discusses how deep learning can be used to optimize database system design, and vice versa. In this paper, we use \"DRL\" and \"RL\" interchangeably. 2 Theoretical Foundation and Algorithms of Reinforcement Learning RL is targeted to solve the sequential decision making problem and the goal is to take actions with maximum expected rewards. In detail, the agent follows a policy to make a series of decisions (i.e. taking actions) in di\ufb00erent states of the environment, and the sequence of the states and the actions form a trajectory. 
To estimate whether the policy is good or not, each decision under the policy will be evaluated by the accumulated rewards through the trajectory. After evaluating the policy from the trajectories, the agent next improves the policy by increasing the probabilities of making decisions with greater expected rewards. By repeating these steps, the agent can improve the policy through trial-and-error until the policy reaches the optimal, and such a sequential decision-making process is modeled via Markov Decision Process (MDP). 2.1 Markov Decision Process Mathematically, MDP, shown in Figure 1, is a stochastic control process M de\ufb01ned by a tuple with 5 elements, M = (S, A, R, P, \u03b3), which are explained as follows. \u2022 State S: S is the space for states that denote di\ufb00erent situations in the environment and st \u2208S denotes the state of the situation at the time t. \u2022 Action A: A is the space for actions that the agent can take; the actions can either be discrete or continuous, and at \u2208A denotes the action taken at the time t. \u2022 Reward function R(st, at): It denotes the immediate reward of the action at taken under the state st. \u2022 Transition function P(st+1 = s\u2032|st = s, at = a): It denotes the probability of transition to the state s\u2032 at the time t + 1 given the current state s and the taken action a at the time t. \u2022 Discount factor \u03b3 \u2208[0, 1]: The total rewards of a certain action consist of both immediate rewards and future rewards, and the \u03b3 quanti\ufb01es how much importance we give for future rewards. 4 \fWe take the query optimization problem demonstrated in Figure 1 to help explain the \ufb01ve components of the MDP. In this example, the state is expressed as a state vector, which summarizes the information of relations and tables that are assessed by the query q. In each state, the RL agent produces a probability distribution over all potential actions where each action denotes a partial join plan on a pair of tables. After repeating these two processes, it reaches a terminal state where the \ufb01nal join ordering is generated for an agent to execute, and all actions\u2019 target rewards are measured by the actual performance (i.e., latency) or a cost model. As for the transition function, the transitions of the states are always deterministic in both this problem and most of the other DB problems. In RL, we aim to train the agent with a good policy \u03c0 that is a mapping function from state to action. Through the policy, the agent can take a series of actions that will result in continuous changes in the states, and the sequence of the states and the actions following the policy \u03c0 form a trajectory \u03c4 = (s0, a0, s1, a1, ...). From each \u03c4, we can evaluate the e\ufb00ect of each action by the accumulated rewards G, and it consists of the immediate reward of this action and the discounted rewards of its following actions in the trajectory. The total result G for the action at is as follows: G(\u03c4) = P t=0 \u03b3trt, where \u03b3 quanti\ufb01es how much importance we give for future rewards. With a bigger \u03b3, the RL agent will be more likely to take any action that may have a less immediate reward at the current time but has a greater future reward in expectation. RL continuously evaluates the policy \u03c0 and improves it until it reaches the optimal policy \u03c0\u2217= arg max(\u03c4\u223c\u03c0) G(\u03c4) where the agent always takes actions that maximize the expected return. 
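As a small illustration of the return G(τ) defined above, the sketch below computes the discounted sum of a reward sequence for a fixed γ; it is purely illustrative and makes no assumption beyond the definition itself.

def discounted_return(rewards, gamma=0.95):
    # G(tau) = sum over t of gamma**t * r_t, accumulated from the end of the trajectory
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Example: discounted_return([1.0, 0.0, 2.0], gamma=0.9) == 1.0 + 0.9*0.0 + 0.81*2.0 == 2.62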
To evaluate the policy π, RL algorithms estimate how good or bad it is for a state and a state-action pair by the function V and the function Q respectively. Both of these two value functions are calculated according to the discounted return G in expectation, which can be written as:

V_\pi(s) = \mathbb{E}_{\tau \sim \pi}[G(\tau) \mid s_0 = s]   (1)

Q_\pi(s, a) = \mathbb{E}_{\tau \sim \pi}[G(\tau) \mid s_0 = s, a_0 = a]   (2)

These two value functions have a close association where V_π(s_t) is the expectation of the function Q over all possible actions under the state s_t according to the policy π, and Q_π(s_t, a_t) is the combination of the immediate reward of the action a_t and the expectation of all possible states' values after taking the action a_t under the state s_t. Hence, we have:

V_\pi(s) = \sum_{a \in A} \pi(a|s) Q_\pi(s, a)   (3)

Q_\pi(s, a) = R(s, a) + \gamma \sum_{s' \in S} P(s'|s, a) V_\pi(s')   (4)

Given a policy π, we can evaluate its value functions by Bellman equations [89] which utilize the recursive relationships of these value functions. Formally, Bellman equations deduce the relationships between a given state (i.e. function V) or a given state-action pair (i.e. function Q) and its successors, which can be written as:

V_\pi(s) = \sum_{a \in A} \pi(a|s)\,[R(s, a) + \gamma \sum_{s' \in S} P(s'|s, a) V_\pi(s')]   (5)

Q_\pi(s, a) = \sum_{s' \in S} P(s'|s, a)\,[R(s, a) + \gamma \sum_{a' \in A} \pi(a'|s') Q_\pi(s', a')]   (6)

By iterating the Bellman equations, we can easily obtain the value functions for a policy, and to compare policies, we define that the policy π is better than π′ if the function V according to π is no less than the function V according to π′ for all states, that is V_π(s) ≥ V_π′(s), ∀s. It has been proven in [89] that the existence of the optimal policy π∗ is guaranteed in the MDP problem, where V∗(s) = max_π V_π(s) and Q∗(s, a) = max_π Q_π(s, a). These two functions are defined as the optimal function V and the optimal function Q. We can obtain the optimal policy π∗ by maximizing over Q∗(s, a), which can be written as:

\pi^*(a|s) = \arg\max_a Q^*(s, a)   (7)

To improve the policy, we apply the Bellman optimality equations [89] to update value functions by taking the action with maximum value instead of trying all possible actions. To facilitate the optimization of the policy, many RL techniques are proposed from different perspectives, and Figure 2 provides a diagram outlining the broad categorization of these techniques, illustrating how these techniques can be applied.
[Figure 2: Broad categorization of RL techniques: basic techniques (model-based methods such as dynamic programming and AlphaZero; model-free value-based methods such as SARSA (on-policy), Q-learning (off-policy), and Deep Q-learning; policy-based methods such as Policy Gradient, Actor-Critic, and DDPG) and advanced techniques (data sampling: data utilization and data correlation; model efficiency: policy exploration/representation/optimization, reward (multiple or unknown reward), and value representation).]
2.2 Basic Techniques
Based on the representation of MDP elements, basic techniques can be categorized into two classes: model-based method and model-free method. The main difference is whether the agent has access to model the environment, i.e. whether the agent knows the transition function and the reward function.
These two functions are already known in the modelbased method where Dynamic Programming (DP)[6] and Alpha-Zero [86] are the classical methods which have achieved signi\ufb01cant results in numerous applications. In these methods, agents are allowed to think ahead and plan future actions with known e\ufb00ects on the environment. Besides, an agent can learn the optimal policy from the planned experience which results in high sample e\ufb03ciency. 6 \fIn many RL problems, the reward and the transition function are typically unknown due to the complicated environment and its intricate inherent mechanism. For example, as illustrated in Figure 1, we are unable to obtain the actual latency as the reward in the joint query optimization example. Besides, in the stochastic job scheduling problem [59], it is also impossible to directly model the transition function because of the randomness of the job arrivals in the practical scenarios. Hence, in these problems, agents usually employ model-free methods that can purely learn the policy from the experience gained during the interaction with the environment. Model-free methods can mainly be classi\ufb01ed into two categories, namely the value-based method and the policy-based method. In the value-based method, the RL algorithm learns the optimal policy by maximizing the value functions. There are two main approaches in estimating the value functions that are Mento-Carlo (MC) methods and Temporal di\ufb00erence (TD) methods. MC methods calculate the V(s) by directly applying its de\ufb01nition, that is Equation 1. MC methods can directly update the value functions once they get a new trajectory \u03c4 as follows: V\u03c0(s) \u2190V\u03c0(s) + \u03b1(G\u03c4\u223c\u03c0(\u03c4|s0 = s) \u2212V\u03c0(s)) (8) where \u03b1 \u2208[0, 1) denotes the learning rate which controls the rate of updating the policy with new experiences. However, it has an obvious drawback that a complete trajectory requires the agent to reach a terminal state, while it is not practical in some applications, such as online systems. Di\ufb00erent from MC methods, the TD method builds on the recursive relationship of value functions, and hence, can learn from the incomplete trajectory. Mathematically, the update of TD methods can be written as: V\u03c0(s) \u2190V\u03c0(s) + \u03b1(R(s, a) + \u03b3V\u03c0(s\u2032) \u2212V\u03c0(s)) (9) However, there is bias when estimating the function V with TD methods because they learn from the recursive relationship. To reduce the bias, TD methods can extend the length of the incomplete trajectories and update the function V by thinking more steps ahead, which is called n-steps TD methods. As n grows to the length of whole trajectories, MC methods can be regarded as a special case of TD methods where function V is an unbiased estimate. On the other side of the coin, as the length n increases, the variance of the trajectory also increases. In addition to the above consideration, TD-based methods are more e\ufb03cient and require less storage and computation, thus they are more popular among RL algorithms. In value-based methods, we can obtain the optimal policy by acting greedily via Equation 7. The update of the function Q with TD methods is similar to the update of the function V, and is as follows: Q\u03c0(s, a) \u2190Q\u03c0(s, a) + \u03b1(R(s, a) + \u03b3Q\u03c0\u2032(s\u2032, a\u2032) \u2212Q\u03c0(s, a)) where the agent follows the policy \u03c0 to take actions and follows the policy \u03c0\u2032 to maximize the function Q. 
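The tabular TD update of the Q function just described can be written in a few lines. This is an illustrative sketch only; Q is assumed to be a dictionary of state-action values, and a_next is whatever action the target policy π′ picks in the next state, which is what distinguishes the variants discussed in the surrounding text.

from collections import defaultdict

def td_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    # Move Q(s, a) a step of size alpha toward the TD target r + gamma * Q(s_next, a_next)
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# Q can simply be a defaultdict(float) mapping (state, action) pairs to value estimates.
Q = defaultdict(float)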
If the two policies are the same, that is \u03c0\u2032 = \u03c0, we call such RL algorithms the on-policy methods where the SARSA[77] is the representative method. In addition, other policies can also be used in \u03c0\u2032. For example, in Q-learning[109], the agent applies the greedy policy and updates the function Q with the maximum value in its successor. Its update formula can be written as: Q\u03c0(s, a) \u2190Q\u03c0(s, a)+\u03b1(R(s, a)+\u03b3 maxa\u2032 Q\u03c0(s\u2032, a\u2032)\u2212Q\u03c0(s, a)). Both value-based methods can work well without the model of the environment, and Qlearning directly learns the optimal policy, whilst SARSA learns a near-optimal policy during exploring. Theoretically, Q-learning should converge quicker than SARSA, but its generated samples have a high variance which may su\ufb00er from the problems of converging. In RL, storage and computation costs are very high when there is a huge number of states or actions. To overcome this problem, DRL, as a branch of RL, adopts Deep Neural 7 \fNetwork (DNN) to replace tabular representations with neural networks. For function V, DNN takes the state s as input and outputs its state value V\u03b8(s) \u2248V\u03c0(s) where the \u03b8 denotes the parameter in the DNN. When comes to function Q, It takes the combination of the state s and the action a as input and outputs the value of the state-action pair Q\u03b8(s, a) \u2248Q\u03c0(s, a), As for the neural networks, we can optimize them by applying the techniques that are widely used in deep learning (e.g. gradient descent). Deep Q-learning network (DQN) [65], as a representative method in DRL, combines the DNN with Qlearning and its loss function is as follows: Lw = ED[(R(s, a) + \u03b3 max a\u2217\u2208A Qw(s\u2032, a\u2217) \u2212Qw(s, a))2] (10) where D denotes the experience replay which accumulates the generated samples and can stabilize the training process. Policy-based methods are another branch of the model-free RL algorithm that have a clear representation of the policy \u03c0(a|s), and they can tackle several challenges that are encountered in value-based methods. For example, when the action space is continuous, value-based methods need to discretize the action which could increase the dimensionality of the problem, and memory and computation consumption. Value-based methods learn a deterministic policy that generates the action given a state through an optimal function Q (i.e. \u03c0(s) = a). However, for policy-based methods, they can learn a stochastic policy (i.e. \u03c0\u03b8(ai|s) = pi, P i pi = 1) as the optimal policy, where pi denotes the probability of taking the action ai given a state s, and \u03b8 denotes the parameters where neural networks can be used to approximate the policy. Policy Gradient [90] method is one of the main policybased methods which can tackle the aforementioned challenges. Its goal is to optimize the parameters \u03b8 by using the gradient ascent method, and the target can be denoted in a generalized expression: \u2207\u03b8J(\u03b8) = E\u03c4\u223c\u03c0\u03b8[R(\u03c4)\u2207\u03c0\u03b8 log\u03c0\u03b8(a|s)] (11) The speci\ufb01c proof process can refer to [89]. Sampling via the MC methods, we will get the entire trajectories to improve the policy for the policy-based methods. After training, the action with higher rewards in expectation will have a higher probability to be chosen and vice versa. 
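A minimal PyTorch-style sketch of the Monte-Carlo policy-gradient surrogate implied by the gradient expression above is given below; log_probs are the log π_θ(a_t|s_t) values recorded while sampling trajectories and returns are the corresponding trajectory returns G(τ). It is illustrative only, and in practice a baseline is usually subtracted from the returns to reduce variance.

import torch

def reinforce_loss(log_probs: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    # Negative of E[G(tau) * log pi_theta(a_t | s_t)]; calling .backward() on this
    # loss yields the gradient-ascent direction of the policy-gradient objective.
    return -(log_probs * returns).mean()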
As for the continuous action, The optimal policy learned from the Policy Gradient is stochastic which still needs to be sampled to get the action. However, the stochastic policy still requires lots of samples to train the model when the search space is huge. Deterministic Policy Gradient (DPG) [87], as an extension of the Policy Gradient, overcomes this problem by using a stochastic policy to perform sampling while applying deterministic policy to output the action which demands relatively fewer samples. Both value-based methods and policy-based methods have their strengths and weaknesses, but they are not contradictory to each other. Actor-Critic (AC) method, as the integration of both methods, divides the model into two parts: actor and critic. The actor part selects the action based on the parameterized policy and the critic part concentrates on evaluating the value functions. Di\ufb00erent from previous approaches, AC evaluates the advantage function A\u03c0(s, a) = Q\u03c0(s, a) \u2212V\u03c0(s) which re\ufb02ects the relative advantage of a certain action a to the average value of all actions. The introduction of the value functions also allows AC to update by step through the TD method, and the incorporation of the policy-based methods makes AC be suitable for continuous actions. However, the combination of the 8 \ftwo methods also makes the AC method more di\ufb03cult to converge. Moreover, Deep Deterministic Policy Gradient (DDPG) [49], as an extension of the AC, absorbs the advanced techniques from the DQN and the DPG which enables DDPG to learn the policy more e\ufb03ciently. In all the above-mentioned methods, there always exists a trade-o\ufb00between exploring the unknown situation and exploiting with learned knowledge. On the one hand, exploiting the learned knowledge can help the model converge quicker, but it always leads the model into a local optimal rather than a globally optimal. On the other hand, exploring unknown situations can \ufb01nd some new and better solutions, but always being in the exploring process causes the model hard to converge. To balance these two processes, researchers have been devoting much energy to \ufb01nding a good heuristics strategy, such as \u03f5 \u2212greedy strategy, Boltzmann exploration (Softmax exploration), upper con\ufb01dence bound (UCB) algorithm [2], Thompson sampling [92], and so on. Here, we consider the \u03f5 \u2212greedy, a widely used exploration strategy, as an example. \u03f5\u2212greedy typically selects the action with the maximal Q value to exploit the learned experience while occasionally selecting an action evenly at random to explore unknown cases. \u03f5 \u2212greedy exploration strategy with m actions can be denoted as follow: \u03c0(a|s) = \u001a \u03f5/m + (1 \u2212\u03f5) a\u2217= arg maxa\u2208A Q(s, a), \u03f5/m a \u0338= a\u2217. (12) \u03f5 \u2208[0, 1) is an exploration factor. The agent is more likely to select the action at random when the \u03f5 is closer to 1, and the \u03f5 will be continuously reduced during the training process. 2.3 Advanced Techniques This section mainly discusses some advanced techniques in RL which focus on e\ufb03ciently using the limited samples and building sophisticated model structures for better representation and optimization. According to the di\ufb00erent improvements, they can be broadly classi\ufb01ed into two parts: data sampling and model e\ufb03ciency. 
2.3.1 Data Sampling Data sampling is one of the most important concerns in training the DRL in data processing and analytics. In most applications, the sample generation process costs a great amount of time and computation resources. For example, a sample may refer to an execution run for workload and repartitioning for the database, which can take about 40 minutes[31]. Hence, to train the model with limited samples, we need to increase data utilization and reduce data correlation. Data utilization: Most DRL algorithms train the optimal policy and sample data at the same time. Instead of dropping samples after being trained, experience replay[50] accumulates the samples in a big table where samples are randomly selected during the learning phase. With this mechanism, samples will have a higher utilization rate and a lower variance, and hence, it can stabilize the training process and accelerate the training convergence. Samples after several iterations may di\ufb00er from the current policy, and hence, Growing-batch [40] can continuously refresh the table and replace these outdated samples. In addition, samples that are far away from the current policy should be paid more attention and Prioritized Experience Replay[79] uses TD error as the priority to measure the sample importance, and hence, focus more on learning the samples with high errors. In a nutshell, 9 \fwith the experience replay, DRL cannot only stable the learning phase but also e\ufb03ciently optimize the policy with fewer samples. Data correlation: Strong correlation of training data is another concern that may lead the agent to learn a sub-optimal solution instead of the globally optimal one. Apart from the experience replay, the mechanism of the distributed environments is another research direction to alleviate this problem. For example, the asynchronous advantage actor-critic (A3C) [64] and Distributed PPO (DPPO) [27] apply multi-threads to build multiple individual environments where multiple agents take actions in parallel, and the update is calculated periodically and separately which can accelerate the sampling process and reduce the data correlation. 2.3.2 Model E\ufb03ciency RL model with better e\ufb03ciency is the major driving force of the development of RL, and there are many researchers improving it from three major aspects, namely policy, reward function, and value function. Policy: The policy-related techniques focus on stably and e\ufb00ectively learning a comprehensive policy, and the advanced techniques to e\ufb03ciently learn the policy can be classi\ufb01ed into three parts in detail, which are policy exploration, policy representation, and policy optimization. a) Policy exploration: Its target is to explore as many actions as possible during the training process in case the policy will be trapped into the local optimal. For example, entropy regularisation [64] adds the entropy of the actions\u2019 probabilities into the loss item which can su\ufb03ciently explore the actions. Besides, adding noise to the action is another research direction to increase the randomness into policy exploration. The DDPG applies an Ornstein\u2013Uhlenbeck process [96] to generate temporal noise N which are directly injected into policy. Noisy-Net [22] incorporates the noise into the parameters of neural networks which is easy to implement, and it shows a better performance than the \u03f5 \u2212greedy and entropy regularisation methods. Further, Plappert et al. 
[75] investigate an e\ufb00ective way to combine the parameter space noise to enrich the exploratory behaviors which can bene\ufb01t both on-policy methods and o\ufb00-policy methods. b) Policy representation: The states in some RL problems are in a huge dimension which causes challenges during the training. To approximate a better policy, a branch of DRL models improve the policy representation by absorbing convolutional neural networks (CNN) into DQN to analyze the data, such as Dueling DQN [108], DRQN [26], and so on. In addition, DRQN also incorporates the LSTM structure to increase the capacity of the policy which is able to capture the temporal information, such as speed, direction. c) Policy optimization: The update of the value functions following the Equation 5 and 6 tends to overestimate the value functions and introduce a bias because they learn estimates from the estimates. Mnih et al.[66] separate the two estimation process by using two same Q-networks which can reduce the correlation of two estimation processes and hence, stabilize the course of training. However, the action with the maximum Q-value may di\ufb00er between two Q-networks which will be hard for convergence. and Double DQN (DDQN) [99] alleviate the issue by disaggregating the step of selecting the action and calculating the max Q-value. When we apply the policy-based RL methods, the learning rate of the policy plays an essential role in achieving superior performance. A higher learning rate can always maxi10 \fmize the improvement on a policy by step, but it also causes the instability of the learning phase. Hence, The Trust Region Policy Optimization (TRPO) [80] builds constraints on the old policy and new policy via KL divergence to control the change of the policy in an acceptable range. With this constraint, TRPO can iteratively optimize the policy via a surrogate objective function which can monotonically improve policies. However, the design of the KL constraint makes it hard to be trained, and Proximal Policy Optimization (PPO) [81] simpli\ufb01es the constraint through two ways: adding it into the objective function, designing a clipping function to control the update rate. Empirically, PPO methods are much simpler to implement and are able to perform at least as well as TRPO. Reward: Reward function as one of the key components in the MDP plays an essential role in the RL. In some speci\ufb01c problems, the agent has to achieve multiple goals which may have some relationships. For example, the robot can only get out through the door only if it has already found the key. To tackle this challenge, Hierarchical DQN [38] proposes two levels of hierarchical RL (HRL) models to repeatedly select a new goal and achieve the chosen goal. However, there is a limitation that the goal needs to be manually prede\ufb01ned which may be unknown or unmeasurable in some environments, such as the market and the e\ufb00ect of a drug. To overcome it, Inverse RL (IRL) [68] learns the rewards function from the given experts\u2019 demonstrations (i.e. the handcraft trajectories), but the agent in IRL can only prioritize the entire trajectories over others. It will cause a shift when the agent comes to a state that never appears before, and Generative Adversarial Imitation Learning (GAIL) [32], as an imitation learning algorithm, applies adversarial training methods to generate fake samples and is able to learn the expert\u2019s policy explicitly and directly. 
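Returning to the policy-optimization techniques above, the clipping idea behind PPO can be made concrete with a short sketch of the clipped surrogate loss. This is an illustrative PyTorch-style form rather than code from any surveyed system, and the clip range of 0.2 is a typical default rather than a value taken from the text.

import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio of the new policy to the old one
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clipping the ratio to [1 - eps, 1 + eps] keeps a single update from moving
    # the policy too far, playing the role of TRPO's KL constraint
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()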
Value: As we have mentioned earlier, the tabular representation of the value functions has several limitations which can be alleviated via DRL. Di\ufb00erent from directly taking the state-action pair as the input to calculate the Q-function, Dueling DQN [108] estimates its value by approximating two separate parts that are the state-values and the advantage values, and hence, can distinguish whether the value is brought by the state or the action. The aforementioned advanced algorithms and techniques improve and enhance the DRL from di\ufb00erent perspectives, which makes DRL-based algorithms be a promising way to improve data processing and analytics. We observe that problems with the following characteristics may be amenable to DRL-based optimization. First, problems are incredibly complex and di\ufb03cult. The system and application involve a complicated operational environment (e.g., large-scale, high-dimensional states) and internal implementation mechanisms, which is hard to construct a white-box model accurately. DRL can process complex data and learn from experience generated from interacting, which is naturally suitable for data processing and analytics where many kinds of data exist and are processed frequently. Second, the optimization objectives can be represented and calculated easily as the reward because the RL agent improves itself towards maximizing the rewards and rewards could be computed a lot of times during training. Third, the environment can be well described as MDP. DRL has been shown to solve MDP with theoretical guarantees and empirical results. Thus, problems involving sequential decision making such as planning, scheduling, structure generation (e.g., tree, graph), and searching could be expressed as MDP and a good \ufb01t for DRL. Fourth, collecting required labels of data massively is hard. Compared to supervised learning, DRL can utilize data e\ufb03ciently to gain good performance. 11 \f3 Data System Optimizations DRL learns knowledge about the system by interacting with it and optimizes the system. In this section, we focus on several fundamental aspects with regards to system optimization in data processing and analytics including data organization, scheduling, tuning, indexing, query optimization, and cache management. We discuss how each problem is formulated in MDP by de\ufb01ning three key elements (action, state, and reward) in the system and solved by DRL. Generally, the states are de\ufb01ned by some key characteristics of the system. The actions are possible decisions (e.g., system con\ufb01guration), that a\ufb00ect the system performance and the reward is calculated based on the performance metrics (e.g. throughput, latency). Table 1 presents a summary of representative works and the estimated dimension ranges on the state and action space of each work are added as signals on the DRL training di\ufb03culty. As a comparison, OpenAI Five[8], a Dota-playing AI, observes the state as 20,000 numbers representing useful game information and about 1,000 valid actions (like ordering a hero to move to a location) for per hero. Dota is a real-time strategy game between two teams of \ufb01ve players where each player controls a character called a \u201chero\u201d. 3.1 Data Organization 3.1.1 Data Partitioning E\ufb00ective data partitioning strategy is essential to accelerate data processing and analytics by skipping irrelevant data for a given query. 
It is challenging as many factors need to be considered, including the workload and data characteristics, hardware pro\ufb01les, and system implementation. In data analytics systems, data is split into blocks in main memory or secondary storage, which are accessed by relevant queries. A query may fetch many blocks redundantly and, therefore, an e\ufb00ective block layout avoids reading unnecessary data and reduces the number of block accesses, thereby improving the system performance. Yang et al.[116] propose a framework called the qd-tree that partitions data into blocks using DRL over the analytical workload. The qd-tree resembles the classic k-d tree and describes the partition of multidimensional data space where each internal node splits data using a particular predicate and represents a subspace. The data in the leaf node is assigned to the same block. In the MDP, each state is a node representing the subspace of the whole data and featured as the concatenation of range and category predicates. After the agent takes an action to generate two child nodes, two new states will be produced and explored later. The available action set is the predicates parsed from workload queries. The reward is computed by the normalized number of skipped blocks over all queries. They do not execute queries and a sampling technique is used to estimate the reward e\ufb03ciently. The formulation of using DRL to learn a tree is similar to NeuroCuts[47] that learns a tree for packet classi\ufb01cation. However, the qd-tree may not support a complex workload containing userde\ufb01ned functions (UDFs) queries. Horizontal partitioning in the database chooses attributes of large tables and splits them across multiple machines to improve the performance of analytical workloads. The design relies on either the experience of database administrators (DBAs) or cost models that are often inaccurate[42] to predict the runtime for di\ufb00erent partitions. Data collection is too challenging and costly to train the accurate supervised learning model in the cloud environment. Hilprecht et al.[31] learn to partition using DRL on analytical workloads in cloud databases, on the fact that DRL is able to e\ufb03ciently navigate the partition search and 12 \frequires less training data. In the MDP, the state consists of two parts. The database part encodes whether a table is replicated, an attribute is used for partitioning, and which tables are co-partitioned. The workload part incorporates normalized frequencies of representative queries. Supported actions are: partitioning a table using an attribute, replicating a table, and changing tables co-partition. The reward is the negative of the runtime for the workload. One challenge is that the cost of database partitioning is high during training. To alleviate the problem, the agent is trained in the simulation environment and is further re\ufb01ned in the real environment by estimating the rewards using sampling. One limitation is that it may not support new queries well because only the frequency features of queries are considered. Durand et al. in [17, 18] utilize DRL to improve vertical partitioning that optimizes the physical table layout. They show that the DQN algorithm can easily work for a single workload with one table but is hard to generalize to random workloads. For UDFs analytics workloads on unstructured data, partitioning is more challenging where UDFs could express complex computations and functional dependency is unavailable in the unstructured data. 
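Across these partitioning formulations, the reward ultimately measures how much data a given block layout lets the workload skip. The sketch below gives one simplified way such a reward could be estimated from block min/max metadata without executing queries; the dictionary-based layout and range-predicate format are assumptions for illustration and do not come from the cited systems.

def skipped_block_fraction(blocks, queries):
    # blocks : list of dicts {column: (min_val, max_val)} describing each block
    # queries: list of dicts {column: (lo, hi)} giving range predicates
    skipped = 0
    for q in queries:
        for blk in blocks:
            # A block can be skipped if, on some predicate column, its value
            # range does not overlap the query's range
            if any(blk[col][1] < lo or blk[col][0] > hi
                   for col, (lo, hi) in q.items() if col in blk):
                skipped += 1
    return skipped / (len(queries) * len(blocks))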
Zou et al.[127] propose the Lachesis system to provide automatic partitioning for non-relational data analytics. Lachesis translates UDFs to graph-based intermediate representations (IR) and identi\ufb01es partition candidates based on the subgraph of IR as a two-terminal graph. Lachesis adopts DRL to learn to choose the optimal candidate. The state incorporates features for each partition extracted from historical work\ufb02ows: frequency, the execution interval, time of the most recent run, complexity, selectivity, key distribution, number, and size of co-partition. In addition, the state also incorporates other features such as hardware con\ufb01gurations. The action is to select one partition candidate. The reward is the throughput speedup compared to the average throughput of the historical executions of applications. To reduce the training time, the reward is derived from historical latency statistics without partitioning the data when running the applications. One limitation is that Lachesis largely depends on historical statistics to design the state and calculate the reward, which could lead to poor performance when the statistics are inadequate. 3.1.2 Data Compression Data compression is widely employed to save storage space. The e\ufb00ectiveness of a compression scheme however relies on the data types and patterns. In time-series data, the pattern can change over time and a \ufb01xed compression scheme may not work well for the entire duration. Yu et al.[120] propose a two-level compression framework, where a scheme space is constructed by extracting global features at the top level and a compression schema is selected for each point at the bottom level. The proposed AMMMO framework incorporates compression primitives and the control parameters, which de\ufb01ne the compression scheme space. Due to the fact that the enumeration is computationally infeasible, the framework proposes to adopt DRL to \ufb01nd the compression scheme. The agent takes a block that consists of 32 data points with the compressed header and data segment, timestamps, and metrics value as the state. The action is to select a scheme from compression scheme space and then the compression ratio is computed as the reward. The limitation is that the method may not work for other data types like images and videos. 3.2 Scheduling Scheduling is a critical component in data processing and analytics systems to ensure that resources are well utilized. Job scheduling in a distributed computing cluster faces many challenging factors such as workload (e.g., job dependencies, sizes, priority), data 13 \fTable 1: Representative Works using DRL for Data System Optimizations. D(X) denotes the approximate dimension of X space. 
Domain | Work | Algorithm | D(State) | D(Action) | DRL-based Approach | Open Source
Data organization | Analytical system data partition [116] | PPO | 10-100 | 100-1000 | Exploit workload patterns and generate the tree | NO
Data organization | Database horizontal partition [31] | DQN | 100 | 10 | Navigate the partition search efficiently | NO
Data organization | UDF-centric workload data partition [127] | A3C | 10 | 1-10 | Exploit the features of partition and search | YES
Data organization | Time series data compression [120] | PG | 100 | 10 | Search parameters interactively | NO
Scheduling | Distributed job processing [58] | PG | 100 | 10 | Exploit the job dependencies and learn schedule decision | YES
Scheduling | Distributed stream data [45] | DDPG | 100 | 10-100 | Learn schedule decision | NO
Tuning | Database configuration [123] [44] | DDPG | 100 | 10 | Search configuration parameters interactively | YES
Index | Index selection [84] | CEM | 100 | 10 | Search the index interactively | NO
Index | R-tree construction [24] | DQN | 10-100 | 10 | Learn to generate the tree | NO
Query Optimization | Join order selection [61, 37, 119, 29] | PG, DQN, ... | 10-100 | 1-10 | Learn to decide the join order | Only [29]
Cache Management | View Materialization [121] | DQN | 100 | 10 | Model the problem as IIP and solve | NO
locality, and hardware characteristics. Existing algorithms using general heuristics such as shortest-job-first do not utilize these factors well and fail to yield top performance. To this end, Mao et al.[58] propose Decima to learn to schedule jobs with dependent stages using DRL for data processing clusters and improve the job completion time. In data processing systems such as Hive[93], Pig[70], and Spark-SQL[1], jobs could have up to hundreds of stages and many stages run in parallel; they are represented as directed acyclic graphs (DAGs) where the nodes are the execution stages and each edge represents a dependency. To handle parallelism and dependencies in job DAGs, Decima first applies a graph neural network (GNN) to extract features as the state instead of manually designing them, while achieving scalability. Three types of feature embeddings are generated. The node embedding captures information about the node and its children, including the number of remaining tasks, busy and available executors, duration, and locality of executors. The job embedding aggregates all node embeddings in the job, and the cluster embedding combines job embeddings. To balance a possibly large action space and long action sequences, the action determines the job stage to be scheduled next and the parallelism limit of executors. The reward is based on the average job completion time. To train effectively in a job streaming environment, Decima gradually increases the length of training jobs to conduct curriculum learning[7]. The variance reduction technique[59] is applied to handle stochastic job arrivals for robustness. However, we note that Decima is non-preemptive and does not re-schedule for higher priority jobs. In distributed stream data processing, streams of continuous data are processed at scale in a real-time manner. The scheduling algorithm assigns workers to process data, where each worker uses many threads to process data tuples, and aims to minimize the average data tuple processing time. Li et al.[45] design a scheduling algorithm using DRL for distributed stream data processing, which learns to assign tuples to work threads. The state consists of the scheduling plan (e.g., the current assignment of workers) and the workload information (e.g., tuple arrival rate). The action is to assign threads to machines. The reward is the negative tuple processing time on average.
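A minimal sketch of the state encoding and reward just described is given below. It is purely illustrative; the flat assignment vector and per-source arrival rates are assumed representations, not the encoding used in [45].

import numpy as np

def stream_sched_state(assignment, arrival_rates):
    # State: current thread-to-machine assignment concatenated with workload info
    return np.concatenate([np.asarray(assignment, dtype=float),
                           np.asarray(arrival_rates, dtype=float)])

def stream_sched_reward(processing_times):
    # Reward: negative average tuple processing time over the last interval
    return -float(np.mean(processing_times))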
The work shows that DQN does not work well because the action space is large and applies DDPG to train the actor-critic based agent instead. To \ufb01nd a good action, the proposed method looks for k nearest neighbors of the action that the actor network outputs and selects the neighbor with the highest value that the critic network outputs. The algorithm is implemented on Apache Storm and evaluated with representative applications: log stream processing, continuous queries, and word count. Many works have been recently proposed to improve scheduling using DRL[122, 35]. Query scheduling determines the execution order of queries, which has a great in\ufb02uence on query performance and resource utilization in the database system. SmartQueue[122] improves query scheduling by leveraging overlapping data access among queries and learns to improve cache hits using DRL. In addition, Tim et al.[35] design a scheduling system in SageDB using RL techniques. Other works using RL for scheduling include Bayesian RL for scheduling in heterogeneous clusters[3], operation scheduling in devices[23], application container scheduling in clusters[102], etc. 3.3 Tuning Tuning the con\ufb01guration of data processing and analytic systems plays a key role to improve system performance. The task is challenging because up to hundreds of parameters and complex relations between them could exist. Furthermore, other factors such as hardware and workload also impact the performance. Existing works often employ search-based or supervised learning methods. The former takes much time to get an acceptable con\ufb01guration and the latter such as OtterTune[98] needs large high-quality data that is non-trivial to obtain in practice. Zhang et al.[123] design a cloud database tuning system CDBTune using DRL to \ufb01nd the best parameter in high-dimensional con\ufb01guration space. The CDBTune formulates MDP as follows. The state is represented by the internal metrics (e.g., bu\ufb00er size, pages read). The action is to increase or decrease the knob values. The reward is the performance di\ufb00erence between two states, which is calculated using throughput and latency. CDBTune takes several hours on o\ufb04ine training in simulation and online training in the real environment. Compared to OtterTune, CDBTune eases the burden of collecting large training data sets. In the experiments, CDBTune is shown to outperform DBA experts and OtterTune and improve tuning e\ufb03ciency under 6 di\ufb00erent workloads on four databases. One limitation of the approach is that the workload information is ignored and thus it may not perform well when the query workload is changed. To address the issue, Li et al.[44] propose QTune that considers query information to tune the database using DRL. First, Qtune extracts features from SQL query including types (e.g., insert, delete), tables, and operation (e.g., scan, hash join) costs estimated by the database engine. The columns attributes and operations like selection conditions in the query are ignored. Subsequently, Qtune trains a DNN model to predict the di\ufb00erence of statistics (e.g., updated tuples, the number of committed transactions) in the state after executing the queries in the workload and updates the state using it. The action 15 \fand reward design are similar to CDBTune. Additionally, QTune supports three levels of tuning granularity for balancing throughput and latency. For query-level, QTune inputs query vector and tries to \ufb01nd good knobs for each query. 
For workload-level, vectors for all queries are merged and used. For cluster-level, QTune employs a clustering method based on deep learning to classify queries and merge queries into clusters. One drawback of QTune is that the query featurization could lose key information such as query attributes (i.e., columns) and hurt the performance especially when the cost estimation is inaccurate. The prediction model for state changes is trained alone and needs accurate training data. An end-to-end training framework is therefore essential and a good direction to undertake. 3.4 Indexing 3.4.1 Database Index Selection Database index selection considers which attributes to create an index to maximize query performance. Sharma et al.[84] show how DRL can be used to recommend an index based on a given workload. The state encodes selectivity values for workload queries and columns in the database schema and current column indexes. The action is to create an index on a column. The reward is the improvement compared to the baseline without indexes. The experiments show that the approach can perform as well or better as having indexes on all columns. Sadri et al.[78] utilize DRL to select the index for a cluster database where both query processing and load balancing are considered. Welborn et al.[110] optimize the action space design by introducing task-speci\ufb01c knowledge for index selection tasks in the database. However, these works only consider the situation where single-column indexes are built. Lan et al.[39] propose both single-attribute and multi-attribute indexes selection using DRL. Five rules are proposed to reduce the action and state space, which help the agent learn e\ufb00ective strategy easier. The method uses what-if caller[9] to get the cost of queries under speci\ufb01c index con\ufb01gurations without building indexes physically. These works conduct basic experiments with small and simple datasets. Extensive and large-scale experiments using real datasets are therefore needed to benchmark these methods to ensure that they can scale well. 3.4.2 Index Structure Construction The learned index is proposed recently as an alternative index to replace the B+-Tree and bloom \ufb01lter by viewing indexes as models and using deep learning models to act as indexes[36]. DRL can enhance the traditional indexes instead of replacing them. Hierarchical structures such as the B+-tree and R-tree are important indexing mechanisms to locate data of interest e\ufb03ciently without scanning a large portion of the database. Compared to the single dimensional counterpart, the R-tree is more complex to optimize due to bounding box e\ufb03ciency and multi-path traversals. Earlier conventional approaches use heuristics to determine these two operations (i.e. choosing the insertion subtree and splitting an over\ufb02owing node) during the construction of the R-tree[71]. Gu et al.[24] propose to use DRL to replace heuristics to construct the R-tree and propose the RLR-tree. The approach models two operations ChooseSubtree and Split as two MDPs respectively and combines them to generate an R-Tree. For ChooseSubtree, the state is represented as the concatenation of the four features (i.e., area, perimeter, overlap, occupancy rate) of each selected child node. More features are evaluated but do not improve the performance in the reported experiments. The action is to select a node to insert from top-k child nodes in terms of the increase of area. The reward is the performance improvement from the 16 \fRLR-tree. 
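To make the ChooseSubtree featurization just described more concrete, the sketch below builds a state vector from per-child geometric features of the top-k candidate children. It is an illustrative simplification rather than the RLR-tree implementation: rectangles are assumed to be (x1, y1, x2, y2) tuples, and the area-increase term stands in for the paper's overlap feature.

def rect_area(r):
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def rect_perimeter(r):
    return 2.0 * (max(0.0, r[2] - r[0]) + max(0.0, r[3] - r[1]))

def enlarge(r, e):
    # Smallest rectangle covering both the child's rectangle r and the new entry e
    return (min(r[0], e[0]), min(r[1], e[1]), max(r[2], e[2]), max(r[3], e[3]))

def choose_subtree_state(children, entry, k=3):
    # children: list of (rect, occupancy) pairs; entry: rectangle to insert
    ranked = sorted(children,
                    key=lambda c: rect_area(enlarge(c[0], entry)) - rect_area(c[0]))[:k]
    state = []
    for rect, occupancy in ranked:
        grown = enlarge(rect, entry)
        state += [rect_area(grown), rect_perimeter(grown),
                  rect_area(grown) - rect_area(rect),   # simplified stand-in for overlap
                  occupancy]
    return state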
For Split MDP, the state is the areas and perimeters of the two nodes created by all top-k splits in the ascending order of total area. The action is to choose one split rule from k rules and the reward is similar to that of ChooseSubtree. The two agents are trained alternately. As expected, the optimizations render the RLR-tree improved performance in range and KNN queries. Graphs can be used as e\ufb00ective indexes to accelerate nearest neighbors search[55, 15]. Existing graph construction methods generally propose di\ufb00erent rules to generate graphs, which cannot provide adaptivity for di\ufb00erent workloads[104]. Baranchuk et al.[5] employ DRL to optimize the graph for nearest neighbors search. The approach learns the probabilities of edges in the graph and tries to maximize the search e\ufb03ciency. It considers the initial graph and the search algorithm as the state. The action is to keep an edge or not. The reward is the performance for search. It chooses the TRPO[80] algorithm to train. The reported experimental results show that the agent can re\ufb01ne state-of-the-art graphs and achieve better performance. However, this approach does not learn to explore and add new edges to the initial graph that may a\ufb00ect the performance. Searching and constructing a new index structure is another line of interesting research [33]. Inspired by Neural Architecture Search (NAS)[126], Wu et al.[112] propose an RNN-based neural index search (NIS) framework that employs DRL to search the index structures and parameters given the workload. NIS can generate tree-like index structures layer by layer via formalizing abstract ordered blocks and unordered blocks, which can provide a well-designed search space. The keys in the ordered block are sorted in ascending order, and the skip list or B+-Tree can be used. The keys in the unordered block are partitioned using customized functions and the hash bucket can be used. Overall, the whole learning process is similar to that of NAS. 3.5 Query Optimization Query optimization aims to \ufb01nd the most e\ufb03cient way to execute queries in database management systems. There are many di\ufb00erent plans to access the query data that can have a large processing time variance from seconds to hours. The performance of a query plan is determined mostly by the table join orders. Traditionally, query optimizers use certain heuristics combined with dynamic programming to enumerate possible e\ufb03cient execution plans and evaluate them using cost models that could produce large errors[42]. Marcus et al.[61] propose Rejoin that applies DRL to learn to select better join orders utilizing past experience. The state encodes join tree structure and join predicates. The action is to combine two subtrees, where each subtree represents an input relation to join. The reward is assigned based on the cost model in the optimizer. The experiments show that ReJOIN can match or outperform the optimizer in PostgreSQL. Compared to ReJoin, DQ[37] presents an extensible featurization scheme for state representation and improves the training e\ufb03ciency using the DQN[65] algorithm. Heitz et al.[29] compare di\ufb00erent RL algorithms including DQN[65], DDQN[49], and PPO[81] for join order optimization and use a symmetric matrix to represent the state instead of vector. 
Yu et al.[119] introduce a graph neural network (GNN) with DRL for join order selection that replaces \ufb01xed-length hand-tuned vector in Rejoin[61] and DQ[37] with learned scalable GNN representation and better captures and distinguishes the join tree structure information. These works mainly di\ufb00er in encoding what information and how to encode them. Instead of learning from past query executions, Trummer et al.[94] propose SkinnerDB to learn from the current query execution status to optimize the remaining execution of a 17 \fTable 2: Methods of query optimization. Method Techniques Training Workload Adaptivity Rejoin[61], DQ[37] learn from execution experience High Low SkinnerDB [94] learn from current execution status Medium Medium Bao[60] learn to choose existing optimizers Low High query using RL. Speci\ufb01cally, SkinnerDB breaks the query execution into many small time intervals (e.g., tens to thousands of slices per second) and processes the query adaptively. At the beginning of each time interval, the RL agent chooses the join order and measures the execution progress. SkinnerDB adopts a similar adaptive query processing strategy in Eddies[95] and uses the UCT algorithm[34], which provides formal guarantees that the di\ufb00erence is bounded between the rewards obtained by the agent and those by optimal choices. The reward is calculated by the progress for the current interval. A tailored execution engine is designed to fully exploit the learning strategy with tuple representations and specialized multi-way join algorithms. SkinnerDB o\ufb00ers several advantages. First, it is inherently robust to query distribution changes because its execution only depends on the current query. Second, it relies on less assumption and information (e.g., cardinality models) than traditional optimizers and thus is more suitable for the complicated environment where cardinality is hard to estimate. Third, it predicts the optimal join order based on real performance. However, it may introduce overhead caused by join order switching. Learning-based methods that have been proposed to replace traditional query optimizers often incur a great deal of training overhead because they have to learn from scratch. To mitigate the problem, Bao [60] (the Bandit optimizer)) is designed to take advantage of the existing query optimizers. Speci\ufb01cally, Bao learns to choose the best plan from the query plan candidates provided by available optimizers by passing di\ufb00erent \ufb02ags or hints to them. Bao transforms query plan trees into vectors and adopts a tree convolutional neural network to identify patterns in the tree. Then it formulates the choosing task as a contextual multi-armed bandit problem and uses Thompson sampling[92] to solve it. Bao is a hybrid solution for query optimization. It achieves good training time and is robust to changes in workload [60]. 3.6 Cache Management 3.6.1 View Materialization View materialization is the process of deciding which view, i.e., results of query or subquery, to cache. In database systems, a view is represented as a table and other queries could be accelerated by reading this table instead of accessing the original tables. There is an overhead of materializing and maintaining the view when the original table is updated. Existing methods are based on heuristics, which either rely on simple Least-Recently-Used rule or cost-model based approaches[74]. 
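Bao's selection step can be illustrated with a small Thompson-sampling sketch. For brevity it drops the contextual part (the tree-convolution features of candidate plans) and treats each hint set as an arm of a Gaussian bandit whose reward is negative latency; HINT_SETS and fake_execute are made-up stand-ins for real optimizer flags and plan execution.

import numpy as np

HINT_SETS = ["default", "no_nestloop", "no_hash", "no_index"]   # hypothetical flags

class ThompsonHintChooser:
    """Gaussian Thompson sampling over hint sets; reward = -latency (seconds)."""
    def __init__(self, n_arms, prior_var=1.0):
        self.mean = np.zeros(n_arms)           # running posterior mean per arm
        self.count = np.zeros(n_arms)
        self.prior_var = prior_var

    def choose(self):
        std = np.sqrt(self.prior_var / (self.count + 1.0))
        sample = np.random.normal(self.mean, std)   # sample a plausible reward per arm
        return int(np.argmax(sample))

    def update(self, arm, reward):
        self.count[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.count[arm]

def fake_execute(query_id, hint_idx):          # stand-in for running the chosen plan
    return np.random.gamma(2.0, 0.5 + 0.2 * hint_idx)

chooser = ThompsonHintChooser(len(HINT_SETS))
for q in range(200):
    arm = chooser.choose()
    latency = fake_execute(q, arm)
    chooser.update(arm, -latency)
print(HINT_SETS[int(np.argmax(chooser.mean))])

Bandit-style selection of this kind reuses existing optimizers instead of learning plans from scratch. View materialization, discussed next, has so far relied largely on heuristic caching policies.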
The performance of these approaches is limited because feedback from the historical performance of view materialization is not incorporated. Liang et al.[48] implement Deep Q-Materialization (DQM) system that leverages DRL to improve the view materialization process in the OLAP system. First, DQM analyzes SQL queries to \ufb01nd candidate views for the current query. Second, it trains a DRL agent to select from the set of candidates. Third, it uses an eviction policy to delete the 18 \fmaterialized views. In the MDP, the state encodes view state and workload information. The action is to create the view or do nothing. The reward is calculated by the query time improvement minus amortized creation cost. Additionally, the eviction policy is based on credit and it evicts the materialized view with the lowest score. Yuan et al.[121] present a di\ufb00erent way that use DRL to automate view generation and select the most bene\ufb01cial subqueries to materialize. First, the approach uses a DNN to estimate the bene\ufb01ts of a materialized view where features from tables, queries, and view plans are extracted. Then the approach models selection as an Integer Linear Programming (IIP) problem and introduce an iterative optimization method to \ufb01gure it out. However, the method cannot guarantee convergence. To address the issue, the problem is formulated as the MDP. The state encodes the subqueries that are selected to materialize and status if queries use these materialized views. The action is to choose the subquery to materialize or not. The reward is the di\ufb00erence between bene\ufb01t changes of two states. Both cost estimation and view selection models are trained o\ufb04ine using the actual cost of queries and bene\ufb01ts. Then the cost estimation model is used for the online recommendation for view materialization. Performance study shows its good performance; However, it lacks a comparison with DQM. 3.6.2 Storage Cache management impacts the performance of computer systems with hierarchical hardware structures directly. Generally, a caching policy considers which objects to cache, to evict when the cache is full to maximize the object hit rate in the cache. In many systems, the optimal caching policy depends on workload characteristics. Phoebe[111] is the RL-based framework for cache management for storage models. The state encodes the information from a preceding \ufb01xed-length sequence of accesses where for each access, nine features are extracted including data block address, data block address delta, frequency, reuse distance, penultimate reuse distance, average reuse distance, frequency in the sliding window, the number of cache misses, and a priority value. The action is to set a priority value ranging within [\u22121, 1] to the data. The reward is computed from if the cache is hit or missed and values are 1 and -1 respectively. It applies the DDPG algorithm to train the agent. Periodical training is employed to amortize training costs in online training. In network systems, one issue is that the reward delay is very long in systems with a large cache, i.e., CDN cache can host up to millions of objects. Wang et al.[100] propose a subsampling technique by hashing the objects to mitigate the issue when applying RL on caching systems. 4 Data Analytics Applications In this section, we shall discuss DRL applications from the perspective of data processing and data analytics. 
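The cache MDP in Phoebe can be sketched roughly as follows. The snippet keeps only four of the nine per-access features, uses a fixed linear-tanh priority head in place of the learned DDPG actor, and evicts the block with the lowest priority; the Zipf access stream and all parameter values are arbitrary choices for illustration.

import numpy as np

class PriorityCache:
    """Evicts the block with the lowest assigned priority when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.priority = {}                      # block -> priority in [-1, 1]

    def access(self, block, prio):
        hit = block in self.priority
        self.priority[block] = prio
        if not hit and len(self.priority) > self.capacity:
            victim = min(self.priority, key=self.priority.get)
            del self.priority[victim]
        return 1.0 if hit else -1.0             # reward as in the paper: +1 hit, -1 miss

def features(history, block):
    # simplified subset of the nine per-access features: address, address delta,
    # frequency in the window, window-bounded reuse distance
    addrs = [b for b, _ in history]
    delta = block - addrs[-1] if addrs else 0
    freq = addrs.count(block)
    reuse = addrs[::-1].index(block) + 1 if block in addrs else len(addrs) + 1
    return np.array([block, delta, freq, reuse], dtype=float)

def priority_policy(w, x):                      # deterministic actor; DDPG would learn w
    return float(np.tanh(w @ x))

w = np.random.randn(4) * 0.1
history, cache, total = [], PriorityCache(capacity=8), 0.0
for t in range(500):
    block = np.random.zipf(1.5) % 32            # skewed synthetic access stream
    x = features(history[-16:], block)
    r = cache.access(block, priority_policy(w, x))
    history.append((block, r))
    total += r
print(total / 500.0)                            # average reward tracks the hit rate

The remainder of the survey moves from system internals to DRL in the two application categories introduced above, data processing and data analytics.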
These two categories of DRL applications form indispensable parts of a pipeline, in which data processing provides a better basis for data analytics. In addition, these two categories share some overlapping topics, making these topics mutually motivating and stimulating. We have summarized the technical comparisons of di\ufb00erent applications in Table 3. We shall \ufb01rst discuss DRL applications in data preparation and then in data analytics. 19 \fTable 3: Representative works for RL applications. D(X) denotes the approximate dimension of X space. Domain Work Algorithm D(State) D(Action) DRL-based Approach Data processing Entity matching[11, 20] PG 100 1000 100 1000 Select target entity from the candidate entities application Database interaction with natural language [125, 14] PG 100 1000 100 1000 Learn to generate the query Feature engineering [52] DQN 100 1-10 Select features and model feature correlations in states Exploratory data analysis [4] A3C 10-100 100000 Learn to query a dataset for key characteristics Abnormal detection [69] IRL 1-10 1-10 Learn the reward function for normal sequences AutoML pipeline generation [28] DQN 10 100 Learn to select modules of a pipeline Healthcare Treatment recommendation [103] DDPG 10 100-1000 Select treatment from candidate treatments Diagnostic inference [51] DQN 100-1000 1-10 Learn diagnostic decision Hospital resource allocation [19] DDPG 100 100010000 Learn resource scheduling Fintech Portfolio optimization [12] QLearning 100 100 Select the portfolio weights for stocks Trading [115, 114] IRL 1-10 10 Learn the reward function of trading behaviors Fraud detection [114] IRL 100 10-100 Learn the reward function of trading behaviors EOnline advertising [124] DQN 1-10 1-10 Learn to schedule the advertisements Commerce Online recommendation [10] DQN 100 10000 Learn to schedule recommendations Search results aggregation [91] DQN 10-100 10-100 Learn to schedule search results Others User pro\ufb01ling [105] DQN 100-1000 100010000 Select users\u2019 next activities by modeling spatial semantics Spammer detection [16] PG 100 100 Search for the detector by interacting with spammers Transportation [83] PG 100010000 1000 Learn to schedule transportation 20 \f4.1 Data Preparation 4.1.1 Entity Matching Entity matching is a data cleaning task that aligns di\ufb00erent mentions of the same entity in the context. Clark et al. [11] identify the issue that the heuristic loss function cannot e\ufb00ectively optimize the evaluation metric B3, and propose using reinforcement learning to directly optimize the metric. The problem is formulated as a sequential decision problem where each action is performed on one mention of a document. The action maps the mention to an entity in the database at each step by a mention ranking model. Then the reward is calculated using the evaluation metric B3. This work originally proposes scaling each action\u2019s weight by measuring its impact on the \ufb01nal reward since each action is independent. However, this work does not consider the global relations between entities. Fang et al. [20] propose a reinforcement learning framework based on the fact that an easier entity will create a better context for the subsequent entity matching. Speci\ufb01cally, both local and global representations of entity mentions are modeled and a learned policy network is devised to choose from the next action (i.e., which entity to recognize). 
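Because the B3 metric is the delayed reward in the coreference work of Clark et al. [11], a small sketch of how such a clustering reward is computed may be useful; the mention and entity names are toy values, both assignments are assumed to cover the same mentions, and the policy update itself is only indicated in the trailing comment.

import numpy as np

def b_cubed(pred, gold):
    """pred, gold: dicts mention -> cluster id. Returns (precision, recall, F1)."""
    def clusters(assign):
        c = {}
        for m, cid in assign.items():
            c.setdefault(cid, set()).add(m)
        return c
    pc, gc = clusters(pred), clusters(gold)
    p = np.mean([len(pc[pred[m]] & gc[gold[m]]) / len(pc[pred[m]]) for m in pred])
    r = np.mean([len(pc[pred[m]] & gc[gold[m]]) / len(gc[gold[m]]) for m in pred])
    f = 0.0 if p + r == 0 else 2 * p * r / (p + r)
    return p, r, f

# episode reward for a sequence of linking actions = B3 F1 of the induced clustering
gold = {"m1": "e1", "m2": "e1", "m3": "e2"}
pred = {"m1": "e1", "m2": "e2", "m3": "e2"}
_, _, reward = b_cubed(pred, gold)
# a REINFORCE-style update would scale each linking action's log-probability gradient
# by this reward, optionally weighted by the action's marginal impact on the metric,
# as in the mention-ranking formulation; the easier-entity-first ordering of Fang et
# al. [20] can use the same kind of delayed clustering reward.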
However, the selection of the easier entity to learn the context could be less powerful than context modeling with more recent techniques in NLP such as the transformer. 4.1.2 Database Interaction With Natural Language To facilitate query formulation for relational databases, there have been e\ufb00orts in generating SQL queries from various other means that do not require knowledge of SQL and schema. Zhong et al. [125] propose to generate SQL from a natural language using Reinforcement Learning. For queries formed by a natural language, the model Seq2SQL will learn a policy transforming the queries into SQL queries. The transformed queries will then be executed in the database system to get results. The results will be compared with the ground truth to generate RL rewards. Earlier work [14] using generic autoencoder model for semantic parsing with Softmax as the \ufb01nal layer may generate unnecessarily large output spaces for SQL query generation tasks. Thus the structure of SQL is used to prune the output space of query generating and policy-based reinforcement learning to optimize the part which cannot be optimized by cross-entropy. However, RL is observed to have limited performance enhancement by [113] due to unnecessary modeling of query serialization. E\ufb03ciently querying a database of documents is a promising data processing application. Karthik et al. [67] propose collecting evidence from external sources of documents to boost extraction accuracy to original sources where data might be scarce. The problem is formulated as an MDP problem, where each step the agent needs to decide if current extracted articles are accepted and stop querying, or these articles are rejected and more relevant articles are queried. Both data reconciliation (from original sources) and data retrieval (from external sources) are represented as states. Extraction accuracy and penalties for extra retrieval actions are re\ufb02ected in the reward function. 4.1.3 Feature Engineering Feature engineering can be formulated as a single-agent reinforcement learning problem to search for an optimal subset of features in a large space: the agent selects one feature at each action step. The state is the current feature subspace. A reward is assigned to the agent based on the predictive performance of the current features subset. Liu et al. [52] 21 \fpropose a method to reformulate feature engineering as a multi-agent reinforcement learning problem. The multi-agent RL formulation reduces the large action space of a single agent since now each of the agents has a smaller action space for one feature selection. However, this formulation also brings challenges: interactions between agents, representation of the environment, and selection of samples. Three technical methods in [52] have been proposed to tackle them respectively: adding inter-feature information to reward formulation, using meta statistics, and deep learning methods to learn the representation of the environment, and Gaussian mixture to independently determine samples. However, although this formulation reduces the action space, the trade-o\ufb00is using more computing resources to support more agents\u2019 learning. Also, the method is di\ufb03cult to scale to a large feature space. 4.1.4 Exploratory Data Analysis Exploratory data analysis (EDA) is useful for users to understand the characteristics of a new dataset. In [4], the problem is formulated as a MDP. 
The action space is the combination of a \ufb01nite set of operators and their corresponding parameters to query a dataset. The result of a query shows the characteristics of the dataset. The characteristics are modeled as the state, which is represented by descriptive statistics and recent operators. The reward signal measures the interestingness, diversity, and coherency of the characteristics by an episode of EDA operations. DRL is applied to the non-di\ufb00erential signals and discrete states in MDP. However, challenges arise when applying deep reinforcement learning given a large number of possible actions as parameterized operations (i.e., for each type of operation, the corresponding possible action is the Cartesian product of all parameters\u2019 possible values). In [4], a two-fold layer architecture is proposed to replace a global softmax layer into two local layers, which e\ufb00ectively reduces the intractable large numbers of actions. However, the global interactions of operations and attributes are not considered. 4.1.5 Abnormal Detection Abnormal detection is important for high-stake applications such as healthcare (e.g., predicting patients\u2019 status) and \ufb01ntech (e.g., \ufb01nancial crime). Based on the assumptions, there are two approaches to this problem. One approach models the dynamics in the unlabeled datasets as a sequential decision process where the agent performs an action on each observation. Oh et al. [69] propose to use IRL to learn a reward function and a Bayesian network to estimate a con\ufb01dence score for a potential abnormal observation. To achieve this, the prior distribution of the reward function is assumed. Then a reward function is sampled from the distribution to determine the sample generating policy, which generates sample background trajectories. As explained by the reward part of Section 2.3.2, experts\u2019 trajectories are observed. With these experts\u2019 trajectories and sample background trajectories, the parameters of the reward function are updated and thus the policy is improved. The sequence of actions is the input into the neural network. This network is trained to learn the normal pattern of a targeted agent and to predict if the next observation is abnormal or not. However, this approach relies too much on mining unlabeled datasets and ignores the labeled dataset. To address this issue, another approach also uses DRL but focus on the Exploit-Explore trade-o\ufb00on both unlabeled and labeled dataset. Pang et al. [73] propose a DRL model with a sampling function to select data instances from both the unlabeled and labeled dataset. This sampling function helps the DRL model to exploit the scarce but useful labeled anomaly data instances and to explore the large unlabeled dataset for novel anomaly data instances. Thus, more anomaly data instances are selected 22 \fto train the DRL model with better model capacity. 4.1.6 AutoML Pipeline Generation Pipeline generation includes generating all data processing and analytics steps or modules to perform ML tasks. He\ufb00etz et al. [28] propose a grid-world to represent all possible families of each step of a data pipeline as cells and connect all possible cells as a graph. Subsequently, a hierarchical method is used to reduce the space of all actions and represent all actions by layers of clusters. Finally, the state representations are inputs to the value sub-network in a DQN network, and action representations are inputs to evaluate the advantage-to-average sub-network. 
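Before moving on to healthcare applications, the two-fold (factored) action layer used in [4] for parameterized EDA operators can be sketched as below: the policy first samples an operator family, then a parameter conditioned on it, instead of using a single softmax over the Cartesian product of operators and parameters. The operator names, parameter counts, and the linear heads are illustrative assumptions.

import numpy as np

OPERATORS = ["filter", "group", "sort"]                 # hypothetical operator families
PARAMS = {"filter": 40, "group": 12, "sort": 6}         # parameter choices per operator

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sample_action(h, W_op, W_param):
    """Two local heads replace one global softmax over the operator-parameter product."""
    p_op = softmax(W_op @ h)
    op = np.random.choice(len(OPERATORS), p=p_op)
    name = OPERATORS[op]
    p_par = softmax(W_param[name] @ h)
    par = np.random.choice(PARAMS[name], p=p_par)
    logp = np.log(p_op[op]) + np.log(p_par[par])        # joint log-prob for the policy gradient
    return name, par, logp

state_dim = 16
h = np.random.randn(state_dim)                          # state: descriptive stats + recent operators
W_op = np.random.randn(len(OPERATORS), state_dim) * 0.1
W_param = {k: np.random.randn(v, state_dim) * 0.1 for k, v in PARAMS.items()}
print(sample_action(h, W_op, W_param))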
4.2 Healthcare Healthcare analytics has gained increasing attention in tandem with the advancement of healthcare treatment and availability of medical data and computational capacity [41]. Naturally, a great amount of e\ufb00ort has been spent on applying DRL to healthcare. As before, implementing DRL-based models in healthcare requires the understanding of the application context and de\ufb01ning the key elements of MDP. However, di\ufb00erences occur in the approaches to learning better decisions: learning the motivation of expert decisions by IRL, learning better decisions without an expert by interacting with an environment or interacting with an environment with expert decisions as supervising signals. 4.2.1 Treatment Recommendation Treatment recommendation systems are designed to assist doctors to make better decisions based on electronic health records. However, the doctors\u2019 prescriptions are not ground truth but valuable suggestions for high stake medical cases. The ground truth is the delayed condition of the patients. Thus model predictions must not deviate from the doctors\u2019 judgments too much, and not use those judgments as true labels. To tackle this challenge, Wang et al. [103] propose an architecture to combine supervised learning and reinforcement learning. This model reduces the inconsistency between indicator signals learned from doctor\u2019s prescriptions via supervised learning and evaluation signals learned from the long-term outcome of patients via reinforcement learning. In the formulated MDP, the domain expert makes a decision based on an unknown policy. The goal is to learn a policy that simultaneously reduces the di\ufb00erence between the chosen action of the agent and the expert\u2019s decision and to maximize the weighted sum of discounted rewards. 4.2.2 Diagnostic Inference Using DRL to perform diagnosis can provide a second opinion in high-intensity diagnosis from historical medical records to reduce diagnostic errors. Ling et al. [51] propose modeling the integration of external evidence to capture diagnostic concept as a MDP. The objective is to \ufb01nd the optimal policy function. The inputs are case narratives and the outputs are improved concepts and inferred diagnoses. The states are a set of measures over the similarity of current concepts and externally extracted concepts. The actions are whether to accept (part of) the extracted concepts from external evidence. The environments are the top extracted case narratives from Wikipedia as the document pool for concepts extraction and a knowledge base for evaluating the intermediate results for current best concepts. The rewards are evaluated based on an external knowledge base mapping from the concepts to the diagnoses. The whole process is modeled by DQN. At 23 \feach step, narrative cases and evidence are extracted, which provide the initial concepts and external concepts. The state representing the agent\u2019s con\ufb01dence in the learned concept is duly calculated. Then the state is sent to the DQN agent to estimate the reward to model the long-run accuracy of the learned concept by the agent. Iteratively, the model converges with better concepts and diagnoses. 4.2.3 Hospital Resource Allocation Allocating limited hospital resources is the key to providing timely treatment for patients. In [19], the problem is formulated as a classi\ufb01cation problem where the patients\u2019 features are given and the target is to predict the location of admissions. 
The RL framework uses a student network to solve the classi\ufb01cation problem. The weights of the student network are used as states, which are fed into a teacher network to generate actions to select which batch of data to train the student network. The accuracy of the classi\ufb01cation is used as the reward. This method provides a view on the resource allocation problem from a curriculum learning perspective. However, the temporal information of the data samples is not considered but it could a\ufb00ect resource allocation since some hours during a day could have fewer patients than the others. 4.3 Fintech Reinforcement learning has wide applications in the \ufb01nance domain. Firstly, reinforcement learning has brought new perspectives to let the \ufb01nance research community revisit many classic \ufb01nancial research topics. For example, traditional \ufb01nancial research topics such as option pricing that are typically solved by the classic Black\u2013Scholes model can be steered through with a data-driven insight by reinforcement learning [25]. Secondly, portfolio optimization, typically formulated as a stochastic optimal control problem, can be addressed by reinforcement learning. Finally, the agents are \ufb01nancial market participants with different intentions. Reward functions can be learned to model these intentions, and hence, make better decisions as illustrated in Figure 3. We refer readers with further interest in \ufb01nance to [62]. 4.3.1 Dynamic Portfolio Optimization The portfolio optimization problem is challenging because of the high scale of the dimensionality and the high noise-to-signal ratio nature of stock price data. The latter problem of noisy observation can cause uncertainty in a learned policy. Therefore, [12] proposes a novel model structure based on the Q-learning to handle noisy data and to scale to high dimensionality. The quadratic form of reward function is shown to have a semi-analytic solution that is computationally e\ufb03cient. In the problem formulation, the agent\u2019s actions are represented as the changes in the assets at each time step. The states are the concatenation of market signals and the agent\u2019s holding assets. This method enhances Q-learning by introducing an entropy term measuring the noise in the data. This term acts as a regularization term forcing the learned policy to be close to a reference policy that is modeled by a Gaussian distribution. 4.3.2 Algorithm Trading Strategy Identi\ufb01cation Identi\ufb01cation of algorithm trading strategies from historical trades is important in fraud detection and maintaining a healthy \ufb01nancial environment. [114] proposes using IRL to learn the reward function behind the trading behaviors. The problem is formulated as 24 \fFigure 3: DRL in \ufb01ntech applications. an Inverse Markov Decision Process (IMDP). The states are the di\ufb00erences between the volumes of bid orders and ask orders, which are discretized into three intervals based on the values of the volumes. The actions are the limit and market order discretized into 10 intervals each by their values. The prior distribution of the reward function is a Gaussian Process parameterized by \u03b8. Given \u03b8, the approximation of the posterior distribution of reward is performed by maximum a posteriori (MAP). This step would give a MAP estimated value of the reward. \u03b8 is optimized by a log-likelihood function on the posterior of observations. 
The optimization process can be proved to be convex which guarantees the global minimum. The learned features are then used to identify and classify trading strategies in the \ufb01nancial markets. 4.3.3 Sentiment-based Trading One of the main predictors in stock trading is sentiment, which drives the demand of bid orders and asks orders. Sentiment scores are often represented by unstructured text data such as news or twitters. [115] proposes treating the sentiment as the aggregated action of all the market participants, which has the advantage of simplifying the modeling of the numerous market participants. Speci\ufb01cally, the sentiment scores are categorized into three intervals: high, medium, and low as the action spaces. Compared to previous works, the proposed method can model the dependency between the sentiment and the market state by the policy function. This method is based on Gaussian Inverse Reinforcement Learning [43] similar to [114] as discussed at the beginning of Section 4.3, which is e\ufb00ective at dealing with uncertainty in the stock environment. This method provides a method for modeling market sentiments. However, as IRL faces the challenge of non-uniqueness of reward [13] of one agent\u2019s actions, the method does not address how aggregated actions of multiple market participants can infer a unique reward function. 25 \f4.4 E-Commerce 4.4.1 Online Advertising With the increasing digitalization of businesses, sales and competition for market shares have moved online in tandem. As a result, online advertising has been increasing in its presence and importance and exploiting RL in various aspects. One of the topics in online advertising, bidding optimization, can be formulated as a sequential decision problem: the advertiser is required to have strategic proposals with bidding keywords sequentially to maximize the overall pro\ufb01t. In [124], the issue of using static transitional probability to model dynamic environments is identi\ufb01ed and a new DRL model is proposed to exploit the pattern discovered from dynamic environments. Including but not limited to advertising, Feng et al. [21] propose to consider the whole picture of multiple ranking tasks that occurred in the sequence of user\u2019s queries. A new multi-agent reinforcement learning model is proposed to enable multiple agents to partially observe inputs and choose actions through their own actor networks. The agents communicate through a centralized critic model to optimize a shared objective. This allows di\ufb00erent ranking algorithms to reconcile with each other when taking their own actions and consider the contextual information. 4.4.2 Online Recommendation The problem of an unstabilized reward function arises because of the dynamic environment in the online recommendation. For example, user preference is modeled as the reward in DRL and it changes unexpectedly when a special discount happens for some products. In [10], a random strati\ufb01ed sampling method is proposed to calculate the optimal way of stratifying by allocating more samples to the strata with more weighted variance. Then the replay sampling is improved to consider key attributes of customers (e.g., gender, age, etc.), which are less volatile in the dynamic environment. This allows the modeling of reward function based on sampling from a pool with a longer horizon, thus reducing the bias in the estimation of the reward function. Lastly, the dynamic environment poses a challenge in setting an optimal policy used in regretting. 
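The stratified replay idea can be illustrated with a Neyman-style allocation that gives more samples to strata with larger size-weighted standard deviation; whether [10] uses exactly this formula is not stated above, so the snippet should be read only as an illustration of "more samples to the strata with more weighted variance", with toy strata standing in for customer attribute groups.

import numpy as np

def neyman_allocation(strata_rewards, total_samples):
    """Allocate replay samples across strata proportionally to N_h * std_h."""
    sizes = np.array([len(r) for r in strata_rewards], dtype=float)
    stds = np.array([np.std(r) if len(r) > 1 else 1.0 for r in strata_rewards])
    weights = sizes * stds
    weights = weights / weights.sum()
    return np.maximum(1, np.round(weights * total_samples)).astype(int)

def stratified_replay(strata_buffers, strata_rewards, total_samples=256):
    alloc = neyman_allocation(strata_rewards, total_samples)
    batch = []
    for buf, n in zip(strata_buffers, alloc):
        idx = np.random.randint(0, len(buf), size=min(int(n), len(buf)))
        batch.extend(buf[i] for i in idx)
    return batch

# toy strata, e.g. customers split by age band or gender
rng = np.random.default_rng(1)
rewards = [rng.normal(0.1, s, size=500).tolist() for s in (0.1, 0.5, 2.0)]
buffers = [list(range(500)) for _ in rewards]            # stand-in transition buffers
print(neyman_allocation(rewards, 256))                   # noisiest stratum gets most samples

The remaining challenge noted above, keeping a stable reference policy under the shifting environment, is handled separately in [10], as described next.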
A new method in [10] is proposed to train an o\ufb04ine model to calculate a real-time reward for a subset of customers to approximate a reference policy, that is used as an o\ufb00set in the reward recalibration to stabilize the performance of the DRL algorithm. 4.4.3 Search Results Aggregation Aggregating useful search results in online shopping search is important to improve the shopping experience. However, the challenge of aggregating heterogeneous data sources is often encountered. The heterogeneous data sources in online shopping are di\ufb00erent product categories such as a shoe brand group or a particular topic group, each of which is a ranking system. A new model in [91] is proposed to decompose the task into two sub-tasks. The \ufb01rst one is to select a data source for the current page of search results based on historical users\u2019 clicks on previous pages. Learning to select the correct data source for each page is a sequential decision-making problem. The second sub-task is to \ufb01ll the sequence of a page by selecting the best source from the candidate sources. However, the items from di\ufb00erent sources cannot be directly compared because of their heterogeneous nature. The problem is solved by formulating the sub-task as an RL task to let an agent \ufb01ll up the sequence. However, one limitation of this method is that lacking full annotations of item relevance scores may constrain the model\u2019s performance on various scenarios [91]. 26 \f4.5 Other Applications DRL has been applied to various other applications. These DRL methods are often used with a knowledge graph, confounders, or game theory to model application-speci\ufb01c dynamics. These methods are not only well motivated from their respective applications but also general enough to be applied in other applications. However, these methods often fail to be evaluated by experiments in other applications. The problem of mobile user pro\ufb01ling aims to identify user pro\ufb01les to provide personalized services. In [105], the action is the selection of a place of visit. The environment is comprised of all users and a knowledge graph learning the semantic connections between the spatial entities. The knowledge graph is updated once a user\u2019s new activity is performed and then a\ufb00ects the agent\u2019s prediction. The state is the embedding of a user and the knowledge graph for the current time step. The reward is determined by several metrics measuring the similarity between the predicted spatial entities and the ground truth. This method considers the spatial semantics of entities but does not consider how the change of a user\u2019s key attributes (e.g., career) will a\ufb00ect activity prediction and policy learning, which could cause instability in policy updating. In the transportation system, drivers often get recommendations and provide feedback in return to improve the service. However, the recommendation often fails when drivers make decisions in a complex environment. To address this issue, in [83] a new method is proposed to model hidden causal factors, called confounders, in a complex environment. Speci\ufb01cally, the framework in [32] is extended to include the confounders. First, all three elements (i.e., policy agent, environment, confounder) are treated as agents. The e\ufb00ect of a confounder is modeled as the policy of the hidden agent, which takes the observation and action of the policy agent as inputs and performs an action. 
The environment in turn takes the action based on inputs of the hidden agent\u2019s action and the policy agent\u2019s action and observation. The problem of spammer detection aims to detect spam generating strategies. The challenge is that the detectors only detect easier spams while missing spams with strategies. In [16], the problem is formulated as two agents counteracting each other. One agent is the spammer, whose policy is to maintain a distribution of spam strategies and the action is to sample from the distribution. Another agent is the detector, whose state is the detection results after a spam attack and the action is to identify the spam. The rewards of two agents are measured by winning or losing revenue manipulation, respectively. The limitation of this method is that there is no guarantee for equilibrium. 5 Open Challenges and Future Directions RL approaches provide strong alternatives to traditional heuristics or supervised learningbased algorithms. However, many challenges remain to be addressed to make RL a practical solution in the context of data processing and analytics. We also foresee many important future research directions to be developed. 5.1 Open Challenges For System Optimization 5.1.1 MDP Formulation and Lack of Justi\ufb01cation The design of MDP impacts the performance and e\ufb03ciency of the RL algorithm greatly. The state should satisfy Markov property that its representation contains enough relevant 27 \finformation for the RL agent to make the optimal decision. It should summarize the environment compactly because a complicated state design will cause more training and inference costs. The action space should be designed carefully to balance learning performance and computational complexity. The reward de\ufb01nition directly a\ufb00ects the optimization direction and the system performance. Additionally, the process of reward calculation can involve costly data collection and computation in the data systems optimization. Currently, many works rely on experimental exploration and experience to formulate MDP while some works exploit domain knowledge to improve the MDP formulation by injecting task-speci\ufb01c knowledge into action space[110]. Generally, MDP can in\ufb02uence computational complexity, data required, and algorithm performance. Unfortunately, many works lack ablation studies of their MDP formulations and do not justify the design in a convincing manner. Therefore, automation of MDP formulation remains an open problem. 5.1.2 RL Algorithm and Technique Selection RL algorithms and techniques have di\ufb00erent tradeo\ufb00s and assumptions. Value-based DRL algorithms like DQN are not stable and guaranteed convergence. Policy-based DRL algorithms like TRPO and PPO are often not e\ufb03cient. Model-based DRL algorithms do not guarantee that a better model can result in a better policy. Value-based methods assume full observability while policy-based ones assume episodic learning. O\ufb00-policy algorithms are usually more e\ufb03cient than on-policy algorithms in terms of sample e\ufb03ciency. One example is that DQ[37] uses o\ufb00-policy deep Q-learning to increase data e\ufb03ciency and reduce the number of training queries needed. Training e\ufb03ciency can be a big concern for DRL-based system optimization, especially when the workload of the system could change dramatically and the model needs to be retrained frequently. Generally, RL algorithms and techniques selection a\ufb00ect the training e\ufb03ciency and e\ufb00ectiveness greatly. 
5.1.3 Integration with Existing Systems Integrating RL-based methods into the real system more naturally and seamlessly faces many challenges. The RL agent has to be evolved when the system environment changes (e.g., workload) and the performance is degraded. We need to design new model management mechanisms to monitor, maintain, and upgrade the models. Furthermore, we \ufb01nd that the RL-based solutions can be lightweight or intrusive. The lightweight approach in which the RL agent is not designed as a component of the system, e.g. using RL to generate the qd-tree[116], is easier to integrate into the system because it does not change the architecture of the system dramatically. In contrast, the intrusive approach such as using RL models for join order optimization[61] is deeply embedded in the system and hence may need a redesign and optimization of the original system architecture to support model inference e\ufb03ciently. SageDB[35] proposes to learn various database system components by integrating RL and other ML techniques. Nevertheless, the proposed model-driven database system is yet to be fully implemented and benchmarked. It is likely that the data system architecture needs to be overhauled or signi\ufb01cantly amended in order to graft data-driven RL solutions into the data system seamlessly to yield an overall performance gain. 5.1.4 Reproducibility and Benchmark In the data system optimization problem, RL algorithms are not easy to be reproduced due to many factors such as lacking open source codes, workload, historic statistics used, and the unstable performance of RL algorithms. The landscape of problems in system 28 \foptimization is vast and diverse. It could prevent fair comparison and optimization for future research works and deployments in practice. Lacking benchmarks is another challenge to evaluate these RL approaches. The benchmarks are therefore to provide standardized environments and evaluation metrics to conduct experiments with di\ufb00erent RL approaches. There are some e\ufb00orts to mitigate the issue. For example, Park[57] is an open platform for researchers to conduct experiments with RL. However, it only provides a basic interface and lacks system speci\ufb01cations. There is much room to improve with regards to the reproducibility and benchmark in order to promote the development and adoption of RL-based methods[30]. 5.2 Open Challenges For Applications 5.2.1 Lack of Adaptability There is a lack of adaptability for methods on a single component of a data pipeline to the whole. For example, many works focus on data cleaning tasks such as entity matching. However, little works have shown their e\ufb03ciency in deploying their model in an end-to-end data pipeline. These works treat the tasks isolatedly from other tasks in the pipeline, thereby limiting the pipeline\u2019s performance. In healthcare, each method is applied in di\ufb00erent steps of the whole treatment process, without being integrated and evaluated as one pipeline. One possible direction could be considering DRL as a module in the data pipeline optimization. However, data pipeline optimization has been focusing on models simpler than DRL to enable fast pipeline evaluation [53]. How to e\ufb03ciently incorporate DRL into the data pipeline optimization remains a challenge. 5.2.2 Di\ufb03culty in Comparison with Di\ufb00erent Applications To date, most works with generalized contributions are only evaluated domain-speci\ufb01cally. 
Research questions are often formulated in their own platform as in E-Commerce. This presents di\ufb03culty in evaluating the methods for di\ufb00erent environments. For example, the confounders modeling hidden causal factors in [83] can also contribute to DRL modeling in E-commerce. This is because modeling customers\u2019 interests are always subject to changing environments and a new environment may contain hidden causal factors. For example, consumers are more willing to buy relevant products for certain situations such as Covid19. Thus a general DRL method is yet to show the robustness and e\ufb00ectiveness under the environment of di\ufb00erent applications. 5.2.3 Lack of Prediction in Multi-modality In healthcare and \ufb01nance, multiple sources of data bring di\ufb00erent perspectives. For example in healthcare, electronic health records, image scans, and medical tests can provide di\ufb00erent features for accurate prediction. In addition, these sources of data with di\ufb00erent sample frequencies provide contextual information for modeling a patient\u2019s visits to the hospital or symptom development. However, most innovations in healthcare focus on one particular source of data. How to integrate the contextual information with multi-modality e\ufb00ectively remains an unsolved di\ufb03cult problem. 5.2.4 Injecting Domain Knowledge in Experience Replay In high-stake applications such as healthcare and \ufb01nance, injecting domain knowledge can make decision making in RL more robust and explainable. One possible way is to inject the 29 \fknowledge of human beings\u2019 experience into an agent\u2019s experience pool as a prior distribution for the policy. For example, in dynamic portfolio optimization, a portfolio manager could have a large source of experience for risk management and pro\ufb01t optimization. Such experience could be useful for warming up the agent\u2019s exploration in the search space. Some works have shown positive e\ufb00ects of domain knowledge injection on selecting important experiences (i.e., transition samples) [79]. Notwithstanding, it remains a big challenge to inject useful and relevant knowledge from the experience into the agent\u2019s experience pool. 5.3 Future Research Directions 5.3.1 Data Structure Design DRL provides an alternative way to \ufb01nd good data structures through feedback instead of designing them based on human knowledge and experience, e.g., decision tree[47] and the qd-tree[116]. These trees are optimized better because they are learned by interacting with the environment. DRL has also been e\ufb00ective in graph designs (e.g., molecular graph[117]). However, large-scale graph generation using DRL is di\ufb03cult and daunting because it involves a huge search space. Generating other important structures using DRL remains to be explored. Idreos et al.[33] propose a Data Alchemist that learns to synthesize data structures by DRL and other techniques including Genetic Algorithms and Bayesian Optimization. In summary, DRL has a role in the design of more e\ufb03cient data structures by interacting and learning from the environment. These indexes have to be adaptive to di\ufb00erent data distributions and workloads. 5.3.2 Interpretability The underlying logic behind the DRL agent is still unknown. In high-risk application areas such as healthcare, the adoption of DRL will be a big issue in the case that these approaches make wrong decisions and people do not know why it happens due to lack of interpretability. 
Many techniques have been proposed to mitigate the issue and provide interpretability[76]. However, they neglect domain knowledge from related \ufb01elds and applications and the explanations are not e\ufb00ective to human users. To instill con\ufb01dence in the deployment of DRL-based systems in practice, interpretability is an important component and we should avoid treating DRL solutions as black boxes especially in critical applications. 5.3.3 Robustness by Causal Reasoning Modeling real-world applications by DRL inevitably su\ufb00ers from the problem of distribution changes. The real world has independent physical mechanisms that can be seen as di\ufb00erent modules. For example, an image is subjected to the light of the environment. Given the modular property, a structural type of modeling focusing on factorizing the causal mechanisms can extract the invariant causal mechanisms and show robustness cross distribution changes [82]. One research direction towards DRL robust decision making is to perform sampling from past actions from a causal perspective. Given the invariance property of causal mechanisms, past actions can be reused by capturing the invariant mechanisms in a changing environment. 5.3.4 Extension to Other Domains Beyond existing works, many classic problems in the data system and analytics could potentially be solved by DRL. For example, Polyjuice[101] learns the concurrency control 30 \falgorithm for a given workload by de\ufb01ning \ufb01ne-grained actions and states in the context of concurrency control. Though they use an evolutionary algorithm to learn and outperform a simple DRL baseline, we believe that there are huge potentials to further improve DRL for niche applications. Hence, we expect that more problems will be explored and solved with DRL in various domains in the near future. 5.3.5 Towards Intelligent and Autonomous Databases Although DRL algorithms could provide breakthrough performance on many tasks than traditional methods, many issues need to be addressed towards intelligent and autonomous databases. First, database schema could be updated and DRL models trained on the previous snapshots may not work. DRL algorithms need to tackle generalization[72]. Second, it would be so costly and infeasible to train models from scratch for each scenario and setting. Transfer learning from existing models could be a potential way to ease the workload greatly. Third, we have to choose appropriate DRL algorithms automatically, in the same spirit as AutoML. Fourth, current DBMS systems were designed without considering much about the learning mechanism. A radically new DBMS design may be proposed based on the learning-centric architecture. To support intelligent and autonomous database systems, DRL models intelligent behaviors and may provide a solid basis for achieving arti\ufb01cial general intelligence based on reward maximization and trial-and-error experience[88]. 6" + }, + { + "url": "http://arxiv.org/abs/1909.03939v2", + "title": "Deterministic Value-Policy Gradients", + "abstract": "Reinforcement learning algorithms such as the deep deterministic policy\ngradient algorithm (DDPG) has been widely used in continuous control tasks.\nHowever, the model-free DDPG algorithm suffers from high sample complexity. In\nthis paper we consider the deterministic value gradients to improve the sample\nefficiency of deep reinforcement learning algorithms. 
Previous works consider\ndeterministic value gradients with the finite horizon, but it is too myopic\ncompared with infinite horizon. We firstly give a theoretical guarantee of the\nexistence of the value gradients in this infinite setting. Based on this\ntheoretical guarantee, we propose a class of the deterministic value gradient\nalgorithm (DVG) with infinite horizon, and different rollout steps of the\nanalytical gradients by the learned model trade off between the variance of the\nvalue gradients and the model bias. Furthermore, to better combine the\nmodel-based deterministic value gradient estimators with the model-free\ndeterministic policy gradient estimator, we propose the deterministic\nvalue-policy gradient (DVPG) algorithm. We finally conduct extensive\nexperiments comparing DVPG with state-of-the-art methods on several standard\ncontinuous control benchmarks. Results demonstrate that DVPG substantially\noutperforms other baselines.", + "authors": "Qingpeng Cai, Ling Pan, Pingzhong Tang", + "published": "2019-09-09", + "updated": "2019-11-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Silver et al. propose the deterministic policy gradient (DPG) algorithm (Silver et al. 2014) that aims to \ufb01nd an optimal deterministic policy that maximizes the expected long-term reward, which lowers the variance when estimating the policy gradient, compared to stochastic policies (Sutton et al. 2000). Lillicrap et al. further combine deep neural networks with DPG to improve the modeling capacity, and propose the deep deterministic policy gradient (DDPG) algorithm (Lillicrap et al. 2015). It is recognized that DDPG has been successful in robotic control tasks such as locomotion and manipulation. Despite the effectiveness of DDPG in these tasks, it suffers from the high sample complexity problem (Schulman et al. 2015). Deterministic value gradient methods (Werbos 1990; Nguyen and Widrow 1990; Jordan and Rumelhart 1992; \u2217The \ufb01rst two authors contributed equally to this work. Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Fairbank 2008) compute the policy gradient through back propagation of the reward along a trajectory predicted by the learned model, which enables better sample ef\ufb01ciency. However, to the best of our knowledge, existing works of deterministic value gradient methods merely focus on \ufb01nite horizon, which are too myopic and can lead to large bias. Stochastic value gradient (SVG) methods (Heess et al. 2015) use the re-parameterization technique to optimize the stochastic policies. Among the class of SVG algorithms, although SVG(1) studies in\ufb01nite-horizon problems, it only uses onestep rollout, which limits its ef\ufb01ciency. Also, it suffers from the high variance due to the importance sampling ratio and the randomness of the policy. In this paper, we study the setting with in\ufb01nite horizon, where both state transitions and policies are deterministic. (Heess et al. 2015) gives recursive Bellman gradient equations of deterministic value gradients, but the gradient lacks of theoretical guarantee as the DPG theorem does not hold in this deterministic transition case. We prove that the gradient indeed exists for a certain set of discount factors. We then derive a closed form of the value gradients. However, the estimation of the deterministic value gradients is much more challenging. 
The dif\ufb01culty of the computation of the gradient mainly comes from the dependency of the gradient of the value function over the state. Such computation may involve in\ufb01nite times of the product of the gradient of the transition function and is hard to converge. Thus, applying the Bellman gradient equation recursively may incur high instability. To overcome these challenges, we use model-based approaches to predict the reward and transition function. Based on the theoretical guarantee of the closed form of the value gradients in the setting, we propose a class of deterministic value gradients DVG(k) with in\ufb01nite horizon, where k denotes the number of rollout steps. For each choice of k, we use the rewards predicted by the model and the action-value at k + 1 step to estimate of the value gradients over the state, in order to reduce the instability of the gradient of the value function over the state. Different number of rollout steps maintains a trade-off between the accumulated model bias and the variance of the gradient over the state. The deterministic policy gradient estimator can be viewed as a special case arXiv:1909.03939v2 [cs.LG] 13 Nov 2019 \fof this class, i.e., it never use the model to estimate the value gradients, and we refer it to DVG(0). As the model-based approaches are more sample ef\ufb01cient than model-free algorithms (Li and Todorov 2004; Levine and Koltun 2013), and the model-based deterministic value gradients may incur model bias (Wahlstr\u00a8 om, Sch\u00a8 on, and Deisenroth 2015), we consider an essential question: How to combines the model-based gradients and the model-free gradients ef\ufb01ciently? We propose a temporal difference method to ensemble gradients with different rollout steps. The intuition is to ensemble different gradient estimators with geometric decaying weights. Based on this estimator, we propose the deterministic value-policy gradient (DVPG) algorithm. The algorithm updates the policy by stochastic gradient ascent with the ensembled value gradients of the policy, and the weight maintains a trade-off between sample ef\ufb01ciency and performance. To sum up, the main contribution of the paper is as follows: \u2022 First of all, we provide a theoretical guarantee for the existence of the deterministic value gradients in settings with in\ufb01nite horizon. \u2022 Secondly, we propose a novel algorithm that ensembles the deterministic value gradients and the deterministic policy gradients, called deterministic value-policy gradient (DVPG), which effectively combines the model-free and model-based methods. DVPG reduces sample complexity, enables faster convergence and performance improvement. \u2022 Finally, we conduct extensive experiments on standard benchmarks comparing with DDPG, DDPG with modelbased rollouts, the stochastic value gradient algorithm, SVG(1) and state-of-the-art stochastic policy gradient methods. Results con\ufb01rm that DVPG signi\ufb01cantly outperforms other algorithms in terms of both sample ef\ufb01ciency and performance. Related Work Model-based algorithms has been widely studied (Moldovan et al. 2015; Montgomery and Levine 2016; Ha and Schmidhuber 2018; Hafner et al. 2018; Chua et al. 2018; Zhang et al. 2018) in recent years. Model-based methods allows for more ef\ufb01cient computations and faster convergence than model-free methods (Wang and Dietterich 2003; Li and Todorov 2004; Levine and Koltun 2013; Watter et al. 2015). 
There are two classes of model-based methods, one is to use learned model to do imagination rollouts to accelerate the learning. (Gu et al. 2016; Kurutach et al. 2018) generate synthetic samples by the learned model. PILCO (Deisenroth and Rasmussen 2011) learns the transition model by Gaussian processes and applies policy improvement on analytic policy gradients. The other is to use learned model to get better estimates of action-value functions. The value prediction network (VPN) uses the learned transition model to get a better target estimate (Oh, Singh, and Lee 2017). (Feinberg et al. 2018; Buckman et al. 2018) combines different modelbased value expansion functions by TD(k) trick or stochastic distributions to improve the estimator of the action-value function. Different from previous model-based methods, we present a temporal difference method that ensembles model-based deterministic value gradients and model-free policy gradients. Our technique can be combined with both the imagination rollout technique and the model-based value expansion technique. Preliminaries A Markov decision process (MDP) is a tuple (S, A, p, r, \u03b3, p0), where S and A denote the set of states and actions respectively. p(st+1|st, at) represents the conditional density from state st to state st+1 under action at. The density of the initial state distribution is denoted by p0(s). At each time step t, the agent interacts with the environment with a deterministic policy \u00b5\u03b8. We use r(st, at) to represent the immediate reward, contributing to the discounted overall rewards from state s0 following \u00b5\u03b8, denoted by J(\u00b5\u03b8) = E[P\u221e k=0 \u03b3kr(ak, sk)|\u00b5\u03b8, s0]. Here, \u03b3 \u2208[0, 1) is the discount factor. The Q-function of state st and action at under policy \u00b5\u03b8 is denoted by Q\u00b5\u03b8(st, at) = E[P\u221e k=t \u03b3k\u2212tr(ak, sk)|\u00b5\u03b8, st, at]. The corresponding value function of state st under policy \u00b5\u03b8 is denoted by V \u00b5\u03b8(st) = Q\u00b5\u03b8(st, \u00b5\u03b8(st)). We denote the density at state s \u2032 after t time steps from state s following the policy \u00b5\u03b8 by p(s, s \u2032, t, \u00b5\u03b8) . We denote the discounted state distribution by \u03c1\u00b5\u03b8(s \u2032) = R S P\u221e t=1 \u03b3t\u22121p0(s)p(s, s \u2032, t, \u00b5\u03b8)ds. The agent aims to \ufb01nd an optimal policy that maximizes J(\u00b5\u03b8). Deterministic Value Gradients In this section, we study a setting of in\ufb01nite horizon with deterministic state transition, which poses challenges for the existence of deterministic value gradients. We \ufb01rst prove that under proper condition, the deterministic value gradient does exist. Based on the theoretical guarantee, we then propose a class of practical algorithms by rolling out different number of steps. Finally, we discuss the difference and connection between our proposed algorithms and existing works. Deterministic Policy Gradient (DPG) Theorem (Silver et al. 2014), proves the existence of the deterministic policy gradient for MDP that satis\ufb01es the regular condition, which requires the probability density of the next state p(s \u2032|s, a) to be differentiable in a. 
In the proof of the DPG theorem, the existence of the gradient of the value function is \ufb01rstly proven, i.e., \u2207\u03b8V \u00b5\u03b8(s) = Z S \u221e X t=0 \u03b3tp(s, s\u2032, t, \u00b5\u03b8)\u2207\u03b8\u00b5\u03b8(s\u2032) \u2207a\u2032Q\u00b5\u03b8(s\u2032, a\u2032)|a\u2032=\u00b5\u03b8(s\u2032)ds\u2032, (1) then the gradient of the long-terms rewards exists. Without this condition, the arguments in the proof of the DPG theorem do not work 1, and poses challenges for cases where the differentiability is not satis\ufb01ed. Note this condition does not hold in any case with deterministic transitions. Therefore, one must need a new theoretical guarantee to determine the 1Readers can refer to http://proceedings.mlr.press/v32/silver14supp.pdf \fexistence of the gradient of V \u00b5\u03b8(s) over \u03b8 in deterministic state transition cases. Deterministic value gradient theorem We now analyze the gradient of a deterministic policy. Denote T(s, a) the next state given current state s and action a. Without loss of generality, we assume that the transition function T is continuous, differentiable in s and a and is bounded. Note that the regular condition is not equivalent to this assumption. Consider a simple example that a transition T(s, a) = s + a, then the gradient of p(s\u2032|s, a) over a is in\ufb01nite or does not exist. However, the gradient of T(s, a) over a exists. By de\ufb01nition, \u2207\u03b8V \u00b5\u03b8(s) =\u2207\u03b8 \u0010 r (s, \u00b5\u03b8(s)) + \u03b3V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)) \u0011 =\u2207\u03b8r(s, \u00b5\u03b8(s)) + \u03b3\u2207\u03b8V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)) + \u03b3\u2207\u03b8T(s, \u00b5\u03b8(s))\u2207s\u2032V \u00b5\u03b8(s \u2032). Therefore, the key of the existence (estimation) of the gradient of V \u00b5\u03b8(s) over \u03b8 is the existence (estimation) of \u2207sV \u00b5\u03b8(s). In Theorem 1, we give a suf\ufb01cient condition of the existence of \u2207sV \u00b5\u03b8(s). Theorem 1 For any policy \u00b5\u03b8, the gradient of the value function over the state, \u2207sV \u00b5\u03b8(s), exists with two assumptions: \u2022 A.1: The set of states that the policy visits starting from any initial state s is \ufb01nite. \u2022 A.2: For any initial state s, by Assumption A.1, we get that there is a periodic loop of visited states. Let (s0, s1, ..., sk) denote the loop, and A(s) = \u03b3k+1 Qk i=0 \u2207siT(si, \u00b5\u03b8(si)), the power sum of A(s), P\u221e m=0 Am(s) converges. Proof 1 By de\ufb01nition, V \u00b5\u03b8(s) = r(s, \u00b5\u03b8(s)) + \u03b3V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)). (2) Taking the gradient of Eq. (2), we obtain \u25bdsV \u00b5\u03b8(s) = \u25bds r(s, \u00b5\u03b8(s)) +\u03b3 \u25bds T(s, \u00b5\u03b8(s)) \u25bds\u2032 V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)). (3) Unrolling Eq. (3) with in\ufb01nite steps, we get \u25bdsV \u00b5\u03b8(s) = \u221e X t=0 \u03b3tg(s, t, \u00b5\u03b8) \u25bdst r(st, \u00b5\u03b8(st)), (4) where g(s, t, \u00b5\u03b8) = Qt\u22121 i=0 \u25bdsiT(si, \u00b5\u03b8(si)), s0 = s and si is the state after i steps following policy \u00b5\u03b8. 
With Assumption A.1, we rewrite (4) using the indicator function I(s, s', t, \mu_\theta), which indicates whether s' is reached after t steps from the initial state s following the policy \mu_\theta:
\nabla_s V^{\mu_\theta}(s) = \sum_{t=0}^{\infty} \sum_{s' \in B(s,\theta)} \gamma^t g(s, t, \mu_\theta) I(s, s', t, \mu_\theta) \nabla_{s'} r(s', \mu_\theta(s')),   (5)
where B(s, \theta) is the set of states the policy visits from s. We now prove that for any \mu_\theta, s, s', the infinite sum of gradients \sum_{t=0}^{\infty} \gamma^t g(s, t, \mu_\theta) I(s, s', t, \mu_\theta) converges. For each state s', there are three cases during the process starting from the initial state s and running for infinitely many steps:
1. Never visited: \sum_{t=0}^{\infty} \gamma^t g(s, t, \mu_\theta) I(s, s', t, \mu_\theta) = 0.
2. Visited once: let t_{s'} denote the number of steps it takes to reach the state s'; then \sum_{t=0}^{\infty} \gamma^t g(s, t, \mu_\theta) I(s, s', t, \mu_\theta) = \gamma^{t_{s'}} g(s, t_{s'}, \mu_\theta).
3. Visited infinitely many times: let t_1 denote the number of steps it takes to reach s' for the first time. The state s' is then revisited every k steps after the previous visit. By definition,
\sum_{t=0}^{\infty} \gamma^t g(s, t, \mu_\theta) I(s, s', t, \mu_\theta) = \sum_{a=0}^{\infty} \gamma^{t_1} g(s, t_1, \mu_\theta) A^a(s).   (6)
By Assumption A.2, (6) converges. By exchanging the order of the limit and the summation,
\nabla_s V^{\mu_\theta}(s) = \sum_{s' \in B(s,\theta)} \sum_{t=0}^{\infty} \gamma^t g(s, t, \mu_\theta) I(s, s', t, \mu_\theta) \nabla_{s'} r(s', \mu_\theta(s')).   (7)
Assumption A.1 guarantees the existence of the stationary distribution of states theoretically. In practice, it holds on most continuous tasks, e.g., InvertedPendulum-v2 in MuJoCo. We directly test a deterministic policy with a 2-layer fully connected network on this environment for 10,000 episodes (we tested different weights; a finite set of visited states is very common across different weights), and count how many times each state is visited. After projecting the data into 2D space by t-SNE (Maaten and Hinton 2008), we obtain the state visitation density contour shown in Figure 1. We have two interesting findings: (1) the set of states visited by the policy is finite; (2) many states are visited multiple times, which justifies Assumption A.1.
Figure 1: State visitation density contour on InvertedPendulum-v2.
By the analysis of Assumption A.2, we get that for any policy and state, there exists a set of discount factors such that the gradient of the value function over the state exists, as illustrated in Corollary 1. Please refer to Appendix A for the proof.
Corollary 1 For any policy \mu_\theta and any initial state s, let (s_0, s_1, ..., s_k) denote the loop of states visited by the policy from that state, and let C(s, \mu_\theta, k) = \prod_{i=0}^{k} \nabla_{s_i} T(s_i, \mu_\theta(s_i)). The gradient of the value function over the state, \nabla_s V^{\mu_\theta}(s), exists if \gamma^{k+1} \max\{||C(s, \mu_\theta, k)||_\infty, ||C(s, \mu_\theta, k)||_1\} < 1.
In Theorem 2, we show that the deterministic value gradients exist and obtain their closed form, based on the analysis in Theorem 1. Please refer to Appendix B for the proof.
Theorem 2 (Deterministic Value Gradient Theorem) For any policy \mu_\theta and MDP with deterministic state transitions,
if assumptions A.1 and A.2 hold, the value gradients exist, and \u2207\u03b8V \u00b5\u03b8(s) = X s\u2032\u2208B(s,\u03b8) \u03c1\u00b5\u03b8(s, s\u2032)\u2207\u03b8\u00b5\u03b8(s\u2032)(\u2207a\u2032r(s\u2032, a\u2032)+ \u03b3\u2207a\u2032T(s\u2032, a\u2032)\u2207s\u2032\u2032 V \u00b5\u03b8(s \u2032\u2032)|s\u2032\u2032=T (s\u2032,a\u2032)), where a\u2032 is the action the policy takes at state s\u2032, \u03c1\u00b5\u03b8(s, s\u2032) is the discounted state distribution starting from the state s and the policy, and is de\ufb01ned as \u03c1\u00b5\u03b8(s, s\u2032) = P\u221e t=1 \u03b3t\u22121I(s, s\u2032, t, \u00b5\u03b8). Deterministic value gradient algorithm The value gradient methods estimate the gradient of value function recursively (Fairbank and Alonso 2012): \u2207\u03b8V \u00b5\u03b8(s) =\u2207\u03b8r(s, \u00b5\u03b8(s)) + \u03b3\u2207\u03b8T(s, \u00b5\u03b8(s))\u2207s\u2032V \u00b5\u03b8(s\u2032) + \u03b3\u2207\u03b8V \u00b5\u03b8(s\u2032) (8) \u25bdsV \u00b5\u03b8(s) = \u25bds r(s, \u00b5\u03b8(s)) + \u03b3 \u25bds T(s, \u00b5\u03b8(s)) \u25bds\u2032 V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)). (9) In fact, there are two kinds of approaches for estimating the gradient of the value function over the state, i.e., in\ufb01nite and \ufb01nite. On the one hand, directly estimating the gradient of the value function over the state recursively by Eq. (9) for in\ufb01nite times is slow to converge. On the other hand, estimating the gradient by \ufb01nite horizon like traditional value gradient methods (Werbos 1990; Nguyen and Widrow 1990; Heess et al. 2015) may cause large bias of the gradient. We set out to estimate the action-value function denoted by Qw(s, a) with parameter w, and replace \u2207sV \u00b5\u03b8(s) by \u2207sQw(s, \u00b5\u03b8(s)) in Eq. (8). In this way, we can directly obtain a 1-step estimator of the value gradients, G1(\u00b5\u03b8, s) =\u2207\u03b8r(s, \u00b5\u03b8(s)) + \u03b3\u2207\u03b8T(s, \u00b5\u03b8(s)) \u2207s1Qw(s1, \u00b5\u03b8(s1)) + \u03b3G1(\u00b5\u03b8, s1), (10) where s1 is the next state of s, which can be generalized to k(k \u22652) rollout steps. Let si denote the state visited by the policy at the i-th step starting form the initial state s0, g(s, t, \u00b5\u03b8, T) = Qt\u22121 i=1 \u2207siT(si, \u00b5\u03b8(si)). We choose to rollout k \u22121 steps to get rewards, then replace \u2207skV \u00b5\u03b8(sk) by \u2207skQw(sk, \u00b5\u03b8(sk)) in Eq. (9), and we get Lk(\u00b5\u03b8, s, r, T) = k\u22121 X t=1 \u03b3t\u22121g(s, t, \u00b5\u03b8, T)\u2207str(st, \u00b5\u03b8(st)) + \u03b3k\u22121g(s, k, \u00b5\u03b8, T)\u2207skQw(sk, \u00b5\u03b8(sk)). Replacing \u2207s\u2032V \u00b5\u03b8(s\u2032) with Lk(\u00b5\u03b8, s, r, T) in Eq. (8), we get a k-step estimator of the value gradients: Gk(\u00b5\u03b8, s) =\u2207\u03b8r(s, \u00b5\u03b8(s)) + \u03b3\u2207\u03b8T(s, \u00b5\u03b8(s) Lk(\u00b5\u03b8, s, r, T) + \u03b3Gk(\u00b5\u03b8, s1). (11) It is easy to see that Gk(\u00b5\u03b8, s) and G1(\u00b5\u03b8, s) are the same if we have the true reward and transition functions, which is generally not the case as we need to learn the model in practical environments. Let Dk(\u00b5\u03b8, s, T \u2032, r\u2032) denote the value gradient at the sampled state s with k rollout steps, on learned transition function T \u2032 and reward function r\u2032, which is de\ufb01ned as: Dk(\u00b5\u03b8, s, T \u2032, r\u2032) =\u2207\u03b8r\u2032(s, \u00b5\u03b8(s)) + \u03b3\u2207\u03b8T \u2032(s, \u00b5\u03b8(s)) Lk(\u00b5\u03b8, s, r\u2032, T \u2032). 
(12)
Based on Eq. (12), we propose the deterministic value gradients with infinite horizon; the procedure is shown in Algorithm 1: given n samples (s_j, a_j, r_j, s_{j+1}), for each choice of k, we use \frac{1}{n} \sum_j D_k(\mu_\theta, s_j, T', r') to update the current policy. We use sample-based methods to estimate the deterministic value gradients. For each state in the trajectory, we take the analytic gradients through the learned model. As the model is not given, we choose to predict the reward function and the transition function. We use experience replay so as to compare with the DDPG algorithm fairly. Different choices of the number of rollout steps trade off between variance and bias: more steps mean lower variance of the value gradients but larger bias due to the accumulated model error.
Algorithm 1 The DVG(k) algorithm
1: Initialize the reward network r', transition network T', critic network Q, actor network \mu_\theta, target networks Q', \mu'_\theta and experience replay buffer B
2: for episode = 0, ..., N-1 do
3:   for t = 1, ..., T do
4:     Select an action according to the current policy and exploration noise
5:     Execute action a_t, observe reward r_t and new state s_{t+1}, and store transition (s_t, a_t, r_t, s_{t+1}) in B
6:     Sample a random minibatch of n transitions from B
7:     Update the critic Q by minimizing the TD error: \frac{1}{n} \sum_{j=1}^{n} (r_j + \gamma Q'(s_{j+1}, \mu_\theta(s_{j+1})) - Q(s_j, a_j))^2
8:     Update the reward network r' and the transition network T' on the batch by minimizing the square loss
9:     Estimate the value gradients by \frac{1}{n} \sum_j D_k(\mu_\theta, s_j, T', r') and perform a gradient update on the policy
10:    Update the target networks by \theta^{Q'} = \tau \theta^Q + (1-\tau)\theta^{Q'}; \theta^{\mu'} = \tau \theta^\mu + (1-\tau)\theta^{\mu'}
11:  end for
12: end for
The difference between infinite and finite horizon
In this section, we discuss the advantage of our proposed DVG algorithm over its finite-horizon counterpart and validate the effect on a continuous control task. The estimator of deterministic value gradients with finite horizon, DVGF, is defined as (Fairbank and Alonso 2012):
F_k(\mu_\theta, s) = \nabla_\theta r(s, \mu_\theta(s)) + \gamma \nabla_\theta T(s, \mu_\theta(s)) \sum_{t=1}^{k-1} \gamma^{t-1} g(s, t, \mu_\theta, T) \nabla_{s_t} r(s_t, \mu_\theta(s_t)) + \gamma F_k(\mu_\theta, s_1).
Note that F_k(\mu_\theta, s) does not take rewards after the k-th step into consideration. Therefore, given n samples \{(s_j, a_j, r_j, s_{j+1})\}, DVGF uses the sample mean of D'_k(\mu_\theta, s, T', r') to update the policy, where D'_k(\mu_\theta, s, T', r') is defined as:
D'_k(\mu_\theta, s, T', r') = \nabla_\theta r'(s, \mu_\theta(s)) + \gamma \nabla_\theta T'(s, \mu_\theta(s)) \sum_{t=1}^{k-1} \gamma^{t-1} g(s, t, \mu_\theta, T') \nabla_{s'_t} r(s'_t, \mu_\theta(s'_t)).
We then test the two approaches on the environment HumanoidStandup-v2, where we choose the parameter k to be 2 (for the choice of k, we tested DVGF with rollout steps ranging from 1 to 5 and chose the best-performing value for a fair comparison). As shown in Figure 2, DVG significantly outperforms DVGF, which validates our claim that considering only a finite horizon fails to achieve the same performance as the infinite-horizon estimator.
Figure 2: Comparisons of DVG and DVGF.
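The policy update of Algorithm 1 accumulates the immediate reward plus k-1 model-predicted rewards and then bootstraps with the critic at state s_k, backpropagating through the learned reward and transition networks. The PyTorch sketch below is an illustrative rendering under our own architecture and naming assumptions (it is not the authors' implementation, and it differentiates the whole k-step unrolled objective rather than accumulating the per-state recursion of D_k):

```python
import torch
import torch.nn as nn

# Illustrative k-step model-based actor update in the spirit of DVG(k): roll the
# actor through a learned transition model T' and reward model r' for k steps of
# reward, bootstrap with the critic Q at s_k, and ascend the unrolled objective.
# Network sizes, names and the toy minibatch below are assumptions.

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

state_dim, action_dim, gamma, k = 3, 2, 0.99, 3
actor      = mlp(state_dim, action_dim)               # mu_theta(s)
critic     = mlp(state_dim + action_dim, 1)           # Q_w(s, a)
reward_net = mlp(state_dim + action_dim, 1)           # learned r'(s, a)
trans_net  = mlp(state_dim + action_dim, state_dim)   # learned T'(s, a)
actor_opt  = torch.optim.Adam(actor.parameters(), lr=1e-3)

def k_step_objective(s):
    total = torch.zeros(s.shape[0], 1)
    for t in range(k):
        a = actor(s)
        sa = torch.cat([s, a], dim=-1)
        total = total + (gamma ** t) * reward_net(sa)
        s = trans_net(sa)                 # gradient flows through the learned model
    a_k = actor(s)
    total = total + (gamma ** k) * critic(torch.cat([s, a_k], dim=-1))
    return total.mean()

states = torch.randn(32, state_dim)       # stand-in for a replay minibatch
actor_opt.zero_grad()
(-k_step_objective(states)).backward()    # ascend the k-step value estimate
actor_opt.step()
```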
Figure 3: Comparisons of DVG with DDPG.
Connection and comparison of DVG and DDPG
By the proof of the DPG theorem in (Silver et al. 2014), Eq. (8) can be rewritten as
\nabla_\theta V^{\mu_\theta}(s) = \nabla_\theta \mu_\theta(s) \nabla_a Q^{\mu_\theta}(s, a) + \gamma \nabla_\theta V^{\mu_\theta}(s_1).   (13)
The DDPG algorithm uses the gradient of the Q-function estimator over the action, \nabla_a Q^w(s, a), to estimate \nabla_a Q^{\mu_\theta}(s, a), i.e., G_0(\mu_\theta, s) = \nabla_\theta \mu_\theta(s) \nabla_a Q^w(s, a) + \gamma G_0(\mu_\theta, s_1). DDPG is a model-free algorithm that does not predict the reward or the transition, and can be viewed as the DVG(0) algorithm. We compare the DVG algorithm with different rollout steps k against DDPG on a continuous control task in MuJoCo, Hopper-v2. From Figure 3, we see that DVG with any choice of the number of rollout steps is more sample efficient than DDPG, which validates the power of model-based techniques. DVG(1) outperforms DDPG and DVG with other numbers of rollout steps in terms of performance, as it trades off well between the bias and the variance of the value gradients. Note that with a larger number of steps, DVG(5) is not stable due to the propagated model error.
The DVPG Algorithm
As discussed before, the model-based DVG algorithm is more sample efficient than the model-free DDPG algorithm. However, it suffers from model bias, which results in performance loss. In this section, we consider ensembling these different gradient estimators for better performance. Motivated by the idea of the TD(\lambda) algorithm (Sutton and Barto 2018), which ensembles the TD(k) errors with a geometrically decaying weight \lambda, we propose a temporal-difference method to ensemble DVG with varying rollout steps and the model-free deterministic policy gradients. We define the temporal-difference deterministic value gradients as G_{\lambda,t}(\mu_\theta, s) = (1 - \lambda) \sum_{k=0}^{t} \lambda^k G_k(\mu_\theta, s), where t denotes the maximal number of rollout steps by the learned model. For the gradient update rule, we also apply sample-based methods: given n samples \{(s_j, a_j, r_j, s_{j+1})\}, we use
\frac{1}{n} \sum_j \Big( (1 - \lambda) \nabla_\theta \mu_\theta(s_j) \nabla_a Q^w(s_j, a) + (1 - \lambda) \sum_{k=1}^{t} \lambda^k D_k(\mu_\theta, s_j, T', r') \Big)   (14)
to update the policy. Based on this ensembled deterministic value-policy gradient, we propose the deterministic value-policy gradient algorithm, shown in Algorithm 2 (the only difference between the DVG(k) algorithm and the DVPG algorithm is the update rule of the policy).
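Eq. (14) amounts to a (1 - \lambda)\lambda^k-weighted mixture of the model-free term (k = 0) and the k-step model-based terms, which is then ascended with respect to the actor parameters. A minimal sketch, where objective_k is a hypothetical stand-in for the per-k value estimate (for example, the unrolled objective from the previous sketch, with k = 0 meaning the critic value Q_w(s, \mu_\theta(s)) alone):

```python
# Sketch of the TD(lambda)-style mixture in Eq. (14). `objective_k` is an
# assumed helper returning the per-k objective estimate; the dummy values in
# the usage example are purely for illustration.

def mixed_objective(objective_k, lam=0.1, t=4):
    total = (1.0 - lam) * objective_k(0)                     # model-free term
    for k in range(1, t + 1):
        total += (1.0 - lam) * (lam ** k) * objective_k(k)   # model-based terms
    return total

if __name__ == "__main__":
    dummy = lambda k: 100.0 - 2.0 * k     # dummy per-k estimates
    print(mixed_objective(dummy, lam=0.1, t=2))
```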
Algorithm 2 The DVPG algorithm
1: Initialize the weight \lambda and the maximal number of rollout steps t
2: Initialize the reward network r', transition network T', critic network Q, actor network \mu_\theta, target networks Q', \mu'_\theta and experience replay buffer B
3: for episode = 0, ..., N-1 do
4:   for t = 1, ..., T do
5:     Select an action according to the current policy and exploration noise
6:     Execute action a_t, observe reward r_t and new state s_{t+1}, and store transition (s_t, a_t, r_t, s_{t+1}) in B
7:     Sample a random minibatch of n transitions from B
8:     Update the critic Q by minimizing the TD error: \frac{1}{n} \sum_{j=1}^{n} (r_j + \gamma Q'(s_{j+1}, \mu_\theta(s_{j+1})) - Q(s_j, a_j))^2
9:     Update the reward network r' and the transition network T' on the batch by minimizing the square loss
10:    Estimate the value gradients by Eq. (14), and perform a gradient update on the policy
11:    Update the target networks by \theta^{Q'} = \tau \theta^Q + (1-\tau)\theta^{Q'}; \theta^{\mu'} = \tau \theta^\mu + (1-\tau)\theta^{\mu'}
12:  end for
13: end for
Experimental Results
We design a series of experiments to evaluate DVG and DVPG. We investigate the following aspects: (1) What is the effect of the discount factor on DVG? (2) How sensitive is DVPG to the hyper-parameters? (3) How does DVPG compare with state-of-the-art methods? We evaluate DVPG on a number of continuous control benchmark tasks in OpenAI Gym based on the MuJoCo (Todorov, Erez, and Tassa 2012) simulator. The implementation details are given in Appendix C. We compare DVPG with DDPG, DVG, DDPG with imagination rollouts (DDPG(model)), and SVG with 1-step rollout and experience replay (SVG(1)) in the text. We also compare DVPG with methods using stochastic policies, e.g., ACKTR and TRPO, in Appendix D. We plot the average episode reward over 5 different random seeds against the number of real samples, and the shaded region represents the 75% confidence interval. We use the same hyperparameters for the actor and critic networks in all algorithms. The prediction models of DVPG, DVG and DDPG(model) are the same.
The effect of discount factors on DVG
From Eq. (9), we get that \nabla_s V^{\mu_\theta}(s) is equivalent to an infinite sum of gradient vectors. To study the effect of the discount factor on DVG, we train the algorithm with 2 rollout steps and different values of the discount factor on the environment InvertedPendulum-v2. As shown in Figure 5, 0.95 performs the best in terms of rewards and stability, 0.85 and 0.99 perform comparably, and 0.8 and 0.6 are inferior to the other values. This is because the computation of the gradient of the value function over the state may converge slowly if the discount factor is close to 1, while a smaller value of \gamma may enable better convergence of \nabla_s V^{\mu_\theta}(s). However, the sum of rewards discounted by a too-small \gamma is too myopic and fails to perform well. Here, 0.95 trades off well between stability and performance, matching the expectation that there exists an optimal intermediate value of the discount factor.
Figure 5: The effect of discount factors.
Ablation study of DVPG
We evaluate the effect of the bootstrapping weight on DVPG with values ranging from 0.1 to 0.9, where the number of rollout steps is set to 4. From Figure 6, we see that the performance of DVPG decreases as \lambda increases, with 0.1 performing the best in terms of both sample efficiency and performance. Thus, we set the weight to 0.1 in all experiments.
Figure 6: The weight of bootstrapping.
We also evaluate the effect of the number of rollout steps, ranging from 1 to 5. Results in Figure 7 show that DVPG with any of these rollout lengths succeeds in learning a good policy, with 1 rollout step performing the best. Indeed, the number of rollout steps trades off between the model error and the variance. There is an optimal number of rollout steps for each environment, and it is the only parameter we tune. To summarize, 1 rollout step works best on Humanoid-v2, Swimmer-v2 and HalfCheetah-v2, while 2 rollout steps work best on HumanoidStandup-v2, Hopper-v2 and Ant-v2. For fair comparisons, we choose the same number of rollout steps for both the DVG and the DVPG algorithm.
Figure 7: The number of rollout steps.
Figure 4: Performance comparisons on environments from the MuJoCo simulator.
Performance comparisons
In this section we compare DVPG with the model-free baseline DDPG, and with model-based baselines including DVG, DDPG(model) and SVG(1), on several continuous control tasks in MuJoCo. As shown in Figure 4, there are two classes of comparisons. Firstly, we compare DVPG with DDPG and DVG to validate the effect of the temporal difference technique that ensembles model-based and model-free deterministic value gradients. The DVG algorithm is the most sample efficient of these algorithms in the environments HumanoidStandup-v2 and Hopper-v2. In terms of sample efficiency, DVPG outperforms DDPG as it trades off between the model-based deterministic value gradients and the model-free deterministic policy gradients. By the end of training, DVPG outperforms the other two algorithms significantly, which demonstrates the power of the temporal difference technique. In the other four environments, DVPG outperforms the other algorithms in terms of both sample efficiency and performance. The performance of DVG and DDPG on Swimmer-v2 and Ant-v2 is comparable, while DVG performs poorly on HalfCheetah-v2 and Humanoid-v2 due to the model error. Secondly, we compare DVPG with SVG(1) and DDPG with imagination rollouts.
Results show that the DVPG algorithm signi\ufb01cantly outperforms these two model-based algorithms in terms of sample ef\ufb01ciency and performance, especially in environments where other model-based algorithms do not get better performance than the model-free DDPG algorithm. For the performance of the SVG(1) algorithm, it fails to learn good policies in Ant-v2, which is also reported in (Kurutach et al. 2018)." + }, + { + "url": "http://arxiv.org/abs/1906.06639v1", + "title": "Reinforcement Learning Driven Heuristic Optimization", + "abstract": "Heuristic algorithms such as simulated annealing, Concorde, and METIS are\neffective and widely used approaches to find solutions to combinatorial\noptimization problems. However, they are limited by the high sample complexity\nrequired to reach a reasonable solution from a cold-start. In this paper, we\nintroduce a novel framework to generate better initial solutions for heuristic\nalgorithms using reinforcement learning (RL), named RLHO. We augment the\nability of heuristic algorithms to greedily improve upon an existing initial\nsolution generated by RL, and demonstrate novel results where RL is able to\nleverage the performance of heuristics as a learning signal to generate better\ninitialization.\n We apply this framework to Proximal Policy Optimization (PPO) and Simulated\nAnnealing (SA). We conduct a series of experiments on the well-known\nNP-complete bin packing problem, and show that the RLHO method outperforms our\nbaselines. We show that on the bin packing problem, RL can learn to help\nheuristics perform even better, allowing us to combine the best parts of both\napproaches.", + "authors": "Qingpeng Cai, Will Hang, Azalia Mirhoseini, George Tucker, Jingtao Wang, Wei Wei", + "published": "2019-06-16", + "updated": "2019-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "INTRODUCTION Combinatorial optimization [15] aims to find the optimal solution with the minimum cost from a finite set of candidates to discrete problems such as the bin packing problem, the traveling salesman problem, or integer programming. Combinatorial optimization has seen broad applicability in fields ranging from telecommunications network design, to task scheduling, to transportation systems planning. As many of these combinatorial optimization problems are NP-complete, optimal solutions cannot be tractably found [3]. Heuristic algorithms such as simulated annealing (SA) [1, 11, 14] are designed to search for the optimal solution by randomly perturbing candidate solutions and accepting those that satisfy some greedy criterion such as Metropolis-Hastings. Heuristics are widely used in combinatorial optimization problems such as Concorde for the traveling salesman problem, or METIS for graph partitioning [2, 5]. Some heuristic algorithms like SA are theoretically guaranteed to find the optimal solution to a problem given a low enough temperature and enough perturbations [4]. However, the framework for heuristic algorithms begins the solution search from a randomly initialized candidate solution. For example, in the bin packing problem, the initial solution fed into SA would be a random assignment of objects to bins, which would then be repeatedly perturbed until convergence. Starting hill climbing from a cold start is time-consuming and limits the applicability of heuristic algorithms on practical problems. 
Reinforcement learning (RL) has been proposed as a technique to yield efficient solutions to combinatorial optimization problems by first learning a policy, and then using it to generate a solution to the problem. RL has seen interesting applications in real world combinatorial optimization problems [8, 16]. However, RL lacks the theoretical guarantees of algorithms like SA, which use a hillclimbing approach and are less susceptible to problems like policy collapse. By setting the greedy criterion to only accept better solutions, SA can achieve monotonically better performance, whereas RL cannot. Thus, it is best to generate an initial solution using RL and continuously improve this solution using heuristic algorithms like SA. Furthermore, it is advantageous for RL to learn how to provide an optimal initialization to SA to maximize the performance of both techniques in tandem. In this paper, we address these two points by introducing the Reinforcement Learning Driven Heuristic Optimization Framework (RLHO), shown in Figure 1. There are two components in this arXiv:1906.06639v1 [cs.LG] 16 Jun 2019 \fDRL4KDD \u201919, August 5, 2019, Anchorage, AK, USA Q. Cai et al. Figure 1: The RLHO framework. framework: the RL agent and the heuristic optimizer (HO). The RL agent generates solutions that act as initialization for HO, and HO searches for better solutions starting from the solution generated by RL. After HO finishes executing (upon convergence or after a set number of search steps), it returns the found solution and the reward to the RL agent. Our learning process is an alternating loop of (1) generating initial solutions with RL and then (2) searching for better solutions with HO. To the RL agent, HO is part of the environment. We apply RLHO to the bin packing problem where the RL agent is modeled using Proximal Policy Optimization (PPO) [13] and HO is simulated annealing (SA). We demonstrate that not only does combining PPO and SA yield superior performance to PPO alone, but also that PPO is actually able to learn to generate better initialization for SA. By observing the end performance of SA on a problem, PPO can generate inputs to SA that improve the performance of SA itself. In summary, our contributions in this paper are as follows: \u2022 We demonstrate a novel approach to combinatorial optimization where reinforcement learning and heuristic algorithms are combined to yield superior results to reinforcement learning alone on a combinatorial optimization problem. \u2022 We demonstrate that we can train reinforcement learning to enable heuristic algorithms to achieve superior performance than when they are decoupled on a combinatorial optimization problem. 1.1 Related Work Reinforcement learning and evolutionary algorithms achieve competitive performance on MuJoCo tasks and Atari games [12]. The idea of applying evolutionary algorithms to reinforcement learning [9] has been widely studied. [6] proposes a framework to apply evolutionary strategies to selectively mutate a population of reinforcement learning policies. [7, 10] use a gradient method to enhance evolution. Our work is different from the above as we apply deep reinforcement learning to generate better initializations for heuristic algorithms. The heuristic part in the RLHO framework only changes the solution, rather than the parameters of the policy. To our knowledge, our work is the first that does this. 2 COMBINING PPO AND SA 2.1 Preliminary Discussion What is the best way to combine an RL agent and a heuristic algorithm? 
A first approach is to allow an RL agent to generate an initial solution to a combinatorial optimization problem, then execute a heuristic algorithm to refine this initial solution until convergence, and then train the RL policy with the rewards obtained from the performance of the heuristic algorithm. This would delineate one episode. However, on large problems, heuristics take a long time to converge. Thus, in our approach, we allow the heuristic algorithm to run for a only limited number of steps in one episode. We now introduce the RLHO algorithm. 2.2 The RLHO Algorithm Our approach is a two-stage process as detailed in Algorithm 1: at the start of each episode, first run RL for x steps to generate an initial solution sx. Then, run pure HO for y steps starting from sx. Finally we update RL with the cost of the final solution. We repeat this process with a fresh start every time. Our action space is designed as perturbing the currently available solution. In our bin packing problem discussed in more detail in Section 3, the agent is first presented with a randomly initialized assignment s0 of items to bins. The environment around the bin packing problem will then present the agent with an item i. The agent then needs to decide which other item j to swap locations with item i based on the current state. For the design of the reward function, we define the intermediate reward as the difference between the cost of the previous solution and the cost of the current solution, as the goal is to minimize cost. When the agent\u2019s action space consists of perturbations, the MDP for the combinatorial optimization problem results in an infinite horizon. We are not privileged with V (sterm) = 0 that would normally denote the terminal state of the MDP. The agent is free to continue perturbing the state forever, and thus, V (sterm) is undefined. However, our agents are trained with a finite number of steps x, so V (sx) would normally need to be estimated with a baseline such as a value function. The value function is a poor estimator because it does not accurately estimate the additional expected performance of the agent in the limit of time, because we simply don\u2019t possess such data. To address this, a novelty in our approach is to obtain a better estimate for V (sx) using the performance of HO. The additional optimization provided by HO gives us an additional training signal to RL as to how RL actions contribute to the future return provided by HO. Therefore, RL can be trained by two signals in RLHO: (1) the intermediate reward at each RL step, and (2) the discounted future reward provided by HO conditioned on the initialization provided by RL. This approach provides RL with a training signal to generate better initialization for HO. V (st ) = x\u22121 \u00d5 k=t \u03b3 krk + \u221e \u00d5 k=x \u03b3 krk = x\u22121 \u00d5 k=t \u03b3 krk + V (sx). (1) \fReinforcement Learning Driven Heuristic Optimization DRL4KDD \u201919, August 5, 2019, Anchorage, AK, USA As shown in Equation (1), we can replace the infinite horizon term with a stationary, tractable value V (sx). We obtain V (sx) by running pure HO for y steps starting from sx, and then taking the difference between the cost of sx and the cost of the final solution sx+y as an estimate for the value of V (sx). Algorithm 1 The RLHO algorithm Initialize the replay buffer B and the solution randomly Initialize the number of RL steps x and the number of SA steps y in one episode for iteration = 1, 2, ... 
do
  Roll out the RL policy for x steps, store the transitions in B, and obtain the initial solution s_x from RL
  Run HO on s_x for y steps to obtain s_{x+y}
  Compute the new reward r_n as the difference between the costs of s_x and s_{x+y}
  Train RL using V(s_x)
  Reset the solution and the hyperparameters of HO
end for
Algorithm 2 Simulated Annealing
  Initialize the temperature T = t_m, the maximal number of SA steps in one path y, q = 0, and a = -\ln(t_m / t_0)
  Obtain the PPO solution s_x
  for t = 1, 2, ..., y do
    Perturb the current solution s_{x+t} randomly to get s'_{x+t}
    if cost(s'_{x+t}) > cost(s_{x+t}) then
      Reject s'_{x+t} with probability p = 1 - e^{-(c(s') - c(s))/T}
    else
      s_{x+t+1} = s'_{x+t}
    end if
    T = t_m e^{a q / y}
    q = q + 1
  end for
3 PERFORMANCE EVALUATION
We validate our methods on the bin packing problem. In this section we first introduce the bin packing problem, and then discuss the performance gain obtained when combining the RL part (PPO) and the heuristic optimizer (SA) in our RLHO framework. The details of SA are shown in Algorithm 2.
3.1 The Bin Packing Problem
Bin packing is a classical combinatorial optimization problem where the objective is to use the minimum number of bins to pack items of different sizes, with the constraint that the sum of the sizes of the items in one bin is bounded by the size of the bin. Let n denote the number of bins and the number of items, and let v denote the vector of sizes of all items. Let x_{ij} be the 0/1 matrix that represents one assignment of items to bins (a packing), i.e., x_{ij} = 1 means item j is put in bin i. Given a packing x, let c(x) denote the cost, the number of bins used in this solution, i.e., c(x) = \sum_{i=1}^{n} \sum_{j=1}^{n} x_{ij}.
3.2 Learning to Generate Better Initializations
We evaluate the ability of RLHO to generate better initializations for heuristic algorithms. In this set of experiments, during training, we allow RLHO to generate an initialization using RL for x timesteps, and then run HO for y timesteps. After N training episodes, we take the initialization generated by the RL step of RLHO and use it to initialize a HO run that continues until convergence. Table 1 and Table 2 report the average number of bins used by the best solution found during training with x = 128, y = 5000, t_m = 5 and x = 128, y = 50000, t_m = 5 respectively, over 5 independent trials. We also report results where random perturbations (Random) are used instead of RL to generate the initial solutions as a baseline. We collect results for 10000 iterations of running RLHO and Random until convergence. Our results show that RLHO does learn better initializations for HO than Random, and the performance gap increases with larger problem sizes. The training signal provided by the HO performance, used to augment the value function, indeed helps RLHO make heuristic algorithms perform better. Most interestingly, when the RL part of RLHO is trained using signal from SA that is run for only 5000 steps, the initialization it generates is still effective for SA that runs until convergence, e.g., millions of timesteps.
n     RLHO    Random, then HO
100   59      69
200   128.4   141
500   347     361
1000  714     734
Table 1: Average cost of the best solution found by each algorithm with y = 5000 HO steps.
n     RLHO    Random, then HO
100   59      69
200   127     141
500   344.4   359
1000  711     731
Table 2: Average cost of the best solution found by each algorithm with y = 50000 HO steps.
3.3 Having RL and HO Work Together
Now we extend our experimental evaluation to answer the following question: can HO help RL train better?
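Before turning to that question, the following minimal Python sketch ties together Algorithm 2's annealing loop on a bin packing cost and the HO-based estimate of V(s_x) from Eq. (1). The bin capacity, the perturbation move, the overflow penalty and all constants are our own illustrative assumptions, not the paper's implementation:

```python
import math
import random

# Illustrative simulated annealing on a bin packing assignment, plus the
# HO-based value estimate used by RLHO: V(s_x) ~= cost(s_x) - cost(s_{x+y}).

def cost(assignment, sizes, bin_cap=1.0):
    # Number of bins actually used, plus a penalty for overfull bins so that
    # infeasible perturbations are discouraged (the penalty is an assumption).
    load = {}
    for item, b in enumerate(assignment):
        load[b] = load.get(b, 0.0) + sizes[item]
    overflow = sum(max(0.0, l - bin_cap) for l in load.values())
    return len(load) + 10.0 * overflow

def simulated_annealing(assignment, sizes, y=5000, t_m=5.0, t_0=0.1):
    a = -math.log(t_m / t_0)              # so T decays from t_m towards t_0
    cur = list(assignment)
    for q in range(y):
        T = t_m * math.exp(a * q / y)
        cand = list(cur)
        cand[random.randrange(len(cand))] = random.randrange(len(sizes))  # random move
        delta = cost(cand, sizes) - cost(cur, sizes)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            cur = cand                    # accept better, or worse with prob e^{-delta/T}
    return cur

if __name__ == "__main__":
    sizes = [random.uniform(0.1, 0.7) for _ in range(50)]
    s_x = [random.randrange(len(sizes)) for _ in sizes]   # stand-in for the RL solution
    s_xy = simulated_annealing(s_x, sizes, y=2000)
    v_sx_estimate = cost(s_x, sizes) - cost(s_xy, sizes)  # HO-based estimate of V(s_x)
    print(cost(s_x, sizes), cost(s_xy, sizes), v_sx_estimate)
```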
Can running HO after an RL training step help RL explore better states? We adjust RLHO to perform alternating optimization on a combinatorial optimization problem. RL will generate a solution, which will then be optimized by HO. RL will then be trained with additional signal from HO. The same solution will then be passed back to RL for continuous optimization. This differs from our previous approach because we do not reset the solution on each episode. The greedy nature of HO will perform hill climbing, allowing RL to see more optimal states throughout training. \fDRL4KDD \u201919, August 5, 2019, Anchorage, AK, USA Q. Cai et al. n RL RLHO 50 22 22 100 50 50 200 102 101 500 283 266 1000 613 601 Table 3: Average cost of the best solution found by each algorithm with x = 128,y = 1000 n RL RLHO 50 22 22 100 50 50 200 102 101 500 283 265 1000 613 572 Table 4: Average cost of the best solution found by each algorithm with x = 128,y = 5000 Figure 2: Training performance on 500bins. We run the two algorithms side-by-side to evaluate our approach. Table 3 and Table 4 show the average number of used bins of the best solution (over 5 independent runs) searched by both algorithms during training. For RL, we simply keep running PPO without any SA. In RLHO, PPO learns from SA. We choose to set x = 128, y = 1000 and the initial temperature of SA to be 5. We compare the performance of two algorithms in terms of the number of steps the RL policy performs, with the hyperparameters of the RL part of both approaches kept constant. We also evaluate our approaches on different sizes of the bin packing problem. We report the results until 2000 iterations run for the alternating optimization. The convergence curves of all approaches are shown in Figure 2. We conclude that the pure RL algorithm is more sample efficient but performs worse as the RL algorithm has no additional outlet for exploration. RLHO achieves better performance because it adopts the HO to perform better exploration. 4" + }, + { + "url": "http://arxiv.org/abs/1807.03708v3", + "title": "Deterministic Policy Gradients With General State Transitions", + "abstract": "We study a reinforcement learning setting, where the state transition\nfunction is a convex combination of a stochastic continuous function and a\ndeterministic function. Such a setting generalizes the widely-studied\nstochastic state transition setting, namely the setting of deterministic policy\ngradient (DPG).\n We firstly give a simple example to illustrate that the deterministic policy\ngradient may be infinite under deterministic state transitions, and introduce a\ntheoretical technique to prove the existence of the policy gradient in this\ngeneralized setting. Using this technique, we prove that the deterministic\npolicy gradient indeed exists for a certain set of discount factors, and\nfurther prove two conditions that guarantee the existence for all discount\nfactors. We then derive a closed form of the policy gradient whenever exists.\nFurthermore, to overcome the challenge of high sample complexity of DPG in this\nsetting, we propose the Generalized Deterministic Policy Gradient (GDPG)\nalgorithm. The main innovation of the algorithm is a new method of applying\nmodel-based techniques to the model-free algorithm, the deep deterministic\npolicy gradient algorithm (DDPG). 
GDPG optimize the long-term rewards of the\nmodel-based augmented MDP subject to a constraint that the long-rewards of the\nMDP is less than the original one.\n We finally conduct extensive experiments comparing GDPG with state-of-the-art\nmethods and the direct model-based extension method of DDPG on several standard\ncontinuous control benchmarks. Results demonstrate that GDPG substantially\noutperforms DDPG, the model-based extension of DDPG and other baselines in\nterms of both convergence and long-term rewards in most environments.", + "authors": "Qingpeng Cai, Ling Pan, Pingzhong Tang", + "published": "2018-07-10", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Reinforcement learning has been one of the most successful computational tools for solving complex decision making problems [29], with extensive applications in both discrete tasks such as general game playing [17, 18] and continuous control tasks such as robotics [10]. In contrast to the traditional value-based methods [31, 35, 17, 18] that are meant for solving problems with discrete and low-dimensional action space, policy gradient methods [23, 30] aim to tackle these limitations, by optimizing a parameterized policy via estimating the gradient of the expected long-term reward, using gradient ascent. [26] propose the deterministic policy gradient (DPG) algorithm that aims to \ufb01nd an optimal deterministic policy, which lowers the variance when estimating the policy gradient [39], compared to stochastic policies [30]. It is shown that the algorithm can be applied to domains with continuous and high-dimensional action spaces. [15] further propose the deep deterministic policy gradient (DDPG) algorithm, by combining deep neural networks to improve convergence. It is recognized that DDPG has been successful in robotic control tasks such as locomotion [27] and manipulation [7]. Despite the e\ufb00ectiveness of DDPG in these tasks, it is limited for problems with stochastic continuous state transitions. Here, the continuity means that the probability density of the next state is continuous in the action taken at the current state. In fact, many important control problems, such as MountainCar, Pendulum [2], and autonomous driving, include both stochastic and deterministic state transitions. For example, in most autonomous driving tasks, state transitions are deterministic under normal driving conditions, yet are still stochastic due to sudden disturbances. As a result, DDPG, which assumes stochastic state transitions, does not generalize well in practice. Tasks with deterministic state transitions pose serious technical challenges due to the discontinuity of the transition function, where the gradient of the transition probability density function over actions does not always exist. [37, 5, 9] consider the gradient of the value function over states and the deterministic policy gradient in the setting of deterministic state transitions, but the existence of the value function\u2019s gradient over states and the deterministic policy gradient is not studied. Lacking of theoretical guarantees for the existence of the gradient limits the applicability of deterministic policy gradient algorithms. As a result, an important question for policy gradient based methods is, Does the gradient exist in settings with deterministic state transitions? If yes, can one solve the problem e\ufb03ciently by its gradient? 
In this paper, we study a generalized setting, where the state transition is a convex combination of a stochastic continuous transition function and a deterministic discontinuous transition function. As a result, it includes both the stochastic case and the deterministic case as special cases. Our setting is arguably closer to the mixed control problems mentioned above than those stochastic settings. We \ufb01rst give a simple example to illustrate that the deterministic policy gradient may be in\ufb01nite under deterministic state transitions. Then we introduce a new theoretical technique to prove the existence of the policy gradient in this 2 \fgeneralized setting. Using this technique, we prove that the deterministic policy gradient indeed exists for a certain set of discount factors. We further present two conditions that guarantee the existence for all discount factors. We then derive a closed form of the policy gradient. However, the estimation of the deterministic policy gradient is much more challenging due to the sample complexity of model-free algorithms [25] and complex state transitions. As for the state transition, the di\ufb03culty of the computation of the gradient mainly comes from the dependency of the policy gradient and the gradient of the value function over the state. Such computation may involve in\ufb01nite times of sampling the whole state space. Thus applying DDPG directly in a general setting even with low-dimensional state space may incur high sample complexity. To overcome these challenges, we approximate the original Markov decision process (MDP) by a model-based augmented MDP with the same reward function and the transition function being the expectation of original MDP. By the form of the deterministic policy gradient with deterministic state transitions, we get that the model-based augmented MDP has a simple structure, which allows for more e\ufb03cient computations and faster convergence than model-free methods [14, 13, 36]. Unfortunately, applying this mode-based technique directly does not help to solve environments with large continuous state space as it is hard to represent the transition dynamics [34]. This leads to an essential question: How to apply the model-based technique to deterministic policy gradient algorithms e\ufb00ectively? We then consider a program that maximizes the long-term rewards of the augmented MDP with the constraint that its long-term rewards is less than that of the original MDP. The intuition is that we choose a objective with less sample complexity to optimize, and it serves as a lower bound of the original objective. Note that the improvement of the new objective, guarantees the improvement of the original objective. As the constrainted problem is hard to optimize, we choose to optimize the Lagrangian dual function of the program, which can be interpreted as a weighted objective between the long-term reward of the original MDP and the augmented MDP. Based on this dual function, we propose the Generalized Deterministic Policy Gradient (GDPG) algorithm. The algorithm updates the policy by stochastic gradient ascent with the gradient of the weighted objective over the parameter of the policy, and the weight maintains a trade-o\ufb00 between fast convergence and performance. To sum up, the main contribution of the paper is as follows: \u2022 First of all, we provide a theoretical guarantee for the existence of the gradient in settings with deterministic state transitions. 
\u2022 Secondly, we propose a novel policy gradient algorithm, called Generalized Deterministic Policy Gradient (GDPG), which combines the model-free and model-based methods. GDPG reduces sample complexity, enables faster convergence and performance improvement. \u2022 Finally, we conduct extensive experiments on standard benchmarks com3 \fparing with state-of-the-art stochastic policy gradient methods including TRPO [25], ACKTR [38] and the direct model-based extension of DDPG, called MDPG. Results con\ufb01rm that GDPG signi\ufb01cantly outperforms other algorithms in terms of both convergence and performance. 2 Preliminaries A Markov decision process (MDP) is a tuple (S, A, p, r, \u03b3, p1), where S and A denote the set of states and actions respectively. Let p(st+1|st, at) represent the conditional density from state st to state st+1 under action at, which satis\ufb01es the Markov property, i.e., p(st+1|s0, a0, ..., st, at) = p(st+1|st, at). The density of the initial state distribution is denoted by p0(s). At each time step t, the agent interacts with the environment with a deterministic policy \u00b5\u03b8, which is parameterized by \u03b8. We use r(st, at) to represent the corresponding immediate reward, contributing to the discounted overall rewards from state s0 following \u00b5\u03b8, denoted by J(\u00b5\u03b8) = E[P\u221e k=0 \u03b3kr(ak, sk)|\u00b5\u03b8, s0]. Here, \u03b3 \u2208[0, 1] is the discount factor. The Q-function of state st and action at under policy \u00b5\u03b8 is denoted by Q\u00b5\u03b8(st, at) = E[P\u221e k=t \u03b3k\u2212tr(ak, sk)|\u00b5\u03b8, st, at]. The corresponding value function of state st under policy \u00b5\u03b8 is denoted by V \u00b5\u03b8(st) = Q\u00b5\u03b8(st, \u00b5\u03b8(st)). We denote the density at state s \u2032 after t time steps from state s by p(s, s \u2032, t, \u00b5\u03b8) following the policy \u00b5\u03b8. We denote the (improper) discounted state distribution by \u03c1\u00b5\u03b8(s \u2032) = R S P\u221e t=1 \u03b3t\u22121p0(s)p(s, s \u2032, t, \u00b5\u03b8)ds. The agent aims to \ufb01nd an optimal policy that maximizes J(\u00b5\u03b8). 2.1 Why is the DPG theorem not applicable for deterministic state transitions? An important property of the DPG algorithms is the Deterministic Policy Gradient Theorem [26], \u25bd\u03b8J(\u00b5\u03b8) = R S \u03c1\u00b5\u03b8(s)(\u25bd\u03b8\u00b5\u03b8(s) \u25bda Q\u00b5\u03b8(s, a)|a=\u00b5\u03b8(s))ds, which proves the existence of the deterministic policy gradient. The DPG theorem holds under the regular condition presented by [26], i.e., p(s \u2032|s, a) is continuous in a. The arguments in the proof of the DPG theorem do not work without this condition1. Now we give a simple example to show the policy gradient is in\ufb01nite for some discount factors. Example 2.1. Given a MDP with two dimensional state spaces and action spaces, whose transition and reward functions are de\ufb01ned by T(s, a) = (2s1 + 2s2 + a1, 2s1 + 2s2 + a2)T , r(s, a) = \u2212sT a. Consider a deterministic policy \u00b5\u03b8(s) = \u03b8, then \u25bdsT(s, \u00b5\u03b8(s)) = \u00142 2 2 2 \u0015 , and \u25bdsV \u00b5\u03b8(s) = \u2212(I + P\u221e n=1 \u03b3n \u001422n\u22121 22n\u22121 22n\u22121 22n\u22121 \u0015 )\u03b8. Then \u25bdsV \u00b5\u03b8(s) converges if and only if \u03b3 < 1/4. 1Readers can refer to http://proceedings.mlr.press/v32/silver14-supp.pdf 4 \fOne must need a new technique to determine the existence of the gradient of J(\u00b5\u03b8) over \u03b8 in irregular cases. 
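Example 2.1 is easy to verify numerically: the series behind \nabla_s V^{\mu_\theta}(s) is \sum_n (\gamma J)^n with state Jacobian J = [[2, 2], [2, 2]], whose spectral radius is 4, so it converges exactly when \gamma < 1/4. An illustrative check (not from the paper):

```python
import numpy as np

# Numerical check of Example 2.1: the series sum_n (gamma * J)^n converges
# iff gamma * rho(J) < 1, i.e. gamma < 1/4 since the spectral radius rho(J) = 4.

J = np.array([[2.0, 2.0], [2.0, 2.0]])
rho = max(abs(np.linalg.eigvals(J)))           # spectral radius (= 4 here)

def partial_sum_norm(gamma, n_terms):
    total, term = np.eye(2), np.eye(2)
    for _ in range(n_terms):
        term = gamma * term @ J
        total = total + term
    return np.linalg.norm(total)

for gamma in (0.2, 0.25, 0.3):
    verdict = "series converges" if gamma * rho < 1 else "series diverges"
    print(gamma, verdict,
          "; partial sums after 100 / 1000 terms:",
          partial_sum_norm(gamma, 100), partial_sum_norm(gamma, 1000))
```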
3 Deterministic State Transitions In this section we study a simple setting where the state transition is a deterministic function. As discussed before, the DPG theorem does not apply to this setting. To analyze the gradient of a deterministic policy, we let T(s, a) denote the next state given the current state s and the action a. Without loss of generality, we assume that T(s, a), \u25bdaT(s, a), \u25bdsT(s, a), r(s, a), \u25bdsr(s, a), \u25bdar(s, a) are all continuous in s and a and bounded. By de\ufb01nition, \u25bd\u03b8V \u00b5\u03b8(s) = \u25bd\u03b8(r(s, \u00b5\u03b8(s)) + \u03b3V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s))). Thus the key of the existence of the gradient of V \u00b5\u03b8(s) over \u03b8 is the existence of \u25bdsV \u00b5\u03b8(s). Now we give a su\ufb03cient condition of the existence of \u25bdsV \u00b5\u03b8(s). Lemma 1. For any policy \u00b5\u03b8, let n denote the dimension of the state, and c be the maximum of the max norm of all Jacobain matrices, maxs || \u25bds T(s, \u00b5\u03b8(s))||max, for any discount factor \u03b3 in [0, 1 nc), \u25bdsV \u00b5\u03b8(s) exists. Proof. By de\ufb01nition, V \u00b5\u03b8(s) = Q\u00b5\u03b8(s, \u00b5\u03b8(s)) = r(s, \u00b5\u03b8(s))+\u03b3V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s))). Then \u25bdsV \u00b5\u03b8(s) = \u25bdsr(s, \u00b5\u03b8(s)) + \u03b3 \u25bds T(s, \u00b5\u03b8(s)) \u25bds\u2032 V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)). (1) By unrolling (1) with in\ufb01nite steps, we get \u25bdsV \u00b5\u03b8(s) = \u221e X t=0 Z S \u03b3tg(s, t, \u00b5\u03b8)I(s, s \u2032, t, \u00b5\u03b8) \u25bds\u2032 r(s \u2032, \u00b5\u03b8(s \u2032))ds \u2032, where I(s, s \u2032, t, \u00b5\u03b8) is an indicator function that indicates whether s \u2032 is obtained after t steps from the state s following the policy \u00b5\u03b8. Here, g(s, t, \u00b5\u03b8) = Qt\u22121 i=0 \u25bdsiT(si, \u00b5\u03b8(si)), where s0 = s and si is the state after i steps following policy \u00b5\u03b8. The state transitions and policies are both deterministic. We now prove that for any \u00b5\u03b8, s, s \u2032 and \u03b3 \u2208[0, 1 nc), A(s) = P\u221e t=0 \u03b3tg(s, t, \u00b5\u03b8)I(s, s \u2032, t, \u00b5\u03b8) converges. We describe the proof sketch here and the complete proof is referred to Appendix A. For each state s\u2032, which is reached from the initial state s with in\ufb01nite steps, there are three cases due to deterministic state transitions: never visited, visited once, and visited in\ufb01nite times. It is easy to see that A(s) converges in the \ufb01rst two cases. In the last case, as A(s) is the sum of the power of the matrix \u03b3t2g(s, t2, \u00b5\u03b8), then we get a upper bound of \u03b3 such that A(s) converges. By Lebesgue\u2019s Dominated Convergence Theorem [24], we exchange the order of the limit and the integration, \u25bdsV \u00b5\u03b8(s) = R S P\u221e t=0 \u03b3tg(s, t, \u00b5\u03b8)I(s, s \u2032, t, \u00b5\u03b8) \u25bds\u2032 r(s \u2032, \u00b5\u03b8(s \u2032))ds \u2032. By the continuity of T, r and \u00b5\u03b8, the gradient of V \u00b5\u03b8(s) over s exists. 5 \fNote that the condition proposed in Lemma 1 is indeed necessary in Example 2.1, where n = 2, c = 2 and the gradient exists if and only if the discount factor \u03b3 < 1/4. By Lemma 1, we show that the deterministic policy gradient exists and obtain the closed form. The proof is referred to Appendix B. Theorem 1. 
For any policy \u00b5\u03b8 and MDP with deterministic state transitions, for any discount factor \u03b3 in [0, 1 nc), the policy gradient exists, and \u25bd\u03b8J(\u00b5\u03b8) = Z S \u03c1\u00b5\u03b8(s)\u25bd\u03b8\u00b5\u03b8(s)(\u25bdar(s, a)|a=\u00b5\u03b8(s)+\u03b3\u25bdaT(s, a)|a=\u00b5\u03b8(s)\u25bds\u2032 V \u00b5\u03b8(s \u2032)|s\u2032=T (s,a))ds. 4 Deterministic Policy Gradients with general state transitions In this section we consider a general setting where the state transition for any state s and any action a is a convex combination of a deterministic transition function T(s, a) with probability f(s, a), and a stochastic probability transition density function p(s \u2032|s, a) with probability 1 \u2212f(s, a). Note that this setting generalizes that of DPG. Here, T also satis\ufb01es the same condition as in Section 3. We assume that f(s, a), \u25bdsf(s, a) and \u25bdaf(s, a) are continuous and bounded. By the similar technique to the setting with deterministic state transitions, we get the main theorem which proves the existence of the gradient of J(\u00b5\u03b8) over \u03b8 for a set of discount factors and proposes two conditions such that for all discount factors the policy gradient exists: Condition A.1: maxs f(s, \u00b5\u03b8(s)) \u2264 1 nc. Condition A.2: For any sequence of states (s0, ..., st\u22121) and any timestep t, the eigenvalues of Qt\u22121 i=0 f(si, \u00b5\u03b8(si)) \u25bdsi T(si, \u00b5\u03b8(si)) are in [\u22121, 1]. Theorem 2. The GDPG Theorem For any MDP in the general cases and any policy \u00b5\u03b8, for any discount factor \u03b3 in [0, 1 nc maxs f(s,\u00b5\u03b8(s))), the policy gradient exists. If the MDP satis\ufb01es Condition A.1 or Condition A.2, for any discount factor and any policy \u00b5\u03b8, the policy gradient exists. The form is \u25bd\u03b8J(\u00b5\u03b8) = Z S \u03c1\u00b5\u03b8(s)(\u25bd\u03b8\u00b5\u03b8(s) \u25bda r(s, a)|a=\u00b5\u03b8(s) + \u03b3f(s, \u00b5\u03b8(s)) \u25bd\u03b8 \u00b5\u03b8(s) \u25bda T(s, a)|a=\u00b5\u03b8(s) \u25bds\u2032 V \u00b5\u03b8(s \u2032)|s\u2032=T (s,a) + \u03b3(1 \u2212f(s, \u00b5\u03b8(s))) Z S \u25bd\u03b8\u00b5\u03b8(s) \u25bda p(s \u2032|s, a)|a=\u00b5\u03b8(s) V \u00b5\u03b8(s \u2032)ds \u2032 + \u03b3 \u25bd\u03b8 f(s, \u00b5\u03b8(s))V \u00b5\u03b8(s \u2032)|s\u2032=T (s,\u00b5\u03b8(s)) \u2212\u03b3 \u25bd\u03b8 f(s, \u00b5\u03b8(s)) Z S p(s \u2032|s, a)|a=\u00b5\u03b8(s)V \u00b5\u03b8(s \u2032)ds \u2032)ds = Z S \u03c1\u00b5\u03b8(s)(\u25bd\u03b8\u00b5\u03b8(s) \u25bda Q\u00b5\u03b8(s, a)|a=\u00b5\u03b8(s))ds. (2) The proof is referred to Appendix C. It is interesting to note that the form is the same as the form of gradient of DPG. In fact, the assumption of the condition 6 \fA.1 and A.2 would become weaker when the probability of the deterministic state transition becomes lower. In the extreme case, i.e., the stochastic case, where the probability is zero, the policy gradient exists without any assumption as in [26]. In fact, the form of the policy gradient is the same in settings of the deterministic state transition and the general case. However, given an estimator of the value function, the complexity of calculating the gradient of these two cases is di\ufb00erent. By comparing (1) with (2), we get that it is the more computationally expensive for the gradient of the general case than the deterministic case. 
The gradient of deterministic state transitions only involves \u25bd\u03b8r(s, \u00b5\u03b8(s)) and \u25bds\u2032 V \u00b5\u03b8(s \u2032), while the gradient of the general case introduces additional integration on the state space. 4.1 Direct Model-based Extension of DDPG As discussed before, even for the environment with low-dimensional state space, the sample complexity of DDPG is signi\ufb01cantly high for the general case, which may limit the capability of the model-free algorithms due to slow convergence. Thus, we consider a model-based augmented MDP M\u2217of the original MDP M with the same reward function, while the state transition function is de\ufb01ned as the expectation of the distribution of the next state of the original MDP, i.e., T\u2217(s, a) = E[s \u2032|s, a]. M\u2217is easier to solve as the state transition of M\u2217is deterministic. Note that if the environment is indeed deterministic, M\u2217= M. Now we de\ufb01ne a direct model-based extension of DDPG, called MDPG. MDPG directly uses the gradient of the long-term rewards of M\u2217with policy \u00b5\u03b8 to improve the policy instead of the deterministic policy gradient, i.e., \u25bd\u03b8J\u2217(\u00b5\u03b8) = \u00b5\u03b8(s) \u25bda Q\u00b5\u03b8 \u2217(s, a), where Q\u00b5\u03b8 \u2217(s, a) denotes the action value function of the augmented MDP. However, it is hard to represent the transition dynamics in complex environments, and it may cause the policy to move to a wrong direction as shown in Section 5.2 on problems with large state space. 4.2 The GDPG Algorithm On the one hand, only solving the model-based augmented MDP may be too myopic. On the other hand, the model-free algorithm su\ufb00ers from high sample complexity as mentioned. Consequently, we consider a program that maximizes the long-term rewards of the augmented MDP, with the constraint being that the long-term rewards of the augmented MDP is less than the original MDP, i.e., max \u03b8 J\u2217(\u00b5\u03b8), s.t.J\u2217(\u00b5\u03b8) \u2264J(\u00b5\u03b8). (3) It is easy to check that the optimum of this program is less than max\u03b8 J(\u00b5\u03b8), and it serves as a lower bound of the long-term rewards of the original MDP. The intuition of this program is to optimize a model-based objective which is easier to solve and the improvement of the new objective guarantees the improvement of the original objective. 7 \fAlgorithm 1: GDPG algorithm 1 Initialize a positive weight \u03b1 2 Initialize the transition network T(s, a|\u03b8T ) with random weights \u03b8T 3 Initialize the original and augmented critic networks Q(s, a|\u03b8Q), Q\u2217(s, a|\u03b8Q\u2217) with random weights \u03b8Q, \u03b8Q\u2217 4 Initialize the actor network \u00b5(s|\u03b8\u00b5) with random weights \u03b8\u00b5 5 Initialize the target networks Q \u2032, Q\u2217 \u2032 and \u00b5 \u2032 with weights \u03b8Q \u2032 = \u03b8Q, \u03b8Q \u2032 \u2217= \u03b8Q\u2217, \u03b8\u00b5 \u2032 = \u03b8\u00b5 6 Initialize Experience Replay bu\ufb00er B 7 for episode= 0, ..., N \u22121 do 8 Initialize a random process N for action exploration 9 Receive initial observation state s0. 
10   for $t = 1, \dots, T$ do
11     Select action $a_t = \mu(s_t|\theta^\mu) + \mathcal{N}_t$ according to the current policy and exploration noise
12     Execute action $a_t$, observe reward $r_t$ and new state $s_{t+1}$, and store transition $(s_t, a_t, r_t, s_{t+1})$ in $B$
13     Sample a random minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from $B$
14     Set $y_i = r_i + \gamma Q'(s_{i+1}, \mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})$
15     Update the critic $Q$ by minimizing the loss: $L_1 = \frac{1}{N}\sum_i \big(y_i - Q(s_i, a_i|\theta^Q)\big)^2$
16     Set $y'_i = r_i + \gamma Q_*'(T(s_i, a_i|\theta^T), \mu'(T(s_i, a_i|\theta^T)|\theta^{\mu'})|\theta^{Q_*'})$
17     Update the augmented critic $Q_*$ by minimizing the loss: $L_2 = \frac{1}{N}\sum_i \big(y'_i - Q_*(s_i, a_i|\theta^{Q_*})\big)^2$
18     Update the transition $T$ by minimizing the loss: $L_3 = \frac{1}{N}\sum_i \big(s_{i+1} - T(s_i, a_i|\theta^T)\big)^2$
19     Update the actor by the sampled policy gradient and target networks: $\nabla_{\theta^\mu} J(\theta^\mu) = \frac{1}{N}\sum_i \Big((1-\alpha)\,\nabla_{\theta^\mu}\mu(s|\theta^\mu)\,\nabla_a Q_*(s,a|\theta^{Q_*}) + \alpha\,\nabla_{\theta^\mu}\mu(s|\theta^\mu)\,\nabla_a Q(s,a|\theta^Q)\Big)$
20-21  Update the target networks: $\theta^{Q'} = \tau\theta^Q + (1-\tau)\theta^{Q'}$; $\theta^{Q_*'} = \tau\theta^{Q_*} + (1-\tau)\theta^{Q_*'}$; $\theta^{\mu'} = \tau\theta^\mu + (1-\tau)\theta^{\mu'}$
If the value function is convex in states (the value functions of Linear Quadratic Regulation [1] and the Linearly-solvable Markov Decision Process [32] are indeed convex), the long-term rewards of $M_*$ with policy $\mu_\theta$, $J_*(\mu_\theta)$, are no larger than the long-term rewards of $M$, as illustrated in Theorem 3. That is, the program turns into a problem that maximizes the model-based objective. The proof is referred to Appendix D.
Theorem 3. If $V^{\mu_\theta}(s)$ is convex in $s$, then $J(\mu_\theta) \ge J_*(\mu_\theta)$.
In the other case, where the value function is not convex, it is hard to solve the program directly. Therefore, we choose to optimize its Lagrangian dual program,
$$\min_{\alpha \ge 0}\ \max_\theta\ J_*(\mu_\theta) + \alpha\big(J(\mu_\theta) - J_*(\mu_\theta)\big). \quad (4)$$
Then, for each choice of $\alpha$, we use the gradient of $J_*(\mu_\theta) + \alpha(J(\mu_\theta) - J_*(\mu_\theta))$, i.e.,
$$(1-\alpha)\,\nabla_\theta\mu_\theta(s)\,\nabla_a Q_*^{\mu_\theta}(s,a) + \alpha\,\nabla_\theta\mu_\theta(s)\,\nabla_a Q^{\mu_\theta}(s,a), \quad (5)$$
which generalizes the gradient of the DDPG algorithm, to improve the policy by stochastic gradient ascent, where $Q_*^{\mu_\theta}(s,a)$ denotes the action-value function of the augmented MDP. However, the estimation of the value function of the augmented MDP relies on the expectation of the distribution of the next state, which is unknown. To overcome this challenge, we follow the idea of [22], where neural networks are applied to predict the next state. Different from [22], where model predictive control is taken as the control policy, we apply the estimators of state transitions to estimate the action-value function of the augmented MDP. We now propose the Generalized Deterministic Policy Gradient (GDPG) algorithm, as shown in Algorithm 1. Apart from training the actor and the critic, we also train a transition network $T$ which predicts the next state.
5 Experiments
In this section, we design a series of experiments to evaluate GDPG. We aim to investigate the following questions: (1) How does the value of $\alpha$ affect the performance on a toy problem with general state transitions? (2) How does GDPG compare with DDPG, MDPG, and other state-of-the-art methods on continuous control benchmarks?
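As a concrete reading of line 19 of Algorithm 1 above, the following PyTorch-style sketch (our assumption, not the authors' implementation) performs one actor update by ascending the weighted combination of the two critics, which corresponds to the sampled gradient in (5):
```python
import torch

def gdpg_actor_step(actor, critic, critic_aug, actor_opt, states, alpha):
    """One GDPG actor update: ascend (1 - alpha) * Q_* + alpha * Q at a = mu(s).

    actor, critic and critic_aug are torch.nn.Module objects; critic(s, a) and
    critic_aug(s, a) return per-sample Q estimates. actor_opt holds only the
    actor's parameters, so the critics are left untouched by this step.
    Network architectures are deliberately left unspecified here."""
    actions = actor(states)
    # Maximizing this objective follows the sampled gradient
    # (1 - alpha) * grad_theta mu * grad_a Q_*  +  alpha * grad_theta mu * grad_a Q.
    objective = ((1.0 - alpha) * critic_aug(states, actions)
                 + alpha * critic(states, actions)).mean()
    actor_opt.zero_grad()
    (-objective).backward()   # gradient ascent via minimizing the negative
    actor_opt.step()
```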
We first illustrate the influence of the weight α in a toy environment, ComplexPoint-v0, with general state transitions. Then we evaluate GDPG on a number of continuous control benchmark tasks in OpenAI Gym [2], including a classic control problem [21] and tasks in the Box2D and MuJoCo [33] simulators. The details of our benchmarks are referred to Appendix E. We compare GDPG with the following baselines: (a) DDPG, (b) MDPG, (c) TRPO, (d) ACKTR. For the experiments, we run each algorithm for 1M steps on each environment over 5 random seeds. Note that the configuration of GDPG is the same as that of DDPG except for the transition network. The full configuration is referred to Appendix E. We use the averaged return over the previous 100 episodes as the performance metric.
5.1 The ablation study of GDPG
To better understand the effect of α in the dual function, we evaluate GDPG with six different choices of the weight α = 0, 0.25, 0.5, 0.75, 1, 2 in ComplexPoint-v0. Figure 1(a) shows a snapshot of this environment, where the state is the coordinates of the agent in the 5D space and the feasible action set is $[-0.1, 0.1]^5$. The state transition is a convex combination of the deterministic transition $T(s,a) = s + a$ with probability $f(s,a)$, and the uniform distribution on $[-1,1]^5$ with probability $1 - f(s,a)$, where $f(s,a) = \|a\|_2^2 / 0.05$. The reward function is $r(s,a) = -\|s+a\|_2$, i.e., the (negative) distance between the agent and the origin. The task is terminated either when the agent enters the termination area or when the number of steps exceeds a threshold of 100 steps. Figure 1(b) shows the performance comparison, and Figures 1(c) and 1(d) correspond to its earlier stage and convergence stage, which illustrate convergence and performance more clearly. As shown, α = 1, which corresponds exactly to DDPG, results in poor performance and slow convergence. The slow convergence is attributable to the computational complexity of the gradient in this environment. For α = 0, the objective corresponds to optimizing only the augmented MDP, which performs better than DDPG as it efficiently reduces sample complexity. However, it is too myopic, as it solely focuses on the augmented MDP, which may deviate from the original objective and limit its performance. We observe that the best performance is achieved when α = 0.5. We can view the weighted objective as a convex combination of the model-free objective and the model-based objective when α ∈ [0, 1]; α trades off convergence against performance. A large α may introduce bias, while a small α may suffer from sample complexity. Note that the choice of 2 for the value of α achieves the worst performance. Recalling (5), the reason is that setting a value of α larger than 1 may lead the gradient of the policy in a totally opposite direction and induce large variance of the policy gradient.
5.2 Performance comparison with baselines on continuous control benchmarks
We now present and discuss the findings from our experiments on several continuous control tasks, all of which are standard benchmarks defined in OpenAI Gym [2]. Tasks range from low-dimensional to high-dimensional input spaces. For the baseline algorithms, we use the implementations from OpenAI Baselines [4].
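Referring back to the ComplexPoint-v0 description in Section 5.1 above, the toy environment can be sketched as follows; this is an illustrative reimplementation rather than the authors' code, and the termination radius and initial-state distribution are assumptions, since the text only states that a termination area exists:
```python
import numpy as np

class ComplexPoint:
    """Toy 5-D environment from Section 5.1: the next state is s + a with
    probability f(s, a) = ||a||_2^2 / 0.05 and uniform on [-1, 1]^5 otherwise;
    the reward is -||s + a||_2. Termination radius and initial state are assumed."""

    def __init__(self, seed=0, term_radius=0.05, max_steps=100):
        self.rng = np.random.default_rng(seed)
        self.term_radius, self.max_steps = term_radius, max_steps

    def reset(self):
        self.s, self.t = self.rng.uniform(-1.0, 1.0, size=5), 0
        return self.s

    def step(self, a):
        a = np.clip(a, -0.1, 0.1)                 # feasible action set [-0.1, 0.1]^5
        reward = -np.linalg.norm(self.s + a)
        if self.rng.random() < np.dot(a, a) / 0.05:
            self.s = self.s + a                   # deterministic branch T(s, a)
        else:
            self.s = self.rng.uniform(-1.0, 1.0, size=5)   # stochastic branch
        self.t += 1
        done = np.linalg.norm(self.s) < self.term_radius or self.t >= self.max_steps
        return self.s, reward, done
```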
Figure 1: Return/steps of training on algorithms. (a) The ComplexPoint environment. (b) Effect of α. (c) Earlier stage. (d) Convergence stage.
Figures 2, 3 and 4 show the sample mean and the standard deviation of the averaged returns in each environment. As shown in Figure 2, GDPG outperforms the other baselines in tasks with low-dimensional input spaces, including a classic continuous control task and a task simulated by Box2D. From Figures 3 and 4, we observe that GDPG outperforms the baselines on high-dimensional tasks simulated by MuJoCo by a large margin, especially in Swimmer-v2, HalfCheetah-v2 and Humanoid-v2. This demonstrates that GDPG combines the model-based augmented MDP and the original MDP efficiently. Note that MDPG, the direct model-based extension of DDPG, performs the worst in all environments except Swimmer-v2. This shows that the model-based technique cannot handle complex settings like MuJoCo, as it is hard to represent the transition dynamics.
Figure 2: Return/steps of training. (a) Pendulum-v0. (b) LunarLanderContinuous-v2.
Figure 3: Return/steps of training on environments from the MuJoCo simulator. (a) Swimmer-v2. (b) HalfCheetah-v2.
Figure 4: Return/steps of training on environments from the MuJoCo simulator. (a) HumanoidStandup-v2. (b) Humanoid-v2.
6 Related Work
Model-based algorithms have been widely studied [11, 16, 19, 20] in recent years. Iterative LQG [14] applies model-based methods and assumes a specific form of both the transition dynamics and the value function, while [28, 8, 12] generate synthetic samples with the learned model. Different from traditional model-based methods, we optimize the dual function that involves both the model-based augmented MDP and the original MDP. Perhaps the most closely related model-based approach to our work is PILCO [3], which learns the transition model by Gaussian processes. With the non-parametric transition model, [3] applies policy improvement on analytic policy gradients. However, this method does not scale well to nonlinear transition dynamics or high-dimensional state spaces. Different from [3], we do not rely on assumptions about the transition model.
7" + }, + { + "url": "http://arxiv.org/abs/1708.07607v3", + "title": "Reinforcement Mechanism Design for e-commerce", + "abstract": "We study the problem of allocating impressions to sellers in e-commerce\nwebsites, such as Amazon, eBay or Taobao, aiming to maximize the total revenue\ngenerated by the platform. We employ a general framework of reinforcement\nmechanism design, which uses deep reinforcement learning to design efficient\nalgorithms, taking the strategic behaviour of the sellers into account.\nSpecifically, we model the impression allocation problem as a Markov decision\nprocess, where the states encode the history of impressions, prices,\ntransactions and generated revenue and the actions are the possible impression\nallocations in each round. To tackle the problem of continuity and\nhigh-dimensionality of states and actions, we adopt the ideas of the DDPG\nalgorithm to design an actor-critic policy gradient algorithm which takes\nadvantage of the problem domain in order to achieve convergence and stability.\nWe evaluate our proposed algorithm, coined IA(GRU), by comparing it against\nDDPG, as well as several natural heuristics, under different rationality models\nfor the sellers - we assume that sellers follow well-known no-regret type\nstrategies which may vary in their degree of sophistication. We find that\nIA(GRU) outperforms all algorithms in terms of the total revenue.", + "authors": "Qingpeng Cai, Aris Filos-Ratsikas, Pingzhong Tang, Yiwei Zhang", + "published": "2017-08-25", + "updated": "2018-02-27", + "primary_cat": "cs.MA", + "cats": [ + "cs.MA", + "cs.AI" + ], + "main_content": "INTRODUCTION A fundamental problem that all e-commerce websites are faced with is to decide how to allocate the buyer impressions to the potential sellers. When a buyer searches a keyword such as \u201ciPhone 7 rose gold\u201d, the platform will return a ranking of different sellers providing an item that fits the keyword, with different prices and different historical sale records. The goal of the platform is to come up with algorithms that will allocate the impressions to the most appropriate sellers, eventually generating more revenue from the transactions. This setting can be modeled as a resource allocation problem over a sequence of rounds, where in each round, buyers arrive, the algorithm inputs the historical records of the sellers and their prices and outputs such an allocation of impressions. The sellers and the buyers carry out their transactions and the historical records are updated. In reality, most e-commerce websites employ a class of heuristic algorithms, such as collaborative filtering or content based filtering [34], many of which rank sellers in terms of \u201chistorical scores\u201d calculated based on the transaction history of the sellers with buyers of similar characteristics. However, this approach does not typically take into account the fact that sellers strategize with the choice of prices, as certain sub-optimal prices in one round might affect the record histories of sellers in subsequent rounds, yielding more revenue for them in the long run. Even worse, since the sellers are usually not aware of the algorithm in use, they might \u201cexplore\u201d with their pricing schemes, rendering the system uncontrollable at times. It seems natural that a more sophisticated approach that takes all these factors into account should be in place. 
In the presence of strategic or rational individuals, the field of mechanism design [29] has provided a concrete toolbox for managing or preventing the ill effects of selfish behaviour and achieving desirable objectives. Its main principle is the design of systems in such a way that the strategic behaviour of the participants will lead to outcomes that are aligned with the goals of the society, or the objectives of the designer. Cai et al. [10] tackle the problem of faking transactions and fraudulent seller behaviour in e-commerce using the tools from the field of mechanism design. A common denominator in most of the classical work in economics is that the participants have access to either full information or some distributional estimate of the preferences of others. However, in large and constantly evolving systems like e-commerce websites, the participants interact with the environment in various ways, and adjust their own strategies accordingly and dynamically [32]. In addition to that, their rationality levels are often bounded by either computational or financial constraints, or even cognitive limitations [35]. For the reasons mentioned above, a large recent body of work has advocated that other types of agent behaviour, based on learning and exploration, are perhaps more appropriate for such large-scale online problems encountered in reality [13, 18–21, 28, 32, 33]. In turn, this generates a requirement for new algorithmic techniques for solving those problems. Our approach is to use techniques from deep reinforcement learning for solving the problem of the impression allocation to sellers, given their selfish nature. In other words, given a rationality model for the sellers, we design reinforcement learning algorithms that take this model into account and solve the impression allocation problem efficiently. This general approach is called reinforcement mechanism design [11, 36, 40], and we can view our contribution in this paper as an instance of this framework. No-regret learning as agent rationality As mentioned earlier, the strong informational assumptions of classical mechanism design are arguably unrealistic in complex and dynamic systems, like diverse online marketplaces. Such repeated game formulations typically require that the participants know the values of their competitors (or that they can estimate them pretty accurately based on known prior distributions) and that they can compute their payoff-maximizing strategies over a long sequence of rounds. Such tasks are usually computationally burdensome and require strong cognitive assumptions, as the participants would have to reason about the future, against all possible choices of their opponents, and in a constantly evolving environment. Given this motivation, an alternative approach in the forefront of much of the recent literature in algorithmic mechanism design is to assume that the agents follow some type of no-regret strategies; the agent picks a probability mixture over actions at each round and based on the generated payoffs, it updates the probabilities accordingly, minimizing the long-term regret. This is more easily conceivable, since the agents only have to reason about their own strategies and their interaction with the environment, and there is a plethora of no-regret algorithms at their disposal.
Precisely the same argument has been made in several recent works [13, 21, 28, 32, 33] that study popular auction settings under the same rationality assumptions of no-regret, or similar types. In fact, there exist data from Microsoft's ad auctions which suggest that advertisers do use no-regret algorithms for their actions [41]. For a more detailed discussion on related rationality models, the reader is referred to [18]. The seller rationality model: To model the different sophistication levels of sellers, we consider four different models of rationality, based on well-established no-regret learning approaches. The first two, ε-Greedy [43] and ε-First, are known as semi-uniform methods, because they maintain a distinction between exploration and exploitation. The latter is often referred to as "A/B testing" and is widely used in practice [9, 12]. The other two approaches, UCB1 [2, 5] and Exp3 [5, 6], are more sophisticated algorithms that differ in their assumptions about the nature of the rewards, i.e. whether they follow unknown distributions or whether they are completely adversarial. Note that all of our rationality models employ algorithms for the multi-arm bandit setting, as in platforms like Taobao or eBay the impression allocation algorithms are unknown to the sellers and therefore they cannot calculate the payoffs of unused actions. The update of the weights of the strategies is based solely on the observed payoffs, which is often referred to as the bandit feedback setting [16]. We note here that while other related rationality models could be used, the goal is to choose a model that real sellers would conceivably use in practice. The semi-uniform algorithms are quite a bit simpler and model a lower degree of seller sophistication, whereas the other two choices correspond to sellers that perhaps put more effort and resources into optimizing their strategies; some examples of sophisticated optimization services that are being used by online agents are provided in [28]. Note that both UCB1 and Exp3 are very well known [9], and the latter is perhaps the most popular bandit-feedback implementation of the famous Hedge (or Multiplicative Weights Update) algorithm for no-regret learning in the fully informed feedback setting. The impression allocation problem We model the impression allocation problem as a Markov decision process (MDP) in which the information about the prices, past transactions, past allocations of impressions and generated revenue is stored in the states, and the actions correspond to all the different ways of allocating the impressions, with the rewards being the immediate revenue generated by each allocation. Given that the costs of the sellers (which depend on their production costs) are private information, it seems natural to employ reinforcement learning techniques for solving the MDP and obtain more sophisticated impression allocation algorithms than the heuristics that platforms currently employ. In our setting however, since we are allocating a very large number of impressions, both the state space and the action space are extremely large and high-dimensional, which renders traditional reinforcement learning techniques such as temporal difference learning [38] or, more specifically, Q-learning [14] not suitable for solving the MDP. In a highly influential paper, Mnih et al. [31] employed the use of deep neural networks as function approximators to estimate the action-value function.
The resulting algorithm, coined "Deep Q Network" (DQN), can handle large (or even continuous) state spaces but, crucially, it cannot be used for large or continuous action domains, as it relies on finding the action that maximizes the Q-function at each step. To handle the large action space, policy gradient methods have been proposed in the reinforcement learning literature, with actor-critic algorithms rising as prominent examples [7, 15, 39], where the critic estimates the Q-function by exploring, while the actor adjusts the parameters of the policy by stochastic gradient ascent. To handle the high-dimensionality of the action space, Silver et al. [37] designed a deterministic actor-critic algorithm, coined "Deterministic Policy Gradient" (DPG), which performs well in standard reinforcement-learning benchmarks such as mountain car, pendulum and 2D puddle world. As Lillicrap et al. [27] point out however, the algorithm falls short in large-scale problems, and for that reason they developed the "Deep-DPG" (DDPG) algorithm, which uses the main idea from [31] and combines the deterministic policy gradient approach of DPG with deep neural networks as function approximators. To improve convergence and stability, they employ previously known techniques such as batch normalization [23], target Q-networks [30], and experience replay [1, 22, 31]. The IA(GRU) algorithm: We draw inspiration from the DDPG algorithm to design a new actor-critic policy gradient algorithm for the impression allocation problem, which we refer to as the IA(GRU) algorithm. IA(GRU) takes advantage of the domain properties of the impression allocation problem to counteract the shortcomings of DDPG, which basically lie in its convergence when the number of sellers increases. The modifications of IA(GRU) to the actor and critic networks reduce the policy space to improve convergence and render the algorithm robust to settings with variable sellers, which may arrive and depart in each round, for which DDPG performs poorly. We evaluate IA(GRU) against DDPG as well as several natural heuristics similar to those usually employed by the online platforms and perform comparisons in terms of the total revenue generated. We show that IA(GRU) outperforms all the other algorithms for all four rationality models, as well as for a combined pool of sellers of different degrees of sophistication. 2 THE SETTING In the impression allocation problem of e-commerce websites, there are m sellers who compete for a unit of buyer impression. In each round, a buyer searches for a keyword and the platform returns a ranking of sellers who provide an item that matches the keyword; for simplicity, we will assume that all sellers provide identical items that match the keyword exactly. Each seller i has a private cost $c_i$ for the item, which can be interpreted as a production or a purchasing cost drawn from an i.i.d. distribution $F_s$. Typically, there are n slots (e.g. positions on a webpage) to be allocated, and we let $x_{ij}$ denote the probability (or the fraction of time) that seller i is allocated the impression at slot j. With each slot, there is an associated click-through rate $\alpha_j$ which captures the "clicking potential" of each slot, and is independent of the seller, as all items offered are identical. We let $q_i = \sum_{j=1}^{n} x_{ij}\alpha_j$ denote the probability that the buyer will click the item of seller i.
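For concreteness, the click probabilities defined above are a simple matrix-vector product; a minimal NumPy sketch with made-up numbers:
```python
import numpy as np

# x[i, j]: fraction of time seller i occupies slot j; alpha[j]: slot click-through rate.
x = np.array([[0.7, 0.2],
              [0.3, 0.8]])          # illustrative allocation for m = 2 sellers, n = 2 slots
alpha = np.array([0.6, 0.4])        # illustrative CTRs, summing to 1 as assumed in the text

q = x @ alpha                       # q[i] = sum_j x[i, j] * alpha[j]
assert np.all(x.sum(axis=0) <= 1.0 + 1e-9)   # each slot is allocated at most once
print(q, q.sum())                   # with sum_j alpha_j = 1, the q_i sum to at most 1
```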
Given this definition (and assuming that sellers can appear in multiple slots on each page), the usual feasibility constraints for allocations, i.e., that $0 \le x_{ij} \le 1$ for all $i$ and all $j$, and that $\sum_{i=1}^{m} x_{ij} \le 1$ for all $j$, can be alternatively written as: $q_i \ge 0$ for all $i$, $\sum_{i=1}^{m} q_i \le \sum_{j=1}^{n} \alpha_j$, and $\sum_{j=1}^{n} \alpha_j = 1$. That is, for any such allocation q, there is a feasible ranking x that realizes q (for ease of notation, we assume that the sum of click-through rates of all slots is 1), and therefore we can allocate the buyer impression to sellers directly instead of outputting a ranking over these items when a buyer searches a keyword. (Since the buyer impressions to be allocated are a huge number, we model them as a continuous unit to be fractionally allocated; even if we used a large integer number instead, the traditional approaches like DDPG fall short for the same reasons, and furthermore all of the performance guarantees of IA(GRU) extend to that case. As the purchasing behavior is determined by the valuation of buyers over the item, without loss of generality we could consider only one buyer at each round. The framework also extends to cases where we need to return similar but different items to a buyer, i.e., the algorithm outputs a ranking over these items; furthermore, our approach extends trivially to the case where sellers have multiple items.) Let $h_{it} = (v_{it}, p_{it}, n_{it}, \ell_{it})$ denote the record of seller i at round t, which is a tuple consisting of the following quantities: (1) $v_{it}$ is the expected fraction of impressions that seller i gets, (2) $p_{it}$ is the price that seller i sets, (3) $n_{it}$ is the expected amount of transactions that seller i makes, and (4) $\ell_{it}$ is the expected revenue that seller i makes at round t. Let $H_t = (h_{1t}, h_{2t}, \dots, h_{mt})$ denote the records of all sellers at round t, and let $H_{it} = (h_{i1}, h_{i2}, \dots, h_{it})$ denote the vector of records of seller i from round 1 to round t, which we will refer to as the records of the seller. At each round t + 1, seller i chooses a price $p_{i(t+1)}$ for its item and the algorithm allocates the buyer impression to sellers. MDP formulation: The setting can be defined as a Markov decision process (MDP) given by the following components: a continuous state space S, a continuous action space A, an initial state distribution with density $p_0(s_0)$, and a transition distribution of states with conditional density $p(s_{t+1}|s_t, a_t)$ satisfying the Markov property, i.e., $p(s_{t+1}|s_0, a_0, \dots, s_t, a_t) = p(s_{t+1}|s_t, a_t)$. Furthermore, there is an associated reward function $r : S \times A \to \mathbb{R}$ assigning payoffs to pairs of states and actions. Generally, a policy is a function $\pi$ that selects stochastic actions given a state, i.e., $\pi : S \to \mathcal{P}(A)$, where $\mathcal{P}(A)$ is the set of probability distributions on A. Let $R_t$ denote the discounted sum of rewards from the state $s_t$, i.e., $R_t(s_t) = \sum_{k=t}^{\infty} \gamma^{k-t} r(s_k, a_k)$, where $0 < \gamma < 1$. Given a policy and a state, the value function is defined to be the expected total discounted reward, i.e., $V^\pi(s) = \mathbb{E}[R_t(s_t)|s_t = s; \pi]$, and the action-value function is defined as $Q^\pi(s,a) = \mathbb{E}[R_t(s_t)|s_t = s, a_t = a; \pi]$. For our problem, a state $s_t$ of the MDP consists of the records of all sellers in the last T rounds, i.e., $s_t = (H_{t-T}, \dots, H_{t-1})$; that is, the state is a (T, m, 4) tensor, the allocation outcome of the round is the action, and the immediate reward is the expected total revenue generated in this round. The performance of an algorithm is defined as the average expected total revenue over a sequence of $T_0$ rounds.
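As a shape check only, the state described above can be assembled as follows; all numbers below are placeholders, not values from the paper:
```python
import numpy as np

T_window, m = 3, 4   # history length and number of sellers (placeholder sizes)

def seller_record(v, p, n, l):
    # h_it = (impressions, price, transactions, revenue) for one seller in one round.
    return np.array([v, p, n, l], dtype=np.float64)

# H_t: records of all m sellers at round t, shape (m, 4).
history = [np.stack([seller_record(1.0 / m, 0.5, 0.1, 0.05) for _ in range(m)])
           for _ in range(T_window)]

# s_t = (H_{t-T}, ..., H_{t-1}): a (T, m, 4) tensor, as in the MDP formulation above.
state = np.stack(history)
print(state.shape)   # (3, 4, 4)
```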
Buyer Behaviour: We model the behaviour of the buyer as being dependent on a valuation that comes from a distribution with cumulative distribution function $F_b$. Intuitively, this captures the fact that buyers may have different spending capabilities (captured by the distribution). Specifically, the probability that the buyer purchases item i is $n_{it} = (1 - F_b(p_{it})) \cdot v_{it}$; that is, the probability of purchasing is decided by the impression allocation and the price seller i sets. For simplicity and without loss of generality with respect to our framework, we assume that the buyer's valuation is drawn from U(0, 1), i.e. the uniform distribution over [0, 1]. Seller Rationality As we mentioned in the introduction, following a large body of recent literature, we will assume that the sellers employ no-regret type strategies for choosing their prices in the next round. Generally, a seller starts with a probability distribution over all the possible prices and, after each round, it observes the payoffs that these strategies generate and adjusts the probabilities accordingly. As we already explained earlier, it is most natural to assume strategies in the bandit feedback setting, where the seller does not observe the payoffs of strategies in the support of its strategy which were not actually used. The reason is that even if we assume that a seller can see the prices chosen in a round by its competitors, it typically does not have sufficient information about the allocation algorithm used by the platform to calculate the payoffs that other prices would have yielded. Therefore it is much more natural to assume that the seller updates its strategy based on the observed rewards, using a multi-arm bandit algorithm. More concretely, the payoff of a seller i that receives $v_{it}$ impressions in round t when using price $p_{ij}(t)$ is given by $u_{ij}(t) = n_{it}(p_{ij}(t) - c_i) = v_{it}(1 - F_b(p_{it}))(p_{ij}(t) - c_i)$. For consistency, we normalize the costs and the prices to lie in the unit interval [0, 1] and we discretize the price space to a "dense enough" grid (of size 1/K, for some large enough K). This discretization can either be enforced by the platform (e.g. the sellers are required to submit bids which are multiples of 0.05) or can be carried out by the sellers themselves in order to be able to employ the multi-arm bandit algorithms, which require the set of actions to be finite, and since small differences in prices are unlikely to make much difference in their payoffs. We consider the following possible strategies for the sellers, based on well-known bandit algorithms. ε-Greedy [43]: With probability ε, each seller selects a strategy uniformly at random and, with probability 1 − ε, the strategy with the best observed (empirical) mean payoff so far. The parameter ε denotes the degree of exploration of the seller, whereas 1 − ε is the degree of exploitation; here ε is drawn i.i.d. from the normal distribution N(0.1, 0.1/3). ε-First: For a horizon of T rounds, this strategy consists of an exploration phase first, over ε · T rounds, followed by an exploitation phase for the remaining period. In the exploration phase, the seller picks a strategy uniformly at random.
In the remaining rounds, the seller picks the strategy that maximizes the empirical mean of the observed rewards. For each seller, we set T = 200 and ε = 0.1. Exponential-weight Algorithm for Exploration and Exploitation (Exp3) [5, 6]: We use the definition of the algorithm from [6]. Let γ ∈ (0, 1] be a real number and initialize $w_i(1) = 1$ for $i = 1, \dots, K+1$ to be the initial weights of the possible prices (for ease of notation, we drop the subscript referring to a specific seller, as there is no ambiguity). In each round t, • For $i = 1, \dots, K+1$, let $\pi_i(t) = (1-\gamma)\frac{w_i(t)}{\sum_{j=1}^{K+1} w_j(t)} + \frac{\gamma}{K+1}$, where $w_i(t)$ is the weight of price $p_i$ in round t. • Select a price $p_j(t)$ according to the probability distribution defined by $\pi_1(t), \dots, \pi_{K+1}(t)$. • Receive payoff $u_j(t) \in [0,1]$. • For $\ell = 1, \dots, K+1$, let $\hat{u}_\ell(t) = u_\ell(t)/\pi_\ell(t)$ if $\ell = j$ and $\hat{u}_\ell(t) = 0$ otherwise, and set $w_\ell(t+1) = w_\ell(t)\,e^{\gamma \hat{u}_\ell(t)/(K+1)}$. We remark here that since the payoff of each seller in some round t actually takes values in [−1, 1], we scale the payoff to [0, 1] by applying the transformation f(u) = (u + 1)/2 to any payoff u. Upper Confidence Bound Algorithm (UCB1) [2, 4]: For each price $p_j \in \{0, 1/K, 2/K, \dots, 1\}$, initialize $x_j(1) = 0$. At the end of each round t, update $x_j(t)$ as $x_j(t) = x_j(t-1)/t + u_j(t)/t$ if j was chosen in this round t, and $x_j(t) = x_j(t-1)$ otherwise. For any round $t \in \{0, \dots, K\}$, the seller chooses a price $p_j$ that has not been used before in any of the previous rounds (breaking ties arbitrarily). For any round $t \ge K+1$, the seller chooses the price $p_j$ with the maximum weighted value, i.e., $p_j(t) \in \arg\max_{j}\left(x_j(t) + \frac{\log^2 t}{\sum_{\tau=1}^{t} I_{j\tau}}\right)$, where $I_{j\tau}$ is the indicator function, i.e., $I_{j\tau} = 1$ if $p_j$ was chosen in round $\tau$ and $I_{j\tau} = 0$ otherwise. ε-Greedy and ε-First are simple strategies that maintain a clear distinction between exploration and exploitation and belong to the class of semi-uniform strategies. Exp3 is the most widely used bandit version of perhaps the most popular no-regret algorithm for the full information setting, the Hedge (or Multiplicative Weights Update) algorithm [17], and it works in the adversarial bandit feedback model [6], where no distributional assumptions are made about the nature of the rewards. UCB1, as the name suggests, maintains a certain level of optimism towards less frequently played actions (given by the second part of the sum) and, together with this, it uses the empirical mean of observed actions so far to choose the action in the next round. The algorithm is best suited to scenarios where the rewards do follow some distribution which is, however, unknown to the seller. For a more detailed exposition of all these different algorithms, [9] provides a concise survey. The point made here is that these choices are quite sensible as they (i) constitute choices that a relatively sophisticated seller, perhaps with a research team at its disposal, could make, (ii) can model sellers with different degrees of sophistication or pricing philosophies, and (iii) are consistent with the recent literature on algorithmic mechanism design, in terms of modeling agent rationality in complex dynamic environments.
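A minimal NumPy sketch of the Exp3 pricing strategy exactly as specified above; this is our own illustrative implementation, and the default values of K and gamma as well as the assumption that payoffs arrive already rescaled to [0, 1] are ours:
```python
import numpy as np

class Exp3Seller:
    """Exp3 pricing as described above: K + 1 discretized prices on {0, 1/K, ..., 1},
    weights w, mixing parameter gamma; payoffs are assumed already rescaled to [0, 1]."""

    def __init__(self, K=20, gamma=0.1, seed=0):
        self.K, self.gamma = K, gamma
        self.w = np.ones(K + 1)
        self.rng = np.random.default_rng(seed)

    def probs(self):
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / (self.K + 1)

    def choose_price(self):
        self.j = self.rng.choice(self.K + 1, p=self.probs())
        return self.j / self.K          # price on the grid {0, 1/K, ..., 1}

    def update(self, payoff):
        # Importance-weighted estimate for the chosen arm only, then exponential update.
        u_hat = payoff / self.probs()[self.j]
        self.w[self.j] *= np.exp(self.gamma * u_hat / (self.K + 1))
```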
3 ALLOCATION ALGORITHMS In this section, we will briefly describe the algorithms that we will be comparing IA(GRU) against: two natural heuristics similar to those employed by platforms for the impression allocation problem, as well as the DDPG algorithm of Lillicrap et al. [27]. Heuristic Allocation Algorithms As the strategies of the sellers are unknown to the platform, and the only information available is the sellers' historical records, the platform can only use that information for the allocation. Note that these heuristics do not take the rationality of the sellers into account when deciding on the allocation of impressions. The first algorithm is a simple greedy algorithm, which allocates the impressions proportionally to the revenue contribution. Greedy Myopic Algorithm: At round 0, the algorithm allocates a 1/m-fraction of the buyer impression to each seller. At any other round τ + 1 (for τ ≥ 0), the algorithm allocates a fraction of $\ell_{i\tau} / \sum_{j=1}^{m} \ell_{j\tau}$ of the buyer impression to each seller, i.e. proportionally to the contribution of each seller to the total revenue of the last round. The second algorithm is an algorithm for the contextual multi-arm bandit problem, proposed by [26], based on the principles of the family of upper confidence bound algorithms (UCB1 is an algorithm in this family). The algorithm is among the state-of-the-art solutions for recommender systems [9] and is an example of contextual bandit approaches, which are widely applied to such settings [3, 8, 25, 26]. To prevent any confusion, we clarify here that while we also used bandit algorithms for the seller rationality models, the approach here is fundamentally different, as the Linear UCB Algorithm is used for the allocation of impressions, not the choice of prices, and the arms in this case are the different sellers. Linear UCB Algorithm [26]: We implement the algorithm as described in [26]; in the interest of space, we do not provide the definition of the algorithm, but refer the reader to Algorithm 1 in [26]. We model each seller as an arm and set $h_{it}$ as the feature of each arm i in each round t. The parameter α is set to 1. Deep Deterministic Policy Gradient Here, we briefly describe the DDPG algorithm of [27], which we draw inspiration from in order to design our impression allocation algorithm. Before describing the algorithm, we briefly mention the main ingredients of its predecessor, the DPG algorithm of Silver et al. [37]. Deterministic Policy Gradient: The shortcoming of DQN [31] is that while it can handle continuous states, it cannot handle continuous actions or high-dimensional action spaces. Although stochastic actor-critic algorithms can handle continuous actions, it is hard for them to converge in high-dimensional action spaces. The DPG algorithm [37] aims to train a deterministic policy $\mu_\theta : S \to A$ with parameter vector $\theta \in \mathbb{R}^n$. This algorithm consists of two components: an actor, which adjusts the parameters θ of the deterministic policy $\mu_\theta(s)$ by stochastic gradient ascent on the discounted sum of rewards, and the critic, which approximates the action-value function. Deep Deterministic Policy Gradient: Directly training neural networks for the actor and the critic of the DPG algorithm fails to achieve convergence; the main reason is the high degree of temporal correlation, which introduces high variance in the approximation of the Q-function by the critic.
For this reason, the DDPG algorithm uses a technique known as experience replay, according to which the experiences of the agent at each time step are stored in a replay buffer and a mini-batch is then sampled uniformly at random from this set for learning, to eliminate the temporal correlation. The other modification is the employment of target networks for the regularization of the learning algorithm. The target network is used to update the values of µ and Q at a slower rate instead of updating them by the gradient network; the prediction $y_t$ will be relatively fixed, and violent jitter at the beginning of training is absorbed by the target network. A similar idea appears in [42] in the form of double Q-value learning. 4 THE IMPRESSION ALLOCATION (GRU) ALGORITHM In this section, we present our main deep reinforcement learning algorithm, termed IA(GRU) ("IA" stands for "impression allocation" and "GRU" stands for "gated recurrent unit"), which is at the center of our framework for impression allocation in e-commerce platforms and is based on the ideas of the DDPG algorithm. Before we present the algorithm, we highlight why simply applying DDPG to our problem cannot work. Shortcomings of DDPG: First of all, while DDPG is designed for settings with continuous and often high-dimensional action spaces, the blow-up in the number of actions in our problem is very sharp as the number of sellers increases; this is because the action space is the set of all feasible allocations, which increases very rapidly with the number of sellers. As we will show in Section 5, the direct application of the algorithm fails to converge even for a moderately small number of sellers. The second problem comes from the inability of DDPG to handle variability in the set of sellers. Since the algorithm uses a two-layer fully connected network, the position of each seller plays a fundamental role; each seller is treated as a different entity according to that position. As we show in Section 5, if the costs of sellers at each round are randomly selected, the performance of the DDPG algorithm deteriorates rapidly. The settings in real-life e-commerce platforms, however, are quite dynamic, with sellers arriving and leaving or their costs varying over time, and for an allocation algorithm to be applicable, it should be able to handle such variability. We expect that each seller's features are only affected by its historical records, not by some "identity" designated by the allocation algorithm; we refer to this highly desirable property as "permutation invariance". Based on time-series techniques, our algorithm uses Recurrent Neural Networks over the dimension of the sellers and achieves this property. The IA(GRU) algorithm: Next, we explain the design of our algorithm, but we postpone some implementation details to Section 5. At a high level, the algorithm uses the framework of DDPG with different network structures and different inputs to the networks. It maintains a sub-actor network and a sub-critic network for each seller and employs input preprocessing at each training step, to ensure permutation invariance.
Figure 1: The framework of the actor network of the IA(GRU) algorithm.
Input Preprocessing: In each step of training, with a state tensor of shape (T,m, 4), we firstly utilize a background network to calculate a public vector containing information of all sellers: it transforms the state tensor to a (T,m \u00d74) tensor and performs RNN operations on the axis of rounds. At this step, it applies a permutation transformation, i.e. a technique for maintaining permutation invariance. Specifically, it first orders the sellers according to a certain metric, such as the weighted average of their past generated revenue and then inputs the (state, action) pair following this order to obtain the public vector (pv). On the other hand, for each seller i, it applies a similar RNN operation on its history, resulting in an individual temporal feature called (fi). Combining those two features, we obtain a feature vector (pv, fi) that we will use as input for the sellers\u2019 sub-actor and sub-critic networks. Actor network: For each seller, the input to the sub-actor network is (pv, fi) and the output is a score. This algorithm uses a softmax function over the outputs of all sub-actor networks in order to choose an action. The structure of the policy which is shown in Figure 1 ensures that the policy space is much smaller than that of DDPG as the space of inputs of all sub-actor networks is restricted, and allows for easier convergence, as we will show in Section 5. Critic network: For the critic, we make use of a domain-specific property, namely that the immediate reward of each round is the sum of revenues of all sellers and the record of each seller has the same space. Each sub-critic network inputs the expected fraction of buyer impression the seller gets (the sub-action) and (pv, fi) (the sub-state) as input and outputs the Q-value of the corresponding seller, i.e, the expected discounted sum of revenues from the substate following the policy. Then, it sums up the estimated Q-value of all sub-critic networks to output the final estimated Q-value, with the assumption that the strategy of each seller is independent of the records of other sellers, which is the case in all of our rationality models. The framework of the critic network is similar to Figure 1. 5 EXPERIMENTAL EVALUATION In this section, we present the evaluation of our algorithms in terms of convergence time and revenue performance against several benchmarks, namely the direct application of the DDPG algorithm (with a fully connected network) and the heuristic allocation algorithms that we defined in Section 3. We use Tensorflow and Keras as the engine for the deep learning, combining the idea of DDPG and the techniques mentioned in Section 4, to train the neural network. Designed experiments: First, we will compare IA(GRU) and DDPG in terms of their convergence properties in the training phase and show that the former converges while the latter does not. Next, we will compare the four different algorithms (Greedy Myopic, Linear UCB, DDPG and IA(GRU)) in terms of the generated revenue for two different settings, a setting with fixed sellers and a setting with variable sellers. The difference is that in the former case, we sample the costs ci once in the beginning whereas in the latter case, the cost ci of each seller is sampled again in each round. This can either model the fact that the production costs of sellers may vary based on unforeseeable factors or simply that sellers of different capabilities may enter the market in each round. 
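Reading the actor construction described above literally, one possible PyTorch sketch looks as follows; this is our own schematic interpretation (the paper uses TensorFlow/Keras), and the hidden sizes and the revenue-based ordering metric are assumptions:
```python
import torch
import torch.nn as nn

class IAGRUActor(nn.Module):
    """Sketch of the permutation-invariant actor described above.

    state: (T, m, 4) tensor of the last T rounds of the m sellers' records
    (impressions, price, transactions, revenue)."""

    def __init__(self, m, hidden=32):
        super().__init__()
        self.public_rnn = nn.GRU(input_size=4 * m, hidden_size=hidden)   # over rounds
        self.seller_rnn = nn.GRU(input_size=4, hidden_size=hidden)       # per seller
        self.sub_actor = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))

    def forward(self, state):
        T, m, _ = state.shape
        # Order sellers by past revenue (the "permutation transformation" in the text).
        order = torch.argsort(state[:, :, 3].sum(dim=0), descending=True)
        ordered = state[:, order, :]
        # Public vector pv: GRU over the round axis of the flattened (T, m*4) tensor.
        _, pv = self.public_rnn(ordered.reshape(T, 1, 4 * m))
        pv = pv.squeeze(0).squeeze(0)                     # (hidden,)
        # Individual feature f_i: GRU over each seller's own history.
        _, f = self.seller_rnn(state)
        f = f.squeeze(0)                                  # (m, hidden)
        scores = self.sub_actor(torch.cat([pv.expand(m, -1), f], dim=1)).squeeze(-1)
        return torch.softmax(scores, dim=0)               # allocation over sellers
```
A matching critic would, as described above, score each seller's (pv, f_i) pair together with its sub-action using a shared sub-critic and sum the per-seller Q-values.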
For each one of these two settings, we will compare the four algorithms for each one of the four different rationality models (ε-Greedy, ε-First, UCB1 and Exp3) separately, as well as in a combined manner, by assuming a mixed pool of sellers, each of which may adopt a different rationality model from the ones above. The latter comparison is meant to capture cases where the population of sellers is heterogeneous and may consist of more capable sellers that employ their R&D resources to come up with more sophisticated approaches (such as UCB1 or Exp3) but also of more basic sellers that employ simpler strategies (such as ε-Greedy). Another interpretation is that the distinction is not necessarily in terms of sophistication, but could also be due to different market research, goals, or general business strategies, which may lead to different decisions in terms of which strategy to adopt. Our experiments are run for 200 sellers, a case which already captures a lot of scenarios of interest in real e-commerce platforms. A straightforward application of the reinforcement learning algorithms for much larger numbers of sellers is problematic, however, as the action space of the MDP increases significantly, which has drastic effects on their running time. To ensure scalability, we employ a very natural heuristic, where we divide the impression allocation problem into sub-problems and then solve each one of those in parallel. We show at the end of the section that this "scale-and-solve" version of IA(GRU) clearly outperforms the other algorithms for large instances consisting of as many as 10,000 sellers. Experimental Setup: In the implementation of DDPG, the actor network uses two fully connected layers and a rectified linear unit (ReLU) as the activation function, and outputs the action via a softmax function. The critic network inputs a (state, action) pair and outputs the estimation of the Q-value using a similar structure. The algorithm IA(GRU) uses the same structure, i.e. the fully connected network, in the sub-actor and sub-critic networks, and uses a Recurrent Neural Network with gated recurrent units (GRU) in cyclic layers to obtain the inputs of these networks. For the experiments we set T = 1, i.e., the record of all items of the last round is viewed as the state (we found that training our algorithms for larger values of T does not help to improve the performance). We employ heuristic algorithms such as the Greedy Myopic Algorithm for exploration, i.e. we add these samples to the replay buffer before training. Experimental Parameters: We use 1000 episodes for both training and testing, and there are 1000 steps in each episode. The valuation of the buyer in each round is drawn from the standard uniform distribution U(0, 1) and the costs of sellers follow a Gaussian distribution with mean 1/2 and variance 1/2. The size of the replay buffer is $10^5$, the discount factor γ is 0.99, and the rate of update of the target network is $10^{-3}$. The actor network and the critic network are trained via the Adam algorithm, a gradient descent algorithm presented in [24], and the learning rates of these two networks are $10^{-4}$. Following the same idea as in [27], we add Gaussian noise to the action outputted by the actor network, with the mean of the noise decaying with the number of episodes during exploration.
Figure 2: Rewards of DDPG and IA(GRU) in training for rational sellers. (a) Rewards of DDPG in training. (b) Rewards of IA(GRU) in training.
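The training configuration listed above can be collected into a single Python dictionary for reference (values transcribed from the text; the exploration-noise scale is not specified and is marked as such):
```python
config = {
    "episodes": 1000,            # for both training and testing
    "steps_per_episode": 1000,
    "history_length_T": 1,       # the record of the last round is the state
    "replay_buffer_size": 10**5,
    "discount_gamma": 0.99,
    "target_update_rate": 1e-3,
    "actor_lr": 1e-4,            # Adam
    "critic_lr": 1e-4,           # Adam
    "buyer_valuation": "Uniform(0, 1)",
    "seller_cost": "Gaussian(mean=0.5, var=0.5)",
    "exploration_noise": "Gaussian, mean decaying with episode (scale unspecified)",
}
```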
Convergence of DDPG and IA(GRU) First, to show the difference in the convergence properties of DDPG and IA(GRU), we train the algorithms for 200 sellers using the ε-Greedy strategy as the rationality model, with variable costs for the sellers. Figure 2 shows the comparison between the rewards of the algorithms and Figure 3 shows the comparison in terms of the training loss with the number of steps. The gray band shows the variance of the vector of rewards near each step. From the figures, we see that DDPG does not converge, while IA(GRU) converges, as the training loss of the algorithm decreases with the number of steps. The convergence properties for the other rationality models are very similar. Figure 3: Loss of DDPG and IA(GRU) in training for rational sellers. (a) Loss of DDPG in training. (b) Loss of IA(GRU) in training. Performance Comparison In this subsection, we present the revenue guarantees of IA(GRU) in the setting with 200 sellers and how it fares against the heuristics and DDPG, either for each rationality model separately or for a heterogeneous pool of sellers, with a 1/4-fraction of the sellers following each strategy. As explained previously, we consider both the case of fixed sellers and that of variable sellers. Performance Comparison for Fixed Sellers: We show the performance of DDPG, IA(GRU), Greedy Myopic and Linear UCB on sellers using • the ε-Greedy strategy (Figure 4), • the ε-First strategy (Figure 5), • the UCB1 strategy (Figure 6), • the Exp3 strategy (Figure 7). We also show the performance of the four different algorithms in the case of a heterogeneous population of sellers in Figure 8. Every point of the figures shows the reward at the corresponding step. We can conclude that the IA(GRU) algorithm is clearly better than the other algorithms in terms of the average reward on all rationality models. We also note that DDPG does not converge with 200 sellers, and this is the reason for its poor performance. Performance Comparison for Variable Sellers: We show the performance of DDPG, IA(GRU), Greedy Myopic and Linear UCB on sellers using • the ε-Greedy strategy (Figure 9), • the ε-First strategy (Figure 10), • the UCB1 strategy (Figure 11), • the Exp3 strategy (Figure 12). We also show the performance of the four different algorithms in the case of a heterogeneous population of sellers in Figure 13. Again here, we can conclude that the IA(GRU) algorithm clearly outperforms all the other algorithms in terms of the average reward on all rationality models. Also, IA(GRU) fares better in terms of stability, as the other algorithms perform worse in the setting with variable sellers compared to the setting with fixed sellers. Scalability In this subsection, we present the revenue guarantees of IA(GRU) in the setting with 10000 fixed sellers and how it fares against the heuristics and DDPG. Figure 4: Rewards for fixed sellers and ε-Greedy strategies. Figure 5: Rewards for fixed sellers and ε-First strategies. Figure 6: Rewards for fixed sellers and UCB1 strategies.
We design 50 allocation sub-problems, consisting of 200 sellers each, and divide the total number of impressions in 50 sets of equal size, reserved for each sub-problem. We run IA(GRU) and DDPG algorithms in parallel for each sub-problem, which is Figure 7: Rewards for fixed sellers and Exp3 strategies. Figure 8: Rewards for fixed sellers and heterogeneous strategies. Figure 9: Rewards for variable sellers and \u03f5-Greedy strategies. feasible in reasonable time. For the heuristics, we run the algorithms directly on the large population of 10.000 sellers. The results for the case of \u03f5-Greedy seller strategies are show in Figure 14 (the results for other strategies are similar). We can see that even though we are applying a heuristic version, the performance of IA(GRU) is \fFigure 10: Rewards for variable sellers and \u03f5-First strategies. Figure 11: Rewards for variable sellers and UCB1 strategies. Figure 12: Rewards for variable sellers and Exp3 strategies. still clearly superior to all the other algorithms, which attests to the algorithm being employable in larger-case problems as well. 6" + } + ], + "Bowen Sun": [ + { + "url": "http://arxiv.org/abs/2312.05486v1", + "title": "FreeFlow: A Comprehensive Understanding on Diffusion Probabilistic Models via Optimal Transport", + "abstract": "The blooming diffusion probabilistic models (DPMs) have garnered significant\ninterest due to their impressive performance and the elegant inspiration they\ndraw from physics. While earlier DPMs relied upon the Markovian assumption,\nrecent methods based on differential equations have been rapidly applied to\nenhance the efficiency and capabilities of these models. However, a theoretical\ninterpretation encapsulating these diverse algorithms is insufficient yet\npressingly required to guide further development of DPMs. In response to this\nneed, we present FreeFlow, a framework that provides a thorough explanation of\nthe diffusion formula as time-dependent optimal transport, where the\nevolutionary pattern of probability density is given by the gradient flows of a\nfunctional defined in Wasserstein space. Crucially, our framework necessitates\na unified description that not only clarifies the subtle mechanism of DPMs but\nalso indicates the roots of some defects through creative involvement of\nLagrangian and Eulerian views to understand the evolution of probability flow.\nWe particularly demonstrate that the core equation of FreeFlow condenses all\nstochastic and deterministic DPMs into a single case, showcasing the\nexpansibility of our method. Furthermore, the Riemannian geometry employed in\nour work has the potential to bridge broader subjects in mathematics, which\nenable the involvement of more profound tools for the establishment of more\noutstanding and generalized models in the future.", + "authors": "Bowen Sun, Shibao Zheng", + "published": "2023-12-09", + "updated": "2023-12-09", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "math.PR" + ], + "main_content": "Introduction Content generations by artificial intelligence are increasingly attractive because of their remarkable performance not only on image generation [1\u20133] but also in broader domains such as context [4,5] and video/audio generations [6,7]. DPMs inspired by diffusion phenomenon in physics [8] compose one of the most vibrant domains that have recently achieved considerable attention for its stable training and solid probabilistic deduction. 
Across a variety of settings, DPMs show significant promise as a flexible approach to modeling complex high-dimensional distributions, whether for data generation or density estimation [9,10]. In terms of image generation, these methods work by learning from a simulated, gradual diffusion of information and then recovering the original inputs from noise by predicting the reverse process. The earlier DDPM [11] applies a Markovian assumption to conduct a series of diffusion steps, which is subsequently reformulated as a non-Markovian process in DDIM [12] and as an Ito process solved by a stochastic differential equation (SDE) in [13]. Similar to SMLD [14], score matching is also used in [13] to estimate the data distribution during Langevin dynamics. A series of ordinary differential equation (ODE) methods [15–18], which exclude uncertainty by converting the forward and backward processes into deterministic procedures, were then proposed to decrease computational cost. The formulations appear divergent, yet their common origin is worth investigating and unifying. [19] suggests a uniform expression of these approaches by proving their equivalence to hierarchical variational autoencoders, but this view is limited to probability theory. According to GenPhys [18], ODEs that satisfy continuity equations in physics can be considered potential candidates for probability flows, which reflects the generality of DPMs from one perspective. Unfortunately, there is still a lack of an extensible theoretical framework that can not only comprehensively explain the mechanisms of all DPMs but also serve as a bridge to more sophisticated analytical tools.
Figure 1: Illustration of the gradient flow of an energy functional defined on the Wasserstein space $P(\mathbb{R}^n)$, an infinite-dimensional Riemannian manifold comprising the probability measures on $\mathbb{R}^n$ endowed with the Wasserstein distance. (Generally, Wasserstein space is defined as $P(X)$, where X is any compact metric space, not necessarily $\mathbb{R}^n$; discussions are limited to $P(\mathbb{R}^n)$ in this work.) The red line with an arrow and the blue plane denote the probability flow $\rho_t$ and the tangent space $T_\rho P(\mathbb{R}^n)$, respectively. The functional $E(\rho_t)$ shown on the right decreases most rapidly if $\rho_t$ evolves in accordance with its gradient flow.
To address this requirement and explore the potential of DPMs, we present FreeFlow, which asserts that the diffusion process is intrinsically the gradient flow of a free energy functional defined on Wasserstein space, as illustrated in Fig. 1. Combined with additional concepts from fluid dynamics, FreeFlow further highlights that current models essentially compel random variables to flow in the manner of the Lagrangian description. These insights are rooted in optimal transport theory, whose cost formula determines both the metric of Wasserstein space and the tracks of probability evolution. We first apply time-dependent optimal transport to view the probability density as a field $\rho_t$ on a Riemannian manifold, and then naturally transfer the distance between the initial and final densities to an energy viewpoint via the Benamou-Brenier theorem. The gradient of the functional $E(\rho_t)$, which represents the direction of fastest dissipation, is established as the kernel of our framework and is afterwards proved to be precisely the diffusion process of DPMs when a special formulation of E is selected. On the basis of FreeFlow, we can undertake a more comprehensive investigation of DPMs.
To be specific, we derives the Fokker-Planck equation by the gradient flow of a typical defined functional in FreeFlow thus constract its association with different kinds of diffusion algorithms. In virtue of Lagrangian description, we indicate that linear map is the unique optimal transport with strictly convex cost function and analyze the danger of shock wave if randomly sampling data pairs. It is also significant to emphasize the formulation of the cost function in optimal transport, particularly due to the eminent property of displacement interpolation we present when adopting a Euclidean distance as the cost. By introducing extra analysis on propositions of the functional, which is useful for progressing of DPMs in the future. This paper is organized as follow: In Section 2, we introduce the overall background on DPMs about probability evolution in diffusion and related works; Section 3 is devoted to fundamental concepts on optimal transport, gradient flows and Lagrangian-Eulerian descriptions; Section 4 is the kernel of this paper which provides the definitions of FreeFlow, the deduction to Fokker-Planck equation and the displacement interpolation proposition for preparation; We subsequently apply FreeFlow to analyze DPMs through unified diffusion patterns and a Eulerian view for avoiding shock waves in Section 5; Conclusions and discussions are conducted in Section 6. Our main contributions are summarized as three points: \u2022 We propose an unified framework named FreeFlow to show that various diffusion patterns in DPMs can be intrinsically interpolated as the gradient flow of free energy functional, or equally, the geodesic in Wasserstein space. \u2022 We demonstrate the Fokker-Planck equation is only one case of FreeFlow and analyze the significance impact on DPMs from formulations of cost function in optimal transport. 1Generally, Wasserstein space is defined by P(X), where X is any compact metric space not necessarily the Rn. Discussions are limited to P(Rn) in this work. 2 \f\u2022 FreeFlow highlights the Lagrangian description in fluid dynamics to observe current pipelines of DPMs, enabling the essential reveal on shock waves during straight generation and the reformulation to optimality equations by Eulerian manner. 2 Background Within this section, we firstly present the explanation that pipelines of DPMs can be boiled down to the FP equation and then introduce some related works. 2.1 Evolutionary Probability Density in DPMs Given an open subset U of Rn, the function u(x, t) : U \u2192R of position x and time t subject to velocity field v(x, t) and diffusivity D(x, t) evolves, irrespective of sources or sinks, according to convection\u2013diffusion equation: \u2202u(x, t) \u2202t =\u2207\u00b7 \u0000u(x, t)v(x, t) \u0001 + \u2207\u00b7 \u0000D(x, t)\u2207u(x, t) \u0001 , (1) where \u2207\u00b7 and \u2207denotes divergence and gradient with respect to position x respectively. It is essential for DPMs to be regarded as probabilistic version of Eq. (1) describing the diffusion process in physics, since it serves as the inspiration for diffusion-based frameworks. Replacing u with time-dependent probability density \u03c1(x, t) (denoted as \u03c1t(x) for convenience), Eq. (1) is simply reformed to FP equation that is practically conducted in recent successful diffusion models. If D (denoted by Dt) is irrelevant of x, then Eq. 
(1) can be rewritten as an n-dimensional FP equation in the following form: \u2202\u03c1t \u2202t = \u2207\u00b7 (\u03c1tvt) + Dt\u2206\u03c1t, (2) where \u2206is Laplacian operator, and \u03c1t is probability density on Rn that is non negative and R Rn \u03c1(x, t)dx = 1. For the sake of simplicity, independent variables will be omitted in subsequent discussions if there is no ambiguity, as done in Eq. (2). Fokker-Planck equation describes the distribution evolution of probability density for random variables in Ito process. Considering an n-dimensional random variable Xt and the standard Wiener process with zero mean and unit variance Wt, the Ito process is given by dXt = \u03bet(Xt)dt + \u03c3tdWt, (3) where \u03bet(Xt) = \u2212v(x, t) and \u03c3t = \u221a2Dt. To be specific, \u03bet(Xt) is the coefficient representing deterministic drift of the system influencing the mean shift of Xt, while \u03c3t represents the variance of diffusion resulting from stochastic noise otherwise. Under the constraint of sharing the same marginal probability densities at t = 0 (the clean input) and the assumption of zero variance noise sampling, the practical reverse path of Xt can be further rewritten to ODE: dXt = ft(Xt)dt, (4) where the function ft(Xt) related to \u03be and \u03c3 is the target for models to learn from in training stage. Variations in the form of ft have a significant impact on the solution to Eq. (4), leading to diverse performance outcomes for DPMs. Besides, a stochastic process controlled by SDE is converted to a deterministic procedure, simplifying the realization of reverse. The FP equation is a central component in the analysis of DPMs, as it not only provides crucial guidance during forward pipelines, but also has connections to more profound theories that are extensively discussed in Section 4.2. 2.2 Related Works Based on the diffusion of probability flow with stochastic process in FP equation, different DPMs are proposed due to various formulation of ft(Xt). The famous DDPM [11] realizes image generation under the assumption of Markovian process that is inherently discrete form of FP equation. Endowed with Eq. (3), forward and reverse SDE are finally adopted in [13]. ODEs can be derived by ignoring the noise item on the right hand side of Eq. (3), giving rise to equations expressing dynamics in continuous time along continuous path. Without the stochastic item of SDE, the resultant ODEs makes it possible to boost the speed of generation in DPM-Solver [15] via introducing intensively developed ODE solvers. With other selections of ft(Xt), some similar methods are proposed, e.g. Poisson flow generative models (PFGM) [17] and GenPhys [18]. Besides, the field of mathematics is currently abuzz with the study of optimal transport theory and gradient flows due to their intriguing associations. We apply relevant results, e.g., energy functionals and displacement convexity on metric space to realize our analysis; see [20\u201324]. 3 \f3 Preliminaries In this section, we offer fundamental definitions and theories to lay the groundwork for our framework, which will be thoroughly analyzed afterwords. Optimal transport theory has been continuously developed since Monge first presented this problem [25] and is currently in connection with Riemannian geometry, partial differential equations, gradient flow, etc. 
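As a small illustration of the Ito process in Eq. (3) and its link to the Fokker-Planck equation (2), the sketch below integrates the process with an Euler-Maruyama step; the zero drift and constant sigma are illustrative stand-ins, not the schedules used by any cited method.

```python
import numpy as np

def euler_maruyama(x0, xi, sigma, dt, steps, seed=0):
    """Simulate the Ito process dX_t = xi_t(X_t) dt + sigma_t dW_t (cf. Eq. (3))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(steps):
        t = k * dt
        x = x + xi(x, t) * dt + sigma(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Zero drift and constant sigma (used here only as an illustration): the marginal
# density spreads with variance sigma^2 * t, in agreement with the Fokker-Planck
# equation (2) with v_t = 0 and D_t = sigma^2 / 2.
x_end = euler_maruyama(
    x0=np.zeros(100_000), xi=lambda x, t: 0.0, sigma=lambda t: 1.5,
    dt=1e-2, steps=100,
)
print(x_end.var(), 1.5 ** 2 * 1.0)   # empirical vs. predicted variance at t = 1
```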
Given two probability spaces (X, \u00b5), (Y, \u03bd) and a cost function c : X \u00d7 Y \u2192R+ \u222a{+\u221e}, the Monge problem is solving optimal map T : X \u2192Y such that inf \u001a M(T) := Z X c(x, T(x))d\u00b5 \f \f \f \f T#\u00b5 = \u03bd \u001b , (5) where T#\u00b5 is push forward of \u00b5 subject to (T#\u00b5)(A) := \u00b5(T \u22121(A)) for any measurable set A \u2282Y . Instead of finding the map T in original Monge problem, the relaxed Kantorovich optimal scheme K(\u03c0) is obtained by \u03c0 realizing inf \u001a K(\u03c0) := Z X\u00d7Y c(x, y)d\u03c0(x, y) \f \f \f \f \u03c0 \u2208\u03a0(\u00b5, \u03bd) \u001b , (6) where \u03a0(\u00b5, \u03bd) is the space composed of all joint probability measures \u03c0 on X \u00d7 Y with marginals \u00b5 and \u03bd. If \u00b5 and \u03bd are two probability measures in Polish space (\u2126, d), the Wasserstein metric with order 2 between them is thus defined by W2(\u00b5, \u03bd) := \u0012 inf \u03c0\u2208\u03a0(\u00b5,\u03bd) Z \u2126\u00d7\u2126 d(x, y)2d\u03c0(x, y) \u00131/2 . (7) Equipped with distance W2 as the metrics, the Wasserstein space P(\u2126) comprising all the set of probability measures on \u2126is established. Gradient flows are commonly used to describe certain equations in differential Riemannian space. The Wasserstein space P(\u2126) is a classic example of an infinite-dimensional Riemannian space, from which one can derive the gradient flows. For a time-dependent density function \u03c1t in Riemannian manifold P(\u2126) and functional \u03a6 : P(\u2126) \u2192R assumed to be continuously differentiable, the gradient flow of \u03a6(\u03c1t) on P(\u2126) is the equation d\u03c1t dt = \u2212grad\u03c1t\u03a6, (8) where grad\u03c1t denotes the gradient of the functional at \u03c1t. Within this work, we typically consider the situation that \u2126= Rn. Note that tangent space T\u03c1P(Rn) at \u03c1 is composed by functions s on Rn that R s = 0. Lagrangian and Eulerian descriptions are two perspectives for observing flow phenomenons connected by material derivative in the context of fluid dynamics. The Lagrangian representation emphasizes trajectories of individual particles, whereas the Eulerian counterpart considers the physical quantity at fix positions in the field. Likewise for probability flow field, the relation between Lagrangian and Eulerian view is given by \u001a d dt\u03b3x(t) = vt(\u03b3x(t)), \u03b3x(0) = x, (9) where \u03b3x(t) is the trace of particle x at time t. Their otherness and correlation are so universally applicable that we are permitted to treat FP equation in Eq. (2) as Eulerian perspective yet the equal Ito process in Eq. (3) as Lagrangian one. Moreover, if the velocity field vt(x) is Lipschitz continuous, there exists the unique solution \u03b3x(t) to Eq. (9) for any initial point x and (t, x) 7\u2192\u03b3x(t) is bijective and overall Lipschitz. We will implement these properties for following analysis about defects of DPMs. 4 Theoretical Framework We propose FreeFlow, whose form and related definitions are presented in Section 4.1. The Fokker-Planck equation is typically discussed as a special case of FreeFlow in Section 4.2. Subsequent analysis on the convexity of cost functions in Section 4.3 highlights the importance of their formulations to prepare for further discussions about DPMs. 
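As a quick numerical companion to the Monge/Kantorovich formulations and the W2 metric of Eq. (7), the sketch below estimates the 2-Wasserstein distance between two one-dimensional empirical measures, where pairing sorted samples is the optimal (monotone) coupling; the Gaussian parameters are arbitrary.

```python
import numpy as np

def w2_empirical_1d(x, y):
    """2-Wasserstein distance between two equal-size 1-D samples.

    In one dimension the optimal (Monge) map is monotone, so the optimal
    coupling simply pairs sorted samples.
    """
    xs, ys = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 10_000)      # samples from mu
nu = rng.normal(3.0, 2.0, 10_000)      # samples from nu
# For 1-D Gaussians, W2^2 = (m1 - m2)^2 + (s1 - s2)^2, i.e. 9 + 1 here.
print(w2_empirical_1d(mu, nu), np.sqrt(10.0))
```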
4.1 FreeFlow The FreeFlow framework provides a geometric interpretation of the diffusion-based evolutionary pipeline of probability density, which is formulated as a time-dependent optimal transport problem exploring the geodesic in Wasserstein 4 \fspace. Intrinsically, this diffusion process is linked to the gradient flow of the free energy function E(\u03c1), which can be expressed as a differential equation: d dtE(\u03c1) = \u2212 \u001c\u2202\u03c1 \u2202t , \u2202\u03c1 \u2202t \u001d \u03c1 , (10) where \u27e8\u00b7, \u00b7\u27e9\u03c1 denotes the Wasserstein scalar product of two vectors in tangent space T\u03c1P(Rn). It can be further indicated by FreeFlow that DPMs are actually learning from the resultant direction of maximizing energy dissipation, which accounts for why multiple DPMs can operate. To facilitate further investigation, we will first provide the definition of time-dependent optimal transport and then the metrics of space P(Rn) taking the Riemannian geometric perspective of view. Definition 4.1 (Time-dependent Optimal Transport). If the continuous map/trajectory \u03b6t(x) with t \u2208[0, 1] is associated by initial point x and final point y in space \u2126, where \u03b6t(x) represents the displacement of x at time t, then the time-dependent optimal transport map is inf \u001aZ X C(\u03b6t(x))d\u00b5(x) \f \f \f \f \u03b60 = id, \u03b61#\u00b5 = \u03bd \u001b , (11) where \u03bd(y) is the measure pushed forward from \u00b5(x) and C(\u03b6t(x)) is the corresponding cost for displacement \u03b6t(x). Moreover, time-dependent optimal transport is compatible with primal optimal transport if for all x and y we have c(x, y) = inf{C(\u03b6t(x))|\u03b60 = x, \u03b61 = y}. (12) By abuse of the notion, \u03b6t tends to be a transport map in Eq.(11) similar to T(x) of Eq. (5) and a trajectory in Eq. (12). Note that t 7\u2192\u03b6t(x) should be at least segmental C1 with respect to t for x of \u00b5-\u00e6thus velocity can be denoted by \u02d9 \u03b6t. While primal Monge problem solely pays attention on the initial and final positions, time-dependent OT calculates the cost by considering the traces of all particles involved. Definition 4.2 (Norm of T\u03c1P(Rn)). If velocity field v of particles evolving in accordance with probability density \u03c1 is completely controlled by their position, the norm \u2225\u00b7 \u2225\u03c1 of tangent space T\u03c1P is defined by \r \r \r \r \u2202\u03c1 \u2202t \r \r \r \r \u03c1 = inf \u001aZ Rn \u03c1|v|2dx \f \f \f \f \u2202\u03c1 \u2202t + \u2207\u00b7 (\u03c1v) = 0 \u001b . (13) Note that the probability density \u03c1t at time t is the weak solution of continuity equation: \u2202\u03c1t \u2202t + \u2207\u00b7 (\u03c1tvt) = 0, (14) which is shown as the condition of Eq. (13). The norm of T\u03c1P(Rn) is actually defined through borrowing the concept of total kinetic energy of particles in constraint of Eq. (14) that represents the conservation of mass. Definition 4.3 (Metrics of P(Rn)). Endowed with Eq. (7) and Eq. (13), the Riemannian metrics of P(Rn) can be given by 2-Wasserstein distance: W 2 2( \u03c10,\u03c11)=inf ( Z 1 0 \r \r \r \r \u2202\u03c1 \u2202t \r \r \r \r 2 \u03c1( t ) dt \f \f \f \f \f\u03c1(0)=\u03c10,\u03c1(1)=\u03c11 ) , (15) where \u03c10 and \u03c11 are two probability densities on Rn at time t = 0 and t = 1 respectively. We can regard this definition as the Benamou-Brenier problem minimizing the velocity field related action: A(\u03c1, v) = Z 1 0 \u0012Z Rn \u03c1t(x)|vt(x)|2dx \u0013 dt. 
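The following sketch checks the Benamou-Brenier viewpoint numerically for one-dimensional samples: the constant-speed displacement path attains the squared Wasserstein distance of Eq. (15), while a re-parametrised (accelerating) path of the same endpoints has strictly larger kinetic action. The sample distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.normal(0.0, 1.0, 5000))   # samples from rho_0
y = np.sort(rng.normal(3.0, 2.0, 5000))   # samples from rho_1
disp = y - x                              # monotone (optimal) displacement in 1-D
w2_sq = np.mean(disp ** 2)                # squared Wasserstein distance estimate

def action(speed_profile_sq_integral):
    """Kinetic action A(rho, v) for particle paths x + s(t) * (T(x) - x),
    which factorises as (integral of s'(t)^2 dt) * E[|T(x) - x|^2]."""
    return speed_profile_sq_integral * np.mean(disp ** 2)

# Constant-speed geodesic, s(t) = t: integral of s'(t)^2 over [0, 1] is 1.
print(action(1.0), w2_sq)        # equal: the geodesic attains W2^2, as in Eq. (15)
# Accelerating reparametrisation, s(t) = t^2: integral of (2t)^2 dt = 4/3.
print(action(4.0 / 3.0))         # strictly larger kinetic action
```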
Thanks to the Benamou-Brenier theorem [26], the square of Wasserstein distance is proved to be equivalent to the minimal action given by W 2 2 (\u03c10, \u03c11) = inf{A(\u03c1, v)}. Therefore, the Wasserstein distance described by positions is converted to energy form where evolution under Eulerian view dominates. Definition 4.4 (Wasserstein Scalar Product). For two tangent vector s1, s2 in T\u03c1P(Rn), their Wasserstein scalar product is defined as \u27e8s1, s2\u27e9\u03c1 = Z Rn \u03c1(\u2207\u03c61 \u00b7 \u2207\u03c62)dx, (16) where si = \u2207\u00b7 (\u03c1\u2207\u03c6i) in Rn. 5 \fIn reality, the Wasserstein scalar product is searching for the velocity field that minimizes total kinetic energy in Eq. (13) satisfying the compatibility of continuity equation of Eq. (14). Thanks to the Benamou-Brenier formula, the velocity field vt is simply proved to be orthogonal with any solenoidal vector fields such that they are enabled to be the gradient field of some potential \u03c6. More details about why v = \u2207\u03c6 is deduced in Appendix A.1. Finally, the Wasserstein scalar product equips the definition of the gradient of a functional in the Wasserstein space, i.e., the Wasserstein gradient. Definition 4.5 (Wasserstein Gradient). For any smooth curve \u03c1t \u2208P(Rn), the Wasserstein gradient of functional \u03a6 :P(Rn)\u2192R at \u02c6 \u03c1 is the unique function grad\u02c6 \u03c1\u03a6 such that d\u03a6(\u03c1t) dt \f \f \f \f t=0 = \u001c grad\u02c6 \u03c1\u03a6, \u2202\u03c1t \u2202t \f \f \f \f t=0 \u001d \u02c6 \u03c1 , (17) where \u03c10 = \u02c6 \u03c1. In consideration of Eq. (8) and Eq. (16), we attain the conclusion that the dissipation velocity of energy with respect to time is relevant to the Wasserstein gradient of E. By selecting the form of the functional, multiple optimal directions are derived, giving rise to diverse generation trajectories proposed in generative diffusion methods. 4.2 FreeFlow to Fokker-Planck Equation The FP equation, which is known to be the continuous expression equivalent to Markovian process in DDPM [11] and Ito process in SDE-based methods [13], is one of the fundamental principles of DPMs. As a simple example of FreeFlow framework, one can naturally obtain the FP equation by deducing the gradient flow of an energy functional in specifically defined form. Theorem 4.6. Fokker-Planck equation is equivalent to the gradient flow of energy E(\u03c1) given by E(\u03c1) := Dt Z Rn \u03c1 log \u03c1dx + Z Rn \u03c1\u03a8dx, (18) where \u03a8 : Rn \u2192R is a smooth function subject to normalization condition that a constant Z = R Rn e\u2212\u03a8/Dt exists. Proof. Taking \u03b4\u03a6(\u02c6 \u03c1) \u03b4\u03c1 as the first L2-variation of \u03a6, we have d\u03a6(\u03c1t) dt \f \f \f \f t=0 = Z Rn \u03b4\u03a6(\u02c6 \u03c1) \u03b4\u03c1 \u2202\u03c1t \u2202t \f \f \f \f t=0 dx. Based on Eq. (17) and Eq. (14) with zero Neumann boundary condition, we then have \u001c grad\u02c6 \u03c1\u03a6, \u2202\u03c1t \u2202t \f \f \f \f t=0 \u001d \u02c6 \u03c1 = Z Rn \u02c6 \u03c1 \u0012 \u2207\u03b4\u03a6(\u02c6 \u03c1) \u03b4\u03c1 \u00b7 \u2207\u03c6 \u0013 dx. Because of the definition of Wasserstein scalar product in Eq. (16), a neat form of Wasserstein gradient is deduced to grad\u02c6 \u03c1\u03a6 = \u2212div \u0014 \u2207 \u0012\u03b4\u03a6(\u02c6 \u03c1) \u03b4\u03c1 \u0013 \u02c6 \u03c1 \u0015 . (19) Now we take \u03a6 = E defined in Eq. (18) and substitute it to Eq. 
(19), we obtain \u2202\u02c6 \u03c1 \u2202t = \u2212grad\u02c6 \u03c1E = div [(Dt\u2207log \u02c6 \u03c1 + \u2207\u03a8)\u02c6 \u03c1] = div(Dt\u2207\u02c6 \u03c1 + \u02c6 \u03c1\u2207\u03a8) = Dt\u2206\u02c6 \u03c1 + \u2207\u00b7 (\u02c6 \u03c1\u2207\u03a8), (20) where the fisrt equation is from Eq. (8). The final Eq. (20) is obviously the Fokker-Planck equation shown in Eq. (2) hence Theorem 4.6 is proved. This theorem clearly demonstrates that the Fokker-Planck equation, or the forward diffusion dynamics in DPMs, is the steepest descent of free energy E defined in Eq. (18). That is to say, the optimal evolutionary trace satisfying the gradient flow of the energy functional is implicitly adopted for models to learn from, which results in the efficiency of DPMs. Specifically, the diffusion process in DPMs is achieved by the velocity field of \u03c1t moving towards maximal entropy and minimal energy. It is evident that the negative entropy, which is represented by R Rn \u03c1 log \u03c1dx, needs to be minimized in accordance with the principle of maximum entropy. On the other hand, R Rn \u03c1\u03a8dx acts as the energy term, with \u2207\u03a8 affecting the deterministic component of the Ito process in Eq. (3). 6 \fCorollary 4.7. The energy functional defined in Eq. (18) is Kullback\u2013Leibler (KL) divergence up to addition by a constant. Proof. As t approaches infinity, \u03c1t in FP equation reaches a stationary state \u03c1\u221echaracterized by the Boltzmann distribution: \u03c1\u221e= e\u2212\u03a8/Dt R Rn e\u2212\u03a8/Dtdx = 1 Z e\u2212\u03a8/Dt. The original energy functional can be hence rewritten as E(\u03c1) = Dt Z Rn \u03c1 \u0012 log \u03c1 \u03c1\u221e + log Z \u0013 . (21) If Z = 1 and Dt = 1, then Eq (21) is the KL divergence of \u03c1 with respect to \u03c1\u221e. This corollary illustrates that when evolving along diffusion curves in DPMs, the gap of probability density is not measured by a strict mathematical distance but rather by KL divergence in practice. Proposition 4.8. If we specifically have \u03a8 = \u03b2tx2/2 where \u03b2t is a parameter irrelevant to x, then the stationary solution \u03c1\u221eto Eq. (18) when t approaches infinity is a normal distribution and \u03c1\u221e\u223cN(0, Dt \u03b2t I). Proof. Substituting \u03a8 = \u03b2tx2/2 to the normalizing condition, we have Z = Z Rn e\u2212\u03b2tx2 2Dt dx = s 2\u03c0Dt \u03b2t . Therefore, the stationary solution is \u03c1\u221e= r \u03b2t 2\u03c0Dt e\u2212\u03b2tx2 2Dt , (22) which is a Gaussian distribution. This proposition presents that that with a subtle design of \u03a8 and a sufficiently large value of t, the gradient flow of energy will eventually converge to a normal distribution. Moreover, this property also enables models to be trained to revert from Gaussian white noise. With perturbation gradually added to origin image x0 in DPMs, the ultimate xt is hence close to a sample from this distribution controlled by \u03b2t and Dt. 4.3 Displacement Interpolation Although the explicit cost function appears to be discarded in the previous Eulerian field viewpoint, its impact on the transport process is inevitable due to its relation with displacement interpolation. We demonstrate that transport functions deserve special consideration, as their structures can yield desirable properties that we prioritize. Recalling transport cost function C and c in Eq. 
(12), we are permitted to rewrite their relation by C(\u03b6t) = Z 1 0 c( \u02d9 \u03b6t)dt, if \u03b6t is differential with respect to time and c( \u02d9 \u03b6t) is differential transport cost. We typically consider a special case that the cost function c is convex on Euclidean space. Lemma 4.9. Let c be a convex function on Rn, then c(y \u2212x) = inf \u001aZ 1 0 c( \u02d9 \u03b6t)dt \f \f \f \f \u03b60 = x, \u03b61 = y \u001b . (23) By a stronger assumption that c is strictly convex, then the unique minimal is obtained by the line: \u03b6t = x + t(y \u2212x), t \u2208[0, 1]. (24) This lemma can be proved by Jensen\u2019s inequality, which is also seen in Rectified Flow [27] thus is hereby omitted. 7 \fType Pattern Methods Dt \u2207\u03a8(x) Stochastic Markovian DDPM \u03b2t \u03b2tx Ito Process VP-SDE \u03b2t \u03b2tx VE-SDE \u02d9 \u03b1t 0 Deterministic ODE DPM-Solver 0 ft(x) PFGM 0 ft(x) Table 1: The diffusion patterns in typical methods are represented by energy functional E(\u03c1t) of FreeFlow framework. Corresponding values of parameters and formulation of functions are summarized in this table.2 Definition 4.10. The function X : X \u2192R is called to be c-concave if there exists a function Y : Y \u2192R such that X = inf y\u2208Y c(x, y) \u2212Y(y). Theorem 4.11. If the transport cost function on Rn is strictly convex c(x, y) = c(x \u2212y) and c(0) = 0, there is an unique c-concave function \u03c8(x) presenting the solution to time-dependent optimal transport by \u03b6t(x) = x \u2212t\u2207c\u2217(\u2207\u03c8(x)), t \u2208[0, 1], (25) where c\u2217is Legendre transformation of c. Furthermore, if we specifically have c(x, y) = |x \u2212y|2/2 and \u00b5, \u03bd are probability measures on Rn, then there exists convex \u03c8 such that \u2207\u03c8#\u00b5 = \u03bd and the solution in Eq. (25) is reformed to displacement interpolation: \u03c1t = [(1 \u2212t)id + t\u2207\u03c8]#\u00b5, t \u2208[0, 1]. (26) This theorem, which is fully proved in Appendix A.2, asserts that time-dependent optimal transport can be achieved through the linear interpolation of identical mapping and primal optimal transport mapping, provided that the cost function c(x, y) is half of the square of Euclidean distance. In other words, optimality is realized throughout the entire transport process, resulting in \u03b6t being the optimal transport from \u00b5 to \u03b6t#\u00b5. By carefully formulating the implicit cost function c according to Theorem 4.11, we can naturally attain the optimal scheme through a linear transformation from the source to the destined distribution. 5 Rethink DPMs by FreeFlow FreeFlow is valuable for its powerful theoretic viewpoint to elaborately explain benefits and reveal essential drawbacks behind DPMs. We demonstrate that our framework naturally encapsulates classic patterns of diffusion formulations in Section 5.1. We then typically rethink straight line generation by revealing potential jeopardize of shock waves and deducing the optimality equation in Section 5.2. 5.1 Diffusion Pattern During the forward process, clean images undergo a gradual addition of stochastic noise until they are finally transformed into scheduled distribution. Otherwise, the reverse process acting as the counterpart recovers noise to origin input by predictions from models. FreeFlow can be used to summarize the diffusion patterns of mainstream DPMs as one of its applications. 
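A simple simulation can be used to check Proposition 4.8 above: running Langevin dynamics with Psi(x) = beta * x^2 / 2 drives an arbitrary initial sample toward N(0, D/beta). The concrete beta, D, step size and particle count below are illustrative.

```python
import numpy as np

def langevin(beta=2.0, D=0.5, dt=1e-3, steps=10_000, n=20_000, seed=0):
    """Simulate dX = -grad Psi(X) dt + sqrt(2 D) dW with Psi(x) = beta * x**2 / 2."""
    rng = np.random.default_rng(seed)
    x = rng.normal(5.0, 0.1, n)               # start far from equilibrium
    for _ in range(steps):
        x += -beta * x * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
    return x

x = langevin()
# Proposition 4.8 predicts the stationary law N(0, D / beta) = N(0, 0.25) here.
print(x.mean(), x.var())
```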
The Theorem 4.6 allows for various patterns to be consistently analyzed, with their parameters and formulations summarized in Tab. 1. DDPM takes the assumption of Markovian process that the conditional probability (or transition) q(xt|xt\u22121) \u223c N(\u221a1 \u22122\u03b2txt\u22121, 2\u03b2tI) is a Gaussian distribution parameterized by \u03b2t for random variable xt and xt\u22121, which leads to the final normal distribution of q(xt|x0). Note that \u03b2t hereby in perturbation kernel is the same parameter defined in Proposition 4.8 impacting the velocity of drift. In fact, such diffusion procedure from origin input x0 to xt is equivalent to the gradient flow of free energy defined in Eq. (18), because any continuous state Markovian process satisfies Chapman-Kolmogorov (CK) equation: q(xt3|xt1) = Z q(xt3|xt2)q(xt2|xt1)dxt2, (27) where t3 > t2 > t1. CK equation is known to be directly derived to the FP equation which is proved in Section 4.2 as an example of FreeFlow. By definition of Dt and \u2207\u03a8 listed in Tab. 1, we will obtain approximate standard normal distribution if images are perturbed by DDPM. 2Specific forms of \u2207\u03a8 for ODE pattern are included in Section 5.1 and hereby simplified to ft in the table due to complexity. 8 \fShock Wave Lagrangian Eulerian Figure 2: Illustration for Lagrangian (path lines, left) and Eulerian (streamlines, right) descriptions. A random transport from distribution \u00b5 to \u03bd may trigger shock waves by the intersections as only individual particles are concerned in Lagrangian description; however, the probability density field \u03c1t at all positions are simultaneously presented in Euleraian manner without intersections. SDE methods (e.g., VE-SDE, VP-SDE) directly adopt Ito process as the diffusion procedure to produce perturbed data through Eq. (3). While they differ from DDPM in inspiration and external form, the discrepancies can be limited to selection on the form of Dt and \u03a8. As shown in Tab. 1, VP-SDE is equivalent to DDPM with the drift item preserved that is able to be explained by Proposition 4.8, too. Nevertheless, the drift item disappears in VE-SDE and Dt varies by function \u03b1t of t making q(xt|xt\u22121) \u223cN(xt\u22121, p 2(\u03b1t \u2212\u03b1t\u22121)I), where the variance increases solely. SDEs uniformly evolves to normal distribution because of the entropy item, which is distinctly different from the following ODE methods. ODE methods (e.g., DPM-Solver, PFGM, GenPhys) otherwise simplifies Ito process to a deterministic probability evolution via neglecting the diffusivity item, which makes it more likely to regard as flow methods [28]. Their noise scheduler may not necessarily proceed toward normal distribution consequently. However, the simplified formulation, Eq. (4), is obtained at the cost of more complex velocity field ft(xt) conformed to continuity equation and should be invertible. Their structures are thus not unique yet required to satisfy initial condition and possess straightforward final distribution for sampling backwards. DPM-Solver rewrites Eq. (3) to Eq. (4) and takes \u03bet(Xt) \u2212 \u03c32 t \u2207log \u03c1t(Xt)/2 as ft(Xt). In addition, PFGM involves Poisson equation and utilize Green\u2019s function to give ft by ft = R (x\u2212y)p(y) Sn\u22121(1)\u2225x\u2212y\u2225n dy, where Sn\u22121(1) is the surface area of the unit (n \u22121)-sphere. GenPhy extends it to smooth PDEs whose solutions should behave as probability density. 
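The sketch below expresses the patterns of Table 1 as one shared Euler(-Maruyama) forward step parameterised by a (drift, diffusivity) pair per method; the beta_t and alpha-dot schedules, and the f_t used for the ODE row, are placeholders rather than the schedules of the cited papers.

```python
import numpy as np

def forward_step(z, t, drift, diffusivity, dt, rng):
    """One Euler(-Maruyama) step of dz = drift(z, t) dt + sqrt(2 * diffusivity(t)) dW.

    Setting the diffusivity to zero recovers the deterministic ODE pattern,
    while a nonzero diffusivity gives the stochastic (DDPM / SDE) patterns.
    """
    dz = drift(z, t) * dt
    d = diffusivity(t)
    if d > 0.0:
        dz += np.sqrt(2.0 * d * dt) * rng.standard_normal(z.shape)
    return z + dz

# Illustrative schedules standing in for Table 1 (beta_t, alpha_t are placeholders).
beta = lambda t: 0.1 + 0.9 * t
alpha_dot = lambda t: 1.0

patterns = {
    "DDPM / VP-SDE":  (lambda z, t: -beta(t) * z, beta),           # D_t = beta_t, grad Psi = beta_t * x
    "VE-SDE":         (lambda z, t: np.zeros_like(z), alpha_dot),  # zero drift, growing variance
    "prob.-flow ODE": (lambda z, t: -beta(t) * z, lambda t: 0.0),  # deterministic; drift is a placeholder f_t
}

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 2))
for name, (drift, diff) in patterns.items():
    out = z.copy()
    for k in range(100):
        out = forward_step(out, k / 100, drift, diff, dt=0.01, rng=rng)
    print(name, float(out.std()))
```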
The formulation of forward processes is varied, however, the flow directions of probability are consistently proved to be the Wasserstein gradient flow of energy E proposed in Section 4.2. In summary, FreeFlow generally explain the extensively adopted forward process as discrete variant consistently controlled by the gradient flow of E(\u03c1). 5.2 Shock Waves and Optimality Equation Generating images using DPMs can be a time-consuming process due to the multiple iterations of predictions during the reverse process. To eliminate this issue, one might anticipate a solution that simplifies the cumbersome gradual inversion into a single step. This possibility is explored in rectified flow [27] where a linear transport track is effectively employed and a trick named reflow is additionally proposed to avoid the intersections of trajectories. Nonetheless, flaws worthy of consideration are contained in rectified flow because of the disregarded cost function. Indeed, shock waves illustrated in Fig. 2, as the essential form of the crossing, are possible to happen in finite time for compressible fluid even with smooth initial conditions when evolving by the Lagrangian method, which is the reason for its inevitable requirement for reflow. We have demonstrated in Lemma 4.9 that for a convex cost function, the optimal transport curve corresponds precisely to a straight line where particles move at a constant speed. Transferring to Eulerian description, we reformulate it to a vector field with the lemma as follow. 9 \fLemma 5.1. Let v0 : Rn \u2192Rn be a differentiable vector field and \u03b6t(x) = x \u2212tv0(x) is the trajectory field of particles with uniform motion, then the Eulerian velocity field vt associated with \u03b6t satisfies \u2202vt \u2202t + vt \u00b7 \u2207vt = 0. (28) Proof. Since the velocity is a constant, the second derivative of \u03b6t(x) is zero. We naturally have d2 dt2 \u03b6t(x) = \u2202vt(\u03b6t(x)) \u2202t + vt(\u03b6t(x)) \u00b7 \u2207vt(\u03b6t(x)) = 0, which is Eq. (28). Theorem 5.2. If the velocity field vt associated with \u03b6t is consistently Lipschitz continuous and \u00b5 is the initial probability measure, then \u03c1t = \u03b6t#\u00b5 is the unique solution to \u2202\u03c1t \u2202t + \u2207\u00b7 (\u03c1tvt) = 0, \u03c10 = \u00b5. Combined with Lemma 5.1, the time-dependent optimal transport in Eulerian view is given by \u001a \u2202\u03c1t \u2202t + \u2207\u00b7 (\u03c1tvt) = 0, \u03c10 = \u00b5, \u2202vt \u2202t + vt \u00b7 \u2207vt = 0. (29) This theorem (proved in Appendix A.3) implicitly authorizes the bond between cost function c and velocity field vt thus gives rise to optimality equation Eq. (29) from the Eulerian perspective. Moreover, the formulation of c itself, as demonstrated in Theorem 4.11, fundamentally determines the trajectory and ensures the elimination of shock waves. Therefore, the relation of strictly convex c and optimal initial velocity field v0 can be given by v0(x) = \u2212\u2207c\u2217(\u2207\u03c8), (30) where c\u2217and \u03c8 are executed by the same definitions in Theorem 4.11. That is to say, the shock waves triggered by intersections can be radically shun if we implement proper cost function by designing relevant initial velocity field. Moreover, the refrained shock wave is also guaranteed by the Eulerian field which is suitable to be declared as differential automorphism groups. We recall Lagrange-Euler converter in Eq. 
(9) thus note that the trace \u03b3x(t) is actually a homeomorphism gt(x) := \u03b3x(t) deduced by the velocity field. The optimal map \u03b6t is obtained if given \u03b6t := gt and the trace collection gt is a differential homeomorphism collection which is ensured by the regularity of Monge-Ampere equation with specific conditions in Theorem 4.11. 6" + }, + { + "url": "http://arxiv.org/abs/2312.01367v1", + "title": "DiFace: Cross-Modal Face Recognition through Controlled Diffusion", + "abstract": "Diffusion probabilistic models (DPMs) have exhibited exceptional proficiency\nin generating visual media of outstanding quality and realism. Nonetheless,\ntheir potential in non-generative domains, such as face recognition, has yet to\nbe thoroughly investigated. Meanwhile, despite the extensive development of\nmulti-modal face recognition methods, their emphasis has predominantly centered\non visual modalities. In this context, face recognition through textual\ndescription presents a unique and promising solution that not only transcends\nthe limitations from application scenarios but also expands the potential for\nresearch in the field of cross-modal face recognition. It is regrettable that\nthis avenue remains unexplored and underutilized, a consequence from the\nchallenges mainly associated with three aspects: 1) the intrinsic imprecision\nof verbal descriptions; 2) the significant gaps between texts and images; and\n3) the immense hurdle posed by insufficient databases.To tackle this problem,\nwe present DiFace, a solution that effectively achieves face recognition via\ntext through a controllable diffusion process, by establishing its theoretical\nconnection with probability transport. Our approach not only unleashes the\npotential of DPMs across a broader spectrum of tasks but also achieves, to the\nbest of our knowledge, a significant accuracy in text-to-image face recognition\nfor the first time, as demonstrated by our experiments on verification and\nidentification.", + "authors": "Bowen Sun, Shibao Zheng", + "published": "2023-12-03", + "updated": "2023-12-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction In contemporary artificial intelligence (AI), generative models [1, 2] and multi-modal learning emerge as thriving domains. As a prominent and blooming field within generative AI, DPMs, also referred to as diffusion models, have exhibited exceptional prowess in the realm of content generation, effectively generating visually stunning and realistic media of superior quality. Noteworthy contributions, e.g., image generation [3,4], audio synthesis [5], video generation [6] and data purification [7], have solidified their presence in various fields that require the application of generative artificial intelligence. Multi-modal content analysis [8, 9] and generation [10], have further garnered significant attention, in consideration of the diverse modalities from which human cognition originates. The advent of text-to-image models endowed with controllable generation, exemplified by Stable Diffusion (SD) [11] and DALLE [12,13], has revolutionized the multi-modal generation, ushering in newfound abilities for creative endeavors. By leveraging the power of DPMs, these notable achievements expand the boundaries of artistic creation and have the possibility to enhance assorted industries. 
While diffusion models excel at capturing the intricate details for synthesis, their potential in extensive domains irrelevant to generation, such as face recognition, is yet to be fully explored. Traditional face recognition methods relying on normal RGB images [14,15] have achieved high accuracy with limited scope for further enhancement though, the complication of cross-modal recognition [16,17] pose a significant bottleneck that is widely acknowledged and \u2217This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. arXiv:2312.01367v1 [cs.CV] 3 Dec 2023 \fProbe Gallery Pairs Verification Identification Male, No Beard, Oval Face Male, Bald, Big Lips, Big Nose, Chubby, Double Chin Female, Arched Eyebrows, Bangs, Blond Hair Male, Big Nose, Brown Hair, Wavy Hair Figure 1: The illustration of text-to-image face recognition including verification (left) and identification (right). During the verification process, the model assesses whether each pair of textual description and facial image pertains to the same subject (green frame) or different subjects (red frame). In the identification phase, the recognition model compares each verbal description in the probe set with all the images in the gallery to rank the corresponding similarity scores. considered essential in advancing the field. One intriguing approach to cross-modal face recognition is face recognition by textual descriptions illustrated in Fig. 1, which holds immense value in numerous scenarios, spanning from public security applications to object retrieval. It becomes feasible to establish a connection between visual and textual modalities, facilitating identity filtering solely based on verbal descriptions. This capability effectively resolves an otherwise insurmountable difficulty arising from the absence of visual information. Regrettably, existing applications of diffusion models are completely reliant on their generative capability and crossmodal face recognition predominantly encompasses visual information across diverse modalities. Contemporary text-guided generative models typically employ language modules absorbing prompts [18] or natural languages [19] to exert influence on the diffusion directions, thereby enabling the creation of vibrant images through pre-trained generation modules based on variational autoencoders (VAEs) [20] or generative adversarial networks (GANs) [1]. VAEs and GANs have emerged as effective frameworks for learning rich latent representations and generating high-quality images thus serve as a crucial role in shaping the overall outcomes. During the intermediate process of the controllable diffusion, initial random noises are skillfully conveyed and channeled into the latent space of generation modules by encoded word embeddings that capture the semantic meaning of the given text. On the other hand, current multi-modal face recognition primarily focuses on various aspects, including near-infrared [21,22], forensic sketches [23,24], depth imagery [25,26], and caricature [16], etc. These different aspects of multi-modal face recognition address the demands 2 \fwithin their respective domains to a certain extent, contributing to the development of robust and versatile recognition systems capable of handling diverse modalities and real-world challenges. 
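As a minimal sketch of the verification (1:1) and identification (1:N) protocols illustrated in Figure 1, the code below scores precomputed text and image features with cosine similarity; the feature dimension, threshold and random features are placeholders, not values from the paper.

```python
import numpy as np

def cosine(a, b):
    """Row-wise cosine similarity between two feature matrices."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def verify(text_feat, image_feat, threshold=0.3):
    """1:1 verification: accept a (description, image) pair if similarity >= threshold."""
    return np.diag(cosine(text_feat, image_feat)) >= threshold

def identify(probe_text_feat, gallery_image_feat, k=5):
    """1:N identification: return the indices of the top-k gallery images per probe."""
    sims = cosine(probe_text_feat, gallery_image_feat)
    return np.argsort(-sims, axis=1)[:, :k]

rng = np.random.default_rng(0)
f_p = rng.standard_normal((10, 512))      # text-derived features (placeholder)
f_x = rng.standard_normal((100, 512))     # gallery image features (placeholder)
print(verify(f_p, f_x[:10]))
print(identify(f_p, f_x, k=5))
```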
Notwithstanding the imperative requirement for text-to-image face recognition, the enduring challenge of resolving this predicament remains unresolved, primarily due to the intricate complexities inherent in the task and the inadequacy of available data. The primary point is that verbal descriptions inherently lack the precision and richness of visual information, rendering cross-modal text-to-image recognition itself incapable of achieving the level of effectiveness achieved by direct image-to-image algorithms. Additionally, in comparison to the relatively limited disparity observed among modalities within images, e.g., sketch-photo pairs, the divergence between textual and visual signals is considerably more substantial. This pronounced dissimilarity poses formidable obstacles in devising powerful recognition algorithms capable of effectively bridging the gap between textual and visual representations. Moreover, the scarcity of facial datasets that contain both comprehensive identity information and accompanying textual descriptions constitutes a formidable impediment to the advancement of related research endeavors. The dearth of such databases, which simultaneously capture and integrate textual and visual information, significantly hampers the training and evaluation of models, thereby impeding the exploration of novel approaches and innovative solutions in this specialized domain. In response to the growing demand for text-oriented face recognition, we propose the method named DiFace, which unleash the untapped potential of current diffusion models far limited by generation-centric employment. We commence by presenting a probabilistic density movement as an elucidation of the mechanisms of diffusion models, deviating from the well-known Evidence Lower Bound (ELBO) viewpoint [27]. In this alternative perspective, we employ a theory of distribution transport to comprehend the fundamental mechanisms governing diffusion models. By harnessing the power of this understanding, we have successfully achieved text-to-image face recognition without the need for any intermediate generation procedures, which allows us to instead directly utilize the capabilities of DPMs to accomplish the expected task. In order to augment the recognition capability of DPMs, we have additionally devised an additional refinement module, leading to the attainment of a final accuracy level of approximately 80%. Rigorous and impartial experiments, encompassing verification and identification as benchmarks, have been meticulously conducted to showcase the effectiveness of DiFace. These findings not only demonstrate the possibility for DPMs to perform recognition tasks but also lay the foundation for future advancements in this particular domain. The contributions can be summarized as follow: \u2022 We have achieved a noteworthy advancement in the field of cross-modal face recognition through the textual descriptions, a previously unexplored perspective. \u2022 Our approach creatively designs a refinement module, enabling the realization of recognition tasks via the probabilistic diffusion process, which circumvents the typical dependence on image synthesis. \u2022 We offer a theoretical analysis as the cornerstone of this endeavor, establishing a vital linkage between probability diffusion flow and feature-based recognition. This expanded scope of application for generationoriented DPMs emphasizes their substantial potential across broader domains. 
2 Related Work We review typical diffusion models and cross-modal face recognition methods in this section. 2.1 Diffusion Models Drawing inspiration from the principles of nonequilibrium thermodynamics in physics, Sohl-Dickstein et al. [2] pioneer a generative model, serving as a precursor to subsequent DPMs, that tractably samples intricate data from simple distributions instead of earlier GAN [1] algorithm. In order to effectively synthesize high-quality images, Denoising Diffusion Probabilistic Models (DDPM) [3] and Denoising Diffusion Implicit Models (DDIM) [28] facilitate the learning of neural networks from parameterized Markov chains. These chains are designed to reverse the diffusion process by adding noise to the data in the opposite direction of sampling until the signal is eliminated. Particularly, when this process gradually involves small amounts of Gaussian noise, it becomes feasible to set the transitions in the sampling chain as conditional Gaussian distributions. A series of methods is subsequently proposed to enhance the efficiency of generation. Song et al. [4,29] provide a score matching perspective to reformulate the previous Markovian process into a Stochastic Differential Equation (SDE), which in turn derives an Ordinary Differential Equation (ODE) using the Fokker-Planck equation (Kolmogorov\u2019s forward equation). This viewpoint fosters the development of solvers [30] aimed at minimizing computational overhead and accommodates diverse ODE forms [31\u201333] that sample images from fundamental distributions. With the advancements in natural language processing (NLP) propelled by the transformer [34], neural networks have gained the ability to rapidly understand and generate contextually relevant conversations, thereby ushering in a 3 \fnew era for text-guided generation. SD [11], as one of latent diffusion models (LDMs), introduces a text-to-image generative technique that demonstrates strong scalability in producing highly detailed and efficient image synthesis. This multi-modal generation is achieved by compressing the higher-dimensional distribution of images into a lowerdimensional latent space accepted by an encoder/decoder [35,36] and employment of a diffusion process guided by word embeddings tokenized from the CLIP model [18]. The emergence of similar techniques such as DALL-E [12,13] enhances the flourishing of this domain. The UNet, recognized as the prevailing architectural framework utilized in contemporary DPMs, was originally conceived with the specific objective of biomedical image segmentation [37]. Notably, this network is engineered to produce output that aligns precisely with the dimensions of the input, ensuring consistency and preserving the spatial information inherent in the probability distribution. It has been adapted and enhanced for deployment in SD, where token-based conditioning mechanisms are utilized to exert control over the diffusion process. The UNet structure, with the flexible tokenizer, enables the incorporation of these conditioning mechanisms, thereby empowering more nuanced and fine-grained control during the text-guided generation. 2.2 Cross-Modal Face Recognition Face recognition is a longstanding and quintessential problem in the field of computer vision, which has witnessed substantial advancements over time. In its initial stages, traditional approaches rely on local descriptors (e.g., LBP [38], HOG [39], SIFT [40]) to extract face features. 
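To summarise the latent, text-conditioned pipeline recalled in Sec. 2.1 above (tokenised prompt, denoising in a compressed latent space, decoding back to pixels), the following structural sketch wires the three stages together; every module is a placeholder stub with the right shapes only, not the Stable Diffusion implementation.

```python
import numpy as np

class LatentTextToImageSketch:
    """Structural sketch of a latent text-guided diffusion pipeline (placeholders only)."""

    def __init__(self, latent_dim=64, steps=50, seed=0):
        self.latent_dim, self.steps = latent_dim, steps
        self.rng = np.random.default_rng(seed)

    def encode_text(self, prompt: str) -> np.ndarray:
        # Stand-in for a CLIP-style text encoder producing conditioning vectors.
        return self.rng.standard_normal(self.latent_dim)

    def denoise(self, z: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
        # Stand-in for the conditioned UNet predicting the denoised latent.
        return 0.9 * z + 0.1 * cond

    def decode(self, z: np.ndarray) -> np.ndarray:
        # Stand-in for the VAE/GAN decoder mapping latents back to pixels.
        return np.tanh(z)

    def __call__(self, prompt: str) -> np.ndarray:
        cond = self.encode_text(prompt)
        z = self.rng.standard_normal(self.latent_dim)   # start from Gaussian noise
        for t in reversed(range(self.steps)):
            z = self.denoise(z, t, cond)
        return self.decode(z)

print(LatentTextToImageSketch()("a smiling person with wavy brown hair").shape)
```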
With the advent of deep convolutional neural networks (CNNs) and delicate design of loss functions [14, 15, 41], contemporary research has shifted towards utilizing these powerful frameworks to obtain superior performance and rapidly extended to concerns on cross-modal face recognition tasks. By analyzing multi-modal facial features and mapping them to consistent latent space, these methods allow for the identification and categorization of individuals. The Near-Infrared Spectrum (NIS) images and Visible Light Spectrum (VIS) images are regarded as two modalities, as demonstrated in [21, 22, 42], whose discrepancies are managed through subspace learning employing deep neural networks and the Wasserstein distance. By altering facial attributes, a 3D Morphable Model is used in [24] to generate a large set of synthetic images that are then utilized to fine-tune a deep network, originally pre-trained on face photos, for face photo-sketch recognition through transfer learning. [16] specifically focuses on the recognition of photo-caricature faces through the utilization of multi-task learning. Their approach incorporates a dynamic weights learning module that automatically assigns weights based on the significance of each task, which enables the network to allocate more attention to challenging tasks rather than simpler ones. LDCTBP [17] presents a simultaneous demonstration of the efficacy of handcrafted features in photo-sketch and NIS-VIS recognition by using discrete cosine transform as an effective local feature descriptor for illumination normalization. Indeed, current cross-modal face recognition systems have not yet transcended the scope of visual data and comparable investigations concerning photo-text modalities are scarce. This can be attributed to the inherent challenges involved in resolving the substantial disparity between linguistic and visual processing. On the other hand, the availability of data sets containing accurate facial descriptions is significantly inadequate, making it challenging to effectively promote the corresponding research efforts. The ongoing project, Face2Text [43], aims to assemble a dataset comprising natural language descriptions of human faces but its size remains relatively small, with its latest v2 version containing only 10,559 images and 17,022 corresponding descriptions. Several other databases designed for face synthesis, such as MM-CelebA-HQ [44] and CelebAText-HQ [45], incorporate automatically generated or manually annotated natural descriptions based on CelebFaces Attributes (CelebA) dataset [46]. Nevertheless, they are unsuitable for our intended investigations due to the inseparable mixture of identity-relevant (e.g., eyebrows, nose) and identity-irrelevant information (e.g., expression, accessories, makeup) in the descriptions. That is to say, although multiple cross-modal algorithms have been developed, it is important to note that text-based face recognition is limited and warrants further attention. 3 Method The objective of our research endeavors is to expand the capabilities of diffusion models beyond generation, enabling them to accomplish text-to-image face recognition. In Sec. 3.1, we first provide a comprehensive analysis to establish the theoretical connection between DPMs and recognition problems, serving as the foundation of our framework. The general problem formulation and specific algorithmic details pertaining to our methodology are subsequently presented in Sec 3.2 and Sec. 3.3, respectively. 
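Relating to the dataset issue raised above, namely that existing descriptions mix identity-relevant and identity-irrelevant information, the toy sketch below filters CelebA-style binary attributes before composing a prompt; the particular attribute split is an assumption for illustration only.

```python
# Toy separation of identity-relevant CelebA attributes from identity-irrelevant
# ones (expression, accessories, makeup) before building a textual description.
# The exact split below is an assumption, not the partition used in the paper.
IDENTITY_RELEVANT = {
    "Male", "Big_Nose", "Big_Lips", "Oval_Face", "Chubby", "Double_Chin",
    "Arched_Eyebrows", "Bushy_Eyebrows", "Bald", "Bangs", "Blond_Hair",
    "Brown_Hair", "Wavy_Hair", "No_Beard",
}
IDENTITY_IRRELEVANT = {"Smiling", "Eyeglasses", "Heavy_Makeup", "Wearing_Hat"}

def attributes_to_prompt(attributes: dict) -> str:
    """Keep only positive, identity-relevant attributes and join them as a prompt."""
    kept = [name.replace("_", " ") for name, present in attributes.items()
            if present and name in IDENTITY_RELEVANT]
    return ", ".join(sorted(kept))

sample = {"Male": 1, "Big_Nose": 1, "Smiling": 1, "Wavy_Hair": 1, "Eyeglasses": 1}
print(attributes_to_prompt(sample))   # "Big Nose, Male, Wavy Hair"
```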
4 \fSample Diffusion Reverse Refinement Figure 2: Theoretical depiction of probability density transport. Dots of the same color correspond to identical subjects. A random variable drawn from XT is transported along the reverse path, contrary to the diffusion direction, to a sample subject to X0 through DPMs. Note that feature similarities are not explicitly regulated during this process. Ultimately, face recognition is accomplished through the refinement module, which further adjusts the feature distances within the space F. See Sec. 3.1 for details. 3.1 Theoretical Analysis Let zt represent a series of random variables indexed by time t \u2208[0, T] in a diffusion process, then the initial samples z0 \u223cX0, which exhibit independent and identically distributed (i.i.d) characteristics, will undergo an evolution leading to zT \u223cXT while gradually introducing additional noise. The process is illustrated in Fig. 2 by the dashed arrow line, depicting the transition from samples in X0 to those in XT . Typically, XT is a simple distribution to facilitate straightforward sampling during the reverse process. It has been clarified in [4] that this forward diffusion process in DPMs can be formulated to the stochastic It\u00f4 process: dzt = \u03bet(zt)dt + \u03c3tdWt, (1) where \u03bet and \u03c3t are the drift and diffusion coefficient respectively, and Wt is the standard Wiener process. The Eq. (1) signifies that the diffusion random variables under the Markov assumption is influenced by both deterministic and stochastic processes simultaneously. This process has been demonstrated to be mathematically equivalent to the n-dimensional Fokker-Planck equation, which describes the partial derivative of the probability density \u03c1t with respect to time. The equation takes the following form: \u2202\u03c1t \u2202t = \u2207\u00b7 (\u03c1tvt) + Dt\u2206\u03c1t, (2) where vt is the time-varying velocity field and Dt = \u03c32 t /2 denotes the diffusivity. Eq. (2) offers an alternative perspective for understanding the diffusion process. Rather than adopting the particle-centered viewpoint found in Eq. (1), it allows us to perceive diffusion as a transportation mechanism between probability distributions. If the duration is sufficiently long, the distribution XT will ultimately converge to a standard Gaussian distribution when vt is specifically selected for degradation in each step, irrespective of the initial distribution X0, which has been demonstrated 5 \fUNet UNet Diffusion Distribution Transport Space Reverse Feature Space Figure 3: The complete training procedure and network architecture. As one of the bifurcated branches of encoder E, the intermediate feature z0 is extracted as the initial sample in the diffusion process. The diffusion model D\u03b8, utilizing the UNet structure and taking the vectors tokenized by \u03c4 as inputs, is subsequently employed iteratively as the reverse of the diffusion process. Once \u02c6 z0 is obtained, the refinement network R maps it to f p within the feature space F. The final decision of the similarity between the facial image and the textual description is based on the distance between f p and f x. in [3]. The objective of generating data from random noise is accomplished by reversing this straightforward forward process, wherein the direction and magnitude of each step are predicted by DPMs. 
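The data flow summarised in the Figure 3 caption can be written down as a small skeleton: the frozen encoder E split into Ez and Ef, the conditioned denoiser D_theta applied iteratively from Gaussian noise, and the refinement module R producing the text feature that is compared with the image feature. All modules below are shape-only placeholders, not the trained networks.

```python
import numpy as np

class DiFaceSketch:
    """Skeleton of the data flow in Figure 3; every module is a placeholder."""

    def __init__(self, dim=256, steps=10, seed=0):
        self.dim, self.steps = dim, steps
        self.rng = np.random.default_rng(seed)
        self.W_f = self.rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.W_r = self.rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def Ez(self, image):                      # image branch of the frozen encoder E
        return image.reshape(-1)[: self.dim]

    def Ef(self, z):                          # remaining layers of E: latent -> feature f_x
        return np.tanh(self.W_f @ z)

    def D_theta(self, z, t, prompt_vec):      # placeholder prompt-conditioned denoiser
        return 0.9 * z + 0.1 * prompt_vec

    def R(self, z_hat):                       # refinement module mapping X_0 into F
        return np.tanh(self.W_r @ z_hat)

    def text_feature(self, prompt_vec):
        z = self.rng.standard_normal(self.dim)        # z_T ~ N(0, I)
        for t in reversed(range(self.steps)):
            z = self.D_theta(z, t, prompt_vec)        # iterative reverse of the diffusion
        return self.R(z)                              # f_p = R(z_hat_0)

    def similarity(self, image, prompt_vec):
        f_x, f_p = self.Ef(self.Ez(image)), self.text_feature(prompt_vec)
        return float(f_x @ f_p / (np.linalg.norm(f_x) * np.linalg.norm(f_p)))

m = DiFaceSketch()
print(m.similarity(np.random.default_rng(1).standard_normal((16, 16)),
                   np.random.default_rng(2).standard_normal(256)))
```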
In order to restore the initial distribution X0, diffusion models are trained to predict the disparity between Xt\u22121 and Xt, utilizing the provided values of t and zt as input. To be specific, for the DDPM algorithm, given a group of random time step t and sample z0, the diffusion model parameterized by \u03b8 (denoted by D\u03b8) during training is essentially searching for argmin \u03b8 E \u0002 d \u0000D\u03b8(zt, t), d(zt, zt\u22121) \u0001\u0003 , (3) where d(\u00b7, \u00b7) is the distance that is specifically detailed in Sec. 3.2 and Sec. 3.3. When D\u03b8 is properly trained, the reverse of the diffusion process, initiated at \u02c6 zT which is sampled from XT for generation, is accomplished through an iterative procedure described by \u02c6 zt\u22121 = f \u0000\u02c6 zt, D\u03b8(\u02c6 zt, t) \u0001 . (4) The Eq. (4) demonstrates that \u02c6 z0 can be obtained by some specific function f when \u02c6 zT is given. The reverse process is visually represented in Fig. 2 through blue arrow lines. Under ideal conditions, it should be possible to to generate a sample subject to X0 by sampling from XT distribution through multiple iterations, which is sufficient for unconditional generative algorithms. While this approach successfully accomplishes the transportation from one distribution to another, thereby facilitating the resolution of cross-modal problems, it is not entirely appropriate for recognition tasks, whose benchmark is based on rigorous feature similarity. In the task of face recognition, it is expected that the distances between inter-class embeddings should be noticeably greater than the distances between intra-class embeddings. In fact, the explicit assurance of this requirement is not deemed necessary in the LDMs currently tailored for content generation. Owing to their powerful decoders, LDMs possess the ability to produce satisfactory images, provided that the resulting sample \u02c6 z0 approximately subjects to the distribution of X0. However, it is imperative to emphasize that the distances of samples in X0 from the identical subject (e.g., blue dots) in Fig. 2 are not mandated to be closer than those of different subjects (e.g., the upper blue dot and purple dot). The latent embeddings \u02c6 z0 are consequently unsuitable for direct face recognition, a conclusion that is further substantiated through experimental evidence presented in Sec. 4.3. In order to ensure the viability of the framework for the face recognition task, we have undertaken the specific design of an additional network, denoted as R, with the purpose of further refining the rough estimate \u02c6 z0 by mapping it into a more reasonable feature space referred to as F. When undergoing rearrangement through the application of R, the refined features R(\u02c6 z0) within F, as depicted in Fig. 2, are clustered based on their corresponding identities. Details about the structure and implementation are clarified in Sec. 3.3. 6 \fAlgorithm 1 Training D\u03b8 Input:Facial images x; description prompt p; encoder E; maximum time step T Output:Trained diffusion model D\u03b8 1: while not converged do 2: z0 \u2190Ez(x) 3: t \u223cU({1, ..., T}) 4: \u03f5 \u223cN(0, I) 5: zt \u2190\u221a\u00af \u03b1tz0 + \u221a1 \u2212\u00af \u03b1t\u03f5 6: L \u2190\u2225D\u03b8(zt, t, p) \u2212\u03f5\u22252 7: Update \u03b8 to reduce L 3.2 Problem Formulation for Cross-Modal Face Recognition After establishing the theoretical framework in Sec. 
3.1, our subsequent focus is directed towards the specific problem of text-to-image face recognition. Based on the LDMs, it is reasonable to take the lower-dimensional i.i.d latent variables as z, rather than higher-dimensional images in the original DPMs. Furthermore, X0 is considered to be reconstructed by D\u03b8 from XT , guided by prompts denoted by p. That is to say, the model D\u03b8 is anticipated to predict the added noise through vectorized prompts p. Since the diffusion process is deterministic, the series of zt is able to be simply obtained when zT sampled from Gaussian distribution N(0, I). Taking zt, t and p as inputs, the loss function for training D\u03b8 is LLDM = Ez0,t,p,\u03f5\u223cN (0,I) \u2225D\u03b8(zt, t, p) \u2212\u03f5\u22252 , (5) where t is sampled from a uniform distribution U({1, ..., T}) and \u2225\u00b7\u2225is chosen to be the \u21132 norm in this work. In a manner akin to the process described in [11], the initial variable z0 undergoes degradation to yield zt through a function that is associated with the noise \u03f5, employing the reparameterization trick. Specifically, we employ a pretrained CNN-based network, denoted as E, which possesses sufficient capabilities in conventional face recognition, as the encoder. Given an input image x in RGB space and the encoder E, the corresponding feature denoted by f x for recognition is f x = E(x) = Ef \u0000Ez(x) \u0001 , (6) where the encoder E is divided into two branches, namely Ez and Ef. In our work, Ez(x) serves as the initial sample z0 for recovery during the training of D\u03b8, which means z0 = Ez(x). This bifurcation is also clearly described in Fig. 3. To achieve the appropriate mapping from X0 to F, the refinement network R is trained after the completion of training of D\u03b8. We utilize the cosine embedding loss to train R, meaning that the loss function LR is defined as LR = E\u02c6 z0 \u0014 R(\u02c6 z0) \u2225R(\u02c6 z0)\u2225\u00b7 E(x) \u2225E(x)\u2225 \u0015 , (7) where \u02c6 z0 is attained through the iteration described in Eq. (4), given a fix time step t and textual description p. Finally, the textual feature based on the description of a facial image is obtained by f p = R(\u02c6 z0). The rest layers Ef of E progressively encode z0 into f x, enabling a conclusive comparison with the text-based feature f p within the feature space F for the purpose of recognition. 3.3 Algorithm and Architecture In this section, we present comprehensive information regarding the complete design of algorithms and structures illustrated in Fig. 3. The encoder E used in this study is a conventional face recognition network constructed by ResNet [47], incorporating the marginal loss proposed by ArcFace [14]. Initially, we train the model on the specific task of face recognition with pure facial images x, until both f x and z0 achieve a significantly high level of accuracy. The parameters of the encoder are then completely fixed throughout all subsequent procedures. Once the encoder is adequately prepared, z0, the output of the intermediate layers, is obtained by Ez(x) and utilized for crucial training on D\u03b8 in the continuous steps. The tokenizer, denoted as \u03c4, serves as the initial step in the diffusion 7 \fAlgorithm 2 Training R Input:Facial images x; description prompt p; inference steps \u02dc T; encoder E; Output:Trained refinement network R 1: while not converged do 2: zT \u223cN(0, I) 3: for t = \u02dc T, ..., 1 do 4: \u02c6 zt\u22121 \u2190Eq. 
(9) 5: L \u2190 R(\u02c6 z0) \u2225R(\u02c6 z0)\u2225\u00b7 E(x) \u2225E(x)\u2225 6: Update parameters of R to reduce L 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate Training Step=2000 Training Step=16000 Training Step=20000 (a) 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate T=5 T=10 T=20 T=30 (b) Figure 4: Intermediate observations during training steps evaluated on the validation set. process, transforming the prompts p into vectors. The main outline of the algorithm for training D\u03b8 is shown in Alg. 1. Given the step t sampled from a uniform distribution U({1, ..., T}) and the noise \u03f5 sampled from standard normal distribution N(0, I), the diffused product zt at step t is given by zt = \u221a\u00af \u03b1tz0 + \u221a 1 \u2212\u00af \u03b1t\u03f5, (8) where \u00af \u03b1t is a hyperparameter controlling the added noise in each step of diffusion process. In fact, Eq. (8) is equivalent to zt = \u221a\u03b1tzt\u22121 + \u221a 1 \u2212\u03b1t\u03f5, which indicates zt is sampled from the normal distribution N(\u221a\u03b1tzt\u22121, (1 \u2212\u03b1t)I) with \u00af \u03b1t = Qt i \u03b1i. The loss function described in Eq. (5) is then computed using the prompts p and the corresponding zt, which are assigned to the variable L in Alg. 1 to realize the optimization process. Before training the refinement network R, we need to iteratively sample \u02c6 z0, which is initialized from zT , after fixing the parameters of D\u03b8. During the sampling process, the parameter \u02dc T is designated as a hyperparameter that determines the length of inference step. We designate the variable p as one of the direct inputs to D\u03b8, omitting the presence of the tokenizer \u03c4 for the sake of simplicity. Similar to the approach employed in DDPM [3], the sampling function f in Eq. (4) is executed through the following procedure: zt\u22121 = 1 \u221a\u03b1t \u0012 zt \u2212 \u03b2t \u221a1 \u2212\u00af \u03b1t D\u03b8(zt, t, p) \u0013 + \u03c3t\u03b7, (9) where \u03b2t = 1 \u2212\u03b1t = \u03c32 t with \u03b7 \u223cN(0, I) for t > 1 and \u03b7 = 0 for t = 1. We proceed to train the refinement network R using the cosine similarity defined in Eq. (7), based on the values of z0 that have already been obtained through this sampling procedure. The features obtained from a sufficiently trained encoder E exhibit reduced inner-class distances compared to inter-class distances, making them particularly suitable for face recognition. Due to the incorporation of E(x) as a guiding factor, the refinement module R successfully transforms the space X0 into F by employing a lightweight network consisting solely of PReLU [48] and linear layers. This effectiveness is further supported by the experimental outcomes presented in Sec. 4. 8 \fTable 1: Detailed partitions on CelebA.The quantity is displayed in each cell. Purpose Subjects Images Training 5000 96097 Validation 2000 40242 Test 3177 61260 4 Experiments In this section, we present a thorough panorama of the experiments conducted on our cross-modal DiFace model, designed to achieve text-to-face recognition, through rigorous evaluations. The experimental settings in Sec. 4.1, encompassing the utilized datasets, the benchmark criteria and the specific parameters, are serving as the foundation for subsequent analysis. Results of evaluations and analyses of our algorithm are impartially presented in Sec. 4.2 and Sec. 4.3. In Sec. 
4.4, we additionally provide visualized examples to elucidate the inherent difficulties arising from the intrinsic imprecision of verbal descriptions compared to texture information. These challenges cannot be surmounted by algorithms. 4.1 Experimental Setting 4.1.1 Datasets Existing facial image databases containing a vast number of images paired with corresponding identity labels and descriptive metadata are considered insufficient, as discussed in further detail in Sec. 2.2. Given the circumstances, the CelebA dataset employed for the evaluation could be considered the most suitable option for this research owing to its extensive compilation of 202,599 celebrity images, each annotated with 40 binary attributes. We subsequently transform a portion of annotations related to identity features into linguistic prompts, and partition the dataset into distinct training, validation and test sets without any intersection. The scale of each component of the reorganized CelebA is summarized in Tab. 1. Furthermore, a subset of the WebFace [49] is applied for the pretrained face recognition encoder E. AgeDB [50], LFW [51], CPLFW [52], CALFW [53], CFP-FF/FP [54] are used for ablation study. 4.1.2 Criteria We employ a verification (1:1) approach to assess the performance of DiFace, as indicated by the success rate of accurately predicting the positive or negative pairs within the test set. To be more specific, each of the cosine similarity S(x, p) between normalized feature of images and prompts is obtained through S(x, p) = f x \u00b7 f p \u2225f x\u2225 \r \rf p \r \r. (10) The i-th Boolean prediction \u0393(x, p)i is defined \u0393(x, p)i = \u001a 1 S(x, p) \u2265s, 0 S(x, p) < s , (11) where s is the threshold defined according to the best performance in the validation set. More discussions about the details of s are provided in Sec. 4.2.1. Let i represent the index of the i-th pair (x, p) sampled from the test set, where a total of N pairs are considered. If yi denotes the ground truth label, wherein it takes the value of 1 only if both x and p are chosen from the same identity, and 0 otherwise, the accuracy rate r can be straightforwardly given by r = 1 N N X i=1 1(\u0393(x, p)i = yi), (12) where 1 represents the indicator function. We additionally perform the more challenging identification (1:N) benchmark to further investigate the efficacy of DiFace. The gallery set consists of images depicting various different subjects, while the corresponding prompts serve as the probe. Specifically, the feature similarity between each description pi indexed by i in the probe set and all N images xj indexed by j in the gallery set is calculated, and the top k scores are ranked. The indexes of the top k scores constitute the collection ck i , which is defined by ck i = argmax I\u2282[N]:|I|=k X j\u2208I S(xj, pi), (13) 9 \f0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate Refinement No Refinement (a) 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate 0.0 0.2 0.4 0.6 0.8 1.0 True Positive Rate Refinement No Refinement (b) Figure 5: ROC curves of the optimal model on the validation (a) and test (b) set. where [N] = {1, 2, . . . , N}. In our work, it is chosen as the ground truth that the i-th paired images and prompts, denoted as (x, p)i, correspondingly describe the identical subject. The prediction, represented by \u03b3(x, p)i, is defined by \u03b3(x, p)i = \u001a 1 i \u2208ck i , 0 i / \u2208ck i . (14) Similar to Eq. 
(12), the accuracy for identification task is r = 1 N N X i=1 1(\u03b3(x, p)i = 1). (15) Due to the interference of erroneous images within the gallery, the task of identification becomes considerably more challenging when contrasted with verification. The value of both benchmarks, accompanied by relevant experiments, is extensively deliberated in Sec. 4.2. 4.1.3 Parametric settings The facial images in RGB channels utilized in this study undergo alignment, cropping and resizing to achieve a resolution of 112 \u00d7 112 pixels. The final evaluated DiFace model is trained starting from a learning rate of 1 \u00d7 10\u22124 and a mixed precision of BF16. Both the batch size and gradient accumulation steps are set to 4 with the maximum gradient norm restricted to 1. We incorporate the exponential moving average (EMA) technique for the models\u2019 weights, along with an 8-bit Adam optimizer. Additionally, the feature scaling factor and the feature embedding dimension are applied to 0.3 and 512 respectively. 4.2 Evaluations on DiFace In this section, we present a comprehensive account of the experimental procedure and provide a thorough analysis of the results obtained. 4.2.1 Training procedure Driven by algorithms in Sec. 3.3, the controlled diffusion model is initially trained to achieve convergence, as evidenced by the reduction in loss and the resultant improvement in validation accuracy. The determination of thresholds and their corresponding accuracies is founded upon the receiver operating characteristic (ROC) curve depicted in Fig. 4. In this graphical representation, the axes represent the true positive rate (TPR) and false positive rate (FPR). The values of TPR and FPR fluctuate relative to the threshold s, making them functions of s that can be denoted as T(s) and F(s). As the number of training steps increases, the ROC curve in Fig. 4a demonstrates improvement. The determination of the threshold for the similarity score s in Eq. (11) is achieved in detail by maximizing the expression: s = argmax s T(s) \u2212F(s). 10 \f1 2 3 4 5 List number 60 70 80 90 Accuracy (%) 79.2 77.4 78.5 77.9 77.5 72.9 72.0 72.8 72.6 72.6 Refinement No Refinement Figure 6: The verification accuracy is evaluated on five different pair lists randomly extracted from the test set. The consistently high and stable accuracy demonstrated by our model indicates its reliable performance, suggesting that the test data is unbiased without any filtration. Based on the line chart depicted in Fig. 4a, we opt to select the checkpoint at step 20,000 to undergo a finetune will learning rate of 5 \u00d7 10\u22125 before the final decision to ensure the stability and reliability of subsequent experiments. The designated step \u02dc T for inference significantly impacts the performance of recognition, as illustrated in Fig. 4b. Excessively large values of \u02dc T result in increased inference time, while excessively small values of \u02dc T lead to a decrease in accuracy. Therefore, we tend to choose a predetermined time series that strikes a balance between efficiency and effectiveness. Based on these considerations and the experimental results depicted in Fig. 4b, we ultimately determine that the value of \u02dc T should be set to 20 during the subsequent tests.. Once the training process of the diffusion model is completed, we proceed to undertake the individual training on the refinement model according to Alg. 2. 
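To make this step concrete, the following is a minimal PyTorch-style sketch of the refinement training in Alg. 2, assuming 512-dimensional features (Sec. 4.1.3) and an R built only from linear and PReLU layers as stated in Sec. 3.3; the layer count, optimizer settings, and tensor names are illustrative assumptions rather than the actual implementation.

```python
# Minimal sketch (not the authors' code) of the refinement step in Alg. 2.
# Layer sizes, optimizer settings, and tensor names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Refiner(nn.Module):
    """Lightweight mapping from the diffusion estimate z0_hat to the
    recognition feature space F (linear + PReLU layers only)."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.PReLU(),
            nn.Linear(dim, dim), nn.PReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, z0_hat):
        return self.net(z0_hat)

def train_refiner(refiner, loader, epochs=10, lr=1e-4, device="cpu"):
    """loader yields (z0_hat, f_x): the latent sampled via Eq. (9) and the
    fixed encoder feature E(x) it should align with, following Eq. (7)."""
    opt = torch.optim.Adam(refiner.parameters(), lr=lr)
    refiner.to(device).train()
    for _ in range(epochs):
        for z0_hat, f_x in loader:
            z0_hat, f_x = z0_hat.to(device), f_x.to(device)
            f_p = refiner(z0_hat)
            # Cosine-embedding objective: maximize cos(R(z0_hat), E(x)),
            # i.e. minimize 1 - cosine similarity.
            loss = (1.0 - F.cosine_similarity(f_p, f_x, dim=-1)).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return refiner
```

Note that the diffusion model and the encoder stay frozen during this stage; only R is updated.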
During the process of enhancing the capacity of the refinement model, the distance between the feature embeddings $R(\hat{z}_0)$ and $E(x)$ is continuously reduced. The resultant ROC curves for face verification, represented by the red lines, are illustrated in Fig. 5 to showcase the performance of the optimal model on both the validation and test datasets. Additionally, Fig. 5a provides evidence that the refinement module effectively enhances recognition performance, a topic further elaborated by the ablation study in Sec. 4.3.

4.2.2 Results of face verification
During the testing phase, we employ a random selection process to compile a list comprising 12,000 paired facial images and corresponding description prompts. It is noteworthy that half of these pairs belong to the same identity, while the remaining pairs involve distinct identities. In order to mitigate the occurrence of chance factors, these procedures are repeatedly executed to yield a total of five distinct lists in the final assessments.

[Figure 7: The identification accuracy with respect to k is evaluated on the pair list randomly extracted from the test set. The accuracy curves manifest the substantial efficacy of our model in discerning and eliminating irrelevant images solely based on textual descriptions.]

The success rates of verification for all pair lists in the benchmark are depicted in Fig. 6 in order to present the stability of our model. Fig. 6 further demonstrates that our DiFace approach has attained an accuracy of nearly 80% in text-to-face recognition, surpassing mere stochastic effectiveness. Besides, the red line in Fig. 5b also displays the ROC result for pair list number 1. Indeed, accurately identifying a face solely based on a few linguistic prompts is a challenging task, even for humans. Given the substantial disparity between textual descriptions and visual images, our algorithm has demonstrated impressive performance, exhibiting a sufficiently high level of accuracy. To be precise, it is possible for two distinct individuals to possess comparable descriptions or even identical prompts, owing to the limited annotations present within the database. Considering this constraint, it is inevitable that a portion of misidentifications will occur, thus making it reasonable for the theoretical upper limit of text-to-image face recognition to be significantly below 100 percent. The performance of DiFace has demonstrated its capacity for verification in situations where image-image matches are prohibited.

4.2.3 Results of face identification
In accordance with the criteria outlined in Sec. 4.1, we conduct the more challenging task of face identification to evaluate the DiFace model. 
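Before turning to the results, the ranking protocol of Eqs. (13)–(15) can be summarized in a few lines of NumPy: normalize the probe (text) and gallery (image) features, compute all pairwise cosine similarities, and count a probe as correct when its own gallery image falls in the top k. The sketch below is illustrative only; array shapes and names are assumptions.

```python
# Minimal sketch of the identification (1:N) accuracy of Eqs. (13)-(15);
# feature arrays and shapes are illustrative assumptions.
import numpy as np

def topk_identification_accuracy(probe_feats, gallery_feats, k=5):
    """probe_feats: (N, D) text-based features f^p; gallery_feats: (N, D)
    image features f^x. Ground truth: probe i corresponds to gallery i."""
    P = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    G = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    S = P @ G.T                        # cosine similarities S(x_j, p_i), shape (N, N)
    topk = np.argsort(-S, axis=1)[:, :k]   # indices forming the set c_i^k of Eq. (13)
    hits = (topk == np.arange(len(P))[:, None]).any(axis=1)   # Eq. (14)
    return hits.mean()                 # accuracy r of Eq. (15)
```

For example, with a 300-image gallery and k = 30 this quantity corresponds to the ~63% accuracy reported below.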
A total of 300 images and their corresponding textual descriptions are randomly chosen from the test set to form a list. Each pair originates from a distinct subject, indicating that the remaining 299 pairs are considered noise in relation to each individual pair.

[Figure 8: The ROC curves representing the distinctive verification abilities of the face recognition model and VAE across 6 databases including AgeDB, LFW, CPLFW, CALFW, CFP-FF, CFP-FP.]

For every defined k, the accuracy is computed in accordance with Eq. (14), and the outcomes are presented in Fig. 7. The accuracy rate r, represented by the red line, increases as k increases, as a larger value of k indicates a wider range for the matching between texts and facial images. Based on Fig. 7, we can draw the conclusion that our framework holds significant value in terms of filtering, particularly in scenarios where image-image matching is prohibited. To elaborate further, DiFace effectively eliminates a substantial number of incorrect candidates with a commendable level of accuracy. For instance, in the experiment where k is set to 30, the model provides a prediction accuracy of 63% while excluding 90% of the interference candidates. In circumstances where only textual descriptions are provided, our algorithm discerns and eliminates inconsequential facial images with high accuracy. This achievement is particularly noteworthy when considering the inherent limitations of textual prompts, which lack the informative richness and contextual nuances present in visually-driven images.

4.3 Ablation Study
In order to conduct a comprehensive analysis of our DiFace model, we undertake an ablation study that focuses on two modules, namely the refinement network R and the encoder E.

4.3.1 Refinement network
We have presented a theoretical analysis regarding the rationale behind the refinement network in Sec. 3.1. By effectively mapping samples from the space $X_0$ to features in $F$, we achieve enhanced clustering of embeddings, thereby improving the performance of face recognition. In this section, we perform experiments to validate this theoretical analysis through empirical evidence. We exhibit the recognition performance using the similarity scores of $z_0$ and $\hat{z}_0$, independent of the involvement of the refinement module R. This performance is represented by the blue dashed lines in Fig. 5, Fig. 6 and Fig. 7. In contrast, the recognition performance based on the scores of $f^x$ and $f^p$ is illustrated by the red lines for comparison. 
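The verification comparison underlying these curves follows Eqs. (10)–(12) together with the threshold rule s = argmax_s T(s) − F(s) of Sec. 4.2.1. A minimal sketch, assuming paired feature arrays and binary labels are available (scikit-learn's roc_curve is used purely for convenience and is not part of the original pipeline):

```python
# Minimal sketch of threshold selection and verification accuracy,
# following Eqs. (10)-(12) and s = argmax_s T(s) - F(s); inputs are assumed arrays.
import numpy as np
from sklearn.metrics import roc_curve

def cosine_scores(f_img, f_txt):
    """Row-wise cosine similarity S(x, p) between paired features (Eq. (10))."""
    f_img = f_img / np.linalg.norm(f_img, axis=1, keepdims=True)
    f_txt = f_txt / np.linalg.norm(f_txt, axis=1, keepdims=True)
    return np.sum(f_img * f_txt, axis=1)

def pick_threshold(val_scores, val_labels):
    """Choose s maximizing TPR(s) - FPR(s) on the validation pairs."""
    fpr, tpr, thresholds = roc_curve(val_labels, val_scores)
    return thresholds[np.argmax(tpr - fpr)]

def verification_accuracy(test_scores, test_labels, s):
    preds = (test_scores >= s).astype(int)   # Eq. (11)
    return np.mean(preds == test_labels)     # Eq. (12)
```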
The observation from both figures reveals that the complete model, encompassing the refinement network R, significantly outperforms the conditions in which R is absent. 13 \fPositive Pairs Negative Pairs Positive Prediction Negative Prediction Female, Black Hair, High Cheekbones, No Beard, Oval Face, Straight Hair Male, 5 O\u2019 Clock Shadow, Brown Hair, Straight Hair Male, Bags Under Eyes, Big Nose, Gray Hair, No Beard, Oval Face, Receding Hairline, Straight Hair Female, Arched Eyebrows, Bushy Eyebrows, Narrow Eyes, No Beard, Pointy Nose, Wavy Hair Female, Brown Hair, No Beard, Pointy Nose, Wavy Hair Female, Bangs, Big Lips, No Beard, Wavy Hair Female, Bags Under Eyes, Bangs, No Beard Female, Arched Eyebrows, Big Lips, High Cheekbones, No Beard, Pointy Nose Female, No Beard Male, 5 O'Clock Shadow, Narrow Eyes, No Beard, Pointy Nose Male, 5 O'Clock Shadow, Bags Under Eyes, High Cheekbones, No Beard Female, No Beard, Oval Face, Wavy Hair Male, Black Hair, High Cheekbones, No Beard, Oval Face Female, No Beard, Pale Skin, Wavy Hair Female, Brown Hair, High Cheekbones, No Beard, Oval Face, Rosy Cheeks, Wavy Hair Male, Bald, Big Lips, Big Nose, Chubby, High Cheekbones, No Beard, Oval Face Figure 9: Visualization of face verification. Correct predictions are distinguished by a green background, while incorrect ones are marked with a red background. To be more precise, the ROC curve of feature f exhibits greater elevation compared to that of the latent variable z in Fig. 5. This observation implies that employing R for verification purposes results in a higher TPR compared to z at an equivalent FPR. Moreover, the average accuracy in Fig. 6 is enhanced by an additional 5.52 percentage points when employing the refinement network R for the face verification task. Likewise, the findings depicted in Fig. 7 further affirm that the refinement module, chosen based on its performance in the verification benchmark within the validation set, exhibits commendable efficacy in the context of the identification task as well. In conclusion, the significance of the refinement network, as discussed in Sec. 3.1, has been substantiated through those experiments in Sec. 4.2. Without the inclusion of this module, the pure diffusion network solely accomplishes the transformation of probability density from the language space to the latent space X0. The actual recognition process is ultimately achieved through the utilization of R, which adjusts the feature distance specifically tailored for face recognition. 14 \fMale, Bags Under Eyes, Bald, Big Lips, Big Nose, Chubby, Goatee, Narrow Eyes, Oval Face Female, Big Lips, Narrow Eyes, No Beard, Straight Hair 0.49 0.26 0.22 0.20 Male, Oval Face, Pointy Nose, Receding Hairline, Sideburns 0.28 0.22 0.21 Male, Bags Under Eyes, Brown Hair, No Beard 0.35 0.33 0.31 Female, Bangs, Big Lips, Blond Hair, No Beard, Straight Hair Female, High Cheekbones, No Beard, Oval Face 0.30 0.30 0.28 0.27 0.27 0.26 0.24 0.24 0.24 0.23 Probe Gallery 0.22 0.20 0.20 0.18 0.14 0.13 0.12 0.35 0.33 0.31 0.29 0.27 0.26 0.23 0.20 0.19 0.18 0.17 0.17 0.16 0.19 0.17 0.17 0.16 0.16 0.29 0.28 0.28 Figure 10: Visualization of face identification. Each image originates from distinct subjects. The ranking of predicted similarity decreases from left to right, accompanied by the scores presented below the images. The images enclosed within the yellow frame represent the ground truth facial image corresponding to the description provided on the left. 
4.3.2 Encoder In the original framework of SD, a VAE structure assumes the role of an encoder responsible for encoding images into embeddings within a latent space. These embeddings can be regarded as features in some sense, as they maintain a vital connection to the input images, enabling the subsequent reconstruction of the images through the decoder. However, it is important to note that despite their effectiveness in generative tasks, such embeddings are inherently unsuitable for recognition purposes. This limitation has been discussed in Sec. 3.1, where a comprehensive theoretical analysis has been presented. Therefore, in this subsection, we present experimental evidence that highlights both the necessity and feasibility of utilizing a pretrained face recognition network as the dedicated encoder for such purposes. The experiment is conducted by directly comparing the recognition ability between VAE-encoded embeddings and feature vectors derived from the face recognition model employed in this study. We employ the identical VAE encoder that is utilized in the work of SD, in addition to our face recognition model which is trained using the technique introduced by [14]. Based on the ROC curves depicted in Fig. 8, it is evident that the face features represented by the red lines exhibit superior performance compared to the VAE-encoded embeddings represented by the blue dashed lines. It is important to note that the VAE-encoded embeddings play a crucial role as the target for the original diffusion models in SD to reverse. Due to their inherent limitations in face recognition, if the original VAE encoder is retained for training our framework, the whole performance will be further compromised. Moreover, the findings additionally suggest that the embeddings within the sapce X0 are unsuitable for the purpose of face recognition when compared to the clustered feature vector in F. This observation again strengthens the evidence presented in Sec. 3.1. 15 \f4.4 Visualization In order to facilitate a comprehensive comprehension of the text-to-image face recognition process, we hereby present the visualized outcomes within this specific section. Demonstrating a portion of our authentic results visually becomes imperative due to the inclusion of textual language as one of the modalities in this investigation, thereby distinguishing it from the conventional face recognition approach that solely relies on image matching. In contrast, akin to the ordinary face recognition paradigm, the objectives of the tasks at hand are further categorized into verification and identification as well. In Fig. 9, we illustrate the visualized results in the verification task. The correct predictions are depicted against a green background, whereas the incorrect ones are represented with a red background. As an example, our model demonstrates precise negative classification for the negative pairs displayed in the four images at the bottom right corner indicated by a green background, where the DiFace takes into account various details. Notably, in the third image, the model identifies a discrepancy between the facial image and the accompanying descriptions such as \u201cpale skin\" or \u201cwavy hair,\" despite partial alignment with the description, such as the attribute \u201cfemale.\" Regarding the failures denoted by the red background, certain instances can be attributed to sparse descriptions, such as the first pair situated in the upper right quadrant and the second pair located in the lower left quadrant. 
In these cases, the limited information provided in the descriptions hinders the model\u2019s ability to accurately match the facial attributes, leading to erroneous predictions. Additional failures can arise due to the presence of ambiguous descriptions or indeterminate facial features. This can be exemplified by the contradictory nature between the description of \u201c5 o\u2019clock shadow\" and \u201cno beard\" in the first pair located in the lower left quadrant. Furthermore, in the fourth pair situated in the upper right quadrant, the occlusion of eyebrows further contributes to potential difficulties or inaccuracies. The visualization of the identification process are shown in Fig. 10, wherein 6 descriptions from the probe set have been randomly chosen, and their corresponding matched facial images of top 8 ranks are accompanied underneath by their respective scores. The ground truth image of each description is precisely delineated by a yellow frame, while every image pertains to a distinct subject. In the initial two rows, the facial images exhibiting the highest degree of similarity perfectly align with the ground truth. Despite the possibility of an image enclosed within a yellow frame not always attaining the foremost rank, as observed in rows three to six, it is noteworthy that images showcasing higher similarity scores on their left consistently adhere to the textual description in an impeccable manner. The reason behind this phenomenon can be attributed to the inherent imprecision of textual information when contrasted with the intricate details present in visual textures, rather than indicating any deficiency in the capabilities of DiFace. Evidently, a majority of the facial images depicted in Fig. 10 conform closely to the provided descriptions, thereby serving as a demonstration to the efficacy of our model in successfully discerning and prioritizing the relevant images. These instances highlight the complexity and potential pitfalls associated with interpreting facial attributes, which can lead to erroneous outcomes in text-to-image face recognition technology. 5" + } + ], + "Dong Zheng": [ + { + "url": "http://arxiv.org/abs/2403.16074v2", + "title": "Finding Candidate TeV Halos among Very-High Energy Sources", + "abstract": "We search for possible pulsar TeV halos among the very-high-energy (VHE)\nsources reported in different VHE surveys, among which in particular we use the\nresults from the first Large High Altitude Air Shower Observatory (LHAASO)\ncatalog of $\\gamma$-ray sources. Six candidates are found. They share similar\nproperties of containing a middle-aged, gamma-ray--bright pulsar in their\npositional error circles (the respective pulsars are J0248+6021, J0359+5414,\nJ0622+3749, J0633+0632, J2006+3102, and J2238+5903), being in a rather clean\nfield without any common Galactic VHE-emitting supernova remnants or (bright)\npulsar wind nebulae (PWNe), and showing an absence of any gamma-ray emissions\nin 0.1--500\\,GeV after removing the pulsars' emissions. Combining these\ncandidates with several reported (candidate) TeV halos, we obtain the\nrelationships between their luminosity at 50\\,TeV ($L_{\\rm 50TeV}$) and the\ncorresponding pulsars' spin-down energy ($\\dot{E}$), which are $L_{\\rm\n50TeV}\\sim \\dot{E}^{0.9}$ and $L_{\\rm 50TeV}/\\dot{E}\\sim 6.4\\times 10^{-4}$.\nThe relationships are nearly identical to previously reported ones. 
We probe\npossible connections between the extension sizes of the VHE sources and the\npulsars' ages, and find a weak older-and-smaller trend. By comparing to the VHE\ndetection results for PWNe, it is clear that the (candidate) TeV halos have\nhard emissions by either having their power-law indices be smaller than 2 in\n1--25\\,TeV or by only being detected in 25--100\\,TeV. In addition, we also\nconsider seven other VHE sources as possible TeV halos based on the results\nfrom different studies of them, but they do not fit cleanly with the properties\nlisted above, indicating their potentially complex nature.", + "authors": "Dong Zheng, Zhongxiang Wang", + "published": "2024-03-24", + "updated": "2024-05-08", + "primary_cat": "astro-ph.HE", + "cats": [ + "astro-ph.HE" + ], + "main_content": "INTRODUCTION TeV halos are extended Very-High-Energy (VHE; \u2265100 GeV) \u03b3-ray emissions around middle-aged pulsars (\u223c100 kyr). Their existence has been \ufb01rmly established due to the detection of extended emissions around the nearby pulsars Geminga and Monogem, as observed by the High-Altitude Water Cherenkov (HAWC) Observatory (Abeysekara et al. 2017). Since the detection revelation, various studies focusing on their general existence and possible properties have been carried out (see, e.g., Linden et al. 2017, Sudoh et al. 2019, Fang 2022, and Mukhopadhyay & Linden 2022 and references therein). Importantly, TeV halos can be a signi\ufb01cant contributor of cosmic electrons and positrons in our Galaxy (Giacinti et al. 2020; L\u00b4 opez-Coto et al. 2022a; Yan et al. 2024). In our recent studies of Galactic VHE sources for those whose nature is not clear (i.e., unidenti\ufb01ed), we have focused on \ufb01nding and studying their possible lower energy counterparts by analyzing available multi-energy data (e.g., Xing et al. 2022; Zheng et al. 2023b). The primary ones used are the GeV \u03b3-ray data (in energy range of 0.1\u2013500GeV) obtained with the Large Area Telescope (LAT) onboard the Fermi Gamma-ray Space Telescope (Fermi). It has been ascertained that for a signi\ufb01cant fraction of VHE sources, a known pulsar is often found located in the \ufb01eld, within the error circle of such a VHE source (e.g., Albert et al. 2020). Some of these pulsars with the positional coincidence are \u03b3-raybright, which can cause di\ufb03culties in analyses because of the low-spatial resolution of the LAT data (e.g., \u223c1 deg at 1 GeV). The strategy we have applied to overcome such di\ufb03culties is to remove the \u2018contamination\u2019 of a pulsar, the pulsed emission, by timing this pulsar at \u03b3rays. This helps reveal the residual emission in a source \f2 Zheng & Wang Table 1. Properties of pulsars and likely associated VHE sources. 
1LHAASO P0 \u02d9 P \u02d9 E/1035 Distance Age F PSR X\u2212ray/10\u221213 F PWN X\u2212ray/10\u221213 F50TeV/10\u221213 Extension References PSR (s) (10\u221214) (erg s\u22121) (kpc) (kyr) (erg cm\u22122 s\u22121) (erg cm\u22122 s\u22121) (erg cm\u22122 s\u22121) (deg) J0249+6022 3.72 \u00b1 0.36 0.38 \u00b1 0.08 J0248+6021 0.22 5.51 2.13 2.0 \u00b1 0.2 62.4 < 9.0 \u2013 1, 2 J0359+5406 3.40 \u00b1 0.24 0.30 \u00b1 0.04 J0359+5414 0.08 1.67 13.0 3.45 75.2 0.09 \u00b1 0.03 0.20 \u00b1 0.03 3 J0622+3754 5.68 \u00b1 0.28 0.46 \u00b1 0.03 J0622+3749 0.33 2.54 0.27 <3.47 208 <0.14 \u2013 4 J0635+0619 3.76 \u00b1 0.40 0.60 \u00b1 0.07 J0633+0632 0.30 7.96 1.20 1.35+0.65 \u22120.65 59.2 0.33 \u00b1 0.06 1.17+0.11 \u22120.13 5 J2005+3050 1.84 \u00b1 0.20 0.27 \u00b1 0.05 J2006+3102 0.16 2.49 2.24 4.7 104 < 9.0 \u2013 6 J2238+5900 8.12 \u00b1 0.48 0.43 \u00b1 0.03 J2238+5903 0.16 9.70 8.89 2.83 26.6 < 0.44 \u2013 4 J0542+2311u 11.72 \u00b1 0.48 0.98 \u00b1 0.05 B0540+23 0.25 1.54 0.41 1.57 253 0.08 \u00b1 0.04 \u2013 4 J1740+0948u 1.64 \u00b1 0.16 < 0.11 J1740+1000 0.15 2.13 2.32 1.23 114 0.24 \u00b1 0.02 0.60 \u00b1 0.06 7, 8 J1809-1918u 37.84 \u00b1 5.08 < 0.22 J1809-1917 0.08 2.55 17.8 3.27 51.4 0.47+0.01 \u22120.04 2.6\u20134.9 9 J1813-1245 5.68 \u00b1 1.08 < 0.31 J1813-1246 0.05 1.76 62.4 2.64 43.4 10.80 \u00b1 0.10 < 1.5 10 J1825-1256u 20.32 \u00b1 1.68 < 0.2 J1826-1256 0.11 12.1 36.0 1.55 14.4 1.04+0.14 \u22120.13 0.85+0.10 \u22120.09 11 J1825-1337u 40.40\u00b1 2.44 < 0.18 J1826-1334 0.10 7.53 28.4 3.61 21.4 0.16 \u00b1 0.04 4.5+0.3 \u22120.2 12 J1928+1746u 2.88 \u00b1 0.28 < 0.16 J1928+1813u 9.92 \u00b1 0.64 0.63 \u00b1 0.03 J1928+1746 0.07 1.32 16.0 4.34 82.6 < 0.08 \u2013 13 Note\u2014References for X-ray \ufb02uxes and distances: (1) Marelli et al. (2011), (2) Theureau et al. (2011), (3) Zyuzin et al. (2018), (4) Prinz & Becker (2015), (5) Danilenko et al. (2020), (7) Nice et al. (2013), (7) Rigoselli et al. (2022), (8) Kargaltsev et al. (2008), (9) Klingler et al. (2020), (10) Marelli et al. (2014), (11) Karpova et al. (2019), (12) Pavlov et al. (2008), (13) Kargaltsev et al. (2012) \ufb01eld, which allows us to conduct clean studies of it as a candidate counterpart (e.g., Xing et al. 2022). However in the studies of the VHE sources 3HWC J0631+107 (Albert et al. 2020; or 1LHAASO J0631+1040), 1LHAASO J1959+2846u, and 1LHAASO J2028+3352, where 1LHAASO stands for the \ufb01rst Large High Altitude Air Shower Observatory (LHAASO; Cao et al. 2019) catalog of Gamma-ray sources (Cao et al. 2023), no signi\ufb01cant residual emissions were found after we removed the pulsed emissions of PSRs J0631+1036, J1958+2846, and J2028+3332, respectively. The non-detections, combined with the pulsars\u2019 many similarities to Geminga and the fact that no primary Galactic VHE-emitting sources (e.g., H. E. S. S. Collaboration et al. 2018a), such as supernova remnants (SNRs) or pulsar wind nebulae (PWNe), are known in the \ufb01elds, led to our identi\ufb01cation of the three VHE sources as being TeV halos powered by their respective pulsars (Zheng et al. 2023a; Zheng & Wang 2023). Our series of work and obtained results suggest that there could be more TeV halos among the unidenti\ufb01ed VHE sources. Particularly for those reported in 1LHAASO, the energy range spreads from 1 TeV to ap\fFinding Pulsar TeV Halos 3 Table 2. Timing solutions and phase ranges for six pulsar targets. 
Source End time f f1/10\u221212 On-pulse O\ufb00-pulse (MJD) (Hz) (Hz s\u22121) J0248+6021 58839 4.605826479 \u22121.17136 0.125\u20130.5625 0\u20130.125, 0.5625\u20131 J0359+5414 58044 12.58999189 \u22122.64978 0.125\u20130.5625 0\u20130.125, 0.5625\u20131 J0622+3749 58835 3.001119808 \u22120.228945 0\u20130.625, 0.9375\u20131 0.625\u20130.9375 J0633+0632 58738 3.362415888 \u22120.899654 0\u20130.3125, 0.5\u20130.625 0.3125\u20130.5, 0.625\u20131 J2006+3102 57697 6.108786932 \u22120.928096 0.375\u20130.625 0\u20130.375, 0.625\u20131 J2238+5903 58680 6.144764381 \u22123.65692 0\u20130.375, 0.5\u20130.625 0.375\u20130.5, 0.625\u20131 Note\u2014Frequencies were from 3PC. proximately 100 TeV, which is covered by the LHAASO Water Cherenkov Detector Array (WCDA) in 1\u201325 TeV and Kilometer Square Array (KM2A) in 25\u2013100TeV (Cao et al. 2019). The wide energy-range coverage allows us to select sources with spectra harder than those of the PWNe (H. E. S. S. Collaboration et al. 2018b), which is a possible feature that may be used to di\ufb00erentiate the TeV halos from the PWNe (see Zheng et al. 2023a and Zheng & Wang 2023). In addition, the Third Fermi Large Area Telescope Catalog of Gamma-ray Pulsars (3PC) has very recently been released (Smith et al. 2023). It provides the timing solutions for Fermi-LAT\u2013 detected \u03b3-ray pulsars. These solutions allow us to easily carry out our studies of VHE sources when pulsed \u03b3-ray emissions need to be removed. We have thus conducted a further search for candidate TeV halos. We have found six good candidates (\ufb01rst six in Table 1) and report the results in this paper. In this work, we mainly used the detection results in 1LHAASO, but also included results reported in the High Energy Spectroscopy System (HESS) Galactic plane survey (HGPS; H. E. S. S. Collaboration et al. 2018a) and in the third HAWC catalog (3HWC; Albert et al. 2020). In some cases, reported results from the observations conducted with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) and the Milagro Gamma-ray Observatory (MGRO; Abdo et al. 2009b) were also used. To be as complete as possible, we essentially went through all the Galactic VHE sources that are likely associated with a pulsar and show some aspects of a TeV halo. As a result, we found another seven sources and list them in the lower part of Table 1. Some of these sources are in a complex region, such as being potentially associated with an SNR/PWN in the \ufb01eld, and some contain a pulsar that does not have \u03b3-ray emission or clear o\ufb00-pulse phases in the case of being \u03b3-ray bright. We included these sources in our discussion (Section 4), and a brief introduction for each of them is provided in Appendix A. In the following Section 2, we describe the Fermi-LAT data we used and our data analyses, which include how we obtained the o\ufb00-pulse data of the six pulsar targets through pulsar timing. In Section 3, in conjunction with our analysis results, we provide the properties of each pulsar target and its associated VHE source, which helps identify the latter as a candidate TeV halo. In Section 4, we discuss these sources\u2019 general properties by considering the VHE sources as being TeV halos powered by the corresponding pulsars. 2. DATA ANALYSIS 2.1. LAT Data and Source Model Photon data \ufb01les with timing analysis results for each of the Fermi-LAT\u2013detected pulsars are provided in 3PC. 
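As an illustration of how the Table 2 phase ranges are used, the sketch below selects off-pulse photons from one of these 3PC photon files and applies the basic cuts adopted in this work (0.1–500 GeV, zenith angle < 90°). This is not the actual analysis pipeline, and the FITS column names (ENERGY, ZENITH_ANGLE, PULSE_PHASE) are assumptions about the file layout.

```python
# Illustrative sketch (not the authors' pipeline) of selecting off-pulse
# photons from a 3PC photon file using the phase ranges of Table 2.
# FITS column names are assumptions about the file layout.
import numpy as np
from astropy.table import Table

# Off-pulse phase ranges from Table 2 (PSR J0248+6021 as the example).
OFF_PULSE = {"J0248+6021": [(0.0, 0.125), (0.5625, 1.0)]}

def select_off_pulse(events_fits, psr="J0248+6021",
                     emin_mev=100.0, emax_mev=5.0e5, zmax_deg=90.0):
    evts = Table.read(events_fits, hdu="EVENTS")
    keep = (evts["ENERGY"] >= emin_mev) & (evts["ENERGY"] <= emax_mev)
    keep &= evts["ZENITH_ANGLE"] < zmax_deg
    phase = np.asarray(evts["PULSE_PHASE"])
    in_off = np.zeros(len(evts), dtype=bool)
    for lo, hi in OFF_PULSE[psr]:
        in_off |= (phase >= lo) & (phase < hi)
    return evts[keep & in_off]
```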
There are two types of data with di\ufb00erent sizes, one containing photons within 3\u25e6with an energy band ranging from 50 MeV to 300 GeV1, and the other containing photons of energies from 20 MeV to 1 TeV within 15\u25e62. Both types of data \ufb01les are centered at the position of each pulsar. The data were selected from the latest Fermi Pass 8 database with Event Class of 128. Events with a zenith angle larger than 105\u25e6and bad quality \ufb02ags were excluded. We used both of the data \ufb01les in the following analyses. In our analyses, we chose photons in the energy band of 0.1\u2013500GeV with a zenith angle of < 90\u25e6. The start times of the data are 2008 August 4 15:43:36 (UTC), but because the timing solutions did not cover the whole observation time period of Fermi-LAT, the end times are 1 https://heasarc.gsfc.nasa.gov/FTP/fermi/data/lat/catalogs /3PC/photon/3deg 50MeV/ 2 https://heasarc.gsfc.nasa.gov/FTP/fermi/data/lat/catalogs /3PC/photon/15deg 20MeV/ \f4 Zheng & Wang Figure 1. Pulse pro\ufb01les (top) and two-dimensional phaseograms (bottom) of six pulsar targets. The onand o\ufb00-pulse phase ranges we de\ufb01ned based on the pulse pro\ufb01les are marked by the dashed lines. di\ufb00erent and are given in Table 2 for each of the pulsar targets. We set the regions of interests (RoIs) with a size of 15\u25e6\u00d7 15\u25e6, centered at each of the pulsar targets. The latest Fermi-LAT Fourth Source Catalog (4FGL-DR4, Ballet et al. 2023) was used to construct source models. For each target, the sources within a range of 15\u25e6radius were included, and their 4FGL-DR4 spectral forms were used. In addition, two background models, the Galactic and extragalactic di\ufb00use emission, were included in the source models, which were the \ufb01les gll iem v07.\ufb01ts and iso P8R3 SOURCE V3 v1.txt, respectively. 2.2. Timing Analysis In both the 3\u25e6and 15\u25e6photon data \ufb01les, a photon\u2019s probability (to be from a pulsar) and spin phase (of the pulsar) were included. We used the 3\u25e6photon \ufb01les to construct the pulse pro\ufb01les and de\ufb01ne the onand o\ufb00pulse phase ranges. The pulse pro\ufb01les, with the onand o\ufb00-pulse phase ranges marked, are shown in Figure 1. The values of the phase ranges, as well as the timing solutions, are given in Table 2. 2.3. Likelihood Analysis of the onand o\ufb00-pulse Data 2.3.1. On-pulse Data The standard binned likelihood analysis was performed to the on-pulse data of each pulsar in 0.1\u2013 500 GeV. The spectral parameters of sources in a source model within 5\u25e6from a pulsar target were set as free, while those of the other sources were \ufb01xed at the values given in 4FGL-DR4. In addition, the \fFinding Pulsar TeV Halos 5 Table 3. Binned likelihood analysis results from the onand o\ufb00-pulse phase data. 
Pulsar | Phase Range | F_0.1-500 (10^-8 photons s^-1 cm^-2) | Γ | ExpfactorS | TS
J0248+6021 | On-pulse | 3.91 ± 0.27 | 2.32 ± 0.04 | 0.89 ± 0.08 | 2828.9
J0248+6021 | Off-pulse | ≤0.11 | 2 | – | 0.2
J0359+5414 | On-pulse | 2.77 ± 0.20 | 2.21 ± 0.04 | 0.52 ± 0.07 | 1658.3
J0359+5414 | Off-pulse | ≤0.03 | 2 | – | 0.0
J0622+3749 | On-pulse | 2.56 ± 0.14 | 2.36 ± 0.04 | 1.19 ± 0.10 | 2686.7
J0622+3749 | Off-pulse | ≤0.08 | 2 | – | 1.5
J0633+0632 | On-pulse | 8.49 ± 0.32 | 1.97 ± 0.02 | 0.63 ± 0.03 | 18803.2
J0633+0632 | Off-pulse | ≤0.06 | 2 | – | 0.0
J2006+3102 | On-pulse | 1.06 ± 0.18 | 2.15 ± 0.08 | 0.65 ± 0.14 | 424.0
J2006+3102 | Off-pulse | ≤0.19 | 2 | – | 1.2
J2238+5903 | On-pulse | 8.24 ± 0.30 | 2.25 ± 0.02 | 0.51 ± 0.03 | 8913.0
J2238+5903 | Off-pulse | ≤0.17 | 2 | – | 4.3

normalizations of the two background components were set as free parameters. We used a PLSuperExpCutoff4 (PLSEC; Abdollahi et al. 2022) model shape to fit the on-pulse data of the pulsars. There are two forms of PLSEC based on the conditions set for the forms (see Abdollahi et al. 2022 for details). One,
$dN/dE = N_0 (E/E_0)^{-\Gamma - \frac{d}{2}\ln(E/E_0) - \frac{db}{6}\ln^2(E/E_0) - \frac{db^2}{24}\ln^3(E/E_0)}$,
was used for PSRs J0359+5414, J0633+0632, J2006+3102 and J2238+5903 (hereafter J0359, J0633, J2006, and J2238, respectively), and the other,
$dN/dE = N_0 (E/E_0)^{-\Gamma + d/b} \exp\{\frac{d}{b^2}[1 - (E/E_0)^{b}]\}$,
was used for PSRs J0248+6021 and J0622+3749 (hereafter J0248 and J0622, respectively). In the two forms, Γ and d (or ExpfactorS) are the photon index and the local curvature at energy $E_0$, respectively, and b is a measure of the shape of the exponential cutoff. Following 4FGL-DR4, we fixed the value of b at 2/3 for our analysis. The likelihood analysis results for each pulsar are given in Table 3. We also obtained the spectral data points of the pulsars from their on-pulse data. The energy range from 0.1 to 500 GeV was evenly divided logarithmically into 10 bins. Binned likelihood analysis was performed on each bin's data to obtain the fluxes. In this analysis, the normalizations of the sources within 5° of a pulsar and the two background components were set as free parameters, while the other parameters were fixed at the values obtained above from the binned likelihood analysis of the data in the whole energy range. When the test statistic (TS) value of a bin was <4, we replaced the flux with the 95% upper limit derived from the data of the bin. The obtained spectra are shown in Figure 2.

2.3.2. Off-pulse Data
We also performed standard binned likelihood analysis on the off-pulse data of the pulsars. The parameter setup was the same as that in the above analysis of the on-pulse data (Section 2.3.1). We assumed a power law (PL) for any emission at the position of each pulsar, $dN/dE = N_0 (E/E_0)^{-\Gamma}$. From the analysis, no significant emissions were detected during the off-pulse phase ranges of the pulsars. In Table 3, we provided the TS values when we assumed Γ = 2 as the exemplary result. To show the non-detection results in the off-pulse data and provide a clear view of the source fields, we obtained 0.1–500 GeV TS maps for the pulsars' regions. As indicated by the TS maps (Figure 3), no significant residual emissions are seen in any of the pulsar regions.

3. 
PULSARS AND THEIR ASSOCIATED VHE SOURCES
Below, based on the analysis results we obtained for each pulsar target (cf. Figures 2 & 3), we briefly describe the properties of the pulsars and their likely associated VHE sources in the following sections. We also searched for X-ray observational results for the pulsars, which helps us learn about the properties of their PWNe. When needed, we analyzed the archival X-ray data by ourselves.

3.1. PSR J0248+6021
PSR J0248 is a γ-ray pulsar, with its radio pulsations first detected by the Nançay radio telescope (Foster et al. 1997). At the pulsar's position, no X-ray emission was detected, and no evidence showed any extended X-ray emission around the pulsar (Marelli et al. 2011).

[Figure 2. Spectra and spectral upper limits of the six pulsars during their on- and off-pulse phase ranges, which are shown as black data points (and black dashed curves, the best-fit PLSEC models) and red lines (assuming Γ = 2), respectively. In addition, we overplot the spectra of the LHAASO sources, and available HAWC flux measurements and/or VERITAS (Archer et al. 2019) and MGRO (Abdo et al. 2009b) flux upper limits of the VHE sources. For details, see Section 3 and Figure 3. Panels: (a) PSR J0248+6021, (b) PSR J0359+5414, (c) PSR J0622+3749, (d) PSR J0633+0632, (e) PSR J2006+3102, (f) PSR J2238+5903.]

The unabsorbed X-ray flux upper limit was 9 × 10^-13 erg cm^-2 s^-1 in 0.3–10.0 keV (Marelli et al. 2011). The distance used in this work was derived by Theureau et al. (2011). LHAASO detected an extended source, 1LHAASO J0249+6022, that is in positional coincidence with J0248. The extension of the source is ∼0.°38. In this region, no excess γ-ray emission was detected in the off-pulse phase data analysis (see Figure 3). We noted that the region is rather clean, within which no SNRs are listed in the SNR catalog SNRcat. 
Therefore we suggest that the extended TeV emission, 1LHAASO J0249+6022, is a TeV halo candidate powered by PSR J0248. 3.2. PSR J0359+5414 The region of this pulsar is clean with no residual emissions detected in the o\ufb00-pulse data (Figure 3). In X-rays, a weak PWN was detected with a luminosity 3 http://snrcat.physics.umanitoba.ca of \u22432.8 \u00d7 1031 erg s\u22121 at a pseudo distance of 3.45 kpc (Zyuzin et al. 2018). Extended TeV emission at the region was reported by HAWC, with a size of 0. \u25e62\u00b10. \u25e61, and this detection was matched by the LHAASO detection results. The VHE source was already posited to be a TeV halo candidate powered by J0359 (Albert et al. 2023b). To distinguish TeV halos and PWNe, it has been discussed that the VHE \u03b3-ray emissions are from a larger region than those of PWNe (Linden et al. 2017; L\u00b4 opez-Coto et al. 2022b; Albert et al. 2023b). However, this case is complicated by the existence of a nearby radio pulsar B0355+54 (Figure 3), which has a spin-down luminosity of \u02d9 E = 4.5 \u00d7 1034 erg s\u22121 and a characteristic age of \u03c4 = 564 kyr, and as such, this radio pulsar\u2019s possible association with the VHE source could not be excluded (Albert et al. 2023b). 3.3. PSR J0622+3749 This pulsar is radio quiet, with an X-ray \ufb02ux upper limit of 1.4 \u00d7 10\u221214 erg cm\u22122 s\u22121 in 0.1\u2013 2.0 keV (Prinz & Becker 2015). In the region, \fFinding Pulsar TeV Halos 7 0 3 6 9 12 15 18 21 24 27 30 WCDA KM2A 1LHAASO J0249+6022 0.5 deg PSR J0248+6021 4FGL J0240.5+6113 (a) 0 1.4 2.8 4.2 5.6 7 8.4 9.8 11 13 14 PSR B0355+54 HAWC WCDA KM2A 0.5 deg 1LHAASO J0359+5406 4FGL J0402.5+5402 PSR J0359+5414 (b) 0 0.9 1.8 2.7 3.6 4.5 5.4 6.3 7.2 8.1 9 3HWC J0621+382 KM2A WCDA 4FGL J0620.3+3804 PSR J0622+3749 0.5 deg 1LHAASO J0622+3754 (c) 0 0.7 1.4 2.1 2.8 3.5 4.2 4.9 5.6 6.3 7 3HWC J0634+067 PSR J0633+0632 4FGL J0631.8+0645 4FGL J0632.8+0550 KM2A 0.5 deg 1LHAASO J0635+0619 (d) 0 1.5 3 4.5 6 7.5 9 10 12 14 15 SNR G68.6-1.2 3HWC J2005+311 KM2A WCDA 0.5 deg 1LHAASO J2005+3050 PSR J2006+3102 (e) 0 2.2 4.4 6.6 8.8 11 13 15 18 20 22 4FGL J2247.5+5812c 4FGL J2240.6+5833 PSR J2238+5903 KM2A 0.5 deg 1LHAASO J2238+5900 (f) Figure 3. TS maps of the regions of the six pulsar targets in 0.1\u2013500 GeV, calculated from the o\ufb00-pulse data of the pulsars. Each panel has a size of 3\u25e6\u00d7 3\u25e6centered at a pulsar target. Green diamonds and crosses mark the positions of the pulsars and nearby Fermi-LAT sources, respectively. The positional error circles and extension regions of LHAASO, HAWC, and HESS sources are marked by solid and dash circles respectively, and the name of the corresponding 1LHAASO source is given at top-left of each panel. In the region of 1LHAASO J0359+5406, a non\u2013\u03b3-ray pulsar is indicated by a yellow diamond, and in that of 1LHAASO J2005+3050, an SNR is shown as the magenta dashed circle. LHAASO detected extended VHE \u03b3-ray emission named LHAASO J0621+3755, and it is likely a TeV halo (Aharonian et al. 2021). In the 1LHAASO catalog, 1LHAASO J0622+3754 was assigned to be associated with LHAASO J0621+3755, with the separation between them being only 0. \u25e603. Our analysis of the o\ufb00pulse data veri\ufb01ed the emptiness of the \ufb01eld at GeV \u03b3-rays. The distance of the pulsar was estimated to be 1.6 kpc by Pletsch et al. 
(2012), where the pulsar\u2019s \u03b3ray luminosity L\u03b3 in 0.1\u2013100GeV was estimated from a L\u03b3\u02d9 E relationship that was derived based on the \u03b3-ray pulsars with distance measures (for details see Saz Parkinson et al. 2010; Pletsch et al. 2012). We reestimated the distance by using the \ufb02ux value given in the recent 4FGL-DR4, and found a value of 1.4 kpc. However this value can be highly di\ufb00erent from the actual one. Another method to estimate the distance is to require L\u03b3 \u2264\u02d9 E, which sets an upper limit of 3.47 kpc for the distance, and if considering L\u03b3 \u223c0.1 \u02d9 E, the distance would be \u223c1.1 kpc. We adopted 1.1 kpc for J0622 but with an upper limit of 3.47 kpc. 3.4. PSR J0633+0632 This pulsar is also radio quiet, \ufb01rst detected at \u03b3-rays by Fermi-LAT (Abdo et al. 2009a). In this source region, di\ufb00use X-ray emission was detected and identi\ufb01ed as a PWN (Ray et al. 2011; Danilenko et al. 2020). For J0633 and its PWN, the unabsorbed X-ray \ufb02uxes were 3.31+0.58 \u22120.62 \u00d7 10\u221214 erg cm\u22122 s\u22121 and \f8 Zheng & Wang 1.17+0.11 \u22120.13 \u00d7 10\u221213 erg cm\u22122 s\u22121 in 2\u201310 keV, respectively (Danilenko et al. 2020). The source distance was discussed to be within a range of 0.7\u20132 kpc, based on an interstellar absorption-distance relationship (Danilenko et al. 2020). The LHAASO detection indicated that the VHE source has a hard emission, as the WCDA observations only provided a \ufb02ux upper limit (Cao et al. 2023). Our analysis of the o\ufb00-pulse data provided a \ufb02ux upper limit of \u223c10\u221213 erg cm\u22122 s\u22121 in the GeV energies. Considering that the source 1LHAASO J0635+0619 (as well as 3HWC J0634+067) is a TeV halo candidate powered by PSR J0633, the VHE emission is likely from a larger region than that of the X-ray PWN. Khokhriakova et al. (2023) searched for X-ray counterparts of TeV halos (socalled X-ray halos) around 5 pulsars that include J0633. However, no such extended emission was found. 3.5. PSR J2006+3102 This radio pulsar was reported with a distance of 4.7 kpc in Nice et al. (2013), but the updated value is 6.035 kpc in the Australia Telescope National Facility (ATNF) pulsar catalog (Manchester et al. 2005). Very limited information is available for this pulsar. We searched the Chandra and XMM-Newton archival data, but no observations were found. Using a set of data (Obsid : 03103085001, exposure time = 1.1 ks) obtained with the X-ray Telescope (XRT) onboard the Neil Gehrels Swift Observatory (Swift), we derived a 3\u03c3 upper limit of 0.01 cts s\u22121 in 0.3\u201310keV at the pulsar\u2019s position. The corresponding energy-\ufb02ux upper limit was 9.0\u00d710\u221213 erg cm\u22122 s\u22121, where we assumed a PL source spectrum with an index of 2 and hydrogen column density NH = 8.27 \u00d7 1021 cm\u22122 (towards the source direction, from HI4PI Collaboration et al. 2016). Close to the edge of the extension region given by the LHAASO KM2A, there is an SNR, G68.6\u22121.2 (Figure 3), which, however, is faint and poorly de\ufb01ned according to the SNRcat. Given its poorly known properties and relatively large separation (\u223c0. \u25e668) from the VHE source, it is not clear if the SNR can be connected to the latter. We noted that 3HWC J2005+311 is also located in this region (Albert et al. 2020), and its spectrum is similar to that of 1LHAASO J2005+3050. However, the positions of the two sources do not overlap. 
The relation between them remains to be resolved from further observational results. 3.6. J2238+5903 J2238 also has very limited information available. An X-ray \ufb02ux upper limit was reported by Prinz & Becker (2015) to be 4.4\u00d710\u221214 erg cm\u22122 s\u22121 in 0.1\u20132 keV (where a PL with index = 1.7 was assumed). The LHAASO WCDA observations were in\ufb02uenced by the Galactic Di\ufb00use Emission (Cao et al. 2023), and we did not consider the WCDA measurements. 4. DISCUSSION Following our previous work on identifying candidate pulsar TeV halos (Zheng et al. 2023a; Zheng & Wang 2023) by mainly analyzing the o\ufb00-pulse GeV data of \u03b3ray pulsars in the \ufb01elds of VHE sources, from which any residual emissions may help reveal their nature as possibly being primary Galactic sources, such as SNRs or PWNe, we further found six candidates because of the non-detection of any signi\ufb01cant residual emissions. The pulsars\u2019 properties, including information for their X-ray emissions, are summarized in Table 1. As discussed in Zheng & Wang (2023), there may contain a relationship between the TeV halos\u2019 luminosity at 50 TeV, L50TeV, and corresponding pulsars\u2019 spin-down energy \u02d9 E. This relationship helps indicate the fraction of the total energy spent on powering the TeV halos. Since most of the sources (including those presented in the Appendix) in this work have been detected by LHAASO KM2A in 25\u2013100TeV, we thus also estimated their L50TeV from the di\ufb00erential \ufb02uxes at 50 TeV given in the LHAASO results (Cao et al. 2023). The L50TeV values are provided in Table 1. Fitting the data points that include four sources in Zheng & Wang (2023) and \ufb01ve sources in this work (excluding J0622 whose distance is highly uncertain), we obtained L50TeV = 2.27+1.82 \u22121.72 \u02d9 E0.90+0.02 \u22120.01, with a reduced \u03c72 value of \u22430.8 for 7 degree of freedom (DoF), where we assumed a 30% uncertainty for distances (this uncertainty was dominant). We used the Markov Chain Monte Carlo (MCMC) code emcee (Foreman-Mackey et al. 2013) for the \ufb01tting, since it conveniently provides error ranges. It can be noted that the L50TeV \u223c\u02d9 E0.9 relationship (see Figure 4), reported in Zheng & Wang (2023), still holds. Another relationship we tested was L50TeV/ \u02d9 E being either a function of the pulsars\u2019 characteristic ages \u03c4 or a constant. Fitting the data points, we obtained L50TeV/ \u02d9 E = 1.3+1.8 \u22120.8 \u00d7 10\u22123\u03c4 \u22120.18+0.23 \u22120.21 kyr (where \u03c4 is in units of kyr) with reduced \u03c72 \u22430.8 for DoF=7, or L50TeV/ \u02d9 E = 6.4 \u00b1 0.8 \u00d7 10\u22124 with reduced \u03c72 \u22430.6 for DoF=8. Both results are also very similar to those previously obtained in Zheng & Wang (2023). For the \ufb01rst result, the large uncertainty for the \u03c4 index indicates its value close to zero, and thus the second result, L50TeV/ \u02d9 E being a constant (as in Zheng & Wang 2023), is preferred. In addition, we also tested the physical sizes S of the VHE sources as a function of \u03c4. The sizes were derived \fFinding Pulsar TeV Halos 9 (a) (b) (c) Figure 4. Left: relationship between luminosities of (candidate) TeV halos at 50 TeV L50TeV (from the 1LHAASO measurements) and the corresponding pulsars\u2019 spin-down energy \u02d9 E, L50T eV \u221d\u02d9 E0.9 (dashed line). The shaded area indicates the 1\u03c3 error range. 
Middle: $L_{50\,\rm TeV}/\dot{E}$ as a function of the pulsars' characteristic ages $\tau$, $L_{50\,\rm TeV}/\dot{E}\sim 1.3\times 10^{-3}\,\tau_{\rm kyr}^{-0.18}$ (dotted line, with the shaded area indicating the $1\sigma$ error range), or being $\sim 6.4\times 10^{-4}$ (dark line region, with the width indicating the $1\sigma$ error range). Right: physical sizes $S$ of (candidate) TeV halos as a function of $\tau$, $S\sim\tau_{\rm kyr}^{-0.25}$ (dotted line, with the shaded area indicating the $1\sigma$ error range). For details, see Section 4. We obtained $S = 64.51^{+21.54}_{-21.04}\,\tau_{\rm kyr}^{-0.25^{+0.09}_{-0.07}}$ pc, with reduced $\chi^2\simeq 2.1$ for DoF=6. The uncertainties are large, and there is one source, J2028+3332 (Zheng & Wang 2023), significantly deviating from the relationship (although the source's distance is uncertain). In any case, there is a possible older-and-smaller trend, which could be an interesting feature that may reveal the evolutionary processes of electron/positron ejection from pulsars and their halos. Further observational results obtained from more data collected with LHAASO may verify this trend. As we also searched for other potential TeV halo candidates from among mainly 1LHAASO sources, there are seven of them whose properties may provide hints of a possible TeV-halo nature based on different studies (see Appendix Section A). We show their corresponding properties (Table 1) in Figure 4. As can be seen, they generally show large scatter around the relationships we obtained above. In particular, five of them were compact sources (see Appendix Section A and Figure A2) in the LHAASO KM2A measurements. Because different VHE observational facilities have different sensitive energy bands and spatial resolutions, we did not try to replace the KM2A results with those from other facilities. Thus, most of these sources currently do not fit the $S\sim\tau$ relationship at all. From this comparison, we may conclude that either they are not TeV halos or their emission may contain significant contributions from other sources, which would be in agreement with the various results from the many multi-energy studies about them (Appendix Section A). Figure 5. Power-law indices of the (candidate) TeV halos and the HESS-confirmed PWNe. Values of the first group are from LHAASO WCDA ($\Gamma_s$ in 1-25 TeV) and KM2A ($\Gamma_h$ in 25-100 TeV), and those of the latter are mostly from HESS in 1-10 TeV (H. E. S. S. Collaboration et al. 2018b). When there is only one measurement, either $\Gamma_s$ or $\Gamma_h$, the source is placed on the $\Gamma_h = \Gamma_s$ line (dash-dotted). Because two WCDA measurements suffered from the GDE, those sources are also placed on the $\Gamma_h = \Gamma_s$ line, indicated by the dotted lines. HESS 1825$-$137 (or 1LHAASO J1825$-$1337u, associated with PSR J1826$-$1334) and CTA 1 (Aliu et al. 2013; or 1LHAASO J0007+7303u) have different reported $\Gamma_s$ values, and both values for each of them are shown (connected with a grey line and marked with an arrow). The PWN of the Vela pulsar is the lowest grey data point along the $\Gamma_h = \Gamma_s$ line (with $\Gamma_s < 2$).
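For readers who want to reproduce this kind of power-law fit, a minimal sketch using emcee is given below. It is not the authors' code: the data arrays are placeholders rather than the Table 1 values, the log-space Gaussian likelihood and flat priors are assumptions, and the normalizations ($10^{36}$ and $10^{32}$ erg s$^{-1}$) are arbitrary choices for numerical stability.

```python
import numpy as np
import emcee  # Foreman-Mackey et al. 2013

# Placeholder data: spin-down power Edot [erg/s] and 50 TeV luminosity L50 [erg/s]
# with 1-sigma errors dominated by the assumed 30% distance uncertainty
# (L ~ d^2, so roughly a 60% luminosity error). These are NOT the Table 1 values.
Edot = np.array([4.5e35, 8.0e35, 1.1e36, 2.6e36, 3.2e36])
L50 = np.array([9.0e31, 1.5e32, 2.0e32, 5.0e32, 6.5e32])
L50_err = 0.6 * L50

def log_prob(theta, x, y, yerr):
    """Log-posterior for y = A * x**gamma, fitted in log10 space with flat priors."""
    logA, gamma = theta
    if not (-10.0 < logA < 10.0 and 0.0 < gamma < 2.0):
        return -np.inf
    model = logA + gamma * np.log10(x / 1e36)     # arbitrary normalization of Edot
    sigma = yerr / (y * np.log(10.0))             # propagate errors to log10(y)
    resid = np.log10(y / 1e32) - model            # arbitrary normalization of L50
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2.0 * np.pi * sigma ** 2))

ndim, nwalkers = 2, 32
p0 = np.array([0.0, 0.9]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(Edot, L50, L50_err))
sampler.run_mcmc(p0, 5000, progress=False)
flat = sampler.get_chain(discard=1000, thin=10, flat=True)
print("power-law index (median):", np.median(flat[:, 1]))
```

The posterior medians and percentiles of the flattened chain would play the role of the quoted best-fit values and error ranges; how the actual analysis treated the distance uncertainty in detail is not specified here.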
Finally, H. E. S. S. Collaboration et al. (2018b) studied all PWNe and candidates in 1-10 TeV. On the basis of their results, one conclusion that may be drawn is that PWNe tend to have a soft spectrum with PL index $\Gamma_s > 2$. By comparison, as pointed out by Zheng & Wang (2023), candidate TeV halos often show hard spectra with $\Gamma_s < 2$. We further explored this possible feature by constructing Figure 5, in which the PL indices of the candidate TeV halos (as well as of the sources described in the Appendix) and of the HESS-confirmed PWNe are shown, where the hard PL indices $\Gamma_h$ are from the LHAASO KM2A 25-100 TeV measurements. Some of the sources, in particular the HESS PWNe, were only detected in one energy band (such as the soft 1-10 TeV band), and we placed these sources on the $\Gamma_h = \Gamma_s$ line; note that the error bars indicate in which energy band the measurements are known. It is clear that most candidate TeV halos either show emission with $\Gamma_s < 2$ or simply have detectable hard TeV emission (only with known $\Gamma_h$ in 25-100 TeV). By comparison, PWNe have soft emission with $\Gamma_s > 2$ or do not have any detectable hard TeV emission (those on the $\Gamma_h = \Gamma_s$ line). One notable source among the PWNe is that of the Vela pulsar, which has $\Gamma_s < 2$ (the data point at the lower left along the $\Gamma_h = \Gamma_s$ line in Figure 5). On the other hand, one exception among the candidate TeV halos is 1LHAASO J0249+6022 (associated with PSR J0248), which has $\Gamma_s > 2$. Detailed studies of this source may help understand the cause of the deviation. In any case, the comparison strengthens our previous suggestion in Zheng & Wang (2023) that TeV halos differ from PWNe in having hard emission. We thank the anonymous referee for helpful comments. This research is supported by the Basic Research Program of Yunnan Province (No. 202201AS070005), the National Natural Science Foundation of China (12273033), and the Original Innovation Program of the Chinese Academy of Sciences (E085021002). D.Z. acknowledges the support of the science research program for graduate students of Yunnan University (KC23234629)." + } + ], + "Kun Gai": [ + { + "url": "http://arxiv.org/abs/1704.05194v1", + "title": "Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction", + "abstract": "CTR prediction in real-world business is a difficult machine learning problem\nwith large scale nonlinear sparse data. In this paper, we introduce an\nindustrial strength solution with model named Large Scale Piece-wise Linear\nModel (LS-PLM). We formulate the learning problem with $L_1$ and $L_{2,1}$\nregularizers, leading to a non-convex and non-smooth optimization problem.\nThen, we propose a novel algorithm to solve it efficiently, based on\ndirectional derivatives and quasi-Newton method. In addition, we design a\ndistributed system which can run on hundreds of machines parallel and provides\nus with the industrial scalability. LS-PLM model can capture nonlinear patterns\nfrom massive sparse data, saving us from heavy feature engineering jobs. 
Since\n2012, LS-PLM has become the main CTR prediction model in Alibaba's online\ndisplay advertising system, serving hundreds of millions users every day.", + "authors": [ + "Kun Gai", + "Xiaoqiang Zhu", + "Han Li", + "Kai Liu", + "Zhe Wang" + ], + "published": "2017-04-18", + "updated": "2017-04-18", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "Introduction Click-through rate (CTR) prediction is a core problem in the multi-billion dollar online advertising industry. To improve the accuracy of CTR prediction, more and more data are involved, making CTR prediction a large scale learning problem with massive samples and high-dimensional features. The traditional solution is to apply a linear logistic regression (LR) model, trained in a parallel manner (Brendan et al. 2013, Andrew & Gao 2007). An LR model with $L_1$ regularization can generate sparse solutions, making it fast for online prediction. Unfortunately, CTR prediction is a highly nonlinear problem. In particular, user-click generation involves many complex factors, such as ad quality, context information, user interests, and the complex interactions of these factors. To help the LR model capture the nonlinearity, feature engineering techniques are explored, which is both time-consuming and labor-intensive. Another direction is to capture the nonlinearity with well-designed models. Facebook (He et al. 2014) uses a hybrid model which combines decision trees with logistic regression. The decision trees play a nonlinear feature transformation role, whose output is fed to the LR model. However, tree-based methods are not suitable for very sparse and high-dimensional data (Safavian S. R. & Landgrebe D. 1990). (Rendle S. 2010) introduces Factorization Machines (FM), which model interactions among features using 2nd-order functions (or functions of another given order). However, FM cannot fit all general nonlinear patterns in data (such as other higher-order patterns). In this paper, we present a piece-wise linear model and its training algorithm for large scale data. We name it the Large Scale Piece-wise Linear Model (LS-PLM). LS-PLM follows a divide-and-conquer strategy, that is, it first divides the feature space into several local regions, then fits a linear model in each region, producing output as combinations of weighted linear predictions. Note that these two steps are learned simultaneously in a supervised manner, aiming to minimize the prediction loss. LS-PLM is superior for web-scale data mining in three aspects: \u2022 Nonlinearity. With enough divided regions, LS-PLM can fit any complex nonlinear function. \u2022 Scalability. Similar to the LR model, LS-PLM is scalable both to massive samples and to high-dimensional features. We design a distributed system which can train the model on hundreds of machines in parallel. In our online product systems, dozens of LS-PLM models with tens of millions of parameters are trained and deployed every day. \u2022 Sparsity. As pointed out in (Brendan et al. 2013), model sparsity is a practical issue for online serving in an industrial setting. We show that LS-PLM with $L_1$ and $L_{2,1}$ regularizers can achieve good sparsity. The learning of LS-PLM with sparsity regularizers can be transformed into a non-convex and non-differentiable optimization problem, which is difficult to solve. We propose an efficient optimization method for such problems, based on directional derivatives and a quasi-Newton method.
Due to its ability to capture nonlinear patterns and its scalability to massive data, LS-PLM has become the main CTR prediction model in Alibaba's online display advertising system, serving hundreds of millions of users since early 2012. It is also applied in recommendation systems, search engines and other product systems. The paper is structured as follows. In Section 2, we present the LS-PLM model in detail, including formulation, regularization and optimization issues. In Section 3, we introduce our parallel implementation structure. In Section 4, we evaluate the model carefully and demonstrate the advantage of LS-PLM compared with LR. Finally, in Section 5, we close with conclusions. Figure 1: A demo illustration of the LS-PLM model. Figure A is the demo dataset. It is a binary classification problem, with red points belonging to the positive class and blue points to the negative class. Figure B shows the classification result using the LR model. Figure C shows the classification result using the LS-PLM model. It is clear that LS-PLM can capture the nonlinear distribution of the data. 2 Method We focus on the large scale CTR prediction application. It is a binary classification problem, with dataset $\{x_t, y_t\}_{t=1}^{n}$, where $y_t \in \{0, 1\}$ and $x_t \in \mathbb{R}^d$ is usually high dimensional and sparse. 2.1 Formulation To model the nonlinearity of massive scale data, we employ a divide-and-conquer strategy, similar to (Jordan & Jacobs 1994). We divide the whole feature space into some local regions. For each region we employ an individual generalized linear classification model. In this way, we tackle the nonlinearity with a piece-wise linear model. We give our model as follows: $p(y=1|x) = g\big(\sum_{j=1}^{m}\sigma(u_j^T x)\,\eta(w_j^T x)\big)$ (1) Here $\Theta = \{u_1, \cdots, u_m, w_1, \cdots, w_m\} \in \mathbb{R}^{d\times 2m}$ denotes the model parameters. $\{u_1, \cdots, u_m\}$ are the parameters of the dividing function $\sigma(\cdot)$, and $\{w_1, \cdots, w_m\}$ those of the fitting function $\eta(\cdot)$. Given an instance $x$, our prediction model $p(y|x)$ consists of two parts: the first part $\sigma(u_j^T x)$ divides the feature space into $m$ (a hyper-parameter) different regions, and the second part $\eta(w_j^T x)$ gives the prediction in each region. The function $g(\cdot)$ ensures that our model satisfies the definition of a probability function. Special Case. Taking softmax (Kivinen & Warmuth 1998) as the dividing function $\sigma(x)$, sigmoid (Hilbe 2009) as the fitting function $\eta(x)$, and $g(x)=x$, we get a specific formulation: $p(y=1|x) = \sum_{i=1}^{m}\frac{\exp(u_i^T x)}{\sum_{j=1}^{m}\exp(u_j^T x)}\cdot\frac{1}{1+\exp(-w_i^T x)}$ (2) In this case, our mixture model can be seen as a FOE model (Jordan & Jacobs 1994, Wang & Puterman 1998) as follows: $p(y=1|x) = \sum_{i=1}^{m} p(z=i|x)\,p(y|z=i,x)$ (3) Eq. (2) is the most commonly used formulation in our real applications. In the remainder of the paper, unless otherwise stated, we take Eq. (2) as our prediction model. Figure 1 illustrates the model compared with LR on a demo dataset, which shows clearly that LS-PLM can capture the nonlinear pattern of the data.
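To make Eq. (2) concrete, here is a minimal NumPy sketch of the prediction function, assuming dense parameter matrices U and W of shape (d, m). It is only an illustrative reading of the formula, not the authors' implementation; production features are sparse and would not be handled with dense matrices.

```python
import numpy as np

def ls_plm_predict(x, U, W):
    """Eq. (2): softmax gate over m regions times a per-region sigmoid.

    x: (d,) feature vector; U, W: (d, m) dividing / fitting parameters.
    Returns p(y = 1 | x).
    """
    gate_logits = U.T @ x                                  # u_i^T x for each region
    gate = np.exp(gate_logits - gate_logits.max())
    gate /= gate.sum()                                     # softmax over the m regions
    region_prob = 1.0 / (1.0 + np.exp(-(W.T @ x)))         # sigmoid(w_i^T x)
    return float(gate @ region_prob)

# Toy usage with random dense parameters (m = 4 regions, d = 8 features).
rng = np.random.default_rng(0)
d, m = 8, 4
U, W = rng.normal(size=(d, m)), rng.normal(size=(d, m))
x = rng.normal(size=d)
print(ls_plm_predict(x, U, W))   # a probability in (0, 1)
```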
The objective function of the LS-PLM model is formalized as Eq. (4): $\arg\min_{\Theta} f(\Theta) = loss(\Theta) + \lambda\|\Theta\|_{2,1} + \beta\|\Theta\|_1$ (4) $loss(\Theta) = -\sum_{t=1}^{n}\big[\,y_t\log(p(y_t=1|x_t,\Theta)) + (1-y_t)\log(p(y_t=0|x_t,\Theta))\,\big]$ (5) Here $loss(\Theta)$, defined in Eq. (5), is the negative log-likelihood loss function, and $\|\Theta\|_{2,1}$ and $\|\Theta\|_1$ are two regularization terms providing different properties. First, $L_{2,1}$ regularization ($\|\Theta\|_{2,1} = \sum_{i=1}^{d}\sqrt{\sum_{j=1}^{2m}\theta_{ij}^2}$) is employed for feature selection. Since in our model each feature dimension is associated with $2m$ parameters, $L_{2,1}$ regularization is expected to push all $2m$ parameters of one feature dimension to zero, that is, to suppress the less important features. Second, $L_1$ regularization ($\|\Theta\|_1 = \sum_{ij}|\theta_{ij}|$) is employed for sparsity. Besides the feature selection property, $L_1$ regularization also forces the parameters of the remaining features to be zero as much as possible, which helps improve both the interpretability and the generalization performance of the model. However, both the $L_1$ norm and the $L_{2,1}$ norm are non-smooth functions. This causes the objective function of Eq. (4) to be non-convex and non-smooth, making it difficult to employ traditional gradient-descent optimization methods (Andrew & Gao 2007, Zhang 2004, Bertsekas 2003) or EM methods (Wang & Puterman 1998). Note that, while (Wang & Puterman 1998) gives the same mixture model formulation as Eq. (3), our model is more general in that it can employ different kinds of prediction functions. Besides, we propose a different objective function for large scale industrial data, taking feature sparsity into consideration explicitly. This is crucial for real-world applications, as prediction speed and memory usage are two key indicators for online model serving. Furthermore, we give a more efficient optimization method to solve the large-scale non-convex problem, which is described in the following section. 2.2 Optimization Before we present our optimization method, we establish some notation and definitions that will be used in the remainder of the paper. Let $\partial^{+}_{ij}f(\Theta)$ denote the right partial derivative of $f$ at $\Theta$ with respect to $\Theta_{ij}$: $\partial^{+}_{ij}f(\Theta) = \lim_{\alpha\downarrow 0}\frac{f(\Theta+\alpha e_{ij})-f(\Theta)}{\alpha}$ (6) where $e_{ij}$ is the $ij$-th standard basis vector. The directional derivative of $f$ at $\Theta$ in direction $d$ is denoted $f'(\Theta; d)$ and is defined as: $f'(\Theta; d) = \lim_{\alpha\downarrow 0}\frac{f(\Theta+\alpha d)-f(\Theta)}{\alpha}$ (7) A vector $d$ is regarded as a descent direction if $f'(\Theta; d) < 0$. $\text{sign}(\cdot)$ is the sign function taking values in $\{-1, 0, 1\}$. The projection function $\pi_{ij}(\Theta;\Omega) = \begin{cases}\Theta_{ij}, & \text{sign}(\Theta_{ij}) = \text{sign}(\Omega_{ij})\\ 0, & \text{otherwise}\end{cases}$ (8) means projecting $\Theta$ onto the orthant defined by $\Omega$. 2.2.1 Choosing the descent direction As discussed above, our objective function for the large scale CTR prediction problem is both non-convex and non-smooth. Here we propose a general and efficient optimization method to solve such non-convex problems. Since the negative gradient of our objective function does not exist for all $\Theta$, we take the direction $d$ which minimizes the directional derivative of $f$ at $\Theta$ as a replacement. The directional derivative $f'(\Theta; d)$ exists for any $\Theta$ and direction $d$, which is stated as Lemma 1. Lemma 1.
When an objective function $f(\Theta)$ is composed of a smooth loss function together with the $L_1$ and $L_{2,1}$ norms, for example the objective function given in Eq. (4), the directional derivative $f'(\Theta; d)$ exists for any $\Theta$ and direction $d$. We leave the proof to Appendix A. Since the directional derivative $f'(\Theta; d)$ always exists, we choose as the descent direction the direction which minimizes the directional derivative $f'(\Theta; d)$ when the negative gradient of $f(\Theta)$ does not exist. The following Proposition 2 explicitly gives this direction. Proposition 2. Given a smooth loss function $loss(\Theta)$ and an objective function $f(\Theta) = loss(\Theta) + \lambda\|\Theta\|_{2,1} + \beta\|\Theta\|_1$, the bounded direction $d$ which minimizes the directional derivative $f'(\Theta; d)$ is given as follows: $d_{ij} = \begin{cases} s - \beta\,\text{sign}(\Theta_{ij}), & \Theta_{ij}\neq 0\\ \max\{|s|-\beta,\,0\}\,\text{sign}(s), & \Theta_{ij}=0,\ \|\Theta_{i\cdot}\|_{2,1}\neq 0\\ \frac{\max\{\|v\|_{2,1}-\lambda,\,0\}}{\|v\|_{2,1}}\,v, & \|\Theta_{i\cdot}\|_{2,1}=0 \end{cases}$ (9) where $s = -\nabla loss(\Theta)_{ij} - \lambda\,\frac{\Theta_{ij}}{\|\Theta_{i\cdot}\|_{2,1}}$ and $v = \max\{|-\nabla loss(\Theta)_{ij}|-\beta,\,0\}\,\text{sign}(-\nabla loss(\Theta)_{ij})$. More details of the proof can be found in Appendix B. From the proof, we can see that the negative pseudo-gradient defined in Gao's work (Andrew & Gao 2007) is a special case of our descent direction. Our proposed method is more general in finding descent directions for non-smooth and non-convex objective functions. Based on the direction $d^{(k)}$ in Eq. (9), we update the model parameters along a descent direction calculated by the limited-memory quasi-Newton method (LBFGS) (Wang & Puterman 1998), which approximates the inverse Hessian matrix of Eq. (4) on the given orthant. Motivated by the OWLQN method (Andrew & Gao 2007), we also restrict the signs of the model parameters so that they do not change within each iteration. Given the chosen direction $d^{(k)}$ and the old $\Theta^{(k)}$, we constrain the orthant of the current iteration as follows: $\xi^{(k)}_{ij} = \begin{cases}\text{sign}(\Theta^{(k)}_{ij}), & \Theta^{(k)}_{ij}\neq 0\\ \text{sign}(d^{(k)}_{ij}), & \Theta^{(k)}_{ij}=0\end{cases}$ (10) When $\Theta^{(k)}_{ij}\neq 0$, the new $\Theta_{ij}$ will not change sign in the current iteration. When $\Theta^{(k)}_{ij} = 0$, we choose the orthant determined by the selected direction $d^{(k)}_{ij}$ for the new $\Theta^{(k)}_{ij}$. 2.2.2 Update direction constraint and line search Given the descent direction $d^{(k)}$, we approximate the inverse Hessian matrix $H_k$ using the LBFGS method with a sequence of $y^{(k)}$, $s^{(k)}$. The raw update direction is then $H_k d^{(k)}$. Here we give two tricks to adjust the update direction. First, we constrain the update direction to the orthant with respect to $d^{(k)}$. Second, as our objective function is non-convex, we cannot guarantee that $H_k$ is positive-definite. We use $y^{(k)T}s^{(k)} > 0$ as a condition to ensure that $H_k$ is a positive-definite matrix. If $y^{(k)T}s^{(k)}\leq 0$, we switch to $d^{(k)}$ as the update direction.
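Returning to Eq. (9) for a moment before the quasi-Newton update below, here is a small dense-NumPy sketch of the per-feature direction computation. The (d, 2m) layout, the explicit Python loop, and the function name are illustrative assumptions only; a production implementation would vectorize this and exploit feature sparsity.

```python
import numpy as np

def descent_direction(Theta, grad_loss, lam, beta):
    """Per-feature direction of Eq. (9) for f = loss + lam*||.||_{2,1} + beta*||.||_1.

    Theta, grad_loss: (d, 2m) arrays (parameters and gradient of loss at Theta).
    """
    direction = np.zeros_like(Theta)
    row_norm = np.linalg.norm(Theta, axis=1)   # ||Theta_i.||_{2,1} for each feature i
    for i in range(Theta.shape[0]):
        if row_norm[i] == 0.0:
            # Third case: the whole feature row is zero. Soft-threshold the negative
            # gradient entry-wise by beta, then shrink the row as a group by lam.
            v = np.maximum(np.abs(-grad_loss[i]) - beta, 0.0) * np.sign(-grad_loss[i])
            vnorm = np.linalg.norm(v)
            if vnorm > 0.0:
                direction[i] = (max(vnorm - lam, 0.0) / vnorm) * v
            continue
        # s = -grad - lam * Theta_ij / ||Theta_i.||; for entries with Theta_ij = 0
        # the second term vanishes automatically.
        s = -grad_loss[i] - lam * Theta[i] / row_norm[i]
        nz = Theta[i] != 0.0
        direction[i, nz] = s[nz] - beta * np.sign(Theta[i, nz])                       # first case
        direction[i, ~nz] = np.maximum(np.abs(s[~nz]) - beta, 0.0) * np.sign(s[~nz])  # second case
    return direction
```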
The final update direction $p_k$ is defined as follows: $p_k = \begin{cases}\pi(H_k d^{(k)};\, d^{(k)}), & y^{(k)T}s^{(k)} > 0\\ d^{(k)}, & \text{otherwise}\end{cases}$ (11) Given the update direction, we use a backtracking line search to find a proper step size $\alpha$. As in OWLQN, we project the new $\Theta^{(k+1)}$ onto the orthant determined by Eq. (10): $\Theta^{(k+1)} = \pi(\Theta^{(k)} + \alpha p_k;\, \xi^{(k)})$ (12) 2.3 Algorithm A pseudo-code description of the optimization is given in Algorithm 1. Algorithm 1 (Optimize problem Eq. (4)). Input: choose an initial point $\Theta^{(0)}$; set $S\leftarrow\{\}$, $Y\leftarrow\{\}$. For $k = 0$ to MaxIters: 1. Compute $d^{(k)}$ with Eq. (9). 2. Compute $p_k$ with Eq. (11) using $S$ and $Y$. 3. Find $\Theta^{(k+1)}$ with the constrained line search (12). 4. If the termination condition is satisfied, stop and return $\Theta^{(k+1)}$. 5. Update $S$ with $s^{(k)} = \Theta^{(k)} - \Theta^{(k-1)}$. 6. Update $Y$ with $y^{(k)} = -d^{(k)} - (-d^{(k-1)})$. End for. In fact, only a few steps of the standard LBFGS algorithm need to change. These modifications are: 1. The direction $d^{(k)}$ which minimizes the directional derivative of the non-convex objective is used in place of the negative gradient. 2. The update direction is constrained to the orthant defined by the chosen direction $d^{(k)}$, and we switch to $d^{(k)}$ when $H_k$ is not positive-definite. 3. During the line search, each search point is projected onto the orthant of the previous point. 3 Implementation In this section, we first describe our parallel implementation of the LS-PLM model for large-scale data, and then introduce an important trick which greatly accelerates the training procedure. Figure 2: The architecture of the parallel implementation. Figure A illustrates the physical distributed topology. It is a variant of the parameter server, where each computation node runs both a server and a worker, aiming to maximize the utilization of computation power as well as memory. Figure B illustrates the parameter server structure in a model-parallel and data-parallel manner. 3.1 Parallel implementation To apply Algorithm 1 in large-scale settings, we implement it with a distributed learning framework, as illustrated in Figure 2. It is a variant of the parameter server. In our implementation, each computation node runs both a server node and a worker node, aiming to: \u2022 Maximize the utilization of CPU computation power. In the traditional parameter server setting, server nodes act as a distributed KV store with push and pull interfaces, which involve little computation. Co-locating worker nodes makes full use of the computation power. \u2022 Maximize the utilization of memory. Machines today usually have large memory, for example 128 GB. Running on the same computation node, the server node and the worker node can share and utilize this memory better. In brief, there are two roles in the framework. The first role is the worker node. Each worker node stores a part of the training data and a local model, which only keeps the model parameters used by its local training data. The second role is the server node. Each server node stores a mutually exclusive part of the global model. In each iteration, all of the worker nodes first calculate the loss and the descent direction with their local model and local data in parallel (data parallelism). Then the server nodes aggregate the loss and the direction $d^{(k)}$, as well as the corresponding entries of $\Theta$ needed to calculate the revised gradient (model parallelism). After finishing calculating the steepest descent direction in Step 1, workers synchronize the corresponding entries of $\Theta$ and then perform Steps 2-6 locally. 3.2 Common Feature Trick Figure 3: Common feature pattern in display advertising. Usually, in each page view, a user sees several different ads at the same time. In this situation, user features can be shared across these samples.
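As a complement to the optimizer summarized above (Algorithm 1 and Eqs. (10)-(12)), and before turning to the common-feature optimization, here is a minimal single-machine sketch of the orthant construction and the projected backtracking step. The helper names, default constants, and the simple-decrease acceptance test are assumptions; the paper only states that a backtracking line search with orthant projection is used.

```python
import numpy as np

def project_orthant(Theta, Xi):
    """Eq. (8): zero out entries whose sign disagrees with the orthant Xi."""
    return np.where(np.sign(Theta) == np.sign(Xi), Theta, 0.0)

def constrained_line_search(f, Theta, p, d, alpha0=1.0, shrink=0.5, max_tries=30):
    """Backtracking over Eq. (12) on the orthant of Eq. (10).

    f: objective callable; Theta: current point; p: update direction (Eq. 11);
    d: descent direction (Eq. 9). Accepts the first step giving a simple decrease.
    """
    Xi = np.where(Theta != 0.0, np.sign(Theta), np.sign(d))   # Eq. (10)
    f0 = f(Theta)
    alpha = alpha0
    for _ in range(max_tries):
        Theta_new = project_orthant(Theta + alpha * p, Xi)    # Eq. (12)
        if f(Theta_new) < f0:
            return Theta_new, alpha
        alpha *= shrink
    return Theta, 0.0   # no decrease found within max_tries
```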
In addition to the general-purpose parallel implementation, we also optimize the implementation for the online advertising context. Training samples in CTR prediction tasks usually share a common feature pattern. Take display advertising as an example: as illustrated in Figure 3, during each page view a user sees several different ads at the same time. For example, user U1 in Figure 3 sees three ads in one visit session and thus generates three samples. In this situation, the features of user U1 can be shared across these three samples. These features include the user profile (sex, age, etc.) and user behavior histories during visits to Alibaba's e-commerce websites, for example, shopping item IDs, preferred brands or favorite shop IDs. Recalling the model defined in Eq. (2), most of the computation cost lies in $\mu_i^T x$ and $w_i^T x$. By employing the common feature trick, we can split the calculation into common and non-common parts and rewrite them as follows: $\mu_i^T x = \mu_{i,c}^T x_c + \mu_{i,nc}^T x_{nc}$, $w_i^T x = w_{i,c}^T x_c + w_{i,nc}^T x_{nc}$ (13) Hence, for the common feature part, we only need to calculate it once and then index the result, which is reused by the following samples. Specifically, we optimize the parallel implementation with the common feature trick in the following three aspects: \u2022 Group training samples with common features and make sure these samples are stored in the same worker. \u2022 Save memory by storing common features shared by multiple samples only once. \u2022 Speed up iteration by updating the loss and gradient w.r.t. common features only once. Due to the common feature pattern of our production data, employing the common feature trick improves the performance of the training procedure greatly, which will be shown in Section 4.3. 4 Experiments In this section, we evaluate the performance of LS-PLM. Our datasets are generated from Alibaba's mobile display advertising product system. As shown in Table 1, we collect seven datasets over continuous sequential periods, aiming to evaluate the consistent performance of the proposed model, which is important for online product serving. In each dataset, training/validation/testing samples are disjointly collected from different days, with proportions of about 7:1:1. The AUC (Fawcett 2006) metric is used to evaluate model performance. Table 1: Alibaba's mobile display advertising CTR prediction datasets (columns: dataset, #features, #samples for training/validation/testing). Dataset 1: $3.04\times 10^6$ features, $1.34/0.25/0.26\times 10^9$ samples; dataset 2: $3.27\times 10^6$, $1.44/0.26/0.26\times 10^9$; dataset 3: $3.49\times 10^6$, $1.56/0.26/0.25\times 10^9$; dataset 4: $3.67\times 10^6$, $1.62/0.25/0.26\times 10^9$; dataset 5: $3.82\times 10^6$, $1.69/0.26/0.26\times 10^9$; dataset 6: $3.95\times 10^6$, $1.74/0.26/0.26\times 10^9$; dataset 7: $4.07\times 10^6$, $1.78/0.26/0.26\times 10^9$. 4.1 Effectiveness of the division number LS-PLM is a piece-wise linear model, with the division number $m$ controlling the model capacity. We evaluate the effect of the division number on the model's performance. The experiment is executed on dataset 1 and the results are shown in Figure 4. Generally speaking, a larger $m$ means more parameters and thus leads to larger model capacity, but the training cost, both time and memory, also increases. Hence, in real applications we have to balance model performance against training cost. Figure 4: Model performance with different division numbers. Figure 4 shows the training and testing AUC with different division numbers $m$.
We try $m = 6, 12, 24, 36$; the testing AUC for $m = 12$ is markedly better than for $m = 6$, and the improvement for $m = 24, 36$ is relatively modest. Thus, in all the following experiments, the parameter $m$ is set to 12 for the LS-PLM model. 4.2 Effectiveness of regularization As stated in Section 2, in order to keep our model simpler and better generalized, we prefer to constrain the model parameters to be sparse with both the $L_1$ and $L_{2,1}$ norms. Here we evaluate the strength of both regularization terms. Table 2 gives the results. As expected, both the $L_1$ and $L_{2,1}$ norms can push our model to be sparse. The model trained with the $L_{2,1}$ norm has only 9.4% non-zero parameters left, with 18.7% of the features kept. In the $L_1$-norm case, only 1.9% non-zero parameters are left. Combining the two, we get the sparsest result. Table 2: Regularization effects on model sparsity and performance (columns: $\beta/\lambda$ ($L_1$/$L_{2,1}$), #features, #non-zero parameters, testing AUC). 0/0: $3.04\times 10^6$, $7.30\times 10^7$, 0.6489; 0/1: $5.68\times 10^5$, $6.64\times 10^6$, 0.6570; 1/0: $3.87\times 10^5$, $1.33\times 10^6$, 0.6617; 1/1: $2.55\times 10^5$, $1.15\times 10^6$, 0.6629. Table 3: Training cost comparison with/without the common feature (CF) trick (columns: cost, without CF, with CF, cost saving). Memory cost/node: 89.2 GB, 31 GB, 65.2%; time cost/iteration: 121 s, 10 s, 91.7%. Meanwhile, models trained with different norms achieve different AUC performance. Again, adding the two norms together, the model reaches the best AUC performance. In this experiment, the hyper-parameter $m$ is set to 12. The parameters $\beta$ and $\lambda$ are selected by grid search; $\{0.01, 0.1, 1, 10\}$ is tried for both norms in all cases. The model with $\beta = 1$ and $\lambda = 1$ performs best. 4.3 Effectiveness of the common feature trick We demonstrate the effectiveness of the common feature trick. Specifically, we set up the experiments with 100 workers, each of which uses 12 CPU cores, with up to 110 GB of memory in total. As shown in Table 3, compressing instances with the common feature trick does not affect the actual dimensionality of the feature space. However, in practice we can significantly reduce memory usage (to about 1/3) and speed up the calculation (around 12 times faster) compared to training without the common feature trick. Figure 5: Model performance comparison on 7 different test datasets. LS-PLM shows a consistent and marked improvement compared with LR. 4.4 Comparison with LR We now compare LS-PLM with LR, the widely used CTR prediction model in production settings. The two models are trained using our distributed implementation architecture, running on hundreds of machines for speed-up. The choices of the $L_1$ and $L_{2,1}$ parameters for LS-PLM and the $L_1$ parameter for LR are based on grid search; $\beta = 0.01, 0.1, 1, 10$ and $\lambda = 0.01, 0.1, 1, 10$ are tried. The best parameters are $\beta = 1$ and $\lambda = 1$ for LS-PLM, and $\beta = 1$ for LR. As shown in Figure 5, LS-PLM clearly outperforms LR. The average improvement in AUC over LR is 1.44%, which has a significant impact on overall online ad system performance. Besides, the improvement is stable. This ensures that LS-PLM can be safely deployed in the daily online production system." + } + ] + }, + "edge_feat": {} + } +}