diff --git "a/abs_29K_G/test_abstract_long_2405.01248v1.json" "b/abs_29K_G/test_abstract_long_2405.01248v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01248v1.json" @@ -0,0 +1,885 @@ +{ + "url": "http://arxiv.org/abs/2405.01248v1", + "title": "DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines", + "abstract": "Diffusion models have emerged as dominant performers for image generation. To\nsupport training large diffusion models, this paper studies pipeline parallel\ntraining of diffusion models and proposes DiffusionPipe, a synchronous pipeline\ntraining system that advocates innovative pipeline bubble filling technique,\ncatering to structural characteristics of diffusion models. State-of-the-art\ndiffusion models typically include trainable (the backbone) and non-trainable\n(e.g., frozen input encoders) parts. We first unify optimal stage partitioning\nand pipeline scheduling of single and multiple backbones in representative\ndiffusion models with a dynamic programming approach. We then propose to fill\nthe computation of non-trainable model parts into idle periods of the pipeline\ntraining of the backbones by an efficient greedy algorithm, thus achieving high\ntraining throughput. Extensive experiments show that DiffusionPipe can achieve\nup to 1.41x speedup over pipeline parallel methods and 1.28x speedup over data\nparallel training on popular diffusion models.", + "authors": "Ye Tian, Zhen Jia, Ziyue Luo, Yida Wang, Chuan Wu", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion models have emerged as dominant performers for image generation. To\nsupport training large diffusion models, this paper studies pipeline parallel\ntraining of diffusion models and proposes DiffusionPipe, a synchronous pipeline\ntraining system that advocates innovative pipeline bubble filling technique,\ncatering to structural characteristics of diffusion models. State-of-the-art\ndiffusion models typically include trainable (the backbone) and non-trainable\n(e.g., frozen input encoders) parts. We first unify optimal stage partitioning\nand pipeline scheduling of single and multiple backbones in representative\ndiffusion models with a dynamic programming approach. We then propose to fill\nthe computation of non-trainable model parts into idle periods of the pipeline\ntraining of the backbones by an efficient greedy algorithm, thus achieving high\ntraining throughput. Extensive experiments show that DiffusionPipe can achieve\nup to 1.41x speedup over pipeline parallel methods and 1.28x speedup over data\nparallel training on popular diffusion models.", + "main_content": "INTRODUCTION Diffusion models have become the dominant choice for content generation today, including text-image synthesis (Choi et al., 2021) and video generation (Ramesh et al., 2022). Large diffusion models such as Stable Diffusion (Rombach et al., 2022), ControlNet (Zhang & Agrawala, 2023), and Imagen (Saharia et al., 2022) achieve state-of-the-art performance in various scenarios. 
There is a continuing trend to develop larger diffusion models by increasing the backbone size (Rombach et al., 2022; Peebles & Xie, 2022a; Bao et al., 2023; Podell et al., 2023), cascading multiple backbones to enable higher-resolution image generation (Nichol et al., 2021; Peebles & Xie, 2022a; Saharia et al., 2022; Ho et al., 2022; Podell et al., 2023), and combining different transformer architectures with diffusion models (Peebles & Xie, 2022a; Zhang & Agrawala, 2023; Wu et al., 2023). Data parallelism is adopted for distributed diffusion model training (Falcon & The PyTorch Lightning team, 2019; Bian et al., 2021; von Platen et al., 2022). For large diffusion models, this method duplicates parameters, which limits the training batch size (Rombach et al., 2022; Ho et al., 2022; Saharia et al., 2022) and device utilization, and causes significant synchronization overhead, especially when the training scale is large (Narayanan et al., 2019). Pipeline parallelism (Huang et al., 2019; Narayanan et al., 2019; Luo et al., 2022) has been widely adopted to train large DNN models; it partitions networks across multiple devices and pipelines micro-batch processing across model partitions, substantially alleviating memory consumption on a single device and enabling larger training batch sizes. Although pipeline parallelism is potentially useful in enabling larger diffusion model training, it has not been well explored for diffusion models, and its application faces several challenges, as follows: First, the structural characteristics and special training procedures of diffusion models cannot be handled well by traditional pipelining methods. A diffusion model typically contains a trainable part with one or multiple backbone models (e.g., U-Net) (Rombach et al., 2022), and a non-trainable part with frozen text and image encoders, and they are usually trained with special techniques such as self-conditioning (Chen et al., 2022), which involves an additional forward computation pass on the backbone. Pipeline training involves only the trainable part, while the non-trainable part is not readily handled by existing pipeline training methods because it does not require pipelining. Self-conditioning is beyond the scope of existing pipeline systems, as they assume that there is only one forward pass. Second, pipeline bubbles are often significant in synchronous pipeline training (Huang et al., 2019; Fan et al., 2021; Luo et al., 2022), which is more widely used in practice because it does not alter model performance, but which involves periodic pipeline flushing. We identify a unique opportunity to fill the pipeline bubbles using the computation of non-trainable model components, to substantially improve device utilization and expedite training. However, there are dependencies between the trainable and non-trainable parts that block pipeline bubble filling by overlapping their execution. In addition, how to partition the non-trainable part into sets of layers and insert them into pipeline bubbles has not been studied. Third, non-trainable layers with extra-long execution time are common in frozen encoders (Kingma & Welling, 2013).
Such layers may not fit into any pipeline bubble and block filling pipeline bubbles with all subsequent layers in the non-trainable part, which cannot be solved by only partitioning the non-trainable part into sets of layers. In addition, as non-trainable layers' execution times are discrete, it is unlikely that the idle time in individual pipeline bubbles can be fully utilized, leading to performance degradation. In this paper, we propose DiffusionPipe, an efficient pipeline training system designed specifically for large diffusion models. DiffusionPipe systematically determines optimized model partitioning, stages, and replication settings while applying pipeline bubble filling techniques. These optimizations are tailored for a variety of representative diffusion models and training methods. To the best of our knowledge, we are the first to enable efficient pipeline parallel training of diffusion models. Our contributions can be summarized as follows: ▷ We propose a unified dynamic programming-based algorithm for optimized model partitioning that can handle various training scenarios, e.g., models with different numbers of backbones and models trained with self-conditioning. The proposed partitioning algorithm optimizes the model partitioning scheme under various settings of the number of stages and the number of micro-batches, with performance comparable to state-of-the-art pipeline paradigms under traditional pipelining, and it effectively handles scenarios beyond traditional pipelining and specific to diffusion models. ▷ We design a novel pipeline bubble filling strategy that fills the non-trainable part computation into the bubble time of the pipeline training of the backbone(s), effectively eliminating pipeline bubbles. It efficiently partitions the non-trainable components and the input data for bubble filling, and it addresses dependencies between the non-trainable part and the trainable part by allowing cross-iteration overlapping of backbone training of an iteration and non-trainable part computation of the next iteration, filling pipeline bubbles of the former with the latter. ▷ We effectively handle extra-long non-trainable layers which do not fit into individual pipeline bubbles, through a partial-batch processing design in which such a non-trainable layer processes only a portion of a training batch. A partial-batch layer's execution time can be precisely controlled by its input batch size, enabling it to be inserted into bubbles. In addition, partial-batch layers help eliminate the remaining idle time in pipeline bubbles after inserting non-trainable layers (processing a complete batch).
Figure 1. Training process of Stable Diffusion v2.1 (Rombach et al., 2022) and additional feedback of self-conditioning (Chen et al., 2022). Non-trainable components are marked in grey boxes.
Table 1. Ratio of the forward time of the non-trainable part to the forward and backward time of the trainable part on A100 GPU
Model / Batch size: 8, 16, 32, 64
Stable Diffusion v2.1: 38%, 41%, 43%, 44%
ControlNet v1.0: 76%, 81%, 86%, 89%
We implement DiffusionPipe and compare it to state-of-the-art data parallel training systems (Rasley et al., 2020) and ZeRO-3 (Rajbhandari et al., 2021), together with synchronous pipeline training paradigms, including SPP (Luo et al., 2022) and GPipe (Huang et al., 2019).
Experimental results show that DiffusionPipe achieves up to 1.28x speedup over data parallel training and up to 1.41x speedup over existing pipeline parallel methods on representative diffusion models. We observe that DiffusionPipe achieves almost complete elimination of pipeline bubbles and effectively handles multiple training scenarios of diffusion models.
2 BACKGROUND AND MOTIVATION
2.1 Diffusion models and training
Diffusion models (Ho et al., 2020; Song et al., 2020; Chen et al., 2022; Rombach et al., 2022; Ho et al., 2022; Saharia et al., 2022; Podell et al., 2023) are generative models that learn to reverse the diffusion process that gradually turns data into noise. They typically comprise a backbone model that performs image generation and multiple frozen encoders that encode image and conditional information, e.g., class information (Yu et al., 2015), text description (Deng et al., 2009), canny edge (Canny, 1986) and human pose (Kreiss et al., 2021), and provide it as input to the backbone. During diffusion model training, the encoders are typically fixed and executed in advance in the forward computation pass (referred to as the non-trainable part), while the backbone (the trainable part) is trained with both forward computation and backward propagation (Fig. 1). Table 1 compares the execution time of the non-trainable part and the training time (forward and backward) of the trainable part.
Figure 2. FIFO-1F1B schedule of a DNN. Gray blocks without numbers indicate pipeline bubbles. Potential critical paths are marked with a dashed line. Numbers indicate the micro-batch index in both forward (blue) and backward (pink) steps.
Figure 3. Bidirectional pipeline schedule of a DNN. Communication omitted. Numbers and colors have the same meaning as in Fig. 2. Micro-batches 0 to 3 pipeline from device 0 to 3 (down direction), while micro-batches 4 to 7 pipeline from device 3 to device 0 (up direction).
Some diffusion models, e.g., Cascaded Diffusion Models (CDM) (Ho et al., 2022; Ramesh et al., 2022; Podell et al., 2023), involve multiple backbones of different capacities for high-resolution image generation. Multiple backbones accept the same encoder outputs, and each backbone also takes the output of the preceding backbone as input. The backbones in a CDM are typically trained independently, each on a different set of devices using the same procedure, as shown in Fig. 1. In current mainstream diffusion models, U-Net (Ho et al., 2020; Rombach et al., 2022) is widely used as the backbone model. Transformer models can also serve as the backbone (Peebles & Xie, 2022a; Bao et al., 2023). T5-xxl (Raffel et al., 2020), BERT (Devlin et al., 2018) and CLIP (Radford et al., 2021) text encoders are popular text encoders, while the image encoders are often variational auto-encoders (Kingma & Welling, 2013), ViT (Dosovitskiy et al., 2020) and CLIP image encoders. There are corresponding encoders (Zhang & Agrawala, 2023) for other modalities, such as canny edge and human pose.
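To make the split between frozen encoders and a trainable backbone concrete, the following is a minimal PyTorch-style sketch of one training step; all names here are placeholders rather than DiffusionPipe APIs, and the noising step is a toy stand-in for a real diffusion schedule.

import torch
import torch.nn.functional as F

def training_step(backbone, text_encoder, image_encoder, optimizer, batch):
    # Non-trainable part: frozen encoders, forward computation only.
    with torch.no_grad():
        cond = text_encoder(batch["caption_ids"])
        latents = image_encoder(batch["images"])

    # Toy noising; real diffusion models use a proper noise schedule.
    noise = torch.randn_like(latents)
    t = torch.randint(0, 1000, (latents.size(0),), device=latents.device)
    noisy = latents + noise * (t.float() / 1000).view(-1, 1, 1, 1)

    # Trainable part (the backbone): forward computation and backward propagation.
    pred = backbone(noisy, t, cond)
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()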
Self-conditioning (Chen et al., 2022) has become a very popular technique for training diffusion models (Rombach et al., 2022; Saharia et al., 2022; Yuan et al., 2022); it improves sampling quality by introducing an additional forward computation pass of the backbone (Fig. 1). The output of this forward pass is fed back to the backbone and serves as a conditional input. The fidelity of the image is then improved because each step is conditioned on the previously generated samples.
Table 2. Proportion of synchronization in training iteration time at local batch size 8 on A100 GPUs
Model / GPU count: 8, 16, 32, 64
Stable Diffusion v2.1: 5.2%, 19.3%, 36.1%, 38.1%
ControlNet v1.0: 6.9%, 22.7%, 39.1%, 40.1%
2.2 Pipeline parallel training, schedule and pipeline bubble
Pipeline parallel training partitions the model into stages, and each stage is deployed on a single device; the input data batch in each training iteration is divided into multiple micro-batches, which are processed through the model stages in a pipelined manner. The micro-batch execution pipelines are typically scheduled by a First-In-First-Out (FIFO) heuristic (Chen et al., 2015; Abadi et al., 2016; Sergeev & Del Balso, 2018), which executes micro-batches on model stages according to their ready order. The One-Forward-One-Backward (1F1B) schedule is widely adopted with FIFO; it alternately executes forward computation and backward propagation of micro-batches on each model stage in the stable phase of the pipeline execution (when multiple micro-batches are available to run on a model stage at the same time). As illustrated in Fig. 2, this schedule allows releasing intermediate activations and reduces peak memory usage by launching the backward computation as soon as the forward computation of a micro-batch is complete. Chimera (Li & Hoefler, 2021) proposes bidirectional pipelining to reduce pipeline bubbles while keeping training synchronous. Chimera maintains two pipelines of micro-batch execution in different device rank orders (i.e., pipeline directions) on the same set of model stages, with the two pipeline execution schedules being symmetric along the device dimension. An example of bidirectional pipelining is shown in Fig. 3. Each micro-batch's execution can fit perfectly into pipeline bubbles of its counterpart in the pipeline of the other direction (when the number of stages is even). In synchronous pipeline training, pipeline bubbles generally exist in the pipeline schedule (Fig. 2). Gradient synchronization imposes a barrier between pipeline stages of the trainable part of diffusion models at different iterations, which prevents pipeline bubbles from being filled by the trainable part of different iterations. Therefore, although pipeline bubbles can be partially reduced by applying a better model partitioning and pipeline schedule, e.g., SPP (Luo et al., 2022) and Chimera (Li & Hoefler, 2021), such approaches cannot fundamentally eliminate pipeline bubbles, as they only manipulate the trainable part of the model and do not take the non-trainable part into consideration.
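Under a simplified cost model (uniform per-stage micro-batch time, communication ignored), the bubble share of such a synchronous schedule can be estimated with a one-line formula; the helper below is an illustrative approximation, not DiffusionPipe's profiler.

def bubble_fraction(num_stages: int, num_microbatches: int) -> float:
    # With S stages and M micro-batches, each device is busy for M slots out of
    # roughly M + S - 1, so about (S - 1) / (M + S - 1) of its time is idle.
    s, m = num_stages, num_microbatches
    return (s - 1) / (m + s - 1)

# Example: 4 stages and 4 micro-batches give 3/7 (about 42.9%) idle time in this
# idealized model; the profiled ratios in Fig. 4 are somewhat lower because the
# iteration time there also includes the non-trainable part.
print(f"{bubble_fraction(4, 4):.1%}")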
2.3 Synchronization overhead and memory consumption of data parallel training
Diffusion models are largely trained using data parallelism nowadays (Rombach et al., 2022; Ho et al., 2022; Saharia et al., 2022; Podell et al., 2023), which involves significant parameter synchronization overhead among devices and large memory consumption on each device, restricting the maximum feasible local batch size and the device utilization. For example, Stable Diffusion is trained at a local batch size of only 8 on each TPU-v3 (32GB) in (Rombach et al., 2022), consuming about 24.3 GB of memory, which results in limited device utilization and exacerbates the synchronization portion of the training time. The synchronization overhead in Table 2 is computed as the ratio of parameter synchronization time to the end-to-end time of a training iteration. As the number of devices increases, parameter synchronization soon takes up a significant portion of the iteration time. In summary, the data parallel style of diffusion model training limits the training batch size and imposes high synchronization overhead.
2.4 Efficient pipeline bubble filling with non-trainable components
We profile the iteration training time of two popular diffusion models (without self-conditioning) by pipelining their backbones under different model stage and micro-batch number settings, and executing the non-trainable part using data parallelism before backbone training. Fig. 4 shows the pipeline bubble ratios, where the iteration time is the sum of the pipeline training time of the backbone and the execution time of the non-trainable part in each training iteration.
Figure 4. Ratio of pipeline bubble time to iteration time (upper) and ratio of pipeline bubble time to non-trainable part execution time (lower) at batch size 64 using FIFO-1F1B scheduling, for (a) Stable Diffusion v2.1 and (b) ControlNet v1.0.
Pipeline bubbles can take up to 68% of the overall training time, which is quite significant, according to the upper numbers in Fig. 4. In the lower numbers, a ratio close to 1 indicates that the pipeline bubble time can be almost completely filled by scheduling the non-trainable part in pipeline bubbles, under the respective model stage and micro-batch numbers. This observation motivates us to advocate pipeline bubble filling with the non-trainable part, and to study the detailed bubble filling strategies. Fig. 5 shows that many non-trainable layers (indexed 0 to 21) in both models have short execution times; these belong to the frozen text encoder. Most layers (indexed 22 to 41) from the frozen image encoder take a moderate amount of time to compute (less than 30 ms).
Figure 5. Execution time of non-trainable layers at batch size 64, for (a) Stable Diffusion v2.1 and (b) ControlNet v1.0.
Such a distribution of
non-trainable layers, with a large proportion of short and moderately long layer execution times, provides excellent opportunities for executing individual layers in pipeline bubbles ranging from 10 to 100 ms. There are also some non-trainable layers with extra-long execution times (more than 400 ms), as shown in Fig. 5. Such layers may not fit into any pipeline bubble. Nevertheless, we observe that the layer execution time can be precisely controlled by adjusting the input batch size. Fig. 6 shows the execution times of the layers with the longest execution times at different batch sizes.
Figure 6. Execution time of the top-3 non-trainable layers with the longest execution times under different batch sizes, compared to the longest pipeline bubble time when there are 4 micro-batches and different numbers of stages at batch size 64 using FIFO-1F1B scheduling, for (a) Stable Diffusion v2.1 and (b) ControlNet v1.0.
When the batch size is reduced to 16, most of these non-trainable layers can fit into the longest pipeline bubble obtained by the way we identify bubbles in Fig. 2, implying that we can run such layers in pipeline bubbles by partitioning their input. We seek to design an efficient algorithm to schedule the execution of non-trainable layers into pipeline bubbles.
3 SYSTEM DESIGN
Fig. 7 presents an overview of DiffusionPipe, which comprises two modules: (1) the front-end carries out our workflow of generating an optimized pipeline training schedule for an input diffusion model, including pipeline training configurations of the backbone(s) and bubble-filling strategies of the non-trainable part; (2) the back-end is an execution engine that performs pipeline training according to the optimized pipeline schedule.
Figure 7. The architecture of DiffusionPipe.
Table 3. Pipeline training hyper-parameters
S: Number of model stages
M: Number of micro-batches
D: Pipeline parallel group size
3.1 Workflow
DiffusionPipe takes the diffusion model configuration, the training batch size, and the cluster configuration (i.e., number of machines and number of devices per machine) as inputs. DiffusionPipe first performs parallel profiling on the entire cluster to obtain the model layer execution time at different batch sizes (step 1 in Fig. 7), which is used in steps 2 to 5. Based on the input specifications, DiffusionPipe searches for the pipeline training hyper-parameters listed in Table 3. Note that DiffusionPipe supports mixed pipeline and data parallelism, as shown in Fig. 8.
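The observation that a layer's execution time can be tuned through its input batch size suggests a simple greedy placement scheme, sketched below. This is an illustrative simplification of the greedy bubble-filling step of the workflow (step 4, described next), not the exact algorithm of Sec. 5: profile[layer][bs] is an assumed per-layer time profile, and a layer that does not fit at the full batch size falls back to the largest partial batch that still fits.

def largest_fitting_batch(layer_profile, idle_time, max_batch):
    # Largest profiled batch size whose execution time fits into idle_time.
    fitting = [bs for bs, t in layer_profile.items()
               if bs <= max_batch and t <= idle_time]
    return max(fitting) if fitting else 0

def greedy_fill(bubbles, layers, profile, batch_size):
    # bubbles: idle durations of the backbone pipeline; layers: non-trainable
    # layers in dependency order. Returns (layer, bubble index, batch size) triples.
    idle = list(bubbles)
    placement = []
    for layer in layers:
        remaining = batch_size
        while remaining > 0:
            i = max(range(len(idle)), key=lambda j: idle[j])   # most idle time left
            bs = largest_fitting_batch(profile[layer], idle[i], remaining)
            if bs == 0:
                return placement, False    # this layer cannot be fully overlapped
            placement.append((layer, i, bs))
            idle[i] -= profile[layer][bs]
            remaining -= bs
    return placement, True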
For each feasible hyper-parameter combination (S, M, and D), DiffusionPipe generates a near-optimal partitioning scheme for the trainable backbone(s) (§4, step 2), including the number of layers in each model stage and the number of devices on which each stage is replicated. According to the corresponding pipeline schedule generated in step 3, DiffusionPipe further partitions the non-trainable part and fills it into pipeline bubbles (§5, step 4). Then DiffusionPipe generates the overall pipeline training schedules, and selects the optimal one with minimum iteration time (step 5). Finally, DiffusionPipe generates pipeline instructions for the back-end module according to the overall pipeline schedule (step 6). (A pipeline parallel group is a minimum group of devices on which a complete set of pipeline communications is performed. In DiffusionPipe, pipeline parallel group size (i.e., D) = world size (i.e., number of devices in the cluster) / data parallel degree.)
Figure 8. DiffusionPipe's data and pipeline parallelism. Devices in the same color run the same model stage.
3.2 Cross-iteration pipelining
For effective pipeline bubble filling that respects data dependencies between the non-trainable part and the backbone(s), DiffusionPipe advocates cross-iteration pipeline bubble filling: the bubble time of the backbone pipeline training in one iteration is filled with the non-trainable part computation of the next iteration, as shown in Fig. 9.
Figure 9. Cross-iteration pipelining of a diffusion model. Numbers indicate the micro-batch index of a pipeline stage.
Non-trainable layers can be computed in a data parallel manner without pipelining, following their inter-layer data dependencies. At the end of a training iteration, the output of the non-trainable part is collected and divided into micro-batches according to the pipeline training configurations of the backbone(s). In the next iteration, these intermediate results are loaded onto the correct devices and fed as input to the pipeline training of the backbone(s). The non-trainable part is run separately (without overlapping) only in the first iteration, to bootstrap such overlapping. The cross-iteration pipeline is mathematically equivalent to data parallel and synchronous pipeline training.
4 BACKBONE PARTITIONING
In this section, we present a unified dynamic programming approach to optimize partitioning and device assignment of the trainable part in diffusion models.
4.1 Single backbone
We first consider a diffusion model with a single backbone. The high-level idea is to analyze the critical path of FIFO-1F1B pipelining of the backbone and derive an upper bound on its execution time, in order to identify the optimal partitioning scheme that minimizes this execution time. We use the notations in Table 4. FIFO pipeline execution can be divided into three phases, i.e., warm-up, stable and cool-down, as shown in Fig. 2. It launches micro-batch processing one by one in the warm-up phase and waits for all micro-batches to be completed in the cool-down phase.
Table 4. Notations
L: Number of layers in the backbone model
B, B, b: Training batch size, micro-batch size, and number of samples in a partial-batch
S, s: Set of model stages, and a model stage
Pf_l(B), Pb_l(B): Forward and backward computation time of layer l given batch size B
Cf_{l,l+1}(B), Cb_{l+1,l}(B): Data size of communication in the forward and backward pass between layers l and l+1 given batch size B
Rx, Lx: Bandwidth and latency of communication type x (e.g., allreduce (ar), point-to-point (p2p))
Gl(B): Gradient size of layer l given batch size B
Ol(B): Output size of layer l given batch size B
TS(s): Synchronization time of stage s
TC(s): Compensation time of stage s
T0: Maximum micro-batch execution time per stage or inter-stage communication time
T0^(S−C): Maximum gap between synchronization time and compensation time per stage
TB: Length of a pipeline bubble (idle time)
When we enlarge the last stage's execution time to the longest among all stages, enforcing it onto the critical path, the warm-up phase contains forward computation on S − 1 model stages (aka forward stages). Similarly, the cool-down phase includes backward computation on S − 1 model stages (aka backward stages). The stable phase of the critical path contains M forward stages and M backward stages, where M is the number of micro-batches. Therefore, there are 2(M + S − 1) forward and backward stages on the critical path of the FIFO-1F1B pipeline schedule in total. Considering the intermediate data communication between model stages in pipeline training, we add S − 1 inter-stage communications in the forward and backward passes, respectively, which gives 2(M + S − 1) + 2(S − 1) forward and backward stages together with communications on the critical path. We use T0 to denote the maximum, over all model stages, of the time to run the forward plus backward computation of a micro-batch on a model stage and the communication time between two stages. Then we have an upper bound T0(M + 2S − 2) on the execution time of the critical path. We further consider the parameter synchronization time among the micro-batches and add T0^(S−C), i.e., the maximum gap between TS(s) and TC(s) over all stages s, to the pipeline training time of the backbone, where TS(s) is the synchronization time of stage s and TC(s) compensates for the overlapping of parameter synchronization of stage s with the computation of later stages. Fig. 10 gives an illustration.
Figure 10. FIFO-1F1B scheduling of pipelining a backbone model with 2 stages, 2 micro-batches and self-conditioning. Colors and numbers follow the same scheme as Fig. 9. Ci,j is the communication from stage i to stage j. Cf feeds back the output of the backbone to stage 0. Fi refers to the parameter synchronization of stage i; Tc is the compensation time of stage 1.
Putting the above together, an upper bound on the FIFO-1F1B pipeline execution time is:
Tmax = T0(M + 2S − 2) + T0^(S−C)    (1)
We design a dynamic programming approach to identify the backbone partition and device assignment by minimizing Tmax. We order the D devices in a pipeline parallel group into a chain according to their rank.
Let W(L, S, r, D) denote T0 when partitioning the first L consecutive layers of the backbone into S stages, with these stages placed on devices 1 to D and the last stage s replicated on the last r devices (of the 1-to-D device chain). Additionally, let Y(L, S, r, D) denote T0^(S−C) under the same setting. The optimal partition of the backbone into S stages, with the device placement of each stage, can be computed by:
min over 1 ≤ r ≤ D of {(M + 2S − 2) W(L, S, r, D) + Y(L, S, r, D)}    (2)
W(L, S, r, D) can be decomposed into sub-problems that further partition the first l model layers into S − 1 stages on the remaining D − r devices, with the last of these stages replicated on r′ devices. Then, W(L, S, r, D) can be computed as the maximum of W(l, S − 1, r′, D − r) and the estimate of T0 contributed by the last stage s (i.e., T0(s)), and Y(L, S, r, D) can be computed in the same way, following Eqn. (3) to (8). We then add the range constraint in Eqn. (9) when optimizing Eqn. (2).
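A compact sketch of this recurrence is given below. It is a simplified reading of Eqn. (2): stage_T0(lo, hi, r) and stage_Y(lo, hi, r) are assumed callbacks standing in for Eqn. (3) to (8) (the T0 and synchronization-gap estimates of a stage holding layers lo..hi-1 replicated on r devices), and ties between W and Y are broken lexicographically rather than through the exact coupled objective.

from functools import lru_cache
import math

def partition_backbone(L, S, D, M, stage_T0, stage_Y):
    # Minimize (M + 2S - 2) * W(L, S, r, D) + Y(L, S, r, D) over r, as in Eqn. (2).

    @lru_cache(maxsize=None)
    def WY(layers, stages, r, devices):
        # (W, Y, plan) for the first `layers` layers split into `stages` stages on
        # `devices` devices, with the last stage replicated on the last r of them.
        if stages == 1:
            if r != devices:                 # a single stage must use all devices left
                return math.inf, math.inf, None
            return stage_T0(0, layers, r), stage_Y(0, layers, r), [(0, layers, r)]
        best = (math.inf, math.inf, None)
        for l in range(stages - 1, layers):              # layers kept for earlier stages
            for r_prev in range(1, devices - r + 1):     # replication of the previous stage
                w_sub, y_sub, plan = WY(l, stages - 1, r_prev, devices - r)
                if plan is None:
                    continue
                w = max(w_sub, stage_T0(l, layers, r))
                y = max(y_sub, stage_Y(l, layers, r))
                if (w, y) < best[:2]:
                    best = (w, y, plan + [(l, layers, r)])
        return best

    best_cost, best_plan = math.inf, None
    for r in range(1, D - S + 2):            # leave at least one device per other stage
        w, y, plan = WY(L, S, r, D)
        if plan is None:
            continue
        cost = (M + 2 * S - 2) * w + y
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan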
Let W(L, S, r, D) denote T_0 when partitioning the first L consecutive layers of the backbone into S stages, with these stages placed on devices 1 to D and the last stage s replicated on the last r devices (of the 1-to-D device chain). Additionally, let Y(L, S, r, D) denote T_0^{S−C} under the same setting. The optimal partition of the backbone into S stages with the device placement of each stage can be computed by:
$\min_{1 \le r \le D} \{ (M + 2S - 2)\, W(L, S, r, D) + Y(L, S, r, D) \}$    (2)
W(L, S, r, D) can be decomposed into sub-problems that further partition the first l model layers into S − 1 stages on the remaining D − r devices, with the last stage replicated on r′ devices. Then, W(L, S, r, D) can be computed as the maximum of W(l, S − 1, r′, D − r) and the estimate of T_0 for the last stage s (i.e., T_0(s)), and Y(L, S, r, D) can be computed in the same way, following Eqn. (3) to (8). We then add the range constraint in Eqn. (9) when optimizing Eqn. (2).
4.2 Benchmark Characteristics
Topic entities. For creating Tiq (Temporal Implicit Questions) we started with the years 1801-2025 and obtained an initial set of 229,318 entities. From this set, we uniformly sampled 10,000 topic entities based on their frequency, to capture a similar amount of long-tail and more prominent entities (see Table 1 for details). These fractions can be configured as required. Since some entity types were over-represented in the calendar year pages (e.g., politicians or countries), we also ensured that individual entity types do not take up more than 10% of the topic entities. In general, the topic entity set allows controlling the domain coverage within the generated implicit questions, by choosing entities of the desired types. We did not specifically configure the proportions to which the individual information sources are used within the questions, since we observed a naturally diverse distribution. Fig. 3 shows the distribution among source combinations for initiating the main and implicit part. The questions are finally split into train (6,000), dev (2,000), and test (2,000) sets. Table 1 shows the basic statistics, and Table 2 shows representative questions of the Tiq benchmark.
Meta-data. Tiq provides implicit questions and gold answers, as strings as well as canonicalized to Wikipedia and Wikidata. The meta-data includes the information snippets grounding the question, the sources these were obtained from, the explicit temporal value expressed by the implicit constraint, the topic entity, the question entities detected in the snippets, and the temporal signal. The Tiq dataset is available at https://faith.mpi-inf.mpg.de.
5 EXPERIMENTS
5.1 Experimental Setup
Benchmarks. We conduct experiments on our new Tiq benchmark and TimeQuestions [24], which has been actively used in recent work on temporal QA. For ordinal questions (e.g., "what was the first album by Queen?") in TimeQuestions, we apply the same method as outlined in Sec. 3, without applying any temporal filtering.
Metrics. We use the standard QA metrics precision at 1 (P@1), mean reciprocal rank (MRR), and hit at 5 (Hit@5) [46].
Table 2: Representative questions from the Tiq benchmark. The sources below indicate the source that was used for populating the [main question part; implicit question part] of the implicit question. 1.
Who bought the Gainesville Sun after it was owned by Cowles Media Company? 2. During Colin Harvey\u2019s senior football career, which club was he a member of while he played for the England national football team? 3. Which album released by Chris Brown topped the Billboard 200 when he was performing in Sydney? 4. What television series was Hulk Hogan starring in when he signed with World Championship Wrestling? 5. Who was Bristol Palin\u2019s partner before she participated in the fall season of Dancing with the Stars, and reached the finals, finishing in third place? The New York Times Company Everton F.C. Fortune Thunder in Paradise Levi Johnston [Text; KB] [Infobox; KB] [Text; Infobox] [Text; Text] [Infobox; Text] 6. During the onset of the COVID19 pandemic, who was the New York City head of government? 7. Who was the chief executive officer at Robert Bosch GmbH before revenue reached \u20ac78.74 billion? 8. After graduating from the Rostovon-Don College of Economics and Finance, which political party did Gyula Horn join? 9. Which national football team did Carlos Alberto Torres manage before joining Flamengo? 10. What university did Robert Lee Moore work for after Northwestern University? Bill de Blasio Volkmar Denner Hungarian Working People\u2019s Party Oman national football team University of Pennsylvania [KB; Text] [KB; Infobox] [Infobox; Text] [Infobox; Infobox] [KB; KB] Table 3: Main results comparing the performance of Faith against baselines on the test sets of Tiq and TimeQuestions. Benchmark \u2192 Tiq TimeQuestions Method \u2193 P@1 MRR Hit@5 P@1 MRR Hit@5 InstructGpt [39] 0.237 n/a n/a 0.224 n/a n/a Gpt-4 [37] 0.236 n/a n/a 0.306 n/a n/a Uniqorn [42] 0.236 0.255 0.277 0.331 0.409 0.538 Unik-Qa [38] 0.425 0.480 0.540 0.424 0.453 0.486 Explaignn [13] 0.446 0.584 0.765 0.525 0.587 0.673 TempoQR [31] 0.011 0.018 0.022 0.438 0.465 0.488 CronKGQA [48] 0.006 0.011 0.014 0.395 0.423 0.450 Exaqt [24] 0.232 0.378 0.587 0.565 0.599 0.664 Faith (Proposed) 0.491 0.603 0.752 0.535 0.582 0.635 Un-Faith 0.459 0.604 0.799 0.571 0.640 0.724 Baselines. We compare Faith with a suite of baselines, covering a diverse range of competitors: \u2022 Generative LLMs. We compare with InstructGpt [39] (\u201ctextdavinci-003\u201d) and Gpt-4 [37] (\u201cgpt-4\u201d) using the OpenAI API3. We tried different prompts, and found the following to perform best: \u201cPlease answer the following question by providing the crisp answer entity, date, year, or number.\u201d. For computing P@1, we check whether the generated answer string matches with the label or any alias of the gold answer. If this is the case, P@1 is 1, else 0. Other (ranking) metrics are not applicable for LLMs. \u2022 Heterogeneous QA methods. Further, we compare against a range of recent general-purpose methods for heterogeneous QA: Uniqorn [42], UniK-Qa [38], and the vanilla Explaignn [13]. \u2022 Temporal QA methods. We also compare with state-of-the-art methods for temporal QA: TempoQR (TempoQR-Hard) [31], CronKGQA [48], and Exaqt [24]. Finally, we show results for a variant of our approach, which does not prune out evidence temporally-inconsistent with the temporal constraint, i.e. drops the temporal pruning component. We term this variant Un-Faith. Configuration. Wikidata [55] is used as the KB for Faith and all baselines. We use Wikipedia text, tables and infoboxes as additional information sources for methods operating over heterogeneous sources. The BART models are initialized via Hugging Face4. 
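The paper states only that the BART models are initialized via Hugging Face; as a minimal sketch, assuming the facebook/bart-base checkpoint (the checkpoint name and the example question are illustrative, not the authors' exact setup), initialization might look as follows.

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

# Checkpoint name is an assumption; the paper only states that the BART models
# are initialized via Hugging Face.
checkpoint = "facebook/bart-base"
tokenizer = BartTokenizerFast.from_pretrained(checkpoint)
model = BartForConditionalGeneration.from_pretrained(checkpoint)

# Toy usage with a Tiq question (before any fine-tuning, the output is not meaningful).
inputs = tokenizer(
    "During Colin Harvey's senior football career, which club was he a member of "
    "while he played for the England national football team?",
    return_tensors="pt",
    truncation=True,
)
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```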
We use AdamW as optimizer with a learning rate of 5×10^-5, batch size of 10, weight decay of 0.01, 5 epochs, and 500 warm-up steps. Explaignn is run using the public code5, retaining the original settings and parameters for optimization. For Faith, we choose the candidate at rank 1 as the answer for intermediate questions in the implicit question resolver. In case too many evidences are obtained as input to the answering stage, we consider the top-100 evidences as computed by a BERT-based reranker [36]. Further detail is given in Appendix A.4. We follow an epoch-wise evaluation strategy for each module and baseline, and take the version with the best performance on the respective dev set. All training processes and experiments are run on a single GPU (NVIDIA Quadro RTX 8000, 48 GB GDDR6).
3 https://platform.openai.com
4 https://huggingface.co
5 https://github.com/PhilippChr/EXPLAIGNN
5.2 Main Results
Answering performance of Faith and baselines on TimeQuestions and on Tiq is shown in Table 3.
Faith outperforms baselines on Tiq. The main insight from Table 3 is that Faith surpasses all baselines on the Tiq dataset for P@1, which is the most relevant metric, demonstrating the benefits of our proposed method for answering implicit temporal questions. Temporal QA methods operating over KBs lack the required coverage on the Tiq dataset, and perform worse than general-purpose QA methods operating over heterogeneous sources. Explaignn comes close to the performance of Faith, and even slightly improves on the Hit@5 metric. Note, however, that Explaignn and all other baselines do not verify that temporal constraints are met during answering. Thus, the most prominent among answer candidates may simply be picked up, even if no temporal information is provided or matching. Such possibly "accidental" and unfaithful answers are, by design, not considered by Faith.
Trade-off between faithfulness and answering performance. Results for Un-Faith illustrate the effect of this phenomenon on our approach: especially the MRR and Hit@5 results are substantially improved. Consequently, Un-Faith outperforms all competitors on TimeQuestions. However, its answers are not always faithfully grounded in evidence sources. These results emphasize the trade-off between faithfulness and answering performance.
Faith shows robust performance on TimeQuestions. Faith also shows strong performance on the TimeQuestions benchmark, on which it outperforms all baselines on P@1, except for Exaqt. This indicates the robustness of Faith across different datasets.
Table 4: Comparing the faithfulness of Faith and Un-Faith for correct answers, and how often temporal constraints are violated or ignored. Each row lists temporally faithful / temporally unfaithful on Tiq, then on TimeQuestions.
Faith: 0.95 / 0.00 | 0.94 / 0.01
Un-Faith: 0.90 / 0.08 | 0.87 / 0.13
Existing methods for temporal QA show major performance gaps between the two benchmarks: the P@1 of the strongest method on TimeQuestions, Exaqt, substantially drops from 0.565 to 0.232 on the Tiq benchmark. Note that all methods are trained on the specific benchmark, if applicable.
LLMs fall short on temporal questions. Another key insight from Table 3 is that current LLMs are clearly not capable of answering temporal questions.
InstructGpt and Gpt-4 can merely answer ≈23-30% of the questions correctly, and consistently underperform Faith and the baselines operating over heterogeneous sources. One explanation is that reasoning with continuous variables, such as time, is a well-known weakness of LLMs [15].
5.3 Faithfulness Evaluation
Our main results in Table 3 indicate that ignoring the temporal condition of the question can yield improvements on automatic metrics (compare the performance of Faith vs. Un-Faith on TimeQuestions). However, we observe that this can lead to critical failure cases of QA systems and sometimes boils down to lucky guesses of the answer based on priors (e.g., prominence of an answer candidate).
Faith refrains from answering in absence of consistent evidence. If there is no temporal information associated with the evidence of candidate answers, or the temporal information does not satisfy the temporal constraint, Faith refuses to answer the question. For example, for the question "Who did Lady Jane Grey marry on the 25th of May 1533?", there is no answer satisfying the temporal constraint because Lady Jane Grey did not marry anyone on the 25th of May 1533, since she was only born four years later in 1537. However, all of the baselines provide an answer to the question, without indicating that the temporal constraint is violated. Since questions without a temporally-consistent answer are not available at large scale, we randomly sample 500 explicit questions from TimeQuestions, and replace the temporal value with a random date (e.g., "12 October 6267"). None of the resulting questions has a temporally-consistent answer. As expected, the competitors still provide answers (except for the LLMs, for which we are not able to investigate the behavior at scale, since they would often generate longer texts). In contrast, Faith successfully refrained from answering for 467 of the 500 questions (93.4%). Upon investigating the failure cases, we noticed that the date recognition identifies four-digit numbers as years matching with the constraint (e.g., in the infobox entry "Veysonnaz, SFOS number, 6267").
Fallback to Un-Faith. Completely refraining from answering could also be sub-optimal: the user might have made a typo (e.g., "May 1533" instead of "May 1553"). We investigated falling back to Un-Faith in such scenarios, which could be indicated to end users with an appropriate warning. Performance on both datasets was slightly
Table 5: Ablation study using different source combinations as input for Faith on dev sets. Note that Faith is trained using all sources as input for all cases. Each row lists P@1 / MRR / Hit@5 on Tiq, followed by P@1 / MRR / Hit@5 on TimeQuestions.
KB: 0.293 / 0.368 / 0.468 | 0.425 / 0.464 / 0.513
Text: 0.194 / 0.262 / 0.351 | 0.224 / 0.269 / 0.320
Infoboxes: 0.169 / 0.223 / 0.296 | 0.093 / 0.117 / 0.149
Tables: 0.032 / 0.057 / 0.083 | 0.078 / 0.094 / 0.114
KB+Text: 0.429 / 0.527 / 0.649 | 0.520 / 0.567 / 0.626
KB+Tables: 0.299 / 0.379 / 0.480 | 0.435 / 0.479 / 0.536
KB+Infoboxes: 0.384 / 0.488 / 0.634 | 0.443 / 0.487 / 0.543
Text+Tables: 0.196 / 0.267 / 0.362 | 0.252 / 0.298 / 0.350
Text+Infoboxes: 0.283 / 0.372 / 0.490 | 0.251 / 0.299 / 0.355
Tables+Infoboxes: 0.179 / 0.244 / 0.331 | 0.143 / 0.174 / 0.208
All sources: 0.497 / 0.610 / 0.756 | 0.538 / 0.583 / 0.639
Table 6: Ablation studies of Faith on dev sets.
Benchmark \u2192 Tiq TimeQuestions Method \u2193 P@1 P@1 Faith 0.497 0.538 w/o temporal pruning 0.443 0.573 w/o implicit question resolver 0.467 0.559 w/o GNN-based answering 0.316 0.399 improved: the P@1 metric increased from 0.491 to 0.492 on Tiq and from 0.535 to 0.539 on TimeQuestions. We further investigated to fall back to Un-Faith in case Faith answered incorrectly. The P@1 metric was improved substantially on both datasets: from 0.491 to 0.622 on Tiq and from 0.535 to 0.653 on TimeQuestions. Manual analysis. Finally, we investigated the faithfulness of correct answers provided by Faith and Un-Faith, to understand how often the question is answered correctly even though the evidence is not faithful to the question. To analyze this qualitatively, we randomly selected 100 questions (from each benchmark) for which both Faith and Un-Faith answered correctly, and then manually verified the faithfulness, based on the definition in Sec. 2. Results are in Table 4. Faith provides faithful answers and evidence in 95%/94% of the time. By design, answers are faithful to the temporal constraints in the question (except for one question which specifies two different temporal constraints). In comparison, Un-Faith violates or ignores the temporal condition in 8%/13% of the cases. For example, to answer the question \u201cWhat movies starring Taylor Lautner in 2011?\u201d (answer: Abduction), the evidence for Faith is \u201cTaylor Lautner, Year is 2011, Title is Abduction, Role is Nathan Harper\u201d (from table), while the evidence for Un-Faith is \u201cAbduction, cast member, Taylor Lautner\u201d (from KB). Even though both pieces of evidence mention the correct answer Abduction, Un-Faith fails to satisfy the temporal constraint (\u201cin 2011\u201d) with its evidence. 5.4 In-depth Analysis Integrating heterogeneous sources is decisive. We further investigated the effect of integrating heterogeneous sources into Faith, and tested giving each individual source independently, and their pairwise combinations as input, in comparison to the default setting with \"All sources\". Results are in Table 5. Each information 7 \fWWW \u201924, May 13\u201317, 2024, Singapore, Singapore Zhen Jia, Philipp Christmann, & Gerhard Weikum Table 7: Anecdotal examples that Faith answered correctly in Tiq and TimeQuestions. Evidence shows the supporting information snippets along with their source provided in brackets. The part mentioning the predicted answer is in bold, and the detected temporal values are underlined. For the first example from the Tiq benchmark, we show the answering process of the intermediate question, which can be used by end users to verify the entire answer derivation of the system. Benchmark Tiq Question After managing FC Nantes, which football club did Antoine Raab take on next? Answer Stade Lavallois TSF \u27e8question entity: \u201cAntoine Raab, FC Nantes football\u201d, question relation: \u201cAfter managing which club did take on next\u201d, expected answer type: \u201cassociation football club\u201d, temp. signal: after, temp. category: implicit, temp. value: [1946, 1949] \u27e9 Evidence Antoine Raab, Managerial career, 1949\u20131950, Stade Lavallois. (from Infobox) Intermediate questions (i) When Antoine Raab managed FC Nantes start date? (ii) When Antoine Raab managed FC Nantes end date? Answers (to int. questions) (i) 1946, (ii) 1949 TSFs (for int. 
questions) (i) \u27e8question entity: \u201cFC Nantes, start, Antoine Raab\u201d, question relation: \u201cWhen managed date\u201d, expected answer type: \u201cyear\u201d, temp. signal: _; temp. category: non-implicit; temp. value: _ \u27e9 (ii) \u27e8question entity: \u201cFC Nantes, end, Antoine Raab\u201d, question relation: \u201cWhen managed date\u201d, expected answer type: \u201cyear\u201d, temp. signal: _; temp. category: non-implicit; temp. value: _ \u27e9 Evidence (for int. questions) (i, ii) Antoine Raab, Managerial career, 1946\u20131949, FC Nantes. (from Infobox) (ii) Antoine Raab, After the liberation of Nantes in 1944 Raab joined FC Nantes and played for the club until 1949. (from Text) Benchmark TimeQuestions Question What award did Thomas Keneally receive in the year 1982? Answer Booker Prize TSF \u27e8question entity: \u201cThomas Keneally\u201d, question relation: \u201cWhat award did receive in the year 1982\u201d, expected answer type: \u201cscience award\u201d, temp. signal: overlap, temp. category: non-implicit, temp. value: 1982 \u27e9 Evidence Man Booker Prize, winner, Thomas Keneally, point in time, 1982, for work, Schindler\u2019s Ark. (from KB) Thomas Keneally, Awards is Booker Prize, is Schindler\u2019s Ark, winner 1982. (from table) Thomas Keneally, He is best known for his non-fiction novel Schindler\u2019s Ark, the story of Oskar Schindler\u2019s rescue of Jews during the Holocaust, which won the Booker Prize in 1982. (from Text) source contributes to the performance of Faith, and integrating more information sources consistently enhances all metrics. Ablation studies. We tested variations of our pipeline on the dev sets. Table 6 shows results for Un-Faith (w/o temporal pruning), results without the implicit time resolver, and results with a Seq2seq model for answering (we used BART) instead of the GNN-based approach. Using a GNN-based answering approach plays a crucial role, and enhances not only answering performance, but also explainability. The implicit question resolver is decisive on Tiq, but slightly decreases performance on TimeQuestions. Un-Faith also shows strong performance on the dev sets. However, all modules contribute to the explainability and faithfulness of our approach. Anecdotal examples. Table 7 shows sample cases for which Faith provided the correct answer, and illustrates the answer derivation process providing traceable evidence for end users. Error analysis. To better understand failure cases, we conducted an error analysis measuring the answer presence (i.e. whether the gold answer is among answer candidates) throughout the pipeline. We identified the following error cases and list their percentage among all failure cases for Tiq and TimeQuestions, respectively: (i) the answer was not found in the initial retrieval stage (3.14/29.89), (ii) the answer is lost during temporal pruning (22.00/25.81), (iii) the answer is lost during scoring/graph shrinking (8.45/10.33), (iv) the answer is not considered among top-5 answers (15.13/12.47), (v) the answer is among top candidates but not at rank 1 (51.28/21.51). 6 RELATED WORK General-purpose QA. Question answering has extensive work using single sources like KBs (e.g., [2, 62, 64]) or text (e.g., [7, 21, 44]). Some works have shown that integrating different sources can substantially improve performance [8, 18, 47, 51, 52, 60, 61]. UnikQa [38] verbalizes snippets from a KB, text, tables and infoboxes, as input to a Fusion-in-decoder (FiD) model [21] for answer generation. 
Udt-QA [29] improved the verbalization technique. Explaignn [13] constructs graphs among such verbalized snippets, and applies graph neural networks for computing answers and explanatory evidence. None of these methods is geared for temporal questions. Another direction is to directly apply large language models (LLMs) for QA [3, 14, 41, 43]. However, LLMs cannot present traceable provenance for the generated outputs, falling short on faithfulness and explainability [1, 30, 33]. Also, LLMs struggle with reasoning on temporal conditions [15]. Temporal QA. Prior work that specifically targets temporal QA [9, 10, 16, 23\u201325, 28, 31, 34, 48\u201350, 57, 58, 63], can largely be divided into work using a KB (e.g., [24, 31, 34]), and work using text (e.g., [9, 35]). Methods operating over KBs, include template-based [16, 23, 34], KBembedding-based [10, 31, 48, 58], and graph-based methods [24, 50, 63]. Methods using textual inputs typically involve an extractive or generative reader [9, 35]. The three methods [24, 31, 48] represent the state-of-the-art on temporal QA. However, temporal constraints are handled solely in the latent space, without explicitly (or faithfully) pruning out temporally inconsistent answer candidates. Other approaches are based on handcrafted rules, and hence bound to fail for unseen question patterns (e.g., [23]). None of the existing work on temporal QA has considered incorporating heterogeneous sources. Temporal KBs. There is substantial work on temporal KBs [4, 19, 26, 32, 40, 56, 59], to assign temporal scopes to KB facts. Advances on the KB itself benefits QA, but is an orthogonal direction. 7" + }, + { + "url": "http://arxiv.org/abs/2109.08935v1", + "title": "Complex Temporal Question Answering on Knowledge Graphs", + "abstract": "Question answering over knowledge graphs (KG-QA) is a vital topic in IR.\nQuestions with temporal intent are a special class of practical importance, but\nhave not received much attention in research. This work presents EXAQT, the\nfirst end-to-end system for answering complex temporal questions that have\nmultiple entities and predicates, and associated temporal conditions. EXAQT\nanswers natural language questions over KGs in two stages, one geared towards\nhigh recall, the other towards precision at top ranks. The first step computes\nquestion-relevant compact subgraphs within the KG, and judiciously enhances\nthem with pertinent temporal facts, using Group Steiner Trees and fine-tuned\nBERT models. The second step constructs relational graph convolutional networks\n(R-GCNs) from the first step's output, and enhances the R-GCNs with time-aware\nentity embeddings and attention over temporal relations. We evaluate EXAQT on\nTimeQuestions, a large dataset of 16k temporal questions we compiled from a\nvariety of general purpose KG-QA benchmarks. Results show that EXAQT\noutperforms three state-of-the-art systems for answering complex questions over\nKGs, thereby justifying specialized treatment of temporal QA.", + "authors": "Zhen Jia, Soumajit Pramanik, Rishiraj Saha Roy, Gerhard Weikum", + "published": "2021-09-18", + "updated": "2021-09-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation. Questions and queries with temporal information needs [7, 8, 14, 20, 40] represent a substantial use case in search. 
For factual questions, knowledge graphs (KGs) like Wikidata [75], YAGO [64], or DBpedia [10] have become the go-to resource for search engines, tapping into structured facts on entities. While question answering over KGs [1, 12, 13, 16, 26, 55, 72, 77, 79] has been a major topic, little attention has been paid to the case of temporal questions. Such questions involve explicit or implicit notions of constraining answers by associated timestamps in the KG. This spans a spectrum, starting from simpler cases such as when was obama born?, where did obama live in 2001?, and where did obama live during 9/11? to more complex temporal questions like: where did obama's children study when he became president? Complex questions must consider multi-hop constraints (Barack Obama → child → Malia Obama, Sasha Obama → educated at → Sidwell Friends School), and reason on the overlap (intersection) of time points and intervals (the start of the presidency in 2009 with the study period at the school, 2009 – 2016). A simplified excerpt of the relevant zone in the Wikidata KG necessary for answering the question is shown in Fig. 1. This paper addresses these challenges that arise for complex temporal questions.
Figure 1: Wikidata excerpt showing the relevant KG zone for the question where did obama's children study when he became president? with answer Sidwell Friends School.
Limitations of state-of-the-art. Early works on temporal QA over unstructured text sources [5, 18, 33, 53, 56, 58, 71] involve various forms of question and document parsing, but do not carry over to KGs with structured facts comprised of entities and predicates. The few works specifically geared for time-aware QA over KGs include [23, 38, 76]. [38] uses a small set of hand-crafted rules for question decomposition and temporal reasoning. This approach needs human experts for the rules and does not cope with complex questions. [23] creates a QA collection for KGs that capture events and their timelines. A key-value memory network in [76] includes time information from KGs for answering simple questions.
Approach. We present Exaqt: EXplainable Answering of complex Questions with Temporal intent, a system that does not rely on manual rules for question understanding and reasoning. Exaqt answers complex temporal questions in two steps: (i) identifying a compact, tractable answer graph that contains all cues required for answering the question, based on dense-subgraph algorithms and fine-tuned BERT models; and (ii) a relational graph convolutional network (R-GCN) [66] to infer the answer in the graph, augmented with signals about time. The two stages work as follows (partly illustrated in Fig. 1). Stage 1: Answer graph construction.
Exaqt fetches all KG facts of entities mentioned in the question (Barack Obama, President of the United States: dashed outline boxes), as detected by off-theshelf NERD systems [30, 36, 44]. The resulting noisy set of facts is distilled into a tractable set by means of a fine-tuned BERT model (admitting information about the children Malia and Sasha, but not Michelle Obama). To construct a KG subgraph of all questionrelevant KG items and their interconnections from this set, Group Steiner Trees (GST) [22, 47, 61] are computed (dark orange nodes, terminals or keyword matches underlined: \u201cobama\u201d, \u201cpresident\u201d, \u201cchild\u201d, \u201ceducated at\u201d) and completed (light orange nodes). The last and decisive step at this point augments this candidate answer graph with pertinent temporal facts, to bring in cues (potentially multiple hops away from the question entities) about relevant dates, events and time-related predicates. To this end, we use an analogous BERT model for identifying question-relevant temporal facts (blue nodes: educational affiliations of Malia and Sasha and their dates). The resulting answer graph is the input of the second stage. Stage 2: Answer prediction by R-GCN. Inspired by the popular GRAFT-Net model [66] and related work [59, 65], we construct an R-GCN that learns entity embeddings over the answer graph and casts answer prediction into a node classification task. However, R-GCNs as used in prior works are ignorant of temporal constraints [6]. To overcome this obstacle, we augment the R-GCN with time-aware entity embeddings, attention over temporal relations, and encodings of timestamps [80], temporal signals [60], and temporal question categories [38]. In our running example, temporal attention helps Exaqt focus on educated at as a question-relevant relation (partly shaded nodes). The time-enhanced representation of Barack Obama flows through the R-GCN (thick edges) and boosts the likelihood of Sidwell Friends School as the answer (node with thick borders), which contains 2009 (in bold) among its temporal facts. By producing such concise KG snippets for each question (as colored in Fig. 1), Exaqt yields explainable evidence for its answers. Contributions. This work makes the following contributions: \u2022 We propose Exaqt, the first end-to-end system for answering complex temporal questions over large-scale knowledge graphs; \u2022 Exaqt applies fine-tuned BERT models and convolutional graph networks to solve the specific challenges of identifying relevant KG facts for complex temporal questions; \u2022 We compile and release TimeQuestions, a benchmark of about 16\ud835\udc58temporal questions (examples in Table 1); \u2022 Experiments over the full Wikidata KG show the superiority of Exaqt over three state-of-the-art complex KG-QA baselines. All resources from this project are available at https://exaqt.mpiinf.mpg.de/ and https://github.com/zhenjia2017/EXAQT. Category Question who won oscar for best actress 1986? Explicit which movie did jaco van dormael direct in 2009? what currency is used in germany 2012? who was king of france during the ninth crusade? Implicit what did thomas jefferson do before he was president? what club did cristiano ronaldo play for after manchester united? what was the first film julie andrews starred in? Ordinal what was the second position held by pierre de coubertin? who is elizabeth taylor\u2019s last husband? what year did lakers win their first championship? Temp. Ans. when was james cagney\u2019s spouse born? 
when was the last time the orioles won the world series? Table 1: Sample temporal questions from TimeQuestions. 2 CONCEPTS AND NOTATION We now define the salient concepts that underlie Exaqt. Knowledge graph. A knowledge graph (aka knowledge base) is a collection of facts \ud835\udc39organized as a set of triples. It can be stored as an RDF database of such triples, or equivalently as a graph with nodes and edges. Examples are Wikidata [75], YAGO [64], DBpedia [10], Freebase [17] and industrial KGs. When stored as a graph, edges are directed: subject \u21a6\u2192 predicate \u21a6\u2192object. Subjects and objects are always nodes, while predicates (aka relations) often become edge labels. Fact. A fact \ud835\udc53\u2208\ud835\udc39can either be binary, containing a subject and an object connected by a predicate, or \ud835\udc5b-ary, combining multiple items via main predicates and qualifier predicates. An example of a binary fact is , where subjects are entities (Barack Obama), and objects may be entities (Malia Obama), literals (constants such as dates in ), or types aka classes (private school in ). We use the terms predicate and relation interchangeably in this text. An \ud835\udc5b-ary fact combines several triples that belong together, such as (see Fig. 1). position held is the main predicate, President of the US is the main object, while the remaining data are pairs. \ud835\udc5b-ary facts are of vital importance in temporal QA, with a large fraction of temporal information in modern KGs being stored as qualifiers. One way of representing qualifiers in a KG is shown in Fig. 1, via paths from the main predicate to the qualifier predicate and on to the qualifier object. Temporal fact. We define a temporal fact \ud835\udc61\ud835\udc53\u2208\ud835\udc39as one where the main object or any of the qualifier objects is a timestamp. Examples are (binary), or, (\ud835\udc5b-ary). Temporal predicate. We define a temporal predicate as one that can have a timestamp as its direct object or one of its qualifier objects. Examples are date of birth and position held. Temporal question. A temporal question is one that contains a temporal expression or a temporal signal, or whose answer is of temporal nature [37]. Examples of temporal expressions are \u201cin the year 1998\u201d, \u201cObama\u2019s presidency\u201d, \u201cNew Year\u2019s Eve\u201d, etc. which indicate explicit or implicit temporal scopes [41]. Temporal signals [60] are \fComplex Temporal Question Answering on Knowledge Graphs CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Figure 2: An overview of the two-stage Exaqt pipeline. markers of temporal relations (BEFORE, AFTER, OVERLAP, ...) [6] and are expressed with words like \u201cprior to, after, during, ...\u201d that indicate the need for temporal reasoning. In our models, a question \ud835\udc5eis represented as a set of keywords <\ud835\udc5e1,\ud835\udc5e2, . . .\ud835\udc5e|\ud835\udc5e|>. Temporal question categories. Temporal questions fall into four basic categories [37]: (i) containing explicit temporal expressions (\u201cin 2009\u201d), (ii) containing implicit temporal expressions (\u201cwhen Obama became president\u201d), (iii) containing temporal ordinals (\u201cfirst president\u201d), and (iv) having temporal answers (\u201cWhen did ...\u201d). Table 1 gives several examples of temporal questions. A question may belong to multiple categories. For example, what was the first film julie andrews starred in after her divorce with tony walton? 
contains both an implicit temporal expression and a temporal ordinal. Answer. An answer to a temporal question is a (possibly singleton) set of entities or literals, e. g., {Chicago University Lab School, Sidwell Friends School} for Where did Malia Obama study before Harvard?, or {08-2017} for When did Malia start at Harvard? Answer graph. An answer graph is a subset of the KG that contains all the necessary facts for correctly answering the question. 3 CONSTRUCTING ANSWER GRAPHS Fig. 2 is an overview of Exaqt, with two main stages: (i) answer graph construction (Sec. 3), and (ii) answer prediction (Sec. 4). 3.1 Finding question-relevant KG facts NERD for question entities. Like most QA pipelines [16, 54], we start off by running named entity recognition and disambiguation (NERD) [36, 44, 73] on the input question (where did obama\u2019s children study when he became president?). NERD systems identify spans of words in the question as mentions of entities (\u201cobama\u201d, \u201cpresident\u201d), and link these spans to KG items or Wikipedia articles (which can easily be mapped to popular KGs). The facts of these linked entities (Barack Obama, President of the United States) provide us with a zone in the KG to start looking for the answer. NERD is a critical cog in the QA wheel: entity linking errors leave the main QA pipeline helpless with respect to answer detection. To mitigate this effect, we use two different systems, TagMe and ELQ [30, 44], to boost answer recall. Complex questions often contain multiple entity mentions, and accounting for two NERD systems, we could easily have 2 \u22124 different entities per question. The total number of associated facts can thus be several hundreds or more. To reduce this large and noisy set of facts to a few question-relevant ones, we fine-tune BERT [24] as follows. Training a classifier for question-relevant facts. For each question in our training set, we run NERD and retrieve all KG facts of the detected entities. We then use a distant supervision mechanism: out of these facts, the ones that contain the gold answer(s) are labeled as positive instances. While several complex questions may not have their answer in the facts of the question entities (multi-hop cases), the ones that do, comprise a reasonable amount of training data for our classifier for question-relevance. Note that facts with qualifiers are also retrieved for the question entities (complete facts where the question entity appears as a subject, object, or qualifier object): this increases our coverage for obtaining positive examples. For each positive instance, we randomly sample five negative instances from the facts that do not contain the answer. Sampling question-specific negative instances helps learn a more discriminative classifier, as all negative instances are guaranteed to contain at least one entity from the question (say, ). Using all facts that do not contain an answer would result in severe class imbalance, as this is much higher than the number of positive instances. We then pool together the paired positive and negative instances for all training questions. The fact in this pair is now verbalized as a natural language sentence by concatenating its constituents; qualifier statements are joined using \u201cand\u201d [50]. For example, the full fact for Obama\u2019s marriage (a negative instance) is: . 
This has two qualifiers, and would be verbalized as \u201cBarack Obama spouse Michelle Obama and start date 03-10-1992 and place of marriage Trinity United Church of Christ.\u201d. The questions paired with the verbalized facts, along with the binary ground-truth labels, are fed as training input to a sequence pair classification model for BERT. Applying the classifier. Following [24], the question and the fact are concatenated with the special separator token [SEP] in between, and the special classification token [CLS] is added in front of this sequence. The final hidden vector corresponding to [CLS], denoted by \ud835\udc6a\u2208R\ud835\udc3b(\ud835\udc3bis the size of the hidden state), is considered to be the accumulated representation. Weights \ud835\udc7eof a classification layer are the only parameters introduced during fine-tuning, where \ud835\udc7e\u2208R\ud835\udc3e\u00d7\ud835\udc3b, where \ud835\udc3eis the number of class labels (\ud835\udc3e= 2 here, fact is question-relevant or not). log(softmax(\ud835\udc6a\ud835\udc7e\ud835\udc7b)) is used as the classification loss function. Once the classifier is trained, given a new pair, it outputs the probability (and the label) of the fact being relevant for the question. We make this prediction for all candidate facts pertinent to a question, and sort them in descending order of this question relevance likelihood. We pick the top scoring facts {\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59} from here as our question-relevant set. 3.2 Computing compact subgraphs The set of facts {\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59} contains question-relevant facts but is not indicative as to which are a set of coherent KG items that matter for this question, and how they are connected. To this end, we induce a graph as shown in Fig. 1, from the above set of facts where each KG item (entity, predicate, type, literal) becomes a node of its own. Edges run between components of the same fact in the direction mandated in the KG: subject \u21a6\u2192predicate \u21a6\u2192object for the main fact, and subject \u21a6\u2192predicate \u21a6\u2192qualifier predicate \u21a6\u2192qualifier object for (optional) qualifiers. Injecting connectivity. BERT selects {\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59} from the facts of a number of entities as detected by our NERD systems. These entities may not be connected to each other via shared KG facts. However, a connected graph is needed so that our subsequent GST and R-GCN \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. algorithms can produce the desired effects. To inject connectivity in the graph induced from BERT facts, we compute the shortest KG path between every pair of question entities, and add these paths to our graph. In case of multiple paths of same length between two entities, they are scored for question-relevance as follows. A KG path is set of facts: a path of length one is made up of one fact (Barack Obama \u21a6\u2192position held \u21a6\u2192President of the United States), a path of length two is made up of two facts (Barack Obama \u21a6\u2192country \u21a6\u2192United States of America \u21a6\u2192office held by head of state \u21a6\u2192President of the United States), and so on. Each candidate path is verbalized as a set of facts (a period separating two facts) and encoded with BERT [39], and so is the question. These BERT encodings are stored in corresponding [CLS] tokens. 
We compute the cosine similarity of [CLS](question) with [CLS](path), and add the path with the highest cosine similarity to our answer graph. GST model. Computing Group Steiner Trees (GST) [47, 52, 61, 67] has been shown to be an effective mechanism in identifying queryspecific backbone structures in larger graphs, for instance, in keyword search over database graphs [4, 27]. Given a subset of nodes in the graph, called terminals, the Steiner Tree (ST) is the lowestcost tree that connects all terminals. This reduces to the minimum spanning tree problem when all nodes of the graph are terminals, and to the shortest path problem when there are only two terminals. The GST models a more complex situation where the terminals are arranged into groups or sets, and it suffices to find a Steiner Tree that connects at least one node from each group. This scenario fits our requirement perfectly, where each question keyword can match multiple nodes in the graph, and naturally induces a terminal group. Finding a tree that runs through each and every matched node is unrealistic, hence the group model. Edge costs. An integral part of the GST problem is how to define edge costs. Since edges emanate from KG facts, we leverage questionrelevance scores assigned by the classifier of Sec. 3.1: \ud835\udc35\ud835\udc38\ud835\udc45\ud835\udc47(\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59) \u2208 [0, 1], converted to edge costs 1 \u2212\ud835\udc35\ud835\udc38\ud835\udc45\ud835\udc47(\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59) \u2208[0, 1]. GST algorithm. There are good approximation algorithms for GSTs [45, 67], but QA needs high precision. Therefore, we adopted the fixed-parameter-tractable exact algorithm by Ding et al. [27]. It iteratively grows and merges smaller trees over the bigger graph to arrive at the minimal trees. Only taking the best tree can be risky in light of spurious connections potentially irrelevant to the question. Thus, we used a top-\ud835\udc58variant that is naturally supported by the dynamic programming algorithm of [27]. GST completion. As shown in Fig. 1, the GST yields a skeleton connecting the most relevant question nodes. To transform this into a coherent context for the question, we need to complete it with facts from where this skeleton was built. Nodes introduced due to this step are shown in light orange in the figure: dates about the presidency, Obama\u2019s children, and the (noisy) fact about Obama\u2019s education. In case the graph has multiple connected components (still possible as our previous connectivity insertions worked only pairwise over entities), top-\ud835\udc58GSTs are computed for each component and the union graph is used for this fact completion step. Example. We show a simplified example in Fig. 1, where the node Barack Obama matches the question keyword \u201cObama\u201d, child matches \u201cchildren\u201d, educated at matches \u201cstudy\u201d, and President of the United States matches \u201cpresident\u201d. The educated at nodes connected to Malia and Sasha do not feature here as they are not contained in the facts of Barack Obama, and do not yet feature in our answer graph. We consider exact matches, although not just in node labels but also in the set of aliases present in the KG that list common synonyms of entities, predicates and types. 
This helps us consider relaxed matches without relying on models like word2vec [48] or GloVe [51], that need inconvenient thresholding on similarity values as a noisy proxy for synonyms. The GST is shown using dark orange nodes with the associated question keyword matches underlined (denoting the terminal nodes). In experiments, we only consider as terminals NERD matches for entities, and keyword matches with aliases for other KG items. The GST naturally includes the internal nodes and edges necessary to connect the terminals. Note that the graph is considered undirected (equivalently, bidirectional) for the purpose of GST computation. 3.3 Augmenting subgraphs with temporal facts The final step towards the desired answer graph is to enhance it with temporal facts. Here, we add question-relevant temporal facts of entities in the completed GST. This pulls in temporal information necessary for answering questions that need evidence more than one hop away from the question entities (blue nodes in Fig. 1): (+ noise like Malia\u2019s date of birth). The rationale behind this step is to capture facts necessary for faithfully answering the question, where faithful refers to arriving at the answer not by chance but after satisfying all necessary constraints in the question. For example, the question which oscar did leonardo dicaprio win in 2016? can be answered without temporal reasoning, as he only won one Oscar. We wish to avoid such cases in faithful answering. To this end, we first retrieve from the KG all temporal facts of each entity in the completed GST. We then use an analogously fine-tuned BERT model for question-relevance of temporal facts. The model predicts, for each temporal fact, its likelihood of containing the answer. It is trained using temporal facts of question entities that contain the answer as positive examples, while negative examples are chosen at random from these temporal facts. To trap multi-hop temporal questions in our net, we explore 2-hop facts of question entities for ground truth answers. A larger neighborhood was not used during the first fine-tuning as the total number of facts in two hops of question entities is rather large, but the count of 2-hop temporal facts is a much more tractable number. Moreover, this is in line with our focus on complex temporal questions. Let the likelihood score for a temporal fact \ud835\udc61\ud835\udc53of an entity in the completed GST be \ud835\udc35\ud835\udc38\ud835\udc45\ud835\udc47(\ud835\udc61\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59). As before, we take the top scoring {\ud835\udc61\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59}, add them to the answer graph, that is then passed on to Stage 2. 4 PREDICTING ANSWERS WITH R-GCN R-GCN basics. The answer prediction method of Exaqt is inspired by the Relational Graph Convolution Network model [59], an extension of GCNs [29] tailored for handling large-scale relational data such as knowledge graphs. Typically, a GCN convolves the features (equivalently, representations or embedding vectors) of nodes belonging to a local neighborhood and propagates them to their nearest neighbors. The learned entity representations are used in node classification. Here, this classification decision is whether a node is an answer to the input question or not. 
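As a minimal, self-contained illustration of this node-classification idea (this is not Exaqt's actual model, which builds on GRAFT-Net and adds the temporal signals described next), a plain graph-convolution stack with a per-node sigmoid head could be sketched as follows; all class and variable names are ours.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: average neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # adj: dense (N, N) adjacency matrix with self-loops; row-normalize it.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        agg = (adj @ node_feats) / deg          # mean over neighbors
        return torch.relu(self.linear(agg))

class AnswerNodeClassifier(nn.Module):
    """Stacked GCN layers + sigmoid head: is this node an answer to the question?"""
    def __init__(self, in_dim, hidden_dim, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(SimpleGCNLayer(a, b) for a, b in zip(dims, dims[1:]))
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, node_feats, adj):
        h = node_feats
        for layer in self.layers:
            h = layer(h, adj)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # per-node answer probability

# Toy usage: 5 nodes with 16-dim features and a chain-shaped graph.
x = torch.randn(5, 16)
a = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
probs = AnswerNodeClassifier(16, 32)(x, a)
```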
\fComplex Temporal Question Answering on Knowledge Graphs CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Figure 3: Architecture of the R-GCN model in Exaqt, that includes several signals of temporal information. In this work, we use the widely popular GRAFT-Net model [66] that adapted R-GCNs to deal with heterogeneous QA over KGs and text [15, 50]. In order to apply such a mechanism for answer prediction in our setup, we convert our answer graph from the previous step into a directed relational graph and build upon the \ud835\udc3e\ud835\udc3aonly setting of GRAFT-Net. In a relational graph, entities, literals, and types become nodes, while predicates (relations) become edge labels. Specifically, we use the KG RDF dump that contains normal SPO triples for binary facts by employing reification [35]. Reified triples can then be straightforwardly represented as a directed relational graph [66]. Exaqt introduces four major extensions over the R-GCN in GRAFT-Net to deal with the task of temporal QA: \u2022 we embed temporal facts to enrich representations of entity nodes, creating time-aware entity embeddings (TEE); \u2022 we encode temporal question categories (TC) and temporal signals (TS) to enrich question representations; \u2022 we employ time encoding (TE) to obtain the vector representations for timestamps; \u2022 we propose attention over temporal relations (ATR) to distinguish the same relation but with different timestamps as objects. In the following, we describe how we encode and update the node representations and perform answer prediction in our extended R-GCN architecture for handling temporal questions. Our neural architecture is shown in Fig. 3, while Table 2 summarizes notation for the salient concepts used in this phase. 4.1 Question representation 4.1.1 Initialization. To encode a temporal question, we first determine its temporal category and extract temporal signals (Sec. 2). Temporal category encoding (TCE). We adopt a noisy yet effective strategy for labeling categories for temporal questions, and leave more sophisticated (multi-label) classification as future work. We use a four-bit multi-hot (recall that a question can belong to multiple categories) vector where each bit indicates whether the question falls into that category. Our tagger works as follows: \u2022 A question is tagged with the \u201cEXPLICIT\u201d category if the annotators SUTime [21] or HeidelTime [62] detect an explicit temporal expression inside it; \u2022 A question is tagged with the \u201cIMPLICIT\u201d category if it contains any of the temporal signal words (we used the dictionary compiled by [60]), and satisfies certain part-of-speech patterns; \u2022 A question is of type \u201cTEMPORAL ANSWER\u201d if it starts with phrases like \u201cwhen ...\u201d, \u201cin which year ...\u201d, and \u201con what date ...\u201d; \u2022 A question is tagged with the \u201cORDINAL\u201d category if it contains an ordinal tag as labeled by the Stanford CoreNLP system [9], along with certain keywords and part-of-speech patterns. Temporal signal encoding (TSE). There are 13 temporal relations defined in Allen\u2019s interval algebra for temporal reasoning [6], namely: \u201cequals\u201d, \u201cbefore\u201d, \u201cmeets\u201d, \u201coverlaps\u201d, \u201cduring\u201d, \u201cstarts\u201d, and \u201cfinishes\u201d, with respective inverses for all of them except \u201cequals\u201d. 
We simplify these relations and adapt the strategy in [37] into 7 broad classes of temporal signals: \u2022 \u201cbefore\u201d and \u201cmeets\u201d relations are treated as \u201cBEFORE\u201d signals; \u2022 \u201cbefore-inverse\u201d and \u201cmeet-inverse\u201d relations are collapsed into \u201cAFTER\u201d signals; \u2022 \u201cstarts\u201d and \u201cfinishes\u201d relations are respectively mapped to \u201cSTART\u201d and \u201cFINISH\u201d signals; \u2022 words with ordinal tags and \u201clast\u201d are mapped to \u201cORDINAL\u201d; \u2022 all other relations are treated as \u201cOVERLAP\u201d signals; \u2022 absence of any signal word triggers the \u201cNO SIGNAL\u201d case. We map signal words to temporal signals in questions using a dictionary. We then encode these signals using a 7-bit (a question can contain multiple signals) vector, where each bit indicates the presence or absence of a particular temporal signal. Along with these temporal categories and temporal signals, we use a Long Short-Term Memory Network (LSTM) to model the \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. words in the question as a sequence (see block A in Fig. 3). Overall, we represent a question \ud835\udc5ewith |\ud835\udc5e| words as: \ud835\udc890 \ud835\udc92= \ud835\udc39\ud835\udc39\ud835\udc41(\ud835\udc7b\ud835\udc6a\ud835\udc6c(\ud835\udc92) \u2295\ud835\udc7b\ud835\udc7a\ud835\udc6c(\ud835\udc92) \u2295\ud835\udc3f\ud835\udc46\ud835\udc47\ud835\udc40(\ud835\udc981, ...,\ud835\udc98|\ud835\udc92|)) (1) Here \ud835\udc7b\ud835\udc6a\ud835\udc6c(\ud835\udc92) and \ud835\udc7b\ud835\udc7a\ud835\udc6c(\ud835\udc92) are multi-hot vectors encoding the temporal categories and temporal signals present in \ud835\udc5e, and\ud835\udc98\ud835\udc8arepresent the pre-trained word embeddings (from Wikipedia2Vec [78]) of the \ud835\udc56\ud835\udc61\u210eword in \ud835\udc5e. We concatenate (\u2295) the \ud835\udc7b\ud835\udc6a\ud835\udc6c(\ud835\udc92) and \ud835\udc7b\ud835\udc7a\ud835\udc6c(\ud835\udc92) vectors with the output vector from the final state of the LSTM. Finally, we pass this concatenated vector through a Feed Forward Network (FFN) and obtain the initial embedding of \ud835\udc5e, denoted as \ud835\udc890 \ud835\udc92. 4.1.2 Update. In subsequent layers, the embedding of the question gets updated with the embeddings of the entities belonging to it (i.e. the question entities obtained from NERD) as follows: \ud835\udc89\ud835\udc8d \ud835\udc92= \ud835\udc39\ud835\udc39\ud835\udc41( \u2211\ufe01 \ud835\udc52\u2208\ud835\udc41\ud835\udc38\ud835\udc45\ud835\udc37(\ud835\udc5e) \ud835\udc89\ud835\udc8d\u22121 \ud835\udc86 ) (2) where \ud835\udc41\ud835\udc38\ud835\udc45\ud835\udc37(\ud835\udc5e) contains the entities for question \ud835\udc5eand \ud835\udc89\ud835\udc8d\u22121 \ud835\udc86 denotes the embedding of an entity \ud835\udc52at layer \ud835\udc59\u22121. 4.2 Entity representation 4.2.1 Initialization. For initializing each entity \ud835\udc52in the relational graph, we use fixed-size pre-trained embeddings \ud835\udc99\ud835\udc86, also from Wikipedia2Vec [78]. Along with conventional skip-gram and context models, Wikipedia2Vec utilizes the Wikipedia link graph that learns entity embeddings by predicting neighboring entities in the Wikipedia graph, producing more reliable entity embeddings: \ud835\udc890 \ud835\udc86= \ud835\udc99\ud835\udc86 (3) 4.2.2 Update. 
Prior to understanding the update rule for the entities in subsequent layers, we need to introduce the following concepts: (i) Time encoding (TE); (ii) Time-aware entity embeddings (TEE); and (iii) Attention over temporal relations (ATR). Time encoding (TE). Time as an ordering sequence has an inherent similarity to positions of words in text: we thus employ a sinusoidal position encoding method [74, 80] to represent a timestamp \ud835\udc61\ud835\udc60. Here, the \ud835\udc58\ud835\udc61\u210eposition (day, month, etc.) in \ud835\udc61\ud835\udc60will be encoded as: \ud835\udc47\ud835\udc38(\ud835\udc58, \ud835\udc57) = ( sin(\ud835\udc58/10000 2\ud835\udc56 \ud835\udc51), if \ud835\udc57= 2\ud835\udc56 cos(\ud835\udc58/10000 2\ud835\udc56 \ud835\udc51), if \ud835\udc57= 2\ud835\udc56+ 1 (4) where\ud835\udc51is the dimension of the time encoding and \ud835\udc57is the (even/odd) position in the \ud835\udc51-dimensional vector. Further, we represent \ud835\udc7b\ud835\udc6c(\ud835\udc95\ud835\udc94), i.e. the time encoding of \ud835\udc61\ud835\udc60, as the summation of the encodings of each of its corresponding positions. This time encoding method provides an unique encoding to each timestamp and ensures sequential ordering among the timestamps [80], that is vital for reasoning signals like before and after in temporal questions. Time-aware entity embedding (TEE). An entity \ud835\udc52present in the relational graph is associated with a number of temporal facts \ud835\udc61\ud835\udc53\ud835\udc52 1 ,\ud835\udc61\ud835\udc53\ud835\udc52 2 , ...\ud835\udc61\ud835\udc53\ud835\udc52 \ud835\udc5b(Sec. 2) in our answer graph. A temporal fact \ud835\udc61\ud835\udc53\ud835\udc52is said to be associated with an entity \ud835\udc52if \ud835\udc52is present in any position of the fact (subject, object or qualifier object). We encode each \ud835\udc61\ud835\udc53\ud835\udc52 as the concatenation of its entity embeddings, relation embeddings (averaged) and time encodings of the timestamps (as shown in block B of Fig. 3). Further, we arrange each fact in {\ud835\udc61\ud835\udc53\ud835\udc52} in a chronological order and pass them through an LSTM network. Finally, the output from the final state of the LSTM can be used as the time-aware entity representation of \ud835\udc52, TEE(\ud835\udc52), that is vital for reasoning through the R-GCN model: \ud835\udc890 \ud835\udc7b\ud835\udc6c\ud835\udc6c(\ud835\udc86) = \ud835\udc3f\ud835\udc46\ud835\udc47\ud835\udc40(\ud835\udc890 \ud835\udc95\ud835\udc87\ud835\udc86 1 , \ud835\udc890 \ud835\udc95\ud835\udc87\ud835\udc86 2 , ..., \ud835\udc890 \ud835\udc95\ud835\udc87\ud835\udc86 \ud835\udc8f) (5) In subsequent layers, the embedding of \ud835\udc47\ud835\udc38\ud835\udc38(\ud835\udc52) will be updated as the embeddings of its constituent entities get updated. Attention over temporal relations (ATR). In temporal QA, we need to distinguish entities associated with the same relation but having different timestamps (facts with same temporal predicate but different objects, like several educated at facts for a person). We thus introduce the concept of temporal attention here, adapting the more general notion of attention over relations in GRAFT-Net [66]. 
While computing temporal attention over a relation $r$ connected with entity $e$, we concatenate the corresponding relation embedding with the time encoding of its timestamp object and compute its similarity with the question embedding at that stage:
$$ATR(e, r) = \mathrm{softmax}\big((x_r \oplus TE(ts_r))^T h^{(l-1)}_q\big) \quad (6)$$
where the softmax normalization is over all outgoing edges from $e$, $x_r$ is the pre-trained relation vector embedding for relation $r$ (Wikipedia2Vec embeddings averaged over each word of the KG predicate), and $TE(ts_r)$ is the time encoding of the timestamp associated with the relation $r$. For relations not connected with any timestamp, we use a random vector for $TE(ts_r)$.
Putting it together. We are now in a position to specify the update rule for entity nodes, which involves a single-layer FFN over the concatenation of the following four states (see block C of Fig. 3):
$$h^l_e = \mathrm{FFN}\left(\left[\begin{array}{c} h^{l-1}_e \\ h^{l-1}_q \\ h^{l-1}_{TEE(e)} \\ \sum_{r}\sum_{e' \in nbd_r(e)} \big(ATR(e', r) \cdot \psi_r(h^{l-1}_{e'})\big) \end{array}\right]\right) \quad (7)$$
Here, (i) the first term corresponds to the entity's representation from the previous layer; (ii) the second term denotes the question's representation from the previous layer; (iii) the third term denotes the previous layer's representation of the time-aware entity representation $TEE(e)$; and (iv) the fourth term aggregates the states from the entity $e$'s neighbors. In the fourth term, the relation-specific neighborhood $nbd_r$ corresponds to the set of entities connected to $e$ via relation $r$, $ATR(e', r)$ is the attention over temporal relations, and $\psi_r(h^{l-1}_{e'})$ is the relation-specific transformation depending on the type and direction of an edge:
$$\psi_r(h^{l-1}_{e'}) = PPR^{l-1}_{e'} \cdot \mathrm{FFN}(x_r, h^{l-1}_{e'}) \quad (8)$$
Here $PPR^{l-1}_{e'}$ is a Personalized PageRank [34] score obtained in the same way as in GRAFT-Net [66] to control the propagation of embeddings along paths starting from the question entities.
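For completeness, the ATR scores of Eq. (6) above can be sketched roughly as follows. This is an illustration only, not the authors' code; the embedding sizes are arbitrary, and relations without a timestamp receive a random fallback vector as described above.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_over_temporal_relations(x_r_list, te_list, h_q):
    """ATR(e, r) of Eq. (6): one score per outgoing relation of an entity e.
    x_r_list: relation embeddings; te_list: time encodings of their timestamp
    objects (a random vector if the relation has no timestamp); h_q: question
    embedding of the previous layer. All vectors are illustrative."""
    scores = []
    for x_r, te in zip(x_r_list, te_list):
        scores.append(np.concatenate([x_r, te]) @ h_q)
    return softmax(np.array(scores))

# Toy example: an entity with 3 outgoing relations and 100-d embeddings, so the
# concatenated [x_r ; TE(ts_r)] vector and h_q are both 200-d here.
rng = np.random.default_rng(0)
x_rs = [rng.normal(size=100) for _ in range(3)]
tes = [rng.normal(size=100) for _ in range(3)]
h_q = rng.normal(size=200)
atr = attention_over_temporal_relations(x_rs, tes, h_q)  # sums to 1
```

The resulting weights play the role of $ATR(e', r)$ in the fourth term of Eq. (7) above, i.e., they rescale the neighbor messages before aggregation.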
4.3 Answer prediction
The final entity representations ($h^l_e$) obtained at layer $l$ are then used in a binary classification setup to select the answers. For each entity $e$, we define its probability to be an answer to $q$:
$$Pr(e \in \{a\}_q \mid RG_q, q) = \sigma(w^T h^l_e + b) \quad (9)$$
where $\{a\}_q$ is the set of ground truth answers for question $q$, $RG_q$ is the relational graph built for answering $q$ from its answer graph, and $\sigma$ is the sigmoid activation function. $w$ and $b$ are respectively the weight and bias vectors of the classifier, which is trained using binary cross-entropy loss over these probabilities.
Table 2: Notation for concepts in the R-GCN of Exaqt.
Notation | Concept
$h^l_e$ | Representation of entity $e$ at layer $l$
$h^l_q$ | Representation of question $q$ at layer $l$
$TCE(q)$ | Temporal category encoding for question $q$
$TSE(q)$ | Temporal signal encoding for question $q$
$NERD(q)$ | Question entities obtained from NERD
$x_e$, $x_r$ | Pre-trained entity ($e$) and relation ($r$) embeddings
$TE(ts)$ | Time encoding for timestamp $ts$
$tf^e_1, tf^e_2, \ldots$ | Chronologically ordered temporal facts for $e$
$h^l_{tf^e_i}$ | Representation of the $i$-th temporal fact for $e$ at $l$
$h^l_{TEE(e)}$ | Time-aware entity representation of $e$ at $l$
$ATR(e, r)$ | Attention over temporal relation $r$ connected with $e$
$\psi_r(h^l_e)$ | Relation $r$-specific transformation of $h^l_e$
$PPR^l_e$ | Personalized PageRank score for entity $e$ at $l$
Table 3: Distribution of question types by source in TimeQuestions. The sum 16859 exceeds the number of questions 16181 as some questions belong to multiple categories.
Source | Explicit | Implicit | Temp. Ans. | Ordinal | Total
Free917 [19] | 44 | 4 | 76 | 11 | 135
WebQ [13] | 315 | 77 | 283 | 113 | 788
ComplexQ [11] | 217 | 131 | 43 | 33 | 424
GraphQ [63] | 264 | 30 | 13 | 42 | 349
ComplexWebQ [68] | 1356 | 224 | 595 | 315 | 2490
ComQA [2] | 669 | 355 | 1180 | 1587 | 3791
LC-QuAD [69] | 122 | 19 | 0 | 26 | 167
LC-QuAD 2.0 [28] | 3534 | 636 | 3726 | 819 | 8715
Total | 6521 | 1476 | 5916 | 2946 | 16859
5 EXPERIMENTAL SETUP
5.1 Benchmark
Previous collections on temporal questions, TempQuestions [37] and Event-QA [23], contain only about a thousand questions each, and are not suitable for building neural models.
We leverage recent community efforts in QA benchmarking, and we search through eight KG-QA datasets for time-related questions. The result is a new compilation, TimeQuestions, with 16, 181 questions, that we release with this paper (details in Table 3). Since some of these previous benchmarks were over Freebase or DBpedia, we used Wikipedia links in these KGs to map them to Wikidata, the largest and most actively growing public KG today, and the one that we use in this work. Questions in each benchmark are tagged for temporal expressions using SUTime [21] and HeidelTime [62], and for signal words using a dictionary compiled by [60]. Whenever a question is found to have at least one temporal expression or signal word, it becomes a candidate temporal question. This candidate set (ca. 20\ud835\udc58 questions) was filtered for false positives by the authors. For each of these questions, the authors manually verified the correctness of the answer, and if incorrect, replaced it with the right one. Moreover, each question is manually tagged with its temporal question category (explicit, implicit, temporal answer, or ordinal) that may help in building automated classifiers for temporal questions, a sub-problem interesting in its own right. We split our benchmark in a 60 : 20 : 20 ratio for creating the training (9708 questions), development (3236) and test (3237) sets. 5.2 Baselines We use the following recent methods for complex KG-QA as baselines to compare Exaqt with. All baselines were trained and finetuned using the train and dev sets of TimeQuestions, respectively. They are the most natural choice of baselines as Exaqt is inspired by components in these methods for building its pipeline: while Uniqorn [52] showed the effectiveness of GSTs in complex KG-QA, GRAFT-Net [66] and PullNet [65] showed the value of R-GCNs for answer prediction. These techniques are designed for dealing with heterogeneous answering sources (KGs and text), and we use their KG-only variants: \u2022 Uniqorn [52]: This is a method for answering complex questions using Group Steiner Trees, and is an extension of [47]; \u2022 GRAFT-Net [66]: This was the first technique to adapt R-GCNs for QA over heterogeneous sources; \u2022 PullNet [65]: This algorithm extended the GRAFT-Net classifier to the scenario of multi-hop questions. We used a reimplementation as the code is not public. 5.3 Metrics All systems return a ranked list of answers, consisting of KG entities or literals associated with unique identifiers. We thus use the following metrics for evaluating Exaqt and the baselines, averaged over questions in the benchmark: \u2022 P@1: Precision at the top rank is one if the highest ranked answer is correct, and zero otherwise. \u2022 MRR: This is the reciprocal of the first rank where we have a correct answer. If the correct answer does not feature in the ranked list, MRR is zero. \u2022 Hit@5: This is set to one if a correct answer appears in the first five positions, and zero otherwise. 5.4 Initialization Configuration. We use the Wikidata KG dump (https://dumps. wikimedia.org/wikidatawiki/entities/) in NTriples format from April 2020, comprising 12\ud835\udc35triples and taking 2 TB when uncompressed on disk. We subsequently removed language tags, external IDs, schema labels and URLs from the dump, leaving us with about 2\ud835\udc35 triples with 340 GB disk space consumption. For BERT fine-tuning, positive and negative instances were created from the TimeQuestions train and dev sets in the ratio 1 : 5. 
These instances were combined and split in the ratio 80 : 20 (test set not needed), where the first split was used for training and the second for hyperparameter selection, respectively, for BERT finetuning. We use the BERT-base-cased model for sequence pair classification (https://bit.ly/3fRVqAG). Best parameters for fine-tuning were: accumulation = 512, number of epochs = 2, dropout = 0.3, mini-batch size = 50 and weight decay = 0.001. We use AdamW as the optimizer with a learning rate of 3\u00d710\u22125. During answer graph construction, we use top-25 question-relevant facts (|{\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59}| = 25), top-25 GSTs (\ud835\udc58= 25), and top-25 temporal facts (|{\ud835\udc61\ud835\udc53\ud835\udc5e\ud835\udc5f\ud835\udc52\ud835\udc59}| = 25). \fCIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Jia et al. R-GCN model training. 100-dimensional embeddings for question words, relation (KG predicate) words and entities, are obtained from Wikipedia2Vec [78], and learned from the Wikipedia dump of March 2021. Dimensions of TCE, TSE, TE and TEE (Sec. 4) were all set to 100 as well. The last hidden states of LSTMs were used as encodings wherever applicable. This was trained on an Nvidia Quadro RTX 8000 GPU server. Hyperparameter values were tuned on the TimeQuestions dev set: number of GCN layers = 3, number of epochs = 100, mini-batch size = 25, gradient clip = 1, learning rate = 0.001, LSTM dropout = 0.3, linear dropout = 0.2, and fact dropout = 0.1. The ReLU activation function was used. 6 KEY FINDINGS Answering performance of Exaqt and baselines are in Table 4 (best value in column in bold). Main observations are as follows. Exaqt outperforms baselines. The main observation from Table 4 is the across-the-board superiority of Exaqt over the baselines. Statistically significant results for each category, baseline and metric, indicate that general-purpose complex QA systems are not able to deal with the challenging requirements of temporal QA, and that temporally augmented methods are needed. Outperforming each baseline offers individual insights, as discussed below. GSTs are not enough. GSTs are a powerful mechanism for complex QA that identify backbone skeletons in KG subsets and prune irrelevant information from noisy graphs. While this motivated the use of GSTs as a building block in Exaqt, outperforming the Uniqorn [52] method shows that non-terminals (internal nodes) in GSTs, by themselves, are not enough to answer temporal questions. Augmenting R-GCNs with time information works well. The fact that R-GCNs are a powerful model is clear from the fact that GRAFT-Net, without any explicit support for temporal QA, emerges as the strongest baseline in this challenging setup. A core contribution of our work is to extend R-GCNs with different kinds of temporal evidence. Improving over GRAFT-Net shows that our multi-pronged mechanism (with TEE, ATR, TCE, TSE, and TE) succeeds in advancing the scope of R-GCN models to questions with temporal intent. Ablation studies (Sec. 7) show that each of these \u201cprongs\u201d play active roles in the overall performance of Exaqt. Not every question is multi-hop. PullNet is a state-of-the-art system for answering multi-hop chain-join questions (where was Obama\u2019s father born?). It may appear strange that PullNet, offered as an improvement over GRAFT-Net, falls short in our setup. 
Inspecting examples makes the reason for this clear: PullNet has an assumption that all answers are located on a 2-hop circumference of the question entities (ideally, \ud835\udc47-hop, where \ud835\udc47is a variable that needs to be fixed for a benchmark: 1 is an oversimplification, while 3 is intractable for a large KG, and hence our choice of 2 for TimeQuestions). When this is not the case (for instance, the slightly tricky situation when an answer is in a qualifier of a 2-hop fact: when did obama\u2019s children start studying at sidwell friends school? or the question is simple: when was obama born?), PullNet cannot make use of this training point as it relies on shortest KG paths between question and answer entities. This uniform\ud835\udc47-hop assumption is not always practical, and does not generalize to situations beyond what PullNet was trained and evaluated on. Temporal categories vary by difficulty. We use manual groundtruth labels of question categories from our benchmark to drill down on class-wise results (the noisy tagger from Sec. 4.1.1 has \u224390% accuracy). Questions with temporal answers are clearly the easiest. Note that this includes questions starting with \u201cwhen\u201d, that many models tackle with dedicated lexical answer types [3, 12], analogous to location-type answers for \u201cwhere ...?\u201d questions. Questions with explicit temporal expressions are the next rung of the ladder: while they do require reasoning, explicit years often make this matching easier (who became president of south africa in 1989?). Questions with implicit expressions are more challenging: we believe that this is where the power of R-GCNs truly shine, as GST-based Uniqorn clearly falls short. Finally, questions with temporal ordinals seem to be beyond what implicit reasoning in graph neural networks can handle: with P@1 < 0.5, they pose the biggest research challenge. We believe that this calls for revisiting symbolic reasoning, ideally plugged into neural GCN architectures. 7 IN-DEPTH ANALYSIS NERD variants. We experimented with TagMe [30], AIDA [36], and ELQ [44], going by the most popular to the most recent choices. Effects of various choices are in Table 5. Our best configuration is TagMe + ELQ. TagMe (used without threshold on pruning entities) and ELQ (run with default parameters) nicely complement each other, since one is recall-oriented (TagMe) and the other precisionbiased (ELQ). Answer recall measures the fraction of questions for which at least one gold answer was present in the final answer graph (test set). AIDA + ELQ detects a similar number of entities per question, but is slightly worse w.r.t. answer recall. Understanding Stage 1. Traversing over the steps in the recalloriented graph construction phase of Exaqt, we try to understand where we gain (and lose) answers to temporal questions (Table 6, test set). First, we see that even two NERD systems cannot guarantee perfect answer recall (75.8%). The fall from Row 1 to 2 is expected, as one cannot compute graph algorithms efficiently over such large graphs as induced by all facts from Row 1. Adding shortest paths (Row 3), while making the answer graph more connected (before: 1.58 connected components per question, after: 1.16), also marginally helps in bringing correct answers into the graph. From Rows 4 and 5, we see that taking a union of top-\ud835\udc58(\ud835\udc58= 25) GSTs from each connected component proves worthwhile (increase from 0.613 to 0.640), and so does completing the GSTs (further rise to 0.671). 
Finally, adding temporal facts provides a critical boost, taking the answer recall at the end of Stage 1 to a respectable 72.4%. This translates to 2343 questions having answers in the graph passed on to the R-GCN (cf. 1989 answers are present in the PPR-based answer graph of GRAFT-Net), out of which 1830 are answered correctly at the end. The second column, that counts the average number of entities and literals in the answer graph (answer candidates) is highly insightful to get an idea of the graph size at each step, and its potential trade-off with respect to answer recall. Understanding Stage 2. We performed ablation studies to understand the relative influence of the individual temporal components in the precision-oriented Stage 2 of Exaqt: the R-GCN answer classifier. Table 7 shows P@1 results on the test set, where the full model achieves the best results overall and also for each category. The amount of drop from the full model (Row 1) indicates the degree of importance of a particular component. The most vital enhancement is the attention over temporal relations (ATR). All \fComplex Temporal Question Answering on Knowledge Graphs CIKM \u201921, November 1\u20135, 2021, Virtual Event, QLD, Australia Category Overall Explicit Implicit Temp. Ans. Ordinal Method P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 P@1 MRR Hit@5 Uniqorn [52] 0.331 0.409 0.538 0.318 0.406 0.536 0.316 0.415 0.545 0.392 0.472 0.597 0.202 0.236 0.356 GRAFT-Net [66] 0.452 0.485 0.554 0.445 0.478 0.531 0.428 0.465 0.525 0.515 0.568 0.660 0.322 0.313 0.371 PullNet [65] 0.105 0.136 0.186 0.022 0.043 0.075 0.081 0.123 0.192 0.234 0.277 0.349 0.029 0.049 0.083 Exaqt 0.565* 0.599* 0.664* 0.568* 0.594* 0.636* 0.508* 0.567* 0.633* 0.623* 0.672* 0.756* 0.420* 0.432* 0.508* Statistical significance of Exaqt over the strongest baseline (GRAFT-Net), under the 2-tailed paired \ud835\udc61-test, is marked with an asterisk (*) (\ud835\udc5d< 0.05). Table 4: Performance comparison of Exaqt with three complex QA baselines over the TimeQuestions test set. NERD Recall #Question entities TagMe 0.682 2.9 ELQ 0.716 1.7 AIDA 0.541 2.8 TagMe + ELQ 0.758 3.5 AIDA + ELQ 0.729 3.5 TagMe + AIDA 0.701 4.3 Table 5: Comparing various NERD methods on the test set. Step in Exaqt pipeline Recall #Candidates All KG facts of NERD entities 0.758 2491 Facts selected by BERT 0.719 48 Shortest paths injected for connectivity 0.720 49 GSTs on largest component 0.613 13 Union of GSTs from all components 0.640 14 Completed GSTs from all components 0.671 21 Temporal facts added by BERT 0.724 67 Table 6: Understanding the recall-oriented Stage 1 of Exaqt. Category Overall Explicit Implicit Temp. Ans. Ordinal Exaqt (Full) 0.565 0.568 0.508 0.623 0.420 Exaqt TCE 0.545 0.556 0.481 0.590 0.406 Exaqt TSE 0.543 0.545 0.465 0.598 0.411 Exaqt TEE 0.556 0.564 0.475 0.614 0.413 Exaqt TE 0.553 0.556 0.495 0.613 0.398 Exaqt ATR 0.534 0.527 0.465 0.594 0.411 Table 7: Inspecting the precision-oriented Stage 2 of Exaqt. what did abraham lincoln do before he was president? who was the king of troy when the trojan war was going on? what films are nominated for the oscar for best picture in 2009? where did harriet tubman live after the civil war? when did owner bill neukom\u2019s sports team last win the world series? Table 8: Anecdotal examples that Exaqt answered correctly. other factors offer varying degrees of assistance. 
An interesting observation is that TCE, while playing a moderate role in most categories, is of the highest importance for questions with temporal answers: even knowing that a question belongs to this category helps the model. Anecdotal examples. Table 8 shows samples of test questions that are successfully processed by Exaqt but none of the baselines. 8 RELATED WORK Temporal QA in IR. Supporting temporal intent in query and document processing has been a long-standing research topic in IR [8, 14, 20, 40, 49, 60]. This includes work inside the specific use case of QA over text [5, 33, 46, 56]. Most of these efforts require significant preprocessing and markup of documents. There is also onus on questions to be formulated in specific ways so as to conform to carefully crafted parsers. These directions often fall short of realistic settings on the Web, where documents and questions are both formulated ad hoc. Moreover, such corpus markup unfortunately does not play a role in structured knowledge graphs. Notable effort in temporal QA includes work of [56], which decompose complex questions into simpler components, and recompose answer fragments into responses that satisfy the original intent. Such approaches have bottlenecks from parsing issues. Exaqt makes no assumptions on how questions are formulated. Temporal QA over KGs. Questions with temporal conditions have not received much attention in the KG-QA literature. The few works that specifically address temporal questions include [23, 38, 76]. Among these, [38] relies on hand-crafted rules with limited generalization, whereas Exaqt is automatically trained with distant supervision and covers a much wider territory of questions. [23] introduces the task of event-centric QA, which overlaps with our notion of temporal questions, and introduces a benchmark collection. [76] presents a key-value memory network to include KG information about time into a QA pipeline. The method is geared for simple questions, as present in the WebQuestions benchmark. Temporal KGs. Of late, understanding large KGs as a dynamic body of knowledge has gained attention, giving rise to the notion of temporal knowledge graphs or temporal knowledge bases [25, 70]. Here, each edge (corresponding to a fact) is associated with a temporal scope or validity [43], with current efforts mostly focusing on the topic of temporal KG completion [31, 32, 42]. A very recent approach has explored QA over such temporal KGs, along with the creation of an associated benchmark [57]. 9" + }, + { + "url": "http://arxiv.org/abs/1908.03650v4", + "title": "TEQUILA: Temporal Question Answering over Knowledge Bases", + "abstract": "Question answering over knowledge bases (KB-QA) poses challenges in handling\ncomplex questions that need to be decomposed into sub-questions. An important\ncase, addressed here, is that of temporal questions, where cues for temporal\nrelations need to be discovered and handled. We present TEQUILA, an enabler\nmethod for temporal QA that can run on top of any KB-QA engine. TEQUILA has\nfour stages. It detects if a question has temporal intent. It decomposes and\nrewrites the question into non-temporal sub-questions and temporal constraints.\nAnswers to sub-questions are then retrieved from the underlying KB-QA engine.\nFinally, TEQUILA uses constraint reasoning on temporal intervals to compute\nfinal answers to the full question. 
Comparisons against state-of-the-art\nbaselines show the viability of our method.", + "authors": "Zhen Jia, Abdalghani Abujabal, Rishiraj Saha Roy, Jannik Stroetgen, Gerhard Weikum", + "published": "2019-08-09", + "updated": "2021-01-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "main_content": "INTRODUCTION Motivation and Problem. Knowledge-based question answering (KB-QA) aims to answer questions over large knowledge bases (e.g., DBpedia, Wikidata, YAGO, etc.) or other structured data. KBQA systems take as input questions such as: Q1: \u201cWhich teams did Neymar play for?\u201d and translate them into structured queries, in a formal language like SPARQL or SQL, and execute the queries to retrieve answers from the KB. In doing so, KB-QA methods need to address the vocabulary mismatch between phrases in the input question and entities, types, and predicates in the KB: mapping \u2018Neymar\u2019 to the uniquely identi\ufb01ed entity, \u2018teams\u2019 to the KB type footballClub and \u2018played for\u2019 to the KB predicate memberOf. State-of-the-art KB-QA (see surveys [9, 17]) can handle simple questions like the above example very well, but struggle with complex questions that involve multiple conditions on di\ufb00erent entities and need to join the results from corresponding sub-questions. For example, the question: Q2: \u201cAfter whom did Neymar\u2019s sister choose her last name?\u201d would require a three-way join that connects Neymar, his sister Rafaella Beckran, and David Beckham. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercial advantage and that copies bear this notice and the full citation on the \ufb01rst page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior speci\ufb01c permission and/or a fee. Request permissions from permissions@acm.org. CIKM\u201918, October 2018, Turin, Italy \u00a9 2021 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn An important case of complex questions are temporal information needs. Search often comes with explicit or implicit conditions about time [16]. Consider the two examples: Q3: \u201cWhich teams did Neymar play for before joining PSG?\u201d Q4: \u201cUnder which coaches did Neymar play in Barcelona?\u201d In Q3, no explicit date (e.g., August 2017) is mentioned, so a challenge is to detect its temporal nature. The phrase \u2018joining PSG\u2019 refers to an event (Neymar\u2019s transfer to that team). We could detect this, but have to properly disambiguate it to a normalized date. The temporal preposition \u2018before\u2019 is a strong cue as well, but words like \u2018before\u2019, \u2018after\u2019, etc. are also used in non-temporal contexts; Q2 is an example for this. Q4 does not seem to be time-dependent at all, when looking at its surface form. However, it is crucial for correct answers that only coaches are selected whose job periods at FC Barcelona overlap with that of Neymar. Here, detecting the temporal nature is a big challenge. A second challenge is how to decompose such questions and ensure that the execution contains an overlap test for the respective time periods. Approach and Contributions. 
The key idea of this paper is to judiciously decompose such temporal questions and rewrite the resulting sub-questions so that they can be separately evaluated by a standard KB-QA system. The answers for the full questions are then computed by combining and reasoning on the sub-question results. For example, Q3 should be decomposed and rewritten into Q3.1: \u201cWhich teams did Neymar play for?\u201d and Q3.2: \u201cWhen did Neymar join PSG?\u201d. For the results of Q3.1, we could then retrieve time scopes from the KB, and compare them with the date returned by Q3.2, using a BEFORE operator. Analogously, Q4 would require an OVERLAP comparison as a \ufb01nal step. With the exception of the work by [4], to which we experimentally compare our method, we are not aware of any KB-QA system for such composite questions. Our solution, called TEQUILA, is built on a rule-based framework that encompasses four stages of processing: (i) detecting temporal questions, (ii) decomposing questions and rewriting sub-questions, (iii) retrieving candidate answers for sub-questions, and (iv) temporal reasoning to combine and reconcile the results of the previous stage into \ufb01nal answers. For stage (iii), we leverage existing KB-QA systems (state-of-the-art systems QUINT [2] and AQQU [6] used in experiments), that are geared for answering simple questions. To the best of our knowledge, this is the \ufb01rst paper that presents a complete pipeline speci\ufb01c to temporal KB-QA. Novel contributions also include: (i) a method for decomposing complex questions, and (ii) the time-constraint-based reasoning for combining sub-question results into overall answers. All data and code are \fCIKM\u201918, October 2018, Turin, Italy Z. Jia et al. public at https://github.com/zhenjia2017/tequila, and a demo is available at https://tequila.mpi-inf.mpg.de/. 2 CONCEPTS In NLP, the markup language TimeML (www.timeml.org) is widely used for annotating temporal information in text documents. Our de\ufb01nition of temporalquestions is based on two of its concepts (tags for temporal expressions and temporal signals). Temporal expressions. TIMEX3 tags demarcate four types of temporal expressions. Dates and times refer to points in time of di\ufb00erent granularities (e.g., \u2018May 1, 2010\u2019 and \u20189 pm\u2019, respectively). They occur in fullyor under-speci\ufb01ed forms (e.g., \u2018May 1, 2010\u2019 vs. \u2018last year\u2019). Durations refer to intervals (e.g., \u2018two years\u2019), and sets to periodic events (e.g., \u2018every Monday\u2019). Going beyond TimeML, implicit expressions (e.g., \u2018the Champions League \ufb01nal\u2019) are used to capture events and their time scopes [14]. Expressions can be normalized into standard format (e.g., \u2018May 2\ud45b\ud451, 2016\u2019 into 2016-05-02). Temporal signals. SIGNAL tags mark textual elements that denote explicit temporal relations between two TimeML entities (i.e., events or temporal expressions), such as \u2018before\u2019 or \u2018during\u2019. We extend the TimeML de\ufb01nition to also include cues when an event is mentioned only implicitly, such as \u2018joining PSG\u2019. In addition, we consider ordinals like \u2018\ufb01rst\u2019, \u2018last\u2019, etc. These are frequent in questions when entities can be chronologically ordered, such as \u2018last\u2019 in \u201cNeymar\u2019s last club before joining PSG\u201d. Temporal questions. 
Based on these considerations, we can now define a temporal question as any question that contains a temporal expression or a temporal signal, or whose answer type is temporal.
Temporal relations. Allen [3] introduced 13 temporal relations between time intervals for temporal reasoning: EQUAL, BEFORE, MEETS, OVERLAPS, DURING, STARTS, FINISHES, and their inverses for all but EQUAL. However, for an input temporal question, it is not always straightforward to infer the proper relation. For example, in Q3 the relation should be BEFORE; but if we slightly vary Q3 to Q5: "Which team did Neymar play for before joining PSG?", the singular form 'team' suggests that we are interested in the MEETS relation, that is, only the last team before the transfer. Frequent trigger words suggesting such relations are, for instance, the signals before, prior to (for BEFORE or MEETS), after, following (for AFTER), and during, while, when, in (for OVERLAP).
3 METHOD
Given an input question, TEQUILA works in four stages: (i) detect if the question is temporal, (ii) decompose the question into simpler sub-questions with some form of rewriting, (iii) obtain candidate answers and dates for temporal constraints from a KB-QA system, and (iv) apply constraint-based reasoning on the candidates to produce final answers. Our method builds on ideas from the literature on question decomposition for general QA [2, 5, 19]. Standard NLP tasks like POS tagging, NER, and coreference resolution are performed on the input question before passing it on to TEQUILA.
Table 1: Decomposition and rewriting of questions. The constraint is the fragment after the SIGNAL word. wh* is the question word (e.g., who), and w_i are tokens in the question.
Expected input: wh* w_1 ... w_n SIGNAL w_{n+1} ... w_p?
Case 1: Constraint has both an entity and a relation
  Sub-question 1 pattern: wh* w_1 ... w_n?
  Sub-question 2 pattern: when w_{n+1} ... w_p?
  E.g.: "where did neymar play before he joined barcelona?"
  Sub-question 1: "where did neymar play?"
  Sub-question 2: "when neymar joined barcelona?"
Case 2: Constraint has no entity but a relation
  Sub-question 1 pattern: wh* w_1 ... w_n?
  Sub-question 2 pattern: when sq1-entity w_{n+1} ... w_p?
  E.g.: "where did neymar live before playing for clubs?"
  Sub-question 1: "where did neymar live?"
  Sub-question 2: "when neymar playing for clubs?"
Case 3: Constraint has no relation but an entity
  Sub-question 1 pattern: wh* w_1 ... w_n?
  Sub-question 2 pattern: when w_{n+1} ... w_p w_1 ... w_n?
  E.g.: "who was the brazil team captain before neymar?"
  Sub-question 1: "who was the brazil team captain?"
  Sub-question 2: "when neymar was the brazil team captain?"
Case 4: Constraint is an event name
  Sub-question 1 pattern: wh* w_1 ... w_n?
  Sub-question 2 pattern: when did w_{n+1} ... w_p happen?
  E.g.: "where did neymar play during south africa world cup?"
  Sub-question 1: "where did neymar play?"
  Sub-question 2: "when did south africa world cup happen?"
3.1 Detecting temporal questions
A question is identified as temporal if it contains any of the following: (a) explicit or implicit temporal expressions (dates, times, events), (b) temporal signals (i.e., cue words for temporal relations), (c) ordinal words (e.g., first), (d) an indication that the answer type is temporal (e.g., the question starts with 'When'). We use HeidelTime [22] to tag TIMEX3 expressions in questions. Named events are identified using a dictionary curated from Freebase. Specifically, if the type of an entity is 'time.event', its surface forms are added to the event dictionary. SIGNAL words and ordinal words are detected using a small dictionary as per suggestions from Setzer [21], and a list of temporal prepositions. To spot questions whose answers are temporal, we use a small set of patterns like when, what date, in what year, and which century.
3.2 Decomposing and rewriting questions
TEQUILA decomposes a composite temporal question into one or more non-temporal sub-questions (returning candidate answers), and one or more temporal sub-questions (returning temporal constraints). Results of sub-questions are combined by intersecting their answers. The constraints are applied to time scopes associated with the results of the non-temporal sub-questions. For brevity, the following explanation focuses on the case with one non-temporal sub-question and one temporal sub-question. We use a set of lexico-syntactic rules (Table 1) designed from first principles to decompose and rewrite a question into its components. Basic intuitions driving these rules are as follows:
• The signal word separates the non-temporal and temporal sub-questions, acting as a pivot for decomposition;
• Each sub-question needs to have an entity and a relation (generally represented using verbs) to enable the underlying KB-QA systems to handle sub-questions;
• If the second sub-question lacks the entity or the relation, it is borrowed from the first sub-question;
• KB-QA systems are robust to ungrammatical constructs, thus precluding the need for linguistically correct sub-questions.
3.3 Answering sub-questions
Sub-questions are passed on to the underlying KB-QA system, which translates them into SPARQL queries and executes them on the KB. This produces a result set for each sub-question. Results from the non-temporal sub-question(s) are entities of the same type (e.g., football teams). These are candidate answers for the full question. With multiple sub-questions, the candidate sets are intersected. The temporal sub-questions, on the other hand, return temporal constraints such as dates, which act as constraints to filter the non-temporal candidate set. Candidate answers need to be associated with time scopes, so that we can evaluate the temporal constraints.
Retrieving time scopes. To obtain time scopes, we introduce additional KB lookups; details depend on the specifics of the underlying KB. Freebase, for example, often associates SPO triples with time scopes by means of compound value types (CVTs); other KBs may use n-tuples (n > 3) to attach spatio-temporal attributes to facts.
For example, the Freebase predicate marriage is a CVT with attributes including marriage.spouse and marriage.date. When the predicate marriage.spouse is used to retrieve answers, the time scope is retrieved by looking up marriage.date in the KB. On the other hand, playing for a football club could be captured in a predicate like team.players without temporal information attached, and the job periods are represented as events in predicates like footballPlayer.team.joinedOnDate and footballPlayer.team.leftOnDate. In such cases, TEQUILA considers all kinds of temporal predicates for the candidate entity, and chooses one based on a similarity measure between the non-temporal predicate (team.players) and potentially relevant temporal predicates (footballPlayer.team.joinedOnDate, footballPlayer.award.date). The similarity measure is implemented by selecting tokens in predicate names (footballPlayer, team, etc.), contextualizing the tokens by computing word2vec embeddings for them, averaging the per-token vectors to get a resultant vector for each predicate [24], and comparing the cosine distance between two predicate vectors. The best-matching temporal predicate is chosen for use. When time periods are needed (e.g., for a temporal constraint using OVERLAP), a pair of begin/end predicates is selected (e.g., footballPlayer.team.joinedOnDate and leftOnDate).
Table 2: Temporal reasoning constraints.
Relation | Signal word(s) | Constraint
BEFORE | 'before', 'prior to' | end_ans <= begin_cons
AFTER | 'after' | begin_ans >= end_cons
OVERLAP | 'during', 'while', 'when' | begin_ans <= end_cons <= end_ans
OVERLAP | 'since', 'until', 'in' | begin_ans <= begin_cons <= end_ans
OVERLAP | 'at the same time as' | begin_cons <= begin_ans <= end_ans <= end_cons
3.4 Reasoning on temporal intervals
For temporal sub-questions, the results are time points, time intervals, or sets of dates (e.g., a set of consecutive years during which someone played for a football team). We cast all of these into intervals with start point begin_cons and end point end_cons. These form the temporal constraints against which we test the time scopes of the non-temporal candidate answers, also cast into intervals [begin_ans, end_ans]. The test itself depends on the temporal operator derived from the input question (e.g., BEFORE, OVERLAP, etc.), as listed in Table 2. For questions with ordinal constraints (e.g., last), we sort the (possibly open) intervals to select the appropriate answer.
4 EXPERIMENTS
4.1 Setup
We evaluate TEQUILA on the TempQuestions benchmark [13], which contains 1,271 temporal questions labeled as questions with explicit, implicit, and ordinal constraints, and those with temporal answers. Questions are paired with their answers over Freebase.
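Before turning to the baselines, the following is a rough end-to-end sketch combining the signal-word pivot decomposition of Table 1 (simplified to Case 1) with the interval tests of Table 2 (Sec. 3.4 above). It is an illustration, not the authors' implementation: the signal list is an excerpt, the KB-QA lookups are mocked, the answer time scopes are assumed to be (begin, end) year pairs, and the signal-specific OVERLAP rows of Table 2 are merged into a single check for brevity.

```python
# Rough sketch of TEQUILA's decomposition (Table 1, Case 1) and interval reasoning (Table 2).
SIGNALS = {"before": "BEFORE", "prior": "BEFORE", "after": "AFTER",
           "during": "OVERLAP", "while": "OVERLAP", "when": "OVERLAP", "in": "OVERLAP"}

def decompose(question: str):
    """Split at the first signal word: non-temporal sub-question + temporal constraint question."""
    tokens = question.rstrip("?").split()
    for i, tok in enumerate(tokens):
        if i > 0 and tok.lower() in SIGNALS:
            return (" ".join(tokens[:i]) + "?",              # e.g. "where did neymar play?"
                    "when " + " ".join(tokens[i + 1:]) + "?",  # e.g. "when he joined barcelona?"
                    SIGNALS[tok.lower()])
    return question, None, None  # no signal word found

def satisfies(relation, ans, cons):
    """Interval tests of Table 2; ans/cons are (begin, end) pairs."""
    (ba, ea), (bc, ec) = ans, cons
    if relation == "BEFORE":
        return ea <= bc
    if relation == "AFTER":
        return ba >= ec
    # OVERLAP: the three signal-specific rows of Table 2 are merged here
    return ba <= ec <= ea or ba <= bc <= ea or (bc <= ba and ea <= ec)

# Stage (ii): decompose; stage (iii), the KB-QA lookups, is mocked; stage (iv): filter candidates.
sub_q1, sub_q2, rel = decompose("which teams did neymar play for before joining psg?")
candidates = {"Santos": (2009, 2013), "Barcelona": (2013, 2017), "PSG": (2017, 2021)}  # mocked scopes
constraint = (2017, 2017)                                                              # mocked date
answers = [c for c, scope in candidates.items() if satisfies(rel, scope, constraint)]
print(sub_q1, sub_q2, rel, answers)
```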
We use three state-of-the-art KB-QA systems as baselines: AQQU [6], QUINT [2] (code from authors for both), and Bao et al. [4] (detailed results from authors). The \ufb01rst two are geared for simple questions, while Bao et al. handle complex questions, including temporal ones. We use TEQUILA as a plug-in for the \ufb01rst two, and directly evaluate against the system of Bao et al. on 341 temporal questions from the ComplexQuestions test set [4]. For evaluating baselines, the full question was fed directly to the underlying system. We report precision, recall, and F1 scores of the retrieved answer sets w.r.t. the gold answer sets, and average them over all test questions. 4.2 Results and insights Results on TempQuestions and the 341 temporal questions in ComplexQuestions are shown in Table 3. AQQU + TEQUILA and QUINT + TEQUILA refer to the TEQUILA-enabled versions of the respective baseline systems. We make the following observations. TEQUILA enables KB-QA systems to answer composite questionswith temporalconditions. Overall and category-wise F1-scores show that TEQUILA-enabled systems signi\ufb01cantly outperform the baselines. Note that these systems neither have capabilities for handling compositional syntax nor speci\ufb01c support for temporal questions. Our decomposition and rewrite methods are crucial for compositionality, and constraint-based reasoning on answers is decisive for the temporal dimension. The improvement in F1-scores stems from a systematic boost in precision, across most categories. TEQUILA outperforms state-of-the-art baselines. Bao et al. [4] represents the state-of-the-art in KB-QA, with a generic \fCIKM\u201918, October 2018, Turin, Italy Z. Jia et al. Table 3: Detailed performance of TEQUILA-enabled systems on TempQuestions and ComplexQuestions. TempQuestions Aggregate results Explicit constraint Implicit constraint Temporal answer Ordinal constraint (1,271 questions) Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 AQQU [6] 24.6 48.0 27.2 27.6 60.7 31.1 12.9 34.9 14.5 26.1 33.5 27.4 28.4 57.4 32.7 AQQU+TEQUILA 36.0* 42.3 36.7* 43.8* 53.8 44.6* 29.1* 34.7 29.3* 27.3* 29.6 27.7* 38.0* 41.3 38.6* QUINT [2] 27.3 52.8 30.0 29.3 60.9 32.6 25.6 54.4 27.0 25.2 38.2 27.3 21.3 54.9 26.1 QUINT+TEQUILA 33.1* 44.6 34.0* 41.8* 51.3 42.2* 13.8 43.7 15.7 28.6* 34.5 29.4* 37.0* 42.2 37.7* ComplexQuestions Aggregate results Explicit constraint Implicit constraint Temporal answer Ordinal constraint (341 questions) Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Prec Rec F1 Bao et al. [4] 34.6 48.4 35.9 41.1 53.2 41.9 26.4 36.5 27.0 18.6 40.2 22.3 31.1 60.8 36.1 AQQU [6] 21.5 50.0 23.3 25.0 60.1 28.4 11.2 31.2 11.4 19.6 35.7 19.2 22.2 54.9 25.3 AQQU+TEQUILA 36.2* 45.9 37.5* 41.2* 54.7 43.5* 27.5* 32.6 27.0* 29.5* 32.1 29.9* 40.2* 45.1 40.8* QUINT [2] 22.0 50.3 24.5 24.7 54.7 27.5 18.8 47.9 19.0 16.6 37.5 20.7 20.9 51.3 26.0 QUINT+TEQUILA 29.6* 44.9 31.1* 34.6* 47.3 36.3* 12.3 42.1 13.9 33.4* 37.5 33.9* 44.9* 51.6* 45.8* Aggregate results are averaged over the four categories. The highest value in a column for each dataset is in bold. An asterisk (*) indicates statistical signi\ufb01cance of TEQUILA-enabled systems over their standalone counterparts, under the 2-tailed paired \ud461-test at \ud45d< 0.05 level. mechanism for handling constraints in questions. TEQUILA-enabled systems outperformBao et al. on the temporal slice of ComplexQuestions, showing that a tailored method for temporal information needs is worthwhile. 
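As a side note on scoring, the per-question precision, recall and F1 used in this comparison can be sketched as follows. This is a minimal illustration, not the evaluation scripts; retrieved and gold answers are assumed to be sets of KB identifiers, and scores are macro-averaged over questions as described in the setup.

```python
# Minimal sketch of set-based precision/recall/F1, averaged over test questions.
def prf1(retrieved: set, gold: set):
    if not retrieved or not gold:
        return 0.0, 0.0, 0.0
    tp = len(retrieved & gold)
    p = tp / len(retrieved)
    r = tp / len(gold)
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

def macro_average(per_question):
    """per_question: list of (retrieved_set, gold_set) pairs, one per test question."""
    scores = [prf1(ret, gold) for ret, gold in per_question]
    n = len(scores)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))

print(macro_average([({"PSG", "Barcelona"}, {"Barcelona"}), ({"Santos"}, {"Santos"})]))
```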
TEQUILA enabled QUINT and AQQU to answer questions like: \u201cwho is the \ufb01rst husband of julia roberts?\u201d, \u201cwhen did francesco sabatini start working on the puerta de san vicente?\u201d, and \u201cwho was governor of oregon when shanghai noon was released?\u201d. Error analysis. Analyzing cases when TEQUILA fails yields insights towards future work: (i) Decomposition and rewriting were incorrect (for example, in \u201cwhere did the pilgrims come from before landing in america?\u201d, \u2018landing\u2019 is incorrectly labeled as a noun, triggering case 3 instead of case 1 in Table 1); (ii) The correct temporal predicate was not found due to limitations of the similarity function; and (iii) The temporal constraint or the time scope to use during reasoning was wrongly identi\ufb01ed. 5 RELATED WORK QA has a long tradition in IR and NLP, including benchmarking tasks in TREC, CLEF, and SemEval. This has predominantly focused on retrieving answers from textual sources. The recent TREC CAR (complex answer retrieval) resource [10], explores multi-faceted passage answers, but information needs are still simple. In IBM Watson [11], structured data played a role, but text was the main source for answers. Question decomposition was leveraged, for example, in [11, 19, 28] for QA over text. However, re-composition and reasoning over answers works very di\ufb00erently for textual sources [19], and are not directly applicable for KB-QA. Compositional semantics of natural language sentences has been addressed by [15] from a general linguistic perspective. Although applicable to QA, existing systems support only speci\ufb01c cases of composite questions. KB-QA is a more recent trend, starting with [7, 8, 12, 23, 26]. Most methods have focused on simple questions, whose SPARQL translations contain only a single variable (and a few triple patterns for a single set of qualifying entities). For popular benchmarks like WebQuestions [7], the best performing systems use templates and grammars [1, 2, 6, 18, 28], leverage additional text [20, 25], or learn end-to-end with extensive training data [25, 27]. These methods do not cope well with complex questions. Bao et al. [4] combined rules with deep learning to address a variety of complex questions. 6" + } + ], + "Yida Wang": [ + { + "url": "http://arxiv.org/abs/2205.03899v1", + "title": "SoftPool++: An Encoder-Decoder Network for Point Cloud Completion", + "abstract": "We propose a novel convolutional operator for the task of point cloud\ncompletion. One striking characteristic of our approach is that, conversely to\nrelated work it does not require any max-pooling or voxelization operation.\nInstead, the proposed operator used to learn the point cloud embedding in the\nencoder extracts permutation-invariant features from the point cloud via a\nsoft-pooling of feature activations, which are able to preserve fine-grained\ngeometric details. These features are then passed on to a decoder architecture.\nDue to the compression in the encoder, a typical limitation of this type of\narchitectures is that they tend to lose parts of the input shape structure. We\npropose to overcome this limitation by using skip connections specifically\ndevised for point clouds, where links between corresponding layers in the\nencoder and the decoder are established. As part of these connections, we\nintroduce a transformation matrix that projects the features from the encoder\nto the decoder and vice-versa. 
The quantitative and qualitative results on the\ntask of object completion from partial scans on the ShapeNet dataset show that\nincorporating our approach achieves state-of-the-art performance in shape\ncompletion both at low and high resolutions.", + "authors": "Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari", + "published": "2022-05-08", + "updated": "2022-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction Several data representations exist for 3D shapes. One common choice is the use of spatially discretized representations such as volumetric data (Yang et al. 2017; Wang et al. 2019b; Yang et al. 2018a). Alternative popular choices are implicit descriptions (Park et al. 2018; Chibane et al. 2020) as well as sparse 3D coordinate-based representations such as point clouds (Yang et al. 2018b; Xie et al. 2020b; Yuan et al. 2018) and 3D meshes (Groueix et al. 2018). Among this latter category of 3D data formats, point clouds are arguably the simplest, since they store 3D coordinates without any Communicated by Akihiro Sugimoto. B Yida Wang yida.wang@tum.de David Joseph Tan djtan@google.com Nassir Navab nassir.navab@tum.de Federico Tombari tombari@google.com 1 Technische Universit\u00e4t M\u00fcnchen, Munich, Germany 2 Google, Zurich, Switzerland additional topological information such as faces or edges associated to the vertices. Hence, investigating how to process and learn 3D shape geometry based on these simple, yet effective representations is currently a hot research topic. This has recently motivated several tasks in 3D computer vision such as estimating point cloud deformation (Yang et al. 2018b; Yuan et al. 2018), registration (Aoki et al. 2019; Park et al. 2017), completion (Wang et al. 2020b; Groueix et al. 2018; Yuan et al. 2018), segmentation (Qi et al. 2017a; Lei et al. 2020; Xu et al. 2020) and 3D object detection (Shi et al. 2020; Qi et al. 2019). This paper focuses on the point cloud completion task. The goal is to \ufb01ll out occluded parts of the input 3D geometry represented by a partial scan, in a way that is coherent with the global shape while preserving \ufb01ne local surface details. This is a useful task for many real world applications since occluded regions are normally present as part of most 3D data capture processes within, e.g., SLAM or multi-view reconstruction pipelines. State-of-the-art approaches targeting this task are based on neural networks and mostly rely on learning how to deform a set of 2D grids at different scales into 3D points, based on global shape descriptors typically represented by PointNet (Qi et al. 2017a) features. Examples of 123 \fInternational Journal of Computer Vision these approaches are FoldingNet (Yang et al. 2018b), AtlasNet (Groueix et al. 2018) and PCN (Yuan et al. 2018). To overcome the aforementioned problem related to information loss due to feature compression at the level of the encoder\u2013decoder bottleneck, GRNet (Xie et al. 2020b) suggests to preserve \ufb01ne geometry details by discretizing the features via volumetric feature maps used at the different layers of the encoder. It also suggests using volumetric U-Net (Yang et al. 2018a) to build skip connections between the encoder and the decoder, eventually merging the obtained features with the input point cloud. 
The idea of leveraging skip connections among different layers of an encoder\u2013 decoder model follows the successful paradigm already exploited for volumetric shape completion, in particular 3DRecGAN (Yang et al. 2017) and ForkNet (Wang et al. 2019b). While effective, converting sparse point cloud features into volumetric maps brings in all the disadvantages of discretized 3Ddatarepresentationswithrespecttopointclouds,inparticular the loss of \ufb01ne shape details, the inability to \ufb02exibly deal with local point density variations, as well as the unpractical trade-off between 3D resolution and memory occupancy. Recently, we have demonstrated how, by means of sorting features based on their activations rather than applying max pooling, we can build up point clouds embeddings that store more informative features for a point cloud with respect to PointNet. This feature-learning approach, named SoftPool (Wang et al. 2020b), obtained state-of-the-art results for different point cloud-related tasks, such as completion and classi\ufb01cation. In this work, we build up on our previous work (Wang et al. 2020b) to propose a more complete endto-end framework. Our contributions are two-folds and are listed as follows: 1. We generalize our feature extraction technique into a module called SoftPool++. This module introduces truncated softpool features aimed to decrease the memory requirementsoftheoriginalmethodduringtraining,making it compatible with off-the-shelf GPUs. Notably, a disadvantage of the SoftPool features (Wang et al. 2020b) is that each point is processed independently from the rest. Due to this, the proposed module further processes thetruncatedsoftpoolfeatureswithregionalconvolutions in order to recognize the relationships between the feature points. In contrast to Wang et al. (2020b) that applies their feature once, this module can be applied multiple times as demonstrated in our architecture, which uses it across multiple layers. 2. We propose a novel encoder\u2013decoder architecture characterized by the use of point-wise skip connections. By connecting corresponding layers between encoder and decoder, this has the advantage of preserving \ufb01ne geometric details from the given partial input cloud. This is to the best of our knowledge the \ufb01rst approach using skip connections for unorganized sets of 3D feature maps, relaxing the need of spatial discretization as deployed in Xie et al. (2020b), with bene\ufb01ts in terms of completion accuracy and memory occupancy. In addition, we also adapt the discriminator from TreeGAN (Shu et al. 2019) for the shape completion problem to further improve our model. Our method is evaluated on ShapeNet (Chang et al. 2015) for the task of shape completion and on ModelNet (Zhirong et al. 2015) and PartNet (Mo et al. 2019) for the task of classi\ufb01cation. Figure 1 illustrates a teaser of the shape completion results. It compares the architectures that are built on PointNet (Qi et al. 2017a) and SoftPool (Wang et al. 2020b) features. Visually, we show the advantage of the reconstructions that rely on SoftPool features as they are remarkably more similar to the ground truth. Moreover, the \ufb01gure also highlights the improvements of SoftPool++ with respect to our previous approach (Wang et al. 2020b). (a) (b) (c) (c) (d) (e) Fig. 1 Object completion results of the PointNet features such as FoldingNet (Yang et al. 2018b) and PCN (Yuan et al. 2018); and, the SoftPool features such as SoftPoolNet (Wang et al. 
2020b) and the proposed SoftPool++ 123 \fInternational Journal of Computer Vision 2 Related Work Based on the focus of our contributions, we browse through the relevant methods in 3D object completion from partial scans and the use of skip connections with 3D data. 2.1 3D Object Completion Inspired by the way humans perceive the 3D world from 2D projections, 3D-R2N2 (Choy et al. 2016) builds recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input RGB images sequentially to recover the 3D geometries. To further improve the reconstruction, a coarse-to-\ufb01ne 3D decoder was presented in Pix2Vox (Xie et al. 2019) as well as the residual re\ufb01ner in Pix2Vox++ (Xie et al. 2020a). Due to the recent popularity of the attention mechanisms, AttSets (Yang et al. 2020) proposed to build attention layers to correlate the image features from different views. In contrast, our 3D reconstruction in this paper focuses on only a single depth image. Takingadepthimageofanobjectfromanarbitrarycamera pose, the objective of 3D object completion is to complete its missing structure and build its full reconstruction. Focusing on learning-based completion, most related work can be categorized depending on the input data they process\u2014voxelized grid or point cloud. Interestingly, a notable work from OcCo (Wang et al. 2021a) demonstrates that the weights trained for completion are also valuable for other tasks like segmentation and classi\ufb01cation. Voxelized Grid Due to the popularity of 2D convolution operations in CNNs (Azad et al. 2019; Kirillov et al. 2020; Yang et al. 2020) for RGB images, its straightforward extension to 3D convolutions on volumetric data also rose to fame. 3D-EPN (Dai et al. 2017) and 3D-RecGAN (Yang et al. 2017) are the \ufb01rst works on this topic, where they extended the typical encoder\u2013decoder architecture (Noh et al. 2015) to 3D. Adopting a similar architecture, 3D-RecGAN++ (Yang et al. 2018a) and ForkNet (Wang et al. 2019b) utilize adversarial training with 3D discriminator to improve the reconstruction. The main advantage of volumetric completion is the structure of its data such that deep learning methods developed for RGB images can be extended to 3D. However, this advantage is also its limitation. The \ufb01xed local resolution makes it hard to reconstruct the object\u2019s \ufb01ner details without consuming a huge amount of memory. Point Cloud Having the inverse problem, point clouds have the potential to reconstruct the object at a higher resolutions but exhibited so far a limited application in deep learning due to its unstructured data. Note that, unlike RGB images or voxel maps, point clouds do not have a particular order, and the number of points varies as we change the camera pose or the object. Targeted to solve the unordered structure of point clouds, PointNet (Qi et al. 2017a) proposes to implement maxpooling in order to achieve a permutation invariant latent feature. Based on this one dimensional feature, FoldingNet (Yang et al. 2018b) proposes an object completion solution that deforms a 2D rectangular grids by multi-layer perceptron (MLP). By increasing the number of 2D rectangular grids, AtlasNet (Groueix et al. 2018) and PCN (Yuan et al. 2018) added more complexity as well as details into the reconstruction. MSN (Liu et al. 2020) then further improves the completion by adding restrictions to separate different patches apart from each other. Moreover, Cycle4Completion (Wen et al. 
2021) is also based on PointNet features but solves the problem by training with an unsupervised cycle transformation. Moving away from the global feature representation, PointNet++ (Qi et al. 2017b) samples the local subset of points with farthest point sampling (FPS) then feeds it into PointNet (Qi et al. 2017a). Based on this feature, PMPNet (Wen et al. 2020b) completes the entire object gradually from the observed regions to the nearest occluded regions. Snow\ufb02akeNet (Xiang et al. 2021) also uses the PointNet++ features to split points in the coarsely reconstructed object to execute the completion progressively. In addition, building a similar feature as PointNet, ME-PCN (Gong et al. 2021) takes both the occupied and the empty regions on the depth image as input for 3D completion, showing the advantage of masking the empty regions in completion. Unlike the methods which are dependent on a vectorized global feature to solve the permutation invariant problem, RFNet (Huang et al. 2021) and PointTr (Yu et al. 2021) produce several global features in their encoder. On one hand, RFNet (Huang et al. 2021) uses their features to complete the object in an recurrent way by concatenating the incomplete input and the predicted points level by level. On the other, PointTr (Yu et al. 2021) relies on transformers to produce a set of queries directly from the observed points with the help of positional coding. In effect, PointTr (Yu et al. 2021) does not need to compress the input into a single vector. TherecentworkfromPVD(Zhouetal.2013),GRNet(Xie et al. 2020b) and VE-PCN (Wang et al. 2021b) leverage both the point cloud and the voxel grid representations. Unlike most works that rely on Chamfer distance to optimize the model, PVD (Zhou et al. 2013) uses a simple Euclidean loss to optimize the shape generation model from the voxelized point cloud representation. GRNet (Xie et al. 2020b) \ufb01rst voxelizes the point cloud, processes the voxel grid with deep learning and converts the results back to point cloud. While this solves the unorganized structure of the point clouds, its discretization removes its advantage on reconstructing in higher resolutions. VE-PCN (Wang et al. 2021b) improves the completion by supplementing the features of the decoder 123 \fInternational Journal of Computer Vision in the volumetric completion with the edges. This method then converts the voxels to point clouds by Adaptive Instance Normalization (Lim et al. 2019). Another solution is presented in our previous work SoftPoolNet (Wang et al. 2020b) that builds local groups of features by sorting them into a feature map. 2D convolutions are then applied to the feature map. Consequently, this approach is able to deal with unorganized point clouds and achieve reconstruction results at high resolution. We build upon SoftPoolNet (Wang et al. 2020b) and generalize the feature extraction into a module which we call SoftPool++. This then allows us to connect multiple modules in an encoder\u2013 decoder architecture. As a consequence, we achieve better quantitive and qualitative results. 2.2 Skip Connections in 3D Skip connections were initially proposed for image processing (Mazaheri et al. 2019; Kim et al. 2016; Gao et al. 2019; Azad et al. 2019) then later adapted for 3D volumetric reconstruction (Yang et al. 2017, 2018a; Wang et al. 2019b). Given a point cloud as input, the methods like GRNet (Xie et al. 2020b) and InterpConv (Mao et al. 2019) require to convert the input point cloud to voxel grids. 
Aiming at alleviating this limitation on point clouds, the work from Std (Yang et al. 2019) bypasses the encoder features into the decoder point-by-point, while GACNet (Wang et al. 2019a) constructs a graph from the points and then builds the skip connection on the graph. The problem of these point-wise skip connections is that new points cannot be introduced in the decoder. To solve this, SA-Net (Wen et al. 2020a) groups PointNet++ (Qi et al. 2017b) features at different resolutions with KNN. The skip connection from the encoder then matches the resolution of the decoder. Contrary to these methods, in the context of object completion, the objective of our skip connection is to compensate for the data lost in the encoder and to bypass the observed geometry to the decoder. We also introduce the concept of a feature transformation to compensate for the difference between the features from the encoder and the decoder. Later in our evaluation, we found that the skip connection is a crucial step to achieve higher accuracy. Moreover, the SoftPool++ features also contribute to making our skip connection simpler. Since it is an organized feature, we avoid the time-consuming KNN, which significantly decreases our inference time. 3 Feature Extraction Given the partial scan of an object, the input to our network is a point cloud with Nin points written in matrix form as $P_{in} = [\mathbf{x}_i]_{i=1}^{N_{in}}$, where each point is represented by its 3D coordinates $\mathbf{x}_i = [x_i, y_i, z_i]$. On one hand, the first objective of this section is to build a feature descriptor from the unorganized point cloud such that the feature remains the same for any permutation of the point cloud in Pin. On the other hand, the second objective is to generalize this process into a feature extraction module that takes an arbitrary input Pin. In this way, the proposed module can be implemented at multiple instances in our architecture. 3.1 SoftPool Feature From the point cloud matrix, we convert each point into a feature vector $\mathbf{f}_i$ with Nf elements by projecting every point with a point-wise multi-layer perceptron (Qi et al. 2017a) whose parameters are assembled in $W_{MLP}$. Thus, we define the Nin × Nf feature matrix as $F = [\mathbf{f}_i]_{i=1}^{N_{in}}$. Note that we apply a softmax function to the output neurons of the perceptron so that the elements of $\mathbf{f}_i$ range between 0 and 1. Throughout this section, we refer to the toy example in Fig. 2 to visualize the various steps. This example assumes that there are only five points in the point cloud such that Nin = 5, as shown in Fig. 2a. One of the main challenges in processing a point cloud is its unstructured arrangement. If we look at Fig. 2a, changing the order of the points in Pin reorganizes the rows of the feature map F. There is consequently no guarantee that the feature map remains constant for the same set of points. To solve this problem, we propose to organize the feature vectors in F so that their k-th elements are sorted in descending order, which is denoted as $F'_k$. Note that k should not be larger than Nf. This is demonstrated in Fig. 2a, where we arrange the five feature vectors from $F = [\mathbf{f}_i]_{i=1}^{5}$ to $F'_k = [\mathbf{f}_i]_{i=\{3,5,1,2,4\}}$ by comparing the k-th element of each vector. The features in SoftPoolNet (Wang et al. 2020b) repeat this process for all of the Nf elements of $\mathbf{f}_i$. Altogether, the feature is a 3D tensor with dimension Nin × Nf × Nf, denoted as $F' = [F'_1, F'_2, \ldots, F'_{N_f}]$ in Fig. 2b.
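To make the sorting step above concrete, the following is a minimal PyTorch-style sketch (not the reference implementation) of how the point-wise MLP output F is re-ordered by its k-th column to form F'_k and stacked into the tensor F'; the MLP layout and the toy sizes are our own illustrative assumptions.

```python
import torch

def softpool_feature(points, mlp):
    """Build the SoftPool tensor F' from an unordered point cloud.

    points: (N_in, 3) tensor; mlp: module mapping (N_in, 3) -> (N_in, N_f).
    Returns F' of shape (N_in, N_f, N_f), where slice k holds the rows of F
    sorted in descending order of their k-th element.
    """
    feat = torch.softmax(mlp(points), dim=1)                 # F: (N_in, N_f), entries in [0, 1]
    n_in, n_f = feat.shape
    slices = []
    for k in range(n_f):
        order = torch.argsort(feat[:, k], descending=True)   # permutation induced by column k
        slices.append(feat[order])                            # F'_k: (N_in, N_f)
    return torch.stack(slices, dim=2)                         # F': (N_in, N_f, N_f)

# Toy example with N_in = 5 points and N_f = 4 features, as in Fig. 2a.
mlp = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
pts = torch.rand(5, 3)
print(softpool_feature(pts, mlp).shape)  # torch.Size([5, 4, 4])
```

Because every slice is sorted by its own column values, permuting the input points leaves each sorted slice, and hence F', unchanged.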
Finally, we assemble the SoftPool features $F^*$ by taking the Nr rows with the highest activations of each $F'_k$ in $F'$. Since each row in $F'_k$ is equivalent to a point, we can then interpret the Nr rows of $F'_k$ as one region in the point cloud, summing up to Nf regions in $F^*$. Although both PointNet (Qi et al. 2017a) and SoftPoolNet (Wang et al. 2020b) utilize an MLP in their architectures, they differ significantly in how they handle its output. Compared to the max-pooling operation in PointNet (Qi et al. 2017a), the motivation of the SoftPool feature is to capture a larger amount of information and to further process it with regional convolution operations, as explained later in Sect. 4.
Fig. 2 Toy examples of the truncated SoftPool feature. Given 5 points in (a), they go through a multi-layer perceptron (MLP) to produce F. At the k-th element, the vectors are sorted to build $F'_k$ and consequently $F'$. In (b), we concatenate the first Nr rows of each $F'_k$ to construct the 3D tensor $F^*$, which corresponds to the regions with high activations, and then truncate it to assemble $\hat{F}$.
3.2 Generalizing and Truncating the SoftPool Feature In practice, we noticed that we can generalize the SoftPool feature formulation to an arbitrary input feature Pin (thus alleviating the definition of points) to produce the SoftPool features $F'$. From this perspective, we can construct an architecture with a series of SoftPool feature extractions. Therefore, we take the point cloud as the input to the architecture and extract the first SoftPool features. Then, after processing the first features, we can extract the second features from them, and so on. This is discussed later in Sect. 4 with an encoder–decoder architecture. However, the drawback of such an architecture is the size of the SoftPool features. With a dimension of Nr × Nf × Nf, the memory footprint increases with the size of the feature, but we are constrained by the memory size of our off-the-shelf GPU. Notably, in Wang et al. (2020b), the feature dimension Nf is set to a small value of 8. In this work, since we are interested in building a series of SoftPool features in an encoder–decoder architecture, Nf increases up to 256 in the latent space. Hence, we propose to further truncate the SoftPool features to Nr × Nf × Ns, where the third dimension takes the first Ns matrices in $F^*$, as illustrated in Fig. 2b. To distinguish it from Wang et al. (2020b), we refer to this as the truncated SoftPool feature, denoted as $\hat{F}$ in Fig. 2b. 3.3 Regional Convolutions Considering that each point in the cloud independently goes through the MLP while the operations thereafter to produce the truncated SoftPool features rely on sorting, each row of our feature remains independent from the others. However, in contrast to max-pooling, which produces a vector, our feature is a 3D tensor which can undergo convolutional operations. Instead of applying the same kernel to all regions as in Wang et al. (2020b), we generalize the regional convolutions and impose distinct kernels for each region. We first split $\hat{F} = [\hat{F}_r]_{r=1}^{N_s}$ into separate regions $\hat{F}_r$ and correspondingly apply a set of kernels $W_{conv} = \{W_r\}_{r=1}^{N_s}$.
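As a rough illustration of the truncation just described (a sketch under our own naming assumptions, not the released code), $F^*$ keeps the Nr highest-activation rows of every sorted slice $F'_k$, and the truncated feature $\hat{F}$ then keeps only the first Ns of those slices:

```python
import torch

def truncated_softpool(f_prime, n_r, n_s):
    """Truncate the SoftPool tensor F' of shape (N_in, N_f, N_f).

    F* keeps, for every slice F'_k, its first n_r rows (the rows with the
    highest k-th activation, since each slice is already sorted).
    F_hat then keeps only the first n_s slices, giving (n_r, N_f, n_s).
    """
    f_star = f_prime[:n_r]          # (n_r, N_f, N_f): top-n_r rows of every sorted slice
    f_hat = f_star[:, :, :n_s]      # (n_r, N_f, n_s): first n_s regions only
    return f_star, f_hat

f_prime = torch.rand(1024, 64, 64)          # e.g. N_in = 1024 points, N_f = 64 (toy sizes)
f_star, f_hat = truncated_softpool(f_prime, n_r=32, n_s=8)
print(f_hat.shape)                           # torch.Size([32, 64, 8])
```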
Assigning the concatenated output tensor as $F_{out} = [P_r]_{r=1}^{N_s}$, we can formally describe this operation as $P_r(i, j) = \sum_{l=1}^{N_f} \sum_{k=1}^{N_k} \hat{F}_r(i + k, l)\, W_r(j, k, l)$ (1) for the r-th region. The dimension of each kernel is Nk × Nf × Nout, where Nk indicates the number of neighbors to consider and Nout is the desired size of the output $P_r$. Note that the kernels convolve on the entire width of $\hat{F}_r$, i.e. corresponding to its width Nf. This implies that we only pad on the vertical axis. Similar to other convolutional operators, the stride s distinguishes between a convolutional and a deconvolutional operation. If the stride is greater than 1, $\hat{F}_r$ is downsampled, while it is upsampled if the stride is less than 1. 3.4 SoftPool++ Module Now, we have all the components to build the feature extraction module shown in Fig. 3, which we call SoftPool++. Since Pin is defined as the input point cloud, we generalize the input of the module as Fin, where we set Fin = Pin in the first layer. Hence, the input matrix Fin goes through a 3-layer perceptron and then builds the truncated SoftPool features. Thereafter, we perform the regional convolution and reshape the results by squeezing the third dimension to finally acquire our output feature matrix Fout.
Fig. 3 Overview of the feature extraction module called SoftPool++
Fig. 4 Object completion results with MSN (Liu et al. 2020) while using PointNet (Qi et al. 2017a) features and SoftPool++ features on its encoder
When constructing our architecture in Sect. 4, the encoder and decoder are distinguished primarily by the stride s. In this paper, we show the versatility of this novel module to act as an encoder and a decoder, as well as to refine a coarse point cloud with more elaborate details. The differences between decoding from PointNet features and SoftPool++ features are evident in Fig. 4, where we replace the PointNet feature in MSN (Liu et al. 2020) with a SoftPool++ feature of the same size of 1024. By replacing the PointNet (Qi et al. 2017a) encoder in MSN (Liu et al. 2020) with our SoftPool++ encoder, we show that the SoftPool++ feature supplements MSN's decoder such that all the wheels are clearly separated from the body of the SUV, while the original PointNet feature in MSN follows the more generic structure of a vehicle with tiny gaps between wheel and body. This proves that SoftPool++ makes our decoder able to take all observable geometries into account to complete the shape, while the max-pooled PointNet feature cannot deal with geometric structures which are rarely or not at all seen in the training data. 4 Network Architecture The volumetric U-Net (Çiçek et al. 2016; Yang et al. 2018a) in 3D-RecGAN (Yang et al. 2017) and GRNet (Xie et al. 2020b) has shown significant improvements in object completion, as it injects more data from the encoder to the decoder in order to supplement the compressed latent feature. Without the skip connection in U-Net, we end up losing most of the input data as it goes through the encoder. Consequently, the decoder starts hallucinating the overall structure without being faithful to the given information. Inspired by this idea, we introduce a novel U-Net connection that directly takes the point cloud as input, i.e. without the need for voxelization at any stage of the network. Our network architecture is composed of an encoder–decoder structure with a skip connection, as shown in Fig. 5.
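Going back to the regional convolution of Eq. (1) in Sect. 3.3, the sketch below shows one way it could be realized with a distinct kernel per region that spans the full feature width Nf, padding only along the rows; this is our own illustration with assumed tensor layouts, not the authors' implementation, and only strides greater than or equal to 1 are shown (fractional strides would use a transposed convolution instead).

```python
import torch
import torch.nn.functional as F

def regional_convolution(f_hat, kernels, stride=1):
    """Sketch of Eq. (1): one kernel W_r per region r, convolving the full width N_f.

    f_hat:   (n_r, n_f, n_s) truncated SoftPool feature.
    kernels: (n_s, n_out, n_f, n_k) stack of per-region kernels.
    Returns F_out = [P_r] stacked over regions, shape (rows_out, n_out, n_s).
    """
    n_s = f_hat.shape[2]
    n_k = kernels.shape[-1]
    outputs = []
    for r in range(n_s):
        region = f_hat[:, :, r].T.unsqueeze(0)               # (1, n_f, n_r): rows become the conv axis
        # conv1d slides only along the rows (vertical axis); each kernel covers the entire width n_f.
        p_r = F.conv1d(region, kernels[r], stride=stride, padding=n_k // 2)
        outputs.append(p_r.squeeze(0).T)                      # P_r: (rows_out, n_out)
    return torch.stack(outputs, dim=2)

f_hat = torch.rand(32, 256, 8)                                # n_r=32, n_f=256, n_s=8
kernels = torch.randn(8, 64, 256, 4) * 0.01                   # n_out=64, n_k=4
print(regional_convolution(f_hat, kernels, stride=2).shape)   # torch.Size([17, 64, 8])
```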
Such a connection between encoder and decoder makes the completion more likely to preserve the input geometries. The encoder is composed of consecutive feature extraction modules from Sect. 3.4 that downsample the input to the latent feature, while the decoder is composed of similar feature extraction modules that upsample it to the output. As discussed in Sect. 3.4, the stride s is a significant parameter to distinguish the two types of layers. Table 1 lists the values of all the parameters for the module in the convolution and deconvolution layers.
Fig. 5 Overview of our object completion architecture, where the parameters for the convolution and deconvolution operations based on the feature extraction module are listed in Table 1. Note that, in our evaluation, we compare three point cloud results from the decoder: (1) the skip-connection output; (2) the coarse output; and (3) the fine-grained output, which is the final reconstruction.
Table 1 Dimensions and parameters of each feature extraction module in our architecture:
Conv1: input Pin (Nin × 3); stride s = 8; Nf = 256; Nr = Nin; Ns = 1; Nk = 8; Nout = 256; output F^conv1_out (Nin/8 × 256).
Conv2: input F^conv1_out (Nin/8 × 256); stride s = 2; Nf = 256; Nr = 32; Ns = 8; Nk = 8; Nout = 256; output F^conv2_out (256 × 256).
Deconv1: input F^conv2_out (256 × 256); stride s = 1/2; Nf = 256; Nr = Nin; Ns = 1; Nk = 4; Nout = 256; output F^deconv1_out (512 × 256).
Deconv2: input [F^deconv1_out, F^conv1_out R] ((512 + Nin/8) × 256); stride s = 1/2; Nf = 256; Nr = Nin; Ns = 1; Nk = 4; Nout = 3; output F^deconv2_out ((1024 + Nin/4) × 3).
MDS: input F^deconv2_out ((1024 + Nin/4) × 3); Nf = 3; Nout = 3; output PMDS (2048 × 3).
Deconv3: input PMDS (2048 × 3); stride s = 1/2; Nf = 256; Nr = Nin; Ns = 1; Nk = 4; Nout = 3; output F^deconv3_out (4096 × 3).
Deconv4: input F^deconv3_out (4096 × 3); stride s = 1/4; Nf = 256; Nr = Nin; Ns = 1; Nk = 4; Nout = 3; output Pout (16,384 × 3).
Note that the input to the architecture is the point cloud Pin with a dimension of Nin × 3, while the output is another point cloud Pout with dimension 16,384 × 3.
Skip Connection with Feature Transform We bridge the encoder and decoder with a skip connection to build a U-Net-like (Çiçek et al. 2016; Yang et al. 2018a) structure. This connection links the result of conv1, denoted as F^conv1_out, to the result of deconv1, denoted as F^deconv1_out. However, instead of simply concatenating them, we introduce a square matrix R that transforms the features from the encoder as F^conv1_out R. Note that the multiplication by the transform is on the right side because the points are arranged row-wise in Pin, which implies that the feature vectors are also arranged row-wise. Subsequently, we concatenate the two matrices into [F^deconv1_out, F^conv1_out R], which serves as the input to the feature extraction module, producing F^deconv2_out. In order to avoid randomly large values in the transformation and attain numerical stability during training, we regularize the transformation matrix to be orthonormal such that all elements are between [-1, 1] and it mathematically satisfies $RR^\top = I$, where I is an identity matrix. Geometrically, the regularizer imposes a rotation of the features by R. Minimum Density Sampling Since the number of points in the input cloud varies, the result of deconv2 would also contain a varying number of points, i.e., 1024 + Nin/4 points according to Table 1, since it depends on the input dimension. Thus, we include Minimum Density Sampling (MDS) (Liu et al. 2020) in the decoder to standardize the output to a coarse resolution of 2048 points.
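A minimal sketch of the skip connection with feature transform described above is given below, assuming row-wise features as in Table 1; the class and variable names are hypothetical, and the orthonormality penalty anticipates the regularizer introduced in Sect. 5.2.

```python
import torch
import torch.nn as nn

class FeatureTransformSkip(nn.Module):
    """Skip connection with a learnable square feature transform R (illustrative sketch only)."""

    def __init__(self, n_f=256):
        super().__init__()
        self.R = nn.Parameter(torch.eye(n_f))   # initialized as identity, regularized to stay orthonormal

    def forward(self, f_enc, f_dec):
        # Features are arranged row-wise, so R multiplies on the right: F^conv1_out @ R.
        transformed = f_enc @ self.R
        return torch.cat([f_dec, transformed], dim=0)   # [F^deconv1_out, F^conv1_out R], stacked along the rows

    def orthonormal_penalty(self):
        # Penalty on || R R^T - I ||, keeping the transform close to a rotation (cf. Eq. (11)).
        eye = torch.eye(self.R.shape[0], device=self.R.device)
        return torch.norm(self.R @ self.R.T - eye)

skip = FeatureTransformSkip(n_f=256)
f_enc = torch.rand(2048 // 8, 256)      # F^conv1_out: (Nin/8, 256) with Nin = 2048, as in Table 1
f_dec = torch.rand(512, 256)            # F^deconv1_out: (512, 256)
print(skip(f_enc, f_dec).shape)         # torch.Size([768, 256]), i.e. 512 + Nin/8 rows
```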
The coarse resolution is then refined with two deconvolutional operations to 16,384 points. The motivation of adding the MDS is to help the final deconvolutional layers converge faster during training. Later in Sect. 6, we further investigate the differences between the point clouds from the skip-connection as well as the coarse and fine outputs, as illustrated in Fig. 5. 5 Loss Functions Since the main goal here is point cloud completion (Groueix et al. 2018; Yang et al. 2018b; Yuan et al. 2018), we first analyse whether the predicted point cloud Pout matches the given ground truth Pgt through the Chamfer distance $L_{complete} = \text{Chamfer}(P_{out}, P_{gt})$. (2) Furthermore, we optimize our architecture with two sets of loss functions that are related to the feature extraction module, for all the convolution and deconvolution layers in the architecture from Sect. 3.4, as well as to the skip connection with the feature transform from Sect. 4. 5.1 Optimizing the Feature Extraction Module For the feature extraction module that utilizes SoftPool features, we adopt the same loss terms as in Wang et al. (2020b), whose main objective is to optimize the distribution of the features across the different regions. Intra-regional Entropy The ideal case for the feature vector $\mathbf{f}_i$ is a one-hot code, i.e. each vector gets assigned to only one region. To accomplish this goal, we measure the probability of $\mathbf{f}_i$ belonging to region k among all Ns regions by directly applying the softmax on the elements of the vector as $P(\mathbf{f}_i, k) = \frac{e^{\mathbf{f}_i[k]}}{\sum_{j=1}^{N_s} e^{\mathbf{f}_i[j]}}$. (3) This implies that P is maximized when $\mathbf{f}_i$ is a one-hot code with the k-th element equal to one. However, in the presence of multiple peaks in the vector, $P(\mathbf{f}_i, k)$ might decrease significantly. Therefore, by taking the entropy into account, the intra-regional loss function $L_{intra} = -\frac{1}{N_{in}} \frac{1}{B} \sum_{i=1}^{N_{in}} \sum_{j=1}^{B} \sum_{k=1}^{N_s} P(\mathbf{f}_i, k) \log P(\mathbf{f}_i, k)$, (4) where B is the batch size, tries to enforce the feature vector to have one peak so that it confidently falls into just one region.
Fig. 6 Our object completion results with and without the influence of L_boundary
Inter-regional Entropy The drawback of L_intra is that all feature vectors may have the same peak at the k-th element. From a more holistic perspective, the inter-regional loss function aims at distributing the features across the different regions. It relies on maximizing the regional entropy $E_r = -\frac{1}{B} \sum_{j=1}^{B} \sum_{k=1}^{N_s} \bar{P}_k \log \bar{P}_k$ (5) given that $\bar{P}_k = \frac{1}{N_{in}} \sum_{i=1}^{N_{in}} P(\mathbf{f}_i, k)$. (6) We can then define the loss function as $L_{inter} = \log(N_s) - E_r$ (7) since the upper bound of $E_r$ is computed as $-\log \frac{1}{N_s}$, or simply $\log(N_s)$. Boundary Overlap Minimization In addition to optimizing the holistic distribution of the points, we also incorporate a loss function that is applied on pairs of regions i and j. We collect the set of points $B_i^j$ from region i with activations of region j larger than a threshold τ, i.e. set to 0.3. Similarly, we also take the inverse set $B_j^i$. Consequently, we squeeze the overlaps between the two regions. By minimizing the Chamfer distance between $B_i^j$ and $B_j^i$, we obtain the loss $L_{boundary} = \sum_{i=1}^{N_s} \sum_{j=i}^{N_s} \text{Chamfer}(B_j^i, B_i^j)$ (8) that tries to make the overlapping sets of points smaller, ideally down to just a line.
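A minimal sketch of the intra- and inter-regional entropy terms in Eqs. (3)-(7) is given below; the tensor layout (batch, points, regions) and the small epsilon for numerical stability are our own assumptions and not part of the original formulation.

```python
import torch

def region_probabilities(feat):
    """Eq. (3): softmax over the N_s region activations of every feature vector.

    feat: (B, N_in, N_s) raw activations used for the region assignment.
    """
    return torch.softmax(feat, dim=-1)

def intra_regional_loss(p):
    """Eq. (4): mean point-wise entropy, low when every vector is close to a one-hot code."""
    b, n_in, _ = p.shape
    entropy = -(p * torch.log(p + 1e-12)).sum(dim=-1)        # (B, N_in)
    return entropy.sum() / (b * n_in)

def inter_regional_loss(p, n_s):
    """Eqs. (5)-(7): encourage the average assignment to spread over all N_s regions."""
    p_bar = p.mean(dim=1)                                     # Eq. (6): (B, N_s)
    e_r = -(p_bar * torch.log(p_bar + 1e-12)).sum(dim=-1).mean()  # Eq. (5)
    return torch.log(torch.tensor(float(n_s))) - e_r          # Eq. (7)

p = region_probabilities(torch.rand(8, 2048, 8))              # B = 8, N_in = 2048, N_s = 8
print(intra_regional_loss(p).item(), inter_regional_loss(p, n_s=8).item())
```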
In Fig. 6, we visualize the difference of optimizing with and without L_boundary, where the distribution of the point cloud is less noisy in the occluded regions, such as the armrest. Notably, this loss function is general enough to also be effectively applied to other methods that rely on a subdivision of the point cloud into different regions, such as AtlasNet (Groueix et al. 2018), PCN (Yuan et al. 2018) and MSN (Liu et al. 2020). In Sect. 7.2, we formally evaluate these methods with and without L_boundary. Feature Duplicate Minimization The last loss term $L_{preserve} = \mathrm{EMD}(\hat{F}, F)$ (9) imposes that the resulting truncated SoftPool feature $\hat{F}$ takes most of the features from the original F so that it avoids duplicates. To make the earth mover's distance (Li et al. 2013) more efficient, 256 vectors are randomly selected from F and $\hat{F}$. In practice, Fig. 7 visualizes the effects of L_preserve on the reconstruction, where lower weights of this loss produce a large hole, while incorporating this loss builds a point cloud with similar densities. 5.2 Optimizing the Skip Connection We first visualize a subset of the architecture and focus on the skip connection, as shown in Fig. 8. Here, we define P_partial as the partial reconstruction on F^deconv2_out contributed by the skip connection with the feature transform. However, note that P_partial is not a subset of F^deconv2_out. It is produced by taking the input point cloud through conv1, the feature transform and deconv2. Since the skip connection aims to maintain the given input structure, we define a loss function that acts as an autoencoder such that $L_{skip} = \text{Chamfer}(P_{partial}, P_{in})$. (10) In addition, based on Sect. 4, we regularize the values in the feature transform such that $L_R = \lVert RR^\top - I \rVert_2$ (11) makes R orthonormal.
Fig. 7 Our object completion results while increasing the weight of L_preserve
Fig. 8 Subset of the architecture that focuses on the skip connection
5.3 Discriminative Training Recognizing the advantages of TreeGAN (Shu et al. 2019), we also investigate applying discriminative training conditioned on the input partial scan Pin. In this case, we first introduce the conditional feature maps Pout|Pin and Pgt|Pin by concatenating them along the point axis. We build our discriminator D with the same parametric model proposed in Shu et al. (2019). By restricting the output of the discriminator to a range between 0 and 1, we can then apply $L_{infer} = -\log(D(P_{out}|P_{in}))$ (12) to optimize our completion architecture, while $L_{discri} = -\log(D(P_{gt}|P_{in})) - \log(1 - D(P_{out}|P_{in}))$ (13) optimizes the discriminator D. In practice, we impose the loss functions in (12) and (13) alternately in order to optimize the completion architecture and the discriminator separately. 6 Experiments For all evaluations, we train our model on an NVIDIA Titan V and parameterize it with a batch size of 8. Moreover, we apply a Leaky ReLU with a negative slope of 0.2 on the output of each regional convolution. 6.1 Completion on ShapeNet We evaluate the performance of geometric completion of a single object on the ShapeNet (Chang et al. 2015) database, which provides the point clouds of the partial scans as input and the corresponding ground truth completed shapes. To make our results comparable to other approaches, we adopt the standard 8-category evaluation (Yuan et al. 2018) for single object completion.
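Both the completion loss of Eq. (2) and the evaluations reported below rely on the Chamfer distance; the following brute-force sketch (our own illustration, not the evaluation code used for the tables) shows the symmetric form, with the L2 variant using squared nearest-neighbor distances and the L1 variant using Euclidean ones.

```python
import torch

def chamfer_distance(p, q, squared=True):
    """Brute-force symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    squared=True corresponds to the L2 metric (mean squared nearest-neighbor distance);
    squared=False corresponds to the L1 metric (mean Euclidean nearest-neighbor distance).
    """
    d = torch.cdist(p, q)                  # (N, M) pairwise Euclidean distances
    if squared:
        d = d ** 2
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

pred = torch.rand(2048, 3)
gt = torch.rand(2048, 3)
print(chamfer_distance(pred, gt).item())
```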
Both sampled from ShapeNet meshes, PCN (Yuan et al. 2018) and TopNet (Tchapmi et al. 2019) supplement two set of datasets individually for low and high resolutions evaluation, which contain 2048 and 16,384 points, respectively, where the inputs are provided with 2048 points. Notice that the low resolution dataset provided by TopNet is also commonly referred to Completion3D benchmark. Since previous works report their results in terms of L1/L2 metric of the Chamfer distance separately, we also report our results in both resolutions (2048 and 16,384) and metrics (L1 and L2). We compare against state-of-the-art point cloud completion approaches such as PCN (Yuan et al. 2018), FoldingNet (Yang et al. 2018b), AtlasNet (Groueix et al. 2018), PointNet++ (Qi et al. 2017b), MSN (Liu et al. 2020) and GRNet (Xie et al. 2020b). To show the advantages over volumetric completion, we also compare against 3D-EPN (Dai et al. 2017) and ForkNet (Wang et al. 2019b) with an output resolution of 64 \u00d7 64 \u00d7 64. As for point cloud resolutions, PCN (Yuan et al. 2018), GRNet (Xie et al. 2020b) and SoftPoolNet (Wang et al. 2020b) report the best performance with 16,384 points while MSN (Liu et al. 2020) presents their \ufb01nal output resolution with 8192 points. Aiming at a fair numerical comparison at different resolutions, we modify the last layers of these architectures so as to attain the same resolution for all methods. Low Resolution At low resolution, we achieve competitive results, attaining the 0.13 \u00d7 10\u22124 from PMP-Net (Wen et al. 2020b) with the L2-Chamfer distance in Table 2, while we achieve state-of-the-art results when evaluating on the L1-Chamfer distance in Table 3. 123 \fInternational Journal of Computer Vision Table 2 Evaluation on the object completion based on the Chamfer distance trained with L2 distance (multiplied by 104) with the output resolution of 2048 Method Plane Cabinet Car Chair Lamp Sofa Table Vessel Avg. Completion3D (Tchapmi et al. 2019) benchmark, Output Resolution = 2048, L2 metric FoldingNet (Yang et al. 2018b) 12.83 23.01 14.88 25.69 21.79 21.31 20.71 11.51 19.07 PointSetVoting (Zhang et al. 2020a) 6.88 21.18 15.78 22.54 18.78 28.39 19.96 11.16 18.18 AtlasNet (Groueix et al. 2018) 10.36 23.40 13.40 24.16 20.24 20.82 17.52 11.62 17.77 PCN (Yuan et al. 2018) 9.79 22.70 12.43 25.14 22.72 20.26 20.27 11.73 18.22 TopNet (Tchapmi et al. 2019) 7.32 18.77 12.88 19.82 14.60 16.29 14.89 8.82 14.25 GRNet (Xie et al. 2020b) 6.13 16.90 8.27 12.23 10.22 14.93 10.08 5.86 10.64 SA-Net (Wen et al. 2020a) 5.27 14.45 7.78 13.67 13.53 14.22 11.75 8.84 11.22 PMP-Net (Wen et al. 2020b) 3.99 14.70 8.55 10.21 9.27 12.43 8.51 5.77 9.23 SoftPoolNet (Wang et al. 2020b) 6.39 17.26 8.72 13.16 10.78 14.95 11.01 6.26 11.07 Ours 4.59 15.82 6.78 11.41 8.82 13.37 9.15 4.93 9.36 Without skip-connection 4.63 16.35 9.10 13.40 10.55 13.85 10.90 6.23 10.63 Without D 5.07 16.12 6.86 11.56 8.88 13.67 9.21 5.33 9.59 Without LR 5.38 17.04 9.93 14.13 11.35 14.52 11.63 6.81 11.35 Bold indicates the best performance achieved in certain column Table 3 Evaluation on the object completion based on the Chamfer distance trained with L1 distance (multiplied by 104) with the output resolution of 2048 Method Plane Cabinet Car Chair Lamp Sofa Table Vessel Avg. Completion3D (Tchapmi et al. 2019) benchmark, output resolution = 2048, L1 metric FoldingNet (Yang et al. 2018b) 11.18 20.15 13.25 21.48 18.19 19.09 17.80 10.69 16.48 AtlasNet (Groueix et al. 
2018) 10.37 23.40 13.41 24.16 20.24 20.82 17.52 11.62 17.69 AtlasNet + Lboundary 9.25 22.51 12.12 22.64 18.82 19.11 16.50 11.53 16.56 PCN (Yuan et al. 2018) 8.09 18.32 10.53 19.33 18.52 16.44 16.34 10.21 14.72 PCN + Lboundary 6.39 16.32 9.30 18.61 16.72 16.28 15.29 9.00 13.49 TopNet (Tchapmi et al. 2019) 5.50 12.02 8.90 12.56 9.54 12.20 9.57 7.51 9.72 SA-Net (Wen et al. 2020a) 2.18 9.11 5.56 8.94 9.98 7.83 9.94 7.23 7.74 SoftPoolNet (Wang et al. 2020b) 4.76 10.29 7.63 11.23 8.97 10.08 7.13 6.38 8.31 Ours 3.50 9.95 7.01 10.48 8.45 8.86 5.99 5.60 7.48 Without skip-connection 4.29 10.24 7.76 11.10 9.13 9.72 6.33 6.46 8.13 Without D 3.72 10.07 7.23 10.76 8.50 9.15 6.10 5.92 7.68 Without LR 4.68 10.54 8.06 11.42 9.45 10.03 6.70 6.77 8.46 Without Linter 9.81 19.49 13.24 18.20 16.83 17.00 15.64 7.16 14.67 Without Lintra 3.70 15.93 10.78 12.97 12.89 11.79 11.33 5.60 10.62 Without Linter, Lintra 9.07 19.62 14.31 19.17 16.46 17.82 14.34 7.60 14.80 Without Lboundary 4.71 10.43 8.14 11.27 9.27 10.57 7.43 7.36 8.65 Without Lpreserve 7.43 18.84 11.58 15.68 17.38 17.53 14.46 6.49 13.68 Bold indicates the best performance achieved in certain column High Resolution We achieve the best results on most objects with the high resolution as presented in Tables 4 and 5 with 8.31\u00d710\u22123 and 2.55\u00d710\u22123, respectively. Table 5 also shows that volumetric approaches like 3D-EPN (Dai et al. 2017) and ForkNet (Wang et al. 2019b) having large issues when evaluated in Chamfer distance because the converted point clouds from the \ufb01xed volumetric grids are at much smaller local resolutions. Validating with F-Score@1% Since the Chamfer distance hardly re\ufb02ect the errors in the local geometry as suggested in Tatarchenko et al. (2019), the evaluation in GRNet (Xie et al. 2020b) uses the metric F-Score@1% that computes the FScore after matching the predicted point cloud to the ground truth with a distance threshold of 1% of the side length of the reconstructed volume. The evaluations on reconstructing higher resolutions are reported in Tables 6 and 7 on ShapeNet 123 \fInternational Journal of Computer Vision Table 4 Evaluation on the object completion based on the Chamfer distance trained with L1 distance (multiplied by 103) with the output resolution of 16,384 Method Plane Cabinet Car Chair Lamp Sofa Table Vessel Avg. PCN (Yuan et al. 2018) dataset, Output Resolution = 16,384, L1 metric 3D-EPN (Dai et al. 2017) 13.16 21.80 20.31 18.81 25.75 21.09 21.72 18.54 20.15 ForkNet (Wang et al. 2019b) 9.08 14.22 11.65 12.18 17.24 14.22 11.51 12.66 12.85 PointNet++ (Qi et al. 2017b) 10.30 14.74 12.19 15.78 17.62 16.18 11.68 13.52 14.00 FoldingNet (Yang et al. 2018b) 9.49 15.80 12.61 15.55 16.41 15.97 13.65 14.99 14.31 AtlasNet (Groueix et al. 2018) 6.37 11.94 10.11 12.06 12.37 12.99 10.33 10.61 10.85 TopNet (Tchapmi et al. 2019) 7.61 13.31 10.90 13.82 14.44 14.78 11.22 11.12 12.15 PCN (Yuan et al. 2018) 5.50 10.63 8.70 11.00 11.34 11.68 8.59 9.67 9.64 PCN + Lboundary 5.13 9.12 7.58 9.35 9.40 9.31 7.30 8.91 8.26 MSN (Liu et al. 2020) 5.60 11.96 10.78 10.62 10.71 11.90 8.70 9.49 9.97 GRNet (Xie et al. 2020b) 6.45 10.37 9.45 9.41 7.96 10.51 8.44 8.04 8.83 PMP-Net (Wen et al. 2020b) 5.65 11.24 9.64 9.51 6.95 10.83 8.72 7.25 8.73 CRN (Wang et al. 2020a) 4.79 9.97 8.31 9.49 8.94 10.69 7.81 8.05 8.51 SoftPoolNet (Wang et al. 
2020b) 6.93 10.91 9.78 9.56 8.59 11.22 8.51 8.14 9.20 Ours 5.50 10.02 8.73 9.05 7.53 10.24 8.01 7.43 8.31 Without skip-connection 6.72 10.46 9.70 9.12 8.42 10.85 8.48 7.80 8.95 Without D 5.73 10.19 8.79 9.10 7.55 10.47 8.12 7.75 8.46 Without LR 5.77 11.92 11.60 11.47 9.02 12.14 11.82 9.87 10.45 Bold indicates the best performance achieved in certain column Table 5 Evaluation on the object completion based on the Chamfer distance trained with L2 distance (multiplied by 103) with the output resolution of 16,384 Method Plane Cabinet Car Chair Lamp Sofa Table Vessel Avg. PCN (Yuan et al. 2018) dataset, Output Resolution = 16,384, L2 metric FoldingNet (Yang et al. 2018b) 3.15 7.94 4.68 9.23 9.23 8.90 6.69 7.33 7.14 FoldingNet + SoftPool++ 3.02 7.86 4.50 9.07 9.03 8.69 6.49 7.31 7.00 AtlasNet (Groueix et al. 2018) 1.75 5.10 3.24 5.23 6.34 5.99 4.36 4.18 4.52 TopNet (Tchapmi et al. 2019) 2.15 5.62 3.51 6.35 7.50 6.95 4.78 4.36 5.15 NSFA (Zhang et al. 2020b) 1.75 5.31 3.43 5.01 4.73 6.41 4.00 3.56 4.28 PCN (Yuan et al. 2018) 1.40 4.45 2.45 4.84 6.24 5.13 3.57 4.06 4.02 PCN + SoftPool++ 1.10 4.37 2.40 4.81 5.67 4.70 3.41 3.82 3.79 MSN (Liu et al. 2020) 1.54 7.25 4.71 4.54 6.48 5.89 3.80 3.85 4.76 MSN + SoftPool++ 1.13 7.24 4.64 4.21 6.28 5.83 3.57 3.45 4.54 PF-Net (Huang et al. 2020) 1.55 4.43 3.12 3.96 4.21 5.87 3.35 3.89 3.80 CRN (Wang et al. 2020a) 1.46 4.21 2.97 3.24 5.16 5.01 3.99 3.96 3.75 GRNet (Xie et al. 2020b) 1.53 3.62 2.75 2.95 2.65 3.61 2.55 2.12 2.72 SoftPoolNet (Wang et al. 2020b) 1.63 3.79 3.05 3.27 2.95 3.78 2.59 2.25 2.91 Ours 1.27 3.43 2.65 2.98 2.67 3.38 2.27 1.85 2.55 Without skip-connection 1.53 3.75 2.96 3.15 2.90 3.59 2.35 1.96 2.77 Without D 1.37 3.59 2.78 3.13 2.74 3.51 2.43 2.03 2.69 Without LR 1.42 4.74 2.91 4.63 3.66 4.14 2.83 2.29 3.33 Bold indicates the best performance achieved in certain column 123 \fInternational Journal of Computer Vision Table 6 Evaluation on the object completion based on the F-Score@1% trained with L2 Chamfer distance and the output resolution of 16,384 Method Plane Cabinet Car Chair Lamp Sofa Table Vessel Avg. PCN (Yuan et al. 2018) dataset, output resolution = 16,384, F-Score@1% FoldingNet (Yang et al. 2018b) 0.642 0.237 0.382 0.236 0.219 0.197 0.361 0.299 0.322 FoldingNet + SoftPool++ 0.687 0.347 0.455 0.237 0.236 0.257 0.377 0.428 0.378 AtlasNet (Groueix et al. 2018) 0.845 0.552 0.630 0.552 0.565 0.500 0.660 0.624 0.616 TopNet (Tchapmi et al. 2019) 0.771 0.404 0.544 0.413 0.408 0.350 0.572 0.560 0.503 PCN (Yuan et al. 2018) 0.881 0.651 0.725 0.625 0.638 0.581 0.765 0.697 0.695 PCN + SoftPool++ 0.880 0.671 0.777 0.723 0.755 0.578 0.819 0.661 0.733 MSN (Liu et al. 2020) 0.885 0.644 0.665 0.657 0.699 0.604 0.782 0.708 0.705 MSN + SoftPool++ 0.903 0.727 0.721 0.736 0.718 0.633 0.796 0.750 0.748 GRNet (Xie et al. 2020b) 0.843 0.618 0.682 0.673 0.761 0.605 0.751 0.750 0.708 SoftPoolNet (Wang et al. 2020b) 0.831 0.605 0.685 0.649 0.715 0.601 0.746 0.721 0.694 Ours 0.867 0.693 0.706 0.712 0.794 0.689 0.825 0.804 0.761 Without skip-connection 0.836 0.658 0.670 0.671 0.753 0.652 0.753 0.791 0.723 Without D 0.843 0.672 0.700 0.686 0.767 0.653 0.768 0.796 0.736 Without LR 0.824 0.634 0.593 0.670 0.695 0.575 0.686 0.755 0.679 Bold indicates the best performance achieved in certain column objects provided by the Completion3D (Tchapmi et al. 2019) and MVP (Pan et al. 2021), respectively. Here, the average FScore with SoftPool++ outperforms the other methods. The tables also validate the bene\ufb01t of our individual contributions in the overall result. 
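The F-Score@1% used in these tables counts a predicted point as correct when it lies within a threshold, set to 1% of the side length of the reconstructed volume, of some ground-truth point, and symmetrically for recall. A hedged brute-force sketch of this metric (our own illustration, not the benchmark code) is given below.

```python
import torch

def f_score(pred, gt, threshold):
    """F-Score at a distance threshold (brute-force sketch).

    Precision: fraction of predicted points within `threshold` of the ground truth.
    Recall:    fraction of ground-truth points within `threshold` of the prediction.
    """
    d = torch.cdist(pred, gt)
    precision = (d.min(dim=1).values < threshold).float().mean()
    recall = (d.min(dim=0).values < threshold).float().mean()
    if precision + recall == 0:
        return torch.tensor(0.0)
    return 2 * precision * recall / (precision + recall)

pred, gt = torch.rand(2048, 3), torch.rand(2048, 3)
side = (gt.max(dim=0).values - gt.min(dim=0).values).max()   # side length of the bounding volume
print(f_score(pred, gt, threshold=0.01 * side).item())
```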
In addition, Table 7 shows that, by applying our SoftPool++ module on the variational coarse sub-architecture of VRCNet (Pan et al. 2021), the average performance of the \ufb01ne reconstruction reached the state-ofthe-art with the improvement from 78.1 to 79.9%. Advantage over SoftPoolNet Wang et al. (2020b). Compared to SoftPoolNet (Wang et al. 2020b), our contributions in the proposed SoftPool++ features improve (Wang et al. 2020b) by 0.83 \u00d7 10\u22124 in the L1 Chamfer distance and 1.71 \u00d7 10\u22124 for L2. Strikingly, even without the skip connections, we have already outperformed SoftPoolNet (Wang et al. 2020b). This then demonstrate the strength of the proposed SoftPool++ over (Wang et al. 2020b). Moreover, the results from high resolution reconstruction also validates our conclusion when evaluating against SoftPoolNet (Wang et al. 2020b). With or without the skip connections, our SoftPool++ performs better than (Wang et al. 2020b). 6.2 Qualitative Evaluation Similar to Sect. 6.1, the objects in this section are also trained from and evaluated on ShapeNet (Chang et al. 2015). However, for the qualitative results in Fig. 9, we show the results in the original points resolution speci\ufb01ed in their respective methods. Comparison against PointNet (Qi et al. 2017a) feature. From Fig. 9, the max-pooling operation from the PointNet (Qi et al. 2017a) feature is embedded in FoldingNet (Yang et al. 2018b), PCN (Yuan et al. 2018) and MSN (Liu et al. 2020). We noticed that these methods are either over-smoothens the reconstruction or start introducing noise. On one hand, FoldingNet (Yang et al. 2018b) and PCN (Yuan et al. 2018) smoothens out the reconstruction so that the \ufb01ne details such as the armrest of the chair are no longer visible and the wheels of the car are no longer separated. On the other, MSN (Liu et al. 2020) tries to reconstruct the \ufb01ner details but produces a noisy point cloud. Contrary to these methods, we achieve a smoother surface reconstruction with with visible geometric details of the object like the armrest and the wheels. Advantage of Skip Connections We also explore the combination of 3D-GCN (Lin et al. 2020) and TreeGAN (Shu et al. 2019) that uses graph convolutions in an encoder\u2013decoder architecture. Its latent feature is presented as a vector with a lengthof1024.Withouttheskipconnection,severalinconsistencies emerge. For instance, the shape of the boat is slimmer than the ground truth while one dimension of the bookshelf is thicker. These information are part of the input but are not propagated to the output. Among these methods, GRNet (Xie et al. 2020b) achieves similar quantitative results compared to our approach in Table 5. They also build skip connections between their encoder and decoder. However, as input to the architecture, they \ufb01rst voxelize the input point cloud. After going through the encoder\u2013decoder, they convert the 3D grid back to point cloud. Due to the discretization of the point cloud, this affects the results of GRNet (Xie et al. 2020b). It fails to reconstruct 123 \fInternational Journal of Computer Vision Table 7 Evaluation on the object completion based on the F-Score@1% trained with L2 Chamfer distance and the output resolution of 16,384 Method Plane Cabinet Car Chair Lamp Sofa Table Vessel Avg. MVP (Pan et al. 2021) dataset, Output Resolution = 16,384, F-Score@1% TopNet (Tchapmi et al. 2019) 0.789 0.621 0.612 0.443 0.387 0.506 0.639 0.609 0.576 PCN (Yuan et al. 
2018) 0.816 0.614 0.686 0.517 0.455 0.552 0.646 0.628 0.614 PCN + SoftPool++ 0.853 0.643 0.729 0.563 0.472 0.566 0.670 0.643 0.642 MSN (Liu et al. 2020) 0.879 0.692 0.693 0.599 0.604 0.627 0.730 0.696 0.690 MSN + SoftPool++ 0.914 0.717 0.727 0.620 0.638 0.649 0.765 0.726 0.719 GRNet (Xie et al. 2020b) 0.853 0.578 0.646 0.635 0.710 0.580 0.690 0.723 0.677 ECG (Pan 2020) 0.906 0.680 0.716 0.683 0.734 0.651 0.766 0.753 0.736 NSFA (Zhang et al. 2020b) 0.903 0.694 0.721 0.737 0.783 0.705 0.817 0.799 0.770 CRN (Wang et al. 2020a) 0.898 0.688 0.725 0.670 0.681 0.641 0.748 0.742 0.724 VRCNet (Pan et al. 2021) 0.928 0.721 0.756 0.743 0.789 0.696 0.813 0.800 0.781 VRCNet + SoftPool++ 0.947 0.745 0.768 0.759 0.810 0.720 0.829 0.813 0.799 PoinTr (Yu et al. 2021) 0.888 0.681 0.716 0.703 0.749 0.656 0.773 0.760 0.741 SoftPoolNet (Wang et al. 2020b) 0.843 0.568 0.636 0.623 0.698 0.568 0.680 0.710 0.666 Ours 0.862 0.622 0.704 0.695 0.783 0.649 0.776 0.778 0.734 Without skip-connection 0.862 0.555 0.648 0.652 0.716 0.603 0.703 0.719 0.682 Without D 0.856 0.624 0.666 0.664 0.732 0.622 0.738 0.770 0.709 Without LR 0.822 0.488 0.602 0.573 0.661 0.500 0.667 0.696 0.626 Bold indicates the best performance achieved in certain column thin structures like the antenna on the boat and the vertical stabilizers of the jet. In addition, it tried to \ufb01ll up the hole in the box which should have remained empty. In contrast, our method that processes directly on the point cloud can handle these cases. Improvements from SoftPoolNet (Wang et al. 2020b). Moreover, we compared the proposed method against the previous SoftPoolNet (Wang et al. 2020b) to reveal the advantages of our novel approach. From Fig. 9, while the previous method fails to reconstruct the four corners of the box and the wheels of the jet, the new method is more consistent to the ground truth. Overall, our novel approach reconstructs sharper geometries with less noise and less holes. Other Methods There have been some trend to re-purpose method that were originally tailored for semantic segmentation such as PointCNN (Li et al. 2018) to train for object completion. Since they both use point clouds, the intuition is to use the local convolutions from Li et al. (2018) to upsample the point cloud from its partial scan to its completed structure. Unfortunately, these methods fails to reconstruct the objects because it is not the intended purpose of the architecture\u2014 in semantic segmentation, their input and output point cloud remains the same. 6.3 Classification on ModelNet and PartNet In addition to shape completion, we also evaluate our approach in terms of classi\ufb01cation on the ModelNet10 (Zhirong et al. 2015), ModelNet40 (Zhirong et al. 2015) and PartNet (Mo et al. 2019) datasets. Note that ModelNet40 contains 12,311 CAD models classi\ufb01ed into 40 categories while PartNet contains 26,671 models with 24 categories. Similartotheotherapproachessuchas3D-GAN(Wuetal. 2016), RS-DGCNN (Sauder et al. 2019), VConv-DAE (Sharmaet al. 2016), FoldingNet (Yang et al. 2018b) and KCNet (Shen et al. 2018), we also implemented a self-supervised training to extract features from the input point cloud then a supervised training to train a linear Support Vector Machine (SVM) (Cortes and Vapnik 1995) to predict the categorical classi\ufb01cation. The former relies on the 57,448 ShapeNet models (Chang et al. 2015) as its training dataset while the latter relies on ModelNet (Zhirong et al. 2015) and PartNet (Mo et al. 2019). 
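The classification protocol described above can be sketched as follows: latent features are extracted with the frozen, self-supervised encoder and a linear SVM is trained on top of them. The `encode` callable below is a hypothetical placeholder standing in for the encoder, and the random data only illustrates the shapes involved; this is not the evaluation script behind Table 8.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def extract_features(point_clouds, encode):
    """Run the frozen encoder on every shape and stack the latent vectors."""
    return np.stack([encode(pc) for pc in point_clouds])

# Hypothetical stand-in mapping an (N, 3) point cloud to a fixed-size vector; NOT the real encoder.
encode = lambda pc: np.concatenate([pc.mean(axis=0), pc.std(axis=0), pc.min(axis=0), pc.max(axis=0)])

train_pcs = [np.random.rand(2048, 3) for _ in range(100)]
test_pcs = [np.random.rand(2048, 3) for _ in range(20)]
train_y = np.random.randint(0, 40, size=100)      # e.g. the 40 ModelNet40 categories
test_y = np.random.randint(0, 40, size=20)

svm = LinearSVC(C=1.0, max_iter=10000)
svm.fit(extract_features(train_pcs, encode), train_y)
pred = svm.predict(extract_features(test_pcs, encode))
print("accuracy:", accuracy_score(test_y, pred))
```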
It is noteworthy to mention that there is a signi\ufb01cant difference from RS-DGCNN (Sauder et al. 2019) in the details of the self-supervised training. On one hand, our method randomly subsamples the point cloud while, on the other, Sauder et al. (2019) includes an additional data augmentation step that randomly decomposes the 3D input structure into different parts then repositions these parts by translation. Since we did not include the additional augmentation from Sauder et al. (2019), our evaluation is a fair comparison against other methods. The evaluation in Table 8 reports that our model outperforms the accuracy of RS-DGCNN (Sauder et al. 2019) by 4.11% on the ModelNet40 dataset, a sign of the higher descriptiveness in terms of categorical information. The 123 \fInternational Journal of Computer Vision (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) Fig. 9 Qualitative results on the ShapeNet (Chang et al. 2015) dataset. Note that the three results from our method corresponds to different parts of the architecture as explained in Fig. 5. Here, (k) represents our \ufb01nal reconstruction 123 \fInternational Journal of Computer Vision Table 8 Object classi\ufb01cation accuracy on ModelNet40 (Zhirong et al. 2015), ModelNet40 (Zhirong et al. 2015) and PartNet (Mo et al. 2019) datasets Method ModelNet40 (%) ModelNet10 (%) PartNet (%) VConv-DAE (Sharmaet al. 2016) 75.50 80.50 \u2013 3D-GAN (Wu et al. 2016) 83.30 91.00 74.23 Latent-GAN 85.70 95.30 \u2013 FoldingNet (Yang et al. 2018b) 88.40 94.40 \u2013 VIP-GAN (Han et al. 2019) 90.19 92.18 \u2013 RS-PointNet (Sauder et al. 2019) 87.31 91.61 76.95 RS-DGCNN (Sauder et al. 2019) 90.64 94.52 \u2013 KCNet (Shen et al. 2018) 91.0 94.4 \u2013 SoftPoolNet (Wang et al. 2020b) 92.28 96.14 84.32 Ours 94.75 96.99 87.25 Without skip-connection 93.17 96.34 85.26 Without Linter 91.23 95.11 83.07 Without Lintra 84.98 91.35 81.91 Without Linter, Lintra 84.21 91.77 80.55 Without Lboundary 89.70 94.14 82.39 Without Lpreserve 87.84 93.15 81.32 Without LR 79.22 85.40 76.48 Bold indicates the best performance achieved in certain column improvement of 2.47% from our approach compared to SoftPoolNet (Wang et al. 2020b) is also obvious, proving that the proposed SoftPool++ feature and skip-connection together are more advantageous for classi\ufb01cation. Similar results are also obtained on ModelNet10 (Zhirong et al. 2015) and PartNet (Mo et al. 2019). 6.4 Efficiency In addition to the evaluation in terms of shape completion and categorical classi\ufb01cation, we also compare in Table 9 the properties of our model such as its memory footprint and inference speed, as well as the type of data being processed. The cost of outperforming SoftPoolNet (Wang et al. 2020b) becomes evident on the memory footprint and the inference time. Compared to SoftPoolNet (Wang et al. 2020b),thememoryfootprintofourmethodisapproximately doubled due the increase in the number of parameters from the multiple feature extraction modules in our architecture. This also triggers a larger inference time than SoftPoolNet (Wang et al. 2020b) from 0.04 to 0.11 seconds. A similar trend is associated to other approaches that divides the point cloud into regions such as AtlasNet (Groueix et al. 2018) and MSN (Liu et al. 2020), i.e.we achieve signi\ufb01cantly higher accuracy in reconstruction but also increase the memory footprint and the inference time. 
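For context on how numbers like those in Table 9 can be obtained, the sketch below measures the parameter memory (assuming float32 weights) and the per-object inference latency of a PyTorch model; the placeholder network is purely illustrative and unrelated to our actual architecture.

```python
import time
import torch

def model_size_mb(model):
    """Parameter memory of a model in megabytes (float32 assumed)."""
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * 4 / (1024 ** 2)

@torch.no_grad()
def inference_time(model, example, repeats=20):
    """Average wall-clock time (seconds) to process a single object."""
    model.eval()
    model(example)                      # warm-up run
    start = time.time()
    for _ in range(repeats):
        model(example)
    return (time.time() - start) / repeats

# Placeholder network standing in for a completion model.
model = torch.nn.Sequential(torch.nn.Linear(3, 256), torch.nn.ReLU(), torch.nn.Linear(256, 3))
example = torch.rand(2048, 3)
print(f"{model_size_mb(model):.1f} MB, {inference_time(model, example):.4f} s per object")
```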
However, if we look at the overall data, we observe that the proposed method at 61.7MB consumes remarkably less memorythantheotherpointcloudapproachessuchasGRNet (Xie et al. 2020b) at 293MB and PointCNN (Li et al. 2018) at 497MB, as well as the volumetric approaches such as 3DEPN (Dai et al. 2017) at 420MB and ForkNet (Wang et al. 2019b) at 362MB. An important reason why their models are so large in memory usage is that 3D convolutions are applied in multiple layers of their architectures, while our approach is mainly composed of 2D convolutions only. Among those approaches with large memory consumption, GRNet (Xie et al. 2020b) is one of the top performers in point cloud completion. Since their architecture relies on volumetric grids where they convert the input point cloud to voxel grid then convert back to a point cloud, this affects not only their memory footprint but also their inference time, which is 8 times higher than ours. Compared to approaches composed mainly of MLPs, our model reports a comparable size to PCN (Yuan et al. 2018) while having a faster inference time than MSN (Liu et al. 2020). The reason is that although our 2D convolution kernels introduces a additional dimensions, the newly added dimension Nk of 32 is comparably much smaller than the feature dimension N f of 256 at which MLPs operates. Notably, approaches based on KNN search such as PointCNN (Li et al. 2018) and 3D-GCN (Lin et al. 2020) usually take much longer for inference. 7 Ablation Study Based on the evaluation from ShapeNet (Chang et al. 2015), we further analyze our proposed method\u2019s behavior through an ablation study. In this section, we demonstrate the advantage of SoftPool++ over PontNet; expound on the claims of 123 \fInternational Journal of Computer Vision Table 9 Overview of different object completion methods. Note that the inference time is represented by the amount of time to conduct inference on a single object Size Inference Core Data With Method (MB) Time (s) Operator Type KNN 3D-EPN (Dai et al. 2017) 420.0 \u2013 3D Conv Voxels No ForkNet (Wang et al. 2019b) 362.0 \u2013 3D Conv Voxels No GRNet (Xie et al. 2020b) 293.0 0.88 3D Conv Points No PointCNN (Li et al. 2018) 497.0 1.20 3D Conv Points Yes DeepSDF (Park et al. 2018) 7.4 9.72 MLP SDF No FoldingNet (Yang et al. 2018b) 19.2 0.05 MLP Points No AtlasNet (Groueix et al. 2018) 2.0 0.32 MLP Points No PCN (Yuan et al. 2018) 54.8 0.11 MLP Points No MSN (Liu et al. 2020) 12.0 0.21 MLP Points No 3D-GCN (Lin et al. 2020) (coder) 2.1 0.82 2D Conv Points Yes SoftPoolNet (Wang et al. 2020b) 37.2 0.04 2D Conv Points No Ours 61.7 0.11 2D Conv Points No our loss function; investigate the value of the skip connection with feature transform in our architecture; and; delve deeper on what happens in the SoftPool++ module. 7.1 Replacing PointNet with SoftPool++ In addition to the comparison in Table 5, we also evaluate the results by replacing the latent features in other point cloud completion approaches [i.e.FoldingNet (Yang et al. 2018b), MSN (Liu et al. 2020), and PCN (Yuan et al. 2018)] with our SoftPool++ features, while keeping their decoders unchanged. In this way, we have a one-to-one comparison of PointNet and SoftPool++ features. Since these works depend on a PointNet features (having a dimensionality of 1024), we also build up our SoftPool++ features with the same size. Remarkably, the use of SoftPool++ features improves performance in all tested methods, i.e.the performance of FoldingNet (Yang et al. 
2018b), PCN (Yuanetal.2018)andMSN(Liuetal.2020)improvesrespectively by 0.14 \u00d7 10\u22123, 0.23 \u00d7 10\u22123 and 0.22 \u00d7 10\u22123. 7.2 Loss Functions Tables 3 and 8 include an ablation study that investigates the effects of the individual loss functions from Sect. 5. For both experiments, we notice that all loss functions are critical to achieve state-of-the-art results. Note that we have shown in Fig. 6 and cabinet completion in Fig. 7 to demonstrate the contributions of Lboundary and Lpreserve in the reconstruction. Lboundary in other methods. An interesting idea is the capacity of Lboundary to be integrated in other existing methods that join multiple deformed 2D patches together to form the \ufb01nal output. Since the patches in AtlasNet (Groueix et al. 2018), PCN (Yuan et al. 2018) and MSN (Liu et al. 2020) are frequently overlapping nearby patches, we tried to integrate Fig. 10 Object completion results with and without the in\ufb02uence of the skip-connection Lboundary into their loss functions. Tables 4 and 3 evaluate this idea and prove that this activation helps FoldingNet (Yang et al. 2018b), PCN (Yuan et al. 2018) and AtlasNet (Groueix et al. 2018) perform better, improving the Chamfer distance with at least 1\u00d710\u22124 on the resolution of 2048 and 1\u00d710\u22123 on resolution of 16,384. 7.3 Skip Connection with Feature Transform One of the key contributions in this paper is the introduction of skip connections with feature transforms on point cloud. Our ablation study in Tables 3 and 2 also includes the numerical advantage of having the skip connection in our architecture, improving the Chamfer distance by 0.65\u00d710\u22124 in L1 and 1.27 \u00d7 10\u22124 in L2. In addition to the numerical advantage, we also interpret these values through some examples in Fig. 10 where we reconstruct lamps. Without the skip connection, the model recursively simpli\ufb01es the given partial scan until it reaches 123 \fInternational Journal of Computer Vision Fig. 11 Visualization of the \ufb01rst row of F on the \ufb01rst SoftPool++ module in our architecture the latent feature. Due to the oversimpli\ufb01cation, the output then builds the closest generic shape of the lamp. Contrary to that, with the skip connection, the model preserves the input structure and incorporates the given partial scan into the \ufb01nal reconstruction. In effect, the result is closer to the ground truth. We also perform an ablation study on the regularization LR of the feature transform R in Tables 4, 5, 3 and 2. Compared to our complete framework, the results trained without the skip-connection drops by 0.64 \u00d7 10\u22124. However, when trained with the skip-connection but without the regularization of R, the results drops by 2.14 \u00d7 10\u22124 which is signi\ufb01cantly larger. Therefore, it is noteworthy to mention that training with skip-connection but without the regularization performs worse than removing the skip-connection altogether. This clearly shows the advantage of the regularization term on the feature transform. 7.4 Activations from the SoftPool++ Features Giventheinputpointcloud,weexplorehowSoftPool++sorts the points on the \ufb01rst feature extraction module in the architecture. For this experiment, we visualize the points based on the value of the \ufb01rst column in F which is the result of MLP as shown in Fig. 2. Therefore, Fig. 11 highlights the activations associated to this feature. 
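The visualization of Fig. 11 can be reproduced in spirit by coloring every input point with the value of one column of the MLP output F; the following matplotlib sketch is our own illustration with an untrained placeholder MLP, not the figure-generation code.

```python
import torch
import matplotlib.pyplot as plt

def plot_activation(points, mlp, column=0):
    """Color every input point by one column of the MLP output F (as in Fig. 11)."""
    with torch.no_grad():
        activation = torch.softmax(mlp(points), dim=1)[:, column].numpy()
    xyz = points.numpy()
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], c=activation, cmap="viridis", s=2)
    fig.colorbar(sc, label="activation of column %d" % column)
    plt.show()

mlp = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 256))
plot_activation(torch.rand(2048, 3), mlp)
```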
Noticeably, due to MLP, the points can undergo much more than just a linear transformation of its absolute coordinates. Continuing our analysis, we move further into examining how the truncation sizes (Ns, Nr) and the output dimension N f in\ufb02uence the completion. Table 10 summarizes this evaluation on ShapeNet (Chang et al. 2015) as we vary these values on the second SoftPool++ module in our architecture. As described in Table 1, since our SoftPool++ feature is \ufb01xed with 256 rows, we then set Nr \u00d7Ns = 256. Note that the next ablation study focuses on changing the number of rows by Table 10 In\ufb02uence of N f and (Ns, Nr) on the L2 Chamfer distance (multiplied by 103), evaluated on the output resolution of 2048 points Ns Nr N f 32 64 128 256 512 1 256 17.33 16.50 13.45 11.24 10.39 2 128 17.28 17.06 15.22 14.62 13.10 4 64 16.32 15.66 13.65 11.29 11.31 8 32 17.55 11.18 10.27 9.59 9.58 16 16 17.97 11.26 10.21 9.60 9.59 32 8 17.31 11.89 10.32 9.59 9.58 Bold indicates the best performance achieved in certain column Table 11 In\ufb02uence of Ns and Nr on the L2 Chamfer distance (multiplied by 103), evaluated on the output resolution of 2048 points Ns Nr 8 16 32 64 128 256 1 \u2013 \u2013 14.99 14.82 14.64 11.24 2 \u2013 14.99 14.97 14.73 14.62 11.26 4 14.79 12.85 12.27 11.29 11.08 9.91 8 11.56 10.62 9.59 9.59 9.62 9.64 16 10.07 9.59 9.62 9.62 9.61 9.63 32 9.60 9.61 9.61 9.62 9.61 9.61 Here, N f is set to 256 Bold indicates the best performance achieved in certain column independently setting Nr and Ns. For the ease of training and evaluation for all (Ns, Nr) and N f , we do not apply discriminative training D for Table 10. The table indicates that we reach the minimum Chamfer distance as soon as Ns reaches 8, Nr reaches 32 and N f reaches 256. After then, only small improvements of around 0.01 \u00d7 10\u22123 are attained. Therefore, we select Ns = 8, Nr = 32 and N f = 256 so that there are less parameters in the model to train which consequently lead to less memory footprint. The next ablation study alleviates the constraint of having a \ufb01xed latent feature dimension where we set Nr \u00d7 Ns = 256 in Table 10. In Table 11, we consider different values of Nr and Ns while setting N f to 256, where we observe that that the error plateaus when Ns is 8 and Nr is 32. Note that these values matches the optimum values from Table 10 and validates the advantage of truncation. Considering the numerical advantages of Ns, we also explore it visually while keeping N f and Nr constant to 256 and 32, respectively. Similar to Fig. 11, Fig. 12 plots the points from the input point cloud that are truncated by Ns. By increasing Ns from 4 to 16, the resulting feature also increases the amount of structures from the plane. For instance, the wings become more and more visible on the \ufb01gure. This then raises the question of how much information from the partial scans does the network need to reconstruct 123 \fInternational Journal of Computer Vision Fig. 12 Visualization of the truncated points on the input point cloud with different values of Ns the object. Evidently, this question is answered by our ablation study in Table 10 where we found the optimum value of Ns, i.e.8. Comparisons in Fig. 12 shows that larger values of Ns does not further add the points on the body of the plane which is a common part for plane category. 
8" + }, + { + "url": "http://arxiv.org/abs/2203.16600v1", + "title": "Learning Local Displacements for Point Cloud Completion", + "abstract": "We propose a novel approach aimed at object and semantic scene completion\nfrom a partial scan represented as a 3D point cloud. Our architecture relies on\nthree novel layers that are used successively within an encoder-decoder\nstructure and specifically developed for the task at hand. The first one\ncarries out feature extraction by matching the point features to a set of\npre-trained local descriptors. Then, to avoid losing individual descriptors as\npart of standard operations such as max-pooling, we propose an alternative\nneighbor-pooling operation that relies on adopting the feature vectors with the\nhighest activations. Finally, up-sampling in the decoder modifies our feature\nextraction in order to increase the output dimension. While this model is\nalready able to achieve competitive results with the state of the art, we\nfurther propose a way to increase the versatility of our approach to process\npoint clouds. To this aim, we introduce a second model that assembles our\nlayers within a transformer architecture. We evaluate both architectures on\nobject and indoor scene completion tasks, achieving state-of-the-art\nperformance.", + "authors": "Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari", + "published": "2022-03-30", + "updated": "2022-03-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction Understanding the entire 3D space is essential for both humans and machines to understand how to safely navigate an environment or how to interact with the objects around them. However, when we capture the 3D structure of an object or scene from a certain viewpoint, a large portion of the whole geometry is typically missing due to self-occlusion and/or occlusion from its surrounding. To solve this problem, geometric completion of scenes [2, 27, 32] and objects [16,20,39,44,45] has emerged as a task that takes on a 2.5D/3D observation and fills out the occluded regions, as illustrated in Fig. 1. There are multiple ways to represent 3D shapes. Point cloud [3,6], volumetric grid [8,27], mesh [11] and implicit surfaces [18,21,40] are among the most common data formats. These representations are used for most 3D-related computer vision tasks such as segmentation, classification and completion. For what concerns geometric completion, Input: Partial Scan Output: Completion of our method Clearly reconstructs the rear-view mirror! Figure 1. From the input partial scan to our object completion, we visualize the amount of detail in our reconstruction. most works are focused on either point cloud or volumetric data. Among them, the characteristic of having an explicitly defined local neighbourhood makes volumetric data easier to process with 3D convolutions [7,41,42]. One drawback introduced by the predefined local neighborhood is the inaccuracy due to the constant resolution of the voxels, meaning that one voxel can represent several small structures. On the other hand, point clouds have the advantage of not limiting the local resolution, although they come with their own sets of drawbacks. Mainly, there are two problems in processing point clouds: the undefined local neighborhood and unorganized feature map. 
Aiming at solving these issues, PointNet++ [23], PMP-Net [35], PointConv [37] and PointCNN [13] employ k-nearest neighbor search to define a local neighborhood, while PointNet [22] and SoftPoolNet [33] adopt the pooling operation to achieve permutation invariant features. Notably, point cloud segmentation and classification were further improved by involving k-nearest neighbor search to form local features in PointNet++ [23] compared to global features in PointNet [22]. Several variations of PointNet [22] also succeeded in improving point cloud completion as demonstrated in FoldingNet [43], PCN [45], MSN [16]. Other methods such as SoftPoolNet [33] and GRNet [39] explicitly present local neighbourhood in sorted feature map and voxel space, respectively. This paper investigates grouping local features to improve the point cloud completion of objects and scenes. We apply these operation in encoder-decoder architectures 1 \fwhich iteratively uses a feature extraction operation with the help of a set of displacement vectors as part of our parametric model. In addition, we also introduce a new pooling mechanism called neighbor-pooling, aimed at downsampling the data in the encoder while, at the same time, preserving individual feature descriptors. Finally, we propose a new loss function that gradually reconstructs the target from the observable to the occluded regions. The proposed approach is evaluated on both object completion dataset with ShapeNet [3], and semantic scene completion on NYU [25] and CompleteScanNet [36], attaining significant improvements producing high resolutions reconstruction with fine-grained details. 2. Related works This section focuses on the three most related fields \u2013 point cloud completion, point cloud features and semantic scene completion. Point cloud completion. Given the partial scan of an object similar to Fig. 1, 3D completion aims at estimating the missing shape. In most cases, the missing region is due to self-occlusion since the partial scan is captured from a single view of the object. Particularly for point cloud, FoldingNet [43] and AtlasNet [11] are among the first works to propose an object completion based on PointNet [22] features by deforming one or more 2D grids into the desired shape. Then, PCN [45] extended their work by deforming a collection of much smaller 2D grids in order to reconstruct finer structures. Through encoder-decoder architectures, ASFM-Net [38] and VRCNet [20] match the encoded latent feature with a completion shape prior, which produce good coarse completion results. To preserve the observed geometry from the partial scan for the fine reconstruction, MSN [16] and VRCNet [20] bypass the observed geometries by using either the minimum density sampling (MDS) or the farthest point sampling (FPS) from the observed surface and building skip connections. By embedding a volumetric sub-architecture, GRNet [39] preserves the discretized input geometries with the volumetric U-connection without sampling in the point cloud space. In more recent works, PMP-Net [35] gradually reconstructs the entire object from the observed to the nearest occluded regions. Also focusing on only predicting the occluded geometries, PoinTr [44] is among the first few transformer methods targeted on point cloud completion by translating the partial scan proxies into a set of occluded proxies to further refine the reconstruction. Point cloud features. Notably, a large amount of work in object completion [11,16,33,35,39,43,45] rely on PointNet features [22]. 
The main advantage of [22] is its capacity to be permutation invariant through max-pooling. This is a crucial characteristic for the input point cloud because its data is unstructured. However, the max-pooling operation disassembles the point-wise features and ignores the local neighborhood in 3D space. This motivated SoftPoolNet [33] to solve this problem by sorting the feature vectors based on the activation instead of taking the maximum values for each element. In effect, they were able to concatenate the features to form a 2D matrix so that a traditional 2D convolution from CNN can be applied. Apart from building feature representation through pooling operations, PointNet++ [23] samples the local subset of points with the farthest point sampling (FPS) then feeds it into PointNet [22]. Based on this feature, SA-Net [34] then groups the features in different resolutions with KNN for further processing, while PMP-Net [35] uses PointNet++ features to identify the direction to which the object should be reconstructed. PoinTr [44] also solves the permutational invariant problem without pooling by adding the positional coding of the input points into a transformer. Semantic scene completion. All the point cloud completion are designed to reconstruct a single object. Extending these methods from objects to scenes is difficult because of the difference in size and content. When we tried to train these methods for objects, we noticed that the level of noise is significantly increased such that most objects in the scene are unrecognizable. Evidently, for semantic scene completion, the objective is not only to build the full reconstruction of the scene but also to semantically label each component. On the other hand, there have been a number of methods for semantic scene completion based on voxel grids that was initiated by SSCNet [27]. Using a similar volumetric data with 3D convolutions [7, 41, 42], VVNet [12] convolves on the 3D volumes which are back-projected from the depth images, revealing the camera view instead of a TSDF volume. Later works such as 3D-RecGAN [42] and ForkNet [32] use discriminators to optimize the convolutional encoder and decoder during training. Since 3D convolutions are heavy in terms of memory consumption especially when the input is presented in high resolution, SketchSSC [4] learns the 3D boundary of all objects in the scene to quickly estimate the resolution of the invariant features. Although there are quite many methods targeting on volumetric semantic scene completion, there are still no related works proposed explicitly for point cloud semantic scene completion which we achieved in this paper. 3. Operators Whether reconstructing objects or scenes from a single depth image, the objective is to process the given point 2 \fcloud of the partial scan Pin to reconstruct the complete structure Pout. Most deep learning solutions [16,20,33,43, 45] solve this problem by building an encoder-decoder architecture. The encoder takes the input point cloud to iteratively down-sample it into its latent feature. Then, the decoder iteratively up-sample the latent feature to reconstruct the object or scene. In this section, we illustrate our novel down-sampling and up-sampling operations that cater to point cloud completion. Thereafter, in the following sections, we use our operators as building blocks to assemble two different encoder-decoder architectures that perform object completion and semantic scene completion. We also discuss the associated loss functions. 3.1. 
Down-sampling operation
To formalize the down-sampling operation, we denote the input as the set of feature vectors $\mathcal{F}_\text{in} = \{\mathbf{f}_i\}_{i=1}^{|\mathcal{F}_\text{in}|}$, where $\mathbf{f}_i$ is a feature vector and $|\cdot|$ is the number of elements in the set. Note that, in the first layer of the encoder, $\mathcal{F}_\text{in}$ is set to the coordinates of the input point cloud. We introduce a novel down-sampling operation inspired by the Iterative Closest Point (ICP) algorithm [1, 5]. Taking an arbitrary anchor $\mathbf{f}$ from $\mathcal{F}_\text{in}$, we start by defining a vector $\delta \in \mathbb{R}^{D_\text{in}}$. From the trainable variable $\delta$, we find the feature closest to $\mathbf{f} + \delta$ and compute the distance. This is formally written as the function

$d(\mathbf{f}, \delta) = \min_{\forall \tilde{\mathbf{f}} \in \mathcal{F}_\text{in}} \| (\mathbf{f} + \delta) - \tilde{\mathbf{f}} \|$   (1)

where $\delta$ represents a displacement vector from $\mathbf{f}$. Multiple displacement vectors are used to describe the local geometry, each with a weight $\sigma \in \mathbb{R}$. We assign them to the set $\{(\delta_i, \sigma_i)\}_{i=1}^{s}$ and aggregate them with the weighted function

$g(\mathbf{f}) = \sum_{i=1}^{s} \sigma_i \tanh \frac{\alpha}{d(\mathbf{f}, \delta_i) + \beta}$   (2)

where the constants $\alpha$ and $\beta$ are added for numerical stability. Here, the hyperbolic tangent in $g(\mathbf{f})$ produces values closer to 1 when the distance $d(\cdot)$ is small and closer to 0 when the distance is large. In practice, we can speed up (1) with a k-nearest neighbor search around each anchor. A simple example of this operation is depicted in Fig. 2, which illustrates the operation in the first layer where we process the point cloud, so that a feature in $\mathcal{F}_\text{in}$ can be plotted geometrically with respect to $\{(\delta_i, \sigma_i)\}_{i=1}^{s}$. Furthermore, to enforce the influence of the anchor in this operation, we also introduce the function

$h(\mathbf{f}) = \rho \cdot \mathbf{f}$   (3)

that projects $\mathbf{f}$ onto $\rho \in \mathbb{R}^{D_\text{in}}$, which is a trainable parameter. Note that both functions $g(\cdot)$ and $h(\cdot)$ produce a scalar value.

Figure 2. (a) k-nearest neighbors in reference to an anchor $\mathbf{f}$; (b) displacement vectors around the anchor $\mathbf{f} + \delta_i$ and the corresponding weights $\sigma_i$; and (c) the closest features $\tilde{\mathbf{f}}$ to $\mathbf{f} + \delta_i$ for all $i$.

Thus, if we aim at building a set of output feature vectors, each with a dimension of $D_\text{out}$, we construct the set as

$\mathcal{F}_\text{out} = \left\{ \left[ g_b(\mathbf{f}_a) + h(\mathbf{f}_a) \right]_{b=1}^{D_\text{out}} \right\}_{a=1}^{|\mathcal{F}_\text{in}|}$   (4)

where a different set of trainable parameters $\{(\delta_i, \sigma_i)\}_{i=1}^{s}$ is assigned to each element, and a different $\rho$ to each output vector. Moreover, the variables $s$ in (2) and $D_\text{out}$ in (4) are hyper-parameters. We refer to this operation as the feature extraction. Note that the proposed down-sampling operation differs from 3D-GCN [15], which relies only on the cosine similarity. While being scale-invariant, and hence suitable for object classification and segmentation, it ignores the metric structure of the local 3D geometry; this makes completion difficult because the original scale of the local geometry is lost.
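To make the feature extraction more concrete, the following is a minimal NumPy sketch of Eqs. (1)-(4) for a single layer. The brute-force distance computation, the tensor layout, and all variable names are our own illustrative assumptions and not the authors' implementation; in practice the inner minimization would be restricted to the k nearest neighbors of each anchor and the loops would be vectorized.

import numpy as np

def feature_extraction(F_in, deltas, sigmas, rhos, alpha=1.0, beta=1e-3):
    """Sketch of the down-sampling feature extraction in Eqs. (1)-(4).

    F_in   : (N, D_in)         input feature vectors (3D coordinates in the first layer)
    deltas : (D_out, s, D_in)  displacement vectors, one set per output dimension
    sigmas : (D_out, s)        weights of the displacement vectors
    rhos   : (D_out, D_in)     projection vectors used by h(.)
    Returns F_out : (N, D_out)
    """
    N, _ = F_in.shape
    D_out, s, _ = deltas.shape
    F_out = np.zeros((N, D_out))
    for a in range(N):                       # anchor f_a
        f = F_in[a]
        for b in range(D_out):
            g = 0.0
            for i in range(s):
                # Eq. (1): distance from the displaced anchor to its closest feature
                d = np.min(np.linalg.norm((f + deltas[b, i]) - F_in, axis=1))
                # Eq. (2): weighted soft indicator of having a feature nearby
                g += sigmas[b, i] * np.tanh(alpha / (d + beta))
            h = rhos[b] @ f                  # Eq. (3): projection of the anchor itself
            F_out[a, b] = g + h              # Eq. (4)
    return F_out

As a usage example, the raw partial scan of shape (2048, 3) would be passed as F_in in the first layer to obtain one descriptor per input point.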
Neighbor pooling. The final step in our down-sampling operation is to reduce the size of $\mathcal{F}_\text{out}$ with pooling. However, unlike Graph Max-Pooling (GMP) [15], which takes the element-wise maximum of the feature across all vectors, we select the subset of feature vectors with the highest activations. Therefore, while GMP disassembles the features as part of its pooling operation, we preserve the feature descriptors from $\mathcal{F}_\text{out}$. From the definition of $\mathcal{F}_\text{out}$ in (4), we base the activation of each vector $\mathbf{f}_a$,

$\mathcal{A}_a = \sum_{b=1}^{D_\text{out}} \tanh |g_b(\mathbf{f}_a)|$   (5)

on the results of $g(\cdot)$ from (2). Thereafter, we keep only the fraction $1/\tau$ of the feature vectors with the highest activations.

3.2. Up-sampling operation
The down-sampling and pooling operations in the encoder reduce the point cloud to a latent vector. In this case, if we directly use the operation in (4), the first layer of the decoder ends up with a single vector since $|\mathcal{F}_\text{in}|$ is one, and consequently all the other layers in the decoder also produce a single vector. To solve this issue, our up-sampling iteratively runs (4) so that, denoting $\mathcal{F}_\text{in}$ as the input to the layer, we build the set of output feature vectors as

$\mathcal{F}_\text{up} = \{\mathcal{F}_\text{out}^u\}_{u=1}^{N_\text{up}} = \left\{ \left[ g_b^u(\mathbf{f}_a) + h_b^u(\mathbf{f}_a) \right]_{b=1}^{D_\text{out}} \right\}_{a=1,\,u=1}^{a=|\mathcal{F}_\text{in}|,\,u=N_\text{up}}$   (6)

which increases the number of vectors by a factor of $N_\text{up}$. As a result, $\mathcal{F}_\text{up}$ is a set of $N_\text{up} \cdot |\mathcal{F}_\text{in}|$ feature vectors. In addition to the list of hyper-parameters in Sec. 3.1, our up-sampling operation also takes $N_\text{up}$ as a hyper-parameter.
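As a rough illustration of how Eq. (5) and Eq. (6) could be realized, here is a hedged NumPy sketch that reuses the feature_extraction routine from the previous snippet. It assumes feature_extraction is adapted to also return its g terms, which Eq. (5) needs; the parameter layout and names are illustrative assumptions.

import numpy as np

def neighbor_pooling(F_out, G, tau=2):
    """Eq. (5): keep the 1/tau fraction of vectors with the highest activations.

    F_out : (N, D_out) feature vectors from Eq. (4)
    G     : (N, D_out) the g_b(f_a) terms, returned separately from h(.)
    """
    activations = np.tanh(np.abs(G)).sum(axis=1)      # A_a in Eq. (5)
    keep = max(1, F_out.shape[0] // tau)
    idx = np.argsort(-activations)[:keep]             # highest activations first
    return F_out[idx]

def up_sampling(F_in, params_per_copy):
    """Eq. (6): run the feature extraction N_up times with independent parameters
    and stack the results, multiplying the number of vectors by N_up.

    params_per_copy : list of N_up (deltas, sigmas, rhos) tuples
    """
    copies = [feature_extraction(F_in, *p) for p in params_per_copy]
    return np.concatenate(copies, axis=0)             # (N_up * |F_in|, D_out)

Applied repeatedly in the decoder, such an up-sampling step grows the single latent vector back toward the 16,384 output points described below.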
4. Encoder-decoder architectures
In order to uncover the strengths of our operators from Sec. 3 (i.e., feature extraction, neighbor pooling and up-sampling), we use them as building blocks to construct two different architectures. The first directly implements our operators to build an encoder-decoder, while the second takes advantage of our operators to improve the transformers derived from PoinTr [44]. We refer the readers to the Supplementary Materials for the detailed parameters of the architectures.

4.1. Direct application
The objective of the first architecture is to establish that building it solely from the proposed operators (with an additional max-pooling) can already be competitive in point cloud completion. We therefore propose an encoder-decoder architecture based on our operators alone, as shown in Fig. 3.

Figure 3. This architecture is composed of the proposed operators to build its encoder and decoder.

The encoder is composed of four alternating layers of feature extraction and neighbor pooling. As the number of points from the input is reduced by 128 times, we use a max-pooling operator to extract a single vector as our latent feature. Taking the latent feature from the encoder, the decoder is then constructed from a series of up-sampling operators, resulting in a fine completion of 16,384 points.

4.2. Transformers
The second architecture aims at showing the versatility of the operators by improving the state-of-the-art PoinTr [44], which uses transformers. We therefore propose a transformer-based architecture derived from [44] and our operators, as summarized in Fig. 4.

Figure 4. This architecture is derived from the transformer backbone, where we use the proposed operators to convert the input 3D points to tokens and to perform the coarse-to-fine strategy.

Before computing the attention mechanisms in the transformer, the partial scan is subsampled due to the memory constraint of the GPU. PoinTr [44] implements Farthest Point Sampling (FPS) to reduce the number of points and an MLP to convert the points to features. Conversely, our architecture applies the proposed operators. Similar to Sec. 4.1, this involves alternating feature extraction and neighbor pooling. Since Fourier features [28] and SIRENs [26] have shown that sinusoidal activations are helpful in representing complex signals and their derivatives in layer-by-layer structures, a positional coding based on the 3D coordinates is then added to the features. In Fig. 4, we refer to this block as points-to-tokens; a simplified sketch of this block is given at the end of this section. Thereafter, we use the geometry-aware transformers from [44], which produce a coarse point cloud. From the coarse point cloud, we then replace their coarse-to-fine strategy with our operators, that is, a series of alternating feature extraction and up-sampling operators, as shown in Fig. 4.

It is worth emphasizing the differences between our architecture and PoinTr [44] and understanding the implications of the changes. The contributions of the points-to-tokens and coarse-to-fine blocks to the overall architecture are illustrated in Fig. 5.

Figure 5. The first row compares the point tokens chosen by Farthest Point Sampling (FPS) in PoinTr [44], Graph Max-Pooling (GMP) [15] in PoinTr [44], and our proposed neighbor pooling in our transformer architecture. These tokens are then fed to the transformer and the coarse-to-fine strategy to produce the reconstructions shown in the second row.

We can observe from this figure that the FPS from PoinTr [44] only finds distant points, while our neighbor pooling sketches the contours of the input point cloud and captures the meaningful structures of the object. Notably, by looking at our sketch alone, we can already identify that the object is a table, contrary to the random points from PoinTr [44]. Moreover, our coarse-to-fine strategy uniformly reconstructs the planar region of the table as well as its base. Later, in Sec. 7, we numerically evaluate these advantages to show that the individual components have their own merits. Since we discussed in Sec. 3.1 the difference of our down-sampling operation against 3D-GCN [15], we were also curious to see the reconstruction in Fig. 5 when the FPS in PoinTr [44] is replaced with the cosine similarity and GMP of [15]. Similar to PoinTr, this combination selects distant points as its tokens, while the table in its final reconstruction is enlarged. In contrast, our tokens are more meaningful and the final results are more accurate.
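The following is a simplified NumPy sketch of how such a points-to-tokens block could be assembled from the operators sketched earlier plus a sinusoidal positional coding. The number of stages, the frequencies, the concatenation of the coding (rather than an additive one), and the activation used for pooling are our own illustrative assumptions, not the released configuration.

import numpy as np

def positional_coding(points, num_freqs=4):
    """Sinusoidal coding of the 3D coordinates."""
    freqs = 2.0 ** np.arange(num_freqs)                    # (F,)
    angles = points[:, None, :] * freqs[None, :, None]     # (N, F, 3)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1).reshape(len(points), -1)

def points_to_tokens(points, stage_params, tau=2):
    """Alternate feature extraction and neighbor pooling, then append the coding.

    stage_params : list of (deltas, sigmas, rhos) tuples, one per stage
    Returns the token matrix fed to the geometry-aware transformer.
    """
    feats, coords = points, points
    for deltas, sigmas, rhos in stage_params:
        feats = feature_extraction(feats, deltas, sigmas, rhos)
        # simplified stand-in for Eq. (5): score vectors by their overall activation
        act = np.tanh(np.abs(feats)).sum(axis=1)
        keep = np.argsort(-act)[: max(1, feats.shape[0] // tau)]
        feats, coords = feats[keep], coords[keep]           # pool features and coordinates together
    return np.concatenate([feats, positional_coding(coords)], axis=1)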
5. Loss functions
Given the input point cloud $\mathcal{P}_\text{in}$ (e.g., from a depth image), the objective of completion is to build the set of points $\mathcal{P}_\text{out}$ that fills up the missing regions in our input data. Since we train our architecture in a supervised manner, we denote $\mathcal{P}_\text{gt}$ as the ground truth.

Completion. To evaluate the predicted point cloud, we impose the Earth Mover's distance [9]. Comparing the output points to the ground truth and vice-versa, we end up with

$\mathcal{L}_{\text{out}\rightarrow\text{gt}} = \sum_{p\in \mathcal{P}_\text{out}} \|p-\phi_\text{gt}(p)\|_2$   (7)
$\mathcal{L}_{\text{gt}\rightarrow\text{out}} = \sum_{p\in \mathcal{P}_\text{gt}} \|p-\phi_\text{out}(p)\|_2$   (8)

where $\phi_i(p)$ is a bijective function that finds the closest point in the point cloud $\mathcal{P}_i$ to $p$.

Order of points in $\mathcal{P}_\text{out}$. After training with (7) and (8), we noticed that the points in the output reconstruction are ordered from left to right, as shown in Fig. 6(b). We want to take advantage of this organization and investigate this behavior further. Based on the idea that the input point cloud must be part of the points in $\mathcal{P}_\text{out}$, we introduce a loss function that enforces the first subset of $\mathcal{P}_\text{out}$ to be similar to $\mathcal{P}_\text{in}$. We formally write this loss function as

$\mathcal{L}_\text{order} = \sum_{p\in \mathcal{P}_\text{in}} \mathcal{S}(\theta_\text{out}(p)) \cdot \|p-\phi_\text{out}(p)\|_2$   (9)

where $\theta_\text{out}(p)$ is the index of the closest point in $\mathcal{P}_\text{out}$ based on $\phi_\text{out}(p)$, while

$\mathcal{S}(\theta) = \begin{cases} 1, & \text{if } \theta \leq |\mathcal{P}_\text{in}| \\ 0, & \text{otherwise} \end{cases}$   (10)

is a step function that returns one if the index is within the first $|\mathcal{P}_\text{in}|$ points.

Figure 6. Order of the point clouds reconstructed in the object completion with and without $\mathcal{L}_\text{order}$: (a) input, (b) without, (c) with, (d) ground truth.

When we plot the results with $\mathcal{L}_\text{order}$ in Fig. 6(c), we notice that the order in $\mathcal{P}_\text{out}$ moves from the observed to the occluded regions. In addition, fine-grained geometrical details such as the armrest of the chair are visible when training with $\mathcal{L}_\text{order}$, thus improving the overall reconstruction.

Semantic scene completion. In addition to the architecture in Sec. 4 and the loss functions in (7), (8) and (9) for completion, a semantic label is added to each point in the predicted cloud $\mathcal{P}_\text{out}$. Given $N_c$ categories, we denote the label of the $i$-th point in $\mathcal{P}_\text{out}$ as a one-hot code $l_i = [l_{i,c}]_{c=1}^{N_c}$, where $c$ indexes the categories. Since training is supervised, the ground truth point clouds are also labeled with the semantic category. After establishing the correspondence between the predicted point cloud and the ground truth through (7) during training, we also extract the ground truth semantic label $\hat{l}_i$. It then follows that the binary cross-entropy of the $i$-th point is computed as

$\epsilon_i = \frac{1}{N_c} \sum_{c=1}^{N_c} \hat{l}_{i,c} \log l_{i,c} + (1 - \hat{l}_{i,c}) \log (1 - l_{i,c})$   (11)

and we formulate the semantic loss function as

$\mathcal{L}_\text{semantic} = \frac{\gamma}{|\mathcal{P}_\text{in}|}\sum_{i=1}^{|\mathcal{P}_\text{in}|} \epsilon_i$   (12)

where the weight

$\gamma = \frac{0.01}{\mathcal{L}_{\text{out}\rightarrow\text{gt}} + \mathcal{L}_{\text{gt}\rightarrow\text{out}}}$   (13)

increases the influence of $\mathcal{L}_\text{semantic}$ in training as the completion starts to converge. Note that $\gamma$ is an important factor, since the output point cloud is erratic in the initial iterations and can change abruptly from one iteration to the next before the completion starts converging.
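For concreteness, below is a minimal PyTorch-style sketch of the loss terms in Eqs. (7)-(13). The nearest-neighbor matching used here is only a stand-in for the bijective assignment phi of the Earth Mover's distance, and the tensor layout and names are our own simplifying assumptions.

import torch

def nearest_neighbor(src, dst):
    """Index of the closest point in dst for every point in src (a greedy stand-in
    for the bijective matching phi; a true EMD assignment would use a solver)."""
    return torch.cdist(src, dst).argmin(dim=1)

def completion_losses(P_out, P_gt, P_in):
    """Eqs. (7)-(10) for point sets of shape (N, 3)."""
    idx_gt = nearest_neighbor(P_out, P_gt)
    L_out_gt = (P_out - P_gt[idx_gt]).norm(dim=1).sum()          # Eq. (7)
    idx_out = nearest_neighbor(P_gt, P_out)
    L_gt_out = (P_gt - P_out[idx_out]).norm(dim=1).sum()         # Eq. (8)

    theta = nearest_neighbor(P_in, P_out)                        # index of the closest output point
    S = (theta < P_in.shape[0]).float()                          # step function of Eq. (10)
    L_order = (S * (P_in - P_out[theta]).norm(dim=1)).sum()      # Eq. (9)
    return L_out_gt, L_gt_out, L_order

def semantic_loss(logits_matched, labels_gt, L_out_gt, L_gt_out):
    """Eqs. (11)-(13): per-point BCE, weighted by gamma so that the semantic term
    only gains influence once the completion losses become small."""
    eps = torch.nn.functional.binary_cross_entropy_with_logits(
        logits_matched, labels_gt, reduction="mean")
    gamma = 0.01 / (L_out_gt + L_gt_out).detach()                # Eq. (13)
    return gamma * eps                                           # Eq. (12)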
6. Experiments
To highlight the strengths of the proposed method, this section focuses on two experiments: object completion and semantic scene completion.

6.1. Object completion
We evaluate the geometric completion of a single object on the ShapeNet [3] database, which provides the point clouds of the partial scans as input and their corresponding ground-truth completed shapes. The input scans are composed of 2,048 points, while the database provides a low-resolution output of 2,048 points and a high-resolution output of 16,384 points. We follow the standard evaluation on 8 categories, where all objects are roughly normalized to the same scale with point coordinates ranging between -1 and 1.

Numerical results. We conduct our experiments based on three evaluation strategies from Completion3D [29], PCN [45] and MVP [20]. Evaluating on 8 objects (plane, cabinet, car, chair, lamp, sofa, table, vessel), they measure the predicted reconstruction through the L2 Chamfer distance, the L1 Chamfer distance and the F-Score@1%, respectively. Note that, in this paper, we also follow the standard protocol where the reported Chamfer distance values are multiplied by 10^3. Although Table 1 only shows the average results across all categories, we refer the readers to the supplementary materials for a more detailed comparison. One key observation in this table is the capacity of our direct architecture to surpass most of the other methods. Among 11 approaches, our Chamfer distance is worse than only 3 methods, while our F-Score@1% is better than all of them. This establishes the strength of our operators, since our first architecture is composed solely of them. Moreover, our second architecture, which combines our operators with the transformer, reduces the error by 3-5% on the Chamfer distance and increases the accuracy by 4.5% on the F-Score@1%. The table also examines the effect of $\mathcal{L}_\text{order}$ on our reconstruction: training with $\mathcal{L}_\text{order}$ improves our results by 0.12-0.13 in Chamfer distance and 0.013-0.021 in F-Score@1%, validating our observations in Fig. 6.

Table 1. Evaluation on the Completion3D [29], PCN [45] and MVP [20] datasets with their corresponding metrics for the object completion task:
Method              | Completion3D L2-Chamfer | PCN L1-Chamfer | MVP F-Score@1%
FoldingNet [43]     | 19.07 | 14.31 |   –
SoftPoolNet [33]    | 11.07 |  9.20 | 0.666
TopNet [29]         | 14.25 | 12.15 | 0.576
PCN [45]            | 18.22 |  9.64 | 0.614
MSN [16]            |   –   |  9.97 | 0.690
GRNet [39]          | 10.64 |  8.83 | 0.677
ECG [19]            |   –   |   –   | 0.736
NSFA [48]           |   –   |   –   | 0.770
CRN [30]            |  9.21 |  8.51 | 0.724
SCRN [31]           |  9.13 |  8.29 |   –
VRCNet [20]         |  8.12 |   –   | 0.781
PoinTr [44]         |  9.22 |  8.38 | 0.741
ASFM-Net [38]       |  6.68 |   –   |   –
Ours (Direct)       |  8.35 |  8.46 | 0.801
  – without Lorder  |  8.47 |  8.59 | 0.788
  – input Pgt       |  5.11 |  5.37 | 0.923
Ours (Transformer)  |  6.64 |  7.96 | 0.816
  – without Lorder  |  6.74 |  8.09 | 0.795
  – input Pgt       |  4.46 |  4.95 | 0.962
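As a reference for how the metrics reported in Table 1 are typically computed, here is a hedged sketch of the Chamfer distances and the F-Score@1% between a predicted and a ground-truth point set. The exact averaging conventions and the 10^3 scaling vary slightly across the three benchmarks, so this should be read as a generic illustration rather than the official evaluation code.

import torch

def chamfer_distances(pred, gt):
    """Symmetric Chamfer distances between point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(pred, gt)                     # (N, M) pairwise Euclidean distances
    d_pred = d.min(dim=1).values                  # each predicted point to its nearest GT point
    d_gt = d.min(dim=0).values                    # each GT point to its nearest predicted point
    cd_l1 = 0.5 * (d_pred.mean() + d_gt.mean())               # L1-style Chamfer (PCN protocol)
    cd_l2 = d_pred.pow(2).mean() + d_gt.pow(2).mean()         # L2-style Chamfer (Completion3D protocol)
    return cd_l1, cd_l2

def f_score(pred, gt, threshold=0.01):
    """F-Score@1%: harmonic mean of precision and recall at a 1% distance threshold."""
    d = torch.cdist(pred, gt)
    precision = (d.min(dim=1).values < threshold).float().mean()
    recall = (d.min(dim=0).values < threshold).float().mean()
    return 2 * precision * recall / (precision + recall + 1e-8)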
Qualitative results. We compare our object completion results in Fig. 7 with the recently proposed methods FoldingNet [43], PCN [45], MSN [16], SoftPoolNet [33], VRCNet [20] and PoinTr [44]; the red points in the figure highlight the errors in the reconstruction.

Figure 7. Object completion results, where we highlight the errors with red points: (a) input, (b) FoldingNet, (c) PCN, (d) MSN, (e) SoftPoolNet, (f) VRCNet, (g) PoinTr, (h) ours (Direct), (i) ours (Transformer), (j) ground truth.

All the approaches reconstruct a point cloud with 16,384 points, with the exception of FoldingNet with 2,048 points and MSN with 8,192. Since FoldingNet and PCN rely on the assumption that the surface can be obtained by deforming one or more planar grids, they tend to over-smooth their reconstructions, flattening finer details such as those of the boat. In contrast, our method performs better on the smooth regions as well as on the finer structures. The more recent approaches [16, 20, 33, 44] can also produce a more descriptive reconstruction of the boat; however, they produce more errors, which is most visible on the unconventional lamp and chair. Overall, our reconstructions are closer to the ground truth.

Failure cases. In addition to the qualitative results, we also examine the failure cases in Fig. 8. Most of them are objects with unusual structures, like the car without wheels. Another issue is when there is an insufficient amount of input points to describe the object, such as the chair. Notably, compared to the state of the art, our reconstructions are still better in these situations.

Figure 8. Examples of the failure cases in object completion: (a) input, (b) PoinTr, (c) ours (Direct), (d) ours (Transformer), (e) ground truth.

6.2. Semantic scene completion
This evaluation aims at reconstructing the scene from a single depth image through a point cloud or an SDF volume, where each point or voxel is assigned a semantic class. Originally introduced for 2.5D semantic segmentation, NYU [25] and ScanNet [6], which were later annotated for semantic completion by [27, 36], are among the most relevant benchmark datasets in this field. These datasets include pairs of a depth image and the corresponding semantically labeled 3D reconstruction.

Semantic scene completion with voxels. NYU provides real scans of indoor scenes acquired with a Kinect depth sensor. Following SSCNet [27], the semantic categories include 12 classes of varying shapes and sizes: empty space, ceiling, floor, wall, window, chair, bed, sofa, table, TVs, furniture and other objects. Since the other point cloud completion methods do not handle semantic segmentation, we start our evaluation by comparing with the voxel-based approaches that perform both the completion and the semantic segmentation, such as [2, 4, 10, 12, 14, 17, 27, 32, 47]. Considering that volumetric methods are evaluated with the IoU, we need to convert our point clouds to voxel grids to make the comparison. One significant advantage of point clouds over voxels is that we are not constrained to a specific resolution; since most methods evaluate at 60 × 36 × 60, we converted our point cloud to this resolution.

Table 2. Semantic scene completion on the NYU [25] dataset. A resolution of x denotes an output volumetric resolution of x × 0.6x × x.
Method                     | Resolution | Average IoU
Lin et al. [14]            | 60  | 12.0
Geiger and Wang [10]       | 60  | 19.6
SSCNet [27]                | 60  | 30.5
VVNet [12]                 | 60  | 32.9
SaTNet [17]                | 60  | 34.4
ForkNet [32]               | 80  | 37.1
CCPNet [47]                | 240 | 38.5
SketchSSC [4]              | 60  | 41.1
SISNet [2]                 | 60  | 52.4
Ours (Direct)              | 60  | 40.0
  – with γ = 1 in Lsemantic | 60  | 37.2
Ours (Transformer)         | 60  | 42.4
  – with γ = 1 in Lsemantic | 60  | 38.9

Our approach achieves a competitive average IoU of 42.4%, which is better than all the other methods except SISNet [2].
However, it is noteworthy to mention that our method faces additional errors associated with the conversion from point cloud to voxels. In addition, the ground-truth voxels for the furniture in the NYU dataset form solid volumes, which is not a suitable format for point cloud approaches that focus on surface reconstruction; this in effect decreases the IoU of our method. Moreover, Table 2 includes a small ablation study to verify the contribution of γ from (13) in Lsemantic. If we discard (13) by setting γ to one, the IoU of our models decreases by 7.5-9%, proving the advantage of adaptively weighting the semantic loss function.

Point cloud scene completion. Another relevant dataset is ScanNet [6], which was supplemented with the ground-truth semantic completion by CompleteScanNet [36]. This includes a total of 45,451 pairs of partial scans and semantic completions for training. Our evaluation in Table 3 takes 2,048 points as input and reconstructs the scene with 16,384 points. Since there is no previous work that focuses on point cloud scene completion, we compare against methods that were designed for single object completion, such as PCN [45], MSN [16], SoftPoolNet [33] and GRNet [39]. Based on our evaluation in Table 3, both versions of our architecture attain the best results. Notably, we also compared these methods on the NYU dataset in Table 3, where the proposed architectures similarly achieve the state-of-the-art in point cloud scene completion.

Table 3. Evaluation on the CompleteScanNet [36] and NYU [25] datasets for scene completion, measuring the average Chamfer distance trained with the L2 distance (multiplied by 10^3) at an output resolution of 16,384 points.
Method              | CompleteScanNet | NYU
FoldingNet [43]     | 11.25 | 14.66
AtlasNet [11]       |  8.92 | 10.12
PCN [45]            |  8.19 |  9.98
MSN [16]            |  7.28 |  8.65
SoftPoolNet [33]    |  8.27 |  9.29
GRNet [39]          |  4.56 |  5.80
VRCNet [20]         |  4.29 |  5.45
PoinTr [44]         |  5.08 |  5.92
Ours (Direct)       |  3.17 |  4.72
Ours (Transformer)  |  3.04 |  4.38

7. Ablation study
This section focuses on the strengths of our operators in our transformer architecture. Although we adapt the transformer from PoinTr [44], we argue that every component we added is significant to the overall performance. To evaluate this, we disentangle the points-to-tokens and coarse-to-fine blocks. In practice, we separate the backbone, which takes the points of the partial scan as input and outputs a coarse point cloud, from the coarse-to-fine strategy; in our approach, the points-to-tokens block is part of the backbone. Since most methods can also be separated in this manner, we compose Table 4 to mix-and-match different backbones with different coarse-to-fine methods for object and scene completion. In both tables, we classify the other coarse-to-fine methods as: (1) deform, which covers methods that deform 3D grids; (2) deconv, which processes with MLPs, 1D or 2D deconvolutions; and (3) Edge-aware Feature Expansion (EFE) [19]. We then highlight the originally proposed architectures in yellow. For any given backbone in every row, our coarse-to-fine method produces the best results. Moreover, for any given coarse-to-fine strategy in every column, our backbone performs the best. Therefore, this study essentially proves that each of the proposed components in our transformer architecture has a significant role in the overall performance. 8."
+ }, + { + "url": "http://arxiv.org/abs/2105.14556v1", + "title": "Diversifying Dialog Generation via Adaptive Label Smoothing", + "abstract": "Neural dialogue generation models trained with the one-hot target\ndistribution suffer from the over-confidence issue, which leads to poor\ngeneration diversity as widely reported in the literature. Although existing\napproaches such as label smoothing can alleviate this issue, they fail to adapt\nto diverse dialog contexts. In this paper, we propose an Adaptive Label\nSmoothing (AdaLabel) approach that can adaptively estimate a target label\ndistribution at each time step for different contexts. The maximum probability\nin the predicted distribution is used to modify the soft target distribution\nproduced by a novel light-weight bi-directional decoder module. The resulting\ntarget distribution is aware of both previous and future contexts and is\nadjusted to avoid over-training the dialogue model. Our model can be trained in\nan end-to-end manner. Extensive experiments on two benchmark datasets show that\nour approach outperforms various competitive baselines in producing diverse\nresponses.", + "authors": "Yida Wang, Yinhe Zheng, Yong Jiang, Minlie Huang", + "published": "2021-05-30", + "updated": "2021-05-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction The success of neural models has greatly advanced the research of dialog generation (Huang et al., 2020; Wang et al., 2020; Zhang et al., 2020). However, most of these models suffer from a lowdiversity issue where models tend to generate bland and generic responses such as I don\u2019t know or I\u2019m OK (Li et al., 2016). Although various approaches have been proposed to tackle this issue (Li et al., 2016; Zhao et al., 2017; Du et al., 2018; Zhou et al., 2018; Welleck et al., 2020; Zheng et al., 2020b), there are still remarkable gaps between responses generated by neural models and those from humans (Holtzman et al., 2020). Further, some existing methods may even harm the \ufb02uency or coherence when improving the diversity of generated \u2217Equal contribution \u2020 Corresponding Author: aihuang@tsinghua.edu.cn So, what exactly do you do around here ? I make the robots seem more ___ Post: Response: human 0.9 1.0 0.61 0.01 0.0 0.01 bank 0.01 0.0 0.01 fights 0.01 0.0 0.10 ugly 0.01 0.0 0.08 dull 0.01 0.0 0.11 fun \u2026 Hard Target (One hot) Label Smoothing AdaLabel (Ours) Figure 1: A dialogue sampled from the OpenSubtitles dataset. We demonstrate the hard target, label smoothing, and Adaptive Label Smoothing approach when learning to predict the next word (\u201chuman\u201d). responses. (Ippolito et al., 2019; Massarelli et al., 2020; Zheng et al., 2020a). Recently, Jiang and de Rijke (2018); Jiang et al. (2019) show that there is a strong connection between the low-diversity problem and the overcon\ufb01dence issue. i.e., over-con\ufb01dent dialogue models tend to produce low-diversity responses. One of the reasons can be attributed to the supervision target. Speci\ufb01cally, training a dialogue generation model with the Maximum Likelihood Estimation (MLE) objective under the hard target (i.e., one-hot distribution as ground truth) makes the model favor high-frequency tokens and produce over-con\ufb01dent probability estimation (Gowda and May, 2020), which ultimately leads to poor calibration (Mukhoti et al., 2020), and thus low diversity (Jiang et al., 2019). Hinton et al. (2015) and Yang et al. 
(2018) suggest that the ideal training target should be a soft target that assigns probability mass on multiple valid candidates (see Figure 1). With such a soft target, the over-con\ufb01dence issue can be alleviated (M\u00a8 uller et al., 2019), and thus the diversity of the output responses can be improved. Unfortunately, the ideal soft target is challenging to obtain. Early works try to tackle this issue arXiv:2105.14556v1 [cs.CL] 30 May 2021 \fusing label smoothing (Szegedy et al., 2016), i.e., a small probability is uniformly assigned to nontarget words. However, the target distribution constructed in this way is far from ideal: First, the probability of the target word is chosen manually and \ufb01xed, which cannot adapt to different contexts. However, as Holtzman et al. (2020) demonstrated, human text distribution exhibits remarkable \ufb02uctuations in the per-token perplexity. We argue that different target probabilities should be used for different contexts. Second, the uniform assignment of the probability mass on non-target words ignores the semantic relationship between the context and each word. Ideally, a word should receive more probability mass if it is more relevant to the context. For the example shown in Figure 1, word \u201cfun\u201d is more likely to appear behind the context \u201cI make the robots seem more \u201d than word \u201cbank\u201d. To address the above issue, we propose an Adaptive Label smoothing (AdaLabel) method that can dynamically estimate a soft target distribution at each time step for different contexts. Specifically, for each target word yt in the training data, the probability distribution predicted by the current model is \ufb01rst obtained. The maximum probability pmax in this distribution measures the con\ufb01dence of the current prediction, i.e., a higher pmax means higher con\ufb01dence for the current prediction. To avoid over-con\ufb01dence, we use pmax as the supervision signal for the target word yt in the training process so that the model will not be optimized towards yt when it correctly predicts yt. A word-level factor is also introduced to facilitate the learning of low-frequency words. Moreover, we introduce a novel auxiliary decoder module Da to produce the supervision signals for these non-target words in each training step. Da only contains one transformer block, and it is optimized to predict words based on bi-directional contexts. A novel Target-Mask attention scheme is devised to prevent Da from seeing the target word in the training process. This scheme also enables parallel training and inference of Da. We perform extensive experiments on two benchmark datasets: DailyDialog and OpenSubtitles. Our method outperforms various competitive baselines and signi\ufb01cantly improves the diversity of generated responses while ensuring \ufb02uency and coherency. Our major contributions are summarized: 1. We propose AdaLabel, a method that can produce a soft target distribution considering the current context and the model\u2019s con\ufb01dence. Specifically, AdaLabel ensures that the dialogue model will not be optimized toward the target word yt if yt has been correctly predicted. This prevents our model from being over-con\ufb01dent. 2. We introduce a light-weight bi-directional decoder that can produce context-aware supervision signals for non-target words. A novel Target-Mask attention scheme is devised to facilitate the parallel training and inference of this decoder. 3. 
Extensive experiments on two benchmark dialogue datasets with both automatic and human evaluation results show that our method helps to alleviate the model over-con\ufb01dent issue and significantly improves the model\u2019s diversity. 2 Related work Diversity Promotion: Existing approaches for solving the low diversity issue of neural dialogue models generally involve two categories: The \ufb01rst category is training-based, where new training objectives are designed (Li et al., 2016; Zhang et al., 2018; Gao et al., 2019) or latent variables are introduced (Zhao et al., 2017; Zhou et al., 2018) in the dialogue model. Some methods also try to re\ufb01ne the training target used in the MLE loss (Choi et al., 2020; Jiang et al., 2019; Li et al., 2019), or directly penalize the trivial responses with auxiliary loss terms (Welleck et al., 2020; Li et al., 2020). Unlike these existing approaches, our method tries to adaptively adjust the training target by utilizing the current predictions. The second category is decoding-based, in which different heuristic decoding rules are designed (Holtzman et al., 2020; Kulikov et al., 2019). Note that these decoding techniques are independent of the model setting, and our method can be used in combination with these techniques. Con\ufb01dence Calibration: Modern deep neural networks suffer from the over-con\ufb01dence issue (Guo et al., 2017; Kumar and Sarawagi, 2019), and various remedies are proposed (Pereyra et al., 2017; Mukhoti et al., 2020; Lin et al., 2017). Following the work of Jiang and de Rijke (2018); Jiang et al. (2019), our method is proposed to tackle the overcon\ufb01dence issue to improve the diversity of the generated responses. However, different from existing approaches, our method enables more \ufb02exible controls over the target distribution. Knowledge Distillation: Another important technique similar to our work is knowledge distilla\fEncoder Decoder Context \ud835\udc4b Auxiliary Decoder \ud835\udc9f\u0bd4 Training Response \ud835\udf16 \u0d48\u123a1 \u0d46\ud835\udf16\u123b \u0d48\ud835\udf16 \ud835\udc35\ud835\udc42\ud835\udc46 \ud835\udc97\ud835\udc66\u0b35 \ud835\udc66\u0b35 \ud835\udc97\ud835\udc66\u0b36 \ud835\udc66\u0b36 \ud835\udc97\ud835\udc66\u0b37 \ud835\udc66\u0bcd \ud835\udc97\u123e\ud835\udc38\ud835\udc42\ud835\udc46\u123f \ud835\udc66\u0b37 \ud835\udc97\ud835\udc66\u0b38 \u2026 \u2026 \ud835\udc35\ud835\udc42\ud835\udc46 \ud835\udc66\u0b35 \ud835\udc66\u0b35 \ud835\udc66\u0b36 \ud835\udc66\u0b36 \ud835\udc66\u0b37 \ud835\udefc \ud835\udc5d\u123a\ud835\udc66\u0b37\u123b Auxiliary Distribution \ud835\udc97 \ud835\udc5d\u123a\ud835\udc66\u0b37\u123b Hard Target \ud835\udc92 \ud835\udc5d\u123a\ud835\udc66\u0b37\u123b Adaptive Soft Target \ud835\udc92\u11f1 \u2112\u123a\ud835\udc92\u11f1, \ud835\udc91\u123b Partial Response Predicted Distribution \ud835\udc91 \ud835\udc5d\u0be0\u0bd4\u0beb \ud835\udc5d\u123a\ud835\udc66\u0b37\u123b Figure 2: Overview of constructing the adaptive soft target q\u2032 using AdaLabel: The maximum probability pmax in the predicted distribution p is used to obtain an adaption factor \u03f5, which is further used to combine the hard target q and the auxiliary distribution v to obtain q\u2032. A bi-directional auxiliary decoder Da is used to produce v. tion, in which a learned teacher model is distilled to a student model by minimizing a KL term (Hinton et al., 2015; Kim and Rush, 2016). 
The most related work comparing to ours is the C-MLM approach (Chen et al., 2020), in which a BERT model is \ufb01ne-tuned to be a teacher. Our approach and C-MLM\u2019s primary difference is that our auxiliary decoder Da is a one layer module that is jointly trained with the dialogue model. However, the BERT teacher in C-MLM contains much more parameters, and it is trained using an expensive pretrained and then \ufb01ne-tuned process. Moreover, the target-masked attention scheme in Da enables parallel inferences of v for each training sequence Y . In contrast, multiple independent forward passes are required for the BERT teacher. 3 Method 3.1 Background: MLE with Hard Target The goal of generative dialogue modeling is to learn a conditional probability distribution p(Y |X), where X is the dialogue context, Y = y1, ..., yT is a response word sequence, and yi \u2208V is a word from the vocabulary V. In an auto-regressive manner, p(Y |X) is factorized as Q t p(yt|y (1 \u2212\u03b5) \u00b7 max(v) (see Eq. 2), in which max(v) is the maximum probabilities for non-target words V\u0338=yt, we have to enforce \u03b5 > max(v) 1+max(v). Thus we propose to calculate the lower-bound \u03bb of \u03b5 as: \u03bb = max(v) 1 + max(v) + \u03b7, (5) where \u03b7 > 0 is a hyper-parameter that controls the margin between the probability of the target word and non-target words in p\u2032. To facilitate faster converge and better learning of low-probability words, an empirical factor \u03b1 \u2208 [0, 1] is further introduced to adjust the calculation of \u03b5 on the basis of Eq. 4: \u03b5 = 1 \u2212\u03b1 \u00b7 (1 \u2212max(pmax, \u03bb)), (6) where \u03b1 is calculated as the relative ratio to pmax: \u03b1 = \u0014p(yt|y