diff --git "a/abs_29K_G/test_abstract_long_2405.02791v1.json" "b/abs_29K_G/test_abstract_long_2405.02791v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.02791v1.json" @@ -0,0 +1,101 @@ +{ + "url": "http://arxiv.org/abs/2405.02791v1", + "title": "Efficient Text-driven Motion Generation via Latent Consistency Training", + "abstract": "Motion diffusion models have recently proven successful for text-driven human\nmotion generation. Despite their excellent generation performance, they are\nchallenging to infer in real time due to the multi-step sampling mechanism that\ninvolves tens or hundreds of repeat function evaluation iterations. To this\nend, we investigate a motion latent consistency Training (MLCT) for motion\ngeneration to alleviate the computation and time consumption during iteration\ninference. It applies diffusion pipelines to low-dimensional motion latent\nspaces to mitigate the computational burden of each function evaluation.\nExplaining the diffusion process with probabilistic flow ordinary differential\nequation (PF-ODE) theory, the MLCT allows extremely few steps infer between the\nprior distribution to the motion latent representation distribution via\nmaintaining consistency of the outputs over the trajectory of PF-ODE.\nEspecially, we introduce a quantization constraint to optimize motion latent\nrepresentations that are bounded, regular, and well-reconstructed compared to\ntraditional variational constraints. Furthermore, we propose a conditional\nPF-ODE trajectory simulation method, which improves the conditional generation\nperformance with minimal additional training costs. Extensive experiments on\ntwo human motion generation benchmarks show that the proposed model achieves\nstate-of-the-art performance with less than 10\\% time cost.", + "authors": "Mengxian Hu, Minghao Zhu, Xun Zhou, Qingqing Yan, Shu Li, Chengju Liu, Qijun Chen", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Motion diffusion models have recently proven successful for text-driven human\nmotion generation. Despite their excellent generation performance, they are\nchallenging to infer in real time due to the multi-step sampling mechanism that\ninvolves tens or hundreds of repeat function evaluation iterations. To this\nend, we investigate a motion latent consistency Training (MLCT) for motion\ngeneration to alleviate the computation and time consumption during iteration\ninference. It applies diffusion pipelines to low-dimensional motion latent\nspaces to mitigate the computational burden of each function evaluation.\nExplaining the diffusion process with probabilistic flow ordinary differential\nequation (PF-ODE) theory, the MLCT allows extremely few steps infer between the\nprior distribution to the motion latent representation distribution via\nmaintaining consistency of the outputs over the trajectory of PF-ODE.\nEspecially, we introduce a quantization constraint to optimize motion latent\nrepresentations that are bounded, regular, and well-reconstructed compared to\ntraditional variational constraints. Furthermore, we propose a conditional\nPF-ODE trajectory simulation method, which improves the conditional generation\nperformance with minimal additional training costs. 
Extensive experiments on\ntwo human motion generation benchmarks show that the proposed model achieves\nstate-of-the-art performance with less than 10\\% time cost.", + "main_content": "Introduction Synthesizing human motion sequences under specified conditions is a fundamental task in robotics and virtual reality. Research in recent years has explored the text-to-motion diffusion framework [1, 2, 3] to generate realistic and diverse motions, which gradually recovers the motion representation from a prior distribution with multiple iterations. These works show more stable distribution estimation and stronger controllability than traditional single-step methods (e.g., GANs [4] or VAEs [5, 6]), but at the cost of a hundredfold increase in computational burden. Such a high-cost sampling mechanism is expensive in time and memory, limiting the model\u2019s accessibility in real-time applications. To mitigate inference cost, previous text-to-motion diffusion frameworks try to trade off between fidelity and efficiency from two perspectives: i) mapping length-varying and high-dimensional original motion sequences into well-reconstructed and low-dimension motion latent representations[3, 7] to reduce data redundancy and complexity, and ii) utilizing skip-step sampling strategy [3, 8] to minimize expensive and repetitive function evaluation iterations. The first perspective inspired by the excellent performance of the latent diffusion model in text-to-image synthesis, they introduce the variational autoencoder with Kullback-Leibler (KL) divergence constraints as motion representation extractor. However, unlike image data support that contains more than ten million samples, the high cost of motion capture limits the number of samples for the text-based motion generation task. As a example, the largest current human motion dataset contains no more than fifteen thousand samples after employing data augmentation. Simultaneous arXiv:2405.02791v1 [cs.CV] 5 May 2024 \foptimization of reconstruction loss and KL divergence loss, which are adversarial targets, is significantly challenging in the presence of limited training resources. To ensure high reconstruction performance, previous state-of-the-art models usually set the KL divergence weights low enough, which results in low regularity of motion representations. Such low-regularity and continuous motion representations suffer redundancy and low robustness. It can be mitigated by a sufficiently numerous repetitive function evaluation iterations, but seriously harms the generative performance in the context of extremely few sampling steps. The second perspective follows from the recently well-established diffusion solvers, which can be categorized as training-free methods and training-based methods. Previous study confirms that the forward diffusion process corresponds to an inverse diffusion process without a stochastic term and is known as the probabilistic flow ordinary differential equation (PF-ODE) [9]. Training-free methods constructed different discrete solvers for the special form of the PF-ODE, achieving almost a 20-fold performance improvement. These works effectively compress the sampling steps to 50-100 steps, but the fidelity of the ODE solution results is lower when the number of iterations is much smaller due to the complexity of the probability distribution of the motion sequences and the cumulative error of the discrete ODE sampling. 
There remains a significant gap in computational effort compared to traditional single-step motion generation models. Training-based methods usually rely on model distillation or trajectory distillation, and one promising approach is the consistency model. It imposes constraints on the model to maintain consistency of the outputs along the same PF-ODE trajectory, thus achieving a single-step or few-step generative mapping from the prior distribution to the target distribution. Typical PF-ODE trajectory generation methods are consistency distillation, which generates trajectories with pre-trained diffusion models, and consistency training, which simulates trajectories with an unbiased estimate of the ground truth. The former relies on well-trained diffusion models as foundation models, and training these models from scratch is computationally expensive and time-consuming. Less costly consistency training frameworks avoid additional pre-trained models, but they also suffer from poor generation performance and even training collapse due to redundant and irregular latent representations. Moreover, existing consistency training frameworks have not sufficiently explored conditional PF-ODE trajectories, so vanilla consistency-training-based models show no significant advantage over well-established multi-step diffusion samplers using classifier-free guidance. Motivated by the above limitations, we propose a Motion Latent Consistency Training (MLCT) framework that generates high-quality motions with no more than 5 sampling steps. Following the common latent space modeling paradigm, our motivation focuses on constructing low-dimensional and regular motion latent representations, as well as simulating conditional PF-ODE trajectories with the consistency training model in the absence of pre-trained models. Specifically, the first contribution of this paper is a pixel-like latent autoencoder with quantization constraints, which aggregates motion information of arbitrary length into multiple latent representation tokens via self-attention. It differs significantly from the widely used variational representations in that the former is bounded and discrete while the latter is unbounded and continuous. We restrict the representation boundaries with the hyperbolic tangent (Tanh) function and force the continuous representation to map to the nearest predefined clustering center. Compared to the black-box strategy of fine-tuning the KL divergence weight, our approach trades off the regularity and reconstruction performance of the motion latent representations more controllably by designing a finite-dimensional discrete latent representation space. In addition, previous practice demonstrates that the boundedness of the representations helps sustain stable inference with classifier-free guidance (CFG) techniques. The second contribution of this paper is a one-stage conditionally guided consistency training framework. The main insight is to regard the unbiased estimate based on ground-truth motion representations as a simulation of the conditional probability gradient and to propose an online updating mechanism for the unconditional probability gradient. To the best of our knowledge, this is the first application of classifier-free guidance to consistency training.
Since it is utilized for generating trajectories, the denoiser does not need to be double computationally expensive in the derivation to get better conditional generation results. We evaluate the proposed framework on two widely-used datasets: KIT and HumanML datasets. The results of our 1, 3 and 5 number of function evaluations (NFE) generation are shown in Figure 1, along with the differences in FID metrics with existing methods. Extensive experiments indicate the effectiveness of MLCT and its components. The proposed framework achieves state-of-the-art performance in motion generation only in around 5 steps. To sum up, the contributions of this paper are as follows: \u2022 We explore a pixel-like motion latent representation relying on quantization constraints which is highly regular, well-reconstruction and bounded. \u2022 We introduce classifier-free guidance in consistency training for the first time. It is beneficial to realize more controllable motion generation as well as more stable training convergence. \u2022 Our proposed MLCT achieves state-of-the-art performance on two challenge datasets with extremely less sampling steps. 2 \f1 NFE 3 NFE 5 NFE 1 NFE 3 NFE 5 NFE Figure 1: Our model achieves better FID metrics with less inference time and allows for the generation of high-quality human motions based on textual prompts in around 5 NFE. The color of humans darkens over time. 2 Related Work Human motion generation. Human motion generation aims to synthesize human motion sequence under specified conditions, such as action categories [10, 11], audio [12, 13], and textual description [14, 2, 3]. In the past few years, numerous works have investigated motion generation from various generative frameworks. For example, VAE-based models [15, 16, 5] represent the motion as a set of Gaussian distributions and constrain its regularity with KL divergence. Such constraint allows it to reconstruct the motion information from the standard normal distribution, yet its results are often ambiguous. GAN-based methods [17, 4] achieve better performance by bypassing direct estimation of probabilistic likelihoods via the adversarial training strategy, but the adversarial property makes their training often unstable and prone to mode collapse. Some multi-step generative methods have emerged recently with great success, such as auto-regressive [18, 19] and diffusion methods [1, 2, 3]. In particular, the latter is gradually dominating the research frontiers due to its stable distribution estimation capability and high-quality sampling results. Motiondiffuse [1] and MDM [2] were the pioneers in implementing diffusion frameworks for motion generation. MLD [3] realizes the latent space diffusion, which significantly improves the efficiency. M2DM [7] represents motion as discrete features and diffusion processes in finite state space with state-of-the-art performance. Some recent work [8] has focused on more controlled generation with equally excellent results. These works validate the outstanding capabilities of the motion diffusion framework and receive continuous attention. Efficient diffusion sampling. Efficient diffusion sampling is the primary challenge of diffusion frameworks oriented to real-time generation tasks. DDIM [20] relaxes the restriction on Markov conditions in the original diffusion framework and achieves a 20 times computational efficiency improvement. 
Score-based method [9] from the same period relates the diffusion framework to a stochastic differential equation and notes that it has a special form known as the probability flow ODE. This is a milestone achievement. It guides the following works either to steer a simplified diffusion process through a specially designed form of ODE [21, 22, 23], or to skip a sufficiently large number of sampling steps via the more sophisticated higher-order ODE approximation solution strategy [24]. In addition to the above work, the diffusion process can be executed in lower dimensional and more regular latent spaces, thus reducing the single-step computational burden [25]. While these works have proven effective in computer vision, they have received only finite reflections in motion diffusion frameworks. Previous state-of-the-art methods such as MLD [3] and GraphMotion [8] have utilized VAE-based representations and DDIM sampling strategies. Precise and robust motion representation and efficient motion diffusion design remain an open problem. Consistency model. Consistency modeling is a novel and flexible diffusion sampling framework that allows the model to make trade-offs between extreme few steps and generation quality. Latent consistency models extend consistency distillation methods to the latent representation space, saving memory spend and further improving inference efficiency. Subsequently, VideoLCM further applies consistency distillation to video generation. Recent approaches have also investigated the application of Lora and control net to consistency modeling with impressive results. These methods rely on a strong teacher model as the distillation target, which trained from scratch requires not only a large dataset support but also a lot of computational resources. To reduce the training cost, ICM further explores and improves consistency training methods to obtain similar performance to consistency distillation without pre-trained models. However, it is 3 \fstill limited to the original pixel representation space of fixed dimensions and is applied to variance-explosion ODE frameworks. Consistency training methods for broader diffusion strategies in the latent representation space lack further exploration. 3 Preliminaries In this section, we briefly introduce diffusion and consistency models. 3.1 Score-based Diffusion Models The diffusion model [26] is a generative model that gradually injects Gaussian noise into the data and then generates samples from the noise through a reverse denoising process. Specifically, it gradually transforms the data distribution pdata(x0) into a well-sampled prior distribution p(xT ) via a Gaussian perturbation kernel p(xt|x0) = N(xt|\u03b1tx0, \u03c32 t I), where \u03b1t and \u03c3t are specify noise schedules. Recent studies have formalized it into a continuous time form, described as a stochastic partial differential equation, dxt = f(t)xtdt + g(t)dwt, (1) where t \u2208[\u03f5, T], \u03f5 and T are the fixed positive constant, wt denotes the standard Brownian motion, f and g are the drift and diffusion coefficients respectively with follow from, f(t) = d log \u03b1t dt , g2(t) = d\u03c32 t dt \u22122d log \u03b1t dt \u03c32 t . (2) Previous work has revealed that the reverse process of Eq. 1 shares the same marginal probabilities with the probabilistic flow ODE: dxt = [f(t)xt \u22121 2g2(t)\u2207xt log p(xt)]dt, (3) where \u2207x log p(xt) is named the score function, which is the only unknown term in the sampling pipeline. 
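To make Eqs. (1)-(3) concrete, the following is a minimal PyTorch-style sketch (not the authors' code) of a linear-beta variance-preserving schedule, the drift and diffusion coefficients it induces, and a single Euler step of the PF-ODE; the values of `beta_0` and `beta_1` and the `score_net` interface are assumptions for illustration.

```python
import torch

# A linear-beta VP schedule (the form used later in Eq. (10)); beta_0/beta_1 are assumptions.
beta_0, beta_1 = 0.1, 20.0

def log_alpha(t):
    t = torch.as_tensor(t, dtype=torch.float32)
    return -0.25 * t ** 2 * (beta_1 - beta_0) - 0.5 * t * beta_0

def alpha(t):
    return torch.exp(log_alpha(t))

def sigma(t):
    return torch.sqrt(1.0 - torch.exp(2.0 * log_alpha(t)))

def f_drift(t):
    # Eq. (2): f(t) = d log(alpha_t) / dt
    return -0.5 * t * (beta_1 - beta_0) - 0.5 * beta_0

def g2_diffusion(t):
    # Eq. (2): g^2(t) = d(sigma_t^2)/dt - 2 * (d log(alpha_t)/dt) * sigma_t^2
    dsigma2_dt = -2.0 * f_drift(t) * alpha(t) ** 2   # since sigma_t^2 = 1 - alpha_t^2
    return dsigma2_dt - 2.0 * f_drift(t) * sigma(t) ** 2

@torch.no_grad()
def pf_ode_euler_step(x_t, t, t_next, score_net):
    # One Euler step of the PF-ODE in Eq. (3); t_next < t when integrating toward the data.
    score = score_net(x_t, t)                        # estimate of grad_x log p(x_t)
    drift = f_drift(t) * x_t - 0.5 * g2_diffusion(t) * score
    return x_t + drift * (t_next - t)
```

Any higher-order solver can replace the Euler update here; only the score estimate is learned.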
An effective approach is training a time-dependent score network S\u03b8(xt, t) to estimate \u2207x log p(xt) based on conditional score matching, parameterized as the prediction of noise or initial value in forward diffusion. Further, Eq. 3 can be solved in finite steps by any numerical ODE solver such as Euler [9] and Heun solvers [27]. 3.2 Consistency Models Theoretically, the inverse process expressed by Eq. 3 is deterministic, and the consistency model (CM) [23] achieves one-step or few-step generation by pulling in outputs on the same ODE trajectory. It is more formally expressed as, S\u03b8(xt, t) = S\u03b8(xt\u2032, t\u2032) \u2248S\u03b8(x\u03f5, \u03f5) \u2200t, t\u2032 \u2208[\u03f5, T], (4) which is known as the self-consistency property. To maintain the boundary conditions, existing consistency models are commonly parameterized by skip connections, i.e., S\u03b8(xt, t) := cskip(t)x + cout(t) \u02c6 S\u03b8(xt, t) (5) where cskip(t) and cout(t) are differentiable functions satisfied cskip(\u03f5) = 1 and cout(\u03f5) = 0. For stabilize training, the consistency model maintaining target model S\u2212 \u03b8 , trained with the exponential moving average (EMA) of parameter \u03b3, that is \u03b8\u2212\u2190\u03b3\u03b8\u2212+ (1 \u2212\u03b3)\u03b8. The consistency loss can be formulated as, Lcm(\u03b8, \u03b8\u2212) = Ex,t \u0002 d \u0000S\u03b8(xtn+1, tn+1), S\u03b8\u2212(\u02c6 xtn, tn) \u0001\u0003 (6) where d(\u00b7, \u00b7) is a metric function such as mean square or pseudo-huber metric, and \u02c6 xtn is a one-step estimation from xtn+1 with ODE solvers applied in Eq. 3. 4 Motion Latent Consistency Training Framework In this section, we discuss two critical targets. The first is encoding motions with arbitrary lengths into low-dimensional and regularized latent representations of motions to align all motion dimensions. The second is introducing the conditional PF-ODE into less cost consistency training framework for few-steps and high-quality latent representation sampling. To this end, we propose a Motion Latent Consistency Training (MLCT) framework, as shown in Figure 2. It consists of an autoencoder with quantization constraints, which is used to learn various motion representations in low-dimensional and regularized latent spaces (details in Section 4.1), and a denoising network, which is used to capture the corresponding latent state distributions and to implement few-step sampling (details in Section 4.2). 4 \fMotion Latent Representation Motion Feature ... ... ... ... Latent Representation Noise ... Time Text Transformer Block Embedding Embedding ... ... Quantized Conditional PF-ODE Trajectories Skip Connection Clamp Quantization Constraints Conditional Trajectories Simulation Conditional Target Unconditional Target Figure 2: Our Motion Consistency model can achieve high-quality motion generation given a text prompt with around 5 steps. The color of humans darkens over time. E D S S S xt x\u03f5 xT x\u03f5 xt\u2032 x\u03f5 x\u03f5 xt\u2032 xt xT dxt = f(t)xtdt + g(t)dwt dxt = [f(t)xt \u22121 2g2(t)\u2207xt log p(xt)]dt Consistency Property: S(xT , T, c) \u2248S(xt\u2032, t\u2032, c) \u2248S(xt, t, c) \u2248x\u03f5, where \u2200t, t\u2032 \u2208[\u03f5, T] 4.1 Encoding Motion as Quantized Latent Representation We construct an autoencoder G = {E, D} with transformer-based architecture to realize encoding and reconstructing between motion sequences x and latent motion representations z. 
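As a concrete reference, below is a minimal PyTorch-style sketch of such a transformer autoencoder, anticipating the quantization constraint detailed next; the motion dimension, token count, depth, and module layout are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn as nn

class QuantizedMotionAE(nn.Module):
    """Sketch of G = {E, D}: a few learnable tokens attend over the motion sequence,
    and each latent dimension is snapped to a grid of 2l+1 values in [-1, 1]."""
    def __init__(self, motion_dim=263, n_tokens=2, latent_dim=256, l=1000, depth=7, heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, latent_dim))
        self.in_proj = nn.Linear(motion_dim, latent_dim)
        self.out_proj = nn.Linear(latent_dim, motion_dim)
        make_layer = lambda: nn.TransformerEncoderLayer(latent_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(make_layer(), depth)
        self.decoder = nn.TransformerEncoder(make_layer(), depth)
        self.l = l

    def quantize(self, h):
        # z = R(l * tanh(E(x))) / l with a straight-through gradient (cf. Eq. (8) below):
        # the forward pass uses the rounded value, the backward pass the tanh gradient.
        z_soft = torch.tanh(h)
        z_hard = torch.round(self.l * z_soft) / self.l
        return z_soft + (z_hard - z_soft).detach()

    def encode(self, x):                              # x: (B, num_frames, motion_dim)
        tok = self.tokens.expand(x.size(0), -1, -1)   # (B, n_tokens, latent_dim)
        h = self.encoder(torch.cat([tok, self.in_proj(x)], dim=1))
        return self.quantize(h[:, : self.tokens.size(0)])

    def decode(self, z, num_frames):
        # Zero "queries" stand in for the output frames; a learned query/positional
        # embedding would be used in practice.
        queries = torch.zeros(z.size(0), num_frames, z.size(-1), device=z.device)
        out = self.decoder(torch.cat([z, queries], dim=1))
        return self.out_proj(out[:, z.size(1):])
```

The straight-through rounding keeps each latent dimension bounded on a fixed grid in [-1, 1] while still passing gradients to the encoder through the tanh branch.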
The core insight is that each dimension of z is sampled from a finite set M of size 2l + 1 as follow, M = {zi; \u22121, \u2212j/l, \u00b7 \u00b7 \u00b7 , 0, \u00b7 \u00b7 \u00b7 , j/l, \u00b7 \u00b7 \u00b7 , 1}l j=0. (7) To this end, we denote z \u2208Rn,d as n learnable tokens with d dimension, aggregating the motion sequence features via attention computation. Inspired by recent quantitative work [28], we employ a hyperbolic tangent (tanh) function on the output of the encoder E to constrain the boundaries of the representation, and then quantize the result by a rounding operator R. Furthermore, the gradient of quantized items is simulated by the previous state gradient to backpropagate the gradient normally. The latent representations z are sampled by follow format, z = R \u0010 l \u00b7 tanh(E(x)) \u0011 /l. (8) The standard optimization target is to reconstruct motion information from z with the decoder D, i.e., to optimize the l1 smooth error loss, Lz = Ex h d \u0010 x, D(z) \u0011 + \u03bbjd \u0010 J (x), J (D(z)) \u0011i , (9) where J is a function to transform features such as joint rotations into joint coordinates, and it is also applied in MLD [3] and GraphMotion [8]. \u03bbj is a balancing term. Compared with the traditional VAEs, the optimization target Eq. 9 does not contain a divergence adversarial term. A well-trained autoencoder G output bounded and regular motion latent representation, which in turn improves the solution space of the denoising network, and experimentally we found that this improvement is important for the convergence of consistent training. 5 \f4.2 Few Step Motion Generation via Consistency Training For conditional motion generation, Class-Free Guidance (CFG) is crucial for synthesizing high-fidelity samples in most successful cases of motion diffusion models, such as MLD or GraphMotion. Previous work introduced CFG into the consistency distillation, demonstrating the feasibility of the consistency model on conditional PF-ODE trajectories. However, they rely on powerful pre-trained teacher models, which not only involve additional training costs but performance is limited by distillation errors. Therefore, we are motivated to simulate CFG more efficiently from the original motion latent representation following the consistency training framework to alleviate the computational burden. The diffusion stage of MLCM begins with the variance preserving schedule [9] to perturbed motion latent representations x\u03f5 = z with perturbation kernel N(xt; \u03b1(t)x0, \u03c32(t)I), \u03b1(t) := e\u22121 4 t2(\u03b21\u2212\u03b20)\u22121 2 t\u03b20, \u03c3(t) := p 1 \u2212e2\u03b1(t). (10) The consistency model S\u03b8 has been constructed to predict x\u03f5 from perturbed xt in a given PF-ODE trajectory. To maintain the boundary conditions that S\u03b8(x\u03f5, \u03f5, c) = x\u03f5, we employ the same skip setting for Eq. ?? as in the latent consistency model (LCM), which parameterized as follow: S\u03b8(xt, t, c) := \u03b72 (10t)2 + \u03b72 \u00b7 xt + 10t p (10t)2 + \u03b72 \u00b7 e S\u03b8(xt, t, c), (11) where e S\u03b8 is a transformer-based network and \u03b7 is a hyperparameter, which is usually set to 0.5. Following the selfconsistency property (as detail in Eq. 4), the model S\u03b8 has to maintain the consistency of the output at the given perturbed state xt with the previous state e xt\u2212\u2206t on the same ODE trajectory. 
The latter can be estimated via DPM++ solver: e xt\u2212\u2206t \u2248\u03c3t\u2212\u2206t \u03c3t \u00b7 xt \u2212\u03b1t \u00b7 (\u03b1t\u2212\u2206t \u00b7 \u03c3t \u03c3t\u2212\u2206t \u00b7 \u03b1t \u22121) \u00b7 x\u03a6 \u03f5 , (12) where x\u03a6 \u03f5 is the estimation of x\u03f5 under the different sampling strategies. In particular, x\u03a6 \u03f5 can be parameterized as a linear combination of conditional and unconditional latent presentation prediction following the CFG strategy, i.e., x\u03a6 \u03f5 (xt, t, c) = (1 + \u03c9) \u00b7 F\u03b8(xt, t, c) \u2212\u03c9F\u03b8(xt, t, \u2205), (13) where F\u03b8(\u00b7) is well-trained and x\u03f5-prediction-based motion diffusion model. It is worth noting that x\u03f5 can be utilized to simulate F\u03b8(xt, t, c) as used in the vanilla consistency training pipeline. Furthermore, F\u03b8(xt, t, \u2205) can be replaced by S\u03b8(xt, t, \u2205) with online updating. Thus Eq. 13 can be rewritten as: x\u03a6 \u03f5 (xt, t, c) = (1 + \u03c9) \u00b7 x\u03f5 \u2212\u03c9S\u03b8(xt, t, \u2205). (14) The optimization objective of the consistency model S\u03b8 is that, Lc = Ex,t h 1 \u2206td \u0010 S\u03b8(xt, t, c), S\u03b8\u2212(\u02c6 xt\u2212\u2206t, t \u2212\u2206t, c) \u0011 + \u03bbcd \u0010 S\u03b8(xt, t, \u2205), x\u03f5 \u0011i , (15) where d(x, y) = p (x \u2212y)2 + \u03b32 \u2212\u03b3 is pseudo-huber metric, \u03b3 is a constant, \u03bbc is a balancing term. The target network S\u03b8\u2212is updated after each iteration via EMA. 5 Experiments 5.1 Datasets and Metrics Datasets. We evaluate the proposed framework on two mainstream benchmarks for text-driven motion generation tasks, which are the KIT [29] and the HumanML3D [5]. The former contains 3,911 motions and their corresponding 6,363 natural language descriptions. The latter is currently the largest 3D human motion dataset comprising the HumanAct12 [15] and AMASS [30] datasets, containing 14,616 motions and 44,970 descriptions. Evaluation Metrics. Consistent with previous work, we evaluate the proposed framework in four parts. (a) Motion quality: we utilize the frechet inception distance (FID) to evaluate the distance in feature distribution between the generated data and the real data. (b) Condition matching: we first employ the R-precision to measure the correlation between the text description and the generated motion sequence and record the probability of the first k = 1, 2, 3 matches. Then, we further calculate the distance between motions and texts by multi-modal distance (MM Dist). (c) Motion diversity: we compute differences between features with the diversity metric and then measure generative diversity in the same text input using multimodality (MM) metric. (d) Calculating burden: we first use the number of function evaluations (NFE) to evaluate generated performance with fewer steps sampling. Then, we further statistics the average sampling time (AST) of a single sample. 6 \fTable 1: Comparisons to state-of-the-art methods on the HumanML test set. We repeat all the evaluations 20 times and report the average with a 95% confidence interval. \"\u2191\" denotes that higher is better. \"\u2193\" denotes that lower is better. \"\u2192\" denotes that results are better if the metric is closer to the real motion. \u2020 denotes that classifier-free guidance is utilized, causing a double NFE. 
Method R-Precision \u2191 FID \u2193 MM-Dist\u2193 Diversity\u2192 MModality\u2191 NFE\u2193 Top-1 Top-2 Top-3 Real 0.511\u00b1.003 0.703\u00b1.003 0.797\u00b1.002 0.002\u00b1.000 2.974\u00b1.008 9.503\u00b1.065 TEMOS[6] 0.424\u00b1.002 0.612\u00b1.002 0.722\u00b1.002 3.734\u00b1.028 3.703\u00b1.008 8.973\u00b1.071 0.368\u00b1.018 T2M[5] 0.457\u00b1.002 0.639\u00b1.003 0.740\u00b1.003 1.067\u00b1.002 3.340\u00b1.008 9.188\u00b1.002 2.090\u00b1.083 MDM [2] 0.320\u00b1.005 0.498\u00b1.004 0.611\u00b1.007 0.544\u00b1.044 5.566\u00b1.027 9.559\u00b1.086 2.799\u00b1.072 1000 MD [1] 0.491\u00b1.001 0.681\u00b1.001 0.782\u00b1.001 0.630\u00b1.001 3.113\u00b1.001 9.410\u00b1.049 1.553\u00b1.042 1000 MLD\u2020 [3] 0.481\u00b1.003 0.673\u00b1.003 0.772\u00b1.002 0.473\u00b1.013 3.196\u00b1.010 9.724\u00b1.082 2.413\u00b1.079 100 GraphMotion\u2020[8] 0.504\u00b1.003 0.699\u00b1.002 0.785\u00b1.002 0.116\u00b1.007 3.070\u00b1.008 9.692\u00b1.067 2.766\u00b1.096 300 M2DM [7] 0.497\u00b1.003 0.682\u00b1.002 0.763\u00b1.003 0.352\u00b1.005 3.134\u00b1.010 9.926\u00b1.073 3.587\u00b1.072 100 Our 0.460\u00b1.001 0.655\u00b1.002 0.760\u00b1.006 0.232\u00b1.007 3.238\u00b1.008 9.658\u00b1.065 3.506\u00b1.008 5 Table 2: Comparisons to state-of-the-art methods on the KIT test set. The meaning of the markers is the same as in Tab. 1. Method R-Precision \u2191 FID \u2193 MM-Dist\u2193 Diversity\u2192 MModality\u2191 NFE\u2193 Top-1 Top-2 Top-3 Real 0.424\u00b1.005 0.649\u00b1.006 0.779\u00b1.006 0.031\u00b1.004 2.788\u00b1.012 11.08\u00b1.097 TEMOS[6] 0.353\u00b1.006 0.561\u00b1.007 0.687\u00b1.005 3.717\u00b1.051 3.417\u00b1.019 10.84\u00b1.100 0.532\u00b1.034 T2M[5] 0.370\u00b1.005 0.569\u00b1.007 0.693\u00b1.007 2.770\u00b1.109 3.401\u00b1.008 10.91\u00b1.119 1.482\u00b1.065 MDM [2] 0.164\u00b1.004 0.291\u00b1.004 0.396\u00b1.004 0.497\u00b1.021 9.191\u00b1.022 10.85\u00b1.109 1.907\u00b1.214 1000 MD [1] 0.417\u00b1.004 0.621\u00b1.004 0.739\u00b1.004 1.954\u00b1.062 2.958\u00b1.005 11.10\u00b1.143 0.730\u00b1.013 1000 MLD\u2020 [3] 0.390\u00b1.008 0.609\u00b1.008 0.734\u00b1.007 0.404\u00b1.027 3.204\u00b1.027 10.80\u00b1.117 2.192\u00b1.071 100 GM\u2020,\u2021[8] 0.429\u00b1.007 0.648\u00b1.006 0.769\u00b1.006 0.313\u00b1.013 3.076\u00b1.022 11.12\u00b1.135 3.627\u00b1.113 300 M2DM [7] 0.416\u00b1.004 0.628\u00b1.004 0.743\u00b1.004 0.515\u00b1.029 3.015\u00b1.017 11.417\u00b1.970 3.325\u00b1.370 100 Our 0.433\u00b1.007 0.655\u00b1.006 0.783\u00b1.006 0.408\u00b1.013 2.831\u00b1.018 11.179\u00b1.085 1.23\u00b1.037 5 5.2 Implementation Details Model Configuration. The motion autoencoder {E, D} and the score network S are both the transformer architecture with long skip connections [31], which is also used in MLD [3]. Specifically, both the encoder E and decoder D contain 7 layers of transformer blocks with input dimensions 256, and each block contains 3 learnable tokens. The size of the finite set M is set as 2001, i.e. l = 1000. The score network S contains 15 layers of transformer blocks with input dimensions 512. The frozen CLIP-ViT-L-14 model [32] is used to be the text encoder. It encodes the text to a pooled output w \u2208R1,256 and then projects it as text embedding to sum with the time embedding before the input of each block. Train Configuration. For diffusion time horizon [\u03f5, T] into N \u22121 sub-intervals, we set \u03f5 is 0.002, T is 1, N is 1000. 
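To connect these settings with the objective of Section 4.2, here is a condensed PyTorch-style sketch of one conditional consistency-training step in the spirit of Eqs. (10)-(15); `net`, `ema_net`, the conditioning input, and the `alpha`/`sigma` helpers from the earlier schedule sketch are assumptions, and the one-step estimate uses a standard first-order data-prediction (DDIM-style) update rather than the exact DPM++ form of Eq. (12).

```python
import torch
# alpha(t), sigma(t): VP schedule helpers from the earlier sketch (Eq. (10)).

def pseudo_huber(x, y, gamma=1e-3):
    # d(x, y) = sqrt(||x - y||^2 + gamma^2) - gamma, computed per sample
    return ((x - y).flatten(1).pow(2).sum(dim=1) + gamma ** 2).sqrt() - gamma

def consistency_model(net, x_t, t, cond, eta=0.5):
    # Boundary-preserving parameterization of Eq. (11); t has shape (B,).
    tb = t.view(-1, 1, 1)
    c_skip = eta ** 2 / ((10 * tb) ** 2 + eta ** 2)
    c_out = 10 * tb / torch.sqrt((10 * tb) ** 2 + eta ** 2)
    return c_skip * x_t + c_out * net(x_t, t, cond)   # cond=None stands for the empty prompt

def mlct_training_step(net, ema_net, x_eps, cond, t, t_prev, omega=1.5, lambda_c=1.0):
    """One conditional consistency-training step in the spirit of Eqs. (10)-(15).
    x_eps: (B, n, d) clean motion latents; t, t_prev: (B,) adjacent timesteps, t > t_prev."""
    a_t, s_t = alpha(t).view(-1, 1, 1), sigma(t).view(-1, 1, 1)
    a_p, s_p = alpha(t_prev).view(-1, 1, 1), sigma(t_prev).view(-1, 1, 1)
    x_t = a_t * x_eps + s_t * torch.randn_like(x_eps)      # perturbation kernel of Eq. (10)

    with torch.no_grad():
        # Eq. (14): simulate the CFG target with the ground-truth latent (conditional branch)
        # and the online unconditional prediction.
        x_phi = (1 + omega) * x_eps - omega * consistency_model(net, x_t, t, None)
        # One-step estimate of x_{t - dt}; a first-order data-prediction (DDIM-style)
        # update is used here in place of the paper's Eq. (12).
        x_prev = (s_p / s_t) * x_t + (a_p - a_t * s_p / s_t) * x_phi
        target = consistency_model(ema_net, x_prev, t_prev, cond)

    pred_c = consistency_model(net, x_t, t, cond)
    pred_u = consistency_model(net, x_t, t, None)
    loss = (pseudo_huber(pred_c, target) / (t - t_prev)).mean() \
           + lambda_c * pseudo_huber(pred_u, x_eps).mean()
    return loss

@torch.no_grad()
def ema_update(ema_net, net, decay=0.999):
    # theta^- <- decay * theta^- + (1 - decay) * theta
    for p_ema, p in zip(ema_net.parameters(), net.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)
```

In training, `mlct_training_step` would be applied to mini-batches of autoencoder latents, with `ema_update` called after each optimizer step.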
We follow the consistency model [23] to determine ti = (\u03f51/\u03c1 + i\u22121 N\u22121(T 1/\u03c1 \u2212\u03f51/\u03c1))\u03c1, where \u03c1 = 2. For balance training, we set \u03bbj as 0.001. All the proposed models are trained with the AdamW optimizer with a learning rate of 10\u22124 on a single RTX 4090 GPU. The size of each mini-batch is 64 and 128 for the autoencoder and denoising network, and the training process has been iterated with 1500 and 2000 epochs for the autoencoder and denoising network. 5.3 Comparisons to State-of-the-art Methods The test results of HumanML and KIT are shown in Tab. 1 and Tab. 2, respectively. Our framework achieves the state-of-the-art generation performance. Compared to existing motion diffusion generation frameworks with more than 50-1000 iterations (e.g., MDM, MotionDiffuse, and MLD), our approach reduces the computational burden by more than tenfold without severely degrading the quality of damage generation. Remarkably, our inference pipeline is very concise, with no tricks such as additional text preprocessing as used in GraphMotion. Sampling in fewer steps also has 7 \fReal MDM MLD T2M-GPT Our Figure 3: Qualitative analysis of our model and previous models. We provide three textual prompts for the motion visualization results. We achieve better motion generation performance to match some text conditions with fewer NFE. not significantly reduced diversity and multi-modality metrics, which remain competitive. Fig. 3 shows the comparison of the visualization results with the previous model. 5.4 Ablation Study Table 3: Ablation study of our framework with more generation metrics under different guidance parameters. The meaning of the markers is the same as in Tab. 1. Dataset w R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 0 0.742\u00b1.006 0.717\u00b1.028 3.051\u00b1.021 2.496\u00b1.065 0.5 0.771\u00b1.006 0.504\u00b1.021 2.885\u00b1.023 1.935\u00b1.044 1 0.775\u00b1.005 0.494 \u00b1.019 2.831\u00b1.021 1.844\u00b1.049 1.5 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 2 0.777\u00b1.006 0.518\u00b1.016 2.799\u00b1.023 1.612\u00b1.041 Effectiveness of each component. We explore the generative performance of the classifier-free guidance technique under different representations, and the results are reported in Fig. 4. When the guidance coefficient w equals to 0, the model degenerates into a vanilla consistency model. We discover that increasing various degrees of classifier-free guidance accelerates consistency training convergence and improves generation quality. The pixel-discrete motion representation via the quantized autoencoder has better convergence ability generation performance compared to the continuous motion representation. In particular, under the same consistency training parameters, we have not observed significant gains in generation quality from variational constraints compared to the vanilla autoencoder. We further discuss more comprehensive generation metrics at different guidance parameters and the results are reported in Tab. 3. As the guidance parameters increase, controllability and generation quality gradually improve, with a corresponding decrease in diversity. 
In contrast to the larger guidance parameters employed in the traditional diffusion framework 8 \f500 1000 1500 2000 Epoch ( = 0.0) 0 2 4 6 HumanML3D FID 500 1000 1500 2000 Epoch ( = 0.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.0) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 2.0) 0 2 4 6 FID Auto-Encoder Variational Auto-Encoder Quantized Auto-Encoder 500 1000 1500 2000 Epoch ( = 0.0) 0 2 4 6 KIT FID 500 1000 1500 2000 Epoch ( = 0.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.0) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 2.0) 0 2 4 6 FID Figure 4: Ablation study of the quantized autoencoder employed in our framework with the conventional variational autoencoder and the vanilla autoencoder under different guidance parameters. We repeat all evaluations 3 times at each 50 epoch and report the average values. (which can usually be set to 7), we find that there is no contribution to the generation quality starting from w greater than 2 in the consistency training framework. Table 4: Ablation study of different number of token and sizes of representation finite set. The meaning of the markers is the same as in Tab. 1. Dataset Token l R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 2 100 0.770\u00b1.006 0.599\u00b1.025 2.870\u00b1.020 1.656\u00b1.043 2 500 0.774\u00b1.005 0.550\u00b1.019 2.829\u00b1.018 1.769\u00b1.021 2 2000 0.775\u00b1.005 0.428\u00b1.016 2.844\u00b1.019 1.645\u00b1.045 4 1000 0.781\u00b1.003 0.489\u00b1.021 2.823\u00b1.021 1.859\u00b1.044 6 1000 0.781\u00b1.004 0.465\u00b1.021 2.821\u00b1.019 1.839\u00b1.055 2 1000 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 Ablation study on the different model hyperparameters. In Tab. 4, we test the model performance with different hyperparameters. Consistent with the findings of MLD, increasing the number of tokens does not remarkably increase the generation quality. Appropriately increasing the size of the finite set 2l + 1 is beneficial in improving the generation results, and such gain is no longer significant when l is larger than 1000. Table 5: Ablation study of different number of function evaluations. Dataset NFE R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 1 0.777\u00b1.005 0.567\u00b1.002 2.865\u00b1.013 1.424\u00b1.040 3 0.781\u00b1.005 0.409\u00b1.014 2.812\u00b1.019 1.598\u00b1.037 5 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 8 0.783\u00b1.006 0.400\u00b1.015 2.810\u00b1.017 1.667\u00b1.051 10 0.786\u00b1.006 0.395\u00b1.015 2.795\u00b1.019 1.663\u00b1.049 Ablation study on the different sampling steps. Our generation results at different sampling steps are further shown in Tab. 5. We have excellent results with fewer sampling steps, but when the number of sampling steps is increased to more than 15, the increased number of sampling steps does not result in a quality payoff. It is a common problem with consistency training. 9 \f5.5 Time Cost Table 6: Comparison of inference time with previous sota models. Method MDM MLD T2M-GPT GraphMotion Our (NFE 5) Our (NFE 3) AST (s) 7.5604 0.0786 0.2168 0.5417 0.0141 0.0098 The consistency training method we use does not require prior training of the diffusion model, so training is inexpensive and is available on just a single 4090. On the HumanML dataset, we train the encoder in 15 hours and the denoiser in 12 hours. 
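For completeness, a generic multi-step consistency sampling loop in the style of the consistency-model literature is sketched below, reusing the `consistency_model` and schedule helpers assumed earlier; the concrete timestep list is also an assumption.

```python
import torch
# consistency_model, alpha, sigma: helpers from the earlier sketches.

@torch.no_grad()
def sample_motion_latents(net, cond, timesteps, shape):
    """Generic multi-step consistency sampling: NFE equals len(timesteps).
    timesteps: descending list, e.g. 5 values from T down toward eps."""
    batch = shape[0]
    x = torch.randn(shape)                                        # prior sample at t = T
    t = torch.full((batch,), timesteps[0])
    z = consistency_model(net, x, t, cond)                        # first denoising step
    for t_val in timesteps[1:]:
        t = torch.full((batch,), t_val)
        x_t = alpha(t).view(-1, 1, 1) * z + sigma(t).view(-1, 1, 1) * torch.randn(shape)
        z = consistency_model(net, x_t, t, cond)                  # re-noise, then denoise
    return z                                                      # decoded to motion by D
```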
Benefiting from the consistency sampling strategy, our inference time is also more than tenfold less than existing models. A more detailed time comparison is reported in Tab. 6. 6", + "additional_graph_info": { + "graph": [ + [ + "Minghao Zhu", + "Ronghao Dang" + ] + ], + "node_feat": { + "Minghao Zhu": [ + { + "url": "http://arxiv.org/abs/2309.00297v1", + "title": "Fine-Grained Spatiotemporal Motion Alignment for Contrastive Video Representation Learning", + "abstract": "As the most essential property in a video, motion information is critical to\na robust and generalized video representation. To inject motion dynamics,\nrecent works have adopted frame difference as the source of motion information\nin video contrastive learning, considering the trade-off between quality and\ncost. However, existing works align motion features at the instance level,\nwhich suffers from spatial and temporal weak alignment across modalities. In\nthis paper, we present a \\textbf{Fi}ne-grained \\textbf{M}otion\n\\textbf{A}lignment (FIMA) framework, capable of introducing well-aligned and\nsignificant motion information. Specifically, we first develop a dense\ncontrastive learning framework in the spatiotemporal domain to generate\npixel-level motion supervision. Then, we design a motion decoder and a\nforeground sampling strategy to eliminate the weak alignments in terms of time\nand space. Moreover, a frame-level motion contrastive loss is presented to\nimprove the temporal diversity of the motion features. Extensive experiments\ndemonstrate that the representations learned by FIMA possess great\nmotion-awareness capabilities and achieve state-of-the-art or competitive\nresults on downstream tasks across UCF101, HMDB51, and Diving48 datasets. Code\nis available at \\url{https://github.com/ZMHH-H/FIMA}.", + "authors": "Minghao Zhu, Xiao Lin, Ronghao Dang, Chengju Liu, Qijun Chen", + "published": "2023-09-01", + "updated": "2023-09-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION With the enormous growth of uncurated data on the Internet, selfsupervised learning came with the promise of learning powerful representations from unlabelled data that can be transferred to various downstream tasks. In particular, contrastive self-supervised learning based on instance discrimination [61] has achieved great success in both NLP [7, 13] and computer vision [10, 23, 46]. In the video domain, this learning diagram has also presented promising performance by keeping the instances within the same video semantically consistent [16, 45]. However, vanilla video contrastive learning has difficulty modeling local temporal information [12, 43] and possesses severe background bias [14, 55], which limits the generalization and transferability of the learned representations. The reason may stem from the existence of static bias in the positive pair construction. The features of temporally separated instances could be easily pulled close by attending to the static cues while neglecting the dynamic details, which provide crucial information for discrimination and downstream tasks. To alleviate the background bias in the context of contrastive learning, an effective method is to introduce motion information. arXiv:2309.00297v1 [cs.CV] 1 Sep 2023 \fMM \u201923, October 29\u2013November 3, 2023, Ottawa, ON, Canada. 
Minghao Zhu, Xiao Lin, Ronghao Dang, Chengju Liu, & Qijun Chen Previous works incorporate motion information either by constructing motion-prominent positive pairs via meticulously designed data augmentations [14, 54, 55] or using other modalities to explicitly enhance motion information in feature space [33, 39, 62]. Among them, aligning the motion features of optical flow in an explicit way achieves impressive results. But the expensive computation cost limits the scalability of optical flow on large-scale datasets. Hence, how to incorporate motion information effectively without resorting to costly accessories has attracted a lot of attention. Frame difference, an alternative with a negligible cost for optical flow, can extract motion across frames by removing the still background. But its quality is extremely susceptible to background noise caused by camera jitter or drastic changes in the background. Several approaches [15, 43, 49, 50] employ it in video contrastive learning and show promising results. Even with notable improvements in performance, how to effectively align the RGB and frame difference features in the latent space still not be fully explored. The previous works align the features of RGB and frame difference from temporally different views by using global feature vectors [15, 50], which may suffer from the weak alignment between the two modalities. As illustrated in Fig. 1, we summarize and categorize the weak alignment problem into two types: temporal weak alignment and spatial weak alignment. Temporal weak alignment signifies that two clips at different timestamps are not semantically consistent in the concept of motion. Some action classes consist of multiple stages of sub-motions. For example, high jumping in Fig. 1 has two motion stages: running and jumping. These two stages have very different motion semantics, and directly pulling them close in the feature space will make the model invariant to the motion semantic changes along the temporal dimension. Spatial weak alignment can be viewed as an inherent problem of the instance-level contrastive learning framework. The global average pooling extends the receptive field of the features to the entire view and compresses all information into a one-dimensional vector, resulting in the pooled features losing spatial information and being distracted by background noise. Simply aligning the pooled features may lead to the misalignment of background features and inadequate alignment of foreground features. As shown in Fig. 1, since the frame difference contains a lot of background noise, the alignment of pooled features becomes sub-optimal for introducing significant motion information. In this paper, we present a novel framework to introduce wellaligned motion features in video contrastive learning, namely Finegrained Motion Alignment (FIMA). We argue that the alignment of motion features should be at a more fine-grained level. To this end, we discard the global average pooling and construct a pixel-level contrastive learning framework, where each pixel of the RGB feature map tries to predict the motion feature pixels at the same spatial location but the different timestamps. In this way, the alignments of foreground and background features are sufficient and decoupled. Based on it, we address the temporal weak alignment by designing a motion decoder that takes the RGB feature map from the target view as the bridge between temporally distant positive pairs. 
This task requires the model to not only learn the correct correspondence between RGB features from the target and the source view but also encode enough motion information for cross-modality reconstruction. Thus, spatiotemporally discriminative motion features can be learned. To tackle the spatial weak alignment, we propose a foreground sampling strategy to filter out the background pixels in the construction of positive pairs, hence avoiding the distraction of background noise. In addition, we further propose a frame-level motion reconstruction task for improving the temporal diversity of the motion features. Given a frame of RGB feature map, the motion decoder learns to reconstruct the exactly overlapped local motion feature and distinguish it from others. We summarize our main contributions as follows: \u2022 We demonstrate the weak alignment problem between RGB and frame difference modalities, and analyze the influence of the weak alignment in terms of time and space. \u2022 We present a novel FIMA framework, consisting of a dense contrastive learning paradigm, a foreground sampling strategy, and a motion decoder to eliminate the weak alignment of two modalities at the pixel and the frame level, which enhances the selfsupervised representations. \u2022 Our framework achieves state-of-the-art or competitive results on two downstream tasks, action recognition, and video retrieval, across UCF-101, HMDB-51, and Diving-48 datasets. 2 RELATED WORK Self-supervised learning in videos. Self-supervised learning aims to learn transferable representations from large-scale unlabeled data. In the video domain, early works focus on encoding intrinsic structure information by solving sophisticated designed pretext tasks, including temporal transformation [5, 26, 38, 60, 65, 66], statistics prediction [17, 56], and spatiotemporal puzzles [29, 35]. Later, contrastive learning based on instance discrimination [61] makes great progress in the image domain [10, 19, 23]. Some works extend it into video representation learning and achieve promising results [16, 45]. To further enhance video representations, a line of works construct various positive views by applying spatiotemporal augmentations [4, 14, 31, 41, 44]. Besides, [9, 24, 68] formulate predictive tasks in a contrastive manner. [1, 28] leverage the idea of clustering in video contrastive learning. [1, 37, 47] seek consistency between multi-model data, such as audio, text, and optical flow. Alleviation of background bias in videos. Background bias in commonly used datasets may lead to the model over-focusing on static cues, resulting in poor generalization. To overcome this, [34] proposes a procedure to assemble a less biased dataset. In the context of video self-supervised learning, DSM [54], BE [55], and FAME [14] implicitly mitigate the background bias by constructing motion-prominent positive or negative samples. [33, 39, 62] explicitly introduce motion information into feature space from other modalities like optical flow. DCLR [15] uses frame difference as motion supervision and decouples it from data input and feature space. In this paper, we address the background bias by selectively introducing significant motion information from frame difference. Dense supervision in contrastive learning. Dense contrastive learning is initially devised for dense prediction tasks in the image domain [59, 64], such as object detection and semantic segmentation. 
It preserves spatial information by constructing pixel-level positive pairs between dense features from different views. In the video domain, some works exploit dense supervision in contrastive learning. [21, 22, 25] propose to predict dense feature maps in the \fFine-Grained Spatiotemporal Motion Alignment for Contrastive Video Representation Learning MM \u201923, October 29\u2013November 3, 2023, Ottawa, ON, Canada. future. They generate positive samples by applying consistent geometric transformation across a video. However, such a strategy can not induce sufficient occlusion invariance, which has been proven to be crucial for contrastive representation learning [42]. [67] extends [59] to the video domain and constructs region-level positive pairs by calculating the correspondence between local RGB features. But this strategy hardly establishes a correct correspondence between RGB and frame difference due to the discrepancy between the two modalities. To address these limitations, our method extends [64] to the spatiotemporal domain and constructs positive pairs based on spatial location prior. 3 METHOD An overview of our proposed framework is presented in Fig. 2. In Section 3.1, we first revisit vanilla spatiotemporal contrastive learning and extend it to the pixel level to generate dense motion supervision. In Section 3.2, we elaborate on the motion decoder and foreground sampling strategy designed to eliminate temporal and spatial weak alignment. Finally, we present the frame-level motion reconstruction task to improve the feature diversity in Section 3.3. 3.1 Pixel-level Contastive Learning in Videos Given a video, the vanilla instance-level spatiotemporal contrastive learning randomly samples two video clips \b \ud835\udc49, \u02dc \ud835\udc49 \t at different timestamps. The clips are augmented by temporally consistent augmentation [45] and processed by a feature encoder with the global average pooling to extract corresponding video-level representations \b \ud835\udf08, \u02dc \ud835\udf08 \t . The prevalent InfoNCE loss [40] is adopted for optimization: LVV = \u2212log \u210e\u0000\ud835\udf08\ud835\udc5e, \u02dc \ud835\udf08\ud835\udc58\u0001 \u210e\u0000\ud835\udf08\ud835\udc5e, \u02dc \ud835\udf08\ud835\udc58\u0001 + \u00cd \u02c6 \ud835\udf08\ud835\udc58\u210e\u0000\ud835\udf08\ud835\udc5e, \u02c6 \ud835\udf08\ud835\udc58\u0001 , (1) where \u210e(\ud835\udc65,\ud835\udc66) = exp \u0000\ud835\udc54(\ud835\udc65)\ud835\udc47\ud835\udc54(\ud835\udc66)/\u2225\ud835\udc54(\ud835\udc65)\u2225\u2225\ud835\udc54(\ud835\udc66)\u2225\ud835\udf0f\u0001 measures the similarity between two projected feature vectors \ud835\udc54(\ud835\udc65) and \ud835\udc54(\ud835\udc66); \ud835\udc54(\u00b7) is a non-linear projection head network; \ud835\udf0fis the temperature parameter; Negative keys \u02c6 \ud835\udf08\ud835\udc58are taken from a memory bank as we follow the network design of MoCo [23]. Note that the superscripts \ud835\udc5eand \ud835\udc58indicate the features extracted by the query encoder and the momentum encoder, respectively. This training objective aims to pull the query and its positive key closer while it repels other negative keys in the latent space. However, the representations learned based on Eq. (1) are easily overwhelmed by static background cues [14, 55] and lack the ability to capture dynamic motion information [12]. To introduce motion information, previous works [15, 50] use frame difference as the other motion representation learning branch. 
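A minimal sketch of the instance-level objective in Eq. (1), in the MoCo style described above, is given below; the same form is reused for the motion branch in Eq. (2). Shapes, the temperature value, and the memory-bank handling are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k_pos, k_neg, tau=0.07):
    """Instance-level InfoNCE of Eq. (1): q and k_pos are (B, C) projected clip features
    from the query and momentum encoders; k_neg is a (K, C) memory bank of negatives."""
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    k_neg = F.normalize(k_neg, dim=1)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)            # (B, 1) positive similarities
    l_neg = q @ k_neg.t()                                    # (B, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive is index 0
    return F.cross_entropy(logits, labels)
```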
We calculate the frame difference \u02dc \ud835\udc37by differentiating adjacent frames of \u02dc \ud835\udc49and extract the motion feature \u02dc \ud835\udc51\ud835\udc58. The motion contrastive loss is as follows: LVD = \u2212log \u210e\u0000\ud835\udf08\ud835\udc5e, \u02dc \ud835\udc51\ud835\udc58\u0001 \u210e\u0000\ud835\udf08\ud835\udc5e, \u02dc \ud835\udc51\ud835\udc58\u0001 + \u00cd \u02c6 \ud835\udc51\ud835\udc58\u210e\u0000\ud835\udf08\ud835\udc5e, \u02c6 \ud835\udc51\ud835\udc58\u0001 , (2) where \u02c6 \ud835\udc51\ud835\udc58is the motion features of other videos in a memory bank. As mentioned in Section 1, the globally average-pooled features cannot be well-aligned due to the weak alignment between modalities. Thus, we extend the dense contrastive learning framework [64] to the spatiotemporal domain to generate dense motion supervision. The representations extracted from \b \ud835\udc49, \u02dc \ud835\udc37 \t are kept as feature maps, denoted as \b \ud835\udc39\ud835\udc5e, \u02dc \ud835\udc40\ud835\udc58\t \u2208R\ud835\udc47\u00d7\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36, where \ud835\udc47, \ud835\udc3b,\ud835\udc4a,\ud835\udc36are the dimensions of the time, height, width, channel, respectively. \ud835\udc39\ud835\udc5e \ud835\udc56and \u02dc \ud835\udc40\ud835\udc58 \ud835\udc56represent the \ud835\udc56-th frame of the feature maps. For each feature pixel \ud835\udc53\ud835\udc5e \ud835\udc56\u2208\ud835\udc39\ud835\udc5e \ud835\udc56, we assign a positive pixel set \ud835\udf19\ud835\udc58 \ud835\udc5d\u2286\u02dc \ud835\udc40\ud835\udc58 \ud835\udc56based on the spatial location prior. Specifically, we record the original coordinates of the clip when applying the geometric transformation (e.g. crop and flip). Then, each feature pixel in \ud835\udc39\ud835\udc5e \ud835\udc56and \u02dc \ud835\udc40\ud835\udc58 \ud835\udc56is warped to the original video spatial space, and the two-dimensional (height and width) Euclidean distances between \ud835\udc53\ud835\udc5e \ud835\udc56and all pixels in \u02dc \ud835\udc40\ud835\udc58 \ud835\udc56 are computed. The distances are normalized to the diagonal length of a feature map bin and a hyperparameter T is used to measure the distance in scale. We set the threshold T = 0.7 by default. The pixels in \u02dc \ud835\udc40\ud835\udc58 \ud835\udc56with a distance smaller than T are assigned to the \ud835\udf19\ud835\udc58 \ud835\udc5d. The rest of the pixels in the feature map \u02dc \ud835\udc40\ud835\udc58and motion feature pixels from other videos are assigned to negative pixel set \ud835\udf19\ud835\udc58 \ud835\udc5b. The pixel-level motion contrastive loss can be formulated as: LPix = \u2212log \u00cd \u02dc \ud835\udc5a\ud835\udc58 \ud835\udc56\u2208\ud835\udf19\ud835\udc58 \ud835\udc5d \u210e\u0000\ud835\udc53\ud835\udc5e \ud835\udc56, \u02dc \ud835\udc5a\ud835\udc58 \ud835\udc56 \u0001 \u00cd \u02dc \ud835\udc5a\ud835\udc58 \ud835\udc56\u2208\ud835\udf19\ud835\udc58 \ud835\udc5d \u210e\u0000\ud835\udc53\ud835\udc5e \ud835\udc56, \u02dc \ud835\udc5a\ud835\udc58 \ud835\udc56 \u0001 + \u00cd \u02c6 \ud835\udc5a\ud835\udc58 \ud835\udc56\u2208\ud835\udf19\ud835\udc58 \ud835\udc5b \u210e\u0000\ud835\udc53\ud835\udc5e \ud835\udc56, \u02c6 \ud835\udc5a\ud835\udc58 \ud835\udc56 \u0001 . (3) Note that the projection head \ud835\udc54(\u00b7) here is instantiated as two successive 1 \u00d7 1 convolution layers to adapt the input form of the feature map. 
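The following is a simplified per-frame sketch of the positive-pixel assignment by spatial location prior and of the resulting loss in Eq. (3); variable names, shapes, and the handling of the warped coordinates are assumptions.

```python
import torch
import torch.nn.functional as F

def assign_positive_pixels(coords_q, coords_k, bin_diag, thresh=0.7):
    """Positive assignment by spatial location prior for one frame.
    coords_q: (Nq, 2) original-video (h, w) coordinates of the RGB feature pixels;
    coords_k: (Nk, 2) coordinates of the frame-difference feature pixels;
    bin_diag: diagonal length of one feature-map bin, used to normalize distances."""
    dist = torch.cdist(coords_q.float(), coords_k.float()) / bin_diag
    return dist < thresh                                   # (Nq, Nk) boolean positive mask

def pixel_motion_nce(f_q, m_k, pos_mask, m_neg, tau=0.07):
    """Per-frame form of Eq. (3). f_q: (Nq, C) query pixels, m_k: (Nk, C) motion pixels of
    the same frame, m_neg: (K, C) motion feature pixels from other videos."""
    f_q, m_k, m_neg = (F.normalize(x, dim=1) for x in (f_q, m_k, m_neg))
    sim = torch.exp(f_q @ m_k.t() / tau)                   # (Nq, Nk)
    sim_neg = torch.exp(f_q @ m_neg.t() / tau).sum(dim=1)  # (Nq,)
    pos = (sim * pos_mask).sum(dim=1)
    neg = (sim * ~pos_mask).sum(dim=1) + sim_neg
    valid = pos_mask.any(dim=1)                            # keep pixels with >= 1 positive
    return -torch.log(pos[valid] / (pos[valid] + neg[valid])).mean()
```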
The loss is averaged over all feature pixels in $F^q$ with at least one positive pair (i.e., the corresponding $\phi^k_p \neq \emptyset$). Intuitively, given an RGB feature pixel in the source view, the network learns to predict the motion features at the same spatial location in the target view.

3.2 Fine-grained Motion Feature Alignment

The pixel-level $\mathcal{L}_{\mathrm{Pix}}$ constructs a framework for fully utilizing dense motion supervision. Based on it, we propose to separately eliminate the weak alignment of motion features in terms of time and space.

Temporal Weak Alignment Elimination. The positive pairs constructed by $\mathcal{L}_{\mathrm{Pix}}$ still have two limitations: first, the loss pulls features at different timestamps close, so the model becomes invariant to motion semantic changes along the temporal dimension; second, the receptive field of a feature pixel only covers a limited region, which may lead to a greater semantic discrepancy between positive pairs. To address these limitations, we propose to use the RGB features $\tilde{F}^q$ extracted from the target view $\tilde{V}$ by the query encoder as the bridge between $F^q$ and $\tilde{M}^k$. To this end, we design a motion decoder based on the attention mechanism, which is quite effective in spatiotemporal dependence modeling [2, 6, 58, 67]. Specifically, we consider the dense features $\tilde{F}^q_i$ as a collection containing various information. For each RGB feature pixel $f^q_i \in F^q_i$, the motion decoder tries to reconstruct the motion feature at the same spatial location in the target view by querying the information in the collection $\tilde{F}^q_i$. Then the pixel-level motion contrastive loss becomes:

$$y^q_i = \mathrm{MD}\big(f^q_i, \tilde{F}^q_i, \tilde{F}^q_i\big), \qquad (4)$$

$$\mathcal{L}_{\mathrm{Pix}} = -\log \frac{\sum_{\tilde{m}^k_i \in \phi^k_p} h\big(y^q_i, \tilde{m}^k_i\big)}{\sum_{\tilde{m}^k_i \in \phi^k_p} h\big(y^q_i, \tilde{m}^k_i\big) + \sum_{\hat{m}^k_i \in \phi^k_n} h\big(y^q_i, \hat{m}^k_i\big)}, \qquad (5)$$

where $\mathrm{MD}(Q, K, V)$ refers to the motion decoder implemented by the standard transformer layer [53]; $Q \in \mathbb{R}^{1\times d}$ is the query feature; $K, V \in \mathbb{R}^{HW\times d}$ are key-value pairs; $d$ is the query/key dimension; $y^q_i$ is the output of the motion decoder.

Figure 2: Overview of the framework. We sample two temporally distant clips $\{V, \tilde{V}\}$ and compute the frame difference $\tilde{D}$. The corresponding dense feature maps $\{F^q, \tilde{F}^q, \tilde{M}^k\}$ are extracted by the encoder or its momentum version. We sample the foreground features at the $i$-th frame of $F^q$ and concatenate them with a class token, then feed them into the motion decoder. We use the motion decoder to reconstruct the foreground features of $\tilde{M}^k$ in the $i$-th frame by collecting information from $\tilde{F}^q_i$. Finally, the class token is used to reconstruct the local motion feature whose time interval overlaps exactly with $\tilde{F}^q_i$.

The motion decoder builds the correct correspondence between the RGB features of the source view $F^q_i$ and the target view $\tilde{F}^q_i$ and avoids enforcing the features at different timestamps to be similar. It also requires every RGB feature pixel in $\tilde{F}^q_i$ to encode more motion information around itself for cross-modality reconstruction. Further, the attention operation extends the receptive field of the predicted feature pixels to the entire target view, thus eliminating the semantic discrepancy.

Spatial Weak Alignment Elimination. With the proposed dense contrastive learning framework, the alignment of each foreground and background feature pixel is decoupled. We can avoid the disturbance of noise by filtering out the background pixels. As a common visualization technique, class activation maps [3, 69] provide an intuitive way to localize the discriminative regions, by which we expect to classify the foreground and background pixels. However, we find that activation maps from either RGB or frame-difference features easily attend to incorrect background areas. The other observation is that the distribution of the activation map is relatively flat when the model's attention is disturbed by background noise. In other words, it tends to cover a larger range of the background with lower activation values. Instead, the distribution of the activation map is steeper when the model correctly captures the foreground information. The underlying reason is that the foreground information is more structural and concentrated, producing a distinct and dense activation region. Based on this observation, we propose to use the features of the two modalities, which carry complementary information, to jointly determine the foreground region. Concretely, we compute class-agnostic activation maps [3] of the two modalities by applying average pooling along the channel and time dimensions, then fuse them by simple point-wise addition.
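As a small illustration of this fusion and of the quantile-based split formalized in Eq. (6) just below, the following sketch computes the fused class-agnostic activation map and the resulting foreground mask. It is our own minimal approximation (plain channel/time average pooling on the two feature maps), not the exact implementation.

```python
import torch

def foreground_mask(rgb_feat, diff_feat, beta=0.5):
    """rgb_feat, diff_feat: (T, H, W, C) feature maps of the two modalities.
    Returns a (H, W) boolean mask: True = foreground pixel."""
    # Class-agnostic activation maps: average over time and channel dimensions.
    act_rgb = rgb_feat.mean(dim=(0, 3))      # (H, W)
    act_diff = diff_feat.mean(dim=(0, 3))    # (H, W)
    fused = act_rgb + act_diff               # point-wise addition of the two maps

    # Keep pixels above the beta-th quantile of the fused map (Eq. (6)).
    thresh = torch.quantile(fused.flatten(), beta)
    return fused > thresh
```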
We divide pixels in the target view $\tilde{M}^k$ into two mutually exclusive sets $\tilde{M}^k_{fg}$ and $\tilde{M}^k_{bg}$ based on the fused activation map $\tilde{Z}^k \in \mathbb{R}^{H\times W}$ as follows:

$$A\big(\tilde{m}^k_{i,j}\big) = \begin{cases} 1, & \text{if } \tilde{z}_j > \beta\text{-th quantile of } \tilde{Z}^k, \\ 0, & \text{otherwise}, \end{cases} \qquad (6)$$

where $\tilde{m}^k_{i,j}$ is a pixel in the feature map $\tilde{M}^k$; $i$ denotes the temporal index and $j \in \{(1,1), (1,2), \ldots, (H,W)\}$ is the spatial index; $\tilde{z}_j$ is the pixel at spatial index $j$ in the fused activation map; $\beta \in [0, 1]$ is a hyper-parameter that describes the portion of the foreground. We set $\beta = 0.5$ by default. Similarly, we obtain the foreground set $F^q_{fg}$ and the background set $F^q_{bg}$ of the source view $F^q$ in the same manner. We only sample positive pairs in the foreground regions, and the pixel-level motion contrastive loss becomes:

$$y^q_i = \mathrm{MD}\big(f^q_i, \tilde{F}^q_i, \tilde{F}^q_i\big), \qquad (7)$$

$$\mathcal{L}_{\mathrm{Pix}} = -\log \frac{\sum_{\tilde{m}^k_i \in \phi^k_p \cap \tilde{M}^k_{fg}} h\big(y^q_i, \tilde{m}^k_i\big)}{\sum_{\tilde{m}^k_i \in \phi^k_p \cap \tilde{M}^k_{fg}} h\big(y^q_i, \tilde{m}^k_i\big) + \sum_{\hat{m}^k_i \in \phi^k_n} h\big(y^q_i, \hat{m}^k_i\big)}, \qquad (8)$$

where $\phi^k_n$ here indicates the rest of the pixels in the feature map $\tilde{M}^k$ except $\{\phi^k_p \cap \tilde{M}^k_{fg}\}$, plus motion feature pixels from other videos. The loss is averaged over all feature pixels in the foreground set $F^q_{fg}$ with at least one positive pair.

Table 1: Ablation study on the loss designs (linear probing and fine-tuning top-1 accuracy, %).

LVV | LPix | LFra | LVD | Linear UCF101 | Linear HMDB51 | Finetune UCF101 | Finetune HMDB51
✓ |   |   |   | 58.1 | 26.7 | 76.7 | 48.8
✓ |   |   | ✓ | 68.7 | 37.7 | 80.8 | 53.3
✓ | ✓ |   |   | 73.1 | 39.2 | 83.1 | 54.2
✓ |   | ✓ |   | 69.7 | 38.5 | 81.3 | 54.1
✓ | ✓ | ✓ |   | 75.3 | 42.8 | 84.2 | 57.8

3.3 Frame-level Motion Feature Reconstruction

The $\mathcal{L}_{\mathrm{Pix}}$ aligns the motion features at the pixel level. To further enhance the temporal diversity of the learned features, we propose a frame-level local motion reconstruction task. We divide the motion clip $\tilde{D}$ into $T$ sub-clips, where $T$ is the time dimension of the corresponding feature map $\tilde{M}^k$.
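The frame-level task reuses the same attention-based motion decoder as Eq. (4). As a concrete reference, below is a minimal sketch of such a decoder built from a standard PyTorch transformer decoder layer; the default sizes (2 layers, width 512, 4 attention heads) follow the implementation details reported later, but the module itself is our own illustrative approximation rather than the authors' released code.

```python
import torch
import torch.nn as nn

class MotionDecoder(nn.Module):
    """MD(Q, K, V): queries attend to the dense target-view RGB collection F~q_i."""
    def __init__(self, dim=512, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=depth)

    def forward(self, queries, memory):
        # queries: (B, Nq, dim) -- RGB feature pixels f^q_i, or a learnable [cls] token
        #                          for the frame-level reconstruction task.
        # memory:  (B, H*W, dim) -- target-view collection F~q_i used as keys/values.
        return self.decoder(tgt=queries, memory=memory)

# Toy usage: predict motion features y^q_i for every source-view pixel.
B, HW, dim = 2, 7 * 7, 512
f_q = torch.randn(B, HW, dim)          # source-view RGB pixels (queries)
f_q_tilde = torch.randn(B, HW, dim)    # target-view RGB pixels (keys/values)
y_q = MotionDecoder()(f_q, f_q_tilde)  # (B, HW, dim), contrasted against M~k
```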
Given a frame of the feature map $\tilde{F}^q_i$, the motion decoder aims to reconstruct the feature of the sub-clip whose time interval overlaps exactly with that of $\tilde{F}^q_i$. Before being input to the motion decoder, we prepend a learnable [cls] token to the sequence of features. For the output state of the class token $y^q_{cls}$ and the corresponding local motion feature $\tilde{l}^k_i$, the frame-level motion contrastive loss can be formulated as:

$$y^q_{cls} = \mathrm{MD}\big([\mathrm{cls}], \tilde{F}^q_i, \tilde{F}^q_i\big), \qquad (9)$$

$$\mathcal{L}_{\mathrm{Fra}} = -\log \frac{h\big(y^q_{cls}, \tilde{l}^k_i\big)}{h\big(y^q_{cls}, \tilde{l}^k_i\big) + \sum_{l^k \in \{\bar{L}^k, \hat{L}^k\}} h\big(y^q_{cls}, l^k\big)}, \qquad (10)$$

where $\bar{L}^k$ and $\hat{L}^k$ are the sets of local features in the same video and in other videos. We use the motion decoder shared with $\mathcal{L}_{\mathrm{Pix}}$ to avoid introducing extra overhead. By discriminating the positive from the intra-video negatives and inter-video negatives, the features extracted by the encoder become more temporally discriminative. The overall learning objective can be written as:

$$\mathcal{L} = \mathcal{L}_{\mathrm{VV}} + \mathcal{L}_{\mathrm{Pix}} + \mathcal{L}_{\mathrm{Fra}}, \qquad (11)$$

where we jointly optimize all losses and treat each term equally.

4 EXPERIMENTS

4.1 Implementation Details

Datasets. We conduct experiments on four standard video datasets: UCF101 [48], HMDB51 [32], Kinetics-400 [27], and Diving-48 [34]. We use the updated V2 version of Diving-48 for evaluation.

Technical Details. We choose two widely used backbones, R(2+1)D-18 [51] and I3D-22 [8], as the video encoder. The non-linear projection head is instantiated as two successive 1 × 1 convolution layers with an output dimension of 128 to adapt to the input form of the feature map. We closely follow the network design of MoCo-v2 [11]. Besides, the negative set $\phi^k_n$ in $\mathcal{L}_{\mathrm{Pix}}$ is implemented as a queue with a size of 784 for UCF101 and 31360 for Kinetics-400. We show more details in the supplementary material.

Table 2: Ablation study on the components in LPix, including the dense contrastive learning framework (DC), foreground sampling strategy (FS), and motion decoder (MD). The first line indicates LPix degrades to LVD. (Fine-tuning and linear probing top-1 accuracy, %, on UCF101.)

DC | FS | MD | Finetune | Linear
  |   |   | 80.8 | 68.7
✓ |   |   | 79.7 | 69.7
✓ | ✓ |   | 81.3 | 71.2
✓ |   | ✓ | 82.3 | 71.0
✓ | ✓ | ✓ | 83.1 | 73.1

We implement the motion decoder by using the standard transformer layer in [53]. As the default setting we use a 2-layer, 512-wide model with 4 attention heads. The class token is a learnable 512-dim embedding. A linear layer is attached on the two sides to adjust the feature dimension. The motion decoder is placed before the non-linear projection head. We add 1D absolute positional encodings to each feature frame before inputting it into the motion decoder.

Self-supervised Pre-training. In the pre-training phase, we randomly sample two 16-frame clips with a temporal stride of 2 at different timestamps and compute the frame difference.
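A minimal sketch of this sampling step, under our own assumptions (a fully decoded video tensor and a fixed temporal gap between the two clips; the gap size is an illustrative placeholder):

```python
import torch

def sample_two_clips(video, clip_len=16, stride=2, gap=32):
    """video: (T, C, H, W) decoded frames. Returns two temporally distant clips
    V and V~, plus the frame difference D~ of the target clip."""
    span = clip_len * stride
    max_start = video.shape[0] - (2 * span + gap)
    t1 = torch.randint(0, max_start + 1, (1,)).item()
    t2 = t1 + span + gap                      # second clip starts after a gap

    v = video[t1:t1 + span:stride]            # query clip V
    v_tilde = video[t2:t2 + span:stride]      # target clip V~
    d_tilde = v_tilde[1:] - v_tilde[:-1]      # adjacent-frame difference D~
    # (D~ has one fewer frame than the clip; padding is an implementation choice.)
    return v, v_tilde, d_tilde
```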
Each clip is randomly cropped and resized to the size of 224 \u00d7 224 or 112 \u00d7 112 and then undergoes random horizontal flip, color jittering, random grayscale, and Gaussian blur in a temporally consistent way [45]. We pretrain the model for 200 epochs on UCF101 or 100 epochs on Kinetics-400. Following the linear scaling rule of the learning rate [18], we set the initial learning rate to 0.00375 with a total batch size of 24 for UCF101 or 0.01 with a total batch size of 64 for Kinetics-400. A half-period cosine schedule is used for the learning rate decay. We adopt SGD as the optimizer with a momentum of 0.9 and a weight decay of 10\u22124. Downstream Task Evaluations. We evaluate the self-supervised representations on two downstream tasks: action recognition and video retrieval. Following the common practice [12, 22], we average the predictions of 10 uniformly sampled clips of a video as the final result. For action recognition, we use the weights of the pretrained network as initialization and evaluate the representations under linear probing and fine-tuning settings. We report the top-1 classification accuracy on split 1 of UCF101 and HMDB51, and the v2 test set of Diving-48. For video retrieval, we directly use the pre-trained model as a feature extractor without further training. Following [65], we extract the feature of each video in the test set as a query and retrieve the k-nearest neighbors in the training set by calculating the cosine similarity. We report the top-k recall R@k on UCF101 and HMDB51. 4.2 Ablation Study In this subsection, we perform in-depth ablation studies of FIMA. We pre-trained on split 1 of UCF101 with I3D for 200 epochs. Unless otherwise specified, we report the linear probing and fine-tuning Top-1 classification accuracy on UCF101 split 1. Overall Framework. We analyze how each loss function contributes to the overall learning objective. We show the results of linear probing and fine-tuning accuracies on UCF101 and HMDB51 \fMM \u201923, October 29\u2013November 3, 2023, Ottawa, ON, Canada. Minghao Zhu, Xiao Lin, Ronghao Dang, Chengju Liu, & Qijun Chen Table 3: Ablation studies on the hyperparametrs and key details. We report fine-tuning (ft) and linear probing (lin) accuracy on UCF101 split 1 unless otherwise specified. Default settings are colored in gray. (a) Distance threshold T. T = 0.7 yields better performance in general. threshold ft lin 0.35 82.6 71.9 0.7 83.1 73.1 1.4 83.4 71.0 2.8 80.6 69.8 (b) Motion decoder depth. 2 blocks of decoder is the optimal setting. blocks ft lin 1 81.8 71.2 2 83.1 73.1 3 82.7 71.9 (c) Decoder width and number of heads. Excess attention heads introduce noise. dim nheads ft lin 256 4 82.7 73.3 8 81.3 69.4 512 4 83.1 73.1 8 81.7 71.6 (d) Foreground ratio \ud835\udefd. Small \ud835\udefdis beneficial for the transferability of features ratio ft lin ft\u210e lin\u210e 0.3 82.2 71.6 56.0 41.2 0.5 83.1 73.1 54.2 39.2 0.7 81.9 71.5 54.3 39.0 (e) Foreground mask source. Using both modalities provides a more precise mask. source ft lin RGB 81.6 72.1 frame difference 82.7 71.1 both 83.1 73.1 (f) Foreground mask position. Filtering out the noise on both sides is important. position ft lin no mask 82.3 71.0 prediction side 81.8 71.7 target side 81.9 72.2 both 83.1 73.1 in Table 1. The vanilla instance-level video contrastive loss LVV serves as the baseline. We can observe that our pixel-level motion contrastive loss LPix improves the baseline by a large margin and significantly outperforms the instance-level loss LVD. 
This observation verifies our idea of eliminating the weak alignment. The framelevel local motion loss LFra also leads to notable performance gains. It is complementary to LPix since it improves temporal diversity by aligning motion features at the frame level. Components of LPix. To eliminate the weak alignment of motion features, we propose a dense contrastive learning framework, foreground sampling strategy, and motion decoder in LPix. We ablate the effectiveness of these components in Table 2. When only adopting the dense contrastive framework, the performances are compromised on the classification task [63]. The foreground sampling strategy and motion decoder can boost the performances independently or cooperatively by eliminating the spatial and temporal weak alignment. This also proves the existence of two kinds of weak alignment and the effectiveness of the proposed designs. Distance Threshold T. Table 3a ablates the distance threshold T in dense contrastive learning framework. This parameter describes the range of motion features as the contrast target of a pixel. T = 0.7 yields better performance in general. The result is in accordance with the one in [64]. Motion Decoder Design. We first conduct experiments with different decoder depths in Table 3b. A 2-layer shallow motion decoder achieves the best results. More layers lead to a decrease in the results. We reason that more decoder layers may lead to overfitting of model training on the small-scale UCF101 dataset. In Table 3c we ablate the decoder width and the number of heads. We observe that 8 attention heads decrease the performances. We argue that excess attention heads may sample information from some noisy latent subspaces. For the decoder width, considering that the motion decoder is also responsible for local motion reconstruction task, we use 512 dimensions by default. We provide more ablation studies in the supplementary material. Foreground Sampling Strategy. We study the influence of the foreground ratio \ud835\udefdin Table 3d. We additionally report the results on HMDB51 in this study, noted as ft\u210eand lin\u210e. An intriguing observation is that \ud835\udefd= 0.3 obtains better results on HMDB51 and \ud835\udefd= 0.5 performs best on UCF101. We argue that aligning motion features with a small foreground ratio introduces the most relevant and noise-less motion information, which is critical for the transferability of the learned representations. On the other hand, a relatively large foreground ratio can provide more cues for instance discrimination but inevitably introduces more noise. Table 3e studies the source of the foreground sampling mask. Using the combination of RGB and frame difference achieves the best results, as it locates the foreground region more precisely. Table 3f studies the position of the foreground sampling mask. Applying a foreground mask on the prediction or target side means filtering out the background feature pixels in the corresponding feature map. We find background noise on either the prediction or the target side could damage the learned representations. Thus sampling foreground features on both sides is important. 4.3 Evaluation on Downstream Tasks Action Recognition on UCF101 and HMDB51. We compare our method with the state-of-the-art self-supervised learning works on action recognition in Table 4. We report linear probing and fine-tuning Top-1 accuracy and list the detailed settings such as backbone architecture, number of frames, and resolution. 
For a fair comparison, we do not report methods using a deeper backbone or other modalities such as optical flow, audio, and text. In linear probing settings, our method achieves the best results on both datasets. As the major counterparts with R(2+1)D backbone, DCLR [15] and SDC [43] also utilize frame difference as the source of motion information. FIMA outperforms DCLR and SDC in general, which implies we align the features of frame difference more precisely. With the I3D backbone pre-trained on Kinetics-400 for 100 epochs, our method consistently surpasses FAME [14] on both UCF101 and HMDB51, which is pre-trained for 200 epochs on Kinetics-400. It suggests that explicitly incorporating motion information is more effective than in an implicit manner. \fFine-Grained Spatiotemporal Motion Alignment for Contrastive Video Representation Learning MM \u201923, October 29\u2013November 3, 2023, Ottawa, ON, Canada. Table 4: Action recognition performance on UCF101 and HMDB51 under linear probing and fine-tuning settings. \u2020 denotes our reproduced results that strictly follow the settings in the original paper. Method Backbone Pretrain Dataset Feames Res Linear Finetune UCF101 HMDB51 UCF101 HMDB51 VCP [36] R3D-18 UCF101 16 112 66.3 32.2 PRP [66] R(2+1)D UCF101 16 112 72.1 35.0 DCLR [15] R(2+1)D UCF101 16 112 67.1 40.1 82.3 50.1 SDC [43] R(2+1)D UCF101 16 112 67.4 40.7 82.1 49.7 FIMA(ours) R(2+1)D UCF101 16 112 71.2 41.1 84.1 56.0 BE [55] I3D UCF101 16 224 82.4 52.9 FAME [14] I3D UCF101 16 224 67.2\u2020 36.9\u2020 81.2 52.6 FIMA(ours) I3D UCF101 16 224 75.3 42.8 84.2 57.8 CCL [30] R3D-18 Kinetics-400 16 112 52.1 27.8 69.4 37.8 MemDPC [22] R3D-34 Kinetics-400 40 224 54.1 30.5 78.1 41.2 RSPNet [9] R(2+1)D Kinetics-400 16 112 61.8 42.8 81.1 44.6 LSFD [4] R3D-18 Kinetics-400 32 128 77.2 53.7 MLFO [44] R3D-18 Kinetics-400 16 112 63.2 33.4 79.1 47.6 VideoMoCo [41] R(2+1)D Kinetics-400 32 112 78.7 49.2 TCLR [12] R(2+1)D Kinetics-400 16 112 84.3 54.2 DCLR [15] R(2+1)D Kinetics-400 16 112 72.3 46.4 83.3 52.7 FAME [14] R(2+1)D Kinetics-400 16 112 72.2 42.2 84.8 53.5 SDC [43] R(2+1)D Kinetics-400 16 112 72.1 45.9 86.1 54.8 FIMA(ours) R(2+1)D Kinetics-400 16 112 73.1 45.5 86.7 59.4 DSM [54] I3D Kinetics-400 16 224 74.8 52.5 BE [55] I3D Kinetics-400 16 224 86.8 55.4 FAME [14] I3D Kinetics-400 16 224 75.3\u2020 46.7\u2020 88.6 61.1 FIMA(ours) I3D Kinetics-400 16 224 76.4 47.3 88.5 62.1 Table 5: Action recognition performance on Diving-48. All models are pre-trained on Kinetics-400. Method Backbone Res. Finetune TCLR [12] R3D-18 112 22.9 BE [55] I3D 224 62.4 FAME [14] I3D 224 72.9 FIMA(ours) R(2+1)D 112 74.7 In fine-tuning settings, our method with R(2+1)D obtains state-ofthe-art results on both datasets. Remarkably, our R(2+1)D model pretrained on UCF101 gets 56.0% classification accuracy on HMDB51, which outperforms all existing methods pre-trained on Kinetics-400. It demonstrates the data efficiency of our method and the high transferability of the learned representations. For the I3D network, FIMA pre-trained on UCF101 outperforms FAME by 3.0% and 5.2% on two datasets. When conducting pre-training on Kinetics-400, FIMA achieves competitive results with FAME with half training epochs (100 epochs vs. 200 epochs). Additionally, we report fine-tuning results on the less biased Diving-48 dataset [34] in Table 5. Our R(2+1)D pre-trained model with 112 \u00d7 112 resolution outperforms previous methods with a larger backbone. 
It demonstrates that our method introduces truly aligned motion features and effectively suppresses background bias. Table 6: Recall-at-topK(%). Video retrieval performance under different K values on UCF101 and HMDB51. \u2020 denotes our reproduced results. Method Backbone UCF101 HMDB51 R@1 R@5 R@10 R@20 R@1 R@5 R@10 R@20 Pace [57] R(2+1)D 25.6 42.7 51.3 61.3 12.9 31.6 43.2 58.0 MLFO [44] R3D-18 39.6 57.6 69.2 78.0 18.8 39.2 51.0 63.7 CACL [20] R(2+1)D 41.5 59.7 68.4 77.6 16.4 38.0 49.6 63.4 DCLR [15] R(2+1)D 54.8 68.3 75.9 82.8 24.1 44.5 53.7 64.5 FIMA(ours) R(2+1)D 52.2 68.8 77.0 84.1 24.2 46.4 59.4 72.2 DSM [54] I3D 17.4 35.2 45.3 57.8 7.6 23.3 36.5 52.5 FAME\u2020 [14] I3D 52.8 67.9 75.9 82.3 20.7 43.3 56.4 69.7 FIMA(ours) I3D 54.0 69.4 77.1 84.8 24.5 48.7 59.5 72.6 Video Retrieval. We show the video retrieval performance on UCF101 and HMDB51 in Table 6. All models are pre-trained on UCF101 with a resolution of 112\u00d7112 for R(2+1)D and 224\u00d7224 for I3D. For R(2+1)D backbone, our method generally performs better than prior work DCLR [15] but slightly worse in the R@1 metric on UCF101. For the I3D backbone, FIMA achieves superior results on both datasets. The stable performance improvements with different network architectures demonstrate the effectiveness and strong generalization of our method. \fMM \u201923, October 29\u2013November 3, 2023, Ottawa, ON, Canada. Minghao Zhu, Xiao Lin, Ronghao Dang, Chengju Liu, & Qijun Chen Nunchucks PoleVault PlayingCello Billiards StillRings Pix \uf04c RGB + VV \uf04c VV \uf04c Pix \uf04c RGB + VV \uf04c VV \uf04c ThrowDiscus Figure 3: Class-agnostic activation map visualization for MoCo baseline (middle column) and MoCo+LPix (right column). LPix is effective for alleviating background bias. 4.4 Visualization of Model Attention To demonstrate the effectiveness of the proposed pixel-level motion contrastive loss LPix for alleviating the background bias, we adopt the class-agnostic activation map [3] to visualize the model attention. The qualitative results of the I3D model are presented in Fig. 3. We can observe that the model pre-trained with the vanilla contrastive loss LVV severely suffers from background bias and falsely attends to static cues. Since LPix introduces fine-grained motion information, the model pre-trained with LVV + LPix correctly concentrates on the foreground area with significant motion. 4.5 Why Is LPix More Effective? To understand why is LPix more effective than the vanilla motion contrastive loss LVD, we pre-trained the I3D network with LVV + LVD and LVV + LPix, respectively. We visualize the classagnostic activation maps in Fig. 4. It can be observed that the motion information introduced by LVD limited to the local area, while some significant motion cues are neglected. For example, in row-1 on the left, the ground-truth label is \"BodyWeightSquats\". The attention of the model trained with LVV + LVD concentrates on the upper part of the human body. However, the movement of the legs is critical to discriminate the \"BodyWeightSquats\" from other motions. On the contrary, the attention of the model trained with LVV +LPix covers the whole foreground area, thus providing more comprehensive and holistic motion details for downstream tasks. 4.6 Are Motion Features Well Aligned? To verify whether the motion features are well aligned, we visualize the affinity matrices that describe the pairwise relationships between RGB and motion feature pixels. 
Specifically, we apply the average pooling to the extracted feature maps along the time dimension and then flat the output to the shape of 49 \u00d7 1024. Each pixel feature is normalized and the cosine similarity is used to calculate the relationship. The final matrix is averaged over 50 randomly selected video clips in the test set of UCF101. As shown in Fig. 5 (a), FIMA aligns the features of RGB and frame difference better in two aspects. First, the affinity matrix of FIMA has higher similarity around the diagonal. This indicates that every feature pixel learned by FIMA encodes more motion information around itself. Second, there are some outliers outside the diagonal in the affinity matrix of MoCo+LVD, while the affinity matrix of FIMA does not. This HandstandPushups Basketball JumpingRope GolfSwing Pix \uf04c RGB + VV \uf04c VD \uf04c + VV \uf04c Pix \uf04c RGB + VV \uf04c VD \uf04c + VV \uf04c PlayingGuitar BodyWeightSquats Figure 4: Class-agnostic activation map visualization for MoCo+LVD (middle column) and MoCo+LPix (right column). Pre-training with LPix provides richer motion information. MoCo + VD \uf04c FIMA MoCo + VD \uf04c FIMA (a) Spatial affinity matrices (b) Temporal similarities Figure 5: (a) Spatial affinity matrices and (b) Temporal similarity statistics between RGB features and motion features with MoCo+LVD pre-training and FIMA pre-training. demonstrates that FIMA can retain spatial location information and facilitate a more accurate alignment of motion features. Besides, to demonstrate the effectiveness along the temporal dimension, we randomly select 50 videos in the test set of UCF101. For each video, we uniformly sample 10 clips and extract their pooled RGB and motion features. Then calculate the similarity of each pair of RGB and motion features. We visualize the similarities as a violin plot in Fig. 5 (b). We can observe that FIMA has a smaller mean similarity with a larger deviation, which indicates that the features learned by FIMA can better capture the variation of the motion semantics. 5" + } + ], + "Ronghao Dang": [ + { + "url": "http://arxiv.org/abs/2310.05136v5", + "title": "InstructDET: Diversifying Referring Object Detection with Generalized Instructions", + "abstract": "We propose InstructDET, a data-centric method for referring object detection\n(ROD) that localizes target objects based on user instructions. While deriving\nfrom referring expressions (REC), the instructions we leverage are greatly\ndiversified to encompass common user intentions related to object detection.\nFor one image, we produce tremendous instructions that refer to every single\nobject and different combinations of multiple objects. Each instruction and its\ncorresponding object bounding boxes (bbxs) constitute one training data pair.\nIn order to encompass common detection expressions, we involve emerging\nvision-language model (VLM) and large language model (LLM) to generate\ninstructions guided by text prompts and object bbxs, as the generalizations of\nfoundation models are effective to produce human-like expressions (e.g.,\ndescribing object property, category, and relationship). We name our\nconstructed dataset as InDET. It contains images, bbxs and generalized\ninstructions that are from foundation models. Our InDET is developed from\nexisting REC datasets and object detection datasets, with the expanding\npotential that any image with object bbxs can be incorporated through using our\nInstructDET method. 
By using our InDET dataset, we show that a conventional ROD\nmodel surpasses existing methods on standard REC datasets and our InDET test\nset. Our data-centric method InstructDET, with automatic data expansion by\nleveraging foundation models, directs a promising field that ROD can be greatly\ndiversified to execute common object detection instructions.", + "authors": "Ronghao Dang, Jiangyan Feng, Haodong Zhang, Chongjian Ge, Lin Song, Lijun Gong, Chengju Liu, Qijun Chen, Feng Zhu, Rui Zhao, Yibing Song", + "published": "2023-10-08", + "updated": "2024-03-11", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CV" + ], + "main_content": "INTRODUCTION Referring object detection (ROD) aims to detect target objects according to language reference that represents user intentions. ROD is closely related to visual grounding where there are phrase grounding (Akbari et al., 2019; Li et al., 2022a; Gao et al., 2023) and referring expression comprehension (Su et al., 2020; Zhu et al., 2022). As shown in Fig. 1, phrase grounding detects all objects mentioned in one sentence, while referring expression comprehension (REC) only detects one single object that the text refers to. As such, the language reference in REC shall be discriminative and specifically relates to one object without ambiguity. Currently, visual grounding develops at an initial stage and leaves a gap for practical usage. The phrase grounding does not differentiate which object ought to be detected via language description, while REC only targets for one object with single text reference. In the current REC datasets, each image contains few expressions (e.g., 1 or 2 phrases). These expressions are insufficient to represent user intentions. In an image where there are several objects, users may want to detect each single object by using different descriptions (e.g., object color, shape, or location), or detect multiple objects in different combinations (e.g., similar properties or relationships). These diverse expressions are not conveyed within current REC datasets, leaving the gap for existing methods to practically fulfill user intentions for visual grounding. Moreover, the manual collection of these expressions are cumbersome, and subject bias prevents an effective coverage of common user intentions when perceiving each image. Therefore, the practical user expressions are not well fulfilled when they expect to detect various objects in one image. In this work1, we aim to push visual grounding toward practical usage from a data-centric perspective. Instead of developing REC models to generalize based on current data, we set up referring object detection (ROD) scenario to automatically diversify user expressions. Inspired by the generalization of foundation models that execute common user instructions based on the image and text inputs, our InstructDET borrows their capabilities to produce human-like instructions that encompass user intentions related to object detection. The generalized instructions produced by the foundation models can be regarded as an expansion of existing user expressions in REC. We produce instructions that describe single object from two pipelines. In the first pipeline (i.e., global prompt), we convert an image into an elaborate text description via LLaVA (Liu et al., 2023a). The text description, together with object bbxs coordinates, are sent to the LLaMA (Touvron et al., 2023) for instruction generation in global prompt. 
During generation, we manually write 3 in-context examples and leverage the in-context learning (Dong et al., 2023) ability of LLaMA to describe the content related to each object following the format of our examples. In the second pipeline (i.e., local prompt), we send the image and text prompts into LLaVA. The objects in the image are marked with bbxs and the text prompts require LLaVA to describe the object content. We initialize LLaVA with miniGPT4 weights and find it tends to produce lengthy and global descriptions. So we perform a partial finetuning on LLaVA by using REC data to let it focus on local objects. Through these two pipelines, we observe that instructions generated from global prompt pipeline focus more on the object relationship, while instructions generated from local prompt pipeline focus more on rich visual details and advanced logic reasoning. Naturally, we combine instructions from these two pipelines to formulate expressions for single referred object. During instruction generation, the uncontrolled model hallucination (Li et al., 2022b) brings incorrect or irrelevant instructions. We propose to use visual-textual verification via CLIP (Radford et al., 2021) for effective instruction filtering. The generalization and reasoning of foundation models (Wang et al., 2022; Zhou et al., 2022) provide sufficient instructions encompassing user intentions for single object description. When describing multiple objects, we divide descriptions into two parts. The first part is to independently describe each single object followed by concatenation, and the second part is to summarize commonalities of multiple objects. The commonality summarization requires unifying similar or related objectives by a higher-level language abstraction that describes their similarities and relationships. We collect the 1We do not differentiate \u201cinstruction\u201d and \u201cexpression\u201d in this paper, as both of them represent user intentions. For presentation clarity, in our InstructDET pipeline we refer expressions that are generated by foundation models, and we further refine expressions to instructions for InDET inclusion. As we only focus on ROD, we can formalize our instruction by simply adding the word \u2018detect\u2019 beforehand. 2 \fPublished as a conference paper at ICLR 2024 combinations of different objects via semantic clustering, then utilize LLM to generate commonality summarizations for each combination. We automatically collect instructions targeting for single or multiple objects in images and construct our InDET dataset. Sec. 4 shows an in-depth analysis of our dataset where we establish a guideline to organize these instructions from 6 aspects. Compared to existing REC datasets where the instructions only reside in sub-parts of our groups, our InDET is more comprehensive to incorporate user intentions of object detection. Fig. 1 shows an intuitive example of the generalized expressions produced by foundation models. By using our InDET dataset, we train a conventional ROD model and find it surpasses existing VG models on standard benchmarks and our InDET test set. Moreover, we also validate that our model has learned to effectively understand the meaning of instructions rather than only recognize key words, which is because of the tremendously expressive instructions incorporated for our model training. 
Our InstructDET method can automatically expand training data by using in-the-wild images with object bbxs, which improves our model generalizations towards practical usage. In addition, our model can already serve as the detection module of the neural-symbolic visual compositional task solution given arbitrary language instructions beyond object detection (e.g., Visual ChatGPT (Wu et al., 2023), VISPROG (Gupta & Kembhavi, 2023)). 2 RELATED WORKS Visual Grounding. Studies on visual grounding (Kamath et al., 2021; Chen et al., 2021; Deng et al., 2021; Su et al., 2023) can be mainly categorized as phrase grounding (Plummer et al., 2022; Kojima et al., 2023) and REC (Hudson & Manning, 2018; Li & Sigal, 2021). Phrase grounding detects all objects mentioned in the text while REC localizes one object that the text referred to. In (Zhang et al., 2022; Liu et al., 2023c), the objects mentioned in the text are verified to each visual object proposal one-by-one. These methods require a clear and specific object referring in the text. On the other hand, methods (Zhu et al., 2022; Yan et al., 2023) based on DETR (Carion et al., 2020) can accept abstract and summarized descriptions such as \u201cred objects\u201d and \u201call objects\u201d. Our ROD model follows DETR-based design to enrich interpretation of various instructions. Note that our model is learned via InDET dataset where instructions are produced based on preset object bbxs. Referring Expression Datasets. The REC datasets are usually constructed via manual annotation on the images. A two-player game is utilized in (Kazemzadeh et al., 2014) where the text descriptions are concise due to limited relevant visual contents. The RefCOCO, RefCOCO+ (Yu et al., 2016), and RefCOCOg (Mao et al., 2016) employ MSCOCO (Lin et al., 2014) images for manual expression production. The expression flexibility and diversity of these datasets are limited to encompass common detection intentions. Recent datasets (Krishna et al., 2017; Kuznetsova et al., 2020; Kebe et al., 2021) focuses on data scalability rather than auto generation. Cops-Ref (Chen et al., 2020) leverages scene graph as reasoning groundwork, thus forming a tree structure to generate expressions with varying compositionality. Different from these methods based on template guided expression generation, our InstructDET relies on foundation models to produce well generalized and human-like instructions. Data Generation via Foundation Models. The InstructGPT (Ouyang et al., 2022) and GPT4 (OpenAI, 2023) have shown generalization and reasoning abilities for data generation. LLaVA (Liu et al., 2023a) first uses GPT4 for multi-modal data generation following instructions. Otter (Li et al., 2023a) performs multi-modality in-context instruction tuning by levering multiple images, questions, and answers. Currently, these models focus on global image and language understanding, with less focus on local object analysis. Moreover, these multi-modality models, although processing multi-modality data, still outputs single-modality text description. There is a gap for these foundation models to function in the computer vision scenarios, especially visual recognition. In comparison, our InstructDET uses foundation models to benefit ROD model training, which contributes directly to improve object detection performance. 3 INSTRUCTDET Fig. 2 shows an overview of our InstructDET method for data construction. 
Given an input image with object bbxs, we use two pipelines to produce detection expressions from foundation models. The expressions are further refined to instructions and incorporated into our InDET dataset. 3 \fPublished as a conference paper at ICLR 2024 Language Description In Context Samples Task Description LLAMA Dropout Post Processing InDET LLM Prompt Object bbx axis Training Image Global Prompt Pipeline Local Prompt Pipeline LLAVA VLM Prompt Task Description Partial Finetuning Expression Instruction Figure 2: An overview of our InstructDET. We use two pipelines to produce detection expressions via foundation models. In the global prompt pipeline, we use LLaVA to describe an image via text, and combine this text with other text prompts for LLaMA input. In the local prompt pipeline, we use the same image with object bbxs and text prompts as multi modality input for LLaVA. The produced expressions are further refined to instructions and incorporated into our InDET dataset. 3.1 GLOBAL PROMPT PIPELINE The large language model (LLM) has shown surprising generalizations to well execute common user instructions. We use LLM to simulate user intentions when perceiving objects in an image. Our global prompt pipeline produces a text prompt for the LLaMA2 model. This prompt consists of several contents including global image description, object bbx coordinates, in-context samples, and task description. Without tuning LLaMA, we obtain instructions that describe objects in an image. A detailed example is shown in Sec. C for an intuitive illustration of how we leverage foundation models to produce expressions. We elucidate the key steps during this process as follows: Given an image with object bbxs, we first obtain global image description in text form. If this image already contains dense captions (e.g., from Flicker30K), we directly load these captions. Alternatively, we leverage LLaVA to generate the global image description. The text prompt we use for LLaVA contains our language guidance to emphasize that specific interested object categories shall be mentioned. As such, LLaVA will describe each labeled object in its output. As for object bbx content, if the image is from REC dataset, we use referring expression as the object content. Otherwise, we simply use the category name. When designing the task description prompt, we expect LLaMA to produce diverse expressions that contain different properties of single object as much as possible. We manually list the attributes from the user perspective, including the object type, color, function, motions, etc. Besides, we include the object attributes of its relationship with other objects in image, such as object interactions, object relative positions, etc. When using these aforementioned prompts for LLaMA input, we find that the output text varies significantly and might be irrelevant to the target objects. Inspired by the incontext learning ability of foundation models, we manually design in-context samples to regularize the output content and format. The output results will thus resemble our in-context examples but with our expected diversified object descriptions. 3.2 LOCAL PROMPT PIPELINE The global prompt pipeline produces expressions according to text prompts. Naturally, we can feed both image and text prompt to the multi-modality foundation model for object description. 
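Before detailing the local pipeline, the snippet below sketches how such a global prompt can be assembled from the pieces described in Section 3.1 (global image description, object bounding-box contents, in-context samples, and task description). The helper, its wording, and the example inputs are our own hypothetical placeholders; the actual prompts are given in Sec. C of the paper.

```python
def build_global_prompt(image_caption, objects, in_context_examples, task_description):
    """objects: list of dicts with a 'bbx' tuple and a short 'content' string
    (a referring expression for REC images, otherwise the category name)."""
    lines = [task_description, "", "Image description:", image_caption, "", "Objects:"]
    for i, obj in enumerate(objects):
        x1, y1, x2, y2 = obj["bbx"]
        lines.append(f"[{i}] {obj['content']} at ({x1}, {y1}, {x2}, {y2})")
    lines += ["", "Examples:"] + list(in_context_examples)
    lines.append("Now produce diverse referring expressions for each object, "
                 "covering type, color, function, motion, and relations to other objects.")
    return "\n".join(lines)

# Hypothetical usage (the real task description and in-context examples differ):
prompt = build_global_prompt(
    image_caption="A woman in a white scarf stands next to a blue truck.",
    objects=[{"bbx": (12, 30, 180, 420), "content": "woman in white scarf"},
             {"bbx": (200, 80, 630, 460), "content": "blue truck"}],
    in_context_examples=["<hand-written example 1>", "<hand-written example 2>",
                         "<hand-written example 3>"],
    task_description="You will be given an image description and object boxes ...")
```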
Given an image, we mark the object with bbx rectangle, and send this image to LLaVA, together with the text prompt that requires LLaVA to describe the object according to the bbx. Here, the bbx serves as a visual highlight for LLaVA to comprehend the target object that we expect to describe. Our LLaVA model is initialized with miniGPT4 weights. When we send these multi-modality inputs to LLaVA, we observe that LLaVA produces detailed and dense descriptions for the image, rather than expressions of the specific target object. We analyze that the vision-language alignment module in LLaVA is the Q-Former (Li et al., 2023b), which transforms one image into only 32 visual tokens without concentrating on the local objects. Meanwhile, LLaVA itself tends to produce lengthy and dense descriptions. In order to generate instructions suitable for ROD, we finetune a part of LLaVA by using existing REC datasets. Specifically, we only update a linear layer that transforms visual 2In this paper, we use a variant of LLaMA (i.e., Vicuna 13B) that has gone through instruction tuning. Besides, we use a variant of LLaVA, which is a multi-modal paradigm that maps visual features into token embeddings with further alignment to the text domain. 4 \fPublished as a conference paper at ICLR 2024 tokens to the text embedding space during training. The linear layer is learned to attend local objects with concise expressions. After finetuning, we observe the LLaVA output becomes informative and closely related to the target object. Detailed examples of generating expressions in local prompt pipeline are shown in Sec. B, and the detailed analysis on how finetuning improves LLaVA output is provided in Sec. H. 3.3 EXPRESSION FILTER In global and local pipelines, we have regularized the output of foundation model from several aspects including text prompt specification, in-context learning, and model finetuning. In practice, we still observe the model hallucination phenomena that the model sometimes generate expressions describing objects not even exist in the image. Moreover, the expressions from the local prompt pipeline sometimes describe the whole image rather than local objects. This is due to the miniGPT4 initialization of LLaVA, which utilizes dense captions for instruction tuning. The tendency to generate global image description is mitigated via our model finetuning to focus on local object, but not completely disappeared. To further improve the expression quality, we introduce visual and language matching via CLIP (Radford et al., 2021) to filter out inappropriate expressions. Fig. 3 shows an overview. It contains image visual prompting and visual-textual matching. Generated Expression CLIP CLIP Visual Prompting a blonde lady wearing a long white scarf Global Score Local Score Expression Dropout Figure 3: Expression filtering by image visual prompting and visual-textual matching via CLIP. Visual Prompting We study visual language pretraining (VLP) (Yang et al., 2023; Shtedritski et al., 2023) where visual prompting is developed for images. We observe that in zeroshot REC, coupling VLP with visual prompts enables robust pairing of local image region and corresponding text description. In the pairing process, the design of visual prompting heavily influences the visual-textual matching results. Specifically, we employ the superposition of a red ellipse and the target Gaussian blur reversion as visual prompts. A detailed pipeline illustrating visual prompting is in Sec. D. Visual-Textual Matching. 
We use images with visual prompting that emphasizes target objects to verify the corresponding text descriptions via a frozen CLIP model. While local object contents are well aligned with target referring expressions, we observe that expressions describing the whole image are not eliminated by CLIP. We analyze that CLIP is originally trained to focus on the correlations between global image features and global textual semantics. This global visual-textual matching makes the CLIP model prefer global image descriptions accordingly. To remove this effect, we establish a referring measurement from both local and global perspectives. For the image in Fig. 3, we compute a global score $S_g$ and a local prompt score $S_l$. The magnitude of referring can be measured via our local enhancement score $S_e = S_l - S_g$. Our final expression evaluation score can be computed as:

$$S_f = \alpha_1 S_e + \alpha_2 S_l = (\alpha_1 + \alpha_2) S_l - \alpha_1 S_g = S_l - \alpha_1 S_g, \qquad (1)$$

where $\alpha_1$ and $\alpha_2$ are scalars balancing the contributions of $S_g$ and $S_l$ with $\alpha_1 + \alpha_2 = 1$. Thus $\alpha_1 \in [0, 1]$ adjusts the final score towards local content referring or global semantics. Note that we introduce $S_e$ to measure the difference between the local and global scores: if the expression is more related to the target object, $S_e$ becomes higher after visual prompting highlights the object. After computing $S_f$, we set a dynamic threshold to filter out expressions. This is because $S_f$ inherits CLIP's preference that a small target object with a well-matched expression achieves a lower score than a large object with a mismatched expression. Therefore, we use the provided expression (for images from REC) or the category name (for images outside REC) to compute a reference score, and discard generated instructions whose $S_f$ is lower than this score.

3.4 MULTI-OBJECTS EXPRESSION GENERATION

Our expression generation pipeline illustrated above targets each object independently. In practice, users may refer to multiple objects in one image. We study common user expressions for multiple objects and group them into two aspects. The first aspect contains splicing expressions that combine different single-object expressions with 'and' or a comma; in this case, the objects mentioned in the expression are not related to each other. The second aspect contains generalization expressions that summarize the common properties of multiple objects (e.g., color, category, or location) to produce an abstract and conceptual description. It resembles mining similarities between multiple objects and thus is not straightforward to conclude. Therefore, we need to discover object combinations where similar properties may exist, and then summarize the commonalities among them to constitute the summarized expressions.

Figure 4: Mining commonalities among multi-objects via expression concatenation and text semantic clustering, followed by LLaMA descriptions on each cluster center. (The figure shows per-object expressions, e.g., "white bowl next to the carrots" and "bowl with green beans", being concatenated, encoded by BERT, and clustered into multi-object groups such as "two bowls next to each other", "people who are eating", and "plates on the table".)
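A minimal sketch of the commonality-mining step summarized in Fig. 4 and detailed in the next paragraph, assuming a BERT-style text encoder from the `transformers` library and scikit-learn's DBSCAN; the encoder choice, the [CLS] pooling, and the clustering parameters are illustrative assumptions rather than the exact configuration.

```python
import torch
from sklearn.cluster import DBSCAN
from transformers import AutoModel, AutoTokenizer

def cluster_objects_by_expressions(per_object_expressions, eps=0.3):
    """per_object_expressions: list (one entry per object) of expression lists.
    Returns a cluster label per object; label -1 marks objects left unclustered."""
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    enc = AutoModel.from_pretrained("bert-base-uncased").eval()

    # One comma-concatenated expression per object, mapped to a semantic embedding.
    texts = [", ".join(exprs) for exprs in per_object_expressions]
    with torch.no_grad():
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        emb = enc(**batch).last_hidden_state[:, 0]     # [CLS] embeddings (assumption)
    emb = torch.nn.functional.normalize(emb, dim=-1).numpy()

    # Cosine-distance DBSCAN: objects in one cluster share a describable commonality
    # and are handed to the LLM for a summarized multi-object expression.
    return DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(emb)

labels = cluster_objects_by_expressions(
    [["white bowl next to the carrots", "empty bowl"],
     ["bowl with green beans", "white bowl with green in it"],
     ["person in black shirt", "guy eating on the right"]])
```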
Our process to produce summarized expression is shown in Fig. 4. For each object, we first concatenate all its related expressions with commas. Through this concatenation, we can obtain this object expression from different perspectives (i.e., different properties). Then, we use the text encoder in BERT (Devlin et al., 2018) to map this concatenated expression to a semantic embedding space. As such, we obtain embeddings of concatenated expressions from all objects. Then, we cluster these embeddings into indeterminate number of clusters by using DBSCAN (Ester et al., 1996) method. We use LLaMA to generate text for clusters with multiple objects. The details of using LLaMA to mine object commonalities are in Sec. E. The generated text indicates the summarized expression we aim to produce for multiple objects. Post Processing. After generating expressions for single and multiple objects, we verify and remove the repeated expressions that pertain to the same object. Then, we utilize LLaMA to further diversify these generated expressions while preserving their original intents, i.e., we use LLaMA to do synonymous rewriting of generated expressions. The prompt we use for synonymous rewriting is provided in Sec. I. We observe that for different objects in one image, the expression for one object may be similar to that of others. These expressions are ambiguous since we can not refer to a unique object based on their referring. Nevertheless, we transfer these expressions to refer to multi-objects since they express a set of objects in one image. This transfer further augments multiobject referring expressions. Finally, we collect these remaining expressions after post processing as instructions. Together with corresponding object bbxs and images, we construct our InDET dataset by incorporating diversified object detection instructions encompassing user intentions. 4 DATASET ANALYSIS Our InDET dataset contains images from MSCOCO (Lin et al., 2014), Flicker (Plummer et al., 2015), and Objects365 (Shao et al., 2019). There are 120.6K images with 908.4K referring object sets in total. Together with original expressions, there are 3.6M instructions in total, making InDET the largest real-world REC dataset at present. The average instruction length is 6.2 words and the vocabulary size is 63k words, which surpasses existing automatically annotated datasets in terms of instruction quantity, richness, and vocabulary breadth. We split the images into training, validation, and testing sets, with the corresponding instruction amount of 3139K, 240K, and 247K, respectively. In the following, we first propose a guideline that represent common user intentions and divides existing instructions into 6 groups. Then, we analyze all the instructions in InDET according to this guideline to show how our InDET advances REC scenario compared to existing datasets. Instruction Guideline. The instructions in InDET dataset describe objects from various perspectives. We observe that these descriptions all focus on object category, attribute, and relations, but with different emphasis extent. Based on expression complexity, we establish a guideline that divides all instructions into 6 groups. Each group reflects one level of emphasis on category, attribute, and relations. Table 1 shows our guideline and examples. The first four groups are for single object and the last two groups are for multiple objects. 
In the first group (G1), there is only one single phrase to describe object category, which is similar to the traditional object detection task. From G2 to G4, more phrases are involved to describe the target object. For G5, we construct a spliced form to combine instructions from different single objects. In G6, the instruction is a general description 6 \fPublished as a conference paper at ICLR 2024 Table 1: Instruction guideline and samples. Our guideline contains 6 aspects and covers common user intentions. These aspects are built upon object category, attribute, and relations with different emphasis levels. We use \u22c6and \u22c6\u22c6to indicate the description complexity of different aspects. Aspect Category Attribute Relation Examples Single Object 1 2 3 4 pencil; two children; soccer ball; city street shirts with English letters; red and white airplane man in blue shirt halfway on screen;\u00a0 people who are sitting under an umbrella; a man in a grey sweater and black jeans performing a skateboarding trick; a woman sitting cross-legged on the couch with her back facing the viewer. She has a white shirt and black pant; Multiple Objects 5 Single object combination Commonality generalization a black hat on a man's head and red umbrella and blue truck in rains every object on table; kids playing with the blond boy 6 the glasses are sitting on top of some kind of paper or folder and there is a book and a lantern next to it Num of Words Amount (e6) (a) Length distribution (b) Diversity distribution Cosine Similarity Group (c) Group distribution 0 5 10 15 20 25 0.1 0.2 0.3 0.4 0.5 0.5 1.0 0.6 0.7 0.8 0.9 0 12 2 4 6 8 10 0 2 3 4 5 6 1.0 0.2 0.4 0.6 0.8 1 Amount\u00a0(e6) Percentage % Figure 5: Dataset analysis of expression length, diversity and group distributions. of commonality between multiple objects. To this end, the instructions from G1 to G6 gradually introduces semantic understanding, visual language grounding, and logic reasoning for ROD. After guideline construction, we use LLaMA to assign each instruction into our groups by using in-context learning that let LLaMA to understand assigning principles and in-context assigning examples. The detailed usage of LLaMA for instruction assign is shown in Sec. F. Instruction Length, Diversity, and Aspect Ratio Distributions. We analyze our InDET from the instruction length, diversity, and ratio distribution in our guideline groups. The RefCOCO and Flicker datasets are introduced for comparison. Fig. 5(a) shows the number of word distribution where the instruction of InDET contains more words than the other two datasets. Moreover, there are 100K instructions in our InDET consist of more than 10 words, while other datasets do not contain such informative expressions. In Fig. 5(b), we show diversity comparison where we use CLIP to map all instructions into a semantic space. Then, for the same target objects we compute average pairwise cosine similarity. The results show that our InDET contains lower value than other datasets, which indicates that our instructions are more diverse when describing the same target object. In Fig. 5(c), we show aspect ratio distribution of expressions assigned by our guideline. For existing datasets, user expressions commonly exist in G1 and G2. In contrast to the user expressions that seldom exist from G3 to G5 for Flicker, and seldom exist in G5 and G6 for RefCOCO, the instructions in our InDET exist normally in all groups. 
This distribution shows that our InDET is more effective to encompass common user intentions, especially for multiple objects. By leveraging our InDET, the ROD model becomes more practically applicable. 5 REFERRING OBJECT DETECTION In this section, we illustrate our model design for ROD task. We notice that ROD shares little difference with visual grounding (VG). First, ROD produces uncertain number of object bbxs (i.e., 0, 1, or multiple) based on one input instruction, as shown in Fig. 1. Second, ROD supports abstract and summarized object descriptions (e.g., \u201call objects on the table\u201d) that do not clearly refer to specific objects such as \u201cbottle\u201d, \u201corange\u201d, and \u201cknife\u201d. As recent VG models (Zhang et al., 2022; Liu et al., 2023c) require a one-by-one verification between visual objects and expression words, they are not able to execute such instructions. Motivated by the difference, we set up a conventional framework from DETR-based VG models (Zhu et al., 2022; Yan et al., 2023). Fig. 6 shows an overview of our DROD model. We illustrate key steps as follows: Given an image with text instruction, we use visual and text encoders (Dosovitskiy et al., 2021; Devlin et al., 2018; Ge et al., 2023) to obtain their embeddings. Then, we use a bi-directional cross 7 \fPublished as a conference paper at ICLR 2024 attention module to perform multi-modality embedding fusion. For the fused visual embedding, we sent it to the transformer encoder and decoder structure (Zhu et al., 2020) with N learnable queries as position priors (Meinhardt et al., 2022; Ge et al., 2021). Then, the decoder produces N instance proposals for further selection. For the fused text embedding, we pass it through a global average pooling and MLP for text2visual embedding space mapping. Finally, we use cosine similarity to match proposals and mapped text embedding. During the training stage, we use confidence loss and localization loss via supervised learning. During the inference stage, we select proposals whose matching scores are above a predefined threshold, which allows our model to produce arbitrary number of bbxs for diversified instruction execution. More details are shown in Sec. G. 6 EXPERIMENTS a blonde lady wearing\u00a0a long white scarf Cross Attention Transformer Encoder Transformer Decoder Language Embedding Image Embedding Pool+MLP Figure 6: An overview of our diversified referring object detection (DROD) model. We evaluate the ROD performance on standard VG benchmarks (i.e., RefCOCO, RefCOCO+, and RefCOCOg) and our InDET dataset. As illustrated in Sec. 4, the images with marked objects of our InDET dataset are collected from existing datasets while the instructions are significantly enriched. We split the training and test set of InDET following RefCOCO/g/+ where the test set contains 6.5k images with an increased number of instructions to 315K. Moreover, these instructions are assigned to 6 groups according to our guideline. The performance on each group reflects how VG methods perform when processing different aspects of user instructions. The comparing methods in our experiments are from recent VG methods including MDETR (Kamath et al., 2021), Grounding-DINO (Liu et al., 2023c) and UNINEXT (Yan et al., 2023). Due to page limit, we show the evaluation results on our InDET, our InDET with shuffled expression, and standard benchmarks. Model training, and ablation studies on partial LLaVA finetuning and visual prompt selection are provided in Sec. G and Sec. H. 
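Before the comparisons below, here is a minimal sketch of the proposal-text matching step of the DROD model described in Section 5: decoder proposals are scored by cosine similarity against the pooled, MLP-projected instruction embedding, and every proposal above a score threshold is kept, so zero, one, or several boxes can be returned. The dimensions and the threshold value are our own placeholders, not the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProposalTextMatcher(nn.Module):
    def __init__(self, text_dim=768, vis_dim=256):
        super().__init__()
        # Map the pooled instruction embedding into the visual proposal space.
        self.text_proj = nn.Sequential(nn.Linear(text_dim, vis_dim),
                                       nn.ReLU(), nn.Linear(vis_dim, vis_dim))

    def forward(self, proposals, boxes, text_tokens, threshold=0.5):
        # proposals: (N, vis_dim) decoder outputs; boxes: (N, 4) predicted bbxs;
        # text_tokens: (L, text_dim) fused instruction embedding.
        text = self.text_proj(text_tokens.mean(dim=0))        # global average pool + MLP
        scores = F.cosine_similarity(proposals, text.unsqueeze(0), dim=-1)  # (N,)
        keep = scores > threshold                              # arbitrary number of boxes
        return boxes[keep], scores[keep]

# Toy usage: 100 decoder proposals scored against one instruction.
matcher = ProposalTextMatcher()
kept_boxes, kept_scores = matcher(torch.randn(100, 256), torch.rand(100, 4),
                                  torch.randn(12, 768))
```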
We also provide visual comparison results of these methods in Sec. A. A video demo showing the practical usage of our DROD model is in our webpage. Table 2: Evaluation results on our InDET and shuffled InDET test sets. We show the object bbx average precision (AP) values (%) of these two test sets with a slash (\u2018/\u2019) separation. Method Backbone AP AP by Group G1 G2 G3 G4 G5 G6 MDETR ResNet101 34.86 / 31.21 47.44 / 46.61 46.79 / 42.55 34.14 / 28.13 23.22 / 16.86 25.91 / 23.52 28.17 / 23.66 G-DINO SwinB 35.96 / 30.43 47.10 / 45.91 47.17 / 42.56 35.29 / 27.28 26.84 / 18.46 27.95 / 23.74 27.61 / 23.57 UNINEXT ResNet50 43.37 / 37.61 54.49 / 53.09 54.52 / 49.91 44.49 / 35.59 37.17 / 28.30 31.41 / 28.28 32.01 / 27.52 DROD (Ours) ResNet50 62.24 / 53.78 67.14 / 65.08 67.34 / 61.56 60.89 / 48.82 55.10 / 41.50 70.15 / 64.64 74.22 / 67.11 DROD (Ours) ViT-H 66.90 / 57.32 72.53 / 69.79 72.47 / 65.44 66.42 / 52.50 59.86 / 46.01 73.34 / 67.82 75.92 / 68.73 In our InDET test set, we compare our DROD model to other methods under the evaluation metric of object bbx average precision with a threshold of 0.5. On the other hand, we investigate whether these methods have truly comprehended the meaning of instruction, or they perform ROD only based on the key words (e.g., noun) without comprehending the whole expression. So we shuffle the InDET test set by randomly ordering the words in each instruction. We produce results of existing VG methods on our InDET test set without assuming object numbers in advance. For one method, if its performance drops more on the shuffled data, this method is shown to better comprehend the meaning of instruction. Table 2 shows the evaluation results. Overall, UNINEXT achieves a higher AP than MDETR (i.e., 43.37 v.s. 34.86) in our InDET test set, while decreasing more than MDETR (i.e., 37.61 v.s. 31.21) in shuffled data. This indicates that UNINEXT is more effective than MDETR for ROD and better comprehends instruction meaning. Meanwhile, UNINEXT achieves a higher AP value than Grounding-DINO. In comparison, our DROD largely surpasses UNINEXT (62.24 v.s. 43.37) on the overall AP comparison, and using a VIT encoder further increases our performance. This indicates that our DROD is more effective to comprehend generalized instructions for ROD. Meanwhile, we observe that our performance drop is larger than UNINEXT (8.46 v.s. 5.76), which shows that our 8 \fPublished as a conference paper at ICLR 2024 Table 3: Evaluation results on the RefCOCO/g/+ datasets. We follow evaluation protocols to report AP values (%) of comparing methods. We use the notations \u201dCC\u201d, \u201dVG\u201d, \u201dOI\u201d, \u201dO365\u201d, \u201dRIGame\u201d, for COCO, Visual Genome, OpenImage, Objects365, ReferItGame, respectively. 
Method Backbone Data RefCOCO RefCOCO+ RefCOCOg val testA testB val testA testB val-u test-u RefTR ResNet101 VG 85.65 88.73 81.16 77.55 82.26 68.99 79.25 80.01 SeqTR DarkNet53 VG,RIGame,Flickr,RefC 87.00 90.15 83.59 78.69 84.51 71.87 82.69 83.37 MDETR ResNet101 GoldG,CC,RefC 86.75 89.58 81.41 79.52 84.09 70.62 81.64 80.89 G-DINO SwinB O365,CC,RefC,GoldG,etc 83.95 87.79 79.16 72.91 80.91 62.96 76.98 76.76 UNINEXT ResNet50 O365,CC,RefC 87.64 90.35 83.49 78.14 83.22 68.71 80.96 81.86 DROD (Ours) ResNet50 O365,CC,InDET 88.92 90.86 85.57 78.27 83.39 71.04 83.01 82.91 AP \uff08a\uff09Numerical Results \uff08b\uff09Visual Comparisons An object that protects the woman in the picture from direct sunlight InDET RefCOCO+Flicker The most helpful thing for thirsty people InDET RefCOCO+Flicker Figure 7: Our InDET dataset improves logic reasoning of ROD models. In (a), existing models trained with our InDET dataset show superior results compared to other datasets. In (b), we show visual comparisons by using the same DROD model but with different training datasets. model better comprehends different expressions. Specifically for the results in each group, we notice that our performance drop is little in G1, and becomes larger from G2 to G4. This is because more and more words are introduced from G1 to G4 for object description. A random order gradually affects our model comprehension. For G5 and G6, we note that our method largely outperform other methods. The multi-object instructions incorporated in the dataset improves our performance. Besides evaluating our InDET test set, we compare our DROD model with existing VG methods (Zhu et al., 2022; Li & Sigal, 2021) on the standard VG benchmarks RefCOCO/g/+. Table 3 shows the evaluation results. Overall, our DROD model achieves favorable performance on these datasets. This is because our DROD model utilizes InDET dataset where diversified instructions improve model generalizations. By using a conventional ROD model, we improve the VG performance from the data diversity perspective. In addition to the overall precision comparisons, we evaluate how our dataset improves logic reasoning and instruction comprehension of existing models. Specifically, we select 2k test samples from our InDET test dataset where logic reasoning on instructions is required for object detection. For each model (i.e., MDETR, G-DINO, or UNINEXT), we train it by using different datasets (i.e., RefCOCO, Flicker, or InDET) and show the performance comparison on our 2k test samples. Fig. 7(a) shows the evaluation results where each model trained with our InDET dataset outperforms the same model trained with other datasets. In Fig. 7(b), we show visual comparisons by using our DROD model but with different training sets. It shows that using original datasets, the model tends to ground keywords rather than preform multi-modal reasoning based on instructions. In comparison, by training with our InDET, the model well interprets instruction meaning and conduct logic reasoning across languages and visual images. 7 CONCLUDING REMARKS We aim to push ROD into practical usage from a data-centric perspective. On one hand, we notice that current REC expressions are insufficient to encompass user detection intentions. On the other hand, foundation models have shown promising generalizations to simulate manual understanding and description abilities. 
To this end, we develop InstructDET that leverages foundation models to produce human-like expressions in REC, which tends to incorporate common user intentions into ROD training. As a result, our DROD model achieves favorable performance compared to existing VG methods. In the future, we can combine our method with open-set object detectors to fully explore in-the-wild images (e.g., Internet images) for comprehensive user expression generation. We expect our DROD model to generalize as much as existing foundation models, and thus take a huge step towards completely solving ROD task. 9 \fPublished as a conference paper at ICLR 2024 ACKNOWLEDGEMENT This paper is supported by the National Natural Science Foundation of China under Grants (62073245, 62173248,62233013). Shanghai Science and Technology Innovation Action Plan (22511104900). Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100) and the Fundamental Research Funds for the Central Universities." + }, + { + "url": "http://arxiv.org/abs/2302.01520v1", + "title": "Multiple Thinking Achieving Meta-Ability Decoupling for Object Navigation", + "abstract": "We propose a meta-ability decoupling (MAD) paradigm, which brings together\nvarious object navigation methods in an architecture system, allowing them to\nmutually enhance each other and evolve together. Based on the MAD paradigm, we\ndesign a multiple thinking (MT) model that leverages distinct thinking to\nabstract various meta-abilities. Our method decouples meta-abilities from three\naspects: input, encoding, and reward while employing the multiple thinking\ncollaboration (MTC) module to promote mutual cooperation between thinking. MAD\nintroduces a novel qualitative and quantitative interpretability system for\nobject navigation. Through extensive experiments on AI2-Thor and RoboTHOR, we\ndemonstrate that our method outperforms state-of-the-art (SOTA) methods on both\ntypical and zero-shot object navigation tasks.", + "authors": "Ronghao Dang, Lu Chen, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen", + "published": "2023-02-03", + "updated": "2023-02-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "main_content": "Introduction Object navigation (Zheng et al., 2022; Moghaddam et al., 2022; Du et al., 2021; Wang et al., 2022) is a challenging task that requires an agent to \ufb01nd a target object in an unknown environment with \ufb01rst-person visual observations. Numerous techniques have been developed to advance this \ufb01eld by incorporating different inductive biases (Figure 1 (a)) due to the task\u2019s complexity. However, regrettably, the object navigation \ufb01eld does not form a uni\ufb01ed inductive bias paradigm similar to the CV (Cao & Wu, 2022; d\u2019Ascoli et al., 2021) or NLP (Levine et al., 2022; Kharitonov & Chaabouni, 2021) \ufb01elds. Inspired by the \ufb02aw, through the induction and sublimation of the current mainstream methods, we propose a meta-ability decoupling (MAD) paradigm, hoping to unify and connect various object navigation methods. This paper involves two important new concepts: metaability and thinking. Meta-ability refers to every essential ability needed to complete a complex task. For instance, solving a mathematical problem requires the integration of various meta-abilities such as text comprehension, logical reasoning, and conceptual abstraction. Without these meta-abilities, relying solely on intuition is insuf\ufb01cient to Black Box ... Reward ... 
Object Navigation Ability (a) Existing Methods (b) Meta-Ability Decoupling (MAD) Meta-Ability Inductive Bias Input Encoder (\u2171) (\u2172) (\u2174) (\u2170) (\u2173) Figure 1. (a) Existing methods directly improve the overall object navigation ability by introducing various inductive biases into the black box model. (b) Our proposed meta-ability decoupling (MAD) paradigm decomposes the overall object navigation ability into multiple meta-abilities, and designs speci\ufb01c inputs, thinking encoders, and rewards for each meta-ability. complete complex tasks. Thinking refers to the information abstraction for a certain ability. Typically, this abstraction is modeled end-to-end using neural networks. According to the de\ufb01nition of meta-ability and thinking, we summarize the current mainstream object navigation methods and identify their limitations. As shown in Figure 2, object navigation methods are divided into four categories: association methods (Dang et al., 2022a; Zhang et al., 2021), memory methods (Chen et al., 2022; Fukushima et al., 2022), deadlock-specialized methods (Du et al., 2020; Lin et al., 2021) and SLAM methods (Ravichandran et al., 2022; Liang et al., 2021). The different inductive biases introduced by these four types of methods determine which meta-abilities are emphasized and which are overlooked. Therefore, the existing methods all attempt to use biased thinking to abstract the ultimate ability for object navigation (Figure 1 (a)). Nevertheless, due to the sparsity and ambiguity of the reward signal, it is challenging for biased thinking to implicitly decouple complete meta-abilities which are crucial in object navigation. To address the above issues, we propose a meta-ability decoupling (MAD) paradigm (Figure 1 (b)), which solves embodied AI tasks in \ufb01ve stages: (i) selecting meta-abilities based on prior knowledge; (ii) determining the input features arXiv:2302.01520v1 [cs.RO] 3 Feb 2023 \fMultiple Thinking Achieving Meta-Ability Decoupling for Object Navigation of each thinking according to the characteristics of its corresponding meta-ability; (iii) designing suitable encoding networks for each thinking; (iv) designing the collaboration modules between different thinking according to the characteristics of the task; (v) designing rewards and punishments for each meta-ability. During this process, meta-abilities are decoupled in three aspects: input, encoding, and reward signals. In this paper, we primarily focus on the investigation of object navigation tasks, however, we believe that the MAD paradigm can be extended to other similar embodied AI tasks. Guided by the MAD paradigm, we design a multiple thinking (MT) model for the object navigation task. First, we select \ufb01ve meta-abilities (explained in Sec. 3): intuition, search, navigation, exploration and obstacle. Subsequently, for these \ufb01ve meta-abilities, we use overall image features, object detection features, target-oriented memory, historical state memory, and obstacle location memory as input for corresponding thinking. Each thinking uses a simple encoding network with necessary inductive bias. Furthermore, we devise a multiple thinking collaboration (MTC) module to facilitate cooperation between the different metaabilities. Finally, meta-ability reward is designed to guide each thinking\u2019s abstract understanding for the corresponding meta-ability. 
Extensive experiments on the AI2-Thor (Kolve et al., 2017) and RoboTHOR (Deitke et al., 2020) datasets show that our MAD paradigm not only outperforms SOTA methods on the typical object navigation task, but also on the zeroshot object navigation task. Moreover, an interpretability analysis of MT model based on MAD demonstrates that our method contributes signi\ufb01cantly to both the interpretability and \ufb02exibility of object navigation tasks. Our contributions can be summarized as follows: \u2022 We propose a general meta-ability decoupling (MAD) paradigm to generalize and unify various current object navigation approaches. \u2022 Following the MAD paradigm, we design a multiple thinking (MT) model for the object navigation task, which outperforms existing models in both typical and zero-shot object navigation tasks. \u2022 Our meta-ability interpretability framework provides a novel analytical mode for future researchers. 2. Related Works 2.1. Object Navigation Target-speci\ufb01c typical object navigation tasks require an agent to navigate to a known target object in an unknown Deadlock-Specialized Methods Object Navigation Intuition Search Navigation Obstacle Exploration Association Methods Memory Methods SLAM Methods Intuition Search Intuition Navigation Obstacle Exploration Navigation Exploration Obstacle Exploration DOA SP HOZ BRM CKR OMT VGM GBE DUET SemExp SSCNav PONI GOSE TPN SAVN MAAD Figure 2. Summary of various object navigation methods. We categorize the mainstream methods for object navigation into four classes, which achieve the enhancement of certain meta-abilities by improving the neural network. environment. Some recent methods diligently improve the network or introduce prior knowledge in order to solve various problems in the object navigation task. We categorize these methods into four classes (Figure 2): (i) association methods (Wu et al., 2019; Gao et al., 2021; Yang et al., 2019) which utilize object association or area association to enable the agent to build a relational graph model of the scene; (ii) memory methods (Zhu et al., 2021; Kwon et al., 2021) which depend on long-term explicit memory to more comprehensively consider historical information to make decisions; (iii) SLAM methods (Chaplot et al., 2020b; Ramakrishnan et al., 2022) which build an agent-centric semantic map in real time; (iv) deadlock-specialized methods (Wortsman et al., 2019; Du et al., 2020) which use special mechanisms to help the agent escape from the local deadlock state. Due to the lack of the meta-ability decoupling perspective, each class of methods only emphasize partial meta-abilities, resulting in a lack of comprehensive ability to solve complex tasks. Target-agnostic zero-shot object navigation tasks are gaining increasing attention with the development of multimodal contrastive learning (Radford et al., 2021). This task requires that the training environment shields the target objects for testing. Al-Halah et al. (2022) mapped various modalities into the image-goal embedding space, thus adapting the image-goal navigation agent. Zhao et al. (2022) represented the object-target relationship as cosine similarity to alleviate the over\ufb01tting. These zero-shot object navigation methods essentially extend the typical object navigation architecture by mapping the discrete class inputs to a continuous semantic space. \fMultiple Thinking Achieving Meta-Ability Decoupling for Object Navigation 2.2. 
Decoupling Idea in Object Navigation Decoupling is a common concept in the \ufb01eld of arti\ufb01cial intelligence (Duan et al., 2021), and it is also frequently observed in object navigation tasks. SemExp (Ravichandran et al., 2022) decouples the continuous decision-making process into two discretized steps that are \u201dwhere to look for an object?\u201d and \u201dhow to navigate to (x, y)?\u201d. AVSW (Park et al., 2022) decouples the environmental exploration from navigation to the target. ANS (Chaplot et al., 2020a) decouples the modeling of the environment from the endto-end network using a real-time built semantic map. The aforementioned decoupling methods all split the end-to-end object navigation model into several independently trained models, thus losing the advantages of \ufb02exibility and simplicity. Our MAD paradigm improves the interpretability and generalizability of the model while maintaining end-to-end learning. 3. Meta-Ability Decoupling (MAD) Object navigation is a complex long-distance decisionmaking task in the real world. Ef\ufb01cient and accurate navigation to the target object requires the assistance of multiple meta-abilities. We decouple the following \ufb01ve meta-abilities from the object navigation task based on existing object navigation methods and human experience: 1) Intuition Ability: The ability to directly derive action decisions from raw image features. 2) Search Ability: The ability to look for the target object through knowledge association. 3) Navigation Ability: The ability to navigate to the target position based on the target orientation information in memory. 4) Exploration Ability: The ability to ef\ufb01ciently and comprehensively acquire scene information. 5) Obstacle Ability: The ability to avoid colliding with obstacles. All \ufb01ve meta-abilities are present in current methods (Figure 2), however, a single method only emphasizes certain meta-abilities. By clearly identifying and decoupling them, researchers can more easily combine the strengths of multiple approaches. 4. Multiple Thinking (MT) Network The multiple thinking (MT) network is designed under the guidance of the MAD paradigm. As shown in Figure 3, we design suitable input (Sec. 4.2), encoding network (Sec. 4.3), and reward (Sec. 4.6) for each meta-ability. Additionally, we design a multiple thinking collaboration (MTC) module (Sec. 4.4) that interacts information between different types of thinking. 4.1. Task De\ufb01nition The agent is initialized to a random state s = {x, y, \u03b8, \u03b2} and random target object p. At each timestamp t, according to the single view RGB image ot and target p, the agent learns a navigation strategy \u03c0(at|ot, p), where at \u2208A = {MoveAhead; RotateLeft; RotateRight; LookDown; LookUp; Done} and Done is the output if the agent believes that it has navigated to the target location. Ultimately, if the agent is within a threshold (i.e., 1.5 meters (Du et al., 2020)) of the target and correctly detects it when Done is output, the navigation episode is considered successful. Zero-shot object navigation task divides the target objects into a training set Ptrain = {p1, p2, \u00b7 \u00b7 \u00b7 , pn} and a test set Ptest = {pn+1, pn+2, \u00b7 \u00b7 \u00b7 , pn+m}. The objects in the test set are only available during the testing process. 4.2. Thinking Inputs Each thinking\u2019s input is the most important inductive bias for the corresponding meta-ability. 
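As a concrete reading of the task definition in Sec. 4.1, the minimal sketch below encodes the discrete action space and the episode-success check (Done issued within 1.5 meters of a correctly detected target); the constant and flag names are our assumptions. The inputs to each thinking are detailed next.

```python
from enum import Enum

class Action(Enum):
    MOVE_AHEAD = 0
    ROTATE_LEFT = 1
    ROTATE_RIGHT = 2
    LOOK_DOWN = 3
    LOOK_UP = 4
    DONE = 5

SUCCESS_RADIUS = 1.5  # meters, the threshold cited from Du et al. (2020)

def episode_success(action: Action, dist_to_target: float, target_detected: bool) -> bool:
    """An episode succeeds only if Done is issued close enough to a correctly detected target."""
    return action is Action.DONE and dist_to_target <= SUCCESS_RADIUS and target_detected
```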
Input features should be as concise as possible while meeting the requirements of the meta-abilities. As shown in the thinking boxes in Figure 3, we select \ufb01ve specialized inputs from the agent\u2019s available information based on the characteristics of the meta-abilities. 1) Intuition thinking inputs ITi \u2208R7\u00d77\u00d7512 are extracted from the \ufb01rst-person perspective image using a \ufb01xed-weight ResNet18 (He et al., 2016). 2) Search thinking inputs STi \u2208RN\u00d7262 are the object visual and position features extracted from the image using DETR (Carion et al., 2020). 3) Navigation thinking inputs NTi \u2208RDn\u00d79 follow the target-oriented memory graph (TOMG) proposed in (Dang et al., 2022b). Navigation thinking only focuses target-related information; thus the TOMG is composed of the target bounding box and the agent\u2019s coordinates on the visited target-visible nodes. 4) Exploration thinking inputs ETi \u2208RDe\u00d74 are the agent\u2019s historical positions and camera angles. 5) Obstacle thinking inputs OTi \u2208RDo\u00d72 are the positions of known unreachable nodes. When the agent attempts to reach a certain node and fails, it will record that node as unreachable. N is the number of objects. Dn, De, Do respectively represent the number of visited target-visible nodes, visited nodes and known unreachable nodes. 4.3. Thinking Embedding Thinking embedding abstracts thinking inputs into the semantic space for decision making. Past works (Gao et al., \fMultiple Thinking Achieving Meta-Ability Decoupling for Object Navigation Intuition Thinking Search Thinking Obstacle Thinking Navigation Thinking Exploration Thinking M T C LSTM ... ... DETR Object features ... Object Attention Frozen ResNet18 Conv Learnable Search Intuition\u00a0 Target Target-visible nodes Dynamic multi-scale \u00a0aggregator Target attention Pool Navigation Target-invisible nodes Current node Egocentric agent state\u00a0 ... ... FC Exploration Pool Egocentric obstacle position\u00a0 ... FC Obstacle Pool Unreachable nodes Visited nodes TCN1 TCN2 TCN3 Meta-ability reward FC FC+LN Figure 3. Model overview. MTC: multiple thinking collaboration. Our multiple thinking (MT) model is primarily composed of \ufb01ve thinking modules and a MTC module, preceding the LSTM network. The colored boxes on the right match the details of thinking embedding on the left. (1) Intuition thinking takes image ResNet18 encoding as input. (2) Search thinking takes the object features extracted by DETR as input. (3) Navigation thinking takes the target orientation information at target-visible nodes as input. (4) Exploration thinking takes historical agent states as input. (5) Obstacle thinking takes the locations of known unreachable nodes as input. 2021; Chen et al., 2022) have introduced various prior knowledge into thinking encoding networks to guide models\u2019 attention. However, our MT method only uses a minimal amount of encoding techniques based on the characteristics of each meta-ability, highlighting the advantage of the MAD paradigm itself. Intuition Thinking A simple learnable pointwise convolution directly encodes the input ResNet features: ITo = \u03b4(Conv(ITi)) (1) where Conv refers to the pointwise convolution and \u03b4 represents the ReLU nonlinearity (Nair & Hinton, 2010). Search Thinking Search thinking aims to enable the agent to quickly capture the target with the fewest steps when the target is not in view. 
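Before turning to the search-thinking details, here is a minimal sketch of the five thinking inputs with the dimensions listed above and the pointwise-convolution intuition encoder of Eq. (1); the tensor layout (channel-first) and the output channel count are assumptions.

```python
import torch
import torch.nn as nn

# Thinking inputs and their shapes as listed above (N objects; Dn, De, Do memory lengths):
#   intuition IT_i: (512, 7, 7)  frozen ResNet18 feature map (channel-first for PyTorch convs)
#   search    ST_i: (N, 262)     DETR object visual + position features
#   nav       NT_i: (Dn, 9)      target-oriented memory graph (TOMG) nodes
#   explore   ET_i: (De, 4)      visited agent positions and camera angles
#   obstacle  OT_i: (Do, 2)      positions of known unreachable nodes

class IntuitionThinking(nn.Module):
    """Eq. (1): a learnable pointwise (1x1) convolution over the frozen ResNet18 features."""
    def __init__(self, in_channels: int = 512, out_channels: int = 64):  # out_channels is an assumption
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, it_i: torch.Tensor) -> torch.Tensor:  # it_i: (B, 512, 7, 7)
        return torch.relu(self.conv(it_i))                  # ReLU(Conv(IT_i)), (B, out_channels, 7, 7)
```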
In order to have the object association ability, we adopt the unbiased directed object attention (DOA) graph Gt \u2208RN\u00d7N proposed in (Dang et al., 2022a) to assign weights to each object. We extract the object\u2019s attention weight vector Gp t from Gt based on the target p, and assign it to each encoded object feature: STo = \u03b4(STiW ST ) \u2299Gp t (2) W ST is a learnable parameter matrix and \u2299allows each object feature to be multiplied by its corresponding attention coef\ufb01cient. Navigation Thinking Navigation thinking requires the ability to memorize, locate and navigate to the target. We borrow the target-aware multi-scale aggregator (TAMSA) proposed in (Dang et al., 2022b) to map the target observation information of different positions into the target orientation relative to the current state. First, the decisions (e.g. rotate right) are made relative to the current agent\u2019s state (xc, yc, \u03b8c, \u03b2c), so we self-center the agent\u2019s states (xi, yi, \u03b8i, \u03b2i) stored in TOMG. (e xi, e yi) = (xi, yi) \u2212(xc, yc) (e \u03b8x i , e \u03b2x i ) = sin((\u03b8i, \u03b2i) \u2212(\u03b8c, \u03b2c)) (e \u03b8y i , e \u03b2y i ) = cos((\u03b8i, \u03b2i) \u2212(\u03b8c, \u03b2c)) i \u2208\u2206M (3) where \u2206M represents the index collection of target-visible nodes. To ensure that the angle and position coordinates have the same order of magnitude, we use sin and cos to normalize the angle coordinates to [\u22121, 1]. After this egocentric coordinate transformation, we obtain egocentric TOMG features g NTi \u2208RDn\u00d711. Similarly, the agent states in exploration thinking and obstacle thinking also need egocentric transformation as described above. The subsequent encoding process is represented as: NTo = HT FNT ( g NTi) \u2299FE(E) (4) H = 3 X j=1 TCNj( g NTi) (5) H \u2208RDn\u00d71 is obtained by summing different scale kernels that are generated by using the multi-scale temporal convolution networks (TCNs) to process g NTi. FNT (\u00b7) maps g NTi to a higher-dimensional feature space. Since navigation thinking needs to adaptively change when searching for different targets, the one-hot target index E is encoded by two fully connected (FC) layers FE(\u00b7) to generate a channel-wise activation vector which recalibrates channel-wise feature responses. Exploration Thinking We hope that the agent can use exploration thinking to more ef\ufb01ciently explore the environment and avoid repeated exploration. After self-centering \fMultiple Thinking Achieving Meta-Ability Decoupling for Object Navigation Holistic-Thinking Feature Figure 4. Multiple thinking collaboration (MTC) module between two thinking. The increase in the number of thinking does not affect the overall structure. We \ufb01rst extract the holisticthinking feature from the outputs of multiple thinking. Then, the channel activation vector is generated for each thinking, which recalibrates the thinking features. the agent\u2019s state, we also introduce two inductive biases by polarizing the coordinates. (i) Through the distance of polar coordinates, the network could learn that historical states closer to the current node are more important. (ii) Through the angle of polar coordinates, the network could learn that traveling in the direction of less exploration can gain more knowledge of the scene. Subsequently, we use two FC layers FET (\u00b7) and a global average pooling layer to obtain the output of exploration thinking. 
ETo = 1 De De X l=1 FET (g(ETi \u27e8l\u27e9)) (6) where \u27e8l\u27e9retrieves the feature of the l-th node from the historical memory graph, and g(\u00b7) represents the process of converting a cartesian coordinate system to a polar coordinate system. Obstacle Thinking Previous approaches commonly suffered from the issue of repeatedly colliding with the same obstacle, leading to deadlock. Our obstacle thinking helps the agent quickly escape from deadlock states by memorizing collided obstacles. The overall encoding process is similar to that of exploration thinking. OTo = 1 Do Do X l=1 FOT (g(OTi \u27e8l\u27e9)) (7) Dropout layer is added after the more complex intuition thinking, search thinking, and navigation thinking in the above \ufb01ve thinking encoding networks. 4.4. Multiple Thinking Collaboration (MTC) Although we decouple the meta-abilities required for object navigation, cooperation between the meta-ability thinking is still necessary. For instance, when the search thinking discovers that the target is to the right of the agent, the obstacle thinking needs to give the obstacles on the right more attention. Therefore, we design a multiple thinking collaboration (MTC) module (Figure 4) to transmit shared information between different thinking. The MTC module primarily recalibrates the channel weights of each thinking using the condensed information from all thinking. Initially, we squeeze the outputs of multiple thinking into a holistic-thinking feature representation Z: Z = \u03b4(WZ [ITo, STo, NTo, ETo, OTo] + bZ) (8) Then, excitation signals for each thinking are generated to recalibrates each thinking output XTo: XTc = XTo \u2299\u03c3(WXT Z + bXT ) X \u2192I, S, N, E, O (9) where \u03c3(\u00b7) represents the sigmoid activation function. 4.5. Policy Learning After holistic-thinking recalibration, all thinking features need to be integrated into a uni\ufb01ed representation vector: G = \u03b4(WG[ITc, STc, NTc, ETc, OTc]LN + bG) (10) where [\u00b7]LN concatenates the \ufb01nal output features of each thinking and uses layer normalization to stabilize the forward input distribution and backpropagation gradient. Finally, the multiple thinking joint representation G is used to learn an LSTM (Hochreiter & Schmidhuber, 1997) action policy \u03c0(at|Gt, p). Following the previous works (Yang et al., 2019; Dang et al., 2022a), we treat this task as a reinforcement learning problem and utilize the asynchronous advantage actor-critic (A3C) algorithm (Mnih et al., 2016). 4.6. Meta-Ability Reward Our model is supervised by two types of reward: base reward RB and meta-ability reward RMA. Similar to the previous work (Zhang et al., 2021), RB is composed of three parts. (i) We penalize each step with a small negative reward -0.01. (ii) To encourage movement, if the agent outputs MoveAhead, a positive reward of 0.01 is given. (iii) If any object instance from the target object category is reached within a certain number of steps, the agent receives a large positive reward 5.0. Meta-ability reward is designed for the goals that each meta-ability needs to achieve. RMA = Rs + Rn + Re + Ro (11) Search Reward Rs If the target object is correctly identi\ufb01ed in the \ufb01eld of view, Rs = 0.01, otherwise Rs = 0. Navigation Reward Rn Once the target has been located, if the chosen action allows the agent to move closer to the target, Rn = 0.01, otherwise Rn = 0. 
\fMultiple Thinking Achieving Meta-Ability Decoupling for Object Navigation Exploration Reward Re If the agent repeatedly reaches the same state, Re = \u22120.01, otherwise Re = 0. Obstacle Reward Ro If the agent collides with an obstacle, Ro = \u22120.01, otherwise Ro = 0. RMA enables each meta-ability thinking to more quickly capture the direction of learning. Accordingly, during the initial C training episodes when guiding the model\u2019s learning direction, the model is supervised by both meta-ability reward RMA and base reward RB. Afterwards, when better task performance metrics are desired, the model receives supervision solely from the base reward RB. 5. Experiment 5.1. Experimental Setup Datasets AI2-Thor (Kolve et al., 2017) and RoboTHOR (Deitke et al., 2020) are our primary experimental platforms. AI2-Thor includes 30 different \ufb02oorplans for each of 4 room layouts: kitchen, living room, bedroom, and bathroom. For each scene type, we use 20 rooms for training, 5 rooms for validation, and 5 rooms for testing. RoboTHOR consists of a set of 89 apartments, 75 of which are accessible. we use 60 for training and 15 for validation. RoboTHOR is a more complex version of the AI2-Thor environment, with 2.4 times larger \ufb02oor area and 5.5 times longer path length. For zero-shot object navigation, we re-split the widely used 22 target classes (Pal et al., 2021; Zhao et al., 2022) into 18/4 seen/unseen and 14/8 seen/unseen classes. We train the model with seen object classes as the targets and test the model with unseen object classes as the targets. Evaluation Metrics We use the success rate (SR), success weighted by path length (SPL) (Anderson et al., 2018) metrics to evaluate our method. SR indicates the success rate of the agent in completing the task, which is formulated as SR = 1 F PF i=1 Suci, where F is the number of episodes and Suci indicates whether the i-th episode succeeds. SPL considers the path length more comprehensively and is de\ufb01ned as SPL = 1 F PF i=1 Suci L\u2217 i max(Li,L\u2217 i ), where Li is the path length taken by the agent and L\u2217 i is the theoretical shortest path. Implementation Details The model with only intuition thinking (IT) and base reward RB is our baseline. We train our model with 18 workers on 2 RTX 2080Ti Nvidia GPUs, in a total of 3M navigation episodes. The dropout rate is set to 0.3, and the meta-ability reward RMA is only utilized in the \ufb01rst 0.2M (C) episodes. We report the results for all targets (ALL) and for a subset of targets (L \u22655) with optimal trajectory lengths greater than 5. Table 1. Ablation experiments for each meta-ability. Removing a meta-ability means removing the corresponding thinking and reward for the meta-ability. IT ST NT ET OT ALL (%) L \u22655 (%) SR\u2191 SPL\u2191 SR\u2191 SPL\u2191 \u2713 43.48 18.91 30.36 14.34 \u2713 \u2713 75.94 42.77 68.24 43.28 \u2713 \u2713 \u2713 79.62 46.13 72.96 45.93 \u2713 \u2713 \u2713 \u2713 81.97 48.75 75.54 48.95 \u2713 \u2713 \u2713 \u2713 \u2713 83.14 50.23 77.03 50.88 Table 2. Ablation experiments for the multiple thinking collaboration (MTC) module and meta-ability reward. ID Method ALL (%) L \u22655 (%) SR\u2191 SPL\u2191 SR\u2191 SPL\u2191 1 Complete MT 83.14 50.23 77.03 50.88 2 MT \u2192No MTC 82.26 50.14 76.17 50.52 3 MT \u2192No RMA 81.96 49.54 76.22 49.93 4 RB + RMA (All Episodes) 81.31 48.29 75.36 48.44 5.2. 
Ablation Experiments Meta-Ability Ablation The object navigation task is decomposed into a total of \ufb01ve meta-abilities, which are ablated in Table 1. Based on the MAD structure, search ability is the most important meta-ability, followed by navigation ability, with exploration ability and obstacle ability playing a supportive role. MTC and Meta-Ability Reward Ablation Table 2 shows the ablation results where the MTC module and the meta-ability reward RMA are removed in the second and third rows, respectively. We observe that the MTC module has a greater impact on SR, and the meta-ability reward improves both SR and SPL. The results of the fourth row in Table 2, in which both base reward RB and meta-ability reward RMA are used throughout the entire training process, suggest that using meta-ability reward in the later stages of training can divert the model\u2019s pursuit of the \ufb01nal goal (\ufb01nding the object via the shortest path) and result in a disconnection between the reward and actual performance. 5.3. Comparative Analysis of Different Targets Figure 5 compares the SR of our MT method and the current SOTA method (DOA (Dang et al., 2022a)) for each target object. Previous methods perform poorly for small objects and objects in complex environments (e.g. bedroom). The \ufb01ve target objects (labeled in red) that bene\ufb01t most from our method are mostly previously unresolved targets. The \ufb01ve target objects (labeled in blue) that bene\ufb01t least from our method are mostly common in the simpler kitchen scene. It is observable from the pie chart that our MT model makes a much greater overall contribution to SR improvement in complex scenes (e.g. bedroom, living \fMultiple Thinking Achieving Meta-Ability Decoupling for Object Navigation AlarmClock Book Bowl CellPhone Chair CoffeeMachine DeskLamp FloorLamp Fridge GarbageCan Kettle Laptop LightSwitch Microwave Pan Plate Pot RemoteControl Sink StoveBurner Television Toaster SR (%) DOA MT 83.14 74.32 ALL Kitchen Bathroom Bedroom Living Room 21.8% 15.8% 15.2% 13.9% 11.2% 8.4% 6.5% 2.6% CellPhone DeskLamp AlarmClock Toaster Fridge GarbageCan Kettle LightSwitch Scene Contribution Typical Target Contribution Improvement Figure 5. Comparison of our MT method with the DOA method in terms of SR index for each individual target. The red and blue markers indicate the targets with highest and lowest performance improvement of the MT method respectively. The pie chart shows the contribution of each scene to overall SR improvement. Subsequently, two objects with highest contribution from each scene are plotted in a bar chart. Table 3. Comparison with target-speci\ufb01c SOTA methods on the AI2-Thor / RoboTHOR datasets. ID Method ALL (%) L \u22655 (%) Episode Time (s)\u2193 SR\u2191 SPL\u2191 SR\u2191 SPL\u2191 I SSCNav 77.14/38.12 31.09/14.10 71.73/33.46 34.33/11.04 1.34/4.14 PONI 78.58/38.42 33.78/16.30 72.92/34.72 36.40/13.22 1.59/4.58 II OMT 71.13/32.17 37.27/20.09 61.94/25.33 38.19/18.16 0.64/2.01 VGM 73.95/35.82 40.69/23.71 64.07/27.22 40.73/19.54 0.73/2.46 III TPN 67.32/30.51 37.01/18.62 58.13/23.89 35.90/14.91 0.24/0.77 IV HOZ 68.53/31.67 37.50/19.02 60.27/24.32 36.61/14.81 0.28/0.81 VTNet 72.24/33.92 44.57/23.88 63.19/26.77 43.84/19.80 0.32/1.33 DOA 74.32/36.22 40.27/22.12 67.88/30.16 40.36/18.32 0.33/1.25 V MT 83.14/42.80 50.23/29.07 77.03/37.85 50.88/23.16 0.35/1.20 room) compared to simple scenes (e.g. kitchen, bathroom). 
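The SR and SPL values reported in these tables follow the definitions given in the evaluation setup of Sec. 5.1; the short sketch below computes both from per-episode records (the field names are our assumptions).

```python
from dataclasses import dataclass

@dataclass
class Episode:
    success: bool          # Suc_i
    path_length: float     # L_i, length of the path the agent actually took
    optimal_length: float  # L*_i, theoretical shortest-path length

def success_rate(episodes: list[Episode]) -> float:
    """SR = (1/F) * sum_i Suc_i."""
    return sum(e.success for e in episodes) / len(episodes)

def spl(episodes: list[Episode]) -> float:
    """SPL = (1/F) * sum_i Suc_i * L*_i / max(L_i, L*_i), Anderson et al. (2018)."""
    total = 0.0
    for e in episodes:
        if e.success:
            total += e.optimal_length / max(e.path_length, e.optimal_length)
    return total / len(episodes)
```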
These findings indicate that our MT method can address multifaceted decision-making challenges in complex environments via flexible meta-abilities. More experiments are in Appendix D and E. 5.4. Comparisons with the State-of-the-Art Target-Specific Typical Object Navigation In Table 3, our MT model is compared with the four categories of SOTA models. (I) SLAM methods. The real-time construction of a semantic map and the sub-goal path planning technique enhance the interpretability of these methods. However, due to the significant cost of exploring the environment, the time required for each episode is several times that of other methods. (II) Memory methods. The explicit memory of long-term historical information enhances the model's exploration and navigation ability. Despite this, existing memory methods have a large amount of redundancy, resulting in poor generalization. (III) Deadlock-specialized methods. Deadlock states occur frequently while navigating. Although incorporating a deadlock-specialized module resolves some issues, it disrupts the overall cohesiveness of model training. (IV) Association methods. Object or region association is the most common inductive bias. Narrowly, association methods only serve to enhance search ability, with minimal effect on other abilities. Under the guidance of the MAD paradigm, our MT method explicitly decouples the various meta-abilities of the object navigation task, theoretically unifying the above four types of models. In comparison to the SOTA method (DOA (Dang et al., 2022a)) with similar computational complexity, our MT method brings an overall 8.82/6.58 and 9.96/6.95 improvement in SR and SPL (AI2-Thor / RoboTHOR, %), respectively.

Table 4. Comparison with target-agnostic zero-shot SOTA methods on the AI2-Thor dataset. Results on unseen classes for ALL and L >= 5 (SR and SPL, %).
Method   Seen/Unseen split   ALL SR↑   ALL SPL↑   L>=5 SR↑   L>=5 SPL↑
Random   18/4                 9.76      2.03       0.82       0.27
ZER      18/4                31.28     15.06      25.74      14.60
ZSON     18/4                57.32     20.94      46.43      21.78
MT-ZS    18/4                68.61     27.95      57.53      29.28
Random   14/8                 7.70      3.19       0.44       0.08
ZER      14/8                24.62     10.42      14.33       8.99
ZSON     14/8                52.74     18.11      33.53      14.38
MT-ZS    14/8                62.40     24.08      46.58      23.76

Target-Agnostic Zero-Shot Object Navigation In contrast to typical object navigation tasks, the target objects in the zero-shot task are not visible during training. Consequently, we replace the object one-hot encoding in the MT model with the cosine similarity of the GloVe embedding to the target (Zhao et al., 2022), resulting in the MT-ZS model. In comparison with the CLIP-based ZER model (Khandelwal et al., 2022) and the class-unrelated ZSON model (Zhao et al., 2022), our MT-ZS model based on the MAD paradigm exhibits clear advantages (Table 4). The success of our MAD paradigm in the zero-shot task demonstrates its effectiveness for embodied AI tasks with high generalization difficulty as well. More experiments are in Appendix C. 5.5. Meta-Ability Qualitative Analysis Our MT model makes decisions based on the synthesis of various meta-abilities during navigation. In Figure 6, we visualize the mean value of the thinking output neurons XTc at each step to explore how each thinking influences model inference in different scenarios. Intuition thinking exhibits no discernible pattern, so we illustrate only the other four types of thinking. (a) Search thinking is activated when the target or target-related objects are observed, and becomes increasingly active as the target is approached.
(b) When the target object is suddenly lost from the agent's field of view, the level of navigation thinking becomes greatly heightened, enabling the agent to quickly reacquire the target object. (c) Continuous forward motion quickly activates exploration thinking. (d) Obstacle thinking is maximally activated when the agent encounters an obstacle, and the level of activation gradually decreases as the distance from the obstacle increases. Each thinking's excitation provides a clearer explanation for the model's decision-making process based on meta-abilities. More analysis is in Appendix F.

Figure 6. We visualize the average activation of each thinking's neurons during navigation. The depth of the arrow color represents the average value of the current thinking's output neurons, corresponding to the line chart above. The blue pentagram signifies the step in the path where thinking is most active. (Panels: (a) Search Thinking, (b) Navigation Thinking, (c) Exploration Thinking, (d) Obstacle Thinking; y-axis: average activation of thinking neurons; x-axis: step; example targets include Laptop, CoffeeMachine, FloorLamp, and LightSwitch.)

Table 5. Quantitative comparison of meta-abilities across different models (ALL targets, %). Values in parentheses give the change relative to the baseline.
Method     SSR (S)↑        NSNPL (N)↑        REP (E)↓        CP (O)↓
Baseline   91.35           23.15             5.28            12.74
DOA        95.82 (+4.47)   44.11 (+20.96)    7.14 (+1.86)    10.26 (−2.48)
MT         97.76 (+6.14)   51.39 (+28.24)    4.03 (−1.25)    4.93 (−7.81)

5.6. Meta-Ability Quantitative Analysis Meta-Ability Metrics In order to quantitatively evaluate the meta-abilities of each model, we define four meta-ability metrics: (i) search success rate (SSR): the success rate in finding the target; (ii) navigation success weighted by navigation path length (NSNPL): SPL during the navigation phase after finding the target; (iii) repeated exploration probability (REP): the probability of reaching the same state repeatedly; (iv) collision probability (CP): the proportion of actions resulting in collision with obstacles. Larger SSR and NSNPL values indicate stronger search and navigation abilities, while smaller REP and CP values indicate stronger exploration and obstacle abilities. More detailed explanations are in Appendix B. Analysis As shown in Table 5, our MT model performs significantly better in each meta-ability than the other models. The current SOTA method DOA primarily utilizes object association to enhance search ability; however, REP indicates that the exploration ability of the DOA method has decreased relative to the baseline model. This phenomenon suggests that without decoupling meta-abilities, enhancing one meta-ability may lead to weakening other meta-abilities. It is noteworthy that our MT model only employs a small fraction of the inductive bias used in the DOA model to enhance search ability (Sec. 4.3), yet the search ability of the MT model outperforms that of the DOA model. This finding leads us to believe that the thinking specificity promoted by the MAD paradigm can amplify the impact of each inductive bias on the corresponding meta-ability. 6. Limitation There are still some limitations to this paper.
(i) The selection of meta-abilities depends on human experience, so how to decouple the more abstract meta-abilities is still an open problem. (ii) MAD is only applied and experimented on the object navigation task in this paper, and we expect that researchers can expand it to more embodied AI tasks. (iii) How meta-ability thinking affects the model\u2019s decisionmaking still has many directions worthy of exploration. 7." + }, + { + "url": "http://arxiv.org/abs/2208.00553v2", + "title": "Search for or Navigate to? Dual Adaptive Thinking for Object Navigation", + "abstract": "\"Search for\" or \"Navigate to\"? When finding an object, the two choices always\ncome up in our subconscious mind. Before seeing the target, we search for the\ntarget based on experience. After seeing the target, we remember the target\nlocation and navigate to. However, recently methods in object navigation field\nalmost only consider using object association to enhance \"search for\" phase\nwhile neglect the importance of \"navigate to\" phase. Therefore, this paper\nproposes the dual adaptive thinking (DAT) method to flexibly adjust the\ndifferent thinking strategies at different navigation stages. Dual thinking\nincludes search thinking with the object association ability and navigation\nthinking with the target location ability. To make the navigation thinking more\neffective, we design the target-oriented memory graph (TOMG) to store\nhistorical target information and the target-aware multi-scale aggregator\n(TAMSA) to encode the relative target position. We assess our methods on the\nAI2-Thor dataset. Compared with the state-of-the-art (SOTA) method, our method\nreports 10.8%, 21.5% and 15.7% increase in success rate (SR), success weighted\nby path length (SPL) and success weighted by navigation efficiency (SNE),\nrespectively.", + "authors": "Ronghao Dang, Liuyi Wang, Zongtao He, Shuai Su, Chengju Liu, Qijun Chen", + "published": "2022-08-01", + "updated": "2022-08-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.RO" + ], + "main_content": "Introduction Object navigation (Moghaddam et al. 2022; Li et al. 2022) is a challenging task that requires an agent to \ufb01nd a target object in an unknown environment with \ufb01rst-person visual observations. Due to the limited \ufb01eld of view, the information that guides the agent navigation process is insuf\ufb01cient. Therefore, some researchers recently introduced scene prior knowledge into end-to-end navigation networks. These methods have been applied to address various issues, including the use of object associations (Yang et al. 2019), object attention bias (Dang et al. 2022), and the lack of universal knowledge (Gao et al. 2021). However, these methods improve the ef\ufb01ciency of only the \u201csearch for\u201d phase (start\u2192\ufb01rst seeing the target) while neglecting the \u201cnavigate to\u201d phase (\ufb01rst seeing the target\u2192end). Our experiments (Table 4 in supplementary material) show that for the current SOTA end-to-end methods, the \u201cnavigate to\u201d steps accounts *Corresponding author Copyright \u00a9 2023, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Book First see book The agent needs to search for and navigate to a book \"Navigate to\" Phase Search Thinking Search Thinking Navigation Thinking Fusion \"Search for\" Phase Figure 1: We divide the agent\u2019s navigation process into two phases: \u201csearch for\u201d (pink) and \u201cnavigate to\u201d (blue). 
During the \u201csearch for\u201d phase, the agent uses only search thinking to search for the target. During the \u201cnavigate to\u201d phase, navigation thinking assists the agent in quickly navigating to the target location. for 60% of the whole path, while only 40% for humans; the success rate after seeing the target is only 80%, while humans can reach 100%. Some modular approaches (Chaplot et al. 2020; Ramakrishnan et al. 2022) model the environment by using topdown semantic maps. With the help of the detailed semantic maps, the object navigation task can be decoupled into two separate training subtasks: predicting the subtarget point and navigating to the subtarget point, thus optimizing the agent navigation ability after seeing the target. However, these methods depend strongly on semantic maps, which are hypersensitive to sensory noise and scene changes. Furthermore, the generation of high-quality semantic maps requires a large amount of computational resources. To address the above issues, we aim to integrate this task decoupling concept in modular methods into end-toend methods. Therefore, we propose the dual adaptive thinking (DAT) method. As shown in Figure 1, the agent\u2019s thinking modes are divided into search thinking and navigation arXiv:2208.00553v2 [cs.AI] 13 Aug 2022 \fthinking. Search thinking guides the agent to quickly locate the target according to prior knowledge. Navigation thinking assists the agent in ef\ufb01ciently navigating to the target position after locating the target. The agent adaptively adjusts the dominance of the two thinking ways in an end-to-end network according to the navigation progress. Speci\ufb01cally, we develop different designs for the search thinking network and navigation thinking network. For the search thinking network, we adapt the directed object attention (DOA) graph method proposed in (Dang et al. 2022) to design object association and attention allocation strategies. For the navigation thinking network, we propose a target-oriented memory graph (TOMG) to store the simpli\ufb01ed agent state and target orientation information. Furthermore, we design a target-aware multi-scale aggregator (TAMSA) to re\ufb01ne the features in the TOMG to guide the agent\u2019s navigation. Extensive experiments on the AI2-Thor (Kolve et al. 2017) dataset show that our dual adaptive thinking (DAT) method not only optimizes the \u201cnavigate to\u201d phase in the end-to-end network but also outperforms the state-of-the-art (SOTA) method (Dang et al. 2022) by 10.8% and 21.5% in the success rate (SR) and success weighted by path length (SPL). Moreover, we propose a new metric, success weighted by navigation ef\ufb01ciency (SNE), to assess the agent\u2019s navigation ability during the \u201cnavigate to\u201d phase. As a general concept, the proposed multiple thinking strategy can be applied in various other embodied arti\ufb01cial intelligence tasks. Our contributions can be summarized as follows: \u2022 We propose a dual adaptive thinking (DAT) method that allows the agent to \ufb02exibly use different modes of thinking during navigation. \u2022 We carefully design a navigation thinking network with a selective memory module (TOMG) and a feature re\ufb01nement module (TAMSA). \u2022 We demonstrate that our DAT method not only addresses inef\ufb01ciencies in the \u201cnavigate to\u201d phase but also substantially outperforms existing object navigation models. 
Related Works Object Navigation Object navigation tasks require an agent to navigate to a target object in an unknown environment while considering only visual inputs. Recently, the relationships between objects have been introduced into navigation networks, allowing agents to locate targets more quickly by considering object associations. In (Zhang et al. 2021), the hierarchical object-to-zone (HOZ) graph was used to guide an agent in a coarse-to-\ufb01ne manner. Moreover, researchers (Dang et al. 2022) have utilized the directed object attention (DOA) graph to address the object attention bias problem. Although these works allow agents to locate targets faster, they do not address the problem of how to navigate to these targets more quickly. Our dual adaptive thinking (DAT) method divides the thinking into two types: search thinking and navigation thinking, which can collaborate adaptively to make every navigation stage ef\ufb01cient. Modular Navigation The modular navigation method has been proposed to solve the generalizability problem of end-to-end models in complex environments. It has been proven that using a top-down semantic map to predict distant subgoal points (Chaplot et al. 2020) is feasible on the Habitat dataset. The PONI (Ramakrishnan et al. 2022) method trains two potential function networks using supervised learning to determine where to search for an unseen object. These modular methods require a large amount of computing and storage resources to generate semantic maps in real time and are sensitive to the image segmentation quality. Our method implicitly incorporates different thinking during navigation into an end-to-end network without relying on semantic maps. Necessity of Dual Thinking Dual Thinking in Humans Embodied AI (Duan et al. 2022) is a challenging research topic that requires agents to use well-developed intuitive tasks (e.g., classi\ufb01cation (Wang et al. 2019) and detection (Liu et al. 2020)) to complete complex logical tasks (e.g., navigation (Zhu, Meurer, and G\u00a8 unther 2022) and interaction (Shridhar et al. 2020)) in real-world environments. Humans often use more than one way of thinking when completing these complex logical tasks. For example, when we need an object, we \ufb01rst use associative thinking to locate the object and then use navigational thinking to reach the object location; when we answer a question about an object, we \ufb01rst use exploratory thinking to fully understand the object and then use reasoning and language-organized thinking to draw conclusions. Therefore, multiple thinking approaches can be introduced in end-to-end networks to develop interpretable hierarchical models that are more consistent with how humans address complex logic problems. Repeated Target Search Problem There is a phenomenon in the current methods that if the agent loses the target in view, it still needs to be searched again to locate the target. This phenomenon causes the agent to waste a considerable amount of time while re-searching for the target, potentially leading to constant loops. This problem is especially common in environments with many obstacles. A clear orientation memory for the target is the key to solve this problem. Therefore, we design a targetoriented memory graph (TOMG) and target-aware multiscale aggregator (TAMSA) in the navigation thinking network to ensure that the agent navigates to the target ef\ufb01ciently without repeatedly re-searching. 
Dual Adaptive Thinking Network Our goal is to endow agents with both search and navigation thinking and to adjust their status based on the navigation process. To achieve this goal, we design three networks, as illustrated in Figure 2: (i) search thinking network; (ii) navigation thinking network; (iii) adaptive fusion network. (i) and (ii) are connected by (iii) to form the dual adaptive thinking (DAT) network. \fSearch Thinking Navigation Thinking Adaptive Fusion LSTM C Image Object Previous Action Target visible C LN FC Search thinking feature Navigation thinking feature Egocentric coordinates TOMG Target TAMSA Visited target-visible nodes Visited target-invisible nodes Starting node Current node ... Target embedding Attention Object\u00a0association C Attention redistribute Figure 2: Model overview. TOMG: target-oriented memory graph. TAMSA: target-aware multi-scale aggregator. Our model includes three modules: search thinking, navigation thinking and adaptive fusion. In the search thinking network, we endow the model with an object association ability according to the DOA graph method proposed in (Dang et al. 2022). In the navigation thinking network, we provide the model with the ability to remember the target orientation. In the adaptive fusion network, we make the dual thinking work in harmony according to the navigation progress. Task De\ufb01nition The agent is initialized to a random state s = {x, y, \u03b8, \u03b2} and random target object p. According to the single view RGB image ot and target p, the agent learns a navigation strategy \u03c0(at|ot, p), where at \u2208A = {MoveAhead; RotateLeft; RotateRight; LookDown; LookUp; Done} and Done is the output if the agent believes that it has navigated to the target location. Ultimately, if the agent is within 1.5 m of the target object when Done is output, the navigation episode is considered successful. Search Thinking Network Search thinking aims to enable the agent to quickly capture the target with the fewest steps when the target is not in view. To use ef\ufb01cient object association, we adopt the unbiased directed object attention (DOA) graph method proposed in (Dang et al. 2022). According to the object-target association score Gt calculated by the DOA method, we redistribute the attention to the object features St (from DETR (Carion et al. 2020)) and image features It (from ResNet18 (He et al. 2016)) to ensure that the agent pays attention to objects and image regions that are more relevant to the target. In the object attention redistribution process, the objecttarget association score of each object q is multiplied by the object features St to generate the \ufb01nal object embedding b St: b Sq t = Sq t Gq t q = 1, 2, \u00b7 \u00b7 \u00b7 , N (1) where b St = {b S1 t , b S2 t , \u00b7 \u00b7 \u00b7 , b SN t }, and N is the number of objects. In the image attention redistribution process, we assign attention to image features It according to the object semantic embeddings generated by the one-hot encodings. Initially, the semantic embeddings are weighted by Gt \u2208RN\u00d71 to obtain the attention-aware object semantics D. We use D as the query and It as the key and value in the multi-head image attention to generate the \ufb01nal image embedding b It: Qi = DW Q i Ki = ItW K i Vi = ItW V i i = 1, \u00b7 \u00b7 \u00b7 , NH (2) headi = softmax(QiKT i \u221a HD )Vi (3) b It = Concat(head1, \u00b7 \u00b7 \u00b7 , headNH)W O (4) where HD and NH denote the hidden dimensionality and number of heads in the multi-head attention. 
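The attention redistribution of Eqs. (1)-(4) can be sketched with standard PyTorch modules; the version below uses nn.MultiheadAttention for the multi-head step instead of writing out the per-head projections, and all dimensions, module names, and the number of heads are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SearchThinking(nn.Module):
    """Sketch of Eqs. (1)-(4): reweight object features by the DOA scores G_t, then let the
    attention-aware object semantics attend over the image features."""
    def __init__(self, num_objects: int = 22, sem_dim: int = 64, img_dim: int = 512,
                 embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.obj_semantics = nn.Embedding(num_objects, sem_dim)  # one-hot -> semantic embedding
        self.q_proj = nn.Linear(sem_dim, embed_dim)
        self.kv_proj = nn.Linear(img_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, obj_feats, img_feats, doa_scores):
        # obj_feats:  (B, N, C_obj)  DETR object features S_t
        # img_feats:  (B, 49, 512)   flattened 7x7 ResNet map I_t
        # doa_scores: (B, N)         object-target association scores G_t from the DOA graph
        s_hat = obj_feats * doa_scores.unsqueeze(-1)                              # Eq. (1)
        d = self.obj_semantics.weight.unsqueeze(0) * doa_scores.unsqueeze(-1)     # attention-aware semantics D
        q = self.q_proj(d)                                                        # queries from D
        kv = self.kv_proj(img_feats)                                              # keys/values from I_t
        i_hat, _ = self.attn(q, kv, kv)                                           # Eqs. (2)-(4)
        return s_hat, i_hat
```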
Finally, the attention-aware object features b St and image features b It are concatenated with the previous action embedding PA to obtain the output ST of the search thinking network. Navigation Thinking Network Target-Oriented Memory Graph (TOMG) In contrast to search thinking, navigation thinking requires the ability to memorize, locate and navigate to the target. Thus, we design a target-oriented memory graph (TOMG) as the input feature M. As shown in Figure 2, the TOMG is composed of the visited target-visible nodes. Each node feature m \u2208R1\u00d79 is concatenated by three parts: the target bounding box, the target con\ufb01dence and the agent\u2019s state (position and angle). This target-oriented method of storing information about visited nodes uses 400\u00d7 less storage than the methods used in previous works (Fukushima et al. 2022; Zhu et al. 2021). Since the agent cannot obtain its own absolute position and orientation in unknown environments, the stored coordinates take the starting position as the origin and the starting orientation as the coordinate axis. Target-visible nodes are \ufb01ltered by a con\ufb01dence threshold cf. Finally, to reduce the storage redundancy, only the L closest targetvisible nodes to the current node in the path are stored. If the number of target-visible nodes is less than L, the remaining nodes are \ufb01lled with values of 0. Egocentric Coordinate Transformation In the above section, we mentioned that the agent\u2019s position (xi, yi) and angle (\u03b8i, \u03b2i) are calculated relative to the starting position \fFC TCN Dilation rate = 1 TCN Dilation rate = 2 ... ... ... Aggregator Kernel Flatten Target Pool Circle Padding Dilation rate = 0 Matrix multiply Element-wise multiply FC FC Figure 3: A detailed explanation of the target-aware multiscale aggregator (TAMSA). We \ufb01rst use the multi-scale TCNs to obtain aggregator kernels that aggregate the targetoriented memory graph with L nodes into graph with \u00a3 nodes. Then, the aggregated features allocate attention to the channel dimension using the target semantics. We describe the circle padding method applied in our TCNs below the \ufb01gure. (x0, y0) and angle (\u03b80, \u03b20). However, as the agent navigates during each step, the decisions (e.g., rotate right) are made relative to the agent\u2019s own coordinate system. Therefore, as shown in Figure 2, we convert the coordinates of each node in the TOMG to the coordinate system of the current node (xc, yc, \u03b8c, \u03b2c) at each step: (e xi, e yi) = (xi, yi) \u2212(xc, yc) (e \u03b8x i , e \u03b2x i ) = sin((\u03b8i, \u03b2i) \u2212(\u03b8c, \u03b2c)) (e \u03b8y i , e \u03b2y i ) = cos((\u03b8i, \u03b2i) \u2212(\u03b8c, \u03b2c)) i \u2208\u2206M (5) where \u2206M represents the index collection of target-visible nodes. To ensure that the angle and position coordinates have the same order of magnitude, we use sin and cos to normalize the angle coordinates to [\u22121, 1]. After this egocentric coordinate transformation, we obtain egocentric TOMG features f M \u2208RL\u00d711. Target-Aware Multi-Scale Aggregator (TAMSA) To encode navigation thinking into the network, we design a target-aware multi-scale aggregator (TAMSA) to aggregate the egocentric TOMG feature f M into an implicit representation NT. In contrast to typical methods that use transformers or temporal convolutions as encoders, we devise a unique dynamic encoder that better leverages the memory graph features, as described below. 
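Before the aggregator itself is detailed, the TOMG bookkeeping and the egocentric transform of Eq. (5) can be sketched as below. The function name, the assumption that angles are stored in degrees (AI2-THOR convention), and the zero-padding to a default of L = 40 stored nodes are illustrative choices; only the 9-dimensional node layout and the L × 11 egocentric output follow the text.

```python
import numpy as np


def egocentric_transform(nodes, current_pose, L=40):
    """Sketch of the egocentric TOMG transform (Eq. 5).

    `nodes` is an (n, 9) array of visited target-visible nodes:
    [bbox(4), confidence(1), x, y, theta, beta], stored relative to the
    starting pose. `current_pose` = (x_c, y_c, theta_c, beta_c). Angles are
    assumed to be in degrees and are normalised to [-1, 1] via sin/cos.
    Returns an (L, 11) array padded with zeros. Names are illustrative.
    """
    x_c, y_c, th_c, be_c = current_pose
    out = np.zeros((L, 11), dtype=np.float32)
    n = min(len(nodes), L)
    for i in range(n):
        bbox_conf = nodes[i, :5]                     # target bbox + detection confidence
        x, y, th, be = nodes[i, 5:9]                 # pose relative to the starting node
        d_th, d_be = np.deg2rad(th - th_c), np.deg2rad(be - be_c)
        out[i] = np.concatenate([
            bbox_conf,
            [x - x_c, y - y_c],                      # position relative to the current node
            [np.sin(d_th), np.sin(d_be)],            # relative angles (sin)
            [np.cos(d_th), np.cos(d_be)],            # relative angles (cos)
        ])
    return out
```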
First, to improve the feature expression ability of the navigation thinking network, we use fully connected (FC) layers to map the features f M to higher dimensional spaces. Inspired by some advanced works (Dosovitskiy et al. 2021; Liu et al. 2021) on vision transformers, we add layer normalization between the two FC layers to stabilize the forward input distribution and backpropagation gradient (Xu et al. 2019). The encoding details can be formulated as follows: Y = \u03b4(LN(f MW M1)W M2) (6) where \u03b4 denotes the ReLU function, LN denotes layer normalization, and W M1 \u2208R11\u00d716 and W M2 \u2208R16\u00d732 are learnable parameters. Then, a multi-scale dynamic kernel is calculated to re\ufb01ne the target orientation features into implicit nodes. As shown in Figure 3, we use three temporal convolution networks (TCNs) with different dilation rates d to generate three dynamic kernels with distinct scales. It is worth noting that the TCN with d = 0 degenerates to an FC layer. In the early stages of the \u201cnavigate to\u201d phase, the TOMG contains fewer valid nodes; thus, the boundary degradation caused by zero padding has a greater impact. Accordingly, inspired by (Zhang, Hu, and Wang 2022), we design the circle padding (CP) which \ufb01lls the sequence edge with the features at the other end of the sequence (Figure 3). The different scale kernels are added after multiplying by the learnable parameter wd: H(l) = 2 X d=0 wd( X j\u2208\u03a8 Y (l + j \u2217d) \u2217fd(j) + bd) (7) where H = {H(1), \u00b7 \u00b7 \u00b7 , H(L)}, l is the central node of the convolution kernel, \u03a8 refers to the set of offsets in the neighborhood considering convolution conducted on the central node, Y (\u00b7) takes out the node features in Y , and fd and bd denote the weights and biases in the convolution kernel with dilation rate d. The multi-scale dynamic kernel H \u2208RL\u00d7\u00a3 re\ufb01nes Y \u2208RL\u00d732 to e Y \u2208R\u00a3\u00d732. Intuitively, the mappings between the observation data and target azimuth differ when searching for different targets. For example, when looking for a TV, even if the TV is located far from the agent, the agent can clearly identify the target and obtain a larger target bounding box; however, when looking for a mobile phone, the agent obtains a smaller target bounding box, even if the agent is close to the mobile phone. Therefore, we enhance the TAMSA representation by considering the target semantic information. To achieve this goal, the one-hot target index E is encoded to the same channel dimension as e Y through two FC layers, whose result is channel-wise multiplied with e Y to get the target-aware feature representation b Y : b Y = HT Y \u2299\u03b4(\u03b4(EW E1)W E2) (8) Finally, to obtain the \ufb01nal output NT of the navigation thinking network, we \ufb02atten b Y from R\u00a3\u00d732 to R1\u00d732\u00a3 and use an FC layer to reduce the output dimension. Furthermore, we add residual connections to ensure the stability of the feature transfer process. NT = \u03b4(Flatten(b Y )W Y ) + 1 \u00a3 \u00a3 X l=1 b Y (l) (9) A dropout layer is added before the output to reduce over\ufb01tting in the navigation thinking network. Adaptive Fusion (AF) of Dual Thinking Networks Search thinking and navigation thinking have different work strategies according to the navigation progress. During the \f\u201csearch for\u201d phase, since there are no visited target-visible nodes, NT is an all-zero matrix. 
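To make the aggregation pipeline of Eqs. (6)-(9) concrete before the fusion discussion continues, here is a minimal sketch that builds the multi-scale dynamic kernel with circular padding, applies the target-aware channel gate, and produces NT with the flatten-plus-residual head. Kernel sizes, the realization of the d = 0 branch as a pointwise convolution, and all module names are assumptions; the layer widths (11 → 16 → 32) and the number of implicit nodes (£ = 3) follow the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TAMSASketch(nn.Module):
    """Sketch of the target-aware multi-scale aggregator (Eqs. 6-9)."""

    def __init__(self, num_targets=22, n_implicit=3):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(11, 16), nn.Linear(16, 32)
        self.ln = nn.LayerNorm(16)
        # Multi-scale kernel generators; d = 0 degenerates to a pointwise map.
        self.tcn0 = nn.Conv1d(32, n_implicit, kernel_size=1)
        self.tcn1 = nn.Conv1d(32, n_implicit, kernel_size=3, dilation=1)
        self.tcn2 = nn.Conv1d(32, n_implicit, kernel_size=3, dilation=2)
        self.scale_w = nn.Parameter(torch.ones(3))           # learnable w_d
        self.target_gate = nn.Sequential(nn.Linear(num_targets, 32), nn.ReLU(),
                                         nn.Linear(32, 32), nn.ReLU())
        self.out_fc = nn.Linear(32 * n_implicit, 32)

    def _circ_conv(self, conv, y, dilation):
        pad = dilation * (conv.kernel_size[0] - 1) // 2
        if pad > 0:                                          # circle padding
            y = F.pad(y, (pad, pad), mode="circular")
        return conv(y)

    def forward(self, tomg, target_onehot):
        # tomg: (B, L, 11) egocentric TOMG, target_onehot: (B, num_targets)
        y = F.relu(self.fc2(self.ln(self.fc1(tomg))))        # Eq. (6): (B, L, 32)
        yc = y.transpose(1, 2)                               # (B, 32, L) for Conv1d
        H = (self.scale_w[0] * self._circ_conv(self.tcn0, yc, 1) +
             self.scale_w[1] * self._circ_conv(self.tcn1, yc, 1) +
             self.scale_w[2] * self._circ_conv(self.tcn2, yc, 2))   # Eq. (7): (B, £, L)
        y_tilde = H @ y                                      # refine L nodes to £ implicit nodes
        y_hat = y_tilde * self.target_gate(target_onehot).unsqueeze(1)   # Eq. (8)
        nt = F.relu(self.out_fc(y_hat.flatten(1))) + y_hat.mean(dim=1)   # Eq. (9)
        return nt
```

The dynamic kernel H is produced from the memory graph itself, which is what distinguishes this aggregator from a fixed TCN or transformer encoder.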
Therefore, the navigation thinking network does not affect the action decision when the target has not yet been seen. During the \u201cnavigate to\u201d phase, to ensure navigation robustness, search thinking and navigation thinking work together to guide the action decision. As the number of visited target-visible nodes increases, navigation thinking gradually dominates. The fusion process of the two thinking methods can be expressed as: DT = (LN(Concat(NT, ST)))W (10) where W is a learnable parameter matrix that adaptively adjusts the proportion of the two thinking networks, and LN is demonstrated to be signi\ufb01cantly bene\ufb01cial to the generalizability of the model. Policy Learning Following the previous works (Mirowski et al. 2017; Fang et al. 2021), we treat this task as a reinforcement learning problem and utilize the asynchronous advantage actor-critic (A3C) algorithm (Mnih et al. 2016). However, in the search thinking network, the complex multi-head attention calculations are dif\ufb01cult to directly learn by the reinforcement learning (Du, Yu, and Zheng 2021); thus, we use imitation learning to pretrain the search thinking network. We divide the continuous action process into step-by-step action predictions and teach the agent to rely on only object associations to determine actions without considering historical navigation information. After pretraining, we obtain a search thinking network with a basic object association ability. Finally, the search thinking network and navigation thinking network are jointly trained via reinforcement learning. Experiment Experimental Setup Dataset AI2-Thor (Kolve et al. 2017) is our main experimental platform, which includes 30 different \ufb02oorplans for each of 4 room layouts: kitchen, living room, bedroom, and bathroom. For each scene type, we use 20 rooms for training, 5 rooms for validation, and 5 rooms for testing. Evaluation Metrics We use the success rate (SR), success weighted by path length (SPL) (Anderson et al. 2018a), and our proposed success weighted by navigation ef\ufb01ciency (SNE) metrics to evaluate our method. SR is formulated as SR = 1 F PF i=1 Suci, where F is the number of episodes and Suci indicates whether the i-th episode succeeds. SPL considers the path length more comprehensively and is de\ufb01ned as SPL = 1 F PF i=1 Suci L\u2217 i max(Li,L\u2217 i ), where Li is the path length taken by the agent and L\u2217 i is the theoretical shortest path. SNE considers the navigation ef\ufb01ciency during the \u201dnavigate to\u201d phase and is de\ufb01ned as SNE = 1 F F X i=1 Suci Li Lnav i + 1 (11) where Lnav i is the path length in the \u201dnavigate to\u201d phase. To ensure that the denominator is nonzero, we use Lnav i + 1 as the denominator in the above equation. Implementation Details We train our model with 18 workers on 2 RTX 2080Ti Nvidia GPUs. The dropout rate and target-visible \ufb01lter cf in our model are set to 0.3 and 0.4, respectively. The number of implicit nodes \u00a3 in TAMSA is set to 3. We report the results for all targets (ALL) and for a subset of targets (L \u22655) with optimal trajectory lengths greater than 5. Ablation Experiments Baseline Similar to (Dang et al. 2022), our baseline model adopts the features concatenated from the image branch (from ResNet18), object branch (from DETR) and previous action branch as the environment perception encoding. Next, an LSTM network is used to model the temporal implicit features. The \ufb01rst row in Table 1 shows the performance of our baseline. 
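For reference before the ablation discussion continues, the three metrics defined above (SR, SPL and SNE, including the +1 in the SNE denominator of Eq. (11)) can be computed as in the sketch below; the episode-record layout is an illustrative assumption.

```python
def navigation_metrics(episodes):
    """Sketch of the SR / SPL / SNE metrics as defined in the text.

    `episodes` is a list of dicts with keys: 'success' (0/1), 'path_len' (L_i),
    'shortest_len' (L*_i) and 'nav_len' (L^nav_i, the "navigate to" phase
    length). The dict layout is illustrative; the formulas follow the paper.
    """
    F_ = len(episodes)
    sr = sum(e['success'] for e in episodes) / F_
    spl = sum(e['success'] * e['shortest_len'] / max(e['path_len'], e['shortest_len'])
              for e in episodes) / F_
    sne = sum(e['success'] * e['path_len'] / (e['nav_len'] + 1)   # +1 keeps the denominator nonzero
              for e in episodes) / F_
    return sr, spl, sne
```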
It is worth noting that since we adopt the object features extracted by DETR, the capability of our baseline model is already close to some SOTA methods with FasterRCNN object detector (Ren et al. 2015). Dual Adaptive Thinking The purpose of dual adaptive thinking is to dynamically use two distinct thinking methods to ensure that the agent performs well during each stage. As shown in Table 1, the model with search thinking outperforms the baseline with the gains of 4.56/7.64, 1.57/2.25 and -0.38/0.69 in SR, SPL and SNE (ALL/L \u22655, %). The search thinking network enables the agent to quickly locate the object through object associations; however, because the relative position of the target is not estimated, the SPL metric is limited by redundant paths in the \u201cnavigate to\u201d phase. Adaptively incorporating our proposed navigation thinking into the search thinking improves the SR, SPL and SNE by 6.49/7.85, 3.89/4.89 and 18.3/26.87 (ALL/L \u22655, %). The results prove that the navigation thinking improves the agent\u2019s performance on various indicators by optimizing the path during the \u201dnavigate to\u201d phase. Moreover, the last two rows in Table 1 show that the fusion of the dual thinking networks considerably improves the \ufb01nal model effect. Navigation Thinking Network The navigation thinking network includes three key modules: the target-oriented memory graph (TOMG), the egocentric coordinate transformation module and the target-aware multi-scale aggregator (TAMSA). Rows 4 through 7 in Table 1 show the ablation results on the three modules. The navigation thinking network without the TAMSA increases the SR and SNE by 2.22/2.16 and 7.59/10.45, but decreases the SPL by 3.03/3.2 (ALL/L \u22655, %). TAMSA improves the SPL back by re\ufb01ning the introduction of navigation thinking. Although the use of TOMG alone does not directly improve the performance of the various indicators, the simpli\ufb01ed and highly abstract storage features in the TOMG facilitate the subsequent feature re\ufb01nement and thinking integration. Figure 4 displays various metrics and computation speeds while using different storage features (TOMG, object and image) and maximum stored steps L. Image features with the most redundant information perform the worst. Compared with object features, the target-oriented characteristic in the TOMG considerably improves the SNE. Most importantly, the TOMG is substantially less complex than \fTable 1: Ablation results on each module in the three sub-networks: search, navigate and fusion. ID Search Thinking Navigation Thinking Fusion ALL (%) L \u22655 (%) Associate Pretrain TOMG Egocentric TAMSA AF LN SR SPL SNE SR SPL SNE 1 71.34 43.47 121.91 60.72 42.18 110.73 2 \u2713 74.89 44.98 122.32 67.12 44.01 111.81 3 \u2713 \u2713 75.90 45.04 121.53 68.36 44.43 111.42 4 \u2713 \u2713 \u2713 76.02 43.15 126.15 68.66 42.19 119.22 5 \u2713 \u2713 \u2713 \u2713 78.12 42.01 129.12 70.52 41.23 121.87 6 \u2713 \u2713 \u2713 \u2713 78.04 45.67 131.24 70.34 45.30 125.98 7 \u2713 \u2713 \u2713 \u2713 \u2713 80.88 45.71 135.44 73.42 45.91 133.11 8 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 81.34 47.53 138.12 74.89 47.76 132.82 9 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 82.39 48.93 139.83 76.21 49.32 138.29 Table 2: Ablation experiments on each module in the targetaware multi-scale aggregator (TAMSA). Dynamic: dynamic aggregator kernel, TA: target-aware, MS: multi-scale, CP: circle padding. 
Method ALL (%) L \u22655 (%) SR SPL SNE SR SPL SNE Average Pooling 79.67 45.14 134.21 73.33 45.21 126.94 Transformer 77.23 43.24 132.83 71.44 42.97 127.11 TCN 78.66 43.41 133.69 74.42 43.91 125.18 TAMSA A1 Dynamic 80.15 44.26 135.02 71.80 45.33 130.87 A2 A1+TA 81.20 46.71 136.25 74.17 47.52 134.91 A3 A1+MS 81.14 47.28 135.22 73.44 48.31 136.49 A4 A2+MS 81.32 47.41 137.26 75.88 49.36 138.12 A5 A4+CP 82.39 48.93 139.83 76.21 49.32 138.29 other storage methods in terms of calculation and memory costs. In terms of computational ef\ufb01ciency, when the number of stored steps is set to 40, compared with storing object and image features, the TOMG improves the computational speed by 41.43% and 47.69%, respectively. In terms of memory usage, the TOMG requires only 0.64% and 0.29% of the memory required by the object and image features. Furthermore, as the number of stored steps increases, the computational burden of the TOMG storage method remains essentially constant. Target-Aware Multi-Scale Aggregator (TAMSA) In contrast to the commonly used encoders such as TCNs and transformers, our proposed TAMSA uses a dynamic kernel to achieve automatic sequence length reduction without applying global pooling at the end. As shown in Table 2, the use of either TCNs or transformers exhibits worse performance than using average pooling directly. The results indicate that these commonly used encoders are not suitable for our navigation thinking network. Based on the initial aggregator model (A1), the target-aware (TA) property brings improvements of 1.05/2.37, 2.45/2.19, 1.23/4.04, and the multi-scale (MS) property brings improvements of 0.99/1.64, 3.02/2.98, 0.20/5.62 in SR, SPL and SNE (ALL/L \u22655, %). The two properties optimize the agent\u2019s route in the \u201cnavigate to\u201d phase during long-distance navigation but have little effect on the route during short-distance navigation. To address this issue, we utilize circle padding (CP) to prevent serious information loss in limited targetvisible nodes, thereby optimizing the path during shortdistance navigation. Fusion of Dual Thinking Modules When humans complete a task, multiple thinking approaches often cooperate with each other rather than operating independently. Therefore, how to effectively integrate the two separately designed thinking networks through a uni\ufb01ed network is crucial. Rows 4 to 7 in Table 1 show that after the navigation thinking network is added to the model, the SPL improves less than the SNE. This gap suggests that although navigation thinking optimizes the \u201cnavigate to\u201d phase, it has a negative impact on the \u201csearch for\u201d phase. Our proposed adaptive fusion (AF) method solves the above problem and improves the SR and SPL metrics by 0.46/1.47 and 1.82/1.85 (ALL/L >= 5, %). Moreover, since the feature modal and encoding methods used by the search thinking and navigation thinking are completely different, directly concatenating the two thinking features can lead to backpropagation instability and considerable over\ufb01tting. Therefore, layer normalization (LN) is used after the two thinking features are concatenated, improving the SR, SPL and SNE by 1.05/1.32, 1.40/1.56 and 1.71/5.47, respectively (ALL/L >= 5, %). Comparisons with the State-of-the-Art Methods Our DAT method is compared with three categories of relevant SOTA methods, as shown in Table 3. (I) Methods with search thinking. These methods have lower SNE because they do not apply navigation thinking. 
Compared to the recently proposed DOA (Dang et al. 2022) method, our DAT method brings 8.07/8.33, 8.66/8.96 and 18.97/29.10 improvements in SR, SPL and SNE (ALL/L ≥ 5, %). (II) Methods with long-term memory. These methods theoretically depend on historical information to model environments more clearly; however, methods such as OMT (Fukushima et al. 2022) store overcomplicated features, increasing the difficulty of network learning. Therefore, the current memory modules do not exert their full strength. (III) Modular methods based on semantic maps. The strong interpretability of semantic maps enables agents to quickly navigate to the target location after seeing the target; thus, their "navigate to" phase efficiency (SNE) is higher. Nevertheless, these methods require considerable efforts to explore the environment, resulting in an inability to visually capture targets as quickly as search thinking methods. The SPL of the current state-of-the-art modular method PONI (Ramakrishnan et al. 2022) is 11.66/12.92 lower (ALL/L ≥ 5, %) than that of our DAT method.

Figure 4: We compare the metrics in paths with L ≥ 5 while storing different features and path lengths for the navigation thinking. Panels: SR (%), SPL (%), SNE (%) and Time for Each Step (ms), each plotted against the maximum stored steps L; curves: TOMG (dimension = 9), Object (dimension = 22 × 66), Image (dimension = 7 × 7 × 64). The red five-pointed star indicates the choices that optimize the given indicator.

Table 3: Comparison with SOTA methods on the AI2-Thor dataset. More experiments on the other datasets are in the supplementary material (Table 6).

ID   Method           ALL (%): SR / SPL / SNE      L ≥ 5 (%): SR / SPL / SNE
I    Random           4.12 / 2.21 / 7.91           0.21 / 0.08 / 8.14
     SAVN (2019)      63.12 / 37.81 / 102.44       52.01 / 34.94 / 94.51
     ORG (2020)       67.32 / 37.01 / 111.88       58.13 / 35.90 / 101.29
     HOZ (2021)       68.53 / 37.50 / 110.79       60.27 / 36.61 / 106.37
     VTNet (2021)     72.24 / 44.57 / 115.99       63.19 / 43.84 / 109.80
     DOA (2022)       74.32 / 40.27 / 120.86       67.88 / 40.36 / 109.19
II   OMT (2022)       71.13 / 37.27 / 124.31       61.94 / 38.19 / 117.98
III  SSCNav (2021)    77.14 / 35.09 / 138.22       71.73 / 34.33 / 136.87
     PONI (2022)      78.58 / 37.27 / 141.17       72.92 / 36.40 / 137.26
IV   Ours (DAT)       82.39 / 48.93 / 139.83       76.21 / 49.32 / 138.29

Qualitative Analysis

The route in the environment is visualized using only the search thinking and our DAT method, as shown in Figure 5. In the "search for" phase, the two methods predict the same path since our DAT method has not yet applied the proposed navigation thinking network. After entering the "navigate to" phase, the navigation thinking in the DAT method begins to assist the agent's decision-making by structuring the memory graph of the target information. At the decision key frame, the target cannot be seen. The method with only search thinking selects the right room with richer object information. In contrast, our DAT method uses the target relative position representation generated by the navigation thinking to select the correct left room. The correct decision in this key frame leads to the successful navigation of our DAT method. The route visualization in more scenarios is shown in the supplementary material (Figure 7).
Multiple Adaptive Thinking in Embodied AI The dual adaptive thinking (DAT) network proposed in this paper provides key inspiration for future research. In the object navigation task, dual adaptive thinking can be Target Target: Laptop \"Navigate to\" Phase \"Search for\" Phase Decision Key Frame First Target-Visible Frame pillow bed Figure 5: Visualization on the RoboTHOR test environment. Pink arrows: \u201csearch for\u201d phase; red arrows: route with only the search thinking applied in the \u201cnavigate to\u201d phase; blue arrows: route with both the search and navigation thinking applied in the \u201cnavigate to\u201d phase. Both routes differ at the decision key frame. extended to multiple adaptive thinking. The environment modeling thinking, object state understanding thinking, and other types of thinking can be introduced in multiple adaptive thinking models. Furthermore, multiple adaptive thinking is not limited to object navigation tasks. In other embodied AI tasks, such as embodied question answering (EQA) (Das et al. 2018) and visual language navigation (VLN) (Anderson et al. 2018b), agents can use multiple thinking approaches to more \ufb02exibly address real-world problems." + }, + { + "url": "http://arxiv.org/abs/2204.04421v2", + "title": "Unbiased Directed Object Attention Graph for Object Navigation", + "abstract": "Object navigation tasks require agents to locate specific objects in unknown\nenvironments based on visual information. Previously, graph convolutions were\nused to implicitly explore the relationships between objects. However, due to\ndifferences in visibility among objects, it is easy to generate biases in\nobject attention. Thus, in this paper, we propose a directed object attention\n(DOA) graph to guide the agent in explicitly learning the attention\nrelationships between objects, thereby reducing the object attention bias. In\nparticular, we use the DOA graph to perform unbiased adaptive object attention\n(UAOA) on the object features and unbiased adaptive image attention (UAIA) on\nthe raw images, respectively. To distinguish features in different branches, a\nconcise adaptive branch energy distribution (ABED) method is proposed. We\nassess our methods on the AI2-Thor dataset. Compared with the state-of-the-art\n(SOTA) method, our method reports 7.4%, 8.1% and 17.6% increase in success rate\n(SR), success weighted by path length (SPL) and success weighted by action\nefficiency (SAE), respectively.", + "authors": "Ronghao Dang, Zhuofan Shi, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen", + "published": "2022-04-09", + "updated": "2022-07-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "INTRODUCTION The object navigation [16, 24, 25, 38] requires an agent to navigate in unseen environments and find a specified target by executing a sequence of actions. The agent can only use visual observation and *Corresponding author. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. 
MM \u201922, October 10\u201314, 2022, Lisbon, Portugal. \u00a9 2022 Association for Computing Machinery. ACM ISBN 978-1-4503-9203-7/22/10...$15.00 https://doi.org/10.1145/XXXXXX.XXXXXX target information as inputs and predict actions by deep reinforcement learning in each step. Most prior works [20, 22, 23] have directly used global image features to recursively train agents based on egocentric observations. Nevertheless, if the target is invisible, it is difficult to efficiently navigate to the object position with these methods. Therefore, some recent works [6, 37] have focused on establishing specific object prior knowledge to better understand complete scenes. Yang et al. [37] and Du et al. [6] used the graph convolutional networks (GCNs) to learn graph representations of object features. However, the agent may not treat all items equally, preferring to focus on more conspicuous objects. Therefore, we propose the object attention bias problem, which refers to the situations in which agents focus on objects with high visibility during navigation. For example, when looking for a cell phone, an agent may focus on a closer, clearer TV while ignoring a distant, blurrier laptop that has a higher correlation with the cell phone (Figure 1). In principle, this phenomenon occurs mainly because neural networks prefer features with higher information entropy. Therefore, the direct use of graph convolutions to implicitly learn relationships between objects can lead to considerable object attention bias. To address the above problem, object attention should be decoupled from the object features, so we let the agent explicitly learn the attention weights of objects according to the different targets. Concretely, we propose a learnable two-layer directed object attention (DOA) graph that represents the relationships between objects in view and the target to address the object attention bias problem. The first graph layer is an intrinsic attention graph (IG) that establishes the basic object attention relationships. The second graph layer is a view adaptive attention graph (VAG) that changes based on the observations of the agent. The DOA graph, which is produced by the weighted average of the two graph layers, can guide an agent to reasonably assign attention to each object in view. Based on the DOA graph, we further design two novel crossattention modules: the unbiased adaptive image attention (UAIA) module and the unbiased adaptive object attention (UAOA) module. As illustrated in Figure 1, the target is the arrival point, and the objects in view are the starting points for the directed edges in the DOA graph. The weight of a directed edge from an object node to a target node is the object\u2019s attention while searching for this target. The UAOA module uses object attention in the DOA graph to directly distribute weights to different objects. The UAIA module uses a multihead attention mechanism to determine the areas in the arXiv:2204.04421v2 [cs.CV] 8 Jul 2022 \fMM \u201922, October 10\u201314, 2022, Lisbon, Portugal. Ronghao Dang, et al. Next view Current view \uff1f \uff01 MoveAhead RotateLeft RotateRight LookUp LookDown Done MoveAhead RotateLeft RotateRight LookUp LookDown Done clear unclear Target 5% 20% Target Object Object Current view Next view DOA Graph Television Laptop Cellphone Television Laptop Cellphone Given higher attention\u00a0 Given lower attention\u00a0 Target node in DOA graph Object node in DOA graph The agent is looking for cellphone ... 
clear unclear Object Biased \u00a0Attention\u00a0 Unbiased \u00a0Attention\u00a0 Attention score Figure 1: In the previous biased method [6], since the nearby TV is clearer than the distant laptop, the agent focuses more on the TV, resulting in an incorrect decision. Our proposed unbiased DOA graph learns that the cell phone is more likely to be near the laptop according to the target and the current view, resulting in an correct decision. global image that need attention. We follow the operation described in [6] to concatenate the image branch, object branch and previous action branch into a vector. However, just as the temporal sequence in the transformer needs positional encodings to acquire position information [31, 34, 41], different branches need tokens to represent their identities. In consequence, we propose an adaptive branch energy distribution (ABED) method, which allows the network to distinguish different branches with the addition of few parameters. Then, in accordance with [40], we input the concatenated features of the multimodel information into a long short-term memory (LSTM) network for learning and optimize the model with the A3C reinforcement learning strategy. Extensive experiments on the AI2-Thor [13] dataset show that our method not only eliminates the object attention bias problem but also increases the state-of-the-art (SOTA) method by 7.4%, 8.1%, 17.6% in the success rate (SR), success weighted by path length (SPL) and success weighted by action efficiency (SAE) [40]. Our method performs well inasmuch as the agent has a more comprehensive understanding of object relationships and an unbiased attention distribution. In summary, our contributions are as follows: \u2022 We identify the prevalent object attention bias problem in object navigation, which occurs due to differences in object visibility. \u2022 We propose the directed object attention (DOA) graph, which addresses the problem of object attention bias and provides the agent with a better understanding of the internal relationships between objects. \u2022 The unbiased adaptive object attention (UAOA) and unbiased adaptive image attention (UAIA) modules use the DOA graph to allocate more reasonable attention resources to object features and global image features. \u2022 We propose the parameter-free adaptive branch energy distribution (ABED) method to optimize branch feature aggregation. 2 RELATED WORKS 2.1 Object Navigation In an object navigation task, an agent is given goal-oriented instruction to search for a specified object. The primitive object navigation models make decisions by directly processing input images with convolutional neural networks (CNNs). Recently, researchers [3, 8, 37] have found that using only CNN features in raw images cannot achieve the desired results. An increasing number of researchers have begun to use methods such as object detection to extract high-level semantic features to better guide the agent\u2019s movement. Yang et al. [37] initially use graph convolutional networks (GCNs) to learn the object prior knowledge. Gao et al. [8] utilize cross-modality knowledge reasoning (CKR) to apply an external knowledge graph in the agent\u2019s navigation. Zhang et al. [40] propose the hierarchical object-to-zone (HOZ) graph to guide an agent in a coarse-to-fine manner. In our work, we conduct the online-learning directed object attention (DOA) graph to serve as prior knowledge, which provides more unbiased object attention. 
Figure 2: Two potential reasons for object attention bias: (a) endogenous cause: unreasonable GCN adjacency matrix [6]; (b) exogenous cause: the total number of times each object is identified shows a long-tailed distribution. (Both panels are plotted over the 22 object categories, against adjacency weight in (a) and detection count in (b).)

2.2 Debiasing Methods

Bias problems are widespread in machine learning [11, 33], especially in the field of scene understanding [15, 32]. However, no previous work has analyzed and addressed the bias problem in object navigation tasks. Current debiasing methods can be roughly categorized into three types: (i) data augmentation or re-sampling [9, 17, 18], (ii) unbiased learning through the design of training networks and loss functions [19, 39], and (iii) disentangling biased representations from unbiased representations [2, 21]. Our proposed DOA graph method belongs to the second category. However, unlike common debiasing methods [19, 39], our method explicitly models the bias module, which essentially solves the object attention bias problem in object navigation.

3 OBJECT ATTENTION BIAS

3.1 Bias Discovered in GCNs

The object GCN used in [6, 40] attempts to aggregate the extracted object features by using the correlations between their bounding boxes. However, it is too difficult to learn a reasonable adjacency matrix, which is crucial for GCNs [42]. As shown in Figure 2 (a), objects that are easy to observe, such as the floor lamp and fridge, have larger weights, while objects that are difficult to observe, such as the alarm clock and cell phone, have smaller weights. This kind of object attention bias is caused by over-focusing on coordinates and confidence scores. To reduce this bias, the agent should focus on what and where the object is rather than on its size and clarity. The GCN ablation experiment, shown in Table 1, demonstrates that the GCN module only slightly improves the navigation ability of the agent, implying that a biased GCN module cannot effectively and unbiasedly model the relationships among objects.

3.2 Duality of Bias

We cannot criticize biased training outright, because our visual world is inherently biased; people simply prefer to trust objects that are easier to identify. For example, when looking for a plate, we often pre-search for a cabinet instead of a knife or fork. In fact, some biased decisions allow agents to avoid odd mistakes and make the overall actions more robust. However, excessive bias can cause an agent to overfit the training dataset, resulting in the agent ignoring critical navigation information. In general, there are two reasons for object attention bias: (i) the endogenous cause (Figure 2 (a)), the network's own preference for objects with richer visual features; and (ii) the exogenous cause (Figure 2 (b)), inequalities in the frequency with which each object appears in the dataset. This paper mainly addresses the endogenous cause, without changing the number of objects in the dataset. Our proposed DOA graph corrects the agent's neglect of small and ambiguous objects (bad bias) while maintaining the agent's trust in high-confidence objects (good bias).
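As a purely diagnostic illustration of the endogenous bias discussed here (and of the kind of per-object average-attention statistics reported later in Figure 4 of this paper), one could tally the attention each object category receives over test episodes; a heavily long-tailed result signals the bias. The helper below is a hypothetical sketch, not part of the proposed method.

```python
import numpy as np


def average_object_attention(attention_logs, num_objects=22):
    """Illustrative tally of per-object average attention across decision steps.

    `attention_logs` is assumed to be a list of (num_objects,) arrays, one per
    decision step, holding the attention assigned to each object category
    (e.g. rows of a GCN adjacency matrix or DOA scores). A strongly skewed
    output indicates the object attention bias discussed above.
    """
    acc = np.zeros(num_objects, dtype=np.float64)
    for att in attention_logs:
        acc += np.asarray(att, dtype=np.float64)
    avg = acc / max(len(attention_logs), 1)
    return avg / avg.sum()          # normalise so models are comparable
```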
4 PROPOSED METHOD Our goal is to propose an attention allocation strategy without object attention bias and a reasonable cross-branch aggregation method for a target-driven visual navigation system. To achieve this goal, our navigation system contains four major components, as illustrated in Figure 3: (i) directed object attention (DOA) graph generator; (ii) unbiased adaptive object attention (UAOA) module; (iii) unbiased adaptive image attention (UAIA) module; (iv) adaptive branch energy distribution (ABED) method. (ii) and (iii) are based on the object attention in the DOA graph. 4.1 Task Definition Initially, the agent is given a random target from a group of \ud835\udc41 objects \ud835\udc43= {\ud835\udc43\ud835\udc4e\ud835\udc5b, \u00b7 \u00b7 \u00b7 ,\ud835\udc36\ud835\udc52\ud835\udc59\ud835\udc59\ud835\udc5d\u210e\ud835\udc5c\ud835\udc5b\ud835\udc52}, and starts from a random state \ud835\udc60= {\ud835\udc65,\ud835\udc66,\ud835\udf03\ud835\udc5f,\ud835\udf03\u210e} in a random house. Here, (\ud835\udc65,\ud835\udc66) and (\ud835\udf03\ud835\udc5f,\ud835\udf03\u210e) represent the coordinates and angles of the agent. After the state and target are initialized, the agent begins to navigate based on its own observations. At each timestamp \ud835\udc61, the agent only receives the RGB image \ud835\udc5c\ud835\udc61from a single perspective and the target \ud835\udc5d\u2208\ud835\udc43. According to \ud835\udc5c\ud835\udc61and \ud835\udc5d, the agent learns a navigation strategy \ud835\udf0b(\ud835\udc4e\ud835\udc61|\ud835\udc5c\ud835\udc61, \ud835\udc5d), where \ud835\udc4e\ud835\udc61\u2208\ud835\udc34= {\ud835\udc40\ud835\udc5c\ud835\udc63\ud835\udc52\ud835\udc34\u210e\ud835\udc52\ud835\udc4e\ud835\udc51; \ud835\udc45\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc52\ud835\udc3f\ud835\udc52\ud835\udc53\ud835\udc61; \ud835\udc45\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc52\ud835\udc45\ud835\udc56\ud835\udc54\u210e\ud835\udc61; \ud835\udc3f\ud835\udc5c\ud835\udc5c\ud835\udc58\ud835\udc37\ud835\udc5c\ud835\udc64\ud835\udc5b; \ud835\udc3f\ud835\udc5c\ud835\udc5c\ud835\udc58\ud835\udc48\ud835\udc5d; \ud835\udc37\ud835\udc5c\ud835\udc5b\ud835\udc52} and \ud835\udc37\ud835\udc5c\ud835\udc5b\ud835\udc52is the output if the agent believes it has navigated to the target location. The successful episode is defined as: an agent selects the termination action \ud835\udc37\ud835\udc5c\ud835\udc5b\ud835\udc52when the distance between the agent and the target is less than 1.5 meters and the target is in the field of view. 4.2 Directed Object Attention (DOA) Graph DOA graph is a graphical representation of the correlation degree between identifiable objects and the target. According to the analysis presented in Section 3.1, previous GCN-based methods cannot learn unbiased relationships between objects in object navigation tasks. In contrast, our proposed DOA graph provides an explicit yet flexible representation of the relationships between objects. The DOA graph is obtained through a weighted summation of the intrinsic object graph and the view adaptive graph. 4.2.1 Intrinsic Object Graph. The intrinsic object graph represents the ubiquitous intrinsic relationships between objects. For example, a mouse and a laptop have a strong inherent relationship. Here, we define a learnable matrix \ud835\udc3a\ud835\udc5b\u2208R\ud835\udc41\u00d7\ud835\udc41to represent the intrinsic object graph, where \ud835\udc41is the number of objects. 
As the agent uses reinforcement learning to continuously explore different environments, G_n gradually tends to become reasonable and stable. G_n is fixed during testing. Each edge between objects in G_n is bidirectional. The end node of an edge represents the target object p, while the start node of the edge represents the object that needs to be assigned attention. Therefore, the weight of each directed edge represents the intrinsic correlation between an object q ∈ P and the target object p ∈ P. Each row of G_n is normalized using SoftMax to ensure that the sum of all edge values connected to a target node is 1.

Figure 3: Model overview. PI: pixel index embedding, TS: target semantics, OS: object semantics, CF: confidence filter. VAG: view adaptive graph, IG: intrinsic graph, Avg Pool: average pooling. Our model consists of three branches: Image branch, Object branch, and Action branch. We perform UAIA on Image branch and UAOA on Object branch, respectively, based on directed object attention (DOA) graph. The joint features of the re-integrated branches after adaptive branch energy distribution (ABED) are input into an LSTM network to predict the next action.

4.2.2 View Adaptive Graph. The view adaptive graph represents the real-time adaptive relationships between objects. The agent generates the global image features I_t ∈ R^{M×512} (from ResNet18 [10]) and object features S_t ∈ R^{N×518} (from Faster-RCNN [29]) after observing the current scene. Here, M is the number of pixels in the image feature map. S_t is concatenated from the object visual features S^{visual}_t, object position features S^{pos}_t, confidence S^{conf}_t and target indicator S^{target}_t. Since low-confidence object detection results often contain excessive noise, we filter S_t with the confidence criterion S^{conf}_t > threshold to obtain \tilde{S}_t. To introduce target information into the image features I_t, we encode the object index using the one-hot method and two fully connected layers to obtain OI. The input image query \tilde{I}_t ∈ R^{1×576} can be formulated as:

\tilde{I}_t = \mathrm{Concat}\left( \frac{1}{M} \sum_{i=1}^{M} I^i_t,\ OI_p \right)   (1)

where OI_p refers to the p-th object (target) semantics.
1 \ud835\udc40 \u00cd\ud835\udc40 \ud835\udc56=1 \ud835\udc3c\ud835\udc56 \ud835\udc61 is squeezing global spatial information into a channel descriptor using global average pooling. The agent grounds the current overall environmental characteristics e \ud835\udc3c\ud835\udc61to the object features e \ud835\udc46\ud835\udc61via mutihead score calculation [34] to produce the view adaptive graph \ud835\udc3a\ud835\udc63\u2208R\ud835\udc41\u00d71: e \ud835\udc44\ud835\udc56= e \ud835\udc3c\ud835\udc61e \ud835\udc4a\ud835\udc44 \ud835\udc56 e \ud835\udc3e\ud835\udc56= e \ud835\udc46\ud835\udc61e \ud835\udc4a\ud835\udc3e \ud835\udc56 \ud835\udc56= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc41\ud835\udc3b (2) \u009d \u210e\ud835\udc52\ud835\udc4e\ud835\udc51\ud835\udc56= \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65( e \ud835\udc44\ud835\udc56e \ud835\udc3e\ud835\udc47 \ud835\udc56 \u221a \ud835\udc3b\ud835\udc37 ) (3) \ud835\udc3a\ud835\udc63= \ud835\udc36\ud835\udc5c\ud835\udc5b\ud835\udc50\ud835\udc4e\ud835\udc61(\u009d \u210e\ud835\udc52\ud835\udc4e\ud835\udc511, \u00b7 \u00b7 \u00b7 , \u009d \u210e\ud835\udc52\ud835\udc4e\ud835\udc51\ud835\udc41\ud835\udc3b) e \ud835\udc4a\ud835\udc42 (4) where \ud835\udc3b\ud835\udc37and \ud835\udc41\ud835\udc3bdenote the hidden dimensionality and number of heads in the muti-head score calculation. e \ud835\udc4a\ud835\udc44 \ud835\udc56 \u2208R576\u00d7\ud835\udc3b\ud835\udc37and e \ud835\udc4a\ud835\udc3e \ud835\udc56 \u2208R518\u00d7\ud835\udc3b\ud835\udc37map e \ud835\udc3c\ud835\udc61and e \ud835\udc46\ud835\udc61to the same dimension \ud835\udc3b\ud835\udc37. e \ud835\udc4a\ud835\udc42\u2208 R\ud835\udc41\ud835\udc3b\u00d71 aggregate the various subgraphs calculated by the scaled dot-product of the multiple heads to generate the graph \ud835\udc3a\ud835\udc63. 4.2.3 Object Attention. According to the searched target \ud835\udc5d, we take the edge weight \ud835\udc3a\ud835\udc5d \ud835\udc5b\u2208R\ud835\udc41\u00d71 from the intrinsic object graph \ud835\udc3a\ud835\udc5b with the \ud835\udc5d-th node as the end point. With the weighted summation of the intrinsic weight and the view adaptive weight, we can obtain the attention: \ud835\udc3a\ud835\udc61= \ud835\udc3a\ud835\udc5d \ud835\udc5b\ud835\udc64\ud835\udc5b+ \ud835\udc3a\ud835\udc63\ud835\udc64\ud835\udc63 (5) that each object requires. The learnable \ud835\udc64\ud835\udc5band \ud835\udc64\ud835\udc63are the weights of the two graphs. 4.3 Unbiased Adaptive Attention 4.3.1 Unbiased Adaptive Object Attention (UAOA). The purpose of unbiased adaptive object attention (UAOA) is to use the object attention \ud835\udc3a\ud835\udc61obtained in Section 4.2 to assign attention to different object features. To balance the information integrity and computational complexity, we apply two fully connected layers around the ReLU [26] to map the object features \ud835\udc46\ud835\udc61to a lower-dimensional representation \ud835\udc46\ud835\udc61\u2032 \u2208R\ud835\udc41\u00d764. 
Finally, the attention weight of each object \ud835\udc5eis multiplied by the object features \ud835\udc46\ud835\udc61\u2032: b \ud835\udc46\ud835\udc5e \ud835\udc61= \ud835\udc39\ud835\udc60\ud835\udc50\ud835\udc4e\ud835\udc59\ud835\udc52(\ud835\udc46\ud835\udc5e \ud835\udc61 \u2032,\ud835\udc3a\ud835\udc5e \ud835\udc61) = \ud835\udc46\ud835\udc5e \ud835\udc61 \u2032\ud835\udc3a\ud835\udc5e \ud835\udc61 \ud835\udc5e= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc41 (6) where \ud835\udc46\ud835\udc61\u2032 = {\ud835\udc461 \ud835\udc61 \u2032,\ud835\udc462 \ud835\udc61 \u2032, \u00b7 \u00b7 \u00b7 ,\ud835\udc46\ud835\udc41 \ud835\udc61 \u2032}, \ud835\udc46\ud835\udc5e \ud835\udc61 \u2032 is the low dimensional feature of the \ud835\udc5e-th object. \ud835\udc3a\ud835\udc5e \ud835\udc61is the weight of the \ud835\udc5e-th object in DOA graph at time \ud835\udc61. \fUnbiased Directed Object Attention Graph for Object Navigation MM \u201922, October 10\u201314, 2022, Lisbon, Portugal. Table 1: We obtain a strong and concise baseline by comparing the roles of different modules in previous methods [6, 28, 40] (%). These modules include the object GCN (GCN), the row image branch (Image), the zone branch (Zone), the room branch (Room), the previous action branch (Action) and the object branch (Object). GCN Image Zone Room Action Object All L>=5 Episode Time SR SPL SAE SR SPL SAE \u2713 \u2713 \u2713 \u2713 \u2713 \u2713 68.53\u00b11.32 37.50\u00b11.01 25.98\u00b10.96 60.27\u00b11.67 36.61\u00b11.12 27.68\u00b11.31 0.239 \u2713 \u2713 \u2713 \u2713 \u2713 68.11\u00b11.61 37.80\u00b10.87 26.07\u00b11.26 58.15\u00b11.19 36.30\u00b10.97 27.55\u00b10.88 0.177 \u2713 \u2713 \u2713 \u2713 \u2713 63.95\u00b11.37 32.04\u00b11.69 26.72\u00b10.87 54.38\u00b10.56 30.38\u00b11.27 26.52\u00b11.21 0.145 \u2713 \u2713 \u2713 \u2713 \u2713 67.51\u00b10.97 37.81\u00b11.13 25.75\u00b11.04 58.31\u00b11.22 36.69\u00b10.87 27.83\u00b11.36 0.223 \u2713 \u2713 \u2713 \u2713 \u2713 67.61\u00b11.16 37.60\u00b11.32 26.11\u00b11.88 59.21\u00b11.45 36.62\u00b10.99 28.35\u00b11.17 0.235 \u2713 \u2713 \u2713 \u2713 \u2713 64.46\u00b11.74 34.60\u00b11.14 25.49\u00b11.09 54.68\u00b10.67 33.01\u00b10.93 25.81\u00b11.12 0.243 \u2713 \u2713 \u2713 \u2713 \u2713 48.26\u00b11.11 19.88\u00b11.31 17.58\u00b11.57 35.80\u00b11.12 17.23\u00b11.65 16.66\u00b11.18 0.160 \u2713 \u2713 \u2713 69.14\u00b10.67 37.87\u00b10.88 27.77\u00b10.95 60.42\u00b11.11 37.28\u00b10.66 28.70\u00b10.67 0.174 4.3.2 Unbiased Adaptive Image Attention (UAIA). We use the DOA graph to focus more attention on the areas around important objects in the image. We use the encoded object index \ud835\udc42\ud835\udc3crather than word embeddings trained by other networks to represent the object semantic information (what the object is). The object index embeddings learned by our network are more coupled to our dataset than word embeddings trained on other tasks. We use the object attention \ud835\udc3a\ud835\udc61to generate the attention-aware object semantics \ud835\udc37: \ud835\udc37= \ud835\udc41 \u2211\ufe01 \ud835\udc5e=1 I(\ud835\udc46\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc53 \ud835\udc61 > \ud835\udc61\u210e\ud835\udc5f\ud835\udc52\ud835\udc60\u210e\ud835\udc5c\ud835\udc59\ud835\udc51)\ud835\udc3a\ud835\udc5e \ud835\udc61\u00d7 \ud835\udc42\ud835\udc3c\ud835\udc5e (7) where I(\u00b7) is the indicator function. 
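The DOA attention of Eqs. (1)-(5) and the UAOA re-weighting of Eq. (6) can be sketched as follows. The dimensions quoted in the text (N = 22 classes, 518-d object features, a 1 × 576 image query, N_H scoring heads of hidden size H_D) are kept; the hidden width of the two-layer reduction, the initialization of the intrinsic graph, and all names are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DOAGraphUAOASketch(nn.Module):
    """Sketch of the DOA graph (Eqs. 1-5) and UAOA re-weighting (Eq. 6)."""

    def __init__(self, N=22, d_obj=518, d_query=576, H_D=64, N_H=4):
        super().__init__()
        self.G_n = nn.Parameter(torch.zeros(N, N))       # learnable intrinsic graph
        self.W_q = nn.Linear(d_query, H_D * N_H, bias=False)
        self.W_k = nn.Linear(d_obj, H_D * N_H, bias=False)
        self.W_o = nn.Linear(N_H, 1, bias=False)         # aggregates the N_H sub-graphs
        self.w_n = nn.Parameter(torch.tensor(0.5))       # weight of the intrinsic graph
        self.w_v = nn.Parameter(torch.tensor(0.5))       # weight of the view adaptive graph
        self.reduce = nn.Sequential(nn.Linear(d_obj, 128), nn.ReLU(), nn.Linear(128, 64))
        self.H_D, self.N_H = H_D, N_H

    def forward(self, S_t, I_query, target_idx):
        # S_t: (B, N, d_obj) filtered object features, I_query: (B, d_query) image query,
        # target_idx: (B,) index p of the searched target.
        B, N, _ = S_t.shape
        q = self.W_q(I_query).view(B, 1, self.N_H, self.H_D)
        k = self.W_k(S_t).view(B, N, self.N_H, self.H_D)
        scores = (q * k).sum(-1) / self.H_D ** 0.5        # per-head scaled dot products, (B, N, N_H)
        G_v = self.W_o(F.softmax(scores, dim=1))          # Eqs. (2)-(4): view adaptive graph, (B, N, 1)
        G_n_p = F.softmax(self.G_n, dim=1)[target_idx]    # row p of the row-normalised intrinsic graph
        G_t = G_n_p.unsqueeze(-1) * self.w_n + G_v * self.w_v    # Eq. (5): object attention
        S_hat = self.reduce(S_t) * G_t                    # Eq. (6): UAOA re-weighting
        return S_hat, G_t
```

The same G_t returned here is what the following UAIA module reuses, so the two attention paths stay consistent with one another.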
The confidence filter \ud835\udc46\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc53 \ud835\udc61 > \ud835\udc61\u210e\ud835\udc5f\ud835\udc52\ud835\udc60\u210e\ud835\udc5c\ud835\udc59\ud835\udc51indicates that only the semantics of objects whose detection confidence exceeds \ud835\udc61\u210e\ud835\udc5f\ud835\udc52\ud835\udc60\u210e\ud835\udc5c\ud835\udc59\ud835\udc51are used. Unlike the convolution operation in CNNs, the muti-head attention operation is permutation-invariant [5], which cannot leverage the order of the tokens in an input sequence. To mitigate this issue, our work adds absolute positional encodings to each pixel in the input sequence, such as in [5, 34], enabling order awareness. We use standard learned 1D pixel index embedding \ud835\udc43\ud835\udc3c\u2208R\ud835\udc40\u00d764 since we have not observed any significant performance gains when using more complicated 2D position embeddings. The resulting sequence of embedded vectors serves as input to the encoder. \ud835\udc3c\ud835\udc61\u2032 = \ud835\udeff(\ud835\udeff(\ud835\udc3c\ud835\udc61\ud835\udc4a1)\ud835\udc4a2) + \ud835\udc43\ud835\udc3c (8) where \ud835\udeffrefers to the ReLU function, \ud835\udc4a1 \u2208R512\u00d7128 and \ud835\udc4a2 \u2208 R128\u00d764 reduce the dimension of the global image features \ud835\udc3c\ud835\udc61and the pixel index embedding \ud835\udc43\ud835\udc3cis introduced to generate positionaware image features \ud835\udc3c\ud835\udc61\u2032 \u2208R\ud835\udc40\u00d764. We use the attention-aware object semantics \ud835\udc37as the query and the position-aware image features \ud835\udc3c\ud835\udc61\u2032 as the key and value in the muti-head image attention (\ud835\udc3b\ud835\udc37= 64, \ud835\udc41\ud835\udc3b= 4) to generate the final image embedding b \ud835\udc3c\ud835\udc61. \ud835\udc44\ud835\udc56= \ud835\udc37\ud835\udc4a\ud835\udc44 \ud835\udc56 \ud835\udc3e\ud835\udc56= \ud835\udc3c\ud835\udc61\u2032\ud835\udc4a\ud835\udc3e \ud835\udc56 \ud835\udc49\ud835\udc56= \ud835\udc3c\ud835\udc61\u2032\ud835\udc4a\ud835\udc49 \ud835\udc56 \ud835\udc56= 1, 2, \u00b7 \u00b7 \u00b7 , \ud835\udc41\ud835\udc3b (9) \u210e\ud835\udc52\ud835\udc4e\ud835\udc51\ud835\udc56= \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65( \ud835\udc44\ud835\udc56\ud835\udc3e\ud835\udc47 \ud835\udc56 \u221a \ud835\udc3b\ud835\udc37 )\ud835\udc49\ud835\udc56 (10) b \ud835\udc3c\ud835\udc61= \ud835\udc36\ud835\udc5c\ud835\udc5b\ud835\udc50\ud835\udc4e\ud835\udc61(\u210e\ud835\udc52\ud835\udc4e\ud835\udc511, \u00b7 \u00b7 \u00b7 ,\u210e\ud835\udc52\ud835\udc4e\ud835\udc51\ud835\udc41\ud835\udc3b)\ud835\udc4a\ud835\udc42 (11) 4.4 Adaptive Branch Energy Distribution (ABED) Previous works [6, 40] have input the directly concatenated embedded features of the three branches (object, image and previous action branches) into LSTM networks to establish time series models. However, there are two issues with this simple feature stitching: (i) It is difficult for the network to distinguish between the different information types of the three branches during training; (ii) It is difficult to guarantee that the data distribution of the concatenated vector is rational. Therefore, we propose the adaptive branch energy distribution (ABED) method to provide additional tokens to each branch without introducing extra parameters, and optimize the data distribution of the concatenated vector. 
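Before turning to ABED, the UAIA pathway of Eqs. (7)-(11) can be sketched as below; the confidence filter, the 512 → 128 → 64 image projection with a learned pixel-index embedding, and the multi-head attention with D as the query follow the text, while the size of the object-semantic embedding and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UAIASketch(nn.Module):
    """Sketch of unbiased adaptive image attention (Eqs. 7-11)."""

    def __init__(self, N=22, M=49, d_img=512, d_sem=64, n_heads=4, threshold=0.6):
        super().__init__()
        self.obj_semantics = nn.Embedding(N, d_sem)              # OI: learned object index embedding
        self.w1, self.w2 = nn.Linear(d_img, 128), nn.Linear(128, d_sem)
        self.pixel_index = nn.Parameter(torch.zeros(M, d_sem))   # PI: learned 1D positional encoding
        self.attn = nn.MultiheadAttention(d_sem, n_heads, batch_first=True)
        self.threshold = threshold

    def forward(self, I_t, G_t, confidences):
        # I_t: (B, M, d_img) image features, G_t: (B, N, 1) DOA object attention,
        # confidences: (B, N) detection confidences.
        mask = (confidences > self.threshold).float().unsqueeze(-1)     # confidence filter
        D = (G_t * mask).transpose(1, 2) @ self.obj_semantics.weight    # Eq. (7): (B, 1, d_sem)
        I_prime = F.relu(self.w2(F.relu(self.w1(I_t)))) + self.pixel_index   # Eq. (8)
        I_hat, _ = self.attn(query=D, key=I_prime, value=I_prime)       # Eqs. (9)-(11)
        return I_hat.squeeze(1)
```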
We establish a learnable vector \ud835\udc45= {\ud835\udc5f1,\ud835\udc5f2,\ud835\udc5f3} with only three parameters. The final output vector \ud835\udc3b\ud835\udc61can be expressed as: \ud835\udc3b\ud835\udc61= \ud835\udc39\ud835\udc5d\ud835\udc64(\ud835\udc36\ud835\udc5c\ud835\udc5b\ud835\udc50\ud835\udc4e\ud835\udc61(\ud835\udc5f1 b \ud835\udc46\ud835\udc61,\ud835\udc5f2b \ud835\udc3c\ud835\udc61,\ud835\udc5f3\ud835\udc43\ud835\udc34)) (12) where \ud835\udc43\ud835\udc34is the previous action embedding and \ud835\udc39\ud835\udc5d\ud835\udc64refers to the pointwise convolution. The operation, which is similar to the energy distribution of the input signal, uses the significant differences between the feature distributions of the three branches to explicitly distinguish the branches and learn a more reasonable distribution of the concatenated vector. Although our method is quite simple, experiments have proven that it can significantly improve the navigation ability of the agent. When compared to the process of directly adding branch semantic (BS) embeddings to each branch, this method is unique in that it can provide the model with a strong scene understanding without destroying the other modules in complex models. 4.5 Policy Learning Previous works [35, 37] have used direct observations to learn a strategy \ud835\udf0b(\ud835\udc4e\ud835\udc61|\ud835\udc5c\ud835\udc61, \ud835\udc5d). Our work uses unbiased object graph relationships to learn an LSTM action policy \ud835\udf0b(\ud835\udc4e\ud835\udc61|\ud835\udc3b\ud835\udc61, \ud835\udc5d), where \ud835\udc3b\ud835\udc61is the joint representation of the global image embedding \ud835\udc5f1b \ud835\udc3c\ud835\udc61, object embedding \ud835\udc5f2 b \ud835\udc46\ud835\udc61and previous action embedding \ud835\udc5f3\ud835\udc43\ud835\udc34. Based on previous works [7, 20], we treat this task as a reinforcement learning problem and utilize the asynchronous advantage actor-critic (A3C) algorithm [22], which applies policy gradients to assist the agent in choosing an appropriate action \ud835\udc4e\ud835\udc61in the high-dimensional action \fMM \u201922, October 10\u201314, 2022, Lisbon, Portugal. Ronghao Dang, et al. Table 2: The ablation study of the different components of the UAIA module (%). CF: confidence filter, PI: pixel index. UAIA ALL L>=5 CF PI SR SPL SAE SR SPL SAE 70.48 37.13 27.63 63.12 37.09 29.16 \u2713 72.12 37.45 27.98 64.23 37.67 29.13 \u2713 71.12 38.91 27.23 64.01 39.34 30.33 \u2713 \u2713 72.28 39.13 28.91 65.19 39.98 30.22 Table 3: The ablation study of the different modules in the DOA graph (%). IG: intrinsic graph, VAG: view adaptive graph, MA: multi-head attention, TS: target semantics. \u2019\u2014\u2014\u2019 indicates that the view adaptive graph is not used. IG VAG ALL L>=5 MA TS SR SPL SAE SR SPL SAE 70.77 38.51 27.98 63.83 38.04 28.74 \u2713 \u2014\u2014 73.67 39.98 30.23 65.79 39.23 31.98 \u2713 72.12 39.29 28.10 65.55 39.21 30.91 \u2713 \u2713 73.91 39.21 30.21 67.78 39.34 31.89 \u2713 \u2713 67.43 36.29 27.01 60.12 35.90 26.88 \u2713 \u2713 \u2713 74.32 40.27 29.79 67.88 40.36 32.56 space \ud835\udc34. In accordance with the done reminder operation presented in [40], we use the target detection confidence to explicitly enhance the probability of the \ud835\udc37\ud835\udc5c\ud835\udc5b\ud835\udc52action. 5 EXPERIMENTS 5.1 Experimental Setup 5.1.1 Dataset. We choose the AI2-Thor [13] dataset and its corresponding simulator as the experimental platform. 
The AI2-Thor dataset includes 30 different floorplans for each of 4 room layouts: kitchen, living room, bedroom, and bathroom. For each scene type, we use 20 rooms for training, 5 rooms for validation, and 5 rooms for testing. There are 22 kinds of objects (\ud835\udc41= 22) that agents can recognize, and we ensure that there are at least 4 kinds of objects in each room [35]. 5.1.2 Evaluation Metrics. We use the success rate (SR), success weighted by path length (SPL) [1], and success weighted by action efficiency (SAE) [40] metrics to evaluate our method. SR indicates the success rate of the agent in completing the task, which is formulated as \ud835\udc46\ud835\udc45= 1 \ud835\udc38 \u00cd\ud835\udc38 \ud835\udc56=1 \ud835\udc46\ud835\udc62\ud835\udc50\ud835\udc56, where \ud835\udc38is the number of episodes and \ud835\udc46\ud835\udc62\ud835\udc50\ud835\udc56indicates whether the \ud835\udc56-th episode succeeds. SPL considers the path length more comprehensively and is defined as \ud835\udc46\ud835\udc43\ud835\udc3f= 1 \ud835\udc38 \u00cd\ud835\udc38 \ud835\udc56=1 \ud835\udc46\ud835\udc62\ud835\udc50\ud835\udc56 \ud835\udc3f\u2217 \ud835\udc56 \ud835\udc5a\ud835\udc4e\ud835\udc65(\ud835\udc3f\ud835\udc56,\ud835\udc3f\u2217 \ud835\udc56) , where \ud835\udc3f\ud835\udc56is the path length taken by the agent and \ud835\udc3f\u2217 \ud835\udc56is the theoretical shortest path. SAE considers the effects of unnecessary rotations and is defined as \ud835\udc46\ud835\udc34\ud835\udc38= 1 \ud835\udc38 \u00cd\ud835\udc38 \ud835\udc56=1 \ud835\udc46\ud835\udc62\ud835\udc50\ud835\udc56 \u00cd\ud835\udc47 \ud835\udc61=0 I(\ud835\udc4e\ud835\udc56 \ud835\udc61\u2208\ud835\udc34\ud835\udc50\u210e\ud835\udc4e\ud835\udc5b\ud835\udc54\ud835\udc52) \u00cd\ud835\udc47 \ud835\udc61=0 I(\ud835\udc4e\ud835\udc56 \ud835\udc61\u2208\ud835\udc34\ud835\udc4e\ud835\udc59\ud835\udc59) , where I(\u00b7) is the indicator function, \ud835\udc4e\ud835\udc56 \ud835\udc61is the agent\u2019s action at time \ud835\udc61in episode \ud835\udc56, \ud835\udc34\ud835\udc4e\ud835\udc59\ud835\udc59is the set of all actions, and \ud835\udc34\ud835\udc50\u210e\ud835\udc4e\ud835\udc5b\ud835\udc54\ud835\udc52is the set of actions that can change the position of the agent. Table 4: The ablation study of different branch fusion methods in ABED (%). CA: cross-attention, BS: branch samentics, ED: energy distribution method. CA ABED ALL L>=5 BS ED SR SPL SAE SR SPL SAE 69.14 37.87 27.77 60.42 37.28 28.70 \u2713 71.07 40.02 26.30 64.04 39.50 29.57 \u2713 71.56 38.66 28.99 64.12 38.89 29.07 \u2713 73.44 39.55 29.12 67.01 39.34 30.55 \u2713 67.82 33.50 27.74 61.63 34.23 29.10 \u2713 74.32 40.27 29.79 67.88 40.36 32.56 Table 5: The ablation study of the different components in overall model (%). UAOA UAIA ABED ALL L>=5 SR SPL SAE SR SPL SAE 69.14 37.87 27.77 60.42 37.28 28.70 \u2713 73.21 39.20 29.29 66.37 39.12 31.18 \u2713 72.28 39.13 28.91 65.19 39.98 30.22 \u2713 70.77 38.51 27.98 63.83 38.04 28.74 \u2713 \u2713 73.58 39.11 29.46 67.42 39.04 31.74 \u2713 \u2713 \u2713 74.32 40.27 29.79 67.88 40.36 32.56 5.1.3 Implementation Details. We train our model with 18 workers on 2 RTX 2080Ti Nvidia GPUs. The Adam optimizer [12] is used to update the network parameters with a learning rate of 10\u22124. We introduce a dropout of 0.3 to the muti-head attention mechanism and global image embedding. The confidence threshold for filtering objects is set to 0.6. Faster-RCNN [29] is fine-tuned on 50% [40] of the training data from the AI2-Thor dataset. 
For evaluation, we report the average over 3 test runs. We report the results for all targets (ALL) and for the subset of targets (L>=5) whose optimal trajectory length is at least 5.

5.2 Strong and Concise Baseline

The methods presented in [6, 28, 40] include five kinds of branches (image, zone, room, previous action and object) that are concatenated into a vector before being input into the LSTM network. To evaluate the influence of the five branches separately on the object navigation task, we eliminate each branch in turn, as shown in Table 1. The object branch has the greatest impact on the experimental results: removing it drops SR, SPL and SAE by 20.27/24.27, 17.62/19.38 and 8.40/11.02 (ALL/L>=5, %). This confirms the importance of object features in the object navigation task; from this perspective, adequately modeling object relationships is necessary. Moreover, the image and previous action branches also have significant impacts on the agent's navigation ability, whereas the room branch and the zone branch have little effect on SR and SPL. Accordingly, our baseline retains only the image, object and previous action branches. The last row in Table 1 shows our simplified baseline. The removal of the object GCN, room branch and zone branch reduces the agent's exploration time by 27.2% while leaving SR, SPL and SAE essentially unchanged. This more concise baseline allows us to observe the advantages of the added modules more clearly.

Figure 4: (a) Comparison of the object average attention of the biased HOZ [40] and our unbiased DOA method. (b) Comparison of the object average attention (intrinsic attention + adaptive attention) across different scenes in the DOA graph.

5.3 Ablation Experiments

We verify the effectiveness of each proposed module with extensive experiments. Table 2, Table 3, Table 4 and Table 5 show the results of ablation experiments on the UAIA, the DOA graph, the ABED and the overall model. More ablation experiments are provided in the Supplementary Material.

5.3.1 UAIA. As shown in Table 2, the UAIA module has two main components: a confidence filter (CF), which eliminates outlier objects, and a pixel index (PI) embedding, which increases the order-awareness of the global image features. The UAIA module with CF (confidence > 0.6) outperforms the UAIA module without CF by 1.64/1.11 in SR (ALL/L>=5, %). This result shows that reducing the influence of irrelevant objects in the UAIA module effectively improves navigation ability. The UAIA module with PI embedding outperforms the UAIA module without PI embedding by 1.78/2.25 in SPL (ALL/L>=5, %), demonstrating that adding positional encoding to each pixel in the image optimizes the agent's path. The two components complement each other to improve the effect of the UAIA module.
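To make the two UAIA components concrete, the following minimal sketch shows one way they could be realized; the module names, tensor shapes, and the additive positional scheme are illustrative assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class PixelIndexEmbedding(nn.Module):
    # Adds a learned embedding per pixel index to flattened image features,
    # giving subsequent attention layers a sense of spatial order (the PI idea).
    def __init__(self, num_pixels=49, dim=256):  # e.g. a 7x7 feature map (assumed shape)
        super().__init__()
        self.pos = nn.Embedding(num_pixels, dim)

    def forward(self, feats):  # feats: (batch, num_pixels, dim)
        idx = torch.arange(feats.size(1), device=feats.device)
        return feats + self.pos(idx)

def confidence_filter(detections, threshold=0.6):
    # Drop detected objects whose detector confidence does not exceed the threshold (the CF idea).
    return [d for d in detections if d["confidence"] > threshold]

The threshold of 0.6 matches the value reported in the implementation details above.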
5.3.2 DOA Graph. As shown in Table 3, we explore the roles of the intrinsic graph (IG) and the view adaptive graph (VAG) in the DOA graph. The DOA graph with only IG outperforms the DOA graph without IG by 2.90/1.96, 1.47/1.19 and 2.25/3.24 in SR, SPL and SAE (ALL/L>=5, %). In contrast, the DOA graph with only VAG degrades the agent's navigation ability. This result implies that fully adaptive learning of object relationships is difficult on its own and requires extensive prior knowledge to narrow the learning domain. In the calculation of the VAG, multi-head attention (MA) yields more reasonable object graph relationships, while the use of target semantics (TS) makes the agent more specific about the target, thereby improving navigation efficiency.

5.3.3 ABED. In Table 4, we compare two methods of providing identities for the branches: branch semantics (BS), which embeds one-hot vectors, and energy distribution (ED), which uses only three energy distribution coefficients. Without cross-attention (UAIA and UAOA), adding either BS or ED to the original model clearly improves the agent's navigation ability. Notably, with cross-attention (UAIA and UAOA), adding BS causes the model's performance to collapse. In contrast, the simple ED method improves the complex model with cross-attention by 0.88/0.87, 0.72/1.02 and 0.67/2.01 in SR, SPL and SAE (ALL/L>=5, %). The results demonstrate the significant advantage of the ED method, which introduces only three parameters, in complex models. This is consistent with our intuition that, given the complexity of the object navigation task, the learning model should be kept simple; otherwise, the strong coupling of parameters between modules makes the overall model difficult to learn.

5.3.4 Overall Model. The ablation experiments on the overall model with UAOA, UAIA and ABED are shown in Table 5. Compared with our proposed baseline, applying the complete model increases SR, SPL and SAE by 5.18/7.46, 2.40/3.08 and 2.02/3.86 (ALL/L>=5, %). The results indicate that our method effectively guides agents to navigate in unseen environments. Compared with the UAIA and ABED methods, the UAOA method improves the model more significantly. This is because the UAOA method essentially solves the object attention bias problem, increasing the agent's understanding of the relationships between objects.

5.4 Elimination of Object Attention Bias

Object attention bias is the phenomenon in which objects with low visibility are ignored. Figure 4 (a) shows the average attention of the agent over all objects, before and after using the DOA graph, across all test floorplans. Without our DOA graph, the object attention suffers from a severe long-tail distribution: the average attention of the most popular object is more than ten times that of the most neglected one. Objects with high visibility, such as the floor lamp, fridge and sink, dominate the agent's decision-making, while objects with low visibility, such as the cell phone and remote control, cannot play their proper guiding roles. Figure 4 (b) shows that although the DOA method makes the average attention of each object similar across the entire dataset, a correct attention tendency remains in different scenes. Our DOA graph-based attention consists of two parts (Section 4.2): intrinsic attention and adaptive attention. The agent adjusts the two-part attention in different scenes to pay more attention to critical objects.
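As a simple illustration of the measurement behind Figure 4, the per-object average attention can be aggregated over test steps and its spread inspected. The input format below (a list of per-step attention vectors over the 22 object classes) is an assumption for illustration, not the actual logging format.

import numpy as np

def average_object_attention(step_attentions, class_names):
    # Average the agent's per-step object attention over an entire test run.
    att = np.mean(np.stack(step_attentions, axis=0), axis=0)  # shape: (num_classes,)
    ranking = sorted(zip(class_names, att.tolist()), key=lambda x: -x[1])
    # Ratio between the most and least attended object classes.
    spread = ranking[0][1] / max(ranking[-1][1], 1e-8)
    return ranking, spread

A spread well above 10 corresponds to the long-tail behavior reported for the biased model, whereas a flatter allocation (spread closer to 1) is what the DOA graph aims for.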
In summary, the proposed DOA graph significantly improves the rationality of attention allocation for most objects, indicating that the improvements in SR, SPL and SAE indeed come from solving the object attention bias problem. We emphasize that the DOA graph is a model-agnostic object attention representation method that can be applied to a variety of models and fusion modules.

Figure 5: Visualization in the testing environment. We show the results of searching for four different objects in four different scenes (Bathroom: light switch; Bedroom: book; Kitchen: coffee machine; Living room: bowl). The trajectory of the agent is indicated by green and blue arrows, where green is the beginning and blue is the end. The red value in each object detection box represents the attention weight of the object.

Table 6: Comparison with SOTA methods (%). More experiments are in Appendix D.1.

Method | ALL (SR / SPL / SAE) | L>=5 (SR / SPL / SAE)
Random | 4.12 / 2.21 / 0.43 | 0.21 / 0.08 / 0.05
SP [37] | 62.98 / 38.56 / 24.99 | 52.73 / 33.84 / 23.02
SAVN [35] | 63.12 / 37.81 / 20.98 | 52.01 / 34.94 / 23.01
ORG [6] | 67.32 / 37.01 / 25.17 | 58.13 / 35.90 / 27.04
HOZ [40] | 68.53 / 37.50 / 25.98 | 60.27 / 36.61 / 27.68
Ours (Baseline) | 69.14 / 37.87 / 27.77 | 60.42 / 37.28 / 28.70
Ours (DOA+ABED) | 74.32 / 40.27 / 29.79 | 67.88 / 40.36 / 32.56

5.5 Comparisons to the State-of-the-art

As shown in Table 6, we compare the test results of our method with those of similar methods on the AI2-Thor dataset. All indicators of random decision navigation are close to 0. Notably, our baseline model already outperforms the SOTA method by 0.61/0.15, 0.37/0.67 and 1.97/1.02 in SR, SPL and SAE (ALL/L>=5, %). The redundant operations in previous networks aggravate the object attention bias, which explains why removing branches in our baseline model facilitates learning. Finally, our model with the DOA graph-based modules and the ABED method outperforms the proposed baseline with gains of 5.18/7.46, 2.40/3.08 and 2.02/3.86 in SR, SPL and SAE (ALL/L>=5, %).

5.6 Qualitative Analysis

We visualize the navigation behavior of the agent in Figure 5. The rotation direction and stop timing are critical, as seen in the trajectories of the success and failure cases. These two decisions are mainly determined by the agent's interpretation of the scene at keyframes, where multiple objects can provide rich information. Our DOA graph-based method provides the agent with a more reasonable and unbiased attention allocation at these keyframes, allowing it to choose the correct rotation direction and stop timing. 6" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file