{ "url": "http://arxiv.org/abs/2404.16645v1", "title": "Tele-FLM Technical Report", "abstract": "Large language models (LLMs) have showcased profound capabilities in language\nunderstanding and generation, facilitating a wide array of applications.\nHowever, there is a notable paucity of detailed, open-sourced methodologies on\nefficiently scaling LLMs beyond 50 billion parameters with minimum\ntrial-and-error cost and computational resources. In this report, we introduce\nTele-FLM (aka FLM-2), a 52B open-sourced multilingual large language model that\nfeatures a stable, efficient pre-training paradigm and enhanced factual\njudgment capabilities. Tele-FLM demonstrates superior multilingual language\nmodeling abilities, measured by BPB on textual corpus. Besides, in both English\nand Chinese foundation model evaluation, it is comparable to strong\nopen-sourced models that involve larger pre-training FLOPs, such as Llama2-70B\nand DeepSeek-67B. In addition to the model weights, we share the core designs,\nengineering practices, and training details, which we expect to benefit both\nthe academic and industrial communities.", "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Large Language Models (LLMs) have been considered a remarkable approach for unsupervised learning, utilizing extensive data to achieve significant advancements. Large models based on decoder-only Transformers [64; 43] have demonstrated strong abilities on language understanding, generation, and in-context learning [10], et al.. Through downstream supervised fine-tuning (SFT) and task-specific alignments (e.g., Reinforcement Learning from Human Feedback, RLHF) [41], LLMs have led to significant progress in the development of dialogue assistant applications with their human-level multi-turn interaction capabilities [40]. Furthermore, LLMs have demonstrated complex cognitive abilities as reflected by code interpretation and completion [37], mathematical problem-solving [35], logical reasoning [69], and agent-like actions [9]. Recently, LLMs have also shown potential to facilitate a unified sequence-to-sequence modeling paradigm for multimodal learning by treating image, video, and audio signals all as token sequences [57; 30]. This positions LLMs as pivotal for progress towards Artificial General Intelligence (AGI) [11]. Inspired by the superior performances of proprietary applications [40; 6], a plethora of open-sourced LLMs has been publicly available for both the English [60; 61; 42; 27; 58] and Chinese [71; 5; 7; 33] communities. The open-sourced models typically vary in size from 7B to 70B parameters, with their performances improving with model sizes and training FLOPs, which is described as scaling laws [29; 23]. Open LLMs can be classified into foundation language models, SFT models, and RLHF models. \u2020Indicates equal contribution. *Corresponding authors. Technical Report. April 26, 2024 (v1) arXiv:2404.16645v1 [cs.CL] 25 Apr 2024 Tele-FLM Technical Report 2 PRE-TRAINING DATA Despite the growing prevalence and impressive evaluation performances, the high computational cost remains the major challenge in LLM development. 
In this study, we focus on alleviating the excessive computation by establishing a model-producing pipeline that streamlines the hyperparameter searching process, minimizes trial-and-error, and reduces restarts in training. For instance, the Llama technical report [60] reports using around 2,048 A100 GPUs for about 5 months, while a single Llama-65B training trial spanned only 21 days, constituting only 14% of the total GPU time. This indicates that open-source endeavors in pre-training LLMs may undergo redundant trial-and-error cycles that consume enormous computational resources. In contrast, in this work, we reduce the total time cost due to restarts and trial-and-error to negligible levels. We believe that sharing our detailed techniques, engineering practices, and training dynamics [20], especially for LLMs exceeding the 50B scale, could benefit the community as well as contribute to green AI.

In this report, we introduce Tele-FLM (aka FLM-2), an open multilingual LLM with 52 billion parameters, which is pre-trained from scratch on a 2.0 trillion token corpus comprising texts from English, Chinese, and various other languages. Tele-FLM inherits and extends the low-carbon techniques and fact-enhancing pre-training objectives from the FLM family [33]. The training of Tele-FLM encountered no instability issues other than hardware failures throughout the completed 2T tokens, and remains ongoing for more data. In addition to the model checkpoints, we release the details of data composition, model architecture, hyperparameter search, and the full pre-training dynamics.

We evaluate Tele-FLM across multiple English and Chinese benchmarks. Regarding English language modeling, Tele-FLM has better Bits-Per-Byte (BPB) than Llama2-70B [61], demonstrating strong compression capabilities. The model also achieves lower BPB than Llama3-70B [2] and Qwen1.5-72B [5] on Chinese corpora, showcasing its multilingual nature. With fewer English training tokens and a smaller model size, Tele-FLM matches Llama-65B and is comparable to Llama2-70B in English foundation model evaluation. As for Chinese foundation model evaluation, Tele-FLM matches the overall performance of larger multilingual models trained with a similar amount of data (e.g., DeepSeek-67B [7]). On certain tasks, it surpasses larger models trained with significantly more data (e.g., Qwen1.5-72B).

The remainder of this report is structured as follows: Section 2 delves into the specifics of pre-training data processing. Section 3 details our model architecture, tokenizer, infrastructure, training techniques, and hyperparameters. In Section 4, we illustrate the pre-training dynamics and conduct BPB-based evaluation and analysis. Benchmark evaluations in both English and Chinese are provided in Section 5. Section 6 discusses common issues and lessons learned. Section 7 reviews related literature. We conclude our work and look to the future in Section 8.

2 Pre-training Data

Our training dataset comprises a variety of domains, as detailed in Table 1. We build a custom pipeline on a Spark cluster for massive data processing and apply custom functions to each subset. The pipeline includes text extraction from HTML/WARC, cleaning and paragraph-level deduplication with heuristic rules, model-based quality filtering, and document-level deduplication with the MinHash [8] algorithm. We obtain 2T tokens after all the procedures, and the distribution ratio between English and Chinese data is roughly 2:1.
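As an illustration of the document-level deduplication step, the sketch below shows a self-contained MinHash similarity estimate over word shingles. It is a simplified stand-in, not the production pipeline: the shingle size, number of permutations, and SHA-1-based hashing are our assumptions, and a real deployment would add locality-sensitive hashing to avoid pairwise comparisons.

```python
import hashlib

NUM_PERM = 64   # number of hash "permutations" in the signature (assumption)
SHINGLE = 5     # word n-gram size for shingling (assumption)

def _hash(seed: int, shingle: str) -> int:
    # Seeded 64-bit hash derived from SHA-1; any good hash family works here.
    return int.from_bytes(hashlib.sha1(f"{seed}:{shingle}".encode()).digest()[:8], "big")

def minhash_signature(text: str) -> list:
    words = text.lower().split()
    shingles = {" ".join(words[i:i + SHINGLE])
                for i in range(max(1, len(words) - SHINGLE + 1))}
    # For each seed, keep the minimum hash over all shingles of the document.
    return [min(_hash(seed, s) for s in shingles) for seed in range(NUM_PERM)]

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    # Fraction of agreeing signature slots estimates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc_a = "large language models are trained on trillions of tokens from the web"
doc_b = "large language models are trained on trillions of tokens from web pages"
print(estimated_jaccard(minhash_signature(doc_a), minhash_signature(doc_b)))
```

Documents whose estimated similarity exceeds a chosen threshold (e.g., 0.8) would be collapsed to a single copy.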
We incorporate more English data because of its higher quality, especially regarding the WebText domain. Additionally, in line with the methodology of GPT-4, we collected some instruct data and incorporated it into our pre-training data after removing the test sets of common benchmark datasets using a strict n-gram-based method. We deliberately avoid "training on the test set" or any other benchmark-oriented trick.

Table 1: Pre-training data. For each subset of our 2T pre-training tokens, we detail the language, the sampling proportion, the number of epochs completed during training, and the disk size.

Domain | Language | Sampling Prop. | Epochs | Disk Size
WebText | en, zh | 75.21% | 1.0 | 5.9 TB
Code | code, zh | 9.81% | 1.0 | 528.1 GB
Book | en, zh | 7.17% | 0.8 | 647.6 GB
WorldKnowledge | multi., en, zh | 2.87% | 2.5 | 67.5 GB
QA | en, zh | 2.12% | 1.0 | 159.2 GB
AcademicPaper | en | 0.99% | 1.0 | 54.4 GB
Profession-Law | zh | 1.04% | 1.0 | 84.2 GB
Profession-Math | math | 0.62% | 2.0 | 6.1 GB
Profession-Patent | zh | 0.14% | 1.0 | 10.4 GB
Profession-Medical | zh | 0.02% | 1.0 | 1.2 GB
ClassicalChinese | zh | 0.02% | 2.5 | 0.5 GB

WebText. CommonCrawl (https://commoncrawl.org/) is often considered a repository containing diverse human experience and rich knowledge (especially long-tail knowledge). However, the high-quality sources in CommonCrawl are primarily concentrated in the English segment, with the Chinese content exhibiting relatively lower information density and quality. We use the latest CommonCrawl dumps from RedPajama [15] and incorporate WudaoCorpora [77] and similar Chinese-specific datasets to form a large web-text dataset. We apply custom heuristic rules and a FastText [28] classifier to filter out low-quality content, cross-deduplicate for each language, and up-sample/down-sample each subset with regard to data quality. The ratio of English to Chinese is approximately 2:1.

Code. We incorporate multiple GitHub-like code datasets and post-process them to filter out low-quality and duplicated content. In addition, we carefully assemble and curate a well-formed markdown dataset comprising Chinese technical articles.

Book. We collect books from various sources in both English and Chinese, such as RedPajama [15] and Gutenberg (https://www.gutenberg.org/), among others. We develop a series of cleaning steps to remove redundant formatting, garbled text, formula errors, duplicated paragraphs, and other unwanted content from the books. After interleaved deduplication at the document level, we finally obtain a high-quality book dataset. The ratio of English to Chinese is nearly 1:1.

WorldKnowledge. To enrich the model's knowledge base and common sense, we add Wikipedia dumps (https://dumps.wikimedia.org/) from 2024 to our training set, covering 22 languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, ja, nl, pl, pt, ro, ru, sl, sr, sv, uk, zh. We first process these dumps via the Online Language Modelling Dataset Pipeline [59] to clean up the format; then a meticulous multilingual cleaning function is applied to remove the references and subsequent content, which tend to be irrelevant to the main text.

QA. We use the StackExchange dataset provided by RedPajama-Data [15]. Furthermore, similar Chinese datasets are collected and incorporated into the training after filtering out QA pairs with low information content. The ratio of English to Chinese in this subset is roughly 1:2.
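Returning to the model-based quality filtering mentioned for the WebText domain above, the snippet below is an illustrative sketch of how a fastText classifier can be trained and used to score documents. The training file name, the "__label__hq"/"__label__lq" labels, the hyperparameters, and the 0.9 threshold are our assumptions for illustration; the report does not disclose the actual classifier setup.

```python
import fasttext  # pip install fasttext

# Hypothetical training file with one document per line, e.g.:
#   __label__hq <high-quality reference text>
#   __label__lq <spammy or boilerplate text>
model = fasttext.train_supervised(input="quality_train.txt", epoch=5, wordNgrams=2)

def keep(document: str, threshold: float = 0.9) -> bool:
    # fastText expects single-line input, so newlines are stripped first.
    labels, probs = model.predict(document.replace("\n", " "))
    return labels[0] == "__label__hq" and float(probs[0]) >= threshold

docs = ["A well-sourced encyclopedia-style paragraph about astronomy.",
        "CLICK HERE !!! best prices best prices best prices"]
filtered = [d for d in docs if keep(d)]
```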
AcademicPaper. We use the arXiv dataset collected and processed by RedPajama-Data. This dataset is processed following a Llama-like procedure, which mainly focuses on clearing useless or redundant formatting for better language modeling.

Profession. To enhance the model's capacity in various professional fields, we include some specific domains in our dataset, covering medical, law, patent, and math data. Some subsets come from open-source data, such as Wanjuan-Patent [21] and MathGLM [74]. We post-process each subset independently to address formatting issues, private information disclosure, and the like.

ClassicalChinese. To improve the model's understanding of traditional Chinese culture and its capability in classical Chinese, we carefully collect classic Chinese ancient books and poetry. These materials are more credible than those found in web texts; therefore, we assign them a larger weight during sampling.

3 Pre-training Details

3.1 Model Architecture

We adapt the architecture of FLM-101B [33] as a backbone with several modifications. FLM-101B follows the standard GPT-style decoder-only transformer architecture [43] with pre-normalization and adds a LayerNorm to the last layer's output. Meanwhile, we apply scalar multipliers to: (1) the output of the word embedding layer and (2) the final output hidden states before the softmax. We leave these multipliers tunable in pre-training to control the numerical flow. For example, the output multiplier may benefit training by modulating the entropy of the vocabulary distribution.

Building on FLM-101B, we further optimize the model structure for Tele-FLM. Specifically, we use RMSNorm [80] for normalization and SwiGLU [50] for the activation function. We roll back to Rotary Positional Embedding (RoPE) [53] without Extrapolatable Position Embedding (xPos) [55], untie the embedding layer from the language modeling head, and disable linear bias in the attention and all MLP modules. A mini version named Tele-FLMµP is used to search hyperparameters. Table 2 details the architecture of both Tele-FLM and Tele-FLMµP.

Table 2: Detailed model architecture. The model configuration of Tele-FLMµP is a reduced version of Tele-FLM with a smaller hidden size.

Model | Layer Num | Attention Heads | Hidden Size | FFN Hidden Size | Vocab Size | Context Length | Params Size (M)
Tele-FLM | 64 | 64 | 8,192 | 21,824 | 80,000 | 4,096 | 52,850
Tele-FLMµP | 64 | 4 | 512 | 1,344 | 80,000 | 4,096 | 283

3.2 Tokenizer

The key to training a text tokenizer is to strike a good trade-off between compression ratio and vocabulary size. English-focused tokenizers such as those of GPT-4 or the previous Llama series often underperform in compressing Chinese text.

Table 3: Tokenizer compression ratio, defined as the ratio of token length to the original UTF-8 text length; smaller values indicate better compression. We report the compression ratios of GPT-4, Llama1/2, Llama3, and Tele-FLM on various domains in our training set, as well as the weighted average.

Tokenizer | Vocab Size | English | Chinese | Classical Chinese | Code | Multilingual | Mathematical | Weighted Avg.
GPT-4 | 100k | 0.221 | 0.420 | 0.478 | 0.267 | 0.303 | 0.508 | 0.291
Llama1/2 | 32k | 0.262 | 0.515 | 0.558 | 0.367 | 0.314 | 0.974 | 0.356
Llama3 | 128k | 0.220 | 0.294 | 0.353 | 0.267 | 0.274 | 0.508 | 0.251
Tele-FLM | 80k | 0.248 | 0.235 | 0.307 | 0.363 | 0.340 | 0.965 | 0.261
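To make the metric in Table 3 concrete, the helper below computes a tokenizer's compression ratio as tokens per UTF-8 byte over a sample of texts. It is a small illustration of our own; `tokenize` stands in for any tokenizer's encode function (e.g., a trained BBPE tokenizer), and the whitespace splitter in the usage line is only a placeholder.

```python
from typing import Callable, Iterable

def compression_ratio(texts: Iterable[str], tokenize: Callable[[str], list]) -> float:
    """Token count divided by UTF-8 byte count, as in Table 3 (lower is better)."""
    total_tokens = 0
    total_bytes = 0
    for text in texts:
        total_tokens += len(tokenize(text))
        total_bytes += len(text.encode("utf-8"))
    return total_tokens / total_bytes

# Placeholder usage with a whitespace "tokenizer"; plug in a real BBPE encoder instead.
sample = ["Tele-FLM is a 52B multilingual foundation model trained on 2T tokens."]
print(round(compression_ratio(sample, lambda s: s.split()), 3))
```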
To guarantee Tele-FLM's text compression ratio on Chinese while maintaining performance in a multilingual setting, we train a tokenizer that aligns closely with the pre-training data distribution. We sample 12 million diverse text samples from our pre-training dataset as the tokenizer's training data, including multilingual texts with a primary focus on Chinese and English, code snippets, classical Chinese literature, and mathematical content. We train the tokenizer with the Byte-level BPE (BBPE) algorithm [65].

Table 3 details the tokenizers of Tele-FLM, GPT-4, and the Llama family. The Tele-FLM tokenizer outperforms GPT-4 and the Llama series in both Chinese and Classical Chinese and is comparable with them in English, code, and multilingual content. In math, our tokenizer aligns with Llama2 while slightly trailing GPT-4. Overall, the Tele-FLM tokenizer showcases a superior compression ratio for Chinese text and satisfactory performance in English. While slightly behind Llama3, Tele-FLM outperforms the other tokenizers on the weighted average compression ratio by a large margin.

3.3 Cluster Hardware

Tele-FLM is trained on a cluster of 112 A800 SXM4 GPU servers, each with 8 NVLink A800 GPUs and 2 TB of RAM. The nodes have heterogeneous CPU architectures: 96 nodes with Intel 8358 (128× 2.60 GHz) CPUs and 16 nodes with AMD 7643 (96× 2.30 GHz) CPUs. All nodes are interconnected via InfiniBand (IB). The training process lasts around two months, including downtime due to unexpected factors. As a comparison of infrastructure, Llama3 [2] is pre-trained on at least 49,152 Nvidia H100 GPUs (in contrast to our 896× A800), and Meta also claims to have the equivalent of 600k H100 GPUs for future computing power (https://www.instagram.com/reel/C2QARHJR1sZ/?hl=en). With this significant gap in total resources, computational efficiency and success rate are critical for organizations with average resources.

3.4 Parallelism

Tele-FLM utilizes 3D parallel training, combining the prevailing methodologies: data parallelism, tensor parallelism, and pipeline parallelism. Data parallelism [63] is a well-established distributed training method, in which the samples in a batch are partitioned and distributed across multiple devices and processed simultaneously. No inter-device communication is involved in the forward and backward computation, while the gradients are aggregated at the end of each step. Tensor parallelism [51] splits specific neural network tensors across multiple devices and computes via inter-device communication. In Tele-FLM training, tensor parallelism is mainly applied to the attention and feed-forward modules. Excessive use of tensor parallelism may escalate GPU communication overhead and reduce training speed. To alleviate this, we integrate pipeline parallelism [39], which partitions the model at the layer level.

3D parallelism combines these approaches, prioritizing the allocation of tensor parallelism groups, which have higher communication overhead, to the same node, thereby maximizing intra-node communication and minimizing inter-node communication. The parallel training setup for Tele-FLM combines tensor parallelism of degree 4, pipeline parallelism of degree 2, and data parallelism of degree 112. Additionally, we partition the inputs to the Transformer's LayerNorm and Dropout layers along the sequence length dimension with sequence parallelism [31], yielding further GPU computational and memory savings.
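As an illustration of this 3D layout, the sketch below decomposes a global GPU rank into tensor-, pipeline-, and data-parallel coordinates for the Tele-FLM configuration (TP=4, PP=2, DP=112 on 896 GPUs). The assumption that tensor-parallel ranks are innermost, so that each TP group of 4 stays inside one 8-GPU node, reflects the intra-node placement described above; the actual rank ordering is handled by the training framework.

```python
# Decompose a global rank into (tensor, pipeline, data) parallel indices.
TP, PP, DP = 4, 2, 112
WORLD_SIZE = TP * PP * DP  # 896 GPUs in total

def parallel_coords(rank: int) -> dict:
    assert 0 <= rank < WORLD_SIZE
    return {
        "tp": rank % TP,               # innermost: neighbors on the same node
        "pp": (rank // TP) % PP,       # layer-level model partition
        "dp": rank // (TP * PP),       # replica index over the batch
        "node": rank // 8,             # 8 GPUs per A800 server
    }

if __name__ == "__main__":
    for r in (0, 3, 4, 8, 895):
        print(r, parallel_coords(r))
```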
Furthermore, we utilize the Distributed Optimizer module from Megatron-LM (https://github.com/NVIDIA/Megatron-LM) [46]. This optimizer further reduces GPU memory consumption by partitioning optimizer states, which have larger memory footprints, across the data-parallel dimension.

3.5 Hyperparameter Search

Effective hyperparameter tuning may accelerate loss reduction and ensure convergence, making it crucial for model training. However, the high cost of training large models often renders exhaustive grid searches impractical. Hence, we employ µP [73] for optimal hyperparameter search. The Tensor Programs theories [72; 36] reveal universal relations in the training dynamics across a series of models as their widths approach infinity. For certain hyperparameter classes, this leads to a parameterized mapping of their optimal values between small and large widths. Generally, under µP transfer, wider models will consistently achieve lower loss than narrower ones when trained on identical data [73]. Consequently, if a narrow model converges, its wider counterparts will always converge.

Based on this approach, we set up a small model, namely Tele-FLMµP, for grid search purposes. As shown in Table 2, this small model's architecture differs from Tele-FLM only in width. With a fixed layer number of 64 and attention head dimension of 128, we reduce the hidden size to 512. This modification results in 4 attention heads and a feed-forward hidden size of 1,344. Due to its smaller size, Tele-FLMµP allows for significantly more experimental runs within fixed time and resource constraints.

We search 7 hyperparameters: the learning rate for vector-like and matrix-like weights, the minimum learning rate at the end of the schedule, the initialization standard deviation for vector-like and matrix-like weights, the scaling factor for the embedding layer (namely Input Mult), and the scaling factor for the output hidden state in the final layer (namely Output Mult). For the definitions of vector/matrix-like weights and the µP transfer formula we apply, please refer to [75] and [73]. We use a truncated normal distribution for model initialization.

Figure 1: Experimental curves of the hyperparameter search based on µP. (a) Loss curves for the grid search. (b) Gradient norm curves for the grid search.

Figure 1 illustrates the loss and gradient norm dynamics of 9 hyperparameter combinations for the grid search, which are selected based on our prior knowledge of model configurations. We choose the hyperparameters represented by the red line for final training after assessing the rate of loss decrease, trend stability, and gradient norm stability. Using µP, we derive the optimal hyperparameter configuration for the final 52B model based on this search result, as detailed in Table 4. A more fine-grained search could be conducted with expanded time and budget.

Table 4: Tele-FLM training hyperparameters.

Searched Hyperparameters:
Learning Rate | 1.5e-4
Matrix Learning Rate | 1.5e-4
Minimum Learning Rate | 1.5e-5
Standard Deviation | 4e-3
Matrix Standard Deviation | 4.242e-3
Input Mult | 1.0
Output Mult | 3.125e-2

Non-Searched Hyperparameters:
LR Schedule Type | cosine
LR Schedule (tokens) | 2.5T
Warmup Steps | 2,000
Clip Grad | 1.0
Weight Decay | 0.0
Batch Size (tokens) | 5,505,024
RoPE Theta | 10,000
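As a rough illustration of the µP transfer idea (our simplification; the exact parameterization used for Tele-FLM follows [73; 75]), the sketch below rescales matrix-like hyperparameters from the proxy width (512) to the target width (8,192) using commonly cited µP-style rules for Adam-like optimizers. The proxy-model values in the example call are placeholders, not the values searched in this report.

```python
import math

BASE_WIDTH = 512      # hidden size of the Tele-FLM_muP proxy
TARGET_WIDTH = 8192   # hidden size of the 52B model

def mup_transfer(base_matrix_lr: float, base_matrix_std: float, base_output_mult: float,
                 base_width: int = BASE_WIDTH, width: int = TARGET_WIDTH) -> dict:
    m = width / base_width  # width multiplier (16x here)
    return {
        # Matrix-like ("hidden") weights: learning rate shrinks with width,
        # init std shrinks with sqrt(width); vector-like weights keep their values.
        "matrix_lr": base_matrix_lr / m,
        "matrix_init_std": base_matrix_std / math.sqrt(m),
        # The output multiplier is scaled down as width grows.
        "output_mult": base_output_mult / m,
    }

# Placeholder proxy values, purely for illustration.
print(mup_transfer(base_matrix_lr=2.4e-3, base_matrix_std=1.7e-2, base_output_mult=0.5))
```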
4 Loss Dynamics and BPB Evaluation

Figure 2: Pre-training curves for Tele-FLM w.r.t. the amount of data in billion tokens. (a) Training loss curve. (b) Validation loss curve. (c) Training gradient norm curve.

We present the curves for training and validation loss and gradient norm on our pre-training data distribution in Figure 2. Figure 2a shows that the training process of Tele-FLM succeeds with a single, stable run without any divergence. This result is predictable given the µP hyperparameter search described above. Figure 2b indicates that the loss curve generalizes well to validation data without saturation or overfitting. Figure 2c presents the gradient norm. We observe that the reduction in language modeling loss translates well into improvements on downstream tasks.

Language modeling is compression [16]. Evaluation metrics related to language perplexity (PPL) are well known to be closely connected to compression ratio. Moreover, these metrics usually exhibit more stable scaling behavior, making them a reliable foundation for downstream task performance (which is usually measured by more complex and nonlinear metrics [48]). For PPL-related evaluation, we use Bits-Per-Byte (BPB) [38; 18] as our metric, which accounts for both the per-token loss and the influence of domains and tokenizers.
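To make the BPB metric concrete, the helper below converts a summed token-level cross-entropy (in nats) into bits per UTF-8 byte. This is a minimal sketch of the standard definition, not the evaluation code used for this report; dividing by bytes rather than tokens is what makes models with different tokenizers directly comparable.

```python
import math

def bits_per_byte(total_nll_nats: float, text: str) -> float:
    """Bits-Per-Byte: summed negative log-likelihood (nats) over the UTF-8 byte count."""
    n_bytes = len(text.encode("utf-8"))
    return total_nll_nats / (n_bytes * math.log(2))

# Toy usage: a 1,000-byte document tokenized into 300 tokens with an average
# per-token loss of 1.6 nats gives a BPB of roughly 0.69.
print(round(bits_per_byte(total_nll_nats=300 * 1.6, text="x" * 1000), 3))
```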
Specifically, on a test corpus in a certain domain, if the total loss is close, a model whose tokenizer achieves a better compression ratio is preferred by the BPB metric.

For the English language, we break down the BPB evaluation into 6 different domains, represented by validation datasets from WebText (text from CommonCrawl and C4, which approximately represent the same broad web source), Github, Wikipedia, Books, ArXiv, and StackExchange, respectively. We compare with different versions of Llama, including Llama-65B, Llama2-70B, Llama3-8B, and Llama3-70B [2], to analyze how well Tele-FLM compresses English data.

Figure 3: BPB curves of Tele-FLM on representative English (en), Chinese (zh), multilingual, and code validation datasets, compared with the Llama series.

Figure 3 illustrates the BPB trends w.r.t. the amount of our pre-training data (in trillion tokens). As training progresses, Tele-FLM surpasses Llama2-70B on WebText, Github, and StackExchange, and outperforms Llama-65B and Llama3-8B on almost all datasets, demonstrating strong foundation abilities in English. Numerical results are presented in Table 5. Regarding the weighted sum of BPB, Tele-FLM outperforms Llama-65B, Llama2-70B, Qwen1.5-72B, and Llama3-8B under both the Tele-FLM and Llama [60] weighting proportions.

Table 5: BPB of Tele-FLM, Llama family models, and Qwen1.5-72B on English datasets. BPB is computed for 6 dataset categories, with weighted sum results based on the Llama [60] and Tele-FLM training data configurations. (In the original table, the best results are boldfaced and the second-best underlined.)

Loss:
Model | WebText | Github | Wikipedia | Book | ArXiv | StackExchange | Weighted Sum (L-Prop.) | Weighted Sum (F-Prop.)
Llama-65B | 1.650 | 0.543 | 1.297 | 1.791 | 1.205 | 1.293 | 1.572 | 1.485
Llama2-70B | 1.588 | 0.471 | 1.198 | 1.695 | 1.103 | 1.220 | 1.506 | 1.418
Llama3-70B | 1.729 | 0.597 | 1.300 | 1.886 | 1.042 | 1.388 | 1.642 | 1.556
Qwen1.5-72B | 1.996 | 0.592 | 1.433 | 2.107 | 1.111 | 1.393 | 1.878 | 1.773
Tele-FLM (52B) | 1.598 | 0.314 | 1.163 | 1.843 | 1.153 | 1.193 | 1.512 | 1.411

BPB:
Model | WebText | Github | Wikipedia | Book | ArXiv | StackExchange | Weighted Sum (L-Prop.) | Weighted Sum (F-Prop.)
Llama-65B | 0.615 | 0.286 | 0.595 | 0.710 | 0.590 | 0.570 | 0.602 | 0.574
Llama2-70B | 0.592 | 0.249 | 0.544 | 0.672 | 0.540 | 0.538 | 0.576 | 0.547
Llama3-70B | 0.542 | 0.229 | 0.513 | 0.633 | 0.479 | 0.497 | 0.528 | 0.502
Qwen1.5-72B | 0.642 | 0.234 | 0.601 | 0.717 | 0.521 | 0.515 | 0.620 | 0.586
Tele-FLM (52B) | 0.562 | 0.164 | 0.570 | 0.700 | 0.567 | 0.531 | 0.550 | 0.516

L-Prop. (Llama [60] proportions): 82% : 4.5% : 4.5% : 4.5% : 2.5% : 2.0%.
F-Prop. (Tele-FLM proportions): 75.17% : 13.48% : 3.56% : 5.26% : 1.46% : 1.07%.

Table 6: BPB of Tele-FLM, Llama family models, and Qwen1.5-72B on Chinese datasets. BPB is computed for 7 dataset categories, with direct average and weighted sum results based on the Tele-FLM training data distribution.

Loss:
Model | WebText | Code | Book | WorldKnowledge | QA | ClassicalChinese | Professional | Direct Average | Weighted Sum
Llama-65B | 1.773 | 1.236 | 2.029 | 1.586 | 2.076 | 2.819 | 1.215 | 1.819 | 1.782
Llama2-70B | 1.419 | 1.019 | 1.542 | 1.189 | 1.681 | 2.233 | 0.896 | 1.426 | 1.414
Llama3-70B | 2.152 | 1.264 | 2.210 | 1.722 | 2.568 | 2.844 | 1.109 | 1.981 | 2.114
Qwen1.5-72B | 2.260 | 1.405 | 2.520 | 1.751 | 2.888 | 2.748 | 0.908 | 2.069 | 2.243
Tele-FLM (52B) | 1.923 | 1.096 | 2.135 | 1.612 | 2.530 | 2.144 | 0.846 | 1.755 | 1.913

BPB:
Model | WebText | Code | Book | WorldKnowledge | QA | ClassicalChinese | Professional | Direct Average | Weighted Sum
Llama-65B | 1.325 | 0.744 | 1.503 | 1.161 | 1.528 | 2.280 | 0.919 | 1.351 | 1.326
Llama2-70B | 1.060 | 0.614 | 1.142 | 0.869 | 1.237 | 1.811 | 0.678 | 1.059 | 1.052
Llama3-70B | 0.913 | 0.498 | 0.943 | 0.752 | 1.063 | 1.458 | 0.485 | 0.873 | 0.897
Qwen1.5-72B | 0.759 | 0.537 | 0.871 | 0.663 | 0.951 | 1.237 | 0.329 | 0.764 | 0.759
Tele-FLM (52B) | 0.643 | 0.478 | 0.741 | 0.619 | 0.831 | 0.949 | 0.290 | 0.650 | 0.646

The weighted sum uses the Tele-FLM training set proportions: 76.60% : 1.91% : 11.61% : 1.44% : 4.50% : 0.07% : 3.87%.
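As a small sanity check on how the weighted-sum columns are formed, the snippet below recomputes the F-Prop. weighted BPB for Tele-FLM from the per-domain values in Table 5 and the Tele-FLM English proportions; it reproduces the reported 0.516.

```python
# Per-domain BPB for Tele-FLM (52B) and the F-Prop. weights from Table 5, in the
# order WebText, Github, Wikipedia, Book, ArXiv, StackExchange.
tele_flm_bpb = [0.562, 0.164, 0.570, 0.700, 0.567, 0.531]
f_prop = [0.7517, 0.1348, 0.0356, 0.0526, 0.0146, 0.0107]  # sums to 1.0

weighted = sum(b * w for b, w in zip(tele_flm_bpb, f_prop))
print(round(weighted, 3))  # -> 0.516
```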
Note that Llama3-8B is trained on more than 15T tokens, and these results may indicate that scaling up the model size is still important, despite the rapid growth of the total amount of training data.

Similarly to English, we compute BPB across 7 domains with the corresponding Chinese validation data, namely WebText, Code, Book, World Knowledge, QA, Classical Chinese, and Professional. Results are visualized in Figure 3 (with the "zh" suffix). Specific scores are provided in Table 6. On all these validation corpora, Tele-FLM demonstrates lower BPB than Qwen1.5-72B and the latest Llama3-70B model. Thus, we conclude that our foundation model achieves strong compression performance for Chinese without sacrificing its English language modeling abilities, and vice versa.

5 Benchmark Evaluations

5.1 English: Open LLM, HumanEval, and BBH

Benchmarks. We evaluate Tele-FLM on three public and widely used English benchmarks: the Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), HumanEval [12], and BIG-Bench Hard [52].

• The Open LLM Leaderboard is hosted on Hugging Face and includes 6 key tasks to measure a model's performance in a variety of areas, such as commonsense inference, knowledge capacity, truthfulness, and math. We report our model's results with the official evaluation tools (Language Model Evaluation Harness [19]). For the baseline models, we take the results directly from the Open LLM Leaderboard.
• HumanEval, introduced by OpenAI, evaluates the code generation ability of language models by measuring the functional correctness of docstring-prompted outputs. We choose the pass@5 metric as a trade-off between representing model capability and evaluation speed.
• BIG-Bench Hard is derived from the BIG-Bench benchmark, a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. BIG-Bench Hard contains 23 challenging tasks specifically chosen to represent areas where language models did not surpass average human-rater performance, according to prior evaluations [56].

Table 7: Performance of Tele-FLM and baselines on English benchmarks.

Model | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (zero-shot) | WinoGrande (5-shot) | GSM8K (5-shot) | HumanEval (zero-shot) | BBH (3-shot)
Llama2-70B | 63.39 | 67.32 | 87.33 | 69.83 | 44.92 | 83.74 | 54.06 | 46.95 | 52.94
Llama2-13B | 50.29 | 59.39 | 82.13 | 55.77 | 37.38 | 76.64 | 22.82 | 28.66 | 39.52
Llama-65B | 56.98 | 63.48 | 86.09 | 63.93 | 43.43 | 82.56 | 37.23 | 33.54 | 45.54
Llama-13B | 46.20 | 56.23 | 80.93 | 47.67 | 39.48 | 76.24 | 7.58 | 23.78 | 37.72
Tele-FLM (52B) | 56.60 | 59.47 | 82.25 | 64.00 | 43.09 | 79.40 | 45.19 | 34.76 | 44.60

Results. Table 7 compares Tele-FLM to the Llama series. With 52B parameters and around 1.3T English pre-training tokens, Tele-FLM matches the overall performance of Llama-65B, which is trained on approximately 1.4T tokens. Regarding the nature of the different subtasks, Tele-FLM shows advantages over Llama-65B on GSM8K [14] and HumanEval, which focus on reasoning capabilities, but performs slightly worse on some tasks that rely more heavily on knowledge. This disadvantage can potentially be mitigated as more pre-training data is consumed. Besides, Tele-FLM achieves more than 90% of the performance of Llama2-70B, which is larger in size and trained on a 2T token corpus.
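Since HumanEval results are reported as pass@5, the sketch below gives the standard unbiased pass@k estimator from the HumanEval paper for reference. The number of generations per problem and the decoding settings used for Tele-FLM are not specified here, so the values in the usage line are purely illustrative.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples, drawn from
    n generations of which c pass the unit tests, is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative numbers: 20 generations per problem, 4 of which pass the tests.
print(round(pass_at_k(n=20, c=4, k=5), 3))
```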
5.2 Chinese: OpenCompass

Benchmarks. To measure the Chinese language and knowledge capabilities of our model, we conduct an evaluation using the OpenCompass toolkit (https://opencompass.org.cn/home). Specifically, we choose the following tasks to evaluate the model's performance in multiple aspects: C-Eval [26] and CMMLU [32] (multi-subject knowledge), C3 [54] (reading comprehension), CHID [82] (Chinese culture and language understanding), and CSL [34] (keyword recognition).

Results. Table 8 shows the evaluation results on Chinese benchmarks. On average, Tele-FLM achieves significantly higher scores than GPT-3.5, is comparable to GPT-4 and DeepSeek-67B [7], and reaches 84% of Qwen1.5-72B's performance [5]. Note that Qwen1.5-72B is larger in size and trained with up to 3T tokens. On CHID and CSL, Tele-FLM shows leading performance among all the models compared. Interestingly, CHID is very specific to Chinese culture, while CSL comes from the scientific domain. This indicates Tele-FLM's potential to both quickly adapt to a specific language and benefit from general knowledge presented in different languages.

Table 8: Performance of Tele-FLM and baselines on Chinese benchmarks. The results of Qwen1.5-72B and Tele-FLM are computed locally with the OpenCompass toolkit, while the other results are taken from the OpenCompass leaderboard.

Model | Average | C-Eval | CMMLU | C3 | CHID | CSL
GPT-4 | 76.64 | 69.90 | 71.00 | 95.10 | 82.20 | 65.00
GPT-3.5 | 61.86 | 52.50 | 53.90 | 85.60 | 60.40 | 56.90
Qwen1.5-72B | 80.45 | 83.72 | 83.09 | 81.86 | 91.09 | 62.50
Qwen-72B | 83.00 | 83.30 | 83.60 | 95.80 | 91.10 | 61.20
DeepSeek-67B | 73.46 | 66.90 | 70.40 | 77.80 | 89.10 | 63.10
Tele-FLM (52B) | 71.13 | 65.48 | 66.98 | 66.25 | 92.57 | 64.38

5.3 Evolution of Performance during Training

We automatically track the evaluation scores on sampled validation data for 8 of the evaluation benchmarks, as depicted in Figure 4. We observe that for all the tasks, the evaluation score improves as the pre-training and validation loss/BPB decreases. For knowledge-oriented English benchmarks, including ARC [13], HellaSwag [78], Winogrande [3], and MMLU [22], the performance increases smoothly with more data, which is intuitive given the task nature. For reasoning-oriented tasks including GSM8K and BBH, we observe a sharper increase, which indicates that these tasks have more complex metrics and could possibly demonstrate emergent abilities. CMMLU is a knowledge-oriented Chinese benchmark. The sharper increase in CMMLU indicates that our Chinese training data is far from saturating, and further improvement can be expected as training continues.

Figure 4: Evolution of performance evaluated by the Language Model Evaluation Harness during training (ARC, HellaSwag, GSM8K, BBH, TruthfulQA, Winogrande, MMLU, and CMMLU vs. trained tokens in trillions). Note that we sampled 20% of the examples for HellaSwag and 30% for MMLU considering the time cost.

6 Lessons Learned

Lesson on Pre-training Data. We have the following observations from Tele-FLM's pre-training process.
First, as is widely known, both the quality and quantity of data are critical for pre-training; however, when a trade-off between quality and quantity is necessary, data quality should be prioritized. For our project, an English-Chinese data ratio of 2:1 works better than 1:1, likely because the average quality of the Chinese web data we have is relatively low. Second, changing the data distribution midway sometimes leads to changes in the gradient norm curves and potential divergence, while maintaining a fixed distribution is more stable. Another advantage of maintaining a fixed data distribution is that it allows for safer early stopping of the µP experiments. In conclusion, data processing should be as complete as possible before pre-training starts.

Lesson on Hyperparameter Search. We observe that µP-based methods [73; 75] are effective and efficient in searching for the best hyperparameters and predicting the behaviors of the final large models. Specifically, prior experience and open-sourced learning rates are good starting points for hyperparameter search. Nevertheless, the initialization standard deviation and output multipliers have a more significant influence than is commonly recognized.

Lesson on Loss Dynamics. First, the slope of the loss curve typically flattens after 500B tokens. Therefore, training should be restarted promptly if early loss values are unsatisfactory. Second, random loss spikes are common and acceptable if the gradient norm curve looks normal. We observe that our model recovers from all the spikes in the pre-training process, unlike early open-sourced endeavors [81; 4; 79]. We speculate that modern Llama-like structures, especially those with no-bias designs and truncated normal initialization, combined with effective hyperparameter search, provide decent robustness against loss spikes. Another type of spike corresponds to consistent loss increases, which can be identified early with µP and avoided before training begins.

Lesson on Gradient Norm. The early gradient norm curves are not strong indicators of training stability. In hyperparameter search, we observe divergence following various gradient curve patterns, yet with higher divergence probabilities associated with continuously increasing gradient trends.

7 Related Work

The idea of large foundation models originates from unsupervised pre-training with Transformer-based [64] architectures. Well-known examples of early foundation models include BERT [17], GPT-2 [43], and T5 [45]. GPT-3 [10] increases the model size to 175B and observes decent few-shot and zero-shot reasoning capabilities, which encourages a series of efforts to scale up foundation models [81; 47; 4; 79]. Research on scaling laws [29; 23; 24; 75] sheds light on the predictable trends of model performance as the parameter count increases. On the other hand, other works explore emergent abilities [68; 67; 48] and their relationships to evaluation metrics and task nature. The Llama series [60; 61; 2] is well known for its contributions to open-sourced large language models and is widely regarded as a strong baseline for foundation model evaluation. Falcon [42] explores the data processing of publicly available pre-training corpora. Mistral [27] and Gemma [58] release 7B-scale models that are trained with more data and incorporate advanced designs.
For the Chinese community, Qwen [5], Baichuan [71], Yi [76], and DeepSeek [7] represent efforts in multilingual foundation model pre-training and open-sourcing. FLM-101B [33] studies methodologies for training large foundation models under limited budgets.

InstructGPT [41] establishes the paradigm of aligning large foundation models with human preferences. Widely used approaches include supervised fine-tuning (SFT) [66; 70] and Reinforcement Learning from Human Feedback (RLHF) [49], among others [44]. Alignment techniques turn foundation models into dialogue agents, which form the core of AI assistants in commercial use. Closed-source dialogue agents are represented by GPT-4 [40], Claude [6], Grok [1], and Gemini [57]. Open-sourced chat models include Zephyr [62] and ChatGLM [25], among the large number of human-aligned versions of the open foundation models mentioned above.

8 Conclusions and Future Work

In this report, we introduce Tele-FLM, an open multilingual foundation model. With 52B parameters and 2T training tokens, Tele-FLM matches the performance of larger models trained with more data, in both multilingual language modeling capabilities and benchmark evaluations. The pre-training procedure of Tele-FLM features a high success rate and a low carbon footprint. We open-source the model weights as well as technical details and training dynamics. We hope this work will catalyze the growth of open-sourced LLM communities and reduce the trial-and-error cycles required to train LLMs with more than 50B parameters. Note that although efforts are made to filter out harmful content in the training data, such outputs could still potentially be elicited from the released model; they do not represent the opinions of the authors or entities involved. For future work, we plan to continue enhancing the capabilities of Tele-FLM to facilitate broader applications, as well as to develop efficient training techniques to explore the largely unexplored space of larger-scale dense models.

Acknowledgments

This work is supported by the National Science and Technology Major Project (No. 2022ZD0116300) and the National Science Foundation of China (No. 62106249). We would like to thank Boya Wu, Li Du, Quanyue Ma, Hanyu Zhao, Shiyu Wu, and Kaipeng Jia for their help on data; Hailong Qian, Jinglong Li, Taojia Liu, Junjie Wang, Yuanlin Cai, Jiahao Guo, Quan Zhao, Xuwei Yang, Hanxiao Qu, Yan Tian, and Kailong Xie for their help on computational resources; and all other colleagues for their strong support of this project.
Besides, in both English\nand Chinese foundation model evaluation, it is comparable to strong\nopen-sourced models that involve larger pre-training FLOPs, such as Llama2-70B\nand DeepSeek-67B. In addition to the model weights, we share the core designs,\nengineering practices, and training details, which we expect to benefit both\nthe academic and industrial communities.", "authors": "Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, Xinzhang Liu, Zihan Wang, Yu Zhao, Xin Wang, Yuyao Huang, Shuangyong Song, Yongxiang Li, Zheng Zhang, Bo Zhao, Aixin Sun, Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "main_content": "Our training dataset comprises a variety of domains, as detailed in Table 1. We build a custom pipeline on spark cluster for massive data processing and apply custom functions to each subset. The pipeline includes text extraction from HTML/WARC, cleaning and paragraph-level deduplication with heuristic rules, model-based quality filtering and document-level deduplication with MinHash [8] algorithm. We obtain 2T tokens after all the procedures, and the distribution ratio between English and Chinese data is roughly 2:1. We incorporate more English data because of its higher quality, especially regarding the WebText domain. Additionally, in line with the methodology of GPT-4, we collected some instruct data and incorporated it into our pre-training data after removing the test sets of common datasets using the strict n-gram-based method. We deliberately avoid \u201ctraining on the test set\u201d or any other benchmark-oriented trick. WebText. CommonCrawl1 is often considered to be a repository containing diverse human experience and rich knowledge (especially long-tail knowledge). However, the high-quality sources in CommonCrawl are primarily concentrated in the English segment, with the Chinese content exhibiting relatively lower information density and quality. We use the latest CommonCrawl dumps from RedPajama [15] and incorporate WudaoCorpora [77] and similar Chinese-specific datasets together to form a large web-text dataset. We apply custom heuristic rules and a FastText [28] classifier to 1https://commoncrawl.org/. 2 Tele-FLM Technical Report 3 PRE-TRAINING DETAILS Table 1: Pre-training data. For each subset of our 2T pre-training tokens, we detail the language, the sampling proportion, the number of epochs completed during training, and the disk size. Domain Language Sampling Prop. Epochs Disk Size WebText en, zh 75.21% 1.0 5.9 TB Code code, zh 9.81% 1.0 528.1 GB Book en, zh 7.17% 0.8 647.6 GB WorldKnowledge multi., en, zh 2.87% 2.5 67.5 GB QA en, zh 2.12% 1.0 159.2 GB AcademicPaper en 0.99% 1.0 54.4 GB Profession-Law zh 1.04% 1.0 84.2 GB Profession-Math math 0.62% 2.0 6.1 GB Profession-Patent zh 0.14% 1.0 10.4 GB Profession-Medical zh 0.02% 1.0 1.2 GB ClassicalChinese zh 0.02% 2.5 0.5 GB filter out low-quality content, cross-deduplicate for each language, and up-sample/down-sample each subset with regard to data quality. The ratio of English to Chinese is approximately 2:1. Code. We incorporate multiple Github-like code datasets and post-process it to filter-out low quality and duplicated content. Simultaneously, we carefully assembled and curated a well-formed markdown dataset comprising Chinese technical articles. Book. 
We collect books from various sources in both English and Chinese, such as Redpajama [15] and Gutenberg2, among others. We develop a series of cleaning steps to remove redundant formatting, garbled text, formula errors, duplicated paragraphs, and other unwanted content from the books. After interleaved deduplication on document level, we finally obtain a high-quality book dataset. The ratio of English to Chinese is nearly 1:1. WorldKnowledge. To enrich the model\u2019s knowledge base and common sense, we add Wikipedia dumps3 from 2024 period to our training set, covering 22 languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, ja, nl, pl, pt, ro, ru, sl, sr, sv, uk, zh. We first process these dumps via Online Language Modelling Dataset Pipeline [59] to clean up format; then a meticulous multi-lingual cleaning function is applied to remove reference and subsequent content, which tend to be irrelevant to the main text. QA. We use StackExchange dataset provided by RedPajama-Data [15]. Furthermore, similar Chinese datasets are collected and incorporated into the training after filtering out those QA pairs with low information content. The ratio of English to Chinese in this subset is roughly 1:2. AcademicPaper. We use arxiv dataset collected and processed by RedPajama-Data. This dataset is processed following a Llama-like procedure, which mainly focuses on clearing useless or redundant formats for better language modeling. Profession. To enhance the model\u2019s capacity in various professional fields, we decide to include some specific domains in our dataset, including medical, law, patent, and math. Some subsets are from open-source data, such as Wanjuan-Patent [21] and MathGLM [74]. We post-process each subset independently to address formatting issues, private information disclosure, et al.. ClassicalChinese. In order to improve the model\u2019s understanding of traditional Chinese culture and its capability in classical Chinese, we carefully collect classic Chinese ancient books and poetry. These materials are more credible than those found in web texts; therefore, we assign them a larger weight during sampling. 3 Pre-training Details 3.1 Model Architecture We adapt the architecture of FLM-101B [33] as a backbone with several modifications. FLM-101B follows the standard GPT-style decoder-only transformer architecture [43], with pre-normalization 2https://www.gutenberg.org/. 3https://dumps.wikimedia.org/. 3 Tele-FLM Technical Report 3 PRE-TRAINING DETAILS Table 2: Detailed model architecture. The model configuration of Tele-FLM\u00b5P is a reduced version of Tele-FLM with a smaller hidden size. Models Layer Num Attention Heads Hidden Size FFN Hidden Size Vocab Size Context Length Params Size (M) Tele-FLM 64 64 8,192 21,824 80,000 4,096 52,850 Tele-FLM\u00b5P 64 4 512 1,344 80,000 4,096 283 Table 3: Tokenizer compression ratio. Tokenizer Compression Ratio is defined as the ratio of token length to the original UTF-8 text length. Smaller values indicate better compression. We report the compression ratios of GPT-4, Llama1/2, Llama3, and Tele-FLM on various domains in our training set, as well as the weighted average. Tokenizer Vocab Size Compression Rate English Chinese Classical Chinese Code Multilingual Mathematical Weighted Avg. 
GPT-4 100k 0.221 0.420 0.478 0.267 0.303 0.508 0.291 Llama1/2 32k 0.262 0.515 0.558 0.367 0.314 0.974 0.356 Llama3 128k 0.220 0.294 0.353 0.267 0.274 0.508 0.251 Tele-FLM 80k 0.248 0.235 0.307 0.363 0.340 0.965 0.261 and adds a LayerNorm to the last layer\u2019s output. Meanwhile, we apply scalar multipliers to: (1) the output of the word embedding layer and (2) the final output hidden states before softmax. We leave these multipliers tunable in pre-training to control the numerical flow. For example, the output multiplier may benefit training by modulating the entropy of the vocabulary distribution. Building on FLM-101B, we further optimize the model structure for Tele-FLM. Specifically, We use RMSNorm [80] for normalization and SwiGLU [50] for the activation function. We roll back to use Rotary Positional Embedding (RoPE) [53] without Extrapolatable Position Embedding (xPos) [55], untie the embedding layer with language modeling head, and disable linear bias in the attention and all MLP modules. One mini version named Tele-FLM\u00b5P is used to search hyper-parameters here. Table 2 details the architecture of both Tele-FLM and Tele-FLM\u00b5P. 3.2 Tokenizer The key to training a text tokenizer is to make a better trade-off between compression ratio and vocabulary size. English-focused tokenizers like GPT-4 or previous Llama series often underperform in compressing Chinese text. In order to guarantee Tele-FLM\u2019s text compression ratio within Chinese while maintaining performance under multilingual setting, we train a tokenizer that aligns closely with the pre-training data distribution. We sample 12 million diverse text samples from our pretraining dataset as the tokenizer\u2019s training dataset, including multilingual texts with a primary focus on Chinese and English, code snippets, classical Chinese literature, and mathematical content. We train the tokenizer with Byte-level BPE (BBPE) algorithm [65]. Table 3 details the tokenizers of Tele-FLM, GPT-4, and the Llama family. The tokenizer of Tele-FLM outperforms GPT-4 and Llama series in both Chinese and Classical Chinese and is comparable with their performances in English, code, and multilingual content. In math, our tokenizer aligns with Llama2 while slightly trailing GPT-4. Overall, Tele-FLM tokenizer showcases a superior compression ratio for Chinese text and satisfactory performance in English. While slightly behind Llama3, Tele-FLM outperforms other approaches on average compression ratio by a large margin. 3.3 Cluster Hardware Tele-FLM is trained on a cluster of 112 A800 SXM4 GPU servers, each with 8 NVLink A800 GPUs and 2TB of RAM. The nodes have heterogeneous CPU architectures: 96 nodes with Intel 8358 (128\u00d7 2.60GHz) CPUs and 16 nodes with AMD 7643 (96\u00d7 2.30GHz) CPUs. All nodes are interconnected via InfiniBand (IB). The training process lasts around two months, including downtime due to unexpected factors. As a comparison of infrastructures, Llama3 [2] is pre-trained on at least 49,152 Nvidia H100 GPUs (in contrast to our 896\u00d7 A800). Meta also claims to have 4 Tele-FLM Technical Report 3 PRE-TRAINING DETAILS the equivalent of 600k H100 GPUs for future computing power4. With this significant gap in total resources, computational efficiency and success rate are critical for average entities. 3.4 Parallelism Tele-FLM utilizes 3D parallel training, combining the prevailing methodologies: data parallelism, tensor parallelism, and pipeline parallelism. 
Data parallelism [63] is a well-established distributed training method, in which the samples in a batch are partitioned and distributed across multiple devices and processed simultaneously. No inter-device communication is involved in the forward and backward computation, while the gradient is aggregated at the end of each step. Tensor parallelism [51] splits specific neural network tensors across multiple devices and computes via inter-device communication. In Tele-FLM training, tensor parallelism is mainly applied to the attention and feed-forward modules. Excessive use of tensor parallelism may escalate GPU communication overheads and reduce the training speed. To alleviate this, we integrate pipeline parallelism [39] that partitions the model at the layer level. 3D parallelism incorporates these parallel approaches, prioritizing allocation of tensor parallelism groups with higher communication overheads to the same node, thereby maximizing intra-node communication and minimizing inter-node communication. The parallel training setup for Tele-FLM is a mixture of 4 tensor parallel, 2 pipeline parallel, and 112 data parallel. Additionally, we partition inputs to the Transformer\u2019s LayerNorm and Dropout layers along the sequence length dimension with sequence parallelism [31], yielding further GPU computational and memory savings. Furthermore, we utilize Distributed Optimizer module from Megetron-LM5 [46] with optimization. This optimizer further reduces GPU memory consumption by partitioning optimizer states with larger memory footprints across the data parallel dimension. 3.5 Hyperparameter Search Effective hyperparameter tuning may accelerate the loss reduction and ensure convergence, making it crucial for model training. However, the high cost of training large models often renders exhaustive grid searches impractical. Hence, we employ \u00b5P [73] for optimal parameter search. The Tensor Programs theories [72; 36] reveal universal relations in the training dynamics across a series of models, with their widths approaching infinity. For certain hyperparameter classes, this leads to a parameterized mapping for their optimal values between small and large widths. Generally, under \u00b5P transfer, wider models will consistently achieve lower loss than narrower ones when trained on identical data [73]. Consequently, if a narrow model converges, its wider counterparts will always converge. Based on this approach, we set a small model, namely Tele-FLM\u00b5P, for grid search purpose. As demonstrated in Table 2, this small model\u2019s architecture is different from Tele-FLM only in width. With a fixed layer number of 64 and attention head dimension of 128, we reduce the hidden size to 512. This modification results in 4 attention heads and a feed-forward hidden size of 1344. Due to its smaller size, Tele-FLM\u00b5P allows for significantly more experimental runs within fixed time and resource constraints. We search 7 hyperparameters: Learning Rate for vector-like and matrix-like weights, the Minimum Learning Rate at the end of the schedule, the initialization Standard Deviation for vector-like and matrix-like weights, the scaling factor for the embedding layer (namely Input Mult), and the scaling factor for the output hidden state in the final layer (namely Output Mult). For the definitions of vector/matrix-like weights and the \u00b5P transferring formula we apply, please refer to [75] and [73]. We use truncated normal distribution for model initialization. 
Figure 1 illustrates the loss and gradient norm dynamics of 9 hyperparameter combinations for the grid search, which are selected based on our prior knowledge of model configurations. We choose 4https://www.instagram.com/reel/C2QARHJR1sZ/?hl=en. 5https://github.com/NVIDIA/Megatron-LM. 5 Tele-FLM Technical Report 4 LOSS DYNAMICS AND BPB EVALUATION 0 10000 20000 30000 40000 50000 Steps 2.60 2.65 2.70 2.75 2.80 2.85 2.90 2.95 3.00 Training Loss (a) Loss curves for grid search. 0 10000 20000 30000 40000 50000 Steps 0 2 4 6 8 10 Gradient Norm (b) Gradient norm curves for grid search. Figure 1: Experimental curves of hyperparameter search based on \u00b5P. Table 4: Tele-FLM Training Hyperparameters. Searched Hyperparameters Non-Searched Hyperparameters Learning Rate 1.5e-4 LR Schedule Type cosine Matrix Learning Rate 1.5e-4 LR Schedule (tokens) 2.5T Minimum Learning Rate 1.5e-5 Warmup Step 2,000 Standard Deviation 4e-3 Clip Grad 1.0 Matrix Standard Deviation 4.242e-3 Weight Decay 0.0 Input Mult 1.0 Batch Size (tokens) 5,505,024 Output Mult 3.125e-2 RoPE Theta 10,000 the hyperparameters represented by the red line for final training after assessing the rate of loss decrease, trend stability, and gradient norm stability. Using \u00b5P, we derive the optimal hyperparameter configuration for the final 52B model based on this searched result, which is detailed in Table 4. A more fine-grained search can be conducted with expanded time and budgets. 4 Loss Dynamics and BPB Evaluation 0 250 500 750 1000 1250 1500 1750 2000 Trained T okens (Billions) 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 Training Loss (a) Training loss curve. 0 250 500 750 1000 1250 1500 1750 2000 Trained T okens (Billions) 1.5 1.6 1.7 1.8 1.9 2.0 2.1 2.2 Validation Loss (b) Validation loss curve. 0 250 500 750 1000 1250 1500 1750 2000 Trained T okens (Billions) 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Gradient Norm (c) Training gradient norm curve. Figure 2: Pre-training curves for Tele-FLM w.r.t. amount of data in billion tokens. We present the curves for training and validation loss and gradient norm on our pre-training data distribution in Figure 2. Figure 2a shows that the training process of Tele-FLM succeeds with a single, stable run without any divergence. This result is predictable with our \u00b5P hyperparameter search mentioned above. Figure 2b indicates that the loss curve generalizes well to validation data without saturation or overfitting. Figure 2c presents the gradient norm. We observe that the reduction in language modeling loss translates well into improvements on downstream tasks. Language modeling is compression [16]. Evaluation metrics related to language perplexity (PPL) are well-known to be closely connected to compression ratio. Moreover, these metrics usually exhibit more stable scaling behavior, making them an authentic foundation of downstream task performance (which is usually measured by more complex and nonlinear metrics [48]). 
4 Loss Dynamics and BPB Evaluation
We present the curves for training and validation loss and gradient norm on our pre-training data distribution in Figure 2. Figure 2a shows that the training process of Tele-FLM succeeds with a single, stable run without any divergence. This result is predictable with our µP hyperparameter search mentioned above. Figure 2b indicates that the loss curve generalizes well to validation data without saturation or overfitting. Figure 2c presents the gradient norm. We observe that the reduction in language modeling loss translates well into improvements on downstream tasks.
Figure 2: Pre-training curves for Tele-FLM w.r.t. amount of data in billion tokens. (a) Training loss curve. (b) Validation loss curve. (c) Training gradient norm curve.
Language modeling is compression [16]. Evaluation metrics related to language perplexity (PPL) are well-known to be closely connected to compression ratio. Moreover, these metrics usually exhibit more stable scaling behavior, making them an authentic foundation of downstream task performance (which is usually measured by more complex and nonlinear metrics [48]). For PPL-related evaluation, we use Bits-Per-Byte (BPB) [38; 18] as our metric, which considers both per-token loss and the influence of domains and tokenizers. Specifically, on a test corpus in a certain domain, if the total loss is close, a model that tokenizes with a better compression ratio is preferred by the BPB metric.
For the English language, we break down the BPB evaluation into 6 different domains, represented by validation datasets from WebText6, Github, Wikipedia, Books, ArXiv, and StackExchange, respectively. We compare with different versions of Llama, including Llama-65B, Llama2-70B, Llama3-8B, and Llama3-70B [2], to analyze how well Tele-FLM fits to compress English data.
Figure 3: BPB curves of Tele-FLM on representative English (en), Chinese (zh), multi-language, and code validation datasets (WebText, AcademicPaper, Book, StackExchange, Wikipedia, Github, and the Chinese WebText, Book, WorldKnowledge, QA, ClassicalChinese, and Professional sets), compared with the Llama series.
6We use text from CommonCrawl and C4, which approximately represent the same source (broad web data).
Table 5: BPB of Tele-FLM, Llama family models, and Qwen1.5-72B on English datasets. BPB is computed for 6 dataset categories, with weighted sum results based on Llama [60] and Tele-FLM training data configurations. The best results are in boldface and second-best underlined.
Loss
Model           WebText  Github  Wikipedia  Book   ArXiv  StackExchange  W.Sum (L-Prop.1)  W.Sum (F-Prop.2)
Llama-65B       1.650    0.543   1.297      1.791  1.205  1.293          1.572             1.485
Llama2-70B      1.588    0.471   1.198      1.695  1.103  1.220          1.506             1.418
Llama3-70B      1.729    0.597   1.300      1.886  1.042  1.388          1.642             1.556
Qwen1.5-72B     1.996    0.592   1.433      2.107  1.111  1.393          1.878             1.773
Tele-FLM (52B)  1.598    0.314   1.163      1.843  1.153  1.193          1.512             1.411
BPB
Model           WebText  Github  Wikipedia  Book   ArXiv  StackExchange  W.Sum (L-Prop.1)  W.Sum (F-Prop.2)
Llama-65B       0.615    0.286   0.595      0.710  0.590  0.570          0.602             0.574
Llama2-70B      0.592    0.249   0.544      0.672  0.540  0.538          0.576             0.547
Llama3-70B      0.542    0.229   0.513      0.633  0.479  0.497          0.528             0.502
Qwen1.5-72B     0.642    0.234   0.601      0.717  0.521  0.515          0.620             0.586
Tele-FLM (52B)  0.562    0.164   0.570      0.700  0.567  0.531          0.550             0.516
1 L-Prop. (Llama [60] Proportion): 82% : 4.5% : 4.5% : 4.5% : 2.5% : 2.0%.
2 F-Prop. (Tele-FLM Proportion): 75.17% : 13.48% : 3.56% : 5.26% : 1.46% : 1.07%.
Table 6: BPB of Tele-FLM, Llama family models and Qwen1.5-72B, on Chinese datasets. BPB is computed for 7 dataset categories, with direct average and weighted sum results based on Tele-FLM training data distributions.
Loss
Model           WebText  Code   Book   WorldKnowledge  QA     ClassicalChinese  Professional  DirectAvg  WeightedSum1
Llama-65B       1.773    1.236  2.029  1.586           2.076  2.819             1.215         1.819      1.782
Llama2-70B      1.419    1.019  1.542  1.189           1.681  2.233             0.896         1.426      1.414
Llama3-70B      2.152    1.264  2.210  1.722           2.568  2.844             1.109         1.981      2.114
Qwen1.5-72B     2.260    1.405  2.520  1.751           2.888  2.748             0.908         2.069      2.243
Tele-FLM (52B)  1.923    1.096  2.135  1.612           2.530  2.144             0.846         1.755      1.913
BPB
Model           WebText  Code   Book   WorldKnowledge  QA     ClassicalChinese  Professional  DirectAvg  WeightedSum1
Llama-65B       1.325    0.744  1.503  1.161           1.528  2.280             0.919         1.351      1.326
Llama2-70B      1.060    0.614  1.142  0.869           1.237  1.811             0.678         1.059      1.052
Llama3-70B      0.913    0.498  0.943  0.752           1.063  1.458             0.485         0.873      0.897
Qwen1.5-72B     0.759    0.537  0.871  0.663           0.951  1.237             0.329         0.764      0.759
Tele-FLM (52B)  0.643    0.478  0.741  0.619           0.831  0.949             0.290         0.650      0.646
1 Tele-FLM training set Proportion: 76.60% : 1.91% : 11.61% : 1.44% : 4.50% : 0.07% : 3.87%.
Figure 3 illustrates the BPB trends w.r.t. the amount of our pre-training data (in trillion tokens). As training progresses, Tele-FLM surpasses Llama2-70B on WebText, Github, and StackExchange, outperforming Llama-65B and Llama3-8B on almost all datasets, demonstrating strong foundation abilities in English. Numerical results are presented in Table 5. Regarding the weighted sum of BPB, Tele-FLM outperforms Llama-65B, Llama2-70B, Qwen1.5-72B, and Llama3-8B on both Tele-FLM and Llama [60] weighting proportions. Note that Llama3-8B is trained on more than 15T tokens, and these results may indicate that scaling up the model size is still important, despite the rapid growth of the total amount of training data. Similarly to English, we compute BPB across 7 domains with the corresponding Chinese validation data, namely WebText, Code, Book, World Knowledge, QA, Classical Chinese, and Professional. Results are visualized in Figure 3 (with "zh" suffix). Specific scores are provided in Table 6. On all these validation corpora, Tele-FLM demonstrates lower BPB than Qwen1.5-72B and the latest Llama3-70B model. Thus, we conclude that our foundation model achieves strong compression performance for Chinese without sacrificing its English language modeling abilities, and vice versa.
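The BPB numbers above follow directly from the summed token-level loss and the byte length of the evaluated text, and the weighted-sum columns are consistent with a simple proportion-weighted average of per-domain BPB. The sketch below illustrates both computations; the evaluation corpora and tokenizers themselves are those described in the text and are not reproduced here.

```python
# BPB from total negative log-likelihood (assumed to be in nats) and byte
# count, plus the proportion-weighted average used for the "Weighted Sum"
# columns. Illustrative sketch, not the authors' evaluation code.
import math

def bits_per_byte(total_nll_nats: float, n_bytes: int) -> float:
    """Convert summed NLL (nats) to bits and normalize by the byte length."""
    return total_nll_nats / (math.log(2) * n_bytes)

def weighted_bpb(per_domain_bpb: dict, proportions: dict) -> float:
    """Proportion-weighted average of per-domain BPB (weights sum to 1)."""
    return sum(per_domain_bpb[d] * proportions[d] for d in per_domain_bpb)

# Example: Tele-FLM row of Table 5 with the F-Prop. weights.
tele_flm_en = {"WebText": 0.562, "Github": 0.164, "Wikipedia": 0.570,
               "Book": 0.700, "ArXiv": 0.567, "StackExchange": 0.531}
f_prop = {"WebText": 0.7517, "Github": 0.1348, "Wikipedia": 0.0356,
          "Book": 0.0526, "ArXiv": 0.0146, "StackExchange": 0.0107}
print(round(weighted_bpb(tele_flm_en, f_prop), 3))  # 0.516, consistent with Table 5
```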
5 Benchmark Evaluations
5.1 English: Open LLM, HumanEval, and BBH
Benchmarks. We evaluate Tele-FLM on three public and widely-used English benchmarks: Open LLM Leaderboard7, HumanEval [12], and BIG-Bench Hard [52].
7https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
• Open LLM Leaderboard is hosted on Huggingface and includes 6 key tasks that measure a model's performance on a variety of areas, such as commonsense inference, knowledge capacity, truthfulness, and maths. We report our model's results with the official evaluation tools (Language Model Evaluation Harness [19]). For the baseline models, we pick the results directly from the Open LLM Leaderboard.
• HumanEval, introduced by OpenAI, evaluates the code generation ability of language models by measuring the functional correctness of docstring-prompted output. We choose the pass@5 metric as a trade-off between representing model capability and the evaluation speed.
• Big-Bench Hard is derived from the BIG-Bench benchmark, a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Big-Bench Hard, containing 23 challenging tasks, is specifically chosen to represent areas where language models did not surpass average human-rater performance in prior evaluations [56].
Table 7: Performance of Tele-FLM and baselines on English benchmarks.
Model           Average  ARC (25-shot)  HellaSwag (10-shot)  MMLU (5-shot)  TruthfulQA (zero-shot)  WinoGrande (5-shot)  GSM8K (5-shot)  HumanEval (zero-shot)  BBH (3-shot)
Llama2-70B      63.39    67.32          87.33                69.83          44.92                   83.74                54.06           46.95                  52.94
Llama2-13B      50.29    59.39          82.13                55.77          37.38                   76.64                22.82           28.66                  39.52
Llama-65B       56.98    63.48          86.09                63.93          43.43                   82.56                37.23           33.54                  45.54
Llama-13B       46.20    56.23          80.93                47.67          39.48                   76.24                7.58            23.78                  37.72
Tele-FLM (52B)  56.60    59.47          82.25                64.00          43.09                   79.40                45.19           34.76                  44.60
Results. Table 7 compares Tele-FLM to the Llama series. With 52B parameters and around 1.3T English pre-training tokens, Tele-FLM matches the overall performance of Llama-65B, which is trained on approximately 1.4T tokens. Regarding the nature of different subtasks, Tele-FLM shows advantages over Llama-65B on GSM8K [14] and HumanEval, which focus on reasoning capabilities, but performs slightly worse on some tasks that rely more heavily on knowledge. This disadvantage can potentially be mitigated with more pre-training data consumed. Besides, Tele-FLM achieves more than 90% of the performance of Llama2-70B, which is larger in size and trained on a 2T token corpus.
5.2 Chinese: OpenCompass
Benchmarks. To measure the Chinese language and knowledge capabilities of our model, we conduct an evaluation using the OpenCompass8 toolkit. Specifically, we choose the following tasks to evaluate the model's performance in multiple aspects: C-Eval [26] and CMMLU [32] (multi-subject knowledge), C3 [54] (reading comprehension), CHID [82] (Chinese culture and language understanding), and CSL [34] (keyword recognition).
Results. Table 8 shows evaluation results on Chinese benchmarks. On average, Tele-FLM achieves significantly higher scores than GPT-3.5, scores comparable to GPT-4 and DeepSeek-67B [7], and reaches 84% of Qwen1.5-72B's performance [5]. Note that Qwen1.5-72B is larger in size and trained with up to 3T tokens. On CHID and CSL, Tele-FLM shows leading performance among all the models compared. Interestingly, CHID is very specific to Chinese culture, while CSL comes from the scientific domain. This indicates Tele-FLM's potential to both quickly adapt to a specific language and benefit from general knowledge presented in different languages.
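Section 5.1 reports HumanEval pass@5. A common way to compute pass@k is the unbiased estimator of Chen et al. (2021); whether exactly this estimator or a plain empirical success rate was used here is not stated, so the snippet below is only a reference sketch.

```python
# Unbiased pass@k estimator: for a problem with n sampled completions of
# which c pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 20 sampled completions per problem, 6 of which pass
print(pass_at_k(n=20, c=6, k=5))
```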
5.3 Evolution of Performance during Training
We automatically track the evaluation scores on sampled validation data for 8 of the evaluation benchmarks, as depicted in Figure 4. We observe that for all the tasks, the evaluation score improves as the pre-training and validation loss/BPB decrease. For knowledge-oriented English benchmarks, including ARC [13], HellaSwag [78], Winogrande [3], and MMLU [22], the performance increases smoothly with more data, which is intuitive regarding the task nature. For reasoning-oriented tasks including GSM8K and BBH, we observe a sharper increase, which indicates that these tasks have more complex metrics and could possibly demonstrate emergent abilities. CMMLU is a knowledge-oriented Chinese benchmark. The sharper increase in CMMLU indicates that our Chinese training data is far from saturating, and further improvement can be expected with the ongoing training process.
Figure 4: Evolution of performance evaluated by the Language Model Evaluation Harness during training, for ARC, HellaSwag, GSM8K, BBH, TruthfulQA, Winogrande, MMLU, and CMMLU. Note that we sampled 20% of the examples for HellaSwag and 30% for MMLU considering the time cost.
8https://opencompass.org.cn/home.
Table 8: Performance of Tele-FLM and baselines on Chinese benchmarks. The results of Qwen1.5-72B and our Tele-FLM are locally computed with the OpenCompass toolkit, while other results are picked from the OpenCompass leaderboard.
Model           Average  C-Eval  CMMLU  C3     CHID   CSL
GPT-4           76.64    69.90   71.00  95.10  82.20  65.00
GPT-3.5         61.86    52.50   53.90  85.60  60.40  56.90
Qwen1.5-72B     80.45    83.72   83.09  81.86  91.09  62.50
Qwen-72B        83.00    83.30   83.60  95.80  91.10  61.20
DeepSeek-67B    73.46    66.90   70.40  77.80  89.10  63.10
Tele-FLM (52B)  71.13    65.48   66.98  66.25  92.57  64.38
6 Lessons Learned
Lesson on Pre-training Data. We have the following observations from Tele-FLM's pre-training process. First, as is widely known, both the quality and the quantity of the data are critical for pre-training; however, when a trade-off between the two is necessary, data quality might be prioritized. For our project, an English-Chinese data ratio of 2:1 works better than 1:1, likely because the average quality of the Chinese Web data we have is relatively low. Second, changing the data distribution midway sometimes leads to changes in gradient norm curves and potential divergence, while maintaining a fixed distribution is more stable. Another advantage of maintaining a fixed data distribution is that it allows for safer early stopping of the µP experiments. To conclude, the data processing should be as complete as possible before the pre-training starts.
Lesson on Hyperparameter Search. We observe that µP-based methods [73; 75] are effective and efficient in searching for the best hyperparameters and predicting the behaviors of the final large models. Specifically, prior experience and the open-sourced learning rates are good starting points for hyperparameter search. Nevertheless, initialization standard deviation and output multipliers have more significant influences than commonly known.
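The data lesson above stresses fixing the language mixture (an English-Chinese ratio of 2:1) for the whole run. A minimal sketch of a fixed-ratio sampler is shown below; the actual data pipeline is not described at this level of detail, so this is only an illustration.

```python
# Fixed-mixture sampling kept constant for the whole run. The two-way split
# reflects the reported 2:1 English-Chinese ratio; the real pipeline mixes
# many finer-grained domains.
import random

MIXTURE = {"english": 2 / 3, "chinese": 1 / 3}

def sample_domain(rng: random.Random) -> str:
    """Draw the source language for the next document under the fixed mixture."""
    domains, weights = zip(*MIXTURE.items())
    return rng.choices(domains, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {"english": 0, "chinese": 0}
for _ in range(10_000):
    counts[sample_domain(rng)] += 1
print(counts)  # roughly 2:1
```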
Lesson on Loss Dynamics. First, the slope of the loss curve typically flattens after 500B tokens. Therefore, training should be restarted promptly if early loss values are unsatisfactory. Second, random loss spikes are common and acceptable if the gradient norm curve looks normal. We observe that our model recovers from all the spikes in the pre-training process, unlike the early open-sourced endeavors [81; 4; 79]. We speculate that modern Llama-like structures, especially those with non-bias designs and truncated normal initialization, combined with effective hyperparameter search, provide decent robustness against loss spikes. Another type of spike corresponds to consistent loss increases, which can be identified early with µP and avoided before the training begins.
Lesson on Gradient Norm. The early gradient norm curves are not strong indicators of training stability. In hyperparameter search, we observe divergence following various gradient curve patterns, yet with higher divergence probabilities associated with continuously increasing gradient trends.
7 Related Work
The idea of large foundation models originates from unsupervised pre-training with Transformer-based [64] architectures. Well-known examples of early foundation models include BERT [17], GPT-2 [43], and T5 [45]. GPT-3 [10] increases the model size to 175B and observes decent few-shot and zero-shot reasoning capabilities, which encourages a series of efforts to scale up foundation models [81; 47; 4; 79]. Research on scaling laws [29; 23; 24; 75] sheds light on the predictable trends of model performance as the parameter number increases. On the other hand, other works explore the emergent abilities [68; 67; 48] and their relationships to evaluation metrics and task nature. The Llama series [60; 61; 2] is well-known for its contributions to open-sourced large language models, and is widely regarded as a strong baseline for foundation model evaluation. Falcon [42] explores data processing of publicly available pre-training corpora. Mistral [27] and Gemma [58] release 7B-scale models that are trained on more data and incorporate advanced designs. For the Chinese community, Qwen [5], Baichuan [71], Yi [76], and DeepSeek [7] represent efforts in multilingual foundation model pre-training and open-sourcing. FLM-101B [33] studies methodologies for training large foundation models under limited budgets. InstructGPT [41] establishes the paradigm of aligning large foundation models with human preferences. Widely used approaches include supervised fine-tuning (SFT) [66; 70] and Reinforcement Learning from Human Feedback (RLHF) [49], among others [44]. Aligning techniques turn foundation models into dialogue agents, which form the core of AI assistants in commercial use. Closed-source dialogue agents are represented by GPT-4 [40], Claude [6], Grok [1], and Gemini [57]. Open-sourced chat models include Zephyr [62] and ChatGLM [25], among the large number of human-aligned versions of the open foundation models mentioned above.
8 Conclusions and Future Work
In this report, we introduce Tele-FLM, an open multilingual foundation model. With 52B parameters and 2T training tokens, Tele-FLM matches the performance of larger models trained with more data, in both multilingual language modeling capabilities and benchmark evaluations. The pre-training procedure of Tele-FLM features a high success rate and a low carbon footprint. We open-source the model weights as well as technical details and training dynamics.
We hope this work will catalyze the growth of open-sourced LLM communities and reduce the trial-and-error cycles to train LLMs with more than 50B parameters. Note that although efforts are made to filter out harmful contents in the training data, such kind of outputs could still potentially be elicited from the released model, which does not represent the opinions of the authors or entities involved. For future work, we plan to continue enhancing the capabilities of Tele-FLM to facilitate broader application, as well as to develop efficient training techniques to explore the unmanned deep space of larger-scaled dense models. Acknowledgments This work is supported by the National Science and Technology Major Project (No. 2022ZD0116300) and the National Science Foundation of China (No. 62106249). We would like to thank Boya Wu, Li Du, Quanyue Ma, Hanyu Zhao, Shiyu Wu and Kaipeng Jia for their help on data, Hailong Qian, Jinglong Li, Taojia Liu, Junjie Wang, Yuanlin Cai, Jiahao Guo, Quan Zhao, Xuwei Yang, Hanxiao Qu, Yan Tian, and Kailong Xie for their help on computational resources, and all other colleagues\u2019 strong support for this project.", "introduction": "Large Language Models (LLMs) have been considered a remarkable approach for unsupervised learning, utilizing extensive data to achieve significant advancements. Large models based on decoder-only Transformers [64; 43] have demonstrated strong abilities on language understanding, generation, and in-context learning [10], et al.. Through downstream supervised fine-tuning (SFT) and task-specific alignments (e.g., Reinforcement Learning from Human Feedback, RLHF) [41], LLMs have led to significant progress in the development of dialogue assistant applications with their human-level multi-turn interaction capabilities [40]. Furthermore, LLMs have demonstrated complex cognitive abilities as reflected by code interpretation and completion [37], mathematical problem-solving [35], logical reasoning [69], and agent-like actions [9]. Recently, LLMs have also shown potential to facilitate a unified sequence-to-sequence modeling paradigm for multimodal learning by treating image, video, and audio signals all as token sequences [57; 30]. This positions LLMs as pivotal for progress towards Artificial General Intelligence (AGI) [11]. Inspired by the superior performances of proprietary applications [40; 6], a plethora of open-sourced LLMs has been publicly available for both the English [60; 61; 42; 27; 58] and Chinese [71; 5; 7; 33] communities. The open-sourced models typically vary in size from 7B to 70B parameters, with their performances improving with model sizes and training FLOPs, which is described as scaling laws [29; 23]. Open LLMs can be classified into foundation language models, SFT models, and RLHF models. \u2020Indicates equal contribution. *Corresponding authors. Technical Report. April 26, 2024 (v1) arXiv:2404.16645v1 [cs.CL] 25 Apr 2024 Tele-FLM Technical Report 2 PRE-TRAINING DATA Despite the growing prevalence and impressive evaluation performances, the high computational cost remains the major challenge in LLM development. In this study, we focus on alleviating the excessive computation by establishing a model-producing pipeline that streamlines the hyperparame- ter searching process, minimizes trial-and-error, and reduces restarts in training. 
For instance, the Llama technical report [60] assumed the use of around 2,048 A100 GPUs for 5 months, while a single Llama-65B training trial spanned only 21 days, constituting only 14% of the total GPU time. It indicates that open-source endeavors of pre-training LLMs may undergo redundant trial-and-error cycles that may consume enormous computational resources. In contrast, in this work, we reduce the total time cost due to restarts and trial-and-error to negligible levels. We believe that sharing our detailed techniques, engineering practices, and training dynamics [20], especially for LLMs exceeding the 50B scale, could benefit the community as well as contribute to green AI. In this report, we introduce Tele-FLM (aka FLM-2), an open multilingual LLM with 52 billion parameters, which is pre-trained from scratch on a 2.0 trillion token corpus comprising texts from English, Chinese, and various other languages. Tele-FLM inherits and extends the low carbon techniques and fact-enhancing pre-training objectives from the FLM family [33]. The training of Tele-FLM has encountered no instability issue except hardware failures through the completed 2T tokens, and remains ongoing for more data. In addition to the model checkpoints, we release the details of data composition, model architecture, hyperparameter searching, and the full pre-training dynamics. We evaluate Tele-FLM across multiple English and Chinese benchmarks. Regarding English language modeling, Tele-FLM has better Bits-Per-Byte (BPB) than Llama2-70B [61], demonstrating strong compression capabilities. The model also achieves lower BPB than Llama3-70B [2] and Qwen1.5- 72B [5] on Chinese corpora, showcasing its multilingual nature. With fewer English training tokens and smaller models, Tele-FLM matches Llama-65B and is comparable to Llama2-70B in English foundation model evaluation. As for Chinese foundation model evaluation, Tele-FLM matches the overall performance of larger multilingual models trained with a similar amount of data (e.g., DeepSeek-67B [7]). On certain tasks, it surpasses larger models trained with significantly more data (e.g., Qwen1.5-72B). The remainder of this report is structured as follows: Section 2 delves into the specifics of pre- training data processing. Section 3 details our model architecture, tokenizer, infrastructures, training techniques, and hyperparameters. In Section 4, we illustrate the pre-training dynamics and conduct BPB-based evaluation and analysis. Benchmark evaluation in both English and Chinese are provided in Section 5. Section 6 discusses some common issues and lessons learned. Section 7 reviews related literature. We conclude our work and look to the future in Section 8." } ], "Jian Song": [ { "url": "http://arxiv.org/abs/2309.01907v1", "title": "SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and Building Change Detection", "abstract": "Synthetic datasets, recognized for their cost effectiveness, play a pivotal\nrole in advancing computer vision tasks and techniques. However, when it comes\nto remote sensing image processing, the creation of synthetic datasets becomes\nchallenging due to the demand for larger-scale and more diverse 3D models. This\ncomplexity is compounded by the difficulties associated with real remote\nsensing datasets, including limited data acquisition and high annotation costs,\nwhich amplifies the need for high-quality synthetic alternatives. 
To address\nthis, we present SyntheWorld, a synthetic dataset unparalleled in quality,\ndiversity, and scale. It includes 40,000 images with submeter-level pixels and\nfine-grained land cover annotations of eight categories, and it also provides\n40,000 pairs of bitemporal image pairs with building change annotations for\nbuilding change detection task. We conduct experiments on multiple benchmark\nremote sensing datasets to verify the effectiveness of SyntheWorld and to\ninvestigate the conditions under which our synthetic data yield advantages. We\nwill release SyntheWorld to facilitate remote sensing image processing\nresearch.", "authors": "Jian Song, Hongruixuan Chen, Naoto Yokoya", "published": "2023-09-05", "updated": "2023-09-05", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI", "cs.HC" ], "main_content": "2.1. Remote Sensing Image Processing Tasks 2.1.1 Land Cover Mapping The discipline of land cover mapping is a crucial component of remote sensing image processing, where the goal is to categorize and depict physical features on Earth\u2019s surface, such as grass, trees, water bodies, bareland, buildings, etc. This task resembles semantic segmentation in traditional computer vision. Although the introduction of benchmark datasets for real-world scenarios, such as DeepGlobe [11], LoveDA [46], and OpenEarthMap (OEM) [48], has made significant advances in associated research, there is still a clear need for high-quality synthetic datasets. This is an area where the field of computer vision has made significant progress. Recognizing this gap, we were motivated to create SyntheWorld, a synthetic dataset crafted to improve performance in land cover mapping tasks. 2.1.2 Building Change Detection The task of building change detection forms another crucial component within the realm of remote sensing image processing. It involves the identification and localization of modifications in man-made structures, especially buildings, over time, achieved through the analysis of images of the same area captured at different intervals. It is an indispensable technique for assessing damage in scenarios such as earthquakes, hurricanes, or floods, and for monitoring urban development and expansion over time. Typical annotations for this task involve binary masks, with networks trained to predict areas of building change based on input image pairs from two time points. While the emergence of benchmark real-world datasets such as WHU-CD [17], LEVIRCD+ [5], and SECOND [51] have provided the field with valuable data resources, the lack of high-quality synthetic datasets has hindered the pace of related research. 2 2.2. Existing Synthetic Datasets 2.2.1 Street-view & Indoor-view As we mentioned, the availability of large, high-quality synthetic datasets for street-view and indoor-view has driven the development of related techniques in traditional computer vision. The MPI Sintel Dataset [4] is widely used for training and evaluating optical flow algorithms, capturing natural scenes and motions in its synthetic dataset derived from an animated film. SceneFlow [25], with more than 35,000 synthetic stereo video sequences, is designed for the evaluation of optical flow, disparity, and scene flow algorithms. SYNTHIA [36], a dataset composed of 9,400 multiviewpoint frames from a virtual city, targets urban scene understanding tasks with its pixel-level semantic annotations. 
The GTA5 dataset [32], comprising 24,966 synthetic images from the perspective of a car in virtual cities, is tailored to the understanding of urban scenes with its pixellevel semantic annotations compatible with the Cityscapes dataset [9]. Synscapes [47], featuring 25,000 photorealistic street scenes, aims to improve the performance of computer vision models in outdoor scenes with its precise semantic labels. Finally, SceneNet [13], a diverse synthetic dataset of over 5 million indoor scenes with RGB-D images and semantic labels, is designed for indoor scene understanding tasks. 2.2.2 Overhead-view The AICD dataset [3], one of the earliest datasets with an overhead view, uses the Virtual Battle Station 2 game engine to simulate building alterations. Despite its 1,000 pairs of 800\u00d7600 RGB image pairs with building change masks, its 500 change instances are limited compared to the tens of thousands found in real-world datasets. The GTA-VSID dataset [52], extracted from the GTA-V game, covers a 100km2 area with 121 500 \u00d7 500 aerial RGB images. Although it is useful for building segmentation tasks, its 1m GSD limits performance in high-resolution remote sensing datasets. Syntinel-1 [21], the first high-resolution synthetic remote sensing dataset for building segmentation, is based on CityEngine and offers a variety of urban styles. The Syntcities dataset [31] is for disparity estimation in remote sensing images, featuring three virtual cities and 8,100 pairs of high-resolution images. RarePlanes [39], a semisynthetic dataset for aircraft object detection, combines real WorldView-3 satellite imagery and 3D models. 3. Dataset Generation and Description Constructing a virtual city manually is time-consuming. Comparatively, SyntheWorld differs from existing overhead-view synthetic datasets by using procedural modeling. Previous studies in computer graphics Road/River.py #Road/River parameters river_num = randint() road_num = randint() width = uniform() ... Tree.py #Tree parameters trunk = Sample_Noise() branch_num = randint() leaf_num = randint() ... Building.py #Building parameters height = uniform() type = select() roof_angle = uniform() ... Grid_based.py #Grid-based parameters district_num = randint() district_size = randint() obj_density = uniform() ... Terrain_based.py #Terrain-based parameters flat_area = uniform() mountain_area = uniform() tree_density = uniform() ... GPT-4 Stable Diffusion . . . Rangeland Agriculture land Bareland Road Roof Manual Prompts Sensor.py #Sensor parameters azimuth = uniform() look_angle = gaussian() gsd = uniform() ... Sun.py #Sunlight parameters elevation = uniform() intensity = uniform() color = [uniform(),uniform(),uniform()] ... Sensor & Sunlight Layout Geometry Texture Figure 2. The essential components for building SyntheWorld dataset. have explored procedural modeling for cities and buildings [19, 27, 28], but none have utilized these techniques for the creation of overhead view datasets. We create our own procedural rules to create 3D geometries and apply textures derived from generative models, which minimize labor costs and enrich diversity. 3.1. Generation Workflow Layout. We adopt grid-based and terrain-based methods for the virtual world, as illustrated in Fig. 2. For the gridbased method, we randomly slice a grid of 0.25-0.36km2 into several blocks of varying numbers and sizes, placing different types of buildings and trees in each block, and the boundaries between the blocks serve as our road system. 
It mainly simulates the more regular city and suburban layouts, and also contributes to the production of 0.30.6m GSD synthetic remote sensing images. For the terrainbased method, we use random noise textures to generate terrains such as mountains, plains, and oceans with ranges of 1-2km2. Placing rivers, roads, buildings, and trees according to carefully designed rules based on Geometry Nodes in Blender [7], this method mimics irregular layouts in developing regions. It mainly contributes to the production of 0.6-1.0m GSD synthetic remote sensing images. Geometry. The geometry row in Fig. 2 demonstrates our approach to procedurally model trees and buildings. For buildings, we use random noise to cut out differently shaped grids on a flat plane, which we then extrude into 3D geometries following pre-set rules. Users can control predefined parameters to generate an infinite number of different geometric styles. We distribute predefined asset components (walls, roofs, windows, etc.) to the geometry and finally map the texture generated by AIGC to the building. For trees, we use random-shaped curves as trunks and distribute different styles of tree components to the curve following certain rules. Texture. The last row in Fig. 2 shows examples of our process for generating corresponding texture assets using 3 RS Synthetic Datasets Features and Composition GSD (m) Task # of images Image Size Automatic Labeling Fully Synthetic Procedural Modeling AICD [3] \u2212 BCD 1, 000 pairs 800 \u00d7 600 \u221a \u221a \u00d7 GTA-V-SID [52] 1 BS 121 500 \u00d7 500 \u00d7 \u221a \u00d7 Synthinel-1 [21] 0.3 BS 1, 054 572 \u00d7 572 \u221a \u00d7 \u00d7 RarePlanes [39] 0.31 \u223c0.39 OD 50, 000 512 \u00d7 512 \u221a \u00d7 \u00d7 SyntCities [31] 0.1, 0.3, 1.0 DE 8, 100 pairs 1024 \u00d7 1024 \u221a \u00d7 \u00d7 SyntheWorld (Ours) 0.3 \u223c0.6 BS/LC/BCD 30, 000 pairs 512 \u00d7 512 \u221a \u221a \u221a 0.6 \u223c1.0 10, 000 pairs 1024 \u00d7 1024 Table 1. Features and composition comparison among remote sensing synthetic datasets. LC: land cover mapping. BCD: building change detection. BS: building segmentation. OD: object detection. DE: disparity estimation. AIGC. In terms of operational specifics, we first make a Stable Diffusion usage guide as a prompt to help GPT-4 understand its workings and prompt forms. We then provide excellent prompts as examples and ask GPT-4 to generate different themed prompts for different types of textures. In total, we generated around 140,000 seamless textures for different geometry to build SyntheWorld, far exceeding the number of textures used by existing overhead-view datasets. See the supplementary material for detailed prompts and generated images. 3.2. Structure of Dataset As shown in Tab. 1, SyntheWorld is a comprehensive image dataset, consisting of 40,000 pairs of images. Of these, 30,000 pairs have a GSD ranging from 0.3 to 0.6 m, with each image having size 512 \u00d7 512. The remaining 10,000 pairs have a GSD of 0.6 to 1.0 m and a larger image size of 1024 \u00d7 1024. Each pair in the dataset contains a post-event image, which is utilized for the land cover mapping task. These post-event images are accompanied by semantic labels of eight categories, as shown in Fig. 1. These categories are consistent with those of the OEM [48] dataset. Correspondingly, the pre-event images are derived by introducing variability in each scene. This involves different textures, lighting parameters, and camera settings. 
Additionally, there is a 10% to 50% chance that any given building in the scene might be removed. Both pre-event and post-event images from each pair are used collectively for the building change detection task. Accordingly, the dataset comes with 40,000 binary classification masks corresponding to this task. The off-nadir angle of all images ranges from \u221225\u25e6to 25\u25e6and follows a Gaussian distribution with a certain mean 0\u25e6and variance 2.3\u25e6. Similarly, we simulate the sun\u2019s position during the day in most countries by adjusting the zenith (ranging between 25\u25e6to 35\u25e6) and the elevation parameters (ranging between 45\u25e6to 135\u25e6), as guided by the documentation of the Pro Atom [8] addon in the Blender community, both parameters following a uniform distribution. This inclusion of various viewing angles and sun elevation enhances the robustness of SyntheWorld and ensures its applicability to a wide range of real-world datasets. 3.3. Comparison with Existing Synthetic Datasets As depicted in Tab. 1, we provide a comparative analysis of SyntheWorld and existing synthetic remote sensing datasets, in terms of their features and composition. The Task column presents the primary tasks illustrated in the corresponding dataset\u2019s literature. Regarding label generation, the GTA-V-SID dataset [52] consists of screenshots of the GTA-5 commercial video game, with buildings manually annotated. On the contrary, the remaining datasets are capable of automatically generating annotations via the corresponding 3D software. In terms of complete synthesis, only SyntheWorld achieves this feat. The other datasets have adopted real remote sensing images to some extent as texture or as part of the dataset during their construction. Finally, in SyntheWorld, most 3D models are generated using procedural modeling, while in other synthetic datasets, the geometric structure and texture of the models are either predefined or meticulously designed by 3D artists. This unique characteristic of SyntheWorld significantly enhances its diversity. 4. Experiments 4.1. Real-world Benchmark Datasets To validate the versatility and effectiveness of SyntheWorld, we performed experiments using several highresolution remote sensing datasets from various real-world scenarios. In the subsequent discussion, we present an indepth overview of these datasets. In the experiments showcased in this section, we employ \u201cw\u201d to signify the utilization of the SyntheWorld dataset and \u201cw/o\u201d to indicate its non-use. For the building segmentation task, we relied on OEM [48] and LoveDA [46] datasets, as well as INRIA [24] and BANDON [30] datasets. The INRIA dataset, which 4 Train on Test on OEM* LoveDA* INRIA BANDON GTA-V-SID [52] 2.43 0.88 1.74 1.64 Synthinel-1 [21] 35.37 14.13 39.89 28.19 SyntCities [31] 23.61 21.39 30.39 30.01 SyntheWorld 49.26 37.28 45.76 34.01 OEM* [48] 80.48 55.35 75.61 64.19 Table 2. mIoU(%) results of the building segmentation task using DeepLabv3+. * means to use only the part of the building label in the dataset. targets building footprint segmentation, incorporates aerial images from ten cities in the United States and Europe at a resolution of 0.3 m. The BANDON dataset stands out with significant off-nadir angles and focuses on urban areas with skyscrapers. It offers high-resolution 0.6m remote sensing images from Beijing and Shanghai. We turned to OEM and LoveDA datasets again for the multi-class land cover mapping task. 
The OEM dataset, encompassing 97 regions across 44 countries worldwide, provides high-resolution images with detailed eight-class land cover annotations. The LoveDA dataset offers 0.3m GSD remote sensing images from three diverse regions in China, labeled with seven land cover categories. In the building change detection task, we harnessed the WHU-CD [17], LEVIR-CD+ [5], and SECOND [51] datasets. The LEVIR-CD+ dataset consists of 987 image pairs, with 637 pairs in the training set and 348 pairs in the test set. SECOND, a semantic change detection dataset, collects 4662 pairs of aerial images from various platforms and sensors across cities like Hangzhou, Chengdu, and Shanghai. The WHU-CD dataset consists of two pairs of superhigh-resolution (0.075m) aerial images. We cropped these large training (21243\u00d715354) and testing (11265\u00d715354) images into non-overlapping 512 \u00d7 512 patches for our experiments. 4.2. Building Segmentation To compare with existing overhead-view synthetic datasets, which mainly include semantic labels of buildings, we performed building segmentation experiments. We use the DeepLabv3 + [6] network equipped with ResNet50 [14] backbone. We adopted the SGD optimizer [33] for all synthetic datasets, employing a learning rate of 1e-3, a weight decay of 5e-4, and a momentum of 0.9; for the OEM dataset, we opted for a higher learning rate of 1e-2. The results are presented in Tab. 2. The GTA-VSID [52] dataset underperforms on various high-resolution real-world datasets due to its smaller quantity and 1m GSD. The model trained on the SyntheWorld dataset outperforms other datasets on four real-world datasets, especially on the OEM and LoveDA datasets. These two datasets include a considerable number of buildings in developing or develDatasets w/o w/ OEM [48] 66.96 66.84 LoveDA [46] 51.14 53.32 O\u2192L 35.28 34.83 L\u2192O 21.95 25.24 Table 3. Land cover mapping mIoU(%) outcomes from intradataset and cross-dataset evaluations, utilizing the DeepLabv3+ model for all experiments. O\u2192L denotes training on the OEM training set and testing on the LoveDA validation set, while L\u2192O represents the converse. oped areas. Thus, the performance of SyntheWorld far exceeds that of other competitors in these two datasets. As the buildings in the INRIA [24] and BANDON [30] datasets are predominantly high-rises in urban areas or well-organized detached houses in suburban areas, the advantage of the SyntheWorld dataset is not as evident as in the other two datasets, but still shows the best performance. Furthermore, the last column of Tab. 2 shows the performance of the model trained on the OEM dataset and tested on other datasets. Although SyntheWorld significantly outperforms other synthetic datasets, there is still a gap compared to realworld datasets. In Fig. 3 (a), we also visualized the feature extract of the well-trained ResNet-50 of all synthetic and real datasets using UMAP [26]. In terms of feature space, SyntheWorld is closer to real-world datasets than any existing synthetic datasets. 4.3. Land Cover Mapping SyntheWorld is the first synthetic dataset that offers consistent annotations compatible with high-resolution realworld benchmarks. In this section, we primarily discuss the performance of SyntheWorld in the land cover mapping task. 
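The segmentation experiments above use DeepLabv3+ with a ResNet-50 backbone trained with SGD (learning rate 1e-3, momentum 0.9, weight decay 5e-4 for the synthetic-data runs). The paper does not state which implementation was used; the sketch below assumes the segmentation_models_pytorch package purely for illustration.

```python
# Illustrative training setup for the building segmentation experiments.
# Package choice (segmentation_models_pytorch) and two-class head are
# assumptions made for this sketch, not details stated in the paper.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(encoder_name="resnet50", classes=2)  # building / background
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```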
4.3.1 Cross-dataset Experiments To evaluate the enhancements brought about by using SyntheWorld, we adopted the mixed training strategy [32] often used with synthetic datasets, a batch size of 8, including 7 real images and 1 synthetic image per batch. The model was trained using DeepLabv3+ with the SGD optimizer and an initial learning rate of 1e-2, accompanied by a weight decay of 5e-4, and a momentum of 0.9. All experiments were trained for 100 epochs on a Tesla A100 GPU. Specifically, we map the rangeland class and the developed space class in OEM and SyntheWorld to the background class in LoveDA to keep the classes consistent. Tab. 3 outlines the results obtained by integrating training images from a real-world dataset with SyntheWorld and the results of cross-dataset tests using the SyntheWorld dataset. Incorporating SyntheWorld with the entire 5 BANDON (Real) INRIA (Real) LoveDA (Real) OEM (Real) GTA-V-SID (Synthetic) SyntCities (Synthetic) Synthinel-1 (Synthetic) SyntheWorld (Synthetic) (a) (b) Figure 3. (a) 2D UMAP visualization of synthetic and real datasets. We use ResNet-50 pre-trained on the OEM dataset as the feature extractor; (b) Colormap of density estimation for SyntheWorld, OEM, and LoveDA dataset. Datasets 1% 5% 10% w/o w/ w/o w/ w/o w/ OEM [48] 40.9 45.01 52.21 54.0 58.40 59.31 LoveDA [46] 34.59 36.75 42.38 44.58 45.27 48.12 Table 4. mIoU(%) results from the DeepLabv3+ model, trained both with and without SyntheWorld, and deployed on two realworld land cover mapping datasets at various proportions of real image utilization. OEM [48] training set does not result in performance enhancements. Similarly, combining SyntheWorld with the OEM training set and subsequently testing on LoveDA [46] slightly reduces model efficacy. However, when we merge SyntheWorld with the LoveDA and test on the same, the model\u2019s mIoU increases by 2.18 points. In addition, a 3.29point improvement in mIoU is observed when testing the OEM test set after integrating SyntheWorld and LoveDA. To investigate the observed phenomenon, we made density estimation maps for the three datasets as displayed in Fig. 3 (b). This reveals a notable overlap between SyntheWorld and OEM, with a lesser overlap in relation to LoveDA. The expansive coverage of the OEM dataset surpasses that of LoveDA and SyntheWorld. This finding sheds light on the patterns observed in Tab. 4. The vast diversity of the OEM dataset effectively captures the most data diversity inherent in SyntheWorld. Therefore, no performance enhancement results from integrating SyntheWorld. Nevertheless, the substantial overlap between SyntheWorld and OEM enables a performance boost when SyntheWorld is merged with LoveDA and tested on OEM. Conversely, the lesser overlap between SyntheWorld and LoveDA means that integrating SyntheWorld during OEM training does not lead to improvements in the LoveDA test set. Subsequently, we assessed performance when integrating SyntheWorld with varying proportions of real-world datasets. Tab. 4 presents the findings. Irrespective of the real-world dataset being OEM or LoveDA, the integration Train on Test on Urban Rural w/o w w/o w Urban 47.00 50.32 33.44 37.95 Rural 36.86 38.17 48.64 51.66 Table 5. Land cover mapping results, measured in mIoU(%), from cross-domain experiments involving urban and rural areas of the LoveDA dataset. of SyntheWorld consistently enhances model performance when the quantity of training data is limited. 
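The mixed training strategy above builds each batch of 8 images from 7 real and 1 synthetic sample. A minimal sketch of such a sampler is given below; the dataset objects are placeholders, since the real loaders are not specified at this level of detail.

```python
# Sketch of fixed-ratio mixed batching (7 real : 1 synthetic per batch of 8).
# Illustrative only; not the authors' data pipeline.
import random
from torch.utils.data import Dataset

def mixed_batches(real: Dataset, synthetic: Dataset, batch_size: int = 8, n_synth: int = 1):
    """Yield (source, index) batches mixing real and synthetic samples at a fixed ratio."""
    real_ids = list(range(len(real)))
    synth_ids = list(range(len(synthetic)))
    random.shuffle(real_ids)
    n_real = batch_size - n_synth
    for i in range(0, len(real_ids) - n_real + 1, n_real):
        batch_real = real_ids[i:i + n_real]
        batch_synth = random.sample(synth_ids, n_synth)
        # each entry is tagged with its source so a wrapper can route it
        # to the correct underlying dataset
        yield [("real", j) for j in batch_real] + [("synthetic", j) for j in batch_synth]
```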
4.3.2 Cross-domain Experiments In order to examine the performance of SyntheWorld in outof-domain test scenarios, we partition the OEM [48] dataset into seven distinct continents. Africa, Asia, Europe, Central America, North America, South America, and Oceania. Simultaneously, for the LoveDA [46] dataset, we conducted experiments using urban and rural areas as separate domains. We conduct experiments with various decoders and encoders; in this section, we show the results of one model. See supplementary material for more results from different models, dataset division, and experimental setup. Continent-wise experimental results. Fig. 4 displays the results of cross-continent experiments in the OEM dataset using the U-Net [35] architecture with the EfficientNet-B4 [43] encoder. We can observe that our SyntheWorld dataset can significantly enhance performance across most dataset pairs. Also, we show in Fig. 5 the qualitative results when synthetic data can lead to a boost. More results can be found in the supplementary material. However, in some cases, the synthetic dataset does not yield a substantial improvement and could even degrade the model performance. It is crucial to investigate the reasons for such enhancement and impairment for the use of synthetic datasets. Therefore, we have conducted a further analysis of these results in Sec. 4.3.3. Urban-Rural experimental results. We conducted similar cross-domain experiments on the LoveDA dataset, which includes two domains, rural and urban. The results are illustrated in Tab. 5. We found that the SyntheWorld dataset enhances model performance in both in-domain and out-of-domain tests. 4.3.3 Relative Distance Ratio The cross-domain experiments discussed in Sec. 4.3.1 and Sec. 4.3.2 show that the SyntheWorld dataset does not always yield significant improvements. This highlights the need to understand the underlying causes. We introduce a metric, the Relative Distance Ratio (RDR), aiming to quantify the relationship between source, target, and synthetic 6 (a) (b) (c) Figure 4. Results of continent-wise in-domain and out-of-domain land cover mapping experiments of OEM dataset. The x-axis represents the target domain and the y-axis represents the source domain. U-Net with EfficientNet-B4 encoder is used for all experiments. (a) The mIoU results of without using SyntheWorld; (b) The mIoU results of mixed training with SyntheWorld; (c) Changes in mIoU. SA\u2192SA SA\u2192NS SA\u2192AS SA\u2192EU Image Ground Truth w/ SyntheWorld w/o SyntheWorld Figure 5. Qualitative results by U-Net model of continent-wise land cover mapping task. datasets and clarify when synthetic data can bring improvements. For measuring the distance between datasets, various methods have been discussed in the literature [1, 15, 38]. The most commonly used measure of the distance between synthetic and real datasets is the FID score [15]. Here we adopt the Fr\u00b4 echet Distance, as the measure of distance between different datasets. Since the Inception model [42] pre-trained on ImageNet [12] is not suitable for remote sensing datasets, we use ResNet-50 [14] pre-trained on the OEM [48] dataset. 
The formula to compute the FD between any dataset pair is as follows:
FD(x, y) = ||\mu_x - \mu_y||^2 + \mathrm{Tr}\big(\Sigma_x + \Sigma_y - 2(\Sigma_x \Sigma_y)^{\frac{1}{2}}\big),   (1)
where \mu_x and \mu_y denote the mean feature vectors of datasets x and y, respectively, and \Sigma_x and \Sigma_y represent the covariance matrices of the corresponding feature vectors. Then we denote the source domain dataset as S, the target domain dataset as T, SyntheWorld as G, and the FD between any two datasets as \delta(\cdot, \cdot). The distance between the source domain dataset S and the target domain dataset T can then be expressed as:
\delta(f_S, f_T) = FD(f_S, f_T).   (2)
Similarly, the distance between the target domain dataset T and the synthetic dataset G can be represented as:
\delta(f_T, f_G) = FD(f_T, f_G).   (3)
Here f_S, f_T and f_G are obtained by applying a ResNet-50 model pre-trained on the OEM dataset. Subsequently, we can define the Relative Distance Ratio (RDR), denoted as \mathcal{R}(f_S, f_T, f_G), calculated using the following formula:
\mathcal{R}(f_S, f_T, f_G) = \frac{\delta(f_T, f_G)}{\delta(f_S, f_T)}.   (4)
Intuitively, a smaller R indicates a greater capacity of the model to integrate knowledge from the synthetic data and transfer it to the target domain. To validate this, we present a correlation scatter plot in Fig. 6, which reveals a negative correlation between R and the improvement in mIoU. This observation aligns with our initial conception in designing the RDR metric. Therefore, the proposed RDR metric effectively serves as a quantitative conditional criterion for employing synthetic data; that is, when R is large, there is a risk in using synthetic data, and vice versa.
Figure 6. Scatter diagram with correlation between mIoU changes and the proposed Relative Distance Ratio (RDR).
4.4. Building Change Detection
In this section, we demonstrate the effectiveness of SyntheWorld on the building change detection task. We employ four prevalent building change detection networks: FC-siam-Diff [10], STANet-PAM [5], DTCDSCN [22], and ChangeFormer [2]. We adhere to a mixed training strategy that includes a 7:1 real-to-synthetic image ratio. For ChangeFormer and DTCDSCN we use the AdamW [23] optimizer with learning rate 1e-4; for the other two models we use the Adam optimizer with learning rate 1e-3. Each mixed training experiment is trained for 100 epochs on a Tesla A100 GPU. Tab. 6 presents the F1 scores of three different models applied to three real-world datasets.
Table 6. F1 score resulting from the use or non-use of SyntheWorld across three building change detection benchmark datasets, assessed with three different models.
Datasets        STANet-PAM (w/o / w/)  DTCDSCN (w/o / w/)  ChangeFormer (w/o / w/)
LEVIR-CD+ [5]   0.752 / 0.782          0.793 / 0.812       0.784 / 0.835
SECOND* [51]    0.713 / 0.733          0.712 / 0.727       0.723 / 0.734
WHU-CD [17]     0.707 / 0.802          0.769 / 0.862       0.783 / 0.836
Table 7. Comparison of F1 scores from the DTCDSCN model trained with and without SyntheWorld, applied on three different real-world datasets at varying ratios of real image use.
Datasets        1% (w/o / w/)   5% (w/o / w/)   10% (w/o / w/)
LEVIR-CD+ [5]   0.517 / 0.646   0.636 / 0.731   0.726 / 0.764
SECOND* [51]    0.401 / 0.435   0.546 / 0.622   0.583 / 0.631
WHU-CD [17]     0.242 / 0.312   0.433 / 0.638   0.510 / 0.705
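Returning to Eqs. (1)-(4): both the Fréchet Distance and the Relative Distance Ratio can be computed directly from feature statistics. The sketch below assumes features have already been extracted (the paper uses a ResNet-50 pre-trained on OEM); random matrices stand in for real features purely for illustration.

```python
# Numerical sketch of Eqs. (1)-(4): Frechet Distance and Relative Distance Ratio.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_x: np.ndarray, feats_y: np.ndarray) -> float:
    """Eq. (1): ||mu_x - mu_y||^2 + Tr(Sig_x + Sig_y - 2 (Sig_x Sig_y)^{1/2})."""
    mu_x, mu_y = feats_x.mean(0), feats_y.mean(0)
    sig_x = np.cov(feats_x, rowvar=False)
    sig_y = np.cov(feats_y, rowvar=False)
    covmean = sqrtm(sig_x @ sig_y).real  # drop tiny imaginary numerical noise
    return float(np.sum((mu_x - mu_y) ** 2) + np.trace(sig_x + sig_y - 2 * covmean))

def relative_distance_ratio(f_s: np.ndarray, f_t: np.ndarray, f_g: np.ndarray) -> float:
    """Eq. (4): RDR = FD(target, synthetic) / FD(source, target)."""
    return frechet_distance(f_t, f_g) / frechet_distance(f_s, f_t)

rng = np.random.default_rng(0)
f_source, f_target, f_synth = (rng.normal(size=(256, 64)) for _ in range(3))
print(relative_distance_ratio(f_source, f_target, f_synth))
```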
Evidently, for each real-world dataset and each model type, integrating the SyntheWorld dataset induces an improvement, notably for the WHU-CD dataset where it can induce almost a 10-point increase in the F1 score when using the STANetPAM and DTCDSCN models. Also, we display in Fig. 7 the qualitative results when using SyntheWorld can lead to enhancements. More results can be found in the supplementary material. Tab. 7 reveals the F1 score of the DTCDSCN model with different proportions of the real-world training set, with and without the incorporation of SyntheWorld. Across all realworld datasets, SyntheWorld invariably provides substantial performance improvement when training data is scarce. Tab. 8 illustrates the generalizability of the SyntheWorld dataset with the FC-siam-Diff model. We draw comparisons with three real datasets and the AICD synthetic Train on Test on LEVIR-CD+ SECOND* WHU-CD LEVIR-CD+ (Real) 0.751 0.180 0.614 SECOND* (Real) 0.405 0.614 0.522 WHU-CD (Real) 0.222 0.248 0.812 AICD (Synthetic) 0.094 0.267 0.092 SyntheWorld 0.419 0.386 0.457 Table 8. Evaluation of generalizability across multiple building change detection datasets. The table shows the F1 scores. * means to use the part of building change label in SECOND. True positive True negative False positive False negative LEVIR-CD+ SECOND* WHU-CD Pre-event image Post-event image Reference map w/ SyntheWorld w/o SyntheWorld Figure 7. Qualitative results by DTCDSCN model of building change detection task on three datasets. dataset. The results show that by using only the SyntheWorld dataset for training, we can achieve acceptable results on all three datasets. Specifically, compared to the AICD [3] dataset, ours has a significant performance and generalization advantage. 5. Discussion and Societal Impacts We introduced SyntheWorld, the most extensive synthetic remote sensing dataset, used for land cover mapping and building change detection. Its diversity, enhanced by procedural modeling and AIGC, sets it apart from other datasets. Comprehensive experiments validate SyntheWorld\u2019s utility and flexibility. Furthermore, we investigate scenarios where SyntheWorld does not enhance performance, proposing the RDR metric for initial exploration of when SyntheWorld can deliver lift. Notably, SyntheWorld has a significant gap compared to real datasets. This stems from some modeling rules mismatching real-world distributions, a challenge we aim to address in future work. Additional future work involves leveraging SyntheWorld to explore domain adaptation and generalization techniques in remote sensing. Acknowledgements This work was supported in part by JST FOREST Grant Number JPMJFR206S; Microsoft Research Asia; and the GSFS Challenging New Area Doctoral Research Grant (Project No. C2303). 8", "introduction": "High-resolution remote sensing image processing is vi- tal for urban planning, disaster response, and environmen- tal monitoring. Although advances in deep neural networks and the emergence of various benchmark datasets have led to significant progress in these research areas, the unique as- pects of remote sensing image processing tasks still present many challenges. First, acquiring large-scale datasets that compare with those in computer vision and natural language processing is difficult due to the sensitivity, privacy, and commercial con- siderations of remote sensing data. As a result, remote sens- ing datasets tend to be significantly smaller. 
Second, com- pared to fields like computer vision or natural language pro- cessing, remote sensing data annotation is both more costly and time-intensive. For example, annotating a 1024 \u00d7 1024 image from a large land cover mapping dataset such as [48] usually takes more than two hours. Finally, variations in image capture conditions such as sensor type, image acqui- sition season, and geographical location introduce a severe domain shift problem in remote sensing image processing. Synthetic datasets, with their low-cost acquisition, high fidelity, and diversity, present a viable solution to these chal- lenges. In the field of computer vision, numerous high- quality synthetic datasets [4,13,25,32,36,47] have already emerged, primarily serving tasks such as semantic segmen- tation, depth estimation, optical flow estimation, and 3D re- construction of street-view and indoor-view scenario. How- ever, high-quality synthetic datasets for remote sensing are scarce in comparison. The most important reason is, as described in [21], in a virtual world constructed for street- view or indoor-view scenes, the distance between the sen- sor and the target location is relatively small (a few or tens of meters), with the main focus being on pedestrians, ve- hicles, road signs, or various furniture, resulting in a rela- tively small virtual world size. In contrast, in remote sens- ing scenarios, sensors are often located tens of thousands of meters away from the target virtual world, making even a relatively small virtual world extend over several square kilometers, while maintaining a multitude of diverse targets, such as thousands of trees in different poses and hundreds of buildings of different styles. This makes the construction of large-scale synthetic remote sensing datasets exceptionally challenging. Upon a thorough survey of the available synthetic re- mote sensing datasets [3,21,31,39,50,52], we discern that each of them has specific limitations. First, most existing works focus on a single task, such as building segmenta- tion [21, 52] or object detection [39, 50]. However, there is a notable lack of effective synthetic datasets for criti- cal tasks like multi-class land cover mapping and building change detection. Furthermore, these datasets exhibit lim- 1 arXiv:2309.01907v1 [cs.CV] 5 Sep 2023 Rangeland Bareland Developed space Road Tree Water Agriculture land Building Figure 1. Examples of SyntheWorld dataset. ited diversity due to constraints associated with the size of the virtual world and the tools used. They either emulate real-world cities to create a limited number of virtual en- vironments or use real remote sensing images as the back- ground. Furthermore, when it comes to 3D models in the virtual world, existing methodologies consistently rely on predefined textures, layouts, and geometries, resulting in a restrictive range of styles for buildings, trees, and other land objects. In this work, we use the freely available open-source 3D modeling software Blender [7], along with various plugins from the Blender community, GPT-4 [29], and the Stable Diffusion model [34], to develop a procedural modeling system specifically for generating high-resolution remote sensing datasets. We present SyntheWorld, the largest high- resolution remote sensing image dataset for land cover map- ping and building change detection tasks. Fig. 1 displays some examples from the proposed SyntheWorld dataset. 
The main contributions of this work are:
\u2022 We introduce SyntheWorld, the first fully synthetic high-resolution remote sensing dataset, which integrates procedural 3D modeling techniques with Artificial Intelligence Generated Content (AIGC).
\u2022 We use SyntheWorld as the first synthetic dataset specifically designed to improve performance in two crucial tasks: multi-class land cover mapping and building change detection.
\u2022 We propose the Relative Distance Ratio (RDR), a new metric designed to quantify the conditions under which the synthetic dataset can drive performance improvements.
\u2022 Through comprehensive experiments on various remote sensing benchmark datasets, we demonstrate the utility and effectiveness of our dataset." } ], "Bin Hu": [ { "url": "http://arxiv.org/abs/2306.03604v6", "title": "Enabling Intelligent Interactions between an Agent and an LLM: A Reinforcement Learning Approach", "abstract": "Large language models (LLMs) encode a vast amount of world knowledge acquired from massive text datasets. Recent studies have demonstrated that LLMs can assist an embodied agent in solving complex sequential decision making tasks by providing high-level instructions. However, interactions with LLMs can be time-consuming. In many practical scenarios, they require a significant amount of storage space that can only be deployed on remote cloud server nodes. Additionally, using commercial LLMs can be costly since they may charge based on usage frequency. In this paper, we explore how to enable intelligent cost-effective interactions between the agent and an LLM. We find that this problem can be naturally formulated by a Markov decision process (MDP), and propose When2Ask, a reinforcement learning based approach that learns when it is necessary to query LLMs for high-level instructions to accomplish a target task. Experiments on MiniGrid and Habitat environments that entail planning sub-goals demonstrate that When2Ask learns to solve target tasks with only a few necessary interactions with an LLM, and significantly reduces interaction costs in testing environments compared with baseline methods. Experiment results also suggest that by learning a mediator model to interact with the LLM, the agent's performance becomes more robust against partial observability of the environment. Our code is available at https://github.com/ZJLAB-AMMI/LLM4RL.", "authors": "Bin Hu, Chenyang Zhao, Pu Zhang, Zihao Zhou, Yuanhang Yang, Zenglin Xu, Bin Liu", "published": "2023-06-06", "updated": "2024-03-05", "primary_cat": "cs.AI", "cats": [ "cs.AI" ], "main_content": "2.1 The Options Framework We consider sequential decision-making in embodied environments, which is commonly formalized as a Markov decision process (MDP), denoted as M = \u27e8S, A, p, r, \u03b3\u27e9. Here S represents the state space, A represents the action space, p(s\u2032|s, a) denotes the state transition probability function, r(s, a) represents the reward function, and \u03b3 is the discount factor. The objective in such a framework is to learn an optimal policy that maximizes the cumulative return over time $\sum_{t} \gamma^{t} r(s_t, a_t)$, where $t$ denotes the time index.
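To make this objective concrete, the following minimal Python sketch (illustrative only, not taken from the paper's released code; the env/policy interface is an assumption) rolls out a policy in a generic MDP and accumulates the discounted return $\sum_{t} \gamma^{t} r(s_t, a_t)$.

```python
def rollout_return(env, policy, gamma=0.99, max_steps=100):
    """Roll out `policy` in `env` and accumulate the discounted return.

    `env` is assumed to expose reset() -> state and step(action) -> (state, reward, done),
    mirroring the MDP tuple <S, A, p, r, gamma> described above.
    """
    state = env.reset()
    total, discount = 0.0, 1.0
    for _ in range(max_steps):
        action = policy(state)                  # a ~ pi(. | s)
        state, reward, done = env.step(action)  # s' ~ p(. | s, a), r = r(s, a)
        total += discount * reward              # add gamma^t * r_t
        discount *= gamma
        if done:
            break
    return total
```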
The options framework extends the traditional notion of action in an MDP to include options, which are essentially closed-loop policies that encompass a sequence of actions over a period of time Sutton et al. (1999); Precup (2000). Options can range from higher-level tasks such as picking up an object or going to lunch, to more primitive actions like muscle twitches and joint torques. The introduction of options allows for the incorporation of temporally abstract knowledge and action within the RL framework in a natural and general manner, thus providing a flexible and intuitive approach to handle complex tasks with varying levels of granularity. Formally, an option \u03c9 is defined as a 3-tuple $\langle I_\omega, \pi_\omega, \beta_\omega \rangle$, where $I_\omega$ represents the initial state set for this option, $\pi_\omega$ denotes the acting policy for the option, and $\beta_\omega$ represents the termination condition. Given a state s, a policy-over-options would select an option \u03c9 from the set of available options \u2126. The agent would then plan low-level actions by following its current option policy $a \sim \pi(\cdot|s, \omega)$ until the termination condition $\beta_\omega$ is satisfied. In our work, we use pre-defined skills as options and a pre-trained LLM as the policy-over-options to generate high-level options. 2.2 LLM as a Planner Recent research has shown that LLMs have achieved significant success in various tasks within embodied environments Wang et al. (2023b;c); Ahn et al. (2022). Taking inspiration from these works, we employ a pre-trained LLM to act as a planner, generating a sequence of options using descriptions of observations and tasks. The generated plan, represented as a list of options $[\omega_k]_{k=1,\ldots,K}$, is then executed by following the corresponding option policies. Formally, with text descriptions as input prompts, the LLM outputs a plan in the form of a sequence of options. An actor module subsequently generates low-level actions at each time step, following the option policy $\pi(a|s; \omega_k)$. The policies for the actor module, $\pi_\omega$, can either be hard-coded or learned from data. 3 Related work LLMs have emerged as powerful tools for plan generation, and previous research has focused on designing effective interfaces between planners and actors. In Ahn et al. (2022), LLMs are employed to plan the entire sequence of options at the beginning of each task, enabling the agent to complete the task without further interaction with the planner. In Wang et al. (2023c), the authors introduce a feedback system where the agent requests the LLM to generate an updated plan based on environmental feedback when the execution of the previous plan fails. This approach enhances the robustness of the acting agent in the face of environmental uncertainties. However, these methods often rely on hard-coded failure detectors, such as applying a threshold to limit the number of permissible MDP state-transition timesteps for an option. In Ren et al. (2023), a framework is proposed for measuring and aligning the uncertainty of LLM-based planners, allowing them to seek assistance from humans when necessary. In addition, Dasgupta et al. (2023) introduce the Planner-Actor-Reporter framework, which includes a reporter module to enhance information exchange between the actor and the LLM-based planner. In this framework, the agent interacts with the LLM at each timestep, regardless of whether new information is acquired or not. While this approach eliminates the need for hard-coded termination conditions and reduces uncertainties during option execution, it leads to excessive resource consumption, especially when utilizing a large-scale and expensive LLM as the planner. In this paper, we propose learning an interaction policy that enables the local agent to communicate with a remote LLM, and empirically demonstrate that our approach can overcome the limitations of previously mentioned hard-coded rule-based interaction protocols or protocols that entail querying the LLM at each time point.

Figure 2: An overview of the Planner-Actor-Mediator paradigm and an example of the interactions. At each time step, the mediator takes the observation $o_t$ as input and decides whether to ask the LLM planner for new instructions or not. When the asking policy decides to ask, as demonstrated with a red dashed line, the translator converts $o_t$ into text descriptions, and the planner outputs a new plan accordingly for the actor to follow. On the other hand, when the mediator decides not to ask, as demonstrated with a green dashed line, the mediator returns to the actor directly, telling it to continue with the current plan.

4 Our Approach When2Ask We design When2Ask based on the Planner-Actor-Mediator framework Dasgupta et al. (2023). In particular, we enhance this framework by incorporating a mediator model that learns to facilitate intelligent and cost-effective interactions between the agent and the LLM using RL. 4.1 The Planner-Actor-Mediator Framework This framework consists of three components, as illustrated in Fig. 2: the planner, the actor, and the mediator. The planner component is responsible for providing high-level instructions to guide the agent\u2019s actions. The actor component generates low-level actions based on these instructions. Lastly, the mediator acts as an interface between the planner and the actor, facilitating communication and coordination between them. Planner The planner component reads text-based descriptions of the current state and generates a plan for the next high-level option or a sequence of options to perform. In our framework, we utilize a pre-trained LLM as the planner. The LLM receives the descriptions of the current observation and is asked to generate high-level skill instructions for the actor. Whenever the planner is activated, the LLM generates an option plan given the descriptions provided with appropriately designed prompts. Actor The actor component is responsible for planning the low-level actions that align with the instructed option, such as \u201cgo to the red door\u201d or \u201cpick up the yellow key\u201d. In our approach, we consider these option
Following Ahn et al. (2022); Carta et al. (2023), we assume the availability of an expert translator here. In our experiments, the translator is designed with two stages. Firstly, we extract the IDs of objects, such as keys, doors, and boxes, observed within the field of view of the local agent using the built-in interface of the simulation platform. Next, we input this information into our predefined template and output it to the LLM in a fixed format. An example of the format can be seen in the green box of Fig. 4. The translator can also be learned from data Wang et al. (2023c); Dasgupta et al. (2023). 4.2 Learning asking policy with RL Here we introduce our proposed approach to learn an asking policy for use in the mediator component. As mentioned earlier, interacting with the LLM can be costly. Ideally, the asking policy should be trained to enable the agent to request a new plan from the LLM only when it discovers new and informative observations. The expectation is that the LLM will provide a different plan in response to these new observations. To address this, we formulate the problem as an MDP, where the state includes information about the agent\u2019s observation and current option in action. The action space consists of two actions: \u201cAsk\" and \u201cNot Ask\". In this formulation, the LLM planner is considered as part of the environment that can influence state transitions. The reward function encompasses both the task-related return, denoted as r, and an additional penalty term that penalizes unnecessary interactions. Specifically, when the asking policy decides to ask the LLM for a new plan, but the plan provided by the LLM remains the same as the current one, the agent incurs a penalty. This penalty encourages the asking policy to avoid unnecessary interactions and ensures that requesting a new plan is primarily motivated by the discovery of new informative observations. Denote the asking policy as $\pi_{\text{ask}}$ with its parameters represented by $\theta$. We train this policy using standard on-policy RL methods, specifically Proximal Policy Optimization (PPO) Schulman et al. (2017). The objective function for training the asking policy is defined as follows: $$\max_{\theta} \sum_{t=1} \left[ \gamma^{t} r_t - \lambda\, \mathbb{1}\left(y_t == \text{Ask} \,\wedge\, \omega_t == \omega_{t-1}\right) \right],$$ where $y_t \in \{\text{Ask}, \text{Not Ask}\}$ represents the decision made by the asking policy at time step $t$, $r_t$ denotes the task reward obtained at time step $t$, and $\omega_t$ is the planned option provided by the LLM at time step $t$. The penalty factor $\lambda$ is used to balance the importance of avoiding unnecessary interactions. Note that if the decision made by the asking policy is \u201cNot Ask\" ($y_t ==$ Not Ask), we set $\omega_t$ to be the plan from the previous time step ($\omega_t = \omega_{t-1}$). This ensures that if the agent decides not to ask for a new plan, it continues executing the same plan as before. During each iteration, data is collected on-policy using the model $\pi^{\text{ask}}_{\theta}$. 5 Experiments Through experiments, we seek to address the following questions: can our agent effectively minimize interaction costs while maintaining a high target task completion rate, compared with baseline methods? Can our agent proactively seek assistance from an LLM in exploratory environments? Lastly, how does the performance of our agent compare to a baseline RL agent that does not utilize an LLM planner? The experimental results indicate that the answer to all of the aforementioned questions is yes.
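As a concrete illustration of the Planner-Actor-Mediator loop and the penalty-shaped training signal defined in Section 4.2, here is a minimal Python sketch. It is not the released implementation (see the repository linked in the abstract for that); the function names, the way the LLM planner and translator are stubbed out, and the penalty value are assumptions made for illustration.

```python
ASK, NOT_ASK = 1, 0

def mediator_step(obs, current_option, asking_policy, translator, llm_planner,
                  penalty=0.1):
    """One mediator decision: either query the LLM planner or keep the current plan.

    Returns the (possibly unchanged) option and the interaction penalty incurred,
    which is subtracted from the task reward when training the asking policy.
    """
    decision = asking_policy(obs, current_option)     # y_t in {Ask, Not Ask}
    if decision == NOT_ASK:
        return current_option, 0.0                    # omega_t = omega_{t-1}
    prompt = translator(obs)                          # observation -> text description
    new_option = llm_planner(prompt)                  # LLM acts as the policy-over-options
    # Penalize a "useless" query: the LLM returned the same plan the agent already had.
    cost = penalty if new_option == current_option else 0.0
    return new_option, cost

def shaped_reward(task_reward, interaction_cost):
    """Per-step reward for PPO training of the asking policy:
    r_t minus lambda * 1(y_t == Ask and omega_t == omega_{t-1})."""
    return task_reward - interaction_cost
```

In an actual training loop, `asking_policy` would be the PPO-trained network $\pi^{\text{ask}}_{\theta}$, and the actor would keep executing the returned option until the mediator decides to query again.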
As a byproduct, we find that our approach is able to tolerate the imperfection of a crucial component, the translator, in the mediator module, which is used to convert observed images into textual descriptions (refer details to the Appendix). We employ two versions of the Vicuna model (Vicuna-7b and Vicuna-13b) Touvron et al. (2023) as LLM planners in our experiments. 5 5.1 Baselines In our experiments, we considered four baseline interaction methods as follows: Hard-coded The timing and conditions for requesting new instructions from LLMs are manually determined by human experts for each option Wang et al. (2023c). The agent will only request a new plan from the LLM planner when specific termination conditions for the option are met. These conditions involve a goalfinishing detector and a constraint on the maximum number of allowed timesteps. For example, let\u2019s consider the option \u201cgo to the red door.\" The termination condition for this option specifies that the agent should reach the target door location or exceed 100 timesteps spent on this option. We argue that, adopting such hardcoded termination rules, the agent cannot fully utilize newly acquired information during option execution. Additionally, these hard-coded rules may be vulnerable to uncertainties embedded in other components of the framework. Always The agent queries the LLM planner at every timestep, ensuring that any newly acquired information is immediately relayed to the planner Dasgupta et al. (2023). This strategy theoretically leads to better task performance as there is no delay between gathering new information and requesting a re-plan. However, it comes with the drawback of consuming significantly more interaction resources. Random At each timestep, the agent has a fixed probability of 50% to query the LLM for instructions. Never The agent never interacts with the LLM. Instead, the policy-over-options (i.e., the planner) is learned using RL techniques based on data collected during interactions with the environment Sutton et al. (1999); Precup (2000). This means that the agent learns to make decisions and generate plans without actively querying the LLM in real-time decision-making. By comparing this method with other approaches, we can assess the contribution of using an LLM as the planner for embodied sequential decision-making tasks. This comparison helps evaluate the effectiveness and advantages of incorporating a pre-trained LLM into the planning process. 5.2 MiniGrid Experiments The MiniGrid environment Chevalier-Boisvert et al. (2023) consists of a collection of 2D grid-world environments with goal-oriented tasks. In these environments, the agent must navigate within a 2D grid room and interact with specific objects to complete various tasks, such as \u201copen the red door\" or \u201cput the green ball next to the yellow box\". One important characteristic of our MiniGrid environment is that the agent\u2019s view range is limited. This means that the agent needs to explore the environment and gather useful information to plan its actions effectively. The environment returns observations in the form of a full grid, but with unexplored areas occluded, similar to the concept of \u201cfog of war\" in games like StarCraft. Technically, the observation returned by the environment has a shape of o \u2208RW \u00d7H\u00d74, where W and H represent the width and height of the grid, respectively. 
For an unexplored grid at location [w, h], the observation returns the vector [\u22121, \u22121, \u22121, \u22121]. For an explored grid, the corresponding 4D vector contains information about the object ID, color ID, state ID (e.g., closed or locked for a door), and the agent\u2019s direction ID (indicating the agent\u2019s orientation if it is present at this location, or 4 otherwise). This design allows us to focus on the agent\u2019s reasoning ability and exclude potential influences from factors like memorization. Fig. 3 provides an example of the environment setup in the SimpleDoorKey scenario.

Figure 3: An illustrative example of the partial observations and their corresponding text descriptions in environment SimpleDoorKey. The agent is illustrated with a red triangle, and the path it takes is illustrated with red dots. At the start of each episode, the agent is provided with only limited information, with the unexplored area masked (light grey). As the agent progresses in this room, it reveals more information about the room layout for the planner, until it successfully opens the locked door. (Panel descriptions: \u201cobserved nothing\u201d, \u201cobserved green key\u201d, \u201cobserved green key\u201d, \u201cobserved green door, carrying green key\u201d, \u201cFinished!\u201d)

In our experiments, we focus on the task of opening a locked door in five distinct environments: SimpleDoorKey, KeyInBox, RandomBoxKey, ColoredDoorKey, and MovingObstacle. All of these environments are procedurally generated, i.e., the grid layout (including room size, key and door locations) is randomly determined each time the environment is reset. To evaluate generalization, a held-out test set consisting of 100 randomly selected seeds is predefined for each environment. Refer to the Appendix for more information about the environments. We use the Vicuna-7b model for the SimpleDoorKey, KeyInBox, RandomBoxKey, and MovingObstacle environments, while for the more complex ColoredDoorKey environment we use the Vicuna-13b model. To enable interactions between the agent and the LLM planner, prompts must be designed carefully. As demonstrated in previous work Min et al. (2022), language models like LLMs require carefully designed prompts and few-shot demonstrations to generalize to different tasks. In our experiments, we provide task instructions and few-shot examples as in-context prompts for each environment. These prompts serve to ground the task knowledge and guide the LLM\u2019s understanding of the specific task requirements. Furthermore, for the challenging reasoning task in the ColoredDoorKey environment, we utilize Chain-of-Thought prompts proposed by Wei et al. (2022). These prompts help the LLM to deal with complex reasoning tasks specific to the ColoredDoorKey environment.

Figure 4: Example of the prefix prompt and one interaction for the ColoredDoorKey environment. The prefix prompt consists of the task instruction and few-shot examples. In Chain-of-Thought-style prompts, we add inference processes within the examples. Note that these few-shot examples are only provided for grounding task knowledge and constraining the output formats of the LLM. We do not need to exhaustively enumerate all knowledge and rules to construct prompts, as a qualified LLM can do logical reasoning based on a limited number of prompts, then provide proper plans (instructions) that are adaptable to new scenarios encountered in the environment.
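To make the translator-plus-prompt pipeline described above more tangible, here is a hypothetical Python sketch. The object/color/state vocabularies, the template wording, and the prompt layout are illustrative assumptions; the actual integer IDs come from the simulator's built-in interface, and the actual template is the one shown in Fig. 4.

```python
# Hypothetical vocabularies for illustration; real integer IDs are taken from the simulator.
OBJECTS = {4: "door", 5: "key", 7: "box"}
COLORS = {0: "red", 1: "green", 2: "blue", 3: "yellow"}
STATES = {0: "open", 1: "closed", 2: "locked"}

def translate(grid):
    """Stage 1: scan the partially observed W x H x 4 grid (unexplored cells are
    [-1, -1, -1, -1]) and collect the visible objects.
    Stage 2: render them into a fixed-format text description for the LLM planner."""
    seen = []
    for row in grid:
        for obj_id, color_id, state_id, _direction_id in row:
            if obj_id in OBJECTS:
                desc = f"{COLORS.get(color_id, 'unknown')} {OBJECTS[obj_id]}"
                if OBJECTS[obj_id] == "door":
                    desc += f" ({STATES.get(state_id, 'unknown')})"
                seen.append(desc)
    return "observed " + ", ".join(seen) if seen else "observed nothing"

def build_prompt(task_instruction, few_shot_examples, observation_text):
    """Concatenate the prefix prompt (task instruction + few-shot examples)
    with the current observation description, mirroring the layout of Fig. 4."""
    prefix = task_instruction + "\n" + "\n".join(few_shot_examples)
    return prefix + "\nObservation: " + observation_text + "\nNext plan:"
```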
The few-shot examples provided in the prompts are used to anchor the agent\u2019s knowledge about the task, such as understanding that a door can only be unlocked with a key of the same color, and provide constraints on the output formats and guidelines for generating appropriate plans. Note that the LLM planner must reason about the target task using its embedded knowledge and generalization capabilities to adapt to different scenarios with varying objects and colors. Fig. 4 provides an example of the prefix prompts and an interaction example in the ColoredDoorKey environment. It shows how the LLM planner successfully generates a correct plan based on novel observations. 5.2.1 Can our agent complete target tasks with less interaction costs? We compare our proposed approach When2Ask with several baseline methods to evaluate its effectiveness. We analyze the learning curves for both communication costs (Fig. 5) and task performances (Fig. 6) across all five environments. Additionally, we provide asymptotic performances in Table 1. As is shown, our approach successfully reduces the number of interactions with the LLM while maintaining task performance across all environments. This reduction in communication cost indicates that our method effectively learns to minimize non-informative interactions with the LLM. Furthermore, our approach maintains consistently high success rates throughout the learning process. This observation indicates that the asking policy learns to filter out unnecessary interactions while still engaging in essential interactions with the LLM to achieve successful task completion.

Table 1: Asymptotic performance comparison on five MiniGrid environments. The performance metrics include the total number of interactions with the LLM, the number of MDP state-transition timesteps, and the success rate for completing a task. These results show that our approach achieves competitive task performance in terms of success rate while significantly reducing interaction costs (indicated by the number of interactions) compared to Always and Random. Hard-coded requires the fewest LLM interactions but often fails to complete tasks. All results are averaged over 500 test trials (We use 5 training seeds to initialize the policy network, and conduct 100 independent tests per seed).

Environment | Performance metric | Hard-Coded | Always | Random | Our approach
SimpleDoorKey | Number of interactions \u2193 | 1.58 | 25.78 | 12.75 | 4.24
SimpleDoorKey | Number of timesteps \u2193 | 64.9 | 25.78 | 26.55 | 29.20
SimpleDoorKey | Task success rate \u2191 | 59% | 100% | 100% | 100%
KeyInBox | Number of interactions \u2193 | 1.58 | 26.78 | 15.3 | 4.33
KeyInBox | Number of task timesteps \u2193 | 65.49 | 26.78 | 27.46 | 29.01
KeyInBox | Task success rate \u2191 | 59% | 100% | 100% | 100%
RandomBoxKey | Number of interactions \u2193 | 1.93 | 30.26 | 16.09 | 3.61
RandomBoxKey | Number of task timesteps \u2193 | 61.71 | 30.26 | 30.2 | 34.41
RandomBoxKey | Task success rate \u2191 | 56% | 94% | 95% | 95%
ColoredDoorKey | Number of interactions \u2193 | 2.01 | 61.96 | 23.75 | 3.29
ColoredDoorKey | Number of timesteps \u2193 | 75.54 | 61.96 | 44.64 | 47.87
ColoredDoorKey | Task success rate \u2191 | 43% | 49% | 81% | 83%
MovingObstacle | Number of interactions \u2193 | 2.29 | 39.49 | 20.70 | 6.94
MovingObstacle | Number of timesteps \u2193 | 82.36 | 39.49 | 44.90 | 48.63
MovingObstacle | Task success rate \u2191 | 43% | 94% | 93% | 92%

5.2.2 Can our agent proactively seek assistance from an LLM in exploratory environments? Upon analyzing the agent\u2019s performance in situations where it is expected to ask the LLM planner for help, we observe that the baseline method with a hard-coded asking policy exhibited significantly lower success rates compared to other approaches.
This discrepancy occurs because the agent continues executing every option until its termination condition is met, even when it has already gathered sufficient information to complete the task. Consequently, this inefficient approach results in wasted time on each option and ultimately leads to failure in completing the task within the given time limit.

Figure 5: The number of interactions with the LLM vs. the number of RL iterations used for learning the asking policy. It shows that, for every environment, the more thoroughly the asking policy is trained, the fewer interactions with the LLM planner (i.e., the lower the interaction cost) are required to complete the task.

In contrast, our proposed approach, along with other baseline methods, demonstrates the ability to early-stop options when necessary. As a result, they achieve 100 percent success rates in SimpleDoorKey and KeyInBox. In a specific scenario within the ColoredDoorKey environment, as illustrated in Fig. 7a, we see an interesting phenomenon. The agent has chosen to take the Explore option and acquired information about the location of the yellow key (frame 2). With the Hard-coded baseline approach, the agent would continue with the Explore option until it has fully explored the entire room. In contrast, using our proposed approach, the agent can recognize the value of asking the LLM planner for guidance given the current information, and immediately propose asking about the next steps while ceasing further exploration. The LLM would instruct the agent to efficiently pick up the yellow key without wasting additional time. This example highlights the effectiveness of our proposed approach in recognizing when to seek assistance from the LLM planner and making more efficient decisions based on the available information. By leveraging the embedded knowledge of the LLM planner, our approach enables the agent to make informed choices that expedite task completion and improve overall performance.

Figure 6: Success rate of completing target tasks vs. the number of RL iterations used for learning the asking policy.
It demonstrates that our approach consistently maintains a high success rate across all environments, and outperforms baseline methods in ColoredDoorKey. 5.2.3 How does the performance of our agent compare to a baseline RL agent that does not utilize an LLM planner? To assess the importance of the reasoning ability of the LLM in our approach, we conduct an ablation study comparing it with a baseline RL agent that does not use an LLM planner. The RL baseline focuses on learning the planner, specifically policy-over-options, without any interaction with an LLM. The summarized results displayed in Table 2 demonstrate that even in the simplest environment, SimpleDoorKey, the RL baseline faces challenges in completing the task within a fixed number of training iterations. This suggests that learning how to solve these tasks from scratch is difficult for an RL agent. In embodied environments like these, agents must acquire skills such as exploration, reasoning about relationships between objects, and planning optimal actions to accomplish tasks successfully. By incorporating the LLM\u2019s assistance, an agent can leverage the world knowledge embedded in the language model, leading to a significant reduction in the difficulties associated with solving these tasks. Consequently, the outcomes of the ablation study support the notion that the reasoning ability provided by the pre-trained LLM plays a crucial role in achieving higher performance in complex environments. 10 (a) An example scenario where the agent discovers new information during option explore. (b) An example scenario where the hard-coded translator fails to encode all information. Figure 7: Two example scenarios where the agent is expected: (a) to ask the LLM planner for help as it has collected useful information for the planner to adjust its plan; and (b) to not ask the LLM, as the LLM may propose wrong options due to an imperfect translator. Table 2: Performance comparison between our agent and an RL agent that does not use LLM in the SimpleDoorKey environment. Performance metric RL Our approach Average Return \u2191 0.0324 0.7583 Average # of state-transition timesteps \u2193 98.36 30.47 Success rate \u2191 12% 100% 5.3 Habitat Experiments We further evaluate our approach with the Habitat environment Szot et al. (2021). The results indicate the potential of our approach to function effectively in visually realistic domains. The details on the experiment setting are referred to the Appendix. We compare our approach against baselines on the Pick&Place task. To ensure reliability of experimental results, we utilize 10 training seeds to initialize the policy network. This allows us to explore different initializations and avoid biased results. Subsequently, we select the best policy obtained from these training runs to run 250 independent testing trials. As presented in Table 3 and Fig. 8, our approach significantly outperforms baseline methods across all three stages. Particularly, compared to the hard-coded baseline where the preset plan is executed step-by-step, our approach addresses the \u201chand-off problem\" Szot et al. (2021) that can arise when the preceding option terminates at a state that makes it challenging for the succeeding option to initiate. This issue is depicted in Fig. 9, where the robot stops at an unfavorable location at the end of the Navigate option, resulting in a failure to execute the subsequent Pick option. 
Our approach effectively bypasses this problem by incorporating intelligent interactions with the LLM planner, enabling the agent to adapt its actions based on dynamic information provided by the planner. The obtained results demonstrate that the RL-learned asking policy effectively establishes a connection between the world knowledge embedded within the LLMs and the local knowledge embedded within the pre-trained skills. This connection leads to a superior overall performance of our approach compared to the baselines that do not involve any learning. These findings align with the main observations from our experiments in the MiniGrid environments, particularly in the ColoredDoorKey scenario, where the RL-learned asking policy enables the agent to outperform all baselines.

Figure 8: The number of interactions with the LLM (left) and the stage success rates (right) vs. the number of training iterations used for learning the asking policy on the Pick&Place task.

Figure 9: An illustrative example demonstrating the \u201chand-off\u201d problem in Habitat. The robot\u2019s objective is to navigate to the living room and pick up the apple from the table. With the Hard-Coded baseline in use (left), according to preset hard-coded rules, the agent must first complete the Navigate option before executing the Pick option. Consequently, the agent stops at a location where the apple is not visible at the end of Navigate, resulting in its future failure in the Pick option. With our approach (right), in the middle of Navigate, the agent finds itself at a suitable location where the apple can be spotted. The learned mediator interrupts the ongoing Navigate and queries the LLM planner, which returns the Pick option. This helps the agent subsequently pick up the apple successfully. This example demonstrates the effectiveness of our approach in bypassing the \u201chand-off\u201d issue.

6 Conclusions In this paper, we propose a novel RL approach that intelligently manages interactions between local agents and LLMs, taking into account both interaction cost and task completion efficiency. Our approach enables a local agent to learn when to initiate or maintain interaction with the LLM, as well as when to rely on its own learned skills without requiring LLM interaction. It offers a pathway for more efficient and intelligent use of LLMs in diverse real-world applications. We conduct extensive experiments to demonstrate the effectiveness of our approach, yielding two major insights. Firstly, RL can provide an elegant and practical solution to optimize the trade-off between interaction cost and task completion efficiency in the context of LLM-assisted sequential decision making. Secondly, by minimizing unnecessary interactions with the LLM, the agent can achieve more robust performance. This observation aligns with the learning process in humans, where excessive reliance on external forces, such as teachers or experts, can hinder the improvement of task completion abilities, especially in terms of generalizing to new tasks.
Conversely, by utilizing existing abilities to handle new situations or tasks and seeking external assistance only when necessary, individuals can enhance their ability to improve at a faster pace.

Table 3: Success rate of each stage completion and total number of interactions with the LLM planner in Habitat during testing.
Performance metric | Hard-Coded | Random | Our approach
Stage1 success rate \u2191 | 10.8% | 7.6% | 53.6%
Stage2 success rate \u2191 | 2.4% | 1.6% | 46.4%
Stage3 success rate \u2191 | 2.0% | 1.2% | 35.6%
Total # of interactions \u2193 | 1.00 | 295.60 | 7.99

Acknowledgments This work is supported by Exploratory Research Project (No.2022RC0AN02) of Zhejiang Lab.", "introduction": "To empower embodied agents with the capability to effectively handle demanding sequential decision-making tasks, it is essential for them to possess reasoning abilities that enable them to plan for the long-term consequences of their actions Deitke et al. (2022). Reinforcement learning (RL), particularly deep RL, has emerged as a popular paradigm for addressing these challenges. Deep RL involves agents interacting with the environment and learning from feedback to improve their decision-making over time. Despite recent advancements in deep RL, several challenges still remain and limit its applications in real-world scenarios. For instance, solving complex problems using deep RL often requires significant computational resources. Additionally, safety concerns can arise during the learning phase, especially in scenarios where the agent\u2019s exploration might interact with the real world or other sensitive environments Das et al. (2018); Chevalier-Boisvert et al. (2018).

Figure 1: A general framework of using LLMs for solving complex embodied tasks. The LLMs provide high-level instructions based on state descriptions, and the agent generates low-level actions following these instructions and interacts with the target environment to collect further feedback.

As an alternative, the emergence of large language models (LLMs) has shown promise in tackling these issues. Previous studies have demonstrated that LLMs possess reasoning capabilities Radford et al. (2019); Brown et al. (2020); Wei et al. (2022). Researchers have explored leveraging LLMs\u2019 reasoning abilities to solve various embodied tasks, including robot manipulation tasks Ahn et al. (2022); Huang et al. (2022); Jiang et al. (2022) and playing video games Dasgupta et al. (2023); Wang et al. (2023a;c). As depicted in Fig. 1, the embodied agent interacts with the environment, gathering information related to the target task, and utilizes LLMs as explicit reasoners to make high-level plans using natural language instructions, such as instructing a robot to \u201cpick up a can of coke\u201d or \u201cplace an apple on the table\u201d for the next step. While the integration of pre-trained LLMs as explicit planners in embodied agents has demonstrated promising results, enabling efficient interaction between these agents and LLMs to solve real-world problems remains challenging. Frequent queries to LLMs can result in unnecessary resource wastage, including fees (if a commercial LLM is used), communication overhead and reasoning time.
Whereas insufficient queries to LLMs prevent the agent from adjusting its plan according to online feedback from the environment, making it vulnerable to uncertainties in the environment. Determining an appropriate guideline for querying LLMs requires expert knowledge of the target task. Con- sider a scenario where a robot is instructed to collect a can of coke but encounters a locked door on its way to the kitchen. Ideally, the agent should recognize this incident and adjust its plan accordingly by consulting the LLM on how to deal with the locked door. In such cases, timely decision-making regarding when to consult the LLM planner becomes crucial. Failure to interrupt the ongoing action plan and request a new one in time can hinder task completion progress or even lead to safety issues, such as damaging the door or the robot itself. Conversely, frequent requests for plans from the LLM can be time-consuming and costly, particularly when using commercial LLMs deployed on remote cloud server nodes that charge based on usage frequency. In this paper, we propose When2Ask, a general approach that trains the agent to make intelligent cost-effective interactions between itself and an LLM deployed on a remote cloud server. Our objective is to facilitate ef- fective completion of a target task while minimizing communication costs incurred from interactions with the LLM. Specifically, we adopt a Planner-Actor-Mediator framework, similar to Dasgupta et al. (2023), where the planner is a pre-trained LLM used for making plans, the actor contains policies for executing the plans, and the mediator serves as an interface in between by deciding when to request a new plan and generating observation representations for the planner (which are text descriptions). With a focus on optimizing inter- acting timings, we use RL to learn an asking policy that instructs the agent to either adhere to the current plan or request a new plan from the LLM. To summarize, our main contributions are as follows: \u2022 We propose an RL approach termed When2Ask to coordinate the interaction between the agent and the LLM based on the Planner-Actor-Mediator framework Dasgupta et al. (2023). Concretely, we propose to introduce an explicit asking policy in the mediator and train it using an RL approach to determine when to query the LLM planner. 2 \u2022 We conducted a comprehensive evaluation of When2Ask against baseline methods based on simula- tion platforms MiniGrid Chevalier-Boisvert et al. (2023) and Habitat Szot et al. (2021). The results demonstrate that the learned asking policy is able to make intelligent decisions on when to query LLMs, resulting in high success rates with only a few necessary LLM interactions in test tasks. Additionally, we find that our approach can perform robustly against partial observability of the environments." } ], "Wangjun Yuan": [ { "url": "http://arxiv.org/abs/2306.05834v3", "title": "On spectrum of sample covariance matrices from large tensor vectors", "abstract": "In this paper, we investigate the limiting empirical spectral distribution\n(LSD) of sums of independent rank-one $k$-fold tensor products of\n$n$-dimensional vectors as $k,n \\to \\infty$. Assuming that the base vectors are\ncomplex random variables with unit modular, we show that the LSD is the\nMar\\v{c}enko-Pastur law. Comparing with the existing results, our limiting\nsetting allows $k$ to grow much faster than $n$. 
Consequently, we obtain the\nnecessary and sufficient conditions for Mar\u010denko-Pastur law to serve as the\nLSD of our matrix model. Our approach is based on the moment method.", "authors": "Wangjun Yuan", "published": "2023-06-09", "updated": "2024-01-07", "primary_cat": "math.PR", "cats": [ "math.PR" ], "main_content": "In this section, we study the graph combinatorics, which will be used in Section 3. 2.1 Preliminaries In this subsection, we introduce some preliminaries on graph combinatorics, which can be found in [3,7]. For a positive integer $s$, we denote by $[s]$ the set of integers from 1 to $s$. We call $\alpha = (\alpha_1, \ldots, \alpha_p) \in [m]^p$ a sequence of length $p$ with vertices $\alpha_j$ for $1 \le j \le p$. We denote by $|\alpha|$ the number of distinct elements in $\alpha$. If $s = |\alpha|$, then we call $\alpha$ an $s$-sequence. Let $J_{s,p}(m)$ be the set of all $s$-sequences $\alpha \in [m]^p$. Then $$[m]^p = \bigcup_{s=1}^{p} J_{s,p}(m). \quad (2.1)$$ For a sequence $\alpha = (\alpha_1, \ldots, \alpha_p)$ and each value $t$ in $\alpha$, we count its frequency by $$\deg_t(\alpha) = \#\{j \in [p] : \alpha_j = t\}, \quad (2.2)$$ where we use the notation $\#S$ for the number of elements in the set $S$. Two sequences are equivalent if one becomes the other by a suitable permutation on $[m]$. The sequence $\alpha$ is canonical if $\alpha_1 = 1$ and $\alpha_u \le \max\{\alpha_1, \ldots, \alpha_{u-1}\} + 1$ for $u \ge 2$. We denote by $C_{s,p}$ the set of all canonical $s$-sequences of length $p$. From the definition above, one can see that the set of distinct vertices of a canonical $s$-sequence is $[s]$. Denote by $I_{s,m}$ the set of injective maps from $[s]$ to $[m]$. For a canonical $s$-sequence $\alpha$ and a map $\phi \in I_{s,m}$, we call $\phi(\alpha)$ the $s$-sequence $(\phi(\alpha_1), \ldots, \phi(\alpha_p))$. For each canonical $s$-sequence, its image under the maps in $I_{s,m}$ gives all its equivalent sequences, and hence its equivalence class of sequences in $[m]^p$ has exactly $m(m-1)\cdots(m-s+1)$ distinct elements. We fix a canonical $s$-sequence $\alpha = (\alpha_1, \ldots, \alpha_p) \in [m]^p$. For $i = (i^{(1)}, \ldots, i^{(p)}) \in [n]^p$, draw two parallel lines, referred to as the $\alpha$-line and the $i$-line, respectively. Plot $i^{(1)}, \ldots, i^{(p)}$ on the $i$-line and $\alpha_1, \ldots, \alpha_p$ on the $\alpha$-line. Draw $p$ down edges from $\alpha_u$ to $i^{(u)}$ and $p$ up edges from $i^{(u)}$ to $\alpha_{u+1}$ for $1 \le u \le p$, with the convention that $\alpha_1 = \alpha_{p+1}$. We denote the graph by $g(i, \alpha)$ and call such a graph a $\Delta(p; \alpha)$-graph. From the definition, one can easily see that the graph $g(i, \alpha)$ is a connected directed graph in which up edges and down edges appear alternately. An example of the $\Delta(p; \alpha)$-graph is given in (a) of Figure 1. Two graphs $g(i, \alpha)$ and $g(i', \alpha)$ are called equivalent if the two sequences $i$ and $i'$ are equivalent, and we write $g(i, \alpha) \sim g(i', \alpha)$ for this equivalence. For each equivalence class, we choose the canonical graph such that $i = (i^{(1)}, \ldots, i^{(p)}) \in [n]^p$ is a canonical $r$-sequence for some $r \in \mathbb{N}_+$. A canonical $\Delta(p; \alpha)$-graph is denoted by $\Delta(p, r, s; \alpha)$ if it has $r$ noncoincident $i$-vertices and $s$ noncoincident $\alpha$-vertices.
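The following Python sketch (purely illustrative; it is not part of the paper) makes the notions of $\deg_t(\alpha)$, canonical sequences, and the sets $C_{s,p}$ concrete by brute-force enumeration for small $p$.

```python
from itertools import product

def deg(alpha, t):
    """deg_t(alpha): the number of positions j with alpha_j = t, as in (2.2)."""
    return sum(1 for a in alpha if a == t)

def is_canonical(alpha):
    """alpha_1 = 1 and alpha_u <= max(alpha_1, ..., alpha_{u-1}) + 1 for every u >= 2."""
    if alpha[0] != 1:
        return False
    running_max = 1
    for a in alpha[1:]:
        if a > running_max + 1:
            return False
        running_max = max(running_max, a)
    return True

def canonical_sequences(p):
    """All canonical sequences of length p; their vertices necessarily lie in [p]."""
    return [alpha for alpha in product(range(1, p + 1), repeat=p) if is_canonical(alpha)]

def C(s, p):
    """C_{s,p}: canonical s-sequences of length p, i.e., with exactly s distinct vertices."""
    return [alpha for alpha in canonical_sequences(p) if len(set(alpha)) == s]

# Example: C_{2,3} = {(1,1,2), (1,2,1), (1,2,2)} and deg_1((1,2,1)) = 2.
assert C(2, 3) == [(1, 1, 2), (1, 2, 1), (1, 2, 2)]
assert deg((1, 2, 1), 1) == 2
```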
We call a graph a $\Delta_1(p, s; \alpha)$-graph if it is a $\Delta(p; \alpha)$-graph such that each down edge coincides with exactly one up edge and, if we glue each pair of coincident edges and remove the orientation, the resulting graph is a tree with $p$ edges and $p + 1$ vertices. Hence, we have $r + s = p + 1$. We give an example of a $\Delta_1(p, s; \alpha)$-graph in (b) of Figure 1.

Figure 1: (a) $\Delta(p; \alpha)$-graph with $p = 3$, $\alpha = (1, 2, 2)$, $i = (1, 2, 3)$. (b) $\Delta_1(p, s; \alpha)$-graph with $p = 3$, $s = 2$, $\alpha = (1, 2, 2)$, $i = (1, 2, 1)$.

For a given sequence $\alpha \in C_{s,p}$, the following lemma determines the number of sequences $i \in C_{p+1-s,p}$ such that $g(i, \alpha) \in \Delta_1(p, s; \alpha)$. Lemma 2.1. ([7, Lemma 3.1]) For any $1 \le s \le p$ and any sequence $\alpha \in C_{s,p}$, there is at most one sequence $i \in C_{p+1-s,p}$ such that $g(i, \alpha) \in \Delta_1(p, s; \alpha)$. We denote by $C^{(1)}_{s,p}$ the set of such canonical sequences $\alpha$. Then the number of elements in $C^{(1)}_{s,p}$ is $$\frac{1}{p}\binom{p}{s-1}\binom{p}{s}.$$ 2.2 Characterization of $C^{(1)}_{s,p}$ In this subsection, we establish some properties of the sequences in $C^{(1)}_{s,p}$. We start with the following definition. Definition 1. A sequence $\alpha = (\alpha_1, \ldots, \alpha_p)$ is called a crossing sequence if there exist $j_1 < j_2 < j_3 < j_4$ such that $\alpha_{j_1} = \alpha_{j_3} \ne \alpha_{j_2} = \alpha_{j_4}$. A sequence is called a non-crossing sequence if it is not a crossing sequence. The following theorem is a characterization of the set $C^{(1)}_{s,p}$. Theorem 2.1. For any $1 \le s \le p$, the set of all non-crossing sequences $\alpha \in C_{s,p}$ is $C^{(1)}_{s,p}$. The proof of Theorem 2.1 follows directly from the two lemmas below. Lemma 2.2. ([7, Lemma 3.4]) For any $1 \le s \le p$ and any sequence $\alpha \in C_{s,p}$, if $\alpha$ is a crossing sequence, then $\alpha \notin C^{(1)}_{s,p}$. Lemma 2.3. For any $1 \le s \le p$ and any sequence $\alpha \in C_{s,p}$, if $\alpha$ is a non-crossing sequence, then $\alpha \in C^{(1)}_{s,p}$. Proof. (of Lemma 2.3) We prove by induction on $p$. The case $p = 1$ is trivial, since $\alpha = (\alpha_1)$ and $i = (i^{(1)})$, so $g(i, \alpha)$ is the graph with exactly one up edge from $i^{(1)}$ to $\alpha_1$ and one down edge from $\alpha_1$ to $i^{(1)}$. Assume that Lemma 2.3 holds for sequences of length at most $p - 1$, where $p \ge 2$. We need to show that Lemma 2.3 holds for any non-crossing sequence $\alpha = (\alpha_1, \ldots, \alpha_p) \in C_{s,p}$. We consider the following two cases according to whether $\alpha_1$ coincides with other vertices. Case 1. There exists $1 < j < p + 1$ such that $\alpha_j = \alpha_1$. In this case, we split the sequence $\alpha$ into two subsequences $\alpha' = (\alpha_1, \ldots, \alpha_{j-1})$ and $\alpha'' = (\alpha_j, \ldots, \alpha_p)$. The subsequence $\alpha'$ is a canonical $s'$-sequence for some $s' < s$. Moreover, $\alpha'$ is non-crossing and has length $j - 1 \le p - 1$, so by the induction hypothesis, we have $\alpha' \in C^{(1)}_{s',j-1}$, and thus there exists a canonical $(j - s')$-sequence $i'$ of length $j - 1$ such that $g(i', \alpha') \in \Delta_1(j - 1, s'; \alpha')$. The subsequence $\alpha''$ is also non-crossing but not canonical. The non-crossing property allows us to identify $\alpha''$ with a canonical sequence. Note that the vertices of $\alpha'$ take values in $[s']$, so the vertices of $\alpha''$ take values in $\{1\} \cup \{s' + 1, \ldots, s\}$.
We define \u03b2\u2032\u2032 = (\u03b2j, . . . , \u03b2p) by setting \u03b2k = \u03b1k if \u03b1k = \u03b11, and \u03b2k = \u03b1k \u2212s\u2032 + 1 if \u03b1k \u0338= \u03b11. Now \u03b2\u2032\u2032 is canonical non-crossing (s\u2212s\u2032 +1)-sequence of length p\u2212j +1 \u2264p\u22121. Hence, by induction hypothesis, we have \u03b2\u2032\u2032 \u2208C(1) s\u2212s\u2032+1,p\u2212j+1 and there exists a canonical (p \u2212j \u2212s + s\u2032 + 1)sequence i\u2032\u2032 of length p \u2212j + 1, such that g(i\u2032\u2032, \u03b2\u2032\u2032) \u2208\u22061(p \u2212j + 1, s \u2212s\u2032 + 1; \u03b2\u2032\u2032). The sequence i which satisfies g(i, \u03b1) \u2208\u22061(p, s; \u03b1) can be obtain by \u2019gluing\u2019 the two sequence i\u2032 and i\u2032\u2032. We provide the Figure 2 part (a) for the idea of gluing two graphs. More precisely, we define a canonical (p \u2212s + 1)-sequence i = (i(1), . . . , i(p)) by i(k) = i\u2032(k) for 1 \u2264k \u2264j \u22121, and i(k) = i\u2032\u2032(k) + j \u2212s\u2032 for j \u2264k \u2264p. One can easily check that i is a canonical (p \u2212s + 1) sequence, and the subsequence (i(1), . . . , i(j\u22121)) and (i(j), . . . , i(p)) has no common vertex. Thus, the subgraph of g(i, \u03b1) from \u03b11 to \u03b1j and the subgraph from \u03b1j to \u03b1p+1 do not have coincide edge and satisfy the definition of \u22061(p, s; \u03b1)-graph. Therefore, we can conclude that the graph g(i, \u03b1) \u2208\u22061(p, s; \u03b1) so \u03b1 \u2208C(1) s,p. Case 2. For any 1 < j < p+1, \u03b1j \u0338= \u03b11. In this case, we have \u03b11 = 1, \u03b12 = 2. We consider the following two subcases. Case 2(a). If for any 2 < k < p+1, \u03b1k \u0338= \u03b12. We consider the sequence \u03b2 = (\u03b21, . . . , \u03b2p\u22121) given by \u03b21 = \u03b11 and \u03b2k = \u03b1k+1 \u22121. Then \u03b2 is a non-crossing canonical (s \u22121)-sequence of length p \u22121. By induction hypothesis, \u03b2 \u2208C(1) s\u22121,p\u22121, and g(\u02dc i, \u03b2) \u2208\u22061(p \u22121, s \u22121; \u03b2) 7 (a) Combine two \u22061(p, s; \u03b1)-graphs g(i\u2032, \u03b1\u2032) and g(i\u2032\u2032, \u03b1\u2032\u2032). (b) Insert the coincident edges i(1) \u2192\u03b12 and \u03b12 \u2192i(2). Figure 2 for some canonical (p \u2212s + 1)-sequence \u02dc i = (\u02dc i(1), . . . ,\u02dc i(p\u22121)). The sequence i which satisfies g(i, \u03b1) \u2208\u22061(p, s; \u03b1) can be obtained by \u2019inserting\u2019 the vertex of value 1 to the sequence \u02dc i between \u02dc i(1) and \u02dc i(2). The part (b) of Figure 2 is provided for the idea of inserting the coincident edges between i(1) = i(2) and \u03b12. More precisely, we define a canonical (p\u2212s+1)sequence i = (i(1), . . . , i(p)) by i(1) = i(2) = 1 and i(k) = \u02dc i(k\u22121) for 3 \u2264k \u2264p. One can easily check that i is a canonical (p\u2212s+1)-sequence of length p, and there are exactly two edges with vertex \u03b12: an up edge i(1) \u2192\u03b12 and a down edge \u03b12 \u2192i(2). Noting that i(2) = i(1), the up edge and down edge coincide, but they do not coincide with other edges. Thus, we have g(i, \u03b1) \u2208\u22061(p, s; \u03b1), which means that \u03b1 \u2208C(1) s,p. Case 2(b). If there exist 2 < k \u2264p, such that \u03b1k = \u03b12. Then we can split the sequence \u03b1 into two subsequences \u03b1\u2032, \u03b1\u2032\u2032, where \u03b1\u2032 = (\u03b12, . . . , \u03b1k\u22121), and \u03b1\u2032\u2032 = (\u03b11, \u03b1k+1, . . . , \u03b1p). 
One can use the argument of Case 1 to deduce that there are two canonical sequences i1, i2 of length k \u22122 and p \u2212k + 1 respectively, such that the graph g(i1, \u03b1\u2032) and g(i2, \u03b1\u2032\u2032) satisfy the definition of \u22061-graph. We shift the sequence i1 by adding 1 to the value of each vertex, and denote by i\u2032 1 the sequence after shifting. We also shift the sequence i2 by adding |i1| to all vertices that do not have value 1. We write i\u2032 2 for the sequence after shifting. Then one can glue the two sequences i\u2032 1 and i\u2032 2 using the argument in Case 1. More precisely, the canonical sequence i = (i(1), . . . , i(p)) can be defined by i(1) = 1, i(j) = i\u2032(j\u22121) 1 for 2 \u2264j \u2264k \u22121, and i(j) = i\u2032(j\u2212k+1) 2 for k \u2264j \u2264p. The non-crossing of the sequence \u03b1 ensure that the graph g(i, \u03b1) \u2208\u22061(p, s; \u03b1). In the following, we study the graph g(i, \u03b1) for non-crossing sequence \u03b1. We first introduce the conception of paired graph and single graph. Definition 2. Let \u03b1, i be two sequences. The \u2206(p; \u03b1)-graph g(i, \u03b1) is called a paired graph if for any two vertices, between which the number of up edges equals to the number of down edges. The graph g(i, \u03b1) is called a single graph if there exist two vertices, such that difference of the number of up edges and down edges between the two vertices is exactly one. 8 Remark 2.1. For a \u2206(p; \u03b1)-graph g(i, \u03b1), if one reduces the graph by removing an up edges with one of the coincident down edges at the same time (but keep the vertices), then a paired graph is the graph which can be reduced to a graph without edges, while a single graph is the graph that can be reduced to a graph with at least one single edge. Remark 2.2. 1. A \u22061(p, s; \u03b1)-graph is always a paired graph. Figure 3 provides two examples of paired graphs that are not \u22061(p, s; \u03b1)-graph. 2. Single graphs exist for any sequence \u03b1. Figure 1 (a) is an example of single graph. Indeed, one only need to choose i to have distinct vertices. 3. There are \u2206(p; \u03b1)-graphs which is neither a paired graph nor a single graph. See for example Figure 4 where the multiple edges in g(i, \u03b1) have the same orientation. (a) paired graph with p = 2, \u03b1 = i = (1, 1). (b) paired graph with p = 4, \u03b1 = (1, 2, 1, 2), i = (1, 2, 2, 1). Figure 3 Figure 4: g(i, \u03b1) with p = 4 and \u03b1 = i = (1, 2, 1, 2). Next, we establish the following proposition for the \u2206(p; \u03b1)-graph for non-crossing sequence \u03b1. Proposition 2.1. For any 1 \u2264s \u2264p, and \u03b1 \u2208C(1) s,p, for any canonical sequence i of length p, the graph g(i, \u03b1) is either a paired graph or a single graph. 9 In order to prove Proposition 2.1, we need to introduce the conception of consecutive down (resp. up) edges. Let \u03b1, i be two canonical sequences. For any two coincide down edges \u03b1j1 \u2192i(j1) and \u03b1j2 \u2192i(j2) with some 1 \u2264j1 < j2 \u2264p, if all up edges {i(j) \u2192\u03b1j+1 : j1 \u2264j < j2} between the two down edges do not coincide with them (without considering the orientation), then we call the two down edges \u03b1j1 \u2192i(j1) and \u03b1j2 \u2192i(j2) are consecutive down edges with distance j2 \u2212j1. 
Similarly, for any two coincide up edges i(j1) \u2192\u03b1j1+1 and i(j2) \u2192\u03b1j2+1 with 1 \u2264j1 < j2 \u2264p, if down edge \u03b1j \u2192i(j) does not coincide with them for any j1 + 1 \u2264j \u2264j2, then we call the two up edges consecutive up edges with distance j2 \u2212j1. In the graph given by Figure 4, the two coincident down edges \u03b11 \u2192i(1) and \u03b13 \u2192i(3) are consecutive down edges with distance 2, while the two coincident up edges i(1) \u2192\u03b12 and i(3) \u2192\u03b14 are consecutive up edges with distance 2. Proof. (of Proposition 2.1) We prove by contradiction. We fix the sequence \u03b1 \u2208C(1) s,p. Assume that there exists a canonical sequence i = (i(1), . . . , i(p)), such that the graph g(i, \u03b1) is neither a paired graph nor a single graph. By definition, there exist two vertices, such that the numbers of the up and down edges between the two vertices are different by at lease two. Thus, there are consecutive up edges or consecutive down edges. We choose the pair of consecutive edges with the smallest distance and consider the case that they are up edges and down edges separately. If there are more than one pair of consecutive edges with the smallest distance, we can choose any one of them. Case 1. The pair of consecutive edges with smallest distance are down edges \u03b1j1 \u2192i(j1) and \u03b1j2 \u2192i(j2) with some 1 \u2264j1 < j2 \u2264p. We restrict out attention to the path P : \u03b1j1 \u2192i(j1) \u2192\u03b1j1+1 \u2192. . . \u2192i(j2\u22121) \u2192\u03b1j2. For all vertices that coincides with i(j1), we denote by A the collection of their neighbourhoods among the collection {\u03b1j : j1+1 \u2264j \u2264j2\u22121} and E the corresponding collection of edges. We denote by B the collection {\u03b11, . . . , \u03b1j1, \u03b1j2, . . . , \u03b1p+1}. We keep the multiplicity for coincide vertices (resp. edges) for A (resp. E). Note that vertices in A do not coincide with \u03b1j1 by the definition of consecutive down edges, and do not coincide with any vertex in B since \u03b1 is non-crossing. Thus, A and B are disjoint. Since the vertex i(j1) is not the endpoint of the path P, the numbers of the up edges and down edges within P associated with i(j1) are the same. Noting that on the path P, the edge \u03b1j1 \u2192i(j1) is the only edge associated with i(j1) that are not in E, so the number of edges in E is odd. Hence, there exist the coincide edges in E consist of different number of up edges and down edges. If the difference of up and down coincident edges is exactly one, then the graph is a single graph, which is a contradiction. If the up and down coincident edges differ by at least two, then there is another pair of consecutive edges. This also leads to a contradiction since the consecutive down edges \u03b1j1 \u2192i(j1) and \u03b1j2 \u2192i(j2) should have the smallest distance. Case 2. The pair of consecutive edges with smallest distance are up edges i(j1) \u2192\u03b1j1+1 and i(j2) \u2192\u03b1j2+1 with 1 \u2264j1 < j2 \u2264p. The argument is similar to the Case 1, and is sketched below. We consider the path P \u2032 : \u03b1j1+1 \u2192. . . \u2192\u03b1j2 \u2192i(j2) \u2192\u03b1j2+1. For all vertices that coincides with i(j2), we denote by A\u2032 the collection of their neighbourhoods among {\u03b1j : j1 + 2 \u2264j \u2264j2} and E\u2032 the 10 corresponding collection of edges. We also denote B\u2032 = {\u03b11, . . . , \u03b1j1+1, \u03b1j2+1, . . . , \u03b1p+1}. 
Then by the definition of consecutive up edges and the fact that \u03b1 is non-crossing, one can deduce that A\u2032 and B\u2032 are disjoint. Besides, by analyzing the neighbourhood of i(j2) in the path P \u2032, one can deduce that the number of edges in E\u2032 is odd. This contradicts to either the condition that g(i, \u03b1) is not a single graph or the assumption that consecutive edges have distance at least j2 \u2212j1. 2.3 Paired graph In this subsection, we study the paired graphs, which contribute to the moments in Section 3. For graphs that are not single graph, we have the following proposition for the number of vertices. Proposition 2.2. For any 1 \u2264r, s \u2264p, for any \u03b1 \u2208Cs,p and i \u2208Cr,p, we have the following statements: 1. If g(i, \u03b1) is a paired graph, then r+s \u2264p+1. The equality holds if and only if g(i, \u03b1) is a \u22061(p, s; \u03b1)-graph. 2. If g(i, \u03b1) is neither a paired graph nor a single graph, then r + s \u2264p. Proof. If g(i, \u03b1) is not a single graph, then all edges must coincide with at least one other edges. If we remove the orientation and glue all the coincide edges, it results in nondirected connected graph with at most p edges and exactly r + s vertices, which implies that r + s \u2264p + 1. The equality holds if and only if the resulting graph is a tree with exactly p edges. In this case, all edges in the graph g(i, \u03b1) must coincide with exactly one other edge. If there are two coincident edges that have the same orientation, then directed graph g(i, \u03b1) is disconnected, which is a contradiction. Thus, the equality only happens when the graph g(i, \u03b1) is a \u22061(p, s; \u03b1)-graph. Next, we introduce the Stirling number of the second kind with the notation S(n, k), which is defined as the number of ways to partition a set of n objects into k non-empty subsets. For non-crossing sequence \u03b1, the following proposition counts the number of paired graph associate to \u03b1. Proposition 2.3. For any 1 \u2264s \u2264p and any sequence \u03b1 \u2208C(1) s,p, the number of sequence i \u2208Cr,p such that g(i, \u03b1) is a paired graph is S(p + 1 \u2212s, r) if r \u2264p + 1 \u2212s, and is 0 if r > p + 1 \u2212s. Proof. The case r > p + 1 \u2212s is straightforward from Proposition 2.2. In the following, we only consider the case r \u2264p + 1 \u2212s. We fix a sequence \u03b1 \u2208C(1) s,p. Firstly, we will show that any paired graph g(i, \u03b1) can be transferred to a \u22061(p, s; \u03b1)graph by splitting the vertices in the sequence i. By definition, one can easily see that paired graphs may have more than one pair of up and down edges between two vertices, and may have cycles if the multiple edges are glued and orientation are removed. Thus, 11 our strategy is to remove the multiple pairs of up and down edges in the first step, and then remove the cycles in the second step. Step 1. For any two vertices v1, v2 in the paired graph g(i, \u03b1), we denote by mv1,v2 the number of the up edges between v1 and v2. We define K(g(i, \u03b1)) = X v1,v2 (mv1,v2 \u22121) where the sum P v1,v2 is over all pairs of vertices (v1, v2) that are neighbourhood in g(i, \u03b1). One can easily check by definition that K(g(i, \u03b1)) = 0 if and only if every edge in g(i, \u03b1) coincides with exactly one edge, and the two coincident edges have different orientation. For the case K(g(i, \u03b1)) = 1, there exists two vertices, between which there are two up edges and two down edges. 
In the following, we will split the corresponding i-vertex can into two vertices and resulting in a new i-sequence i\u2032, such that the the graph g(i\u2032, \u03b1) is a paired graph without coincident edges of the same orientation. The argument is similar to [7, Lemma 3.3], and is sketched below in two cases. Case 1. If we scan the edges from \u03b11 to \u03b1p+1, the first appearance of the four coincident edges is an down edges. In this case, the coincident edges are the jth down edge \u03b1j \u2192i(j), the lth down edge \u03b1l \u2192i(l), the j\u2032th up edge i(j\u2032) \u2192\u03b1j\u2032+1 and the l\u2032th up edge i(l\u2032) \u2192\u03b1l\u2032+1 for some j < j\u2032 + 1 \u2264l < l\u2032 + 1. We split the vertex i(l) into two vertices i(l,1) and i(l,2). The edges from \u03b11 \u2192i(1) to i(l\u22121) \u2192\u03b1l that connects i(l) are plotted to connect i(l,1), while the edges from \u03b1l \u2192i(l) to i(p) \u2192\u03b1p+1 that connects i(l) are plotted to connect i(l,2). See Figure 5 below. Figure 5: Case 1-Split an i vertex to cancel multiple pairs of edges. Case 2. If we scan the edges starting from \u03b11 \u2192i(1), the first appearance of the four coincident edges is an up edges. In this case, the coincident edges are the jth up edge i(j) \u2192\u03b1j+1, the lth up edge i(l) \u2192\u03b1l+1, the j\u2032th down edge \u03b1j\u2032 \u2192i(j\u2032) and the l\u2032th down edge \u03b1l\u2032 \u2192i(l\u2032) for some j < j\u2032 \u2264l < l\u2032. We split the vertex i(l) into two vertices i(l,1) and i(l,2). The edges from \u03b1j\u2032 \u2192i(j\u2032) to i(l) \u2192\u03b1l+1 that connects i(l) are plotted to connect i(l,2), while the rest of the edges that connects i(l) are plotted to connect i(l,1). See Figure 6 below. 12 Figure 6: Case 2-Split an i vertex to cancel multiple pairs of edges. In both cases, one could check that after splitting the vertex i(l), the graph is still connected, and is paired graph. Moreover, the number of edges between any two vertices is either 0 or 2, which implies that K(g(i\u2032, \u03b1)) = 0. One can use induction to show that there exists a sequence i\u2032, such that K(g(i\u2032, \u03b1)) = 0 and the paired graph g(i\u2032, \u03b1) can be obtained from g(i, \u03b1) by splitting some of the vertices in i. Indeed, by scanning all edges starting from \u03b11 \u2192i(1), we can find the first coincident directed edges. Then we can apply the argument above to split the i-vertex associated to the coincident directed edges into two i-vertices. We denote the resulting i-sequence by \u02dc i\u2032. Then we have K(g(\u02dc i\u2032, \u03b1)) = K(g(i, \u03b1)) \u22121. By the induction hypothesis, we can find a sequence i\u2032 by splitting \u02dc i\u2032, such that K(g(i\u2032, \u03b1)) = 0. Moreover, the i-sequence i\u2032 can also be obtained by splitting the sequence i. Step 2. Let g(i\u2032, \u03b1) be a paired graph such that K(g(i\u2032, \u03b1)) = 0. Then every up edge coincides with exactly one down edge. Denote by C(g(i\u2032, \u03b1)) the number of cycles when gluing all the pairs of coincident up edge and down edge and removing the orientation. By definition, C(g(i\u2032, \u03b1)) = 0 if and only if g(i\u2032, \u03b1) is a \u22061(p, s; \u03b1)-graph. If C(g(i\u2032, \u03b1)) = 1, then there is exactly one cycle when gluing the pair of coincident up edge and down edge. 
In the following, we will split one vertex in i\u2032 and denote by i\u2032\u2032 the new i-sequence, such that g(i\u2032\u2032, \u03b1) is still a paired graph without any cycle when gluing all pairs of coincident up edge and down edge, and g(i\u2032\u2032, \u03b1) does not have coincident edges of the same orientation. That is, g(i\u2032\u2032, \u03b1) is a \u22061(p, s; \u03b1)-graph. The argument is similar to [7, Lemma 3.6], and is sketched below. We scan the edges starting from \u03b11 \u2192i(1), and find the first edge that results in a cycle without considering orientation and removing the multiplicity of the edges. Using the non-crossing property of \u03b1, one can show that this edge must be a down edge. We denote the down edge by \u03b1j \u2192i(j). We split the vertex i(j) into two vertices i(j,1) and i(j,2). The edges in the path \u03b11 \u2192i(1) \u2192. . . \u2192\u03b1j that connects i(j) are plotted to connect i(j,1), while the edges in the path \u03b1j \u2192i(j) \u2192. . . \u2192i(p) \u2192\u03b1p+1 that connects i(j) are plotted to connect i(j,2). See Figure 7. Once could check that after splitting the vertex i(j), there is no multiple up edge or multiple down edge. Besides, there is no cycle without considering 13 the orientation and gluing pairs of coincident up edge and down edge. Hence, if we denote by i\u2032\u2032 the new i-sequence, then K(g(i\u2032, \u03b1)) = 0 = C(g(i\u2032, \u03b1)). Figure 7: Split an i vertex to cancel cycle. One can use induction to show that there exists a sequence i\u2032\u2032, such that C(g(i\u2032\u2032, \u03b1)) = K(g(i\u2032\u2032, \u03b1)) = 0 and the paired graph g(i\u2032\u2032, \u03b1) can be obtained from g(i\u2032, \u03b1) by splitting some of the vertices in i\u2032. Indeed, we can scan all edges from \u03b11 \u2192i\u2032(1) and find the first edge which forms a cycle when gluing coincident edges and removing orientation. Then we can use the argument above to split one of the i-vertex in the cycle into two i-vertices. We denote the resulting i-sequence by \u02dc i\u2032\u2032. The splitting procedure will not lead to coincident up edges nor coincident down edges, nor new cycle when gluing all pairs of coincident edges. Thus, we have C(g(\u02dc i\u2032\u2032, \u03b1)) \u2264C(g(i\u2032, \u03b1)) \u22121 and K(g(\u02dc i\u2032\u2032, \u03b1)) = 0. Then by induction hypothesis, we can split vertices in \u02dc i\u2032\u2032 to obtain i\u2032\u2032, such that g(i\u2032\u2032, \u03b1) is paired graph and C(g(i\u2032\u2032, \u03b1)) = K(g(i\u2032\u2032, \u03b1)) = 0. Moreover, the i-sequence i\u2032\u2032 can also be obtained by splitting vertices in the sequence i\u2032. Therefore, joining the two steps above, for any paired graph g(i, \u03b1), we can split vertices on i to obtain i\u2032\u2032, such that g(i\u2032\u2032, \u03b1) is a \u22061(p, s; \u03b1)-graph. Secondly, we will establish a bijective map from the set of all partitions of [p + 1 \u2212s] to the set of the canonical r-sequence that form a paired graph with \u03b1. By Lemma 2.1, there exists a unique canonical (p + 1 \u2212s)-sequence i = (i(1), . . . , i(p)), such that g(i, \u03b1) is a \u22061(p, s; \u03b1)-graph. Let P(p + 1 \u2212s) be the set of all partitions of [p + 1 \u2212s], and P(p + 1 \u2212s, q) be the set of all partitions of [p + 1 \u2212s] with q blocks. For a partition \u03c0 \u2208P(p + 1 \u2212s, q) with blocks V1, V2, . . . , Vq, without loss of generality, we assume that min{a : a \u2208V1} < . . . < min{a : a \u2208Vq}. 
We identify the partition \u03c0 with the mapping \u03c0 : [p + 1 \u2212s] \u2192[q] given by \u03c0(a) = b if a \u2208Vb. We abuse the notation for partition and the corresponding mapping. By the definition, it is easy to see that \u03c0 maps a canonical sequence to a canonical sequence. For fixed \u03b1 \u2208C(1) s,p, let P\u2032(\u03b1) be the set of all canonical sequences i\u2032 such that g(i\u2032, \u03b1) are paired graphs. We write P\u2032(\u03b1, r) for the set of all canonical r-sequences i\u2032 in P\u2032(\u03b1). We consider 14 the following mapping: \u03a6 : P(p + 1 \u2212s) \u2212 \u2192 P\u2032(\u03b1) \u03c0 \u2212 \u2192 \u03c0(i), where \u03c0(i) = (\u03c0(i(1)), . . . , \u03c0(i(p))). Note that for any \u03c0 \u2208P(p + 1 \u2212s), g(\u03c0(i), \u03b1) can be obtained from g(i, \u03b1) by gluing the i-vertices according to the partition \u03c0. Since g(i, \u03b1) is a paired graph, so is g(\u03c0(i), \u03b1), which implies that \u03a6 is well-defined. As we have proved in the first part that any paired graph can be transferred to a \u22061(p, s; \u03b1)-graph by appropriately splitting the vertices in the i-sequence, we can conclude that \u03a6 is surjective. Moreover, for two different partitions \u03c01, \u03c02 \u2208P(p + 1 \u2212s), the two canonical sequence \u03c01(i) and \u03c02(i) are different. Thus, \u03a6 is injective. Therefore, \u03a6 is bijective. To conclude, we consider the following restriction of \u03a6: \u03a6|r : P(p + 1 \u2212s, r) \u2212 \u2192 P\u2032(\u03b1, r). Since the bijectivity of \u03a6|r inherites from \u03a6, by the definition of Stirling number of the second kind, we have #P\u2032(\u03b1, r) = #P(p + 1 \u2212s, r) = S(p + 1 \u2212s, r). We end this subsection by collecting some properties of the Stirling number of the second kind. We refer the readers to [8] for more details. Lemma 2.4. 1. We have S(n, n) = S(n, 1) = 1 for n \u22651. For 1 \u2264k \u2264n, we have S(n, k) = k X i=1 (\u22121)k\u2212iin i!(k \u2212i)! . 2. For positive integers n \u2265k > 1, we have S(n + 1, k) = S(n, k \u22121) + kS(n, k). 3. For positive integers n, we have n X k=1 S(n, k) \u00b7 x(x \u22121) . . . (x \u2212k + 1) = xn. 15 3 Convergence of spectral moments 3.1 Proof of Theorem 1.1 We compute the moment 1 nk E \u0002 TrM p n,k,m \u0003 . for any p \u2208N+. By convention, \u03b1p+1 = \u03b11. We have 1 nk E \u0002 TrM p n,k,m \u0003 = 1 nk m X \u03b11,...,\u03b1p=1 p Y t=1 \u03c4\u03b1t ! E h Tr \u0000Y \u2217 \u03b11Y\u03b12Y \u2217 \u03b12 \u00b7 \u00b7 \u00b7 Y\u03b1pY \u2217 \u03b1pY\u03b1p+1 \u0001i = 1 nk m X \u03b11,...,\u03b1p=1 p Y t=1 \u03c4\u03b1t ! E \" k Y l=1 Tr \u0010\u0000y(l) \u03b11 \u0001\u2217y(l) \u03b12 \u0000y(l) \u03b11 \u0001\u2217. . . y(l) \u03b1p+1 \u0011 # = 1 nk m X \u03b11,...,\u03b1p=1 p Y t=1 \u03c4\u03b1t ! E \" Tr \u0010\u0000y(1) \u03b11 \u0001\u2217y(1) \u03b12 \u0000y(1) \u03b11 \u0001\u2217. . . y(1) \u03b1p+1 \u0011 #!k = 1 nk m X \u03b11,...,\u03b1p=1 p Y t=1 \u03c4\u03b1t ! \uf8eb \uf8edE \uf8ee \uf8f0 n X i(1),...,i(p)=1 p Y t=1 \u0012\u0000y(1) \u03b1t \u0001 i(t) \u0000y(1) \u03b1t+1 \u0001 i(t) \u0013\uf8f9 \uf8fb \uf8f6 \uf8f8 k , (3.1) where we used the i.i.d. setting in the third equality. For two sequences \u03b1 = (\u03b11, . . . , \u03b1p) \u2208[m]p and i = (i(1), . . . , i(p)) \u2208[n]p, let E(i, \u03b1) = E \" p Y t=1 \u0012\u0000y(1) \u03b1t \u0001 i(t) \u0000y(1) \u03b1t+1 \u0001 i(t) \u0013# . (3.2) By the i.i.d. 
setting, E(i, \u03b1) = E(i\u2032, \u03b1\u2032) if the two sequences i and \u03b1 are equivalent to i\u2032 and \u03b1\u2032, respectively. By (3.1) and (2.1), we have 1 nk E \u0002 TrM p n,k,m \u0003 = 1 nk p X s=1 X \u03b1\u2208Js,p(m) p Y t=1 \u03c4\u03b1t ! \uf8eb \uf8ed p X r=1 X i\u2208Jr,p(n) E(i, \u03b1) \uf8f6 \uf8f8 k = 1 nk p X s=1 X \u03b1\u2208Cs,p \uf8eb \uf8edX \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \uf8f6 \uf8f8 \uf8eb \uf8ed p X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) X i\u2208Cr,p E(i, \u03b1) \uf8f6 \uf8f8 k :=I1 + I2, (3.3) where I1 = 1 nk p X s=1 X \u03b1\u2208C(1) s,p \uf8eb \uf8edX \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \uf8f6 \uf8f8 \uf8eb \uf8ed p X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) X i\u2208Cr,p E(i, \u03b1) \uf8f6 \uf8f8 k , 16 I2 = 1 nk p X s=1 X \u03b1\u2208Cs,p\\C(1) s,p \uf8eb \uf8edX \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \uf8f6 \uf8f8 \uf8eb \uf8ed p X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) X i\u2208Cr,p E(i, \u03b1) \uf8f6 \uf8f8 k . Note that the component of the base vector satisfies \u0010 y(l) \u03b2 \u0011 i \u0010 y(l) \u03b2 \u0011 i = 1 n, \f \f \fE h\u0010\u0010 y(l) \u03b2 \u0011 i \u0011pi\f \f \f \u2264 1 np/2. (3.4) Recall the definition of paired graph and single graph in Definition 2. For any sequence \u03b1 \u2208Cs,p and i \u2208Cr,p, we have \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 E(i, \u03b1) = n\u2212p, g(i, \u03b1) is a paired graph, E(i, \u03b1) = 0, g(i, \u03b1) is a single graph, |E(i, \u03b1)| \u2264n\u2212p, g(i, \u03b1) otherwise. (3.5) Firstly, we deal with I1. For any \u03b1 \u2208C(1) s,p, by Proposition 2.1, formula (3.5) and Proposition 2.3, we obtain X i\u2208Cr,p E(i, \u03b1) =n\u2212p \u00b7 #{i \u2208Cr,p : g(i, \u03b1) is paired graph} = ( n\u2212p \u00b7 S(p + 1 \u2212s, r), r \u2264p + 1 \u2212s, 0, r > p + 1 \u2212s, where we use the notation #S for the number of elements in the set S. Thus, it follows from Lemma 2.4 that p X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) X i\u2208Cr,p E(i, \u03b1) = n\u2212p p+1\u2212s X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) \u00b7 S(p + 1 \u2212s, r) = n1\u2212s. Hence, I1 = p X s=1 \u0010 m nk \u0011s X \u03b1\u2208C(1) s,p \uf8eb \uf8ed1 ms X \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \uf8f6 \uf8f8. (3.6) Next, we deal with I2. For any \u03b1 \u2208Cs,p \\C(1) s,p, by Lemma 2.1 and Proposition 2.2, if the graph g(i, \u03b1) is not a single graph for i \u2208Cr,p, then r +s \u2264p. Hence, by (3.5), we establish \f \f \f \f \f \f p X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) X i\u2208Cr,p E(i, \u03b1) \f \f \f \f \f \f \u2264 p\u2212s X r=1 n\u2212p+r \u00b7 #{i \u2208Cr,p : g(i, \u03b1) is not a single graph}. 17 Thus, we have |I2| \u22641 nk p X s=1 X \u03b1\u2208Cs,p\\C(1) s,p \f \f \f \f \f \f X \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \f \f \f \f \f \f \f \f \f \f \f \f p X r=1 n \u00b7 \u00b7 \u00b7 (n \u2212r + 1) X i\u2208Cr,p E(i, \u03b1) \f \f \f \f \f \f k = p X s=1 \u0010 m nk \u0011s X \u03b1\u2208Cs,p\\C(1) s,p \f \f \f \f \f \f 1 ms X \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \f \f \f \f \f \f \u00d7 \f \f \f \f \f p\u2212s X r=1 n\u2212p+r+s\u22121 \u00b7 #{i \u2208Cr,p : g(i, \u03b1) is not a single graph} \f \f \f \f \f k . (3.7) Note that the assumption (1.4) implies the following convergence: 1 ms X \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t) \u2192 s Y t=1 m(\u03c4) degt(\u03b1), m \u2192\u221e. 
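The Stirling-number identity from Lemma 2.4 that was used above in the evaluation of I1, namely that the sum over r of n(n−1)···(n−r+1)·S(p+1−s, r) equals n^{p+1−s}, is easy to verify numerically. The snippet below is only a sanity-check aid (the helper names are ours); it computes S(n, k) from the recurrence S(n+1, k) = S(n, k−1) + kS(n, k) of Lemma 2.4 and checks item 3 of that lemma on small cases.

```python
from math import prod

def stirling2(n, k):
    """Stirling number of the second kind via S(n+1,k) = S(n,k-1) + k*S(n,k)."""
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for a in range(1, n + 1):
        for b in range(1, min(a, k) + 1):
            S[a][b] = S[a - 1][b - 1] + b * S[a - 1][b]
    return S[n][k]

def falling(x, r):
    """Falling factorial x(x-1)...(x-r+1)."""
    return prod(x - j for j in range(r))

# Check sum_{r=1}^{q} x(x-1)...(x-r+1) * S(q, r) = x^q  (Lemma 2.4, item 3)
for q in range(1, 7):
    for x in range(1, 6):
        assert sum(falling(x, r) * stirling2(q, r) for r in range(1, q + 1)) == x ** q
print("Lemma 2.4, item 3, verified on small cases")
```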
Hence, under the limiting setting (1.3), when n, k \u2192\u221e, we can deduce from (3.6) and (3.7) that I1 \u2192 p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) ! , I2 \u21920. (3.8) Therefore, the proof is concluded by taking limit n, k \u2192\u221ein (3.3) and using (3.8). 3.2 Proof of Theorem 1.2 For any p \u2208N, for k \u22652, we compute the variance Var \u0012 1 nk TrM p n,k,m \u0013 . The idea is similar to [7], and is sketched below. By the computation of [7, Section 3.2], we have Var \u0012 1 nk TrM p n,k,m \u0013 = 1 n2k X \u03b1,\u03b2\u2208[m]p \u03b1\u2229\u03b2\u0338=\u2205 p Y t=1 \u03c4\u03b1t\u03c4\u03b2t ! \u00d7 \uf8ee \uf8ef \uf8f0 \uf8eb \uf8edX i,j\u2208[n]p E\u2032(i, \u03b1; j, \u03b2) \uf8f6 \uf8f8 k \u2212 \uf8eb \uf8edX i,j\u2208[n]p E(i, \u03b1)E(j, \u03b2) \uf8f6 \uf8f8 k\uf8f9 \uf8fa \uf8fb, 18 where E(\u00b7, \u00b7) is given in (3.5), and E\u2032(i, \u03b1; j, \u03b2) is defined by E\u2032(i, \u03b1; j, \u03b2) = E \" p Y t=1 \u0012\u0000y(1) \u03b1t \u0001 i(t) \u0000y(1) \u03b1t+1 \u0001 i(t) \u0000y(1) \u03b2t \u0001 j(t) \u0000y(1) \u03b2t+1 \u0001 j(t) \u0013# . Next, we join the two graphs g(i, \u03b1) and g(j, \u03b2) together and keep the coincident edges. We denote by g(i, \u03b1) \u222ag(j, \u03b2) the resulting graph. If there is an edge in the graph g(i, \u03b1) \u222a g(j, \u03b2) that does not coincide with any other edges, then this edge must belong to g(i, \u03b1) or g(j, \u03b2), which implies E\u2032(i, \u03b1; j, \u03b2) = E(i, \u03b1)E(j, \u03b2) = 0. Thus, we only need to consider the indices such that all edges in g(i, \u03b1) \u222ag(j, \u03b2) coincide with other edges. Noting that \u03b1 \u2229\u03b2 \u0338= \u2205, the graph g(i, \u03b1) \u222ag(j, \u03b2) is connected with 4p edges. Hence, if we remove orientation and glue coincident edges for the graph g(i, \u03b1) \u222a g(j, \u03b2), it results in a non-directed connected graph with at most 2p edges, which implies that |(\u03b1, \u03b2)| + |(i, j)| \u22642p + 1. Hence, we have Var \u0012 1 nk TrM p n,k,m \u0013 = 1 n2k 2p X s=1 X (\u03b1,\u03b2)\u2208Cs,2p \u03b1\u2229\u03b2\u0338=\u2205 \uf8eb \uf8edX \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t)\u03c4\u03c6(\u03b2t) \uf8f6 \uf8f8 \u00d7 \uf8ee \uf8ef \uf8f0 \uf8eb \uf8ed 2p+1\u2212s X r=1 n . . . (n \u2212r + 1) X (i,j)\u2208Cr,2p E\u2032(i, \u03b1; j, \u03b2) \uf8f6 \uf8f8 k \u2212 \uf8eb \uf8ed 2p+1\u2212s X r=1 n . . . (n \u2212r + 1) X (i,j)\u2208Cr,2p E(i, \u03b1)E(j, \u03b2) \uf8f6 \uf8f8 k\uf8f9 \uf8fa \uf8fb. By (3.4), for any sequence \u03b1, \u03b2, i, j, it holds that |E\u2032(i, \u03b1; j, \u03b2)|, |E(i, \u03b1)E(j, \u03b2)| \u2264n\u22122p. Thus, max \uf8f1 \uf8f2 \uf8f3 \f \f \f \f \f \f 2p+1\u2212s X r=1 n . . . (n \u2212r + 1) X (i,j)\u2208Cr,2p E(i, \u03b1)E(j, \u03b2) \f \f \f \f \f \f , \f \f \f \f \f \f 2p+1\u2212s X r=1 n . . . (n \u2212r + 1) X (i,j)\u2208Cr,2p E\u2032(i, \u03b1; j, \u03b2) \f \f \f \f \f \f \uf8fc \uf8fd \uf8fe 19 \u2264 2p+1\u2212s X r=1 nr\u22122p X (i,j)\u2208Cr,2p 1 \u2264Cpn1\u2212s(1 + on(1)), where Cp is a positive number that only depends on p, and on(1) is a quantity that tends to 0 as n \u2192\u221e. 
Hence, we obtain Var \u0012 1 nk TrM p n,k,m \u0013 \u22642Ck p nk 2p X s=1 X (\u03b1,\u03b2)\u2208Cs,2p \u03b1\u2229\u03b2\u0338=\u2205 \uf8eb \uf8edX \u03c6\u2208Is,m p Y t=1 \u03c4\u03c6(\u03b1t)\u03c4\u03c6(\u03b2t) \uf8f6 \uf8f8n\u2212ks(1 + on(1))k =2Ck p nk 2p X s=1 \u0010 m nk \u0011s X (\u03b1,\u03b2)\u2208Cs,2p \u03b1\u2229\u03b2\u0338=\u2205 \uf8eb \uf8edX \u03c6\u2208Is,m s Y t=1 1 m\u03c4 degt(\u03b1)+degt(\u03b2) \u03c6(t) \uf8f6 \uf8f8(1 + on(1))k. By assumption (1.4), we have X \u03c6\u2208Is,m s Y t=1 1 m\u03c4 degt(\u03b1)+degt(\u03b2) \u03c6(t) \u2192 s Y t=1 m(\u03c4) degt(\u03b1)+degt(\u03b2), m \u2192\u221e. Together with (1.3), we establish Var \u0012 1 nk TrM p n,k,m \u0013 \u22642Ck p nk 2p X s=1 cs X (\u03b1,\u03b2)\u2208Cs,2p \u03b1\u2229\u03b2\u0338=\u2205 s Y t=1 m(\u03c4) degt(\u03b1)+degt(\u03b2) ! (1 + on(1))k+1+s. Therefore, for k \u22652, we have X n\u22652 Var \u0012 1 nk TrM p n,k,m \u0013 < +\u221e. The proof is concluded by Borel-Cantelli\u2019s Lemma, noting that k = k(n) tends to infinity as n \u2192\u221e. 3.3 Proof of Corollary 1.1 We start with the uniqueness of \u00b5. We have \f \f \f \f \f \f \f p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) !\f \f \f \f \f \f \f \u2264 p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 Adegt(\u03b1)degt(\u03b1)degt(\u03b1) ! \u2264 p X s=1 cs X \u03b1\u2208C(1) s,p A Ps t=1 degt(\u03b1) p X s=1 degt(\u03b1) !Pp s=1 degt(\u03b1) . 20 By definition (2.2), for \u03b1 \u2208Cs,p, we have Ps t=1 degt(\u03b1) = p. Hence, it follows from Lemma 2.1 that \f \f \f \f \f \f \f p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) !\f \f \f \f \f \f \f \u2264Appp p X s=1 cs X \u03b1\u2208C(1) s,p 1 = Appp p X s=1 cs p \u0012 p s \u22121 \u0013\u0012p s \u0013 . Using the inequality \u0000p s \u0001 \u22642p for all 0 \u2264s \u2264p, we obtain \f \f \f \f \f \f \f p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) !\f \f \f \f \f \f \f \u22644pAppp p X s=1 cs p \u22644pAp(1 + c)ppp. Hence, \u221e X p=1 \f \f \f \f \f \f \f p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) !\f \f \f \f \f \f \f \u22121/p \u2265 \u221e X p=1 (4pAp(c + 1)ppp)\u22121/p = \u221e X p=1 1 4A(c + 1)p = +\u221e. Therefore, the Carleman\u2019s condition is satisfied, which implies that there exists a unique probability measure \u00b5 whose moments are given by (1.5). The uniqueness of the probability measure \u00b5 corresponding to the moments in (1.5) guarantees the almost sure convergence of the ESD of Mn,k,m towards \u00b5. If \u03c4\u03b1 = 1 for all 1 \u2264\u03b1 \u2264m, then the condition (1.4) holds with m(\u03c4) q = 1 for all q \u2208N. In this case, the moment sequence (1.5) for \u00b5 becomes Z R xp\u00b5(dx) = p X s=1 cs X \u03b1\u2208C(1) s,p 1 = p X s=1 cs p \u0012 p s \u22121 \u0013\u0012p s \u0013 , \u2200p \u2208N+, where we use Lemma 2.1 in the last equality. By [3, Lemma 3.1], this moment sequence coincides with the moment sequence of the Mar\u02c7 cenko-Pastur law (1.2). Therefore, the uniqueness of \u00b5 implies that \u00b5 is exactly the Mar\u02c7 cenko-Pastur law (1.2). Acknowledgments The author gratefully acknowledges the financial support of ERC Consolidator Grant 815703 \u201dSTAMFORD: Statistical Methods for High Dimensional Diffusions\u201d. Besides, the author would like to acknowledge Tiefeng Jiang, Felix Parraud, Kevin Schnelli, Jianfeng Yao for the discussion and helpful suggestions. 21", "introduction": "For n \u2208N, let y = 1 \u221an(\u03be1, . . . 
, \u03ben) \u2208Cn, where {\u03be1, . . . , \u03ben} is a family of i.i.d. centered random variables with unit variance, and let {y(l) \u03b1 : 1 \u2264\u03b1 \u2264m, 1 \u2264l \u2264k} be a family of i.i.d. copies of y. For 1 \u2264\u03b1 \u2264m, define Y\u03b1 = y(1) \u03b1 \u2297\u00b7 \u00b7 \u00b7 \u2297y(k) \u03b1 yielding a k-fold tensor product. We identify each Y\u03b1 as an nk-dimensional vector, and denote by Y = (Y1, . . . , Ym). Let {\u03c41, \u03c42, . . .} be a sequence of real numbers, we consider the sum of m independent rank-1 Hermitian matrices: Mn,k,m = m X \u03b1=1 \u03c4\u03b1Y\u03b1Y \u2217 \u03b1 , (1.1) which is an nk \u00d7 nk Hermitian matrix. \u2217Department of Mathematics, University of Luxembourg. E-mail: ywangjun@connect.hku.hk 1 arXiv:2306.05834v3 [math.PR] 7 Jan 2024 In statistics, the model (1.1) is called the sample covariance matrix. It helps to under- stand the population covariance matrix. The limiting empirical spectral distribution (LSD) of Mn,k,m can serve as a test statistics. The simplest case k = 1, which corresponds to the population vector with i.i.d. entries, was well-studied in the literature. Under appropriate moment conditions on \u03be1, the LSD was obtained in the seminal paper [11] when n \u2192+\u221e and m/nk \u2192c for some positive constant c. When \u03c4\u03b1 = 1 for all \u03b1, the resulting LSD is the famous Mar\u02c7 cenko-Pastur law. This probability distribution has a density function given by p(x) = p ((1 + \u221ac)2 \u2212x)(x \u2212(1 \u2212\u221ac)2) 2\u03c0x 1[(1\u2212\u221ac)2,(1+\u221ac)2](x) + (1 \u2212c)\u03b40(dx)10 0 and the fourth moment of \u03be1 is 1. In the present paper, we study the model (1.1) with \u03be1 chosen from the unit circle on the complex plane. Our focus lies in the scenario where k goes to infinity much faster than the setting in [7]. We derive the limiting moment sequence of the ESD of Mn,k,m when n, k goes to infinity. When \u03c4\u03b1 \u22611, the limiting moment sequence reveals that the LSD is exactly Mar\u02c7 cenko-Pastur law (1.2). From the point of view of probability theory, we remove the restriction on the speed of k approaching infinity, as required in [7, 9]. Our results even allows that k/n does not have a limit when n \u2192\u221e. In practical terms, one would anticipate that the dimension of a system remains fixed and is reused multiple times, which leads to the scenario that k tends to infinity while n is fixed. The study of the model (1.1) with fixed n remains an open question, and we plan to address this case in our future work. As an variant of this real-world scenario, the results in this paper apply to the setting where k \u226bn \u226b1. We would like to remark that the tensor Y\u03b1 introduced above is the non-symmetric random tensor model. Instead of considering the tensor product of i.i.d. vectors, the k- fold tensor product y\u2297k of the same random vector y \u2208Cn is known as the symmetric random tensor model. The limiting spectral distribution of the Hermitian matrix (1.1) constructed by i.i.d. copies of y\u2297k was studied in [5,15]. Throughout the paper, we assume that the parameters m, n, k grow towards infinity following the proportion m nk \u2192c (1.3) for some constant c \u2208(0, \u221e). The following theorem is the first main results of the paper, where we establish the convergence in expectation of the moments. Theorem 1.1. Let Mn,k,m be in (1.1) with |\u03be1| = 1. 
Suppose that for all q \u2208N, 1 m m X j=1 \u03c4 q j \u2192m(\u03c4) q , m \u2192+\u221e. (1.4) Assume that (1.3) holds. Then for any fixed p \u2208N+, we have lim n,k\u2192+\u221e 1 nk E \u0002 TrM p n,k,m \u0003 = p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) ! . Here, degt(\u03b1) is the frequency of t in the sequence \u03b1 and is given by (2.2), and C(1) s,p is a set of sequences that is defined in Lemma 2.1. Our approach is based on the method of moments. We associate graphs to each terms of the moment, and compute the moment by distinguishing the graphs that contributes 3 to the limit. The class C(1) s,p of sequences corresponds to the largest leading terms, while the sequences in \u03b1 / \u2208C(1) s,p are negligible. It\u2019s important to note that the tensor structure introduces a power of k in the moment calculation. For the case k = 1, the limit of the moment sequence can be obtained by counting the size of C(1) s,p, since only the leading terms in the sum of \u03b1 \u2208C(1) s,p contribute to the limit. We refer the interested readers to [3, Section 3.1.3]. This is also true whenever k = o(n). See [7, Section 4]. For the case k = O(n) studied in [7], besides the first leading term, the second leading term also contributes to the limit. In our current setting, where k can grow much faster, all terms associated with \u03b1 \u2208C(1) s,p may contribute to the limit. Hence, we need to characterize all the possible graphs associate with \u03b1 \u2208C(1) s,p. This is the main novelty in methodology of the present paper. In the next theorem, we strengthen Theorem 1.1 from convergence in expectation to almost sure convergence. Theorem 1.2. Assume that the conditions in Theorem 1.1 hold. Suppose that k = k(n) is a function of n and tends to infinity as n \u2192\u221e. Then for any fixed p \u2208N+, lim n\u2192+\u221e 1 nk TrM p n,k,m = p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) ! , almost surely. After establishing the limiting moment sequence, it is nature to inquire whether this sequence uniquely characterizes a probability measure. The following corollary provides a condition that guarantees the uniqueness of the probability measure corresponding to the moment sequence, which leads to the almost sure convergence of the ESD of Mn,k,m. Corollary 1.1. Assume that the conditions in Theorem 1.1 hold. Suppose that there exists a positive constant A, such that |m(\u03c4) q | \u2264Aqqq for all q \u2208N. Then there exists a probability measure \u00b5 whose moment sequence is Z R xp\u00b5(dx) = p X s=1 cs X \u03b1\u2208C(1) s,p s Y t=1 m(\u03c4) degt(\u03b1) ! , \u2200p \u2208N+. (1.5) Moreover, suppose that k = k(n) is a function of n and tends to infinity when n \u2192\u221e, then the ESD of Mn,k,m converges almost surely to \u00b5. In particular, if \u03c4\u03b1 = 1 for all 1 \u2264\u03b1 \u2264m, then the ESD of Mn,k,m converges almost surely to the Mar\u02c7 cenko-Pastur law (1.2). By combining the results in [9, Theorem 1.2], [7, Theorem 2.1] and Corollary 1.1, we immediately obtain the following necessary and sufficient condition for Mar\u02c7 cenko-Pastur law (1.2) to be the LSD of the model (1.1) when \u03c4\u03b1 \u22611. Corollary 1.2. Let Mn,k,m be in (1.1) with \u03c4\u03b1 = 1 for all 1 \u2264\u03b1 \u2264m. Then the ESD of Mn,k,m converges almost surely to the Mar\u02c7 cenko-Pastur law (1.2) if and only if either k = o(n) or |\u03be1| = 1. 4 The rest of the paper is organized as follow. 
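As a purely numerical illustration of Corollary 1.2 (this sketch and its parameter choices are ours, not part of the paper), one can generate a small M_{n,k,m} with unit-modulus entries and τ_α ≡ 1 via Kronecker products and compare its empirical spectral moments (1/n^k) Tr M^p with the Marčenko-Pastur moments, i.e. the sums over s of c^s (1/p) C(p, s−1) C(p, s) appearing in the moment sequence (1.5).

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def sample_M(n, k, m):
    """M = sum_a Y_a Y_a^*, with Y_a the k-fold Kronecker product of
    independent vectors (xi_1,...,xi_n)/sqrt(n) whose entries have |xi_j| = 1."""
    d = n ** k
    M = np.zeros((d, d), dtype=complex)
    for _ in range(m):
        Y = np.ones(1, dtype=complex)
        for _ in range(k):
            phases = np.exp(2j * np.pi * rng.random(n))   # uniform on the unit circle
            Y = np.kron(Y, phases / np.sqrt(n))
        M += np.outer(Y, Y.conj())
    return M

def mp_moment(p, c):
    """p-th Marcenko-Pastur moment for ratio c (all tau_alpha = 1)."""
    return sum(c ** s * comb(p, s - 1) * comb(p, s) / p for s in range(1, p + 1))

n, k, m = 3, 4, 40                 # arbitrary small sizes; n^k = 81
c = m / n ** k
eig = np.linalg.eigvalsh(sample_M(n, k, m))
for p in (1, 2, 3):
    emp = np.mean(eig ** p)        # (1/n^k) Tr M^p for one realization
    print(p, round(emp, 3), round(mp_moment(p, c), 3))
```

For moderate sizes the two printed columns already agree up to sampling fluctuations; averaging over several independent draws tightens the agreement.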
We develop the theory of graph combinato- ries in Section 2. We first introduce some results from literature on the graph combinatorics in Subsection 2.1. Then we characterize the set C(1) s,p, which corresponding to the leading term in the moment computation. The paired graph, which is the graph that contributes to the limit for \u03b1 \u2208C(1) s,p, is studied in Section 2.3. In Section 3, we prove the main theorems. The proofs of Theorem 1.1, Theorem 1.2 and Corollary 1.1 are presented in Section 3.1, Section 3.2 and Section 3.3, respectively." } ], "Hongruixuan Chen": [ { "url": "http://arxiv.org/abs/2401.09019v1", "title": "Change Detection Between Optical Remote Sensing Imagery and Map Data via Segment Anything Model (SAM)", "abstract": "Unsupervised multimodal change detection is pivotal for time-sensitive tasks\nand comprehensive multi-temporal Earth monitoring. In this study, we explore\nunsupervised multimodal change detection between two key remote sensing data\nsources: optical high-resolution imagery and OpenStreetMap (OSM) data.\nSpecifically, we propose to utilize the vision foundation model Segmentation\nAnything Model (SAM), for addressing our task. Leveraging SAM's exceptional\nzero-shot transfer capability, high-quality segmentation maps of optical images\ncan be obtained. Thus, we can directly compare these two heterogeneous data\nforms in the so-called segmentation domain. We then introduce two strategies\nfor guiding SAM's segmentation process: the 'no-prompt' and 'box/mask prompt'\nmethods. The two strategies are designed to detect land-cover changes in\ngeneral scenarios and to identify new land-cover objects within existing\nbackgrounds, respectively. Experimental results on three datasets indicate that\nthe proposed approach can achieve more competitive results compared to\nrepresentative unsupervised multimodal change detection methods.", "authors": "Hongruixuan Chen, Jian Song, Naoto Yokoya", "published": "2024-01-17", "updated": "2024-01-17", "primary_cat": "eess.IV", "cats": [ "eess.IV", "cs.AI", "cs.CV", "cs.MM" ], "main_content": "2.1. Segment Anything Model As a vision foundation model, the SAM is designed and trained to be promptable [14]. By trained on the millions of annotated images, the SAM can perform zero-shot transfer for new image distributions and tasks. The architecture of SAM comprises three key components: an image encoder, a prompt encoder that can receive points, boxes, text, and mask as the prompt, and a lightweight mask decoder. Among them, the image encoder is a pre-trained Vision Transformer (ViT) [15]; the positional encodings, CLIP model [16] and convolutional layers are adopted to embed different types of prompt data; the decoder integrates a Transformer decoder block with a dynamic mask prediction head. Now, SAM has demonstrated impressive performance in a range of remote sensing applications [17]. In this paper, we achieve unsupervised change detection on optical and map data by leveraging the powerful zero-shot segmentation capability of SAM to transform the optical image into the modality-independent segmentation domain. 2.2. Detecting Land-Cover Changes We propose two strategies to detect land-cover changes using the SAM. The first strategy detects land-cover changes in the general case, while the second targets the detection of new land-cover objects appearing against a background. 2.2.1. No Prompt As illustrated in Fig. 
1-(a), the first approach employs SAM's no-prompt function, i.e., letting SAM segment everything in the optical imagery to generate its segmentation map. Concurrently, we apply a connected component labeling (CCL) algorithm to the rasterized OSM data to produce its instance map. This process aligns the optical image and OSM data within the same domain, which we call the segmentation domain, thus eliminating the modality differences between them. Subsequently, to detect land-cover changes from the obtained segmentation and instance maps, we argue that two land-cover objects should have different shapes if a change event occurs. In this way, land-cover changes can be obtained by comparing shape attributes such as area and aspect ratio of two instances at the same location. However, the masks obtained by SAM carry no category information, and each mask may not represent a complete land-cover instance; for example, a building might be composed of several masks. Therefore, we propose a hierarchical aggregation method guided by OSM data instances. Specifically, for each instance in the OSM data, we find all masks in the segmentation map that intersect it, and iteratively merge these masks outward from the instance's center. After each merge operation, we calculate the overlap rate between the merged mask and the instance. If the overlap rate exceeds a set threshold during the merge process, the instance is considered unchanged; otherwise, it is considered changed.

Fig. 1: The proposed multimodal change detection framework based on the SAM. (a) Detecting land-cover changes without prompt by comparing the shape of instances. (b) Detecting land-cover changes with the prompt from instance maps.

2.2.2. Instance Map Prompt The above strategy can effectively detect the situation in which two instances have changed. However, it struggles with scenarios where new land-cover objects emerge within a land-cover background. For example, a certain large area is vegetation in the OSM data, and a building appears in the optical image. In this case, the above strategy treats the entire background area as unchanged, and the emerging building cannot be detected. To address this limitation, we propose a strategy that adopts instances from OSM data as prompts for SAM. As depicted in Fig. 1-(b), we can guide the segmentation of SAM by using background instances of the OSM data, which can be obtained from the legend of the OSM data [12], in the form of a box or mask prompt. In this case, SAM will generally segment the background in the optical image. The land-cover objects appearing in the background are treated as anomalies and thus are not segmented out by SAM. In this way, we can identify these emerging objects by extracting the unrecognized pixels from the segmentation map within the instance's region.

Fig. 2: Binary change maps obtained by different methods on the Aachen dataset.

3. EXPERIMENTS 3.1. General Information In this paper, our experiments utilize three datasets: Aachen, Christchurch, and Vegas [12].
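The two strategies described in Section 2.2 can be sketched, in simplified form, with the publicly released segment-anything package; the snippet below is an illustrative aid only (the checkpoint path, overlap threshold, and the plain union-based aggregation are our placeholders — the paper's hierarchical, center-outward merging is not reproduced in full).

```python
import numpy as np
from scipy import ndimage
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")   # hypothetical local weight path

def no_prompt_change_map(optical_rgb, osm_class_map, overlap_thr=0.5):
    """Strategy 1 (no prompt): segment everything, then compare shapes per OSM instance."""
    masks = SamAutomaticMaskGenerator(sam).generate(optical_rgb)   # list of mask dicts
    change = np.zeros(osm_class_map.shape, dtype=bool)
    for cls in np.unique(osm_class_map):
        instances, num = ndimage.label(osm_class_map == cls)       # CCL on rasterized OSM data
        for lab in range(1, num + 1):
            inst = instances == lab
            # union of all SAM masks intersecting this instance
            # (a simplification of the hierarchical aggregation)
            merged = np.zeros_like(inst)
            for m in masks:
                seg = m["segmentation"]
                if np.logical_and(seg, inst).any():
                    merged |= seg
            # one possible overlap measure between the merged mask and the instance
            overlap = np.logical_and(merged, inst).sum() / max(inst.sum(), 1)
            if overlap < overlap_thr:                               # shapes disagree -> changed
                change[inst] = True
    return change

def prompted_anomalies(optical_rgb, background_box):
    """Strategy 2 (box prompt): prompt SAM with a background instance and flag
    pixels inside the box that SAM does not assign to the background."""
    predictor = SamPredictor(sam)
    predictor.set_image(optical_rgb)
    masks, _, _ = predictor.predict(box=np.asarray(background_box), multimask_output=False)
    bg = masks[0]
    x0, y0, x1, y1 = background_box
    anomaly = np.zeros(bg.shape, dtype=bool)
    anomaly[y0:y1, x0:x1] = ~bg[y0:y1, x0:x1]
    return anomaly
```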
These datasets vary in size: Aachen measures 1000×1000 pixels, Christchurch 1024×1024 pixels, and Vegas 650×650 pixels. As illustrated in Fig. 2 through Fig. 4, the substantial modality differences between map data and optical remote sensing imagery present significant challenges for change detection. We have selected six prominent unsupervised multimodal change detection methods for comparative analysis: M3CD [1], FPMS [7], NPSG [8], IRG-McS [3], SR-GCAE [9], and FD-MCD [4]. Each has demonstrated state-of-the-art results across various benchmark datasets and modality combinations. To assess the accuracy of these methods, we employ three metrics commonly used in change detection tasks [18]: overall accuracy (OA), F1 score, and Kappa coefficient (KC). These metrics provide a comprehensive evaluation of each method's performance.

3.2. Experimental Results Figures 2 through 4 display binary change maps generated by the comparison methods and our proposed SAM-MCD. Specific accuracy metrics for these methods are detailed in Table 1. Although most comparison methods can obtain accurate detection results on modality combinations such as optical and SAR data pairs, their effectiveness diminishes when detecting land-cover changes between map and optical data pairs. Among all comparison methods, FD-MCD [4], which analyzes structural relationships by transforming data from different modalities into the (graph) Fourier domain, shows commendable performance across all three datasets.

Fig. 3: Binary change maps obtained by different methods on the Christchurch dataset.

Fig. 4: Binary change maps obtained by different methods on the Vegas dataset.

In comparison, the proposed method obtains more accurate detection results by leveraging the advanced segmentation capabilities of the vision foundation model and comparing the map data and the optical images in the modality-independent segmentation domain. However, our method encounters some challenges on the Vegas dataset, particularly missed detections. This issue predominantly arises in certain regions of the optical image (notably the upper center region) where our two strategies are less effective; consequently, our method fails to detect most of the changes occurring in these areas.

4. CONCLUSION In this paper, we propose an unsupervised multimodal change detection approach aimed at automatically detecting land-cover changes from OSM data and optical remote sensing imagery. By employing the vision foundation model SAM, OSM data and optical images with a large modality difference can be transformed into the modality-independent segmentation domain. We design two strategies for detecting changes based on segmentation maps of optical images and instance maps of OSM. When applied to three map-optical data pairs with distinct scenes, the proposed method yields more competitive detection results compared to the state-of-the-art methods. Our future work will focus on extending our framework to unsupervised semantic change detection on OSM data and optical imagery.

Table 1: Accuracy assessment on change maps obtained by different methods on the three multimodal change detection datasets. The highest values are marked in bold and the next highest values are underlined.
Method | Aachen (OA / F1 / KC) | Christchurch (OA / F1 / KC) | Vegas (OA / F1 / KC)
M3CD [1] | 0.6542 / 0.0946 / 0.0083 | 0.3790 / 0.0582 / -0.0708 | 0.4485 / 0.4233 / -0.0141
FPMS [7] | 0.6954 / 0.3233 / 0.1804 | 0.5269 / 0.1800 / 0.0100 | 0.3184 / 0.0076 / -0.1126
NPSG [8] | 0.6376 / 0.3051 / 0.0840 | 0.5149 / 0.2186 / -0.0099 | 0.4225 / 0.1805 / 0.0503
IRG-McS [3] | 0.6780 / 0.1617 / 0.0786 | 0.5272 / 0.0638 / 0.0020 | 0.3788 / 0.0364 / 0.0035
SR-GCAE [9] | 0.6136 / 0.4344 / 0.1414 | 0.5656 / 0.5228 / 0.1252 | 0.5026 / 0.5179 / 0.0517
FD-MCD [4] | 0.6684 / 0.5540 / 0.2949 | 0.7449 / 0.6737 / 0.4787 | 0.5792 / 0.6017 / 0.1878
SAM-MCD | 0.6820 / 0.5945 / 0.3429 | 0.7503 / 0.7729 / 0.5077 | 0.5735 / 0.6251 / 0.1428
", "introduction": "Multimodal change detection aims at detecting land-cover changes from multitemporal remote sensing images with different modalities [1, 2]. As leveraging various types of data sources, this technique is crucial for high temporal resolution monitoring and rapid response to emergent events [3-6]. *Corresponding Author (yokoya@k.u-tokyo.ac.jp). This work was supported in part by the Council for Science, Technology and Innovation (CSTI), the Cross-ministerial Strategic Innovation Promotion Program (SIP), Development of a Resilient Smart Network System against Natural Disasters (Funding agency: NIED), the JSPS, KAKENHI under Grant Number 22H03609, JST, FOREST under Grant Number JPMJFR206S, Microsoft Research Asia, Next Generation Artificial Intelligence Research Center of The University of Tokyo, and the Graduate School of Frontier Sciences, The University of Tokyo through the Challenging New Area Doctoral Research Grant (Project No. C2303). However, compared to unimodal change detection, multimodal change detection presents additional challenges due to variations in statistical distributions, channel numbers, and noise levels between pre-event and post-event images, known as the heterogeneous modality problem [4]. Depending on whether or not labels are available to change detectors, existing approaches are categorized into supervised, semi-supervised, and unsupervised methods. Among them, unsupervised methods have gained prominence due to the fact that detectors can be used without any prior-labeled data. The basic idea of unsupervised multimodal change detection is to transform multimodal images into a new domain where the modal heterogeneity can be eliminated. Depending on the transformation technique and the target domain, the existing unsupervised methods can be categorized into four types, i.e., (1) modality translation-based methods [7], (2) similarity measurement-based methods [3, 4, 8, 9], (3) feature learning-based methods [10], and (4) classification-based methods [11]. Currently, the multimodal data involved in most of these methods are remote sensing images acquired by airborne and spaceborne sensors. Few studies have focused on map data, such as OpenStreetMap (OSM) data. Multimodal change detection between map data and optical imagery is undoubtedly significant in enriching the data sources of change detection as well as updating the geographic information system. Our previous work [12] explored this by developing an architecture to detect land-cover changes between OSM data and optical high-resolution imagery and established the first benchmark dataset based on the OpenEarthMap dataset [13]. However, the proposed architecture and focused tasks are still supervised. How to achieve change detection between map data and optical imagery in an unsupervised manner remains an unexplored and challenging topic.
Existing unsupervised models, designed for airborne and spaceborne remote sensing images, may struggle with the unique aspects of map data. Therefore, it may be difficult for them to achieve good de- tection results meeting real-world application requirements. In this paper, we advance the field by proposing an unsu- pervised approach to detect land-cover changes between map data and optical imagery through exploiting the vision foun- dation model SAM [14]. arXiv:2401.09019v1 [eess.IV] 17 Jan 2024" }, { "url": "http://arxiv.org/abs/2201.10953v2", "title": "Dual-Tasks Siamese Transformer Framework for Building Damage Assessment", "abstract": "Accurate and fine-grained information about the extent of damage to buildings\nis essential for humanitarian relief and disaster response. However, as the\nmost commonly used architecture in remote sensing interpretation tasks,\nConvolutional Neural Networks (CNNs) have limited ability to model the\nnon-local relationship between pixels. Recently, Transformer architecture first\nproposed for modeling long-range dependency in natural language processing has\nshown promising results in computer vision tasks. Considering the frontier\nadvances of Transformer architecture in the computer vision field, in this\npaper, we present the first attempt at designing a Transformer-based damage\nassessment architecture (DamFormer). In DamFormer, a siamese Transformer\nencoder is first constructed to extract non-local and representative deep\nfeatures from input multitemporal image-pairs. Then, a multitemporal fusion\nmodule is designed to fuse information for downstream tasks. Finally, a\nlightweight dual-tasks decoder aggregates multi-level features for final\nprediction. To the best of our knowledge, it is the first time that such a deep\nTransformer-based network is proposed for multitemporal remote sensing\ninterpretation tasks. The experimental results on the large-scale damage\nassessment dataset xBD demonstrate the potential of the Transformer-based\narchitecture.", "authors": "Hongruixuan Chen, Edoardo Nemni, Sofia Vallecorsa, Xi Li, Chen Wu, Lars Bromley", "published": "2022-01-26", "updated": "2022-05-28", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.LG", "eess.IV" ], "main_content": "The building damage assessment task is two-fold: building localization and building per-pixel damage segmentation, namely damage classification. In this work, we present an end-to-end Transformer-based architecture DamFormer for both tasks. The overall proposed network architecture DamFormer is presented in Fig. 1, which is constructed by two parts: a siamese Transformer encoder and a lightweight dual-tasks decoder. 2.1. Siamese Transformer Encoder Transformer backbones have shown their potential in feature extraction for semantic segmentation. Nevertheless, many of the current models are based on the ViT backbone, whose numerous network parameters inevitably introduce high computation overhead, which is paramount in the case of processing large-scale multitemporal remote sensing data. Therefore, in our work, we construct our siamese Transformer encoder based on the cutting-edge Transformer architecture SegFormer [8]. SegFormer consists of two main modules: a hierarchical transformer encoder named Mix Transformer (MiT) that outputs multi-level features and a lightweight All-MLP decoder to aggregate them and predict the semantic segmentation mask. 
As in ViT, an input image is first divided into patches, however MiT selects a smaller patches of size 4\u00d74 to favor the dense prediction task. To reduce the computational complexity of the standard self-attention mechanism, MiT uses a sequence reduction process to reduce the length of the semantic sequence. Also, considering the positional encoding in Transformer demands the same resolution for input data in training and testing stages, a 3\u00d73 convolutional layer is introduced to replace the positional encoding. Benefiting from it, SegFormer architecture can accept multitemporal images of any size in training and testing stages. Specifically, based on the Mix Transformer encoder, our encoder has two streams to extract low-resolution finegrained and high-resolution coarse features from multitemporal very high-resolution imagery where each stream has four Transformer blocks as shown in Fig. 1. In this work, since pre-and post-disaster images are homogeneous with similar visual patterns, a weight-sharing mechanism is applied for the two streams, making the extracted features comparable (in the same feature space) and reducing network parameters. In addition, the overlap patch merging mechanism [8] is introduced at each block to shrink the features obtained from the previous block to half the size. Finally, we propose a multitemporal adaptive fusion module, consisting of concatenation, convolutional layer, and channel attention mechanism [10]. The concatenation operation and convolutional layer merge the feature maps of the same size from the two streams, which can facilitate the information interaction between pre-disaster and post-disaster feature maps. The channel attention mechanism can weight the importance of each channel and yield task-specific features that are well-suited for downstream building localization and damage classification tasks, respectively. 2.2. Lightweight Dual-Tasks Decoder Keeping a large receptive field to include enough contextual information is key to segmenting remote sensing data. Therefore, sophisticated and computational demanding decoder designs such as multi-scale dilated convolution and (a) (b) (c) (d) Fig. 2: Visualization of the damage assessment results obtained by DamFormer from the xBD dataset. (a) Pre-disaster image. (b) Post-disaster image. (c) Reference map. (d) Prediction map. In (c) and (d), background pixels are shown in black, undamaged pixel in white, minor damaged pixels in green, major damaged pixels in yellow, and destroyed pixels in red. self-attention head are often used by the CNN-based architecture. By contrast, our architecture can extract non-local features in each layer bene\ufb01ting from the Transformer-based encoder. Hence, a simple lightweight decoder is therefore suf\ufb01cient for downstream tasks. Since the building damage assessment includes both building localization and damage classi\ufb01cation tasks, our decoder includes two sub-networks with the same structure containing a cross-level fusion module and a classi\ufb01er. In each decoder, the hierarchical task-speci\ufb01c features are \ufb01rst upsampled to the same size. Then, the feature maps are concatenated and a cross-layer fusion model based on a 1\u00d71 convolutional layer is applied to aggregate information from different layers. The yielded feature map containing multilevel information is used for \ufb01nal prediction by an attached 1\u00d71 convolutional layer. 
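A minimal PyTorch sketch of one such decoder sub-network is given below (the channel widths and embedding dimension are illustrative placeholders roughly following SegFormer-style defaults, not values reported in this paper): the hierarchical task-specific features are upsampled to a common size, concatenated, fused by a 1×1 convolution, and classified by an attached 1×1 convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightweightTaskDecoder(nn.Module):
    """One decoder sub-network: upsample the four task-specific feature maps to a
    common resolution, concatenate them, fuse with a 1x1 convolution, and predict
    with an attached 1x1 convolutional classifier."""
    def __init__(self, in_channels=(64, 128, 320, 512), embed_dim=256, num_classes=2):
        super().__init__()
        self.fuse = nn.Conv2d(sum(in_channels), embed_dim, kernel_size=1)   # cross-layer fusion
        self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) maps from the four Transformer blocks
        size = feats[0].shape[2:]                     # the largest (1/4-resolution) map
        up = [F.interpolate(f, size=size, mode='bilinear', align_corners=False) for f in feats]
        fused = self.fuse(torch.cat(up, dim=1))       # multi-level feature map
        return self.classifier(fused), fused          # prediction logits + fused features

# e.g. the localization head over hierarchical features of a 512x512 input
feats = [torch.randn(1, c, 512 // s, 512 // s) for c, s in zip((64, 128, 320, 512), (4, 8, 16, 32))]
decoder = LightweightTaskDecoder(num_classes=2)
logits, fused = decoder(feats)                        # logits: (1, 2, 128, 128)
```

Returning the fused multi-level map also allows the localization features to be added back into the classification sub-network, as described next.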
In addition, the multi-level feature map in the localization sub-network is added back to the classi\ufb01cation sub-network to contribute to the damage assessment performance. 2.3. Loss Function To train the proposed architecture, we utilize a compound loss function to jointly optimize building localization and damage classi\ufb01cation tasks. For building detection, we optimize predictions according to the building reference maps with binary cross-entropy loss. To address the problem of skewed class where pixels belonging to building classes (foreground pixels) are far less than background pixels, dice loss is introduced to balance the foreground and background pixels. For damage assessment, we optimize predictions according to the damage reference maps with cross-entropy loss. Compared to building detection, the sample imbalance problem is more challenging in damage classi\ufb01cation, which is not limited to the foreground and background pixels but also lies in the distribution of pixels with different damage levels. Considering this, we apply Lovasz softmax loss function [11] to support the optimization step, since it encounters sample imbalance problems by directly optimizing the IoU metric. Finally, our overall loss function can be formed as follows: Loverall = Lloc + \u03b1Ldam (1) where \u03b1 is a trade-off parameter that controls the importance of damage assessment, we set \u03b1 to 1 in this paper. 3. EXPERIMENTS 3.1. Dataset, Metrics, Benchamrk Methods To evaluate the effectiveness of our DamFormer architecture, we conduct experiments on the xBD dataset [12], which is currently the largest publicly available damage assessment dataset, containing 11\u2019034 multitemporal very-highresolution image-pairs with a size of 1024\u00d71024 and six disaster types: earthquake/tsunami, \ufb02ood, volcanic eruption, wild\ufb01re, and wind. The damage annotations in the xBD dataset are divided into four levels: non-damage, minor damage, major damage, and destroyed. We implemented our network using Pytorch. In the two siamese streams, the speci\ufb01c number of MixFomer units in the four blocks is 3, 4, 6 and 3, keeping the same number of the residual units in ResNet-50 architecture. To train the overall network, we apply AdamW optimizer with an initial learning rate of 6e\u22125 and a weight decay of 5e\u22123. Following the widely used metrics suggested in the xView2 Computer Vision for Building Damage Assessment Challenge1, an evaluation metric based on F1 score was utilized, including building localization score F loc 1 , the harmonic mean of subclass-wise damage classi\ufb01cation scores F dam 1 and overall score F oa 1 , which can be formed as F oa 1 = 0.3F loc 1 + 0.7F dam 1 . To evaluate the proposed framework we adopted four CNN-based architectures as a comparison methods, i.e., the xView2 baseline method based on UNet + ResNet2, the xView2 1st place solution method3 Siamese-UNet, MaskRCNNDA [13], and ChangeOS [3]. Because the 1st place solution is based on a multi-model ensemble and ChangeOS applied object-based post-processing to improve the \ufb01nal performance, we used the ResNet-34 based models and the pixelbased ChangeOS respectively for a fair comparison. 
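The scoring rule quoted above, F1_oa = 0.3 F1_loc + 0.7 F1_dam with F1_dam the harmonic mean of the per-class damage F1 scores, can be written as a small helper; the snippet below is a sketch with illustrative inputs taken from Table 1 further down.

```python
def overall_score(f1_loc, per_class_damage_f1):
    """xView2-style score: F1_oa = 0.3 * F1_loc + 0.7 * F1_dam, where F1_dam is
    the harmonic mean of the per-class damage F1 scores."""
    k = len(per_class_damage_f1)
    f1_dam = k / sum(1.0 / max(f, 1e-12) for f in per_class_damage_f1)
    return 0.3 * f1_loc + 0.7 * f1_dam, f1_dam

# Example with the DamFormer row of Table 1 below (values in percent):
score, f1_dam = overall_score(86.86, [89.86, 56.78, 72.56, 80.51])
print(round(score, 2), round(f1_dam, 2))   # roughly 77.0 and 72.8, consistent with the table
```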
1 https://www.xview2.org/
2 https://github.com/DIUx-xView/xView2_baseline
3 https://github.com/DIUx-xView/xView2_first_place

Table 1: BENCHMARK COMPARISON OF DAMAGE ASSESSMENT RESULTS PRODUCED ON THE XBD DATASET

Method | F1 (overall) | F1 (localization) | F1 (damage) | No damage | Minor | Major | Destroyed
xView2 Baseline | 26.54 | 80.47 | 3.42 | 66.31 | 14.35 | 0.94 | 46.57
Siamese-UNet | 71.68 | 85.92 | 65.58 | 86.74 | 50.02 | 64.43 | 71.68
MaskRCNN | 74.10 | 83.60 | 70.02 | 90.60 | 49.30 | 72.20 | 83.70
ChangeOS | 75.50 | 85.69 | 71.14 | 89.11 | 53.11 | 72.44 | 80.79
DamFormer | 77.02 | 86.86 | 72.81 | 89.86 | 56.78 | 72.56 | 80.51

(The last four columns report the damage F1 per class.)

3.2. Experimental Results Fig. 2 illustrates some visualization results obtained by DamFormer on the xBD dataset, which shows that the proposed method can yield accurate and intact segmentation maps reflecting different levels of damage. Furthermore, in Table 1, we report the benchmark comparison on the xView2 holdout split. As is evident, DamFormer outperforms the benchmark methods on both the building localization and damage classification tasks, which indicates the effectiveness of the Transformer architecture in the multitemporal remote sensing image processing task of building damage assessment.

4. CONCLUSION In this paper, we preliminarily evaluate the potential of Transformers in multitemporal remote sensing data processing tasks. Specifically, we propose a dual-tasks siamese Transformer framework called DamFormer for building damage assessment tasks. DamFormer is made up of a siamese Transformer encoder and a lightweight dual-tasks decoder. Different from the limited receptive field of CNN-based architectures, DamFormer can extract non-local and representative features for the building localization and damage assessment tasks. The experimental results on the xBD dataset demonstrate the superiority of our architecture for building damage assessment in comparison with CNN-based architectures. Our future work includes but is not limited to applying the DamFormer architecture and its variants to man-made disaster response and further exploring the Transformer architecture in other remote sensing-related tasks, such as change detection, land-cover, and land-use classification.
", "introduction": "Disasters such as hurricanes, earthquakes, floods, volcanic eruptions, and wildfires cause significant damage and economic losses every year. *Corresponding Author (edoardo.nemni@unitar.org). This work was supported in part by the National Key R&D Program of China under Grant 2019YFE0126800. The work at the United Nations Satellite Centre (UNOSAT) is part of the operational mapping service funded by the Norwegian Ministry of Foreign Affairs. Timely and accurate building dam-
Nonetheless, in addition to a significant computational cost, these modules were often designed at the end of the CNN backbone, which implies that relationships in the low-level information were not captured. Transformer architectures [6] have shown great performance in natural language processing (NLP), and their ability to model long-range dependencies has motivated researchers to explore their adaptation to computer vision tasks. In fact, the Vision Transformer (ViT) [7] and its variants [8,9] outperformed state-of-the-art CNNs in many computer vision tasks when trained on sufficient data. However, Transformer-based architectures have not been investigated in large-scale multitemporal remote sensing interpretation tasks. More importantly, compared to computer vision tasks, multitemporal remote sensing interpretation tasks are subject to additional complex challenges, such as larger object scale differences and diverse image patterns caused by different sensor types and atmospheric conditions. In this paper, we explore the potential of Transformer-based backbones by applying an end-to-end dual-tasks siamese Transformer architecture named DamFormer to the task of building damage assessment. We believe that this work can provide a different perspective on multitemporal remote sensing interpretation. Fig. 1: Overview of the proposed DamFormer architecture: a weight-shared siamese encoder (two streams of four Transformer blocks with multitemporal adaptive fusion, producing feature maps at 1/4 to 1/32 of the input resolution) feeds a lightweight dual-tasks decoder with cross-layer fusion, which outputs the building (localization) prediction and the damage (classification) prediction." }, { "url": "http://arxiv.org/abs/2109.08912v1", "title": "Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer", "abstract": "Unsupervised domain adaptation for semantic segmentation aims to make models\ntrained on synthetic data (source domain) adapt to real images (target domain).\nPrevious feature-level adversarial learning methods only consider adapting\nmodels on the high-level semantic features. However, the large domain gap\nbetween source and target domains in the high-level semantic features makes\naccurate adaptation difficult. In this paper, we present the first attempt at\nexplicitly using low-level edge information, which has a small inter-domain\ngap, to guide the transfer of semantic information. To this end, a\nsemantic-edge domain adaptation architecture is proposed, which uses an\nindependent edge stream to process edge information, thereby generating\nhigh-quality semantic boundaries over the target domain. Then, an edge\nconsistency loss is presented to align target semantic predictions with\nproduced semantic boundaries. Moreover, we further propose two entropy\nreweighting methods for semantic adversarial learning and self-supervised\nlearning, respectively, which can further enhance the adaptation performance of\nour architecture.
Comprehensive experiments on two UDA benchmark datasets\ndemonstrate the superiority of our architecture compared with state-of-the-art\nmethods.", "authors": "Hongruixuan Chen, Chen Wu, Yonghao Xu, Bo Du", "published": "2021-09-18", "updated": "2021-09-18", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Semantic segmentation is one of the most challenging computer vision tasks, which aims to predict pixel-level semantic labels for a given image. Following the work in [29], it has become mainstream to use fully convolutional network (FCN) architecture for tackling the semantic segmentation task, and many effective models have been presented [1, 7, 12, 28, 45, 47, 56]. Besides, some probability graph models like conditional random filed [20] are used as an effective post-processing method for improving performance. More recently, some work introduces multi-task learning into semantic segmentation [6, 10, 19, 37], which combines networks for complementary tasks to improve semantic segmentation accuracy. Nevertheless, to train these semantic segmentation models, numerous real-world images with pixel-level annotations are required, which are usually difficult to collect. An alternative way is to train these models with photo-realistic synthetic data. 2.2 UDA for Semantic Segmentation Unsupervised domain adaptation aims to align the domain distribution shift between labeled source data and unlabeled target data [4, 17, 25, 58, 59]. A very attractive application of UDA is using photo-realistic synthetic data to train semantic segmentation models, and a variety of methods have been presented, which can be broadly divided into three types: adversarial learning based approach [15, 18, 26, 38, 39, 41, 43], image translation approach [8, 14, 26, 50, 54], and self-supervised learning approach [8, 27, 36, 48, 53, 60]. Adversarial learning methods involve two networks. A generative network predicts the segmentation maps for the input source or target images. Another discriminator network takes the feature maps from generative network and tries to predict the domain type of feature maps, while generative network tries to fool the discriminator. By iteratively repeating this process, the two domains would have a similar distribution. In [15], the adversarial approach for first applied to UDA for semantic segmentation. In [41], adversarial learning is performed to match the entropy maps of two domains. In [31], a local alignment score map is designed to evaluate the category-level alignment degree for guiding the Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer -, -, transfer of semantic features. In [51], an attention-based discriminator network is presented to adaptively measure the hard-adapted semantic features. Image translation methods directly apply adversarial models or style transfer approaches to transform the source images into target-style images for aligning the domain gap. In [14], CycleGAN [57] is used to transform the synthetic images of the source domain to the style of the target images. In [54], a style transfer network is presented to make the images from two domains visually similar. Recently, Yang et al. [50] use fast Fourier transform to reduce the appearance difference between images from different domains. Self-supervised learning is another effective UDA approach. 
In the field of UDA for semantic segmentation, self-supervised learning methods use the target prediction as pseudo-labels to train the segmentation network, which could make the model implicitly learn the domain-invariant representations [51, 60]. In [60], a class balancing strategy and spatial prior are presented to guide the selfsupervised learning in target domain. In [27], a pyramid curriculum strategy based on multi-scale pooling is proposed to select reliable pseudo-labels to train the segmentation network. Moreover, selfsupervised learning could be combined with adversarial learning or image translation to further boost UDA performance [18, 32, 50, 51]. In this paper, unlike the previous adversarial learning methods that only focus on aligning the high-level semantic features, the proposed method simultaneously aligns semantic and edge features and utilizes adapted edge features to facilitate the transfer of semantic features. Moreover, we further present an entropy reweighting semantic adversarial learning strategy and an uncertainty-adaptive self-supervised learning approach to enhance the UDA performance. 3 METHODOLOGY In UDA for semantic segmentation, the labeled source domain is denoted as DS = {(\ud835\udc4b\ud835\udc60,\ud835\udc4c\ud835\udc60)}\ud835\udc60\u2208S, and the unlabeled target domain is denoted as DT = {\ud835\udc4b\ud835\udc61}\ud835\udc61\u2208T, where \ud835\udc4b\ud835\udc60\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73 is a source image, \ud835\udc4c\ud835\udc60\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36is the one-hot semantic label associated with \ud835\udc4b\ud835\udc60, and \ud835\udc4b\ud835\udc61\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73 is a target image. Our goal is to utilize the lowlevel edge information, which is relatively easier to be transferred on the two domains, to guide the semantic segmentation over the target domain, thereby obtaining desirable prediction performance. Figure 2 illustrates our architecture that mainly contains three parts: semantic information transfer, edge information transfer, and uncertainty-adaptive self-supervised learning. 3.1 Semantic Information Transfer Semantic stream \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5ais the basis of our architecture. Specifically, we use entropy adversarial learning method as our baseline, which has yielded promising results in UDA for semantic segmentation [41] and has served as the basis of more advanced methods [32, 42]. First, \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5ais trained by minimizing cross-entropy loss L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc60\ud835\udc52\ud835\udc5aover source data: L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc60\ud835\udc52\ud835\udc5a= \u2212 \u2211\ufe01 \u210e,\ud835\udc64 \u2211\ufe01 \ud835\udc50 \ud835\udc4c(\u210e,\ud835\udc64,\ud835\udc50) \ud835\udc60 log \ud835\udc43(\u210e,\ud835\udc64,\ud835\udc50) \ud835\udc60 (1) where \ud835\udc43\ud835\udc60\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36is the source semantic prediction map generate by \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a. Besides, to overcome the negative effect of class imbalance problem, lov\u00e1sz-softmax loss [3] is imposed on source data. Subsequently, \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5atakes a target image as input and output the semantic prediction map \ud835\udc43\ud835\udc61. 
Then, the weighted self-information map \ud835\udc3c\ud835\udc61\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36is calculated: \ud835\udc3c(\u210e,\ud835\udc64) \ud835\udc61 = \u2212\ud835\udc43(\u210e,\ud835\udc64) \ud835\udc61 log \ud835\udc43(\u210e,\ud835\udc64) \ud835\udc61 (2) To reduce the domain gap, in entropy adversarial learning, a discriminator \ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5ais trained to predict the domain type for the weighted self-information map, and \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5ais trained to fool \ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5a. However, the entropy adversarial learning treats all the target images equally, but there exist easy-adapt images with simple scenes and hard-adapt images with difficult scenes in the target domain. To better optimize these hard-adapt samples, we argue that harder samples need to contribute more loss during the training stage. Since the entropy map can reflect the confidence levels of the target predictions [41], we utilize the pixel-wise sum of target entropy map to measure the difficulty for each target image: \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \ud835\udc38(\u210e,\ud835\udc64) \ud835\udc61 = \u2212 1 log\ud835\udc36 \u00cd \ud835\udc50\ud835\udc43(\u210e,\ud835\udc64,\ud835\udc50) \ud835\udc61 log \ud835\udc43(\u210e,\ud835\udc64,\ud835\udc50) \ud835\udc61 E\ud835\udc61= \u00cd \u210e,\ud835\udc64 \ud835\udc38(\u210e,\ud835\udc64) \ud835\udc61 (3) where E\ud835\udc61is the pixel-wise sum of entropy map \ud835\udc38\ud835\udc61. If a target image has a low overall entropy value, it can be regarded as an easyadapt sample, otherwise it is a hard-adapt sample. Based on this assumption, we propose an entropy reweighting adversarial loss: L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5a= \u2212 \u2211\ufe01 \u210e,\ud835\udc64 log (1 \u2212\ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5a(\ud835\udc3c\ud835\udc60)) \u2212 \u0010 1 + (\ud835\udefcE\ud835\udc61)2\u0011 \u2211\ufe01 \u210e,\ud835\udc64 log (\ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5a(\ud835\udc3c\ud835\udc61)) (4) where \ud835\udc3c\ud835\udc60is the source weighted self-information map, and \ud835\udefcis a weight factor. Noteworthy, we adopt the square of entropy to enlarge the loss difference between easy-adapt samples and hardadapt samples. Through optimizing L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc60\ud835\udc52\ud835\udc5aand L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5a, the two domains are aligned at the semantic feature maps to some extent. Next, we explicitly use the low-level edge information to further facilitate the transfer of semantic features. 3.2 Edge Information Transfer In previous methods, edge information is entangled with other types of information in the segmentation network and is implicitly adapted through adversarial learning, which makes it difficult to use edge information for facilitating the transfer of semantic information. To explicitly use low-level edge information, we use an independent edge stream \ud835\udc3a\ud835\udc52\ud835\udc54to decouple edge information from \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a. In terms of the specific network architecture, we adopt a lightweight auxiliary network introduced in [37] for edge detection. 
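Before detailing the edge stream further, the entropy-based reweighting of Section 3.1 (Eqs. 2-4) can be sketched as follows. This is an illustrative reading only, not the released implementation: the discriminator is assumed to output per-pixel probabilities in (0, 1), the self-information map is taken channel-wise, and all function names are assumptions.

```python
# Illustrative sketch of the entropy-reweighted adversarial term (Eqs. 2-4).
import math
import torch

def weighted_self_information(P):
    """I = -P * log(P), computed channel-wise on a softmax map (Eq. 2)."""
    return -P * torch.log(P.clamp_min(1e-12))

def image_entropy_sum(P):
    """Normalized per-pixel entropy (Eq. 3), summed over each image."""
    C = P.shape[1]
    ent = -(P * torch.log(P.clamp_min(1e-12))).sum(dim=1) / math.log(C)
    return ent.sum(dim=(1, 2))  # one scalar E_t per image

def reweighted_adv_loss(D_sem, P_s, P_t, alpha=10.0):
    """Eq. (4) as written: hard (high-entropy) target images contribute a
    larger adversarial loss through the (1 + (alpha * E_t)^2) factor.
    Domain-label conventions vary across implementations."""
    I_s = weighted_self_information(P_s)
    I_t = weighted_self_information(P_t)
    E_t = image_entropy_sum(P_t)                       # shape (N,)
    w_t = 1.0 + (alpha * E_t) ** 2                     # image-level weight
    p_s = D_sem(I_s).clamp(1e-6, 1 - 1e-6)             # assumed (N,1,H,W)
    p_t = D_sem(I_t).clamp(1e-6, 1 - 1e-6)
    loss_s = -torch.log(1.0 - p_s).sum()
    loss_t = -(w_t.view(-1, 1, 1, 1) * torch.log(p_t)).sum()
    return loss_s + loss_t
```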
A gated convolutional layer is introduced in \ud835\udc3a\ud835\udc52\ud835\udc54to ensure that \ud835\udc3a\ud835\udc52\ud835\udc54only processes edge-relevant information. Specifically, \ud835\udc3a\ud835\udc52\ud835\udc54takes the output of the first convolutional layer of \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5aas input and aims to yield precise semantic boundary maps B\ud835\udc60and B\ud835\udc61. To this end, \ud835\udc3a\ud835\udc52\ud835\udc54is first trained by minimizing binary cross-entropy loss L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc52\ud835\udc54 -, -, Hongruixuan Chen, Chen Wu, Yonghao Xu, and Bo Du seg eg \uf04c con eg \uf04c \u2207\uf046 st \uf04c seg sem \uf04c adv sem \uf04c Semantic Stream Gsem Source image Source label Source semantic boundary Edge Stream Geg adv eg \uf04c Deg Entropy map Target image Source flow Target flow Concat Discriminator Pixel-wise sum Pixel-wise product \u03a3 Target prediction Source prediction Dsem Source edge map Target edge map \u03a3 Figure 2: The overall architecture of our SEDA architecture, which composed of three parts: 1) In semantic information transfer, the feature-level adversarial learning approach is applied to align the semantic distributions of source and target domains. Besides, the entropy map of target output is utilized to weight each target sample at the image-level; 2) In edge information transfer, the edge information is decoupled from \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5aand independently processed by \ud835\udc3a\ud835\udc52\ud835\udc54. Feature-level adversarial learning is also performed to align edge feature distributions of two domains. Then, the target semantic boundary map is used to guide the target semantic segmentation; 3) In uncertainty-adaptive self-supervised learning, the target prediction is weighted by entropy map at pixel-level and treated as pseudo-labels to train \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a. over the source domain. Ground truth of semantic boundaries can be directly generated from source semantic labels. Through optimizing L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc52\ud835\udc54, \ud835\udc3a\ud835\udc52\ud835\udc54is capable of generating precise semantic boundaries for source domain data. Compared to highlevel semantic features, the low-level edge features have a smaller domain gap. Moreover, due to optimizing L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5a, the inter-domain gap in the features of shallow layers is further reduced. Accordingly, despite merely supervised by source edge information, \ud835\udc3a\ud835\udc52\ud835\udc54can generate decent boundaries in the target domain. To produce higher quality semantic boundaries for target domain, similar to semantic stream, we also introduce a discriminator \ud835\udc37\ud835\udc52\ud835\udc54for predicting the domain labels for the edge feature maps, while \ud835\udc3a\ud835\udc52\ud835\udc54is trained to fool \ud835\udc37\ud835\udc52\ud835\udc54: L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc52\ud835\udc54 = \u2212 \u2211\ufe01 \u210e,\ud835\udc64 \u0000log \u00001 \u2212\ud835\udc37\ud835\udc52\ud835\udc54(H\ud835\udc60)\u0001 + log \u0000\ud835\udc37\ud835\udc52\ud835\udc54(H\ud835\udc61)\u0001\u0001 (5) where H\ud835\udc60and H\ud835\udc61are the edge features of source and target domains from the last layer of \ud835\udc3a\ud835\udc52\ud835\udc54. 
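A minimal sketch of the edge-stream supervision and the adversarial term in Eq. (5) is given below, assuming the edge discriminator outputs per-pixel probabilities in (0, 1) and that source boundary labels are precomputed binary maps; names and shapes are assumptions, not the released code.

```python
# Sketch of the edge-stream losses: BCE against source semantic boundaries
# plus the edge-feature adversarial term of Eq. (5).
import torch
import torch.nn.functional as F

def edge_supervision_loss(boundary_logits_s, boundary_gt_s):
    """Binary cross-entropy against source boundary maps (same shape, float)."""
    return F.binary_cross_entropy_with_logits(boundary_logits_s, boundary_gt_s)

def edge_adv_loss(D_eg, H_s, H_t):
    """Eq. (5) as written, aligning source/target edge-feature distributions.
    H_s, H_t: last-layer edge features; D_eg outputs probabilities in (0, 1)."""
    p_s = D_eg(H_s).clamp(1e-6, 1 - 1e-6)
    p_t = D_eg(H_t).clamp(1e-6, 1 - 1e-6)
    return -(torch.log(1.0 - p_s).sum() + torch.log(p_t).sum())
```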
Subsequently, we add the target edge feature map back to \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a, thereby guiding the semantic segmentation over the target domain. However, this way can only implicitly refine the semantic segmentation results, which cannot guarantee consistency between the boundary map and the predicted semantic segmentation map. To explicitly encourage the target semantic segmentation maps to align with the boundary map, we introduce an edge consistency loss L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54: \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 P\ud835\udc61= 1 \u221a 2 \r \r \r\u2207 \u0010 F \u2217arg max \ud835\udc50 \ud835\udc43\ud835\udc61 \u0011\r \r \r L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54 = \u00cd N+ \f \f \fP (N+) \ud835\udc61 \u2212B(N+) \ud835\udc61 \f \f \f (6) where P\ud835\udc61is the semantic boundary map computed by taking a spatial derivative on the target segmentation output, F is Gaussian filter, and N+ contains coordinates of all boundary pixels in both P\ud835\udc61and B\ud835\udc61. L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54 aims at ensuring that target semantic boundary pixels are penalized if there is a mismatch with boundaries predicted by \ud835\udc3a\ud835\udc52\ud835\udc54. By optimizing L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54, \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5acan be guided by the target boundary map, thereby generating more accurate target prediction map. Furthermore, since argmax operator is not differentiable, the Gumbel softmax trick [16] is adopted to approximate the partial derivatives of L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54 to a given parameter during the backward propagation stage. Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer -, -, 3.3 Uncertainty-Adaptive Self-Supervised Learning By means of using the low-level edge information to guide the transfer of semantic information, \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5acan generate more accurate semantic segmentation results over the target domain. Subsequently, considering the complex distribution of real-world data (target domain), we apply the self-supervised learning strategy to make our architecture further fit the distribution of the target domain. The standard self-supervised learning method [32, 60] sets a threshold to select high-confident target pseudo-labels. However, it is difficult to choose a suitable threshold: an over-large threshold could make the available target information very less, and an oversmall threshold could produce too many noisy labels, damaging the adaptation performance. Besides, the confidence levels of these selected pseudo-labels are also different. To address these issues, we present an uncertainty-adaptive self-supervised loss that adopts entropy to adaptively estimate uncertainty and reweight target prediction at pixel-level: L\ud835\udc62\ud835\udc4e\ud835\udc60\ud835\udc59= \u2212 \u2211\ufe01 \u210e,\ud835\udc64 \u0010 1 \u2212\ud835\udc38(\u210e,\ud835\udc64) \ud835\udc61 \u00112 \u2211\ufe01 \ud835\udc50 \u02c6 \ud835\udc4c\ud835\udc61 (\u210e,\ud835\udc64,\ud835\udc50) log \ud835\udc43(\u210e,\ud835\udc64,\ud835\udc50) \ud835\udc61 (7) where \u02c6 \ud835\udc4c\ud835\udc61is the one-hot semantic pseudo-labels. 
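To make Eq. (7) concrete, a minimal PyTorch sketch is given below; it assumes the pseudo-labels are the argmax of a previous-round target prediction, and the function name and reduction are illustrative choices rather than the released code.

```python
# Minimal sketch of the uncertainty-adaptive self-supervised loss (Eq. 7):
# pixel-wise cross-entropy with pseudo-labels, down-weighted by (1 - E_t)^2,
# where E_t is the normalized entropy map of the target prediction.
import math
import torch
import torch.nn.functional as F

def uasl_loss(P_t, pseudo_labels):
    """P_t: target softmax output (N, C, H, W); pseudo_labels: (N, H, W) long,
    assumed to be the argmax of an earlier-round target prediction."""
    C = P_t.shape[1]
    logP = torch.log(P_t.clamp_min(1e-12))
    E_t = -(P_t * logP).sum(dim=1) / math.log(C)     # normalized entropy map
    weight = (1.0 - E_t) ** 2                        # high confidence -> large weight
    ce = F.nll_loss(logP, pseudo_labels, reduction="none")  # -log P at the pseudo-label
    return (weight * ce).sum()
```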
Both L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5aand L\ud835\udc62\ud835\udc4e\ud835\udc60\ud835\udc59adopt entropy to measure uncertainty and reweight samples. However, L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5auses the sum of entropy to weight the target image at the image-level, but L\ud835\udc60\ud835\udc61uses entropy map to weight target prediction at the pixel-level. In L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5a, the image with larger entropy means harder to adapt and is assigned more weight. In contrast, the pixel with a smaller entropy value represents higher confidence and is more highlighted in L\ud835\udc62\ud835\udc4e\ud835\udc60\ud835\udc59. Finally, our complete loss function L is formed by all the loss functions: L = L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc60\ud835\udc52\ud835\udc5a+ \ud835\udf061L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5a+ \ud835\udf062L\ud835\udc60\ud835\udc52\ud835\udc54 \ud835\udc52\ud835\udc54+ \ud835\udf063L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc52\ud835\udc54 + L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54 + L\ud835\udc62\ud835\udc4e\ud835\udc60\ud835\udc59 (8) where \ud835\udf061 to \ud835\udf063 are trade-off parameters that weight the importance of the corresponding terms. And our optimization objective is to learn a target model G according to: G = arg min \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a min \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a \ud835\udc3a\ud835\udc52\ud835\udc54 max \ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5a \ud835\udc37\ud835\udc52\ud835\udc54 L (9) The full training procedure of our method consists of three steps: 1) jointly optimizing \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a, \ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5a, \ud835\udc3a\ud835\udc52\ud835\udc54, and \ud835\udc37\ud835\udc52\ud835\udc54by semantic segmentation loss, edge loss, and two adversarial learning loss over the source and target domains; 2) generating pseudo labels and correspond entropy maps by \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5aover the target domain; 3) optimizing \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5aby uncertainty-adaptive self-supervised loss. 4 EXPERIMENTS In this section, following the common protocol of previous works [15, 31, 41], we conduct experiments on the two-challenging syntheticto-real unsupervised domain adaptation tasks, i.e., GTAV\u2192Cityscapes, and SYNTHIA\u2192Cityscapes. Specifically, we use GTAV or SYNTHIA datasets with pixel-level annotations as the source domain and Cityscapes dataset without any annotations as the target domain. 4.1 Datasets Cityscapes is a real-world urban scene image dataset, which provides 3975 images, each of which has a resolution of 2048\u00d71024, collected from 50 cities in Germany [9]. Following the standard protocols [15, 31, 41], we use the 2975 images from Cityscapes dataset training set as the unlabeled target domain for training, and evaluate our method on the 500 images from the validation set. GTAV is a large synthetic dataset containing 24966 high quality labeled urban scene images with a resolution of 1914\u00d71052 from open-world computer games, Grand Theft Auto V [34]. The 19 compatible semantic classes between GTAV and Cityscapes are selected in the experiment. SYNTHIA is another synthetic urban scene dataset [35]. 
Following previous works, we use the SUNTHIA-RAND-CITYSCAPES subset that contains 9400 annotated images with a resolution of 1280\u00d7760 and shares 16 semantic classes with Cityscapes. In the training stage, we consider the 16 common classes with the Cityscapes. In the evaluation stage, 16and 13-class subsets are used to make quantitative assessment. 4.2 Implementation Details All the experiments in this paper 1 are implemented with Pytorch in a single NVIDIA GTX 1080Ti GPU. Limited by the GPU memory, during the training stage, the resolution of Cityscapes images is resized to 1024\u00d7512, and that of GTAV images is resized to 1280\u00d7720. The resolution of SYNTHIA images remains unchanged. For the sake of fair comparison, like most of the state-of-the-art methods, we do not use any data augment technique. For the semantic stream \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5a, following most of the state-of-theart methods, we use ResNet-101 architecture [13] with pretrained parameters from ImageNet [11]. The implementation of edge stream \ud835\udc3a\ud835\udc52\ud835\udc54follows the work [37]. \ud835\udc3a\ud835\udc52\ud835\udc54is mainly composed of three residual blocks, and each block is followed by a gated convolutional layer, ensuring \ud835\udc3a\ud835\udc52\ud835\udc54only processes edge related information. For the discriminator \ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5a, we apply the same architecture used in [41]. For the discriminator \ud835\udc37\ud835\udc52\ud835\udc54, we adopt a simple structure consisting of three 4\u00d74 convolutional layers with a stride of 2, and one 1\u00d71 convolutional layer. Except for the last layer, each convolutional layer is followed by a Leaky-ReLU with a slope of 0.2. To verify the robustness of our method, the hyper-parameters keep the same in both tasks. For our joint loss, the values of \ud835\udf061 to \ud835\udf063 are set as 1\ud835\udc52\u22123, 20, and 1\ud835\udc52\u22123, respectively. The weight factor \ud835\udefc in entropy reweighting adversarial loss is set to 10. In terms of the whole architecture training, the SGD optimizer with a learning rate of 2.5\ud835\udc52\u22124, momentum of 0.9, and a weight decay of 5\ud835\udc52\u22124 is utilized to train \ud835\udc3a\ud835\udc60\ud835\udc52\ud835\udc5aand \ud835\udc3a\ud835\udc52\ud835\udc54. Two Adam optimizers with a learning rate of 1\ud835\udc52\u22124 are used for training \ud835\udc37\ud835\udc60\ud835\udc52\ud835\udc5aand \ud835\udc37\ud835\udc52\ud835\udc54, respectively. 4.3 Performance Comparison In this subsection, we compare our proposed method with the existing state-of-the-art methods [18, 23, 24, 26, 27, 31, 32, 38, 39, 41, 43, 49\u201351]. For the sake of fairness, all the reported methods use ResNet-101 as the backbone network and the data augment 1The source code will be made publicly available. -, -, Hongruixuan Chen, Chen Wu, Yonghao Xu, and Bo Du Table 1: Evaluation results of semantic segmentation by adapting from GTAV to Cityscapes. The mechanism \u201cT\u201d, \u201cA\u201d, and \u201cS\u201d mean image translation, adversarial training, and self-supervised learning, respectively. The best results are highlighted in bold. GTAV\u2192Cityscapes Methods Mech. road sidewalk building wall fence pole light sign veg. 
terrain sky person rider car truck bus train mbike bike mIoU AdaSegNet [38] A 86.5 36.0 79.9 23.4 23.3 23.9 35.2 14.8 83.4 33.3 75.6 58.5 27.6 73.7 32.5 35.4 3.9 30.1 28.1 42.4 ADVENT [41] A 89.4 33.1 81.0 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 CLAN [31] A 87.0 27.1 79.6 27.3 23.3 28.3 35.5 24.2 83.6 27.4 74.2 58.6 28.0 76.2 33.1 36.7 6.7 31.9 31.4 43.2 AdaptPatch [39] A 92.3 51.9 82.1 29.2 25.1 24.5 33.8 33.0 82.4 32.8 82.2 58.6 27.2 84.3 33.4 46.3 2.2 29.5 32.3 46.5 PyCDA [27] S 90.5 36.3 84.4 32.4 28.7 34.6 36.4 31.5 86.8 37.9 78.5 62.3 21.5 85.6 27.9 34.8 18.0 22.9 49.3 47.4 CCM [24] S 93.5 57.6 84.6 39.3 24.1 25.2 35.0 17.3 85.0 40.6 86.5 58.7 28.7 85.8 49.0 56.4 5.4 31.9 43.2 49.9 FDA [50] TS 92.5 53.3 82.4 26.5 27.6 36.4 40.6 38.9 82.3 39.8 78.0 62.6 34.4 84.9 34.1 53.1 16.9 27.7 46.4 50.4 IntraDA [32] AS 90.6 37.1 82.6 30.1 19.1 29.5 32.4 20.6 85.7 40.5 79.7 58.7 31.1 86.3 31.5 48.3 0.0 30.2 35.8 46.3 FADA [43] AS 91.0 50.6 86.0 43.4 29.8 36.8 43.4 25.0 86.8 38.3 87.4 64.4 38.0 85.2 31.6 46.1 6.5 25.4 37.1 50.1 DAST [51] AS 92.2 49.0 84.3 36.5 28.9 33.9 38.8 28.4 84.9 41.6 83.2 60.0 28.7 87.2 45.0 45.3 7.4 33.8 32.8 49.6 BDL [26] TAS 91.0 44.7 84.2 34.6 27.6 30.2 36.0 36.0 85.0 43.6 83.0 58.6 31.6 83.3 35.3 49.7 3.3 28.8 35.6 48.5 TIR [18] TAS 92.9 55.0 85.3 34.2 31.1 34.9 40.7 34.0 85.2 40.1 87.1 61.0 31.1 82.5 32.3 42.9 0.3 36.4 46.1 50.2 LDR [49] TAS 90.8 41.4 84.7 35.1 27.5 31.2 38.0 32.8 85.6 42.1 84.9 59.6 34.4 85.0 42.8 52.7 3.4 30.9 38.1 49.5 UDACT [23] TAS 95.3 65.1 84.6 33.2 23.7 32.8 32.7 36.9 86.0 41.0 85.6 56.1 25.9 86.3 34.5 39.1 11.5 28.3 43.0 49.6 SourceOnly 64.6 27.3 76.9 19.1 21.1 27.0 32.1 18.5 81.2 14.5 72.4 55.4 21.6 62.9 29.4 8.4 2.4 24.2 35.0 36.5 Ours AS 94.0 61.8 85.8 29.2 32.5 35.4 40.6 43.3 87.2 43.9 84.4 63.8 29.1 88.7 46.0 49.9 0.0 43.7 49.9 52.8 Table 2: Evaluation results of semantic segmentation by adapting from SYNTHIA to Cityscapes. The mechanism \u201cT\u201d, \u201cA\u201d, and \u201cS\u201d mean image translation, adversarial learning, and self-supervised learning, respectively. We show the mIoU (%) of the 13 classes (mIoU*) excluding classes with \u201c*\u201d. \u201c-\u201d represents the method does not report the corresponding experimental result. The best results are highlighted in bold. SYNTHIA\u2192Cityscapes Methods Mech. road sidewalk building wall* fence* pole* light sign veg. 
sky person rider car bus mbike bike mIoU mIoU* AdaSegNet [38] A 81.7 39.1 78.4 11.1 0.3 25.8 6.8 9.0 79.1 80.8 54.8 21.0 66.8 34.7 13.8 29.9 39.6 45.8 ADVENT [41] A 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 84.1 57.9 23.8 73.3 36.4 14.2 33.0 41.2 48.0 CLAN [31] A 81.3 37.0 80.1 16.1 13.7 78.2 81.5 53.4 21.2 73.0 32.9 22.6 30.7 47.8 AdaptPatch [39] A 82.4 38.0 78.6 8.7 0.6 26.0 3.9 11.1 75.5 84.6 53.5 21.6 71.4 32.6 19.3 31.7 40.0 46.5 PyCDA [27] S 75.5 30.9 83.3 20.8 0.7 32.7 27.3 33.5 84.7 85.0 64.1 25.4 85.0 45.2 21.2 32.0 46.7 53.3 CCM [24] S 79.6 36.4 80.6 13.3 0.3 25.5 22.4 14.9 81.8 77.4 56.8 25.9 80.7 45.3 29.9 52.0 45.2 52.9 FDA [50] TS 79.3 35.0 73.2 19.9 24.0 61.7 82.6 61.4 31.1 83.9 40.8 38.4 51.1 52.5 IntraDA [32] AS 84.3 37.7 79.5 5.3 0.4 24.9 9.2 8.4 80.0 84.1 57.2 23.0 78.0 38.1 20.3 36.5 41.7 48.9 FADA [43] AS 84.5 40.1 83.1 4.8 0.0 34.3 20.1 27.2 84.8 84.0 53.5 22.6 85.4 43.7 26.8 27.8 45.2 52.5 DAST [51] AS 87.1 44.5 82.3 10.7 0.8 29.9 13.9 13.1 81.6 86.0 60.3 25.1 83.1 40.1 24.4 40.5 45.2 52.5 BDL [26] TAS 86.0 46.7 80.3 14.1 11.6 79.2 81.3 54.1 27.9 73.7 42.2 25.7 45.3 51.4 TIR [50] TAS 92.6 53.2 79.2 1.6 7.5 78.6 84.4 52.6 20.0 82.1 34.8 14.6 39.4 49.3 LDR [49] TAS 85.1 44.5 81.0 16.4 15.2 80.1 84.8 59.4 31.9 73.2 41.0 32.6 44.7 53.1 UDACT [23] TAS 93.3 54.0 81.3 14.3 0.7 28.8 21.3 22.8 82.6 83.3 57.7 22.8 83.4 30.7 20.2 47.2 46.5 53.9 SourceOnly 55.9 22.7 72.1 9.3 0.1 24.7 10.3 10.4 73.8 77.9 54.9 20.5 41.2 31.7 8.3 11.5 32.8 37.8 Ours AS 92.0 53.9 82.0 10.1 0.2 32.8 13.3 26.0 83.6 84.4 63.0 21.4 86.7 46.8 24.7 49.0 48.1 55.9 technique is not used. The per-class Intersection-Over-Union (IoU) and mean IoU (mIoU) are adopted as the evaluation criteria. GTAV to Cityscapes. Table 1 displays the comparison results from GTAV to Cityscapes. First, all domain adaptation methods Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer -, -, (a) (b) (c) (d) Figure 3: Qualitative results on the GTAV\u2192Cityscapes task. (a) Input images from Cityscapes. (b) Segmentation results without domain adaptation. (c) Segmentation results of the proposed method.(d) Ground truth. (a) (b) (c) Figure 4: Illustrations of produced boundary maps. (a) Input images from Cityscapes. (b) Semantic boundary maps without adversarial learning. (c) Semantic boundary maps with adversarial learning. outperform the model without domain adaptation (SourceOnly) by large performance margins. Then, our method has the best mIoU 52.8%, which is significantly better than that of the compared stateof-the-art methods. Compared with some adversarial training and self-supervised learning methods, such as CLAN and PyCDA, our method improves by 9.6% and 5.4% mIoU and has significant gains in almost all classes. Moreover, some methods also combine the (a) (b) Figure 5: The t-SNE visualization of embedded semantic features over the target domain. (a) Features without the guide of edge information. (b) Features with the guide of edge information. adversarial learning or image translation with self-supervised learning and achieve decent performance, like FDA, DAST, and UDACT. Compared to these methods, our approach still has a significant improvement. Figure 3 presents some qualitative results 2 produced by our methods. SYNTHIA to Cityscapes. Table 2 reports the comparison results on the SYNTHIA\u2192Cityscapes task. Like the previous work, we also report two mIoU metrics: 13 classes of mIoU* and 16 classes of mIoU. 
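For reference, the per-class IoU and mIoU protocol used in Tables 1 and 2 can be sketched as follows; the confusion-matrix helper and the class-subset handling for mIoU* are illustrative assumptions, not the evaluation code used in the paper.

```python
# Sketch of per-class IoU / mIoU computed from a confusion matrix; the 13-class
# mIoU* simply restricts the averaging to a subset of class indices.
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """pred, gt: integer label arrays of the same shape."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes,
                                                                num_classes)

def miou(conf, class_subset=None):
    """Returns per-class IoU and its mean; absent classes are simplified to 0."""
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    if class_subset is not None:          # e.g., the 13-class mIoU* subset
        iou = iou[class_subset]
    return iou, float(np.nanmean(iou))
```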
According to Table 2, it is obvious that our proposed method 2For more qualitative results, please see the supplementary materials. -, -, Hongruixuan Chen, Chen Wu, Yonghao Xu, and Bo Du Table 3: Ablation study of the proposed method in terms of mIoU (%) on the two tasks. Here, \u201cBaseline\u201d represents the original adversarial learning without entropy reweighting. Baseline L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc60\ud835\udc52\ud835\udc5a L\ud835\udc50\ud835\udc5c\ud835\udc5b \ud835\udc52\ud835\udc54 L\ud835\udc4e\ud835\udc51\ud835\udc63 \ud835\udc52\ud835\udc54 L\ud835\udc62\ud835\udc4e\ud835\udc60\ud835\udc59 GTAV SYNTHIA \u2713 45.1 41.2 \u2713 \u2713 46.3 41.8 \u2713 \u2713 \u2713 48.4 43.7 \u2713 \u2713 \u2713 \u2713 49.2 44.4 \u2713 \u2713 \u2713 \u2713 \u2713 52.8 48.1 Table 4: Parameter analysis of the weighting factor \ud835\udefcfor the entropy reweighting adversarial loss in terms of mIoU (%). GTAV\u2192Cityscapes \ud835\udefc 0 1 5 10 15 20 30 mIoU 45.1 45.2 45.8 46.3 46.0 45.3 43.0 Table 5: Comparison of the proposed UASL and standard self-supervised learning method with different thresholds \ud835\udc47in terms of mIoU (%). SL Type GTAV SYNTHIA SL \ud835\udc47= 0 51.3 47.1 \ud835\udc47= 0.5 52.2 47.6 \ud835\udc47= 0.9 52.5 47.4 UASL 52.8 48.1 can outperform other state-of-the-art methods on both 13-class and 16-class with mIoU of 55.9% and 48.1%. In summary, these results obtained on both tasks reveal the effectiveness and superiority of our architecture in learning domain-invariant representations for UDA in semantic segmentation. 4.4 Discussion We further report the ablation study results to demonstrate the performance contribution of each element in our proposed method in Table 3. It can be seen that each element contributes to the final success of the adaptation. The proposed method outperforms the \u201cBaseline\u201d by +7.7% and +6.9% with GTAV and SYNTHIA as the source, respectively. Specifically, our proposed image-level entropy reweighting methods can enhance the model adaptation ability in the hard samples, thereby improving performance by 1.2% and 0.6%. Besides, weight factor \ud835\udefcis an important parameter for the entropy reweighting adversarial loss, which controls the extra adaptation degree to hard samples, we evaluate the performance of the semantic stream with different \ud835\udefcon the GTAV\u2192Cityscapes task, as shown in Table 4. When \ud835\udefc=0, the loss is equal to the standard adversarial loss. As \ud835\udefc increases, the loss of high-entropy samples (i.e., hard samples) is enlarged and hard samples could get better adaptation. However, if \ud835\udefcis too large, the network could merely focus on the adaptation of G C S C 2.0 0.8 1.0 1.2 1.4 1.6 1.8 -Distance w/o DA w/ DA w/ DA+Edge w/o DA w/ DA Semantic Features: Edge Features: Figure 6: A-distance of semantic and edge feature representations on the two tasks. hard samples, which may damage the adaptation performance in easy samples. Subsequently, utilizing low-level edge information can further enhance adaptation performance and lead to a large performance boost. Since the small domain gap in low-level edge features can be narrowed to some extent by semantic adversarial learning, using an independent edge stream to generate semantic boundaries and align the target prediction maps with them can improve mIoU by 2.1% and 1.9%. After explicitly performing adversarial learning for edge information, the mIoU get further improved by 0.8% and 0.7%. 
This obvious performance improvement (+2.9% and +2.6%) clearly demonstrates our argument. Figure 4 illustrates some produced semantic boundary maps. Obviously, through adversarial learning, the small inter-domain gap in edge features get further narrowed, thereby producing more accurate semantic boundaries. Then, in Figure 5, we additionally give a contrastive analysis between semantic feature distributions without and with the guide of edge information by t-SNE [40], which reveals that our edge consistency loss can enforce the alignment of semantic features, leading to clearer and more discriminative clusters over the target domain. To further verify our argument, we display A-distance [2] of semantic features and edge features in Figure 6, which is a commonly used metric for measuring domain discrepancy. Since it is difficult to compute the exact A-distance, a proxy distance is defined as \u02c6 \ud835\udc51A = 2(1\u22122\ud835\udf16) [2], where \ud835\udf16is the generalization error of a classifier (SVM in this paper) trained on the binary problem of distinguishing samples between the source and target domains. From Figure 6, we could see that \u02c6 \ud835\udc51A on edge features is obviously smaller than \u02c6 \ud835\udc51A on semantic features, which proves that low-level edge information has a smaller inter-domain gap. After performing semantic adversarial learning, the \u02c6 \ud835\udc51A on semantic features is reduced. Moreover, the \u02c6 \ud835\udc51A on semantic features become smaller through the guide of edge information, which verifies that edge information can be explicitly used to facilitate the transfer of semantic information. Lastly, through UASL, the mIoU performance of the proposed method reaches 52.8% and 48.1%. In addition, we also compare UASL to the standard SL method with three commonly used thresholds: \ud835\udc47= 0 [32], \ud835\udc47= 0.5 [27], and \ud835\udc47= 0.9 [18]. The relevant results are Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer -, -, reported in Table 5. We could see that the threshold of generating pseudo-label significantly affects the performance of standard SL. And different domain adaptation tasks may have diverse optimal thresholds. It is difficult to choose a suitable threshold. In contrast, our UASL does not need to select a threshold. It can fully utilize target label information, adaptively pay large weights to high-confident pixels and suppress the effects of low-confident pixels, thereby obtaining better performance. 5 CONCLUSION In this paper, we present a new domain adaptation approach by leveraging the low-level edge information that is easy to adapt to guide the transfer of high-level semantic information. Specifically, we propose a semantic-edge domain adaptation architecture. The semantic stream adopts the existing entropy adversarial learning approach. To better adapt hard target samples, an entropy reweighting method is presented to make the network pay more attention to hard samples. The edge stream can produce semantic boundaries. To make the target predicted boundaries more precise, adversarial learning is performed on the edge stream. For the purpose of explicitly guiding the transfer of semantic features, an edge consistency loss function is presented to ensure the consistency between the target semantic map and boundary map. Lastly, an uncertainty-adaptive self-supervised learning is proposed to further fit the distribution of the target domain. 
The experimental results in the two UDA segmentation scenarios from synthetic to real demonstrate that our method obtains better results than the existing state-of-the-art works.", "introduction": "Semantic segmentation is a fundamental task in image processing, which aims to assign semantic labels to all pixels in a given im- age. Obtaining precise semantic segmentation results is significant for many vision-based applications [5, 33, 44, 46, 47, 55]. Nowa- days, deep learning-based models, especially convolutional neural networks (CNNs) [21, 22], have achieved promising progress in semantic segmentation. To train a good segmentation network, a large number of fully annotated images are often required. Nev- ertheless, collecting large-scale datasets with accurate pixel-level annotation is time-consuming [9]. To reduce labeling consump- tion, an alternative way is utilizing synthetic images with precise pixel-level annotations to train the deep models. These synthetic images and corresponding annotations can be automatically gen- erated by game engines, such as Grand Theft Auto V (GTAV) [34]. However, due to the large domain gap caused by the appearance difference between synthetic images and real images, the models trained on the synthetic images (source domain) inevitably face severe performance degradation on the real-world image datasets (target domain). To address this issue, unsupervised domain adaptation (UDA) methods have been introduced to reduce the domain gap between labeled source domain and unlabeled target domain. In terms of the semantic segmentation task, adversarial learning-based UDA approaches demonstrate good efficiency in aligning domain gaps in the feature-level [15, 31, 38, 41, 51]. All of these methods align high-level feature distributions of different domains since high- level features contain abundant semantic category information. However, the large inter-domain gap in the high-level semantic representations makes the accurate alignment difficult. As pointed by Luo et al. [31], directly aligning the high-level semantic features may lead to negative transfer and damage the adaptation perfor- mance in the originally well-aligned regions. To address this issue, they propose a local score alignment map to guide the transfer of semantic information. arXiv:2109.08912v1 [cs.CV] 18 Sep 2021 -, -, - Hongruixuan Chen, Chen Wu, Yonghao Xu, and Bo Du Source image Label Target image Adaptation result Adapt Adapt Decouple Source edge map Target edge map Small gap Guide Our result Large gap Figure 1: We propose to explicitly use low-level edge infor- mation with a small inter-domain gap for UDA in seman- tic segmentation. Compared with high-level semantic infor- mation, low-level edge information is easier to adapt; thus high-quality edge maps can be produced over the target do- main. Since the edge information can reflect the boundaries of semantic category, it can be used to guide the transfer of semantic information. In this paper, we provide a different viewpoint for addressing this issue. As argued in [30], in contrast to deep feature representations with large domain gaps and poor transferability, feature represen- tations extracted by shallow convolutional layers are often general. Then, according to the visualization results of CNN reported in [52], the feature representations extracted by CNN show strong hierarchical nature and the shallow layers highly respond to low- level edge and color information. 
Based on these arguments and observations, we argue that low-level edge feature representations have a smaller inter-domain gap in comparison with high-level se- mantic features. Intuitively, it could be also observed that although the synthetic image and real-world image are quite different in appearance, the object shapes of the same category are very simi- lar. Moreover, there exists a strong interaction between the edge information and the semantic information: the edge information can reflect the boundaries of semantic category. Consequently, as shown in Figure 1, we treat low-level edge information as the transferable factor that could be used to facili- tate transfer for high-level semantic information. Specifically, we present a semantic-edge domain adaptation (SEDA) architecture consisting of a semantic stream and an edge stream. In our archi- tecture, the edge information is decoupled from the mainstream semantic network and is explicitly processed by an independent stream. The semantic stream adopts the existing entropy adversarial learning method as the basis. To better adapt hard target images, we present an entropy reweighting method to assign larger weights to harder images. For edge stream, we train it with the source semantic boundaries and adapt the source and target edge features through adversarial learning. An edge consistency loss function is applied to encourage the semantic segmentation predictions to correctly align with the semantic boundaries. As the target results with more accurate boundaries are obtained, we use self-supervised learning (SL) to further fit the distribution of target domain. Furthermore, to overcome the issues of standard self-supervised learning, an uncertainty-adaptive self-supervised learning (UASL) is presented. The contributions of our work can be concluded as follows: (1) This paper proposes a semantic-edge domain adaptation architecture, which presents the first attempt at explicitly using edge information that has a small inter-domain gap for facilitating the transfer of high-level semantic information. (2) Two entropy-based reweighting methods are proposed to improve adversarial learning and self-supervised learning, enabling our architecture to learn better domain-invariant representations. (3) Experiments on two challenging benchmark adaptation tasks demonstrate that the proposed method can obtain better results than existing state-of-the-art methods." }, { "url": "http://arxiv.org/abs/2108.08157v1", "title": "Towards Deep and Efficient: A Deep Siamese Self-Attention Fully Efficient Convolutional Network for Change Detection in VHR Images", "abstract": "Recently, FCNs have attracted widespread attention in the CD field. In\npursuit of better CD performance, it has become a tendency to design deeper and\nmore complicated FCNs, which inevitably brings about huge numbers of parameters\nand an unbearable computational burden. With the goal of designing a quite deep\narchitecture to obtain more precise CD results while simultaneously decreasing\nparameter numbers to improve efficiency, in this work, we present a very deep\nand efficient CD network, entitled EffCDNet. In EffCDNet, to reduce the\nnumerous parameters associated with deep architecture, an efficient convolution\nconsisting of depth-wise convolution and group convolution with a channel\nshuffle mechanism is introduced to replace standard convolutional layers. 
In\nterms of the specific network architecture, EffCDNet does not use mainstream\nUNet-like architecture, but rather adopts the architecture with a very deep\nencoder and a lightweight decoder. In the very deep encoder, two very deep\nsiamese streams stacked by efficient convolution first extract two highly\nrepresentative and informative feature maps from input image-pairs.\nSubsequently, an efficient ASPP module is designed to capture multi-scale\nchange information. In the lightweight decoder, a recurrent criss-cross\nself-attention (RCCA) module is applied to efficiently utilize non-local\nsimilar feature representations to enhance discriminability for each pixel,\nthus effectively separating the changed and unchanged regions. Moreover, to\ntackle the optimization problem in confused pixels, two novel loss functions\nbased on information entropy are presented. On two challenging CD datasets, our\napproach outperforms other SOTA FCN-based methods, with only benchmark-level\nparameter numbers and quite low computational overhead.", "authors": "Hongruixuan Chen, Chen Wu, Bo Du", "published": "2021-08-18", "updated": "2021-08-18", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.LG", "eess.IV" ], "main_content": "In this section, we first elaborate on the efficient convolution that factorizes standard convolution into the combination of depth-wise convolution, group point-wise convolution, and channel shuffle. Based on the efficient convolution, we propose the very deep and efficient network EffCDNet and describe the key modules of our network in detail, including residual channel shuffle (RCS) unit, efficient ASPP module, RCCA module, and information entropy-based loss function. 2.1. Efficient Convolution As change detection networks become deeper and more complicated, the standard convolutional layer introduces an increasingly large number of parameters, thereby incurring high computational and storage costs. Considering an input feature map Fin \u2208RH\u00d7W\u00d7Cin and a produced output feature map Fo \u2208RH\u00d7W\u00d7Co, a standard convolutional layer (Fig. 1-(a)) parameterized by filters K \u2208RS\u00d7S\u00d7C\u00d7C can be expressed as follows: \u2208 \u2208 convolutional layer (Fig. 1-(a)) parameterized by filters K \u2208RS k\u00d7S k\u00d7Cin\u00d7Co can be expressed as follows: Fo(h, w, c) = \ufffd Fin \u2217K \ufffd (h, w, c) = S k \ufffd S k \ufffd Cin \ufffd K(i, j, = \ufffd S k \ufffd i=1 conv \ufffd i=1 \u2217 S k \ufffd j=1 volut \ufffd j=1 \ufffd Cin \ufffd n=1 tion \ufffd Cin \ufffd n=1 K(i, j, n, c)Fin(h + i, w + j, n). (1) \ufffd \ufffd \ufffd If Cin = Co = 512, S k = 3, a standard convolutional layer would have 2.36 million trainable parameters. In practice, many of these deep layers would be stacked, thereby causing huge parameter numbers. It is therefore necessary to explore efficient variants of convolution for change detection tasks. 4 (a) (b) (c) Figure 2: The illustrations of (a) group point-wise convolution, (b) information isolation problem in group convolution, and (c) group point-wise convolution with channel shu\ufb04e. Depth-wise separate convolution (Io\ufb00e & Szegedy, 2015a; Howard et al., 2017) is a widely used structure to reduce the network parameters, as shown in Fig. 1-(b), by splitting the standard convolution into a depth-wise convolution and a 1\u00d71 convolution, called point-wise convolution. 
In depth-wise convolution parameterized with \ufb01lters Kd \u2208RS k\u00d7S k\u00d71\u00d7Cin, each kernel extracts features on only one channel: Fd(h, w, c) = \u0010 Fin \u2217Kd\u0011 (h, w, c) = S k X i=1 S k X j=1 Kd(i, j, 1, c)Fin(h + i, w + j, c). (2) However, each channel of feature maps produced by depth-wise convolution is isolated from every other channel. To solve the problem of no information interaction occurring between channels, following depth-wise convolution, a point-wise convolution with \ufb01lters K p \u2208R1\u00d71\u00d7Cin\u00d7Co is used to fuse the information in each channel: Fo(h, w, c) = \u0010 Fd \u2217K p\u0011 (h, w, c) = Cin X n=1 K p(1, 1, n, c)Fd(h, w, n) (3) The above process, which factorizes standard convolution into two steps, can reduce the parameter numbers to 1/Co + 1/S 2 k of the standard convolution. After the standard convolution has been replaced by depth-wise separate convolution, point-wise convolution occupies the majority of the parameters. More speci\ufb01cally, in a depth-wise separate convolution with S K = 3, Cin = 512, and Co = 512, point-wise convolution contributes 98.3% of the number of parameters. Therefore, to further reduce parameters and construct more e\ufb03cient models, group point-wise convolution (Xie et al., 2017) is introduced. In group point-wise convolution, the input feature map and convolution \ufb01lters are divided into G groups: \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 K p = h K p 1 , K p 2 , \u00b7 \u00b7 \u00b7 , K p G i , K p g \u2208R1\u00d71\u00d7(Cin/G)\u00d7(Co/G) Fin = h Fin 1 , Fin 2 , \u00b7 \u00b7 \u00b7 , Fin G i , Fin g \u2208RH\u00d7W\u00d7(Cin/G) . (4) 5 Figure 3: The network architecture of the proposed E\ufb00CDNet. The encoder network consists of two very deep siamese streams and one e\ufb03cient ASPP module. The key components of the decoder network are a skip connection structure, a RCCA module and a classi\ufb01er. The encoder network receives two multi-temporal VHR images and outputs one multi-scale di\ufb00erence feature map. The decoder network combines the multi-scale di\ufb00erence feature map with a low-level di\ufb00erence feature map, which is generated by the two low-level feature maps output by RCS block I. The feature representation becomes more discriminative by means of the RCCA module. Subsequently, the classi\ufb01er outputs the change map. Finally, the parameters of the whole network are optimized using the information entropy-based loss function. Then, the \ufb01lters K p g of group g only operate on the corresponding g-th input feature maps Fin g : Fo g(h, w, c) = \u0010 Fin g \u2217K p g \u0011 (h, w, c) = Cin/G X n=1 K p g (1, 1, n, c)Fin g (h, w, n). (5) Subsequently, by concatenating the output feature map Fo g of each group in channel, we can obtain the \ufb01nal output feature map Fo = h Fo 1, Fo 2, \u00b7 \u00b7 \u00b7 , Fo G i \u2208RH\u00d7W\u00d7Co. By means of the grouping operation, the parameter numbers of point-wise convolution are reduced to 1/G. Fig. 2-(a) presents the diagram of group point-wise convolution. Nevertheless, considering that multiple group point-wise convolutions are stacked together, an information isolation problem exists between groups. As illustrated in Fig. 2-(b), it is obvious that the output produced by a certain group relates only to the input within the corresponding group, and that there is no information interaction between groups, which might seriously damage the network performance. 
To address this issue, we utilize a channel shu\ufb04e mechanism (Zhang et al., 2018) for cross-group information exchange. More speci\ufb01cally, for an output feature map of point-wise group convolution Fo \u2208RH\u00d7W\u00d7Co, it is \ufb01rst reshaped in the channel dimension Fo re \u2208RH\u00d7W\u00d7G\u00d7(Co/G). The last two dimensions of Fo re are then transposed and \ufb02attened back into the original channel dimension \u02dc Fo \u2208RH\u00d7W\u00d7Co. With the help of this simple channel shu\ufb04e mechanism, the information deposited in each group is reassigned, as shown in Fig. 2-(c). Based on the aforementioned depth-wise convolution, point-wise group convolution and channel shu\ufb04e mechanism, we can build a powerful and e\ufb03cient deep network for change detection tasks. 2.2. Network Architecture Utilizing the e\ufb03cient convolution as the basic unit, the overall network architecture of our proposed E\ufb00CDNet is presented in Fig. 3. The encoder of E\ufb00CDNet is designed to be very deep, enabling it to extract representative 6 and informative deep features in order to cope with the complex ground situations of VHR images. However, in addition to a large number of parameters, a very deep network often su\ufb00ers from training problems such as gradient vanishing and the derogation problem (He et al., 2016). Accordingly, a RCS unit that combines residual learning and e\ufb03cient convolution is utilized to deepen the encoder of our network. The RCS unit is introduced in section 2.2.1. More speci\ufb01cally, the encoder network of E\ufb00CDNet, with its two streams S T1 and S T2, takes a multi-temporal image-pair IT1 and IT2 as input. To ensure that the deep features extracted from two images are comparable (in the same feature space) and to reduce parameters, the two streams are designed as siamese architectures that have the same structure and are weight-shared. Each stream consists of a standard convolutional block and three RCS blocks. First, the low-level feature representation is abstracted from the input image by means of the standard convolutional block. Three RCS blocks stacked by multiple RCS units then progressively extract more and more abstract and global feature representations. The network depth of each stream reaches 102 layers. The constructed stream is deep enough to enable full extraction of informative and representative deep features from the input image. Despite its depth, the stream su\ufb00ers from neither training problems nor large parameter numbers owing to the presence of the RCS unit. After yielding two deep feature map FT1 and FT2 from input images by two siamese streams, a pixel-wise subtraction operation is performed on FT1 and FT2 for generating deep di\ufb00erence feature map D = FT1 \u2212FT2, where change information gets highlighted. In the next step, to better handle changed objects with di\ufb00erent sizes in a low computational cost way, an EASPP module is applied to D to produce multi-scale di\ufb00erence feature map Dms. The speci\ufb01c structure of EASPP module is illustrated in section 2.2.2. 
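As a concrete illustration of the efficient convolution described above (depth-wise convolution, group point-wise convolution, and channel shuffle), a minimal PyTorch sketch follows; the layer composition, group count, and names are assumptions and this is not the released EffCDNet code.

```python
# Sketch of an efficient convolution block: depth-wise 3x3 conv, grouped 1x1
# conv, and the ShuffleNet-style channel shuffle (reshape-transpose-flatten).
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """(N, C, H, W) -> (N, G, C/G, H, W) -> transpose -> flatten back; C % G == 0."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class EfficientConv(nn.Module):
    """Depth-wise conv followed by a grouped point-wise conv and channel shuffle.
    in_ch and out_ch are assumed divisible by `groups`."""
    def __init__(self, in_ch, out_ch, groups=4, stride=1):
        super().__init__()
        self.dw = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                            groups=in_ch, bias=False)
        self.pw = nn.Conv2d(in_ch, out_ch, 1, groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.groups = groups

    def forward(self, x):
        x = self.dw(x)                       # spatial filtering per channel
        x = self.pw(x)                       # grouped cross-channel mixing
        x = channel_shuffle(x, self.groups)  # exchange information across groups
        return self.act(self.bn(x))
```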
For the decoder of EffCDNet, unlike the existing UNet-like change detection architectures, we argue that the decoder in change detection tasks does not necessarily have to be as deep and complicated as the encoder: as long as the encoder fully extracts informative and representative features, and the features of the two classes are adequately separated, a shallow decoder can also recover changed objects very well. Guided by this argument, the decoder of EffCDNet is simple and lightweight. As shown in Fig. 3, in the decoder the multi-scale difference feature map D_{ms} is first upsampled by a factor of 4. Since D_{ms} contains more abstract and global change information but less concrete and local change information, some local information is required to generate changed objects with accurate boundaries. Considering that the low-level feature maps of early layers contain rich local information, the two low-level feature maps generated by the first RCS block are utilized. Their channel dimension is first reduced by two 1×1 convolutional layers, which prevents the low-level features from outweighing D_{ms}. Through a subtraction operation, a low-level difference feature map D_l is generated. D_{ms} and D_l are then concatenated and passed through a 1×1 convolutional layer to fuse the low-level and high-level change information. Next, to further separate changed and unchanged regions, an RCCA module, which employs a self-attention mechanism that exploits non-local similar feature representations to improve feature discriminability, is used to effectively enlarge the feature distance between changed and unchanged pixels. The motivation and description of the RCCA module are presented in Section 2.2.3. Although the decoder of EffCDNet is shallow and simple, the procedures outlined above enable the generated features to carry both global and local change information and to exhibit high inter-class discriminability, so that the classifier can easily produce precise change detection results with sharp boundaries. At the end of the decoder, we apply a classifier comprising several efficient convolutions and a standard 1×1 convolutional layer, followed by upsampling by a factor of 4, to produce the prediction result. Finally, with reference to the prediction result and the ground truth, a loss function is applied to optimize the network parameters. However, in change detection tasks there exist some confused pixels, such as non-building changes in building change detection and pseudo-changes caused by seasonal variation. Cross-entropy loss, the most widely used loss function in change detection, cannot optimize these pixels very well. To tackle this problem, we instead use Shannon information entropy to measure the uncertainty of each pixel in the prediction map and propose two information entropy-based loss functions. As the confused pixels show high entropy values, they receive more attention during training. More details about the information entropy-based loss functions are provided in Section 2.2.4.

Figure 4: The illustrations of (a) standard residual unit, (b) residual channel shuffle unit, (c) residual channel shuffle unit with stride=2 for down-sampling.
2.2.1. Residual Channel Shuffle Unit for a Very Deep Encoder

In order to fully extract informative and representative features from the input VHR images, we aim to design a very deep encoder for change detection. However, constructing a deep encoder is not as simple as stacking more layers. In addition to the large model size, a very deep encoder faces gradient vanishing and degradation problems during training (Ioffe & Szegedy, 2015b; He et al., 2016), which hamper convergence, damage the final performance, and can even result in lower accuracy than shallow networks if left unaddressed. Constrained by these training problems, many change detection networks are only about 20 to 30 layers deep, which prevents them from achieving better performance. To deal with the training problems, we introduce residual learning into our network (He et al., 2016). A standard residual learning module is illustrated in Fig. 4-(a). Although the residual module solves the training problems of very deep networks, all of its convolutional layers are standard convolutions, so it inevitably introduces numerous parameters as the network depth increases. Therefore, depth-wise convolution, group point-wise convolution, and channel shuffle are adopted to modify the residual module into the residual channel shuffle (RCS) unit. As shown in Fig. 4-(b), in the RCS unit the two 1×1 convolutional layers are replaced by group point-wise convolutions, reducing their parameters to 1/G (where G is the group number). To avoid the information isolation problem of group convolution, the first group point-wise convolution is followed by the channel shuffle mechanism, which reassigns the information in each group. Depth-wise convolution is then applied in place of the standard 3×3 convolutional layer, further reducing parameters. For down-sampling, to retain more information, an RCS unit with stride is used instead of a pooling operation. Fig. 4-(c) depicts the RCS unit with a stride of 2: in the shortcut path, an average pooling layer with kernel size 3×3 and stride 2 is applied, while in the residual path a 3×3 depth-wise convolution with stride 2 reduces the feature size; the down-sampled raw feature map and the down-sampled residual feature map are then concatenated to produce the final output. Benefiting from residual learning and efficient convolution, the RCS unit allows us to construct a very deep encoder for EffCDNet while avoiding the obstacles associated with deep architectures.
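A hedged PyTorch sketch of the stride-1 RCS unit of Fig. 4-(b) follows. The exact placement of batch normalization and activations is our assumption, and the stride-2 variant with the average-pooling shortcut is omitted for brevity; the `channel_shuffle` helper implements the reshape-transpose-flatten operation described in Section 2.1.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Reshape -> transpose -> flatten, reassigning information across groups."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)   # split the channel dimension into groups
    x = x.transpose(1, 2).contiguous()         # swap group and per-group channel axes
    return x.view(n, c, h, w)                  # flatten back to the original channel layout

class RCSUnit(nn.Module):
    """Illustrative stride-1 residual channel shuffle unit:
    group 1x1 conv -> channel shuffle -> 3x3 depth-wise conv -> group 1x1 conv + residual."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.groups = groups
        self.gconv1 = nn.Sequential(nn.Conv2d(channels, channels, 1, groups=groups, bias=False),
                                    nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.dwconv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1,
                                              groups=channels, bias=False),
                                    nn.BatchNorm2d(channels))
        self.gconv2 = nn.Sequential(nn.Conv2d(channels, channels, 1, groups=groups, bias=False),
                                    nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.gconv1(x)
        out = channel_shuffle(out, self.groups)   # cross-group information exchange
        out = self.dwconv(out)
        out = self.gconv2(out)
        return self.relu(out + x)                 # residual connection eases very deep training

x = torch.randn(2, 240, 64, 64)
print(RCSUnit(240)(x).shape)   # torch.Size([2, 240, 64, 64])
```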
2.2.2. Efficient ASPP Module for Multi-scale Feature Learning

In remote sensing images, ground objects of diverse sizes, from tiny neighbourhoods to large regions, give rise to changed objects at multiple scales. To handle multi-scale changed objects, an efficient ASPP (EASPP) module is proposed in EffCDNet. The ASPP module (Chen et al., 2018) is an effective multi-scale feature extraction module, which extracts multi-scale information by means of multiple parallel atrous convolution layers (Chen et al., 2014) with different dilation rates. Nonetheless, the standard ASPP module performs feature extraction on high-dimensional deep features, resulting in a huge number of parameters and high computational overhead. To address this issue, we propose an efficient ASPP module for learning multi-scale difference representations.

Figure 5: The illustration of the EASPP module.

Fig. 5 illustrates the proposed EASPP module, which comprises four atrous convolution branches with different dilation rates and one pooling branch. To reduce the number of parameters, the standard atrous convolution in the EASPP module is decomposed into a combination of depth-wise atrous convolution, group convolution, and channel shuffle. The dilation rates of the four depth-wise atrous convolutions are set to 1, 2, 4, and 8 to effectively extract features at different scales. In the pooling branch, a global pooling layer followed by a 1×1 group convolution with channel shuffle captures global information. Finally, the features containing change information at different scales are concatenated and passed through another 1×1 group convolution with channel shuffle to generate the final multi-scale difference features. Notably, in the proposed EffCDNet the EASPP module has only 0.44 million parameters, whereas a standard ASPP module would have 7.70 million. The EASPP module therefore notably reduces the parameter count and allows our network to extract multi-scale difference features with high efficiency.
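The EASPP module can be sketched as four depth-wise atrous branches plus a pooling branch, each followed by a group 1×1 convolution and channel shuffle. The listing below is an illustrative approximation of Fig. 5 (not the released implementation); the input and output channel sizes (960 and 256) follow the configuration reported later in Table 1, and normalization layers are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def channel_shuffle(x, groups):
    # same reshape -> transpose -> flatten as in the RCS unit sketch
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class EASPPBranch(nn.Module):
    """Depth-wise atrous conv followed by a group 1x1 conv with channel shuffle."""
    def __init__(self, c_in, c_out, dilation, groups=4):
        super().__init__()
        self.groups = groups
        self.dw = nn.Conv2d(c_in, c_in, 3, padding=dilation, dilation=dilation,
                            groups=c_in, bias=False)
        self.gpw = nn.Conv2d(c_in, c_out, 1, groups=groups, bias=False)

    def forward(self, x):
        return channel_shuffle(self.gpw(self.dw(x)), self.groups)

class EASPP(nn.Module):
    """Illustrative EASPP: four atrous branches (dilations 1, 2, 4, 8) + a global pooling branch,
    fused by a final group 1x1 conv with channel shuffle."""
    def __init__(self, c_in=960, c_out=256, groups=4):
        super().__init__()
        self.groups = groups
        self.branches = nn.ModuleList([EASPPBranch(c_in, c_out, d, groups) for d in (1, 2, 4, 8)])
        self.pool_conv = nn.Conv2d(c_in, c_out, 1, groups=groups, bias=False)
        self.fuse = nn.Conv2d(5 * c_out, c_out, 1, groups=groups, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        g = F.adaptive_avg_pool2d(x, 1)                              # global context
        g = channel_shuffle(self.pool_conv(g), self.groups)
        feats.append(F.interpolate(g, size=(h, w), mode='bilinear', align_corners=False))
        return channel_shuffle(self.fuse(torch.cat(feats, dim=1)), self.groups)

print(EASPP()(torch.randn(1, 960, 16, 16)).shape)   # torch.Size([1, 256, 16, 16])
```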
2.2.3. Criss-cross Self-attention Module for a Lightweight Decoder

As many studies have shown, highly discriminative features help the classifier achieve more precise change detection results (Zhang et al., 2020; Shi et al., 2020; Gong et al., 2016). Zhang et al. (Zhang et al., 2020) introduced a spatial attention module in the decoder network, which uses a large convolution kernel to fuse local information for each position, thereby generating an attention map to separate changed and unchanged regions. However, this spatial attention module only exploits the information in a local neighbourhood of each position, whereas features at non-local positions can also help to highlight the feature representation. As shown in Fig. 6-(a), for a position belonging to the top emerged building, not only can features around this position help to improve discriminability, but features belonging to other emerged buildings can also offer information that further distinguishes this pixel from unchanged ones; in turn, the feature representation at this position helps to improve the feature discriminability of the other changed buildings. The same holds for unchanged pixels (see the blue illustration in Fig. 6-(a)). In order to exploit this non-local information to increase feature discriminability, we introduce a self-attention mechanism that models the relationship between each position and all other positions, so that any two positions with similar features contribute mutual discriminability regardless of their distance. For more details on the self-attention mechanism, please refer to (Zhang et al., 2019).

Figure 6: The illustrations of (a) non-local mutual improvement, (b) recurrent information propagation, and (c) criss-cross self-attention module.

Nonetheless, in the standard self-attention module, the time and space complexity of computing the self-attention map are both O(HW × HW) (Zhang et al., 2019), which leads to high computational complexity and a huge GPU memory footprint when processing large-scale VHR images. Accordingly, in EffCDNet a criss-cross self-attention (CCA) mechanism (Huang et al., 2019) is introduced to model the dense relationships between pixels more efficiently. Fig. 6-(c) presents the CCA module. As in the standard self-attention module, three feature maps Q, K, and V are first generated from the input feature map by three 1×1 convolutional layers. To lighten the attention-map computation, for each pixel the CCA module only computes the relationship between that pixel and the pixels in its horizontal and vertical directions, rather than all remaining pixels:

D_{cc}(i, j) = Q(i, j) K_{cc}(i, j)^T,   (6)

where K_{cc}(i, j) ∈ R^{(H+W−1)×C'} is the set of features in the criss-cross path of position (i, j). A softmax operation is then applied to D_{cc} ∈ R^{H×W×(H+W−1)} to obtain the attention map A_{cc} ∈ R^{H×W×(H+W−1)}. Finally, for position (i, j), the set of features V_{cc}(i, j) ∈ R^{(H+W−1)×C} in the corresponding criss-cross path of V is collected and multiplied with A_{cc}(i, j) ∈ R^{(H+W−1)} to obtain the feature representation of the self-attention feature map H_{cc} ∈ R^{H×W×C} at position (i, j):

H_{cc}(i, j) = A_{cc}(i, j) V_{cc}(i, j).   (7)

Through this criss-cross operation, the time and space complexity of the attention-map computation is greatly reduced from O(HW × HW) to O(HW × (H + W − 1)). However, a single CCA module only captures information in the horizontal and vertical directions; information at the remaining positions outside the criss-cross path is not considered. Accordingly, to harvest dense information, the CCA module is applied twice in the network, forming two loops. As shown in Fig. 6-(b), for two positions (x1, y1) and (x2, y2) that share neither a row nor a column, in the first loop the information at (x1, y1) propagates to (x2, y1) and (x1, y2); in the second loop, the information of (x1, y1) encoded at (x2, y1) and (x1, y2) is captured by (x2, y2), so the information of (x1, y1) is obtained indirectly. Therefore, through this recurrent CCA (RCCA) module, similar features can be selectively integrated for each position to highlight its feature representation at minor computational cost. Moreover, to further reduce parameters and computation, the three 1×1 convolutional layers in the CCA module are replaced by point-wise group convolutions with channel shuffle.
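The criss-cross computation of Eqs. (6)-(7) can be written compactly with einsum. The sketch below is a readable, unoptimized rendering and not the authors' implementation: plain 1×1 convolutions are used in place of the group point-wise convolutions with channel shuffle, and the learnable residual scale follows the original CCNet design, which is our assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    """Each position attends only to the H+W-1 positions in its own row and column,
    instead of all H*W positions as in standard self-attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        c_mid = channels // reduction
        self.to_q = nn.Conv2d(channels, c_mid, 1)
        self.to_k = nn.Conv2d(channels, c_mid, 1)
        self.to_v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual scale (assumption)

    def forward(self, x):
        n, c, h, w = x.shape
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # affinities along the column (same j, all rows) and along the row (same i, all cols)
        e_col = torch.einsum('ncij,nckj->nijk', q, k)           # (N, H, W, H)
        e_row = torch.einsum('ncij,ncik->nijk', q, k)           # (N, H, W, W)
        # avoid counting the centre pixel twice: mask it out of the column branch
        eye = torch.eye(h, device=x.device, dtype=torch.bool).view(1, h, 1, h)
        e_col = e_col.masked_fill(eye, float('-inf'))

        attn = F.softmax(torch.cat([e_col, e_row], dim=-1), dim=-1)   # softmax over H+W entries
        a_col, a_row = attn[..., :h], attn[..., h:]

        out = torch.einsum('nijk,nckj->ncij', a_col, v) \
            + torch.einsum('nijk,ncik->ncij', a_row, v)
        return self.gamma * out + x                              # residual connection

# RCCA: apply the module twice (R = 2) so information propagates to all positions
cca = CrissCrossAttention(256)
feat = torch.randn(1, 256, 64, 64)
print(cca(cca(feat)).shape)   # torch.Size([1, 256, 64, 64])
```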
2.2.4. Shannon Information Entropy Loss for Network Training

Cross-entropy loss is the most commonly used loss function for training change detection networks. However, as noted above, there exist some confused pixels that are difficult for cross-entropy loss to optimize, such as non-building changes in building change detection tasks or pseudo-changes caused by seasonal variation. For these pixels, we argue that as training progresses the cross-entropy penalty is not large enough (see the middle interval of the blue line in Fig. 8), and, given the ambiguity of confused pixels between change and non-change, further optimization becomes difficult. As a result, these pixels often end up with similar probabilities of change and non-change in the prediction map.

Figure 7: The illustration of confused pixels. (a) Pre-change image. (b) Post-change image. (c) Ground truth. (d) Change map generated by the deep network. (e) Change probability map.

Figure 8: Curves of the cross-entropy loss function and the two information entropy-based loss functions.

Fig. 7 illustrates the confused pixels in the building change detection task. As shown in Fig. 7-(e), most pixels have a high probability for either the changed or the unchanged class, whereas the change probability of some confused pixels (marked with a red rectangle) is only about 0.5. Since these confused pixels have similar probabilities of change and non-change, from an information-theoretic perspective they are under-confident and carry relatively large uncertainty. Shannon information entropy (Shannon, 2001) can therefore be used to measure this uncertainty. Given a network prediction map P ∈ R^{H×W×C}, the entropy E(i, j) ∈ [0, 1] at position (i, j) is

E(i, j) = − (1 / log C) \sum_{c=1}^{C} P(i, j, c) log P(i, j, c),   (8)

where C is the number of output categories; in change detection, C = 2. Based on information entropy, two novel loss functions are proposed to help cross-entropy loss optimize these confused pixels. In the entropy map of an ideal network output, only the pixels near the boundary between changed and unchanged regions have high entropy, while the entropy of pixels inside a region should be close to 0. With this in mind, we propose the first loss function, called information entropy L1 (IEL1) loss:

L = \sum_{i=1}^{H} \sum_{j=1}^{W} \left( − \sum_{c=1}^{C} Y(i, j, c) log P(i, j, c) + α ||E(i, j) − B(i, j)||_1 \right).   (9)

Here, the first term is the cross-entropy loss; in the second term, B is an edge map produced from the ground truth by an edge detection algorithm. The second term uses an L1 loss to pull the entropy map towards the ideal entropy map, forcing confused pixels with high entropy towards low entropy, while the cross-entropy term applies a large penalty to wrongly classified pixels to guarantee that their probabilities move towards the correct category. Moreover, considering that confused pixels receive a relatively low penalty from cross-entropy loss but have a fairly high entropy value, the second loss function, called information entropy weighted (IEW) loss, uses the entropy to weight each pixel:

L = − \sum_{i=1}^{H} \sum_{j=1}^{W} \sum_{c=1}^{C} (1 + E(i, j)) Y(i, j, c) log P(i, j, c).   (10)

Through the IEW loss, confused pixels with high uncertainty contribute larger loss values, prompting the network to concentrate more on their optimization. Fig. 8 plots the curves of the cross-entropy loss function and the two information entropy-based loss functions; compared with cross-entropy, IEL1 and IEW impose an obviously larger penalty in the intermediate intervals. The effects of the information entropy loss functions are discussed in Sections 3.4.2 and 3.4.4.
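A hedged PyTorch sketch of Eq. (8) and of the two losses in Eqs. (9)-(10) follows; the mean reduction and the detachment of the entropy weight are our implementation choices, not specified in the text, and the edge map used here is a placeholder for the Canny-derived map B.

```python
import torch
import torch.nn.functional as F

def entropy_map(prob: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (8): normalised Shannon entropy per pixel; prob has shape (N, C, H, W)."""
    c = prob.shape[1]
    return -(prob * (prob + eps).log()).sum(dim=1) / torch.log(torch.tensor(float(c)))

def iel1_loss(logits, target, edge_map, alpha=1.0):
    """IEL1 (Eq. 9): cross-entropy + alpha * L1 distance between the entropy map and
    an edge map B derived from the ground truth (e.g. with a Canny detector)."""
    prob = F.softmax(logits, dim=1)
    ce = F.cross_entropy(logits, target)
    ent_term = (entropy_map(prob) - edge_map).abs().mean()
    return ce + alpha * ent_term

def iew_loss(logits, target):
    """IEW (Eq. 10): per-pixel cross-entropy weighted by (1 + entropy), so that
    under-confident (confused) pixels contribute larger gradients."""
    prob = F.softmax(logits, dim=1)
    weight = 1.0 + entropy_map(prob).detach()                    # treat the weight as a constant
    ce = F.cross_entropy(logits, target, reduction='none')       # (N, H, W)
    return (weight * ce).mean()

# toy usage: 2-class change detection logits and labels
logits = torch.randn(2, 2, 64, 64, requires_grad=True)
target = torch.randint(0, 2, (2, 64, 64))
edges = torch.zeros(2, 64, 64)                                   # stand-in for the Canny edge map
print(iel1_loss(logits, target, edges).item(), iew_loss(logits, target).item())
```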
3. Experimental Results and Analysis

3.1. Description of Datasets

To validate the effectiveness of our proposed network, we conduct detailed experiments on two large-scale open-source change detection datasets. The first, released in (Lebedev et al., 2018), is a season-varying dataset (SVCD) acquired from Google Earth; it includes seven season-varying image-pairs of 4725×2200 pixels and four season-varying image-pairs of 1900×1000 pixels, with spatial resolutions ranging from 0.03 m to 1 m per pixel. The authors cropped these 11 multi-temporal image-pairs into image patch-pairs of 256×256 pixels (Lebedev et al., 2018). The SVCD dataset thus contains 10,000 image-pairs for training, 3,000 for validation, and 3,000 for testing. The second dataset is a challenging building change detection dataset (BCD) (Ji et al., 2019), which covers an area affected by a 6.3-magnitude earthquake in February 2011 and rebuilt in the following years. It consists of one multi-temporal aerial image-pair of 32507×15354 pixels with a spatial resolution of 0.075 m per pixel. Considering their huge size, the two images are cropped into 1827 non-overlapping 512×512 image-pairs, of which we use 1096 (60%) for training, 365 (20%) for validation, and 366 (20%) for testing.

3.2. Experimental Setup

3.2.1. Parameter Settings

We implement our network in PyTorch. The specific network structure and configuration of EffCDNet are listed in Table 1.

Table 1: Specific network structure and configuration of EffCDNet

Part | Module | Depth | Layer | Repeat | Output shape | Configuration
Encoder | Conv Block | 3 | Conv | 1 | H×W×48 | 3×3, ReLU, BN
 | | | Conv | 2 | H×W×48 | 3×3, ReLU, BN
 | | | Max pooling | 1 | H/2×W/2×48 | 3×3, stride 2
 | RCS Block I | 12 | RCS unit | 1 | H/2×W/2×240 |
 | | | RCS unit | 2 | H/2×W/2×240 |
 | | | RCS unit | 1 | H/4×W/4×240 | stride 2
 | RCS Block II | 75 | RCS unit | 1 | H/4×W/4×480 |
 | | | RCS unit | 23 | H/4×W/4×480 |
 | | | RCS unit | 1 | H/8×W/8×480 | stride 2
 | RCS Block III | 12 | RCS unit | 1 | H/8×W/8×960 |
 | | | RCS unit | 2 | H/8×W/8×960 |
 | | | RCS unit | 1 | H/16×W/16×960 | stride 2
 | Diff | | Diff | 1 | H/16×W/16×960 |
 | EASPP | 4 | see Fig. 5 | 1 | H/16×W/16×256 |
Decoder | Upsample | | 4× upsample | 1 | H/4×W/4×256 | bilinear
 | Skip connection | 1 | Conv | 1 | H/4×W/4×48 | 1×1, ReLU, BN
 | | | Conv | 1 | H/4×W/4×48 | 1×1, ReLU, BN
 | | | Diff | 1 | H/4×W/4×48 |
 | | | Concat | 1 | H/4×W/4×304 |
 | Fusion | 1 | EffConv | 1 | H/4×W/4×256 | 1×1
 | RCCA | 2 | CCA (see Fig. 6-(c)) | 2 | H/4×W/4×256 |
 | Classifier | 7 | EffConv | 2 | H/4×W/4×256 | 3×3
 | | | Conv | 1 | H/4×W/4×2 | 1×1, softmax
 | | | 4× upsample | 1 | H×W×2 | bilinear

It can be seen from the table that the encoder of EffCDNet is very deep, reaching a depth of 106 layers, whereas the decoder is relatively shallow, with only 10 layers; the depth of the entire network reaches 116 layers. To train the network, we apply a stochastic gradient descent (SGD) optimizer with an initial learning rate of 1e-3, momentum of 0.9, and a weight decay of 1e-6. The number of training epochs is set to 200. After the 100th epoch, the learning rate is halved if the validation loss does not drop for 10 epochs. The batch sizes are 16 and 4 for the SVCD and BCD datasets, respectively.
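For reference, the optimizer and learning-rate policy just described can be expressed in a few lines of PyTorch. This is a sketch rather than the authors' training script; `train_one_epoch` and `validate` are hypothetical placeholders.

```python
import torch

model = torch.nn.Conv2d(3, 2, 3, padding=1)          # placeholder for the actual EffCDNet model

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=1e-6)
# halve the learning rate when the validation loss has not dropped for 10 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.5, patience=10)

for epoch in range(200):
    # train_one_epoch(model, optimizer)              # hypothetical training step, not shown
    val_loss = 1.0                                   # placeholder for validate(model)
    if epoch >= 100:                                 # LR halving is only enabled after the 100th epoch
        scheduler.step(val_loss)
```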
Note that, on both datasets, the proposed EffCDNet is trained from scratch without any pre-training technique. For the group number G of group point-wise convolution, we set G = 4 as a trade-off between accuracy and model size. For the recurrence number R of the RCCA module, the dense relationship between each pixel and all remaining pixels can already be captured with R = 2; considering performance and computational complexity jointly, R is therefore set to 2. For IEL1, we apply the Canny edge detector (Canny, 1986) to the ground truth to generate edge maps and set α = 1 to balance pixel category correctness and pixel certainty. Finally, on both datasets we train the network with the two information entropy-based loss functions and select the better of the two to generate the final change detection results. All experiments are conducted on an Intel Xeon E5-2620 v4 2.10-GHz processor and a single NVIDIA GTX 1080Ti GPU. The implementation of our method will be released through GitHub (https://github.com/I-Hope-Peace/EffCDNet).

3.2.2. Evaluation Criteria

To assess the accuracy of the obtained change maps, four commonly used evaluation criteria are adopted: precision rate (P), recall rate (R), overall accuracy (OA), and F1 score. P denotes the proportion of pixels classified into the change category that are real changed pixels, while R denotes the proportion of real changed pixels that are correctly detected. OA is the number of changed and unchanged pixels that are classified correctly, divided by the total number of pixels. Finally, the F1 score is a comprehensive measure that considers both P and R, defined as their harmonic mean: F1 = 2PR/(P + R); a simple computation of these criteria is sketched below. Furthermore, the number of parameters of each model and the corresponding number of floating-point multiply-adds (FLOPs) are used to quantitatively measure the complexity of the different models.
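The four criteria above follow directly from the confusion counts of a binary change map. The following NumPy sketch (ours, not part of the released implementation) computes them:

```python
import numpy as np

def change_detection_metrics(pred: np.ndarray, gt: np.ndarray):
    """Standard P, R, OA and F1 for binary change maps (1 = change, 0 = non-change)."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))      # false alarms
    fn = np.sum((pred == 0) & (gt == 1))      # missing alarms
    tn = np.sum((pred == 0) & (gt == 0))
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    oa = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return precision, recall, oa, f1

pred = np.random.randint(0, 2, (256, 256))    # toy prediction and ground truth
gt = np.random.randint(0, 2, (256, 256))
print(change_detection_metrics(pred, gt))
```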
3.3. Change Detection Results

To validate the effectiveness and superiority of the proposed EffCDNet, we conduct experiments on the two datasets and compare EffCDNet with seven comparison methods: FC-EF (Caye Daudt et al., 2018), FC-Siam-Diff (Caye Daudt et al., 2018), DSMS-FCN (Chen et al., 2019), FCN-PP (Lei et al., 2019), IUNet++ (Peng et al., 2019), DASNet (Chen et al., 2020c), and IFN (Zhang et al., 2020).

Figure 9: Change detection results obtained by different methods on the SVCD dataset. (a) Pre-change images. (b) Post-change images. (c) Ground truth. Change maps produced by: (d) FC-EF, (e) FC-Siam-Diff, (f) DSMS-FCN, (g) FCN-PP, (h) IUNet++, (i) DASNet, (j) IFN, and (k) EffCDNet. In the change maps, white areas indicate true changed pixels, black areas indicate true unchanged pixels, red areas represent unchanged pixels falsely detected as changed (false alarms), and green areas represent changed pixels falsely detected as unchanged (missing alarms).

Fig. 9 presents some change maps produced by EffCDNet and the seven comparison methods on the SVCD dataset. In the five illustrated examples, the types of change are varied, the scales of the changed regions differ, and there are many interferences caused by seasonal variation. As shown in this figure, all change maps obtained by our model are very close to the ground truth: the changed regions are complete and accurate, and pseudo-changes are well suppressed. For example, the change in the first row is the emergence of a road. Although the road is very narrow and the seasonal changes in vegetation are obvious, the proposed method still detects the road change without fragmentation and is not disturbed by the seasonal changes. In short, the visual interpretation of the change maps in Fig. 9 qualitatively reflects the effectiveness of the proposed EffCDNet. To confirm its superiority, Table 2 reports the quantitative results of the comparison models and EffCDNet on the SVCD dataset. First, the depth of EffCDNet is far greater than that of the comparison methods. As is evident, the proposed method outperforms all comparison methods on the four evaluation criteria. Limited by their simple network architectures, the accuracies of the two benchmark models (FC-EF and FC-Siam-Diff) are much lower than that of EffCDNet. Despite adopting a pyramid pooling module to extract multi-scale features, the early fusion strategy of FCN-PP is not beneficial for highlighting change information, leading to its poor performance. In (Zhang et al., 2020), by using a pre-trained encoder as well as deep supervision and an attention module in the decoder, IFN achieves promising results on the SVCD dataset. Compared with IFN, the proposed EffCDNet obtains a slightly better P and a considerable increase in R, which means that it detects more complete changed regions. Overall, EffCDNet improves on IFN by 1.18% in OA and 4.97% in F1.

Figure 10: Building change detection results obtained by different methods on the BCD dataset. (a) Pre-change images. (b) Post-change images. (c) Ground truth. Change maps produced by: (d) FC-EF, (e) FC-Siam-Diff, (f) DSMS-FCN, (g) FCN-PP, (h) IUNet++, (i) DASNet, (j) IFN, and (k) EffCDNet. The colour scheme is the same as in Fig. 9.

To further demonstrate the generality of EffCDNet, we conduct experiments on the BCD dataset. In this building change detection dataset, only building changes belong to the change class; other types of changes are pseudo-changes and should be classified into the non-change class. Fig. 10 presents some building change maps produced by our model and the seven comparison methods. From the figure, we can see that EffCDNet generates the most accurate change maps of all methods: for building changes of different types, numbers, and scales, EffCDNet generates changed regions with precise boundaries and high internal compactness.
For the other types of changes, regardless of their intensity, EffCDNet is capable of correctly classifying them into the non-change class. Although the state-of-the-art method IFN can detect complete changed regions, the internal compactness of its results is not high (see the first and third rows of Fig. 10-(i)), so it cannot compete with EffCDNet.

Table 2: Accuracy assessment of change detection results produced by various methods on the SVCD dataset

Method | P (%) | R (%) | OA (%) | F1 (%) | Depth
FC-EF | 81.00 | 73.34 | 94.82 | 77.00 | 20
FC-Siam-Diff | 88.62 | 80.29 | 96.45 | 84.25 | 20
DSMS-FCN | 89.34 | 82.40 | 96.85 | 85.73 | 28
FCN-PP | 91.77 | 82.21 | 97.03 | 86.73 | 16
IUNet++ | 89.54 | 87.11 | 96.73 | 87.56 | 24
DASNet | 92.52 | 91.45 | 98.07 | 91.93 | 19
IFN | 94.96 | 86.08 | 97.71 | 90.30 | 37
EffCDNet | 95.68 | 94.86 | 98.89 | 95.27 | 116

Table 3: Accuracy assessment of change detection results produced by various methods on the BCD dataset

Method | P (%) | R (%) | OA (%) | F1 (%) | Depth
FC-EF | 72.94 | 81.76 | 95.46 | 77.10 | 20
FC-Siam-Diff | 80.58 | 77.47 | 96.15 | 79.00 | 20
DSMS-FCN | 84.30 | 81.33 | 96.83 | 82.79 | 28
FCN-PP | 84.37 | 80.87 | 96.81 | 82.58 | 16
IUNet++ | 90.88 | 81.41 | 97.50 | 85.89 | 24
DASNet | 90.94 | 85.16 | 97.82 | 88.00 | 19
IFN | 89.10 | 85.52 | 97.67 | 87.27 | 37
EffCDNet | 92.54 | 90.06 | 98.39 | 91.29 | 116

Table 3 presents the quantitative results on the BCD dataset. Once again, the proposed EffCDNet achieves the best performance on all evaluation criteria. By using a pre-trained network, a deep supervision technique, and an attention module, IFN obtains decent performance. In comparison, by using a very deep encoder, an efficient multi-scale feature extraction module, and a light decoder with a self-attention mechanism, our EffCDNet achieves clear performance gains of 3.44%, 4.54%, 0.72%, and 4.02% in P, R, OA, and F1, respectively. On both datasets, the proposed EffCDNet generates highly precise change maps and outperforms all comparison methods on all evaluation criteria, which demonstrates its effectiveness and superiority. Going one step further, Fig. 11 plots the parameter numbers of EffCDNet and the comparison methods, along with their FLOPs, when processing a pair of bi-temporal images of size 3×256×256. From this figure, we can draw further conclusions. First, owing to their shallow and simple architectures, the two benchmark methods (FC-EF and FC-Siam-Diff) have very few parameters and a low computational cost; nevertheless, their accuracy cannot be fully guaranteed. By contrast, EffCDNet achieves the best performance on both datasets with a network depth of 116 layers, yet its model size is almost the same as the benchmark methods and its computational cost is only slightly higher than that of FC-Siam-Diff. Although IFN achieves very high accuracy on the SVCD dataset, its use of standard convolution in deep layers and its complicated decoder result in many parameters (28.12M) and a very high computational cost (120.40 GFLOPs).

Figure 11: The parameter numbers and computational amount of EffCDNet and comparison methods. FLOPs are estimated for one bi-temporal image-pair of size 2×3×256×256.

Figure 12: Learning curves for the four methods on the BCD dataset. (a) Training loss. (b) Validation loss. (c) Validation F1.

In contrast, by adopting efficient convolution in place of standard convolution, the parameters of
EffCDNet are only 6.4% of those of IFN, and its computational cost is only 14.9% of IFN's. Moreover, the proposed EffCDNet also achieves better performance than IFN on both datasets. In summary, EffCDNet successfully balances change detection performance against the computational burden of a deep architecture. Under the same hardware conditions, compared with the state-of-the-art methods, EffCDNet requires fewer computational resources yet achieves higher accuracy, which makes it better suited to an actual production environment.

3.4. Discussion

3.4.1. Network training

Due to the training problems caused by deep architectures, existing change detection networks are not deep enough, which limits their learning ability. Moreover, some complex networks, such as IFN, rely on a pre-trained network for parameter initialization and deep supervision techniques to alleviate training problems; even so, the network depth of IFN is only 37 layers. In comparison, through the residual learning introduced in the RCS unit, EffCDNet does not encounter training problems despite having a depth of 116 layers. Fig. 12 plots the learning curves of the proposed network and three comparison networks (FC-EF, FCN-PP, IFN) trained on the BCD dataset. As shown in Fig. 12-(a), on the training set the loss of EffCDNet drops fastest and becomes the smallest after the first few epochs, which demonstrates the superiority of the deep architecture with residual learning. Furthermore, EffCDNet exhibits very good generalization ability: as presented in Fig. 12-(b), on the validation set its loss gradually decreases and reaches the smallest value. Fig. 12-(c) further shows the validation F1 obtained by the four methods. The F1 of EffCDNet reaches a value close to 0.5 after the first epoch, surpasses 0.8 at the seventh epoch, and then rises steadily to 0.88 at the 50th epoch. In comparison, with the help of its training-facilitation techniques, the F1 of IFN also rises quickly after the first few epochs and then increases steadily, but it remains lower than that of EffCDNet.

3.4.2. Ablation Study

In this subsection, to ascertain the contribution of the key components of EffCDNet to the overall performance, we conduct a series of ablation experiments on both datasets. The results are reported in Table 4; here, "Backbone" denotes a network with the very deep encoder and shallow decoder but without EASPP, RCCA, or the information entropy losses.

Table 4: Performance contribution of key components in EffCDNet on both datasets

Dataset | Method | EASPP | RCCA | IEL1 | IEW | F1 (%)
SVCD | Backbone | | | | | 91.73
SVCD | +EASPP | ✓ | | | | 93.72
SVCD | +RCCA | | ✓ | | | 94.30
SVCD | EffCDNet | ✓ | ✓ | | | 94.63
SVCD | +IEL1 | ✓ | ✓ | ✓ | | 95.27
SVCD | +IEW | ✓ | ✓ | | ✓ | 95.16
BCD | Backbone | | | | | 87.66
BCD | +EASPP | ✓ | | | | 88.89
BCD | +RCCA | | ✓ | | | 89.25
BCD | EffCDNet | ✓ | ✓ | | | 89.94
BCD | +IEL1 | ✓ | ✓ | ✓ | | 90.88
BCD | +IEW | ✓ | ✓ | | ✓ | 91.29

Table 5: The F1 score of the symmetric architecture and our asymmetric architecture on both datasets

Architecture | Param. (M) | Layers | SVCD | BCD
Backbone-Ours | 1.48 | 110 | 91.73 | 87.66
Backbone-Sym | 1.91 | 111 | 91.20 | 86.80
It is apparent that, even without the two modules and the IE losses, our backbone achieves F1 scores of 91.73% and 87.66% on the SVCD and BCD datasets, respectively, which is comparable to the results of the state-of-the-art method IFN. We also modify our backbone into a symmetric architecture of similar depth and compare it with our asymmetric architecture in Table 5. The asymmetric architecture performs better even though the symmetric architecture has more parameters. This result, to some extent, verifies our claim that a symmetric architecture is not necessary and that a network with a very deep encoder and a simple decoder is better suited to binary change detection tasks.

Figure 13: The relationship between the performance of EffCDNet and the group number G on the SVCD dataset.

With the help of EASPP, the F1 increases by 1.99% and 1.23%, which indicates the significance of multi-scale change information extraction. By enlarging the feature distance between changed and unchanged pixels through non-local similar difference information, RCCA raises the F1 from 91.73% to 94.30% on the SVCD dataset and from 87.66% to 89.25% on the BCD dataset. When both modules are combined, the network attains F1 scores of 94.63% and 89.94% on the SVCD and BCD datasets, respectively. Finally, although EffCDNet trained with cross-entropy loss already achieves a very high F1 (94.63%) on the SVCD dataset, it still gains around 0.6% when trained with IEL1. On the BCD dataset this improvement is more pronounced: IEL1 and IEW contribute F1 increases of 0.94% and 1.35%, respectively.

3.4.3. Effects of Group Number

The group number G of the efficient convolution is an important parameter that controls the model size. We therefore evaluate the performance of EffCDNet with different values of G on the SVCD dataset, as shown in Fig. 13. By utilizing group convolution, the number of parameters is largely reduced. Owing to the channel shuffle mechanism, which ensures information exchange between groups, the performance of our approach only sees a marginal decline at first. However, when G increases to 6, the F1 score drops significantly. Therefore, G = 4 is a good trade-off between change detection performance and parameter count.

3.4.4. Effects of the RCCA module

In the decoder of EffCDNet, the RCCA module is used to enlarge the feature distance between changed and unchanged pixels, after which the features pass through the classifier to generate the final result. To qualitatively illustrate the effectiveness of the RCCA module, for the feature map before the classifier we compute its L2 norm to obtain a feature distance map and use the t-SNE algorithm (Maaten & Hinton, 2008) to show the distribution of changed and unchanged pixels.

Figure 14: Visualization of deep features before the classifier. (a) Pre-change images. (b) Post-change images. (c) Ground truth. (d) Feature distance maps produced by our backbone network. (e) Feature distance maps produced by our backbone network with the RCCA module. (f) t-SNE results of our backbone network. (g) t-SNE results of our backbone network with the RCCA module. In the t-SNE result, red indicates a changed pixel and blue indicates an unchanged pixel.
As shown in Fig. 14, compared with the feature distance map of the backbone network (Fig. 14-(d)), the map yielded by the network with the RCCA module (Fig. 14-(e)) is more discriminative: the discrepancy between changed and unchanged pixels is very obvious, as the changed pixels are clearly highlighted while the unchanged pixels are well suppressed. Moreover, the t-SNE results show that, with the help of the RCCA module, the changed and unchanged pixels are well clustered with higher internal compactness, and the two types of pixels are clearly separated from each other. Consequently, through the RCCA module, the feature representation at each position gains discriminability from similar feature representations at any other position, producing highly discriminative features.

Table 6: Comparison of the RCCA module and the standard self-attention module on the SVCD dataset. FLOPs are estimated for one image-pair of size 2×3×256×256.

Attention type | R | ΔFLOPs (G) | F1 (%)
— | 0 | — | 91.83
RCCA | 1 | 0.48 | 93.32
RCCA | 2 | 0.95 | 94.30
RCCA | 3 | 1.43 | 94.45
Standard | — | 10.34 | 94.39

Furthermore, we study the relationship between R and model performance and compare the RCCA module with the standard self-attention module. As reported in Table 6, when R = 1, capturing the relationship between each pixel and the pixels in its criss-cross path brings a 1.50% F1 improvement. Repeating the module twice to capture the dense relationships increases the F1 by a further 0.98%. However, increasing R from 2 to 3 yields only a slight gain; therefore, as a trade-off between computational cost and accuracy, R is set to 2 in EffCDNet. More importantly, when R = 2 the RCCA module is competitive with the standard self-attention module while reducing FLOPs by about 91.8%; that is, compared with the standard self-attention module, the RCCA module improves feature discriminability in a much more efficient way.

3.4.5. Effects of Information Entropy Loss

In Section 2.2.4, two information entropy-based loss functions are proposed to address the difficulty of cross-entropy loss in optimizing confused pixels. As Table 4 shows, both loss functions improve network performance. To further demonstrate the role of information entropy loss, Fig. 15 presents the change maps and corresponding entropy maps generated by EffCDNet trained with cross-entropy loss and with the two information entropy losses.

Figure 15: Change maps and entropy maps on the two datasets. (a) Change detection results and corresponding entropy maps on the SVCD dataset. (b) Change detection results and corresponding entropy maps on the BCD dataset. From top to bottom: multi-temporal image-pair and ground truth, followed by the change maps and corresponding entropy maps obtained by EffCDNet trained with cross-entropy loss, IEL1, and IEW, respectively.
As this figure shows, the entropy maps of EffCDNet trained with the information entropy losses are visually superior: only category boundaries have high entropy values, while the interiors of regions have very low entropy, and the obtained change maps match the ground truth very well. In contrast, in the entropy map produced by the network trained with cross-entropy, a proportion of the pseudo-change pixels have high entropy values, indicating that these pixels have high uncertainty and may not be well optimized during training; in the corresponding change maps, these high-entropy pixels belonging to the non-change class are misclassified as changed pixels. Consequently, compared with cross-entropy loss, the proposed information entropy-based loss functions are better able to optimize confused pixels during training, thereby improving the network's predictions for these pixels.

Table 7: The performance of FC-EF and IFN trained with different loss functions on the BCD dataset

Model | OA (%) | F1 (%)
FCEF-CE | 95.46 | 77.10
FCEF-IEL1 | 96.46 | 80.41
FCEF-IEW | 96.47 | 79.84
IFN-CE | 97.67 | 87.27
IFN-IEL1 | 97.80 | 87.83
IFN-IEW | 97.82 | 87.79

Besides, Table 7 reports the performance of the benchmark method FC-EF and the state-of-the-art method IFN trained on the BCD dataset with different loss functions. Clearly, when trained with the two information entropy losses, FC-EF and IFN achieve better change detection performance. The improvement for IFN is less significant because it already uses a deep supervision technique and a dice loss. The results in Table 7 indicate that the proposed information entropy losses are not only suitable for EffCDNet but are general loss functions that can aid the training of other change detection networks. In addition, this strategy of using information entropy to help optimize uncertain samples is not limited to change detection and can also be employed in other remote sensing image interpretation tasks.

4. Conclusion

Focusing on the major issues of existing FCN-based change detection networks, this paper presents the first attempt at designing an end-to-end efficient deep network, called EffCDNet. To obtain excellent change detection performance, EffCDNet is designed to be very deep, adopting an architecture with a very deep encoder and a lightweight decoder. To overcome the numerous parameters and prohibitive computational cost brought about by such a deep architecture, almost all standard convolution layers in EffCDNet are replaced by an efficient convolution, and the key components of EffCDNet are designed with both performance and computational overhead in mind. Consequently, although the proposed EffCDNet reaches a depth of 116 layers, far deeper than existing methods, its parameter count is only slightly higher than that of the benchmark models, while its computational cost is much lower than that of the state-of-the-art models. In addition, we present two information entropy-based loss functions to optimize EffCDNet; compared with cross-entropy loss, they better optimize confused pixels and thereby improve network performance.
Detailed experiments conducted on two challenging open change detection datasets demonstrate the effectiveness and superiority of our approach.", "introduction": "Remote sensing techniques can offer large-scale, long-term, and periodic observations of the ground surface (Bovolo & Bruzzone, 2015; Bergen et al., 2019). Using multi-temporal remote sensing images to detect earth surface changes (i.e., change detection) has become a hot topic in the remote sensing field (Singh, 1989; Zhu, 2017; Liu et al., 2019). Due to the rapid development of Earth observation technology, numerous optical sensors (e.g., IKONOS, QuickBird, GaoFen, and WorldView) have been developed that can provide a large number of multi-temporal images with high spatial resolution, which has expanded the potential applications of change detection. How to utilize these massive VHR images for precise change detection is significant for fields including land-cover and land-use change analysis, urban planning and development, precision agriculture, cadastral survey, and damage assessment (Xian et al., 2009; Luo et al., 2018; Shi et al., 2020; Brunner et al., 2010; Vetrivel et al., 2018; Wu et al., 2018). A large amount of literature has been published over the past decades with a focus on change detection (Hussain et al., 2013; Bruzzone & Fernàndez Prieto, 2000; Nielsen et al., 1998; Wu et al., 2014; Bovolo et al., 2008; Liu et al., 2015; Hoberg et al., 2015; Lei et al., 2014). However, the obvious geometric structures and complex texture information in VHR images pose significant challenges for these traditional methods, because they only explore "shallow" features, many of which are hand-crafted, unrepresentative, and lacking in robustness (Wu et al., 2021). Under these circumstances, deep learning was introduced to facilitate the extraction of high-level, hierarchical, and representative features for change detection. Recent years have witnessed the success of deep learning in a wide range of fields, including computer vision, natural language processing, and remote sensing image interpretation (Lecun et al., 2015; Zhang et al., 2016; Zhu et al., 2017; Liu et al., 2020). A series of change detection methods have accordingly been developed based on deep models (Gong et al., 2017; Du et al., 2019; Chen et al., 2019, 2020b). Gong et al. (Gong et al., 2017) used a superpixel segmentation-based method to generate reliable samples and further designed a DBN for difference representation learning in VHR images. Following recent developments in multi-scale feature learning (Szegedy et al., 2015), Chen et al. (Chen et al., 2019) presented a deep siamese multi-scale convolutional network and a corresponding pre-detection algorithm for change detection in VHR images. Regarding the change detection task as a sequential prediction problem, Lyu et al. (Lyu et al., 2016) utilized an improved long short-term memory (LSTM) network to learn land-cover change rules. By combining CNN and RNN, Mou et al. (Mou et al., 2019) developed an end-to-end convolutional recurrent neural network to learn spatial-spectral-temporal features for binary and multi-class change detection.
All the above well-established methods are patch-wise models, in which a corresponding neighbourhood region is first generated for each pixel of the multi-temporal images and then passed into deep models for feature extraction and change detection. Since each pixel is represented by a fixed-size patch, these models have a limited and fixed receptive field, which restricts their change detection performance. Moreover, the patch generation step introduces redundant computation; for large-scale multi-temporal images, the space and time costs of this step are enormous (Caye Daudt et al., 2018; Xu et al., 2020). To overcome the drawbacks of patch-wise methods, Daudt et al. (Caye Daudt et al., 2018) introduced the fully convolutional network (FCN) to the change detection task and presented two siamese FCN architectures. By replacing fully connected layers with 1×1 convolutional layers, an FCN can take multi-temporal images of arbitrary size and directly produce complete change maps without the patch generation step, thereby achieving better performance and higher efficiency (Shelhamer et al., 2017). Following the work of Daudt et al., many change detection studies have been based on FCN architectures. Lei et al. (Lei et al., 2019) proposed an FCN with a symmetric U-shape for landslide inventory mapping, in which a pyramid pooling module (Zhao et al., 2017) is utilized to capture multi-scale change information. For the same multi-scale feature extraction purpose, Chen et al. (Chen et al., 2019, 2020a) developed a deep siamese multi-scale FCN based on a multi-scale feature convolution unit and utilized a fully connected conditional random field (FC-CRF) to balance local and global change information. Inspired by the UNet++ architecture proposed for medical images, Peng et al. (Peng et al., 2019) presented an improved UNet++ (IUNet++), which learns multi-level features in VHR images and achieves decent change detection performance on an open-source dataset. In addition, by comprehensively exploiting pre-training, deep supervision, and an attention mechanism, Zhang et al. (Zhang et al., 2020) presented a novel image fusion network (IFN) that produces accurate change detection results. From the works discussed above, it can be concluded that building deeper and more complicated networks has become the primary tendency for improving change detection performance. However, this tendency has led to a sharp increase in both parameter numbers and computational cost. From FC-EF (Caye Daudt et al., 2018), the first FCN model presented for change detection, to IFN (Zhang et al., 2020), one of the state-of-the-art methods, the number of parameters has increased by 20.8 times and the computational cost by 18 times. Although hardware technology has developed rapidly, the computational cost of these state-of-the-art methods remains a serious obstacle to real-time processing of large-scale, massive multi-temporal data from various Earth observations on portable devices. Therefore, how to balance the conflict between performance improvement and the computational burden brought by deep architectures is worth considering. In terms of the specific network architecture, most previous FCN-based methods simply adopt a UNet-like architecture.
The most obvious characteristic of these networks is that the encoder and decoder are symmetric. For change detection, considering the diversity of changed objects and scenes, the encoder is required to extract representative and informative deep features from the multi-temporal images to the greatest extent possible, whereas the decoder merely needs to separate changed and unchanged pixels to generate change maps. The importance of the encoder and decoder in change detection tasks is therefore not equal; a symmetric structure may leave the encoder with insufficient parameters, resulting in poor performance, and the decoder with excessive parameters, adding computational cost without an equivalent performance improvement. In addition, there exist some "confused pixels" that are difficult to optimize with cross-entropy loss, for example a few non-building changes in the building change detection task and some pseudo-changes caused by seasonal variations. Due to insufficient optimization during training, these pixels often end up with similar probabilities of change and non-change, i.e., high uncertainty, causing false and missing alarms in the final result. These issues motivate us to propose a very deep and efficient end-to-end change detection network (EffCDNet). The network depth of EffCDNet reaches 116 layers while maintaining benchmark-network-level parameter numbers and low computational overhead. Our main contributions can be summarized as follows:

1. This paper presents the first attempt at designing a very deep but lightweight change detection network. Compared with other state-of-the-art methods, the proposed network achieves better performance on two open large-scale datasets with only benchmark-level numbers of parameters and computational overhead.

2. To balance the conflict between performance boost and the computational burden brought about by deep architecture, an efficient convolution consisting of depth-wise convolution, group point-wise convolution, and channel shuffle is used to replace the standard convolution. This efficient convolution allows us to deepen our network without incurring a high computational cost.

3. The self-attention mechanism is introduced for change detection tasks to separate changed and unchanged pixels. To further decrease the computational cost and make the network more efficient, a recurrent criss-cross attention module is applied instead of the standard self-attention module.

4. In order to further optimize confused pixels in change detection tasks, two novel information entropy-based loss functions are presented; these enable better optimization of confused pixels in the training stage, thus enhancing the network's performance.

Figure 1: The illustrations of (a) standard convolution, and (b) depth-wise separable convolution.

The remainder of this paper is organized as follows. Section 2 elaborates on the proposed network in detail. In Section 3, the experimental results and related discussions on two open datasets are presented. Finally, the conclusion of our work is drawn in Section 4." }, { "url": "http://arxiv.org/abs/2004.05745v1", "title": "Deep Siamese Domain Adaptation Convolutional Neural Network for Cross-domain Change Detection in Multispectral Images", "abstract": "Recently, deep learning has achieved promising performance in the change detection task.
However, the deep models are task-specific and data set bias often exists, thus it is difficult to transfer a network trained on one multi-temporal data set (source domain) to another multi-temporal data set with very limited (even no) labeled data (target domain). In this paper, we propose a novel deep siamese domain adaptation convolutional neural network (DSDANet) architecture for cross-domain change detection. In DSDANet, a siamese convolutional neural network first extracts spatial-spectral features from multi-temporal images. Then, through multiple kernel maximum mean discrepancy (MK-MMD), the learned feature representation is embedded into a reproducing kernel Hilbert space (RKHS), in which the distribution of two domains can be explicitly matched. By optimizing the network parameters and kernel coefficients with the source labeled data and target unlabeled data, DSDANet can learn a transferable feature representation that bridges the discrepancy between the two domains. To the best of our knowledge, it is the first time that such a domain adaptation-based deep network is proposed for change detection. The theoretical analysis and experimental results demonstrate the effectiveness and potential of the proposed method.", "authors": "Hongruixuan Chen, Chen Wu, Bo Du, Liangepei Zhang", "published": "2020-04-13", "updated": "2020-04-13", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "2.1. MK-MMD

Caused by plenty of factors, the probability distributions characterizing the source domain s and the target domain t are dissimilar.

Fig. 1: Overview of the CD architecture based on the proposed DSDANet.

Moreover, since only limited (or even no) labeled data are available in the target domain, it is challenging to construct a model that can match the two domains and learn a transferable representation. An efficient and common way is to combine the change detection (CD) error with a domain discrepancy metric. A widely used metric is the maximum mean discrepancy (MMD), a nonparametric kernel-based metric that measures the distance between two distributions in an RKHS; when the distributions of the two domains coincide and the RKHS is universal, the MMD approaches zero. Nonetheless, it is difficult to find an optimal RKHS, and the representation ability of a single kernel is limited. It is reasonable to assume that the optimal RKHS can be expressed as a linear combination of single kernels, so the multi-kernel variant of MMD, MK-MMD [4], is introduced. Considering a source data set X_s and a target data set X_t, MK-MMD is defined as

d(X_s, X_t) = || E[Φ_k(X_s)] − E[Φ_k(X_t)] ||_H,   (1)

where ||·||_H is the RKHS norm and Φ_k(·) is the feature map induced by the multi-kernel k, which is defined as a convex combination of n positive semi-definite kernels {k_u}_{u=1}^{n}:

K := { k = \sum_{u=1}^{n} β_u k_u : \sum_{u=1}^{n} β_u = 1, β_u ≥ 0 },   (2)

where each k_u is uniquely associated with an RKHS, and we assume the kernels are bounded. By leveraging diverse kernels, the representation ability of MK-MMD is improved. If the network can learn a domain-invariant representation that minimizes the MK-MMD between the two domains, it can be easily transferred to the target domain with sparsely labeled data.
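To make Eqs. (1)-(2) concrete, the following PyTorch sketch estimates MK-MMD with a fixed convex combination of Gaussian kernels. The bandwidths and uniform weights β_u are illustrative assumptions of ours; the paper instead learns β through the optimization described in Section 2.3.

```python
import torch

def gaussian_kernel(x, y, sigma):
    """RBF kernel matrix k(x_i, y_j) for two feature batches of shape (n, d) and (m, d)."""
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mk_mmd(xs, xt, sigmas=(1.0, 2.0, 4.0), betas=None):
    """Biased MK-MMD^2 estimate between source and target features using a fixed
    convex combination of Gaussian kernels; betas default to uniform weights."""
    if betas is None:
        betas = [1.0 / len(sigmas)] * len(sigmas)
    mmd2 = 0.0
    for beta, sigma in zip(betas, sigmas):
        k_ss = gaussian_kernel(xs, xs, sigma).mean()
        k_tt = gaussian_kernel(xt, xt, sigma).mean()
        k_st = gaussian_kernel(xs, xt, sigma).mean()
        mmd2 = mmd2 + beta * (k_ss + k_tt - 2 * k_st)
    return mmd2

src = torch.randn(64, 128)       # e.g. FC-layer features from source-domain patch-pairs
tgt = torch.randn(64, 128) + 1   # shifted target features -> larger MK-MMD
print(mk_mmd(src, tgt).item())
```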
Given a source data set Ds = {Xs, Ys} = \b\u0000xT1 si , xT2 si , ysi \u0001\tns i=1 with enough labeled data and a target domain Dt = n\u0010 xT1 ti , xT2 ti \u0011ons i=1 without labels, xTn si \u2208Rk1\u00d7k2\u00d7c is an image patch centered i-th pixel and ysi is the corresponding label of i-th pixel. For each image patch-pair in both domains, the spatial-spectral features f T1 i and f T2 i are extracted by cascade convolutional layers and max-pooling layers. After that, the absolute value of multi-temporal spatialspectral features difference is calculated. Since the two branches of DSDANet are weight-shared, the change information could be highlighted through this operation. As we all konw, deep features learned by CNN transition from general to speci\ufb01c by the network going deeper. Especially for the last few fully connected (FC) layers, there exists an insurmountable transferability gap between features learned from different domains. If we train a network in the source domain, it cannot be transferred to the target domain via \ufb01ne-tuning with sparse target labeled data. Therefore, the MK-MMD is adopted to make the network learn domaininvariant features from two domains. An intuitive idea is combining MK-MMD with the penultimate FC layer, which can directly make the classi\ufb01er adaptive to two domains. But considering a single layer may not cope with domain distribution bias, thus the MK-MMD is embedded into the two FC layers in front of the classi\ufb01er. Since we aim to construct a network that is trained on the source CD data set but also perform well on the target task, thus the loss function of DSDANet is L = LC (Xs, Xt) + \u03bb la+1 X l=la d2 k \u0000Dl s, Dl t \u0001 , (3) where LC (Xs, Xt) is CD loss on the source labeled data, la is layer index, dk \u0000Dl s, Dl t \u0001 means the MK-MMD between the two domain on the features in the l-th layer and \u03bb \u22650 denotes a domain adaptation penalty parameter. 2.3. Optimization In the training procedure, two types of parameters require to learn, one is the network parameters \u0398 and another is the kernel coef\ufb01cient \u03b2. However, the cost of MK-MMD computation by kernel trick is O \u0000n2\u0001 , it is unacceptable for deep networks in large-scale data sets and makes the training procedure more dif\ufb01cult. Therefore, the unbiased estimate of MK-MMD [4] is utilized to decrease the computation cost from O \u0000n2\u0001 to O (n), which can be formulated as \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 d2 k \u0000Dl s, Dl t \u0001 = 2 ns ns X i=1 gk \u0000zl i \u0001 gk (zi) = k \u0000hsl 2i\u22121, hsl 2i \u0001 + k \u0000htl 2i\u22121, htl 2i \u0001 \u2212k \u0000hsl 2i\u22121, htl 2i \u0001 \u2212k \u0000hsl 2i, htl 2i\u22121 \u0001 (4) where zl i = \u0000hsl 2i\u22121, hsl 2i, htl 2i\u22121, htl 2i \u0001 is a quad-tuple evaluated by multi-kernel k and hl is learned features in l-th layer. As for the kernel parameters \u03b2, the optimal coef\ufb01cient for each d2 k \u0000Dl s, Dl t \u0001 can be sought by jointly maximizing d2 k \u0000Dl s, Dl t \u0001 itself and minimizing the variance, which results in the optimization max k\u2208K d2 k \u0000Dl s, Dl t \u0001 /\u03c32 k, (5) where \u03c32 k is estimation variance. Eventually, this optimization \ufb01nally can be resolved as a quadratic program (QP) [4]. 
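The O(n) estimator in Eq. (4) only evaluates the kernel on quadruples of paired samples instead of on all sample pairs. A minimal sketch, using a single RBF kernel in place of the multi-kernel k (an illustrative simplification; the kernel weights β are learned as described above):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Single RBF kernel evaluated on matched rows: k(a_i, b_i)."""
    return np.exp(-((a - b) ** 2).sum(-1) / (2.0 * sigma ** 2))

def linear_time_mmd2(hs, ht, sigma=1.0):
    """Unbiased O(n) estimate of squared MMD, in the spirit of Eq. (4).

    hs, ht: (n, d) arrays of source / target features of one adapted layer.
    Samples are grouped into quadruples z_i = (hs_{2i-1}, hs_{2i}, ht_{2i-1}, ht_{2i}),
    so only O(n) kernel evaluations are needed instead of O(n^2).
    """
    n = (min(len(hs), len(ht)) // 2) * 2        # use an even number of samples
    hs, ht = hs[:n], ht[:n]
    hs1, hs2 = hs[0::2], hs[1::2]
    ht1, ht2 = ht[0::2], ht[1::2]
    g = (rbf(hs1, hs2, sigma) + rbf(ht1, ht2, sigma)
         - rbf(hs1, ht2, sigma) - rbf(hs2, ht1, sigma))
    return g.mean()                              # equals (2/n) * sum_i g_k(z_i)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hs = rng.normal(0.0, 1.0, size=(512, 64))
    ht = rng.normal(0.3, 1.0, size=(512, 64))
    print("linear-time MMD^2:", linear_time_mmd2(hs, ht))
```

In the full objective of Eq. (3), this quantity is computed for each of the two adapted FC layers and added to the change detection loss with weight λ.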
By alternatively adopting stochastic gradient descent (SGD) to update \u0398 and solving QP to optimize \u03b2, the DSDANet can gradually learn transferrable representation from source labeled data and target unlabeled data. By minimizing Eq. 3, the marginal distributions P (Xs) and P (Xt) of two domains become very similar, yet the conditional distributions P (Ys|Xs) and P (Yt|Xt) of two domains may still be slightly different. Thus, a very small part of target labeled data is selected to \ufb01ne-tune the classi\ufb01er of DSDANet. Compared with the enough labeled data in the source domain, the labeled data provided by the target domain is very limited, so this procedure can be treated as a semi-supervised learning fashion. 3. EXPERIMENTS 3.1. General Information The data set used as the source domain is WH data set captured by GF-2, as shown in Fig. 2. The size of the two images is 1000 \u00d7 1000 pixels with four spectral bands and they have a spatial resolution of 4m. (a) (b) (c) Fig. 2: The WH data set adopted as source domain. In the ground truth, red indicates change and green means nonchange. (a) (b) (c) (d) (e) (f) Fig. 3: Two data sets adopted as target domains. (a)-(c) HY data set. (d)-(f) QU data set. In the ground truth, red indicates change and green means non-change. The data sets adopted as the target domains are HY data set and QU data set, as shown in Fig. 3. The HY data set was also captured by GF-2 with a size of 1000 \u00d7 1000 pixels. The second target data set was acquired by QuickBird with four spectral bands and a spatial resolution of 2.4m denoted as QU. Both images in this data set are 358 \u00d7 280 pixels. Since the WH and QU were acquired by different sensors leading to diverse spatial resolutions and statistical characteristics, the data distributions of these two data sets are signi\ufb01cantly different. In the training procedure, we randomly select 10% samples (the particular number is 50416) from the source domain as labeled training samples. And we train the DSDANet with labeled source training samples and all target samples without labels. After training, we only select 200 labeled samples from each target domain for \ufb01ne-tuning the classi\ufb01er. Compared with the labeled source data, the labeled data provided by the target domain is sparse. To evaluate the proposed method, we compare it with CVA [5] and SVM. To further evaluate the effectiveness of (a) (b) (c) (d) (e) (f) Fig. 4: Binary change maps obtained by the proposed method and comparison methods on the WH. (a) CVA. (b) SVM. (c)(e) Variants of DSDANet. (f) DSDANet. (a) (b) (c) (d) (e) (f) Fig. 5: Binary change maps obtained by the proposed method and comparison methods on the QU. (a) CVA. (b) SVM. (c)(e) Variants of DSDANet. (f) DSDANet. MK-MMD, we compare the DSDANet to its variants that dont perform domain adaptation, including directly inferring target data without \ufb01ne-tuning (DSCNet-v1), directly training in the target labeled data instead of training in the source domain (DSCNet-v2) and \ufb01ne-tuning with target labeled data but not equipped with MK-MMD (DSCNet-v3). 3.2. Experimental Results The binary change maps obtained by different methods on the HY data set are shown in Fig. 4. It can be observed that the proposed model generates the best CD result with more complete changed regions and less noise. 
For the QU data set, even though the distributions of the two domain are signi\ufb01cantly different due to the diverse characteristics of the two sensors, the DSDANet still can generate an accurate binary change map. It implies that through embedding data distributions into the optimal RKHS and minimize the distance between them, the network is capable of learning domaininvariant representation from source labeled data and unlabeled target data and can be easily transferred from one CD data set to another. The quantitative results are listed in Table 1. Due to only providing very limited target labeled data that cannot contain all the kinds of changed and unchanged land-cover types, \ufb01ne-tuning without domain adaptation also performs not well. By contrast, the DSDANet achieves the best OA and KC on the two target data set. 4. CONCLUSION In this paper, a novel network architecture entitled DSDANet is proposed for cross-domain CD in multispectral images. Table 1: Accuracy assessment on binary change maps obtained by different methods on the two target data set Method HY QU OA KC OA KC CVA 0.9445 0.7171 0.8079 0.5352 SVM 0.8467 0.4565 0.8381 0.6285 DSCNet-v1 0.8751 0.4310 0.7060 0.1147 DSCNet-v2 0.8759 0.5610 0.8286 0.5404 DSCNet-v3 0.9279 0.6650 0.8297 0.5391 DSDANet 0.9618 0.8021 0.9016 0.7670 Through restricting the domain discrepancy with MK-MMD and optimizing the network parameters and kernel coef\ufb01cient, the DSDANet can learn transferrable representation from source labeled data and target unlabeled data, which can ef\ufb01ciently bridge the discrepancy between two domains. The experimental results in two target data sets demonstrate the effectiveness of the proposed DSDANet in cross-domain CD. Even though the data distributions of the two domains are signi\ufb01cantly different, the DSDANet only needs sparse labeled data of the target domain to \ufb01ne-tune the classi\ufb01er, which makes it superior in actual production environments. 5.", "introduction": "Change detection (CD) is one of the most widely used inter- pretation techniques in the \ufb01eld of remote sensing, and has been intensively studied in previous years [1]. Nonetheless, most traditional CD models only explore low-level features in multispectral images, which are insuf\ufb01cient for representing the key information of original images. Recently, deep learn- ing (DL) has been shown to be very promising in the \ufb01eld \u2217Coresponding Author (chen.wu@whu.edu.cn). This work was sup- ported in part by the National Natural Science Foundation of China under Grant 61971317, 41801285, 61822113 and 41871243. of computer vision and remote sensing images interpretation. Hence, a number of CD methods based on DL models are developed. [2,3]. However, the training process of these DL-based CD methods requires a lot of labeled data and there is no denying that the manual selection of labeled data is labor-consuming, especially for remote sensing images. Besides, deep networks are often task-speci\ufb01c, in other words, they have a relatively weak generalization. And due to several factors, including noise and distortions, sensor characteristics, imaging con- ditions, the data distributions of different CD data sets are often quite dissimilar. Thus, if we train a deep network on one multi-temporal data set with abundant labeled samples, it would suffer degraded performance after we transfer it to a new multi-temporal data set, which makes it unavoidable to manually label numerous samples in the new data set. 
Nowa- days, there are massive amounts of remote sensing images are available by satellite sensors, these images can provide diverse and abundant information for covered regions. There- fore, it is incentive to develop an ef\ufb01cient CD model that is trained on a data set (source domain) with enough labeled data but can be easily transferred to a new data set (target domain) with very limited (even no) labeled data. This can be de\ufb01ned as a domain adaption problem in change detection area. Considering the above issues comprehensively, in this pa- per, a novel deep network architecture called DSDANet is proposed for cross-domain CD. By incorporating a domain discrepancy metric MK-MMD into the network architecture, the DSDANet can learn transferrable features, where the dis- tribution of two domains would be similar. To the best of au- thors knowledge, it is the \ufb01rst time that such a deep network based on domain adaptation is designed for CD in multispec- tral images." } ], "Naoto Yokoya": [ { "url": "http://arxiv.org/abs/2311.11252v1", "title": "Submeter-level Land Cover Mapping of Japan", "abstract": "Deep learning has shown promising performance in submeter-level mapping\ntasks; however, the annotation cost of submeter-level imagery remains a\nchallenge, especially when applied on a large scale. In this paper, we present\nthe first submeter-level land cover mapping of Japan with eight classes, at a\nrelatively low annotation cost. We introduce a human-in-the-loop deep learning\nframework leveraging OpenEarthMap, a recently introduced benchmark dataset for\nglobal submeter-level land cover mapping, with a U-Net model that achieves\nnational-scale mapping with a small amount of additional labeled data. By\nadding a small amount of labeled data of areas or regions where a U-Net model\ntrained on OpenEarthMap clearly failed and retraining the model, an overall\naccuracy of 80\\% was achieved, which is a nearly 16 percentage point\nimprovement after retraining. Using aerial imagery provided by the Geospatial\nInformation Authority of Japan, we create land cover classification maps of\neight classes for the entire country of Japan. Our framework, with its low\nannotation cost and high-accuracy mapping results, demonstrates the potential\nto contribute to the automatic updating of national-scale land cover mapping\nusing submeter-level optical remote sensing data. The mapping results will be\nmade publicly available.", "authors": "Naoto Yokoya, Junshi Xia, Clifford Broni-Bediako", "published": "2023-11-19", "updated": "2023-11-19", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.LG" ], "main_content": "In this paper, we propose a human-in-the-loop map creation framework that enables high-resolution mapping at a country scale by efficiently collecting additional labels necessary for mapping out-of-distribution images based on the 2 Labeled data Land cover map Machine learning model Unlabeled data Human 1) Training 2) Mapping 4) Annotation 3) Identi\ufb01cation of failure cases Figure 1: Overview of the proposed human-in-the-loop mapping framework. pretrained model. The framework is an iterative process which consists of the following four steps: 1) training a land cover mapping model using labeled data, 2) mapping unlabeled data using the current model, 3) humans identifying failure cases, and 4) annotating selected unlabeled images to extend the labeled data. The overview of the framework is illustrated in Figure 1. 
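The iterative process just described can be written as a short orchestration loop. The sketch below is only a schematic: train_model, predict_maps, human_review, and annotate are hypothetical stubs standing in for the actual training, inference, visual inspection, and labeling work detailed in the following sections.

```python
def train_model(labeled):
    """Step 1: train (or retrain) the land cover model on the current labels (stub)."""
    return {"trained_on": len(labeled)}

def predict_maps(model, unlabeled):
    """Step 2: run the current model over the unlabeled imagery (stub)."""
    return {tile: "predicted_map" for tile in unlabeled}

def human_review(maps):
    """Step 3: humans flag tiles whose predictions clearly failed (stub)."""
    return list(maps)[:2]        # pretend the first two tiles look wrong

def annotate(tiles):
    """Step 4: manually label the flagged tiles (stub)."""
    return {tile: "manual_label" for tile in tiles}

def human_in_the_loop(labeled, unlabeled, rounds=2):
    """Iterate train -> map -> review -> annotate, growing the labeled pool."""
    model = None
    for _ in range(rounds):
        model = train_model(labeled)              # 1) training
        maps = predict_maps(model, unlabeled)     # 2) mapping
        failures = human_review(maps)             # 3) identification of failure cases
        new_labels = annotate(failures)           # 4) annotation
        labeled.update(new_labels)                # extend the labeled data
        unlabeled = [t for t in unlabeled if t not in new_labels]
    return model, labeled

if __name__ == "__main__":
    model, labels = human_in_the_loop({"oem_tile_1": "label"},
                                      ["jp_tile_a", "jp_tile_b", "jp_tile_c"])
    print(model, sorted(labels))
```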
We use the OpenEarthMap dataset as the labeled data and the Geospatial Information Authority of Japan (GSI) aerial images as the unlabeled data. In this study, to save computational cost, the second round of Step 2 in the cycle was the last process performed. This section presents the materials used for submeter-level land cover mapping in Japan. We employ public seamless aerial imagery from various data sources, which provides a comprehensive view of Japan at about 20 cm spatial resolution, although older data may be of lower quality. Handling image heterogeneity due to different acquisition times and sources is a challenge. We adopt the OpenEarthMap dataset as a basis for labeled data and make additional manual annotations to build a Japan-specific model. Government maps are used to evaluate buildings and agricultural land. We also describe the adopted deep learning model, the method used to select the additional unlabeled images for annotation, and the evaluation method for the mapping results. 2.1 Aerial Imagery Seamless aerial photographs published by the GSI (https://maps.gsi.go.jp/development/ichiran.html) were used as the input images for this study. Example images are shown in Figure 2(a). The seamless aerial photographs are mosaic images of the following data sources, arranged in order of priority: • Latest orthorectified aerial images from the Basic Electronic National Land Map (2007–present) • Aerial photographs of national forests provided by the Forestry Agency • Simplified aerial photographs (2004–present) • National Land Image Information (taken from 1988 to 1990, 1984 to 1986, 1979 to 1983, and 1974 to 1978) The dates of the photographs can be checked on the web map service provided by GSI (https://maps.gsi.go.jp/). Figure 2(b) shows the coverage of images taken after 2007 by year in different colors, while land areas in gray are covered by images taken before 2007. The spatial resolution for all areas is about 20 cm. The image quality of very old images from the National Land Image Information is very poor, as shown in the fourth-column images in Figure 2(a). Because the imagery is a mosaic of images taken at different times (year, season, and time of day), there are many areas where the boundaries between different images are clearly visible. Such cases are shown in the fifth and sixth columns in Figure 2(a). Figure 2: The aerial photographs published by the GSI. (a) Examples of aerial imagery and (b) the imagery acquisition years. Figure 3: Examples of RGB and labeled images from the OpenEarthMap data. Figure 4: Locations of additionally labeled aerial images.
Under the constraints of using such heterogeneous data, this study aims to explore a framework to conduct submeter-level country-scale mapping with as few annotations as possible, using existing high-resolution land cover labels. 2.2 OpenEarthMap Data OpenEarthMap serves as a benchmark dataset for high-resolution global land cover mapping, comprising 5,000 aerial and satellite images with manually annotated 8-class land cover labels and 2.2 million segments at a ground sampling distance of 0.25\u20130.5m. The dataset spans 97 regions in 44 countries, encompassing 6 continents. The 8 classes are bareland, rangeland, developed space, road, tree, water, agriculture land, and building. Each image has a size of 1024\u00d71024 pixels. Figure 3 shows seven samples of RGB and labeled images from the OpenEarthMap data. Leveraging OpenEarthMap, land cover mapping models demonstrate robust generalization capabilities worldwide and can be readily employed as off-the-shelf solutions across a diverse range of applications. OpenEarthMap contains 140 clean images from two prefectures in Japan, Tokyo and Kyoto. The data source is the GSI aerial imagery described in the Aerial Imagery section. In this study, we used the training set of the OpenEarthMap dataset, composed of 3,000 image samples. An additional 332 images with a size of 1024\u00d71024 pixels were manually annotated for this study. Labeling and quality control were performed by the same annotation team that labeled the OpenEarthMap dataset, so labeling accuracy and consistency were ensured. Of the additional images, 197 (60%) are out-of-distribution images that include wild images, and the remaining 135 (40%) are clean images. The out-of-distribution images were sampled from areas where an OpenEarthMap pretrained model did not perform well. Figure 4 shows the sampling points for the 332 images. To ensure a balanced accuracy assessment, we evenly selected the additional images across different prefectures. More details of the image selection process is described in the Selection of Failures and Annotation section. Building footprint from GSI Agriculture land footprint from MAFF Figure 5: Example of building footprints from GSI and agriculture land footprints from MAFF. 6 Developed Road Tree Water Agriculture Building Bareland Rangeland Input Ground truth OpenEarthMap Figure 6: GSI aerial images (top), failure predictions of the OpenEarthMap pretrained model (middle), and ground truth (bottom). 2.3 Reference Data For building and agricultural land, we use maps published by government agencies for evaluation purposes. For building footprints, we use data distributed by GSI as part of its basic map information4. GSI provides basic map information based on survey results (e.g., city planning maps) obtained by local governments. The degree to which GSI reflects the latest information in publicly available urban planning maps depends on the local government agencies. Urban planning maps are created and managed by each local government, and the updated frequency and level of detail in the information made public can differ among municipalities and regions. For example, in Tokyo, the maps are generally updated every five years. Agriculture land footprints are obtained from the Ministry of Agriculture, Forestry and Fisheries (MAFF)5, and were manually annotated from very high-spatial resolution (0.5m) remote sensing images. Each agriculture field has a minimum size of 200 m2 (400 m2 for Hokkaido). 
The first version was completed in 2019, with subsequent annual updates. We used the version downloaded in 2022. Figure 5 shows an example of building footprints from GSI and agriculture land footprints from MAFF selected from Tokyo. These reference maps are found to be highly accurate and densely comprehensive. Note that plastic houses for agriculture are included in both reference maps, while they are labeled as agriculture land in the OpenEarthMap dataset. 2.4 Selection of Failures and Annotation After training the U-Net-EfficientNet-B4 model using the OpenEarthMap data as illustrated in the second step of Figure 1, GSI aerial images from all over Japan were mapped with the OpenEarthMap model. In the subsequent step, the images to be annotated were selected by visually checking obvious failures from the mapping results. More specifically, the land cover mapping results were converted to the XYZ format, the mapping results were displayed on the aerial image with 70% transparency, and the overlay was compared on and off by humans to visually search for major errors in terms of pixel count. Approximately 500 sites were identified as erroneous, of which 197 sites were selected for labeling based on geographic distribution and diversity of land cover classes. Of those 197 sites, 115 sites were labeled manually, while the labels for the remaining 82 images were corrected by a simple rule-based process using the initial mapping results (e.g., converting one class to another). In addition, within a 1\u00d71 latitude/longitude tile, we selected two images with high entropy of class probabilities based on the initial mapping results, for a total of 135 sites, and then manually annotated all of them. Finally, these 332 (i.e., 197+135) images were labeled and then divided into training, validation, and test sets, comprising 199, 33, and 100 images, respectively. Figure 6 shows eight examples of mapping failures by the OpenEarthMap pretrained model, along with manually labeled ground truth. 4https://fgd.gsi.go.jp/download/mapGis.php 5https://open.fude.maff.go.jp/ 7 2.5 Model We used a U-Net model [27] with EfficientNet-B4 [28] as the backbone for mapping. This model has demonstrated both lightweight and high accuracy in comparative experiments with the latest models using the OpenEarthMap data. U-Net is a convolutional neural network architecture commonly used for semantic segmentation tasks in computer vision. It consists of an encoder and decoder with skip connections between corresponding layers, allowing the network to effectively capture both local and global features. The skip connections help preserve spatial information during downsampling and improve the network\u2019s ability to segment objects in an image accurately. Simply changing the U-Net\u2019s encoder to a more advanced backbone can lead to improved accuracy. EfficientNet is a family of convolutional neural network architectures designed to achieve better performance and efficiency by systematically scaling the model\u2019s depth, width, and resolution. It strikes a balance between accuracy and computational cost, making it highly efficient for various computer vision tasks. The EfficientNet family consists of several variants denoted by different scaling factors (B0, B1, B2, B3, B4, B5, B6, B7). We selected B4 to balance performance and computational cost. We used segmentation models PyTorch for implementation6. The training was performed for 200 epochs using cross-entropy as the loss function and Adam as the optimizer. 
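A minimal training setup consistent with this description, based on the segmentation_models_pytorch package; the learning rate and data-loading details are illustrative assumptions of ours, while the backbone, number of classes, loss, and optimizer follow the text.

```python
import torch
import segmentation_models_pytorch as smp

NUM_CLASSES = 8  # bareland, rangeland, developed, road, tree, water, agriculture, building

# U-Net with an EfficientNet-B4 encoder, as used for the mapping model.
model = smp.Unet(
    encoder_name="efficientnet-b4",
    encoder_weights="imagenet",   # assumption: ImageNet-pretrained encoder
    in_channels=3,                # RGB aerial imagery
    classes=NUM_CLASSES,
)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative learning rate

def train_one_epoch(loader, device="cuda"):
    """One epoch of standard supervised training; loader yields (image, mask) batches."""
    model.to(device).train()
    for images, masks in loader:                 # images: (B,3,H,W) float, masks: (B,H,W) int64
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        logits = model(images)                   # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
```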
2.6 Evaluation Metrics To evaluate the accuracy of land cover classification, the producer\u2019s accuracy (PA) of each class, its average accuracy (AA), and overall accuracy (OA) are used. For a given class c \u2208{1,2,...,K}, where K is the number of classes (K = 8 in this paper), we denote the number of true positives and false negatives as TPc and FNc, respectively. PA of each class (PAc), AA, and OA are defined by the following formulas. PAc = TPc TPc +FNc (1) AA = 1 K K \u2211 c=1 PAc (2) OA = \u2211K c=1 TPc \u2211K c=1 TPc +\u2211K c=1 FNc (3) PA, AA, and OA do not consider false positives in their evaluation. To assess classification performance while taking the false positives into account, we use an additional metric known as Intersection over Union (IoU). IoU of a given class c \u2208{1,2,...,K} is defined as: IoUc = TPc TPc +FNc +FPc , (4) where FPc is the number of false positives. The mean IoU (mIoU) is defined as: mIoU = 1 K K \u2211 c=1 IoUc. (5) 3 Results We performed both quantitative and qualitative assessments of the mapping results. For the quantitative evaluation, we employed ground truth labels from 100 manually annotated test images. In addition, we used reference data from the Geographical Survey Institute (GSI) of the Geospatial Information Authority of Japan building footprint and the Ministry of Agriculture, Forestry and Fisheries of Japan (MAFF) agriculture land footprint from five prefectures: Miyagi, Tokyo, Aichi, Osaka, and Fukuoka, considering image acquisition times and image quality. Note that we refer to the model trained on the original OpenEarthMap dataset as \u201cOpenEarthMap,\u201d and the model retrained on the extended OpenEarthMap dataset with the training set (i.e., 199 images) of the newly annotated data from Japan for this study as \u201cOpenEarthMap Japan\u201d. Table 1 shows class-specific producer\u2019s accuracy (PA), its average accuracy (AA), and overall accuracy (OA), and Table 2 shows class-specific Intersection over Union (IoU) and mean IoU (mIoU) based on the test data. For all classes, significant accuracy improvement is observed through retraining, leading to an approximately 16 percent point increase in OA, reaching 80%. Particularly, tree, water, agriculture land, and building 6https://github.com/qubvel/segmentation_models.pytorch 8 Table 1: Comparison of producer\u2019s accuracy (PA) and overall accuracy (OA) between pretrained and fine-tuning models evaluated on the test set of the OpenEarthMap Japan dataset. Model PA (%) AA OA Bareland Rangeland Developed Road Tree Water Agriculture Building (%) (%) OpenEarthMap 33.66 45.04 67.87 66.30 64.52 63.83 87.17 85.53 64.24 63.98 OpenEarthMap Japan 38.74 68.02 66.24 75.17 93.49 83.42 90.20 89.41 75.59 80.20 Table 2: Comparison of IoU between pretrained and fine-tuning models evaluated on the test set of the OpenEarthMap Japan dataset. Model IoU (%) mIoU Bareland Rangeland Developed Road Tree Water Agriculture Building (%) OpenEarthMap 25.19 37.73 45.65 56.91 59.77 30.71 43.51 70.22 46.21 OpenEarthMap Japan 30.29 56.35 50.78 64.82 85.94 77.03 64.57 73.91 62.96 achieve PA values exceeding 80%. On the other hand, bareland, rangeland, and developed space remain below 70% accuracy, primarily due to ambiguous classes that are difficult to distinguish using only RGB aerial images. Figure 7 shows the mapping result of Japan with enlarged examples from nine prefectures. 
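All of the metrics in Eqs. (1)-(5) can be computed from a single K×K confusion matrix; a minimal NumPy sketch (function names are ours):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=8):
    """cm[i, j] = number of pixels with reference class i predicted as class j."""
    idx = y_true.astype(int) * num_classes + y_pred.astype(int)
    return np.bincount(idx.ravel(), minlength=num_classes ** 2).reshape(num_classes, num_classes)

def accuracy_metrics(cm):
    """Producer's accuracy per class, AA, OA, IoU per class, and mIoU, cf. Eqs. (1)-(5)."""
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp          # reference pixels missed per class
    fp = cm.sum(axis=0) - tp          # pixels wrongly assigned to each class
    pa = tp / (tp + fn)               # Eq. (1)
    aa = pa.mean()                    # Eq. (2)
    oa = tp.sum() / cm.sum()          # Eq. (3)
    iou = tp / (tp + fn + fp)         # Eq. (4)
    miou = iou.mean()                 # Eq. (5)
    return pa, aa, oa, iou, miou

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 8, size=(256, 256))
    y_pred = np.where(rng.random((256, 256)) < 0.8, y_true,
                      rng.integers(0, 8, size=(256, 256)))
    pa, aa, oa, iou, miou = accuracy_metrics(confusion_matrix(y_true, y_pred))
    print(f"OA={oa:.3f}  AA={aa:.3f}  mIoU={miou:.3f}")
```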
It is visually evident that we were able to create highly detailed land cover maps at a submeter spatial resolution, as represented by details of roads and buildings. In the confusion matrix presented in Figure 8, in both models over 40% of bareland is being misclassified as agriculture land. In the OpenEarthMap model, approximately 34% of rangeland is misclassified as agriculture, but this error is significantly reduced after retraining. Additionally, the confusion between tree and water, which frequently appears in forests with the OpenEarthMap model, is greatly improved after retraining. Figure 9 presents visual mapping results before and after retraining. The first column shows a clean image randomly sampled for accuracy assessment, while the remaining five columns illustrate examples where the model without retraining faced challenges. The OpenEarthMap model performs well in the first column, but in the second through sixth columns, we observe instances where it encounters challenges due to differences in data distributions. Each column highlights a typical error that arises when applying the original OpenEarthMap model to GSI aerial imagery over Japan that is out of the distribution of the original OpenEarthMap dataset. In the second column, coastal areas are incorrectly classified as agricultural land. In the third column, we see the misclassification of rangeland and runways (classified as road in our scheme) at an airport as agricultural land. The fourth column reveals the misclassification of agricultural land as building in brightly lit images. In the fifth column, misclassifications are evident at the boundaries where images from different time periods were stitched together. Finally, the sixth column shows significant errors in images completely covered by trees. These errors occur because such scenes are not adequately represented in the OpenEarthMap dataset. With our addition of only 199 images for retraining, it is evident that these obvious mistakes were greatly improved. Table 3 shows classification accuracy using the reference data distributed by the government agencies for building and agriculture land. For building, we calculate metrics for the building layer of OpenStreetMap as a comparison. Retraining leads to improvements in accuracy for building, as indicated by both OA and IoU. Moreover, it demonstrates significantly better performance than OpenStreetMap, suggesting the substantial potential of large-scale mapping using deep neural networks for the automatic updating of open map data. On the other hand, regarding agriculture land, while there is a clear improvement in IoU, the OA has decreased. This indicates a decrease in the number of true positives, but the total count of false positives and false negatives has decreased more than the decrease rate of true positives. Figure 10 shows a GSI aerial image sampled from Aichi prefecture, containing both building and agriculture land, together with the error maps for these two classes. In these images, white, black, red, and green represent true positives, true negatives, false positives, and false negatives, respectively. As seen from this comparison, the main causes of false positives and false negatives in building are 1) land use changes, 2) misalignment between reference and aerial images, and 3) prediction errors. 
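The error maps of Figure 10 can be reproduced from a binary prediction mask and a binary reference mask using the color coding just described; a minimal sketch (the synthetic masks in the usage example are placeholders):

```python
import numpy as np

def error_map(pred, ref):
    """Build an RGB error map from binary masks (1 = class present).

    White = true positive, black = true negative,
    red = false positive, green = false negative.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    rgb = np.zeros(pred.shape + (3,), dtype=np.uint8)
    rgb[pred & ref] = (255, 255, 255)      # TP
    rgb[pred & ~ref] = (255, 0, 0)         # FP
    rgb[~pred & ref] = (0, 255, 0)         # FN
    # TN pixels stay black (0, 0, 0)
    return rgb

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64)) < 0.3             # synthetic reference footprint
    pred = ref ^ (rng.random((64, 64)) < 0.05)   # prediction with a few flipped pixels
    print(error_map(pred, ref).shape)            # (64, 64, 3)
```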
As shown in Table 3, the OA for building in the manually annotated test data was nearly 90%, indicating that approximately 10% of the false negatives were likely due to prediction errors, while the remaining 7% were likely attributed to changes in land cover and data misalignment. Regarding agriculture land, as evident from the images in the bottom row of Figure 10, large agricultural fields are detected with high accuracy. However, smaller fields fail to be detected, as indicated by the presence of small green segments. The success or failure in detecting larger segments can be attributed to two main factors: changes in land cover and prediction errors. As shown in Table 3, in the manually annotated test data, the OA for agriculture land reaches 90%, suggesting that around 10% of the false negatives can be attributed to prediction errors, while the primary factor for the remaining 11% is likely the changes in land cover. 9 Hokkaido Miyagi Tokyo Hokkaido Aichi Miyagi Niigata Tokyo Kyoto Aichi Kochi Niigata Scale bar for enlarged maps Hiroshima Kyoto Kochi Fukuoka Scale bar for country-scale map Hiroshima Fukuoka Figure 7: Submeter-level land cover mapping of Japan with enlarged examples from nine prefectures. Table 3: Comparison of pretrained and retrained models evaluated on the reference of building and agriculture land footprints. Model Building Agriculture IoU OA IoU OA OpenStreetMap 35.77 38.69 \u2014 \u2014 OpenEarthMap 60.12 81.51 63.26 80.91 OpenEarthMap Japan 60.32 82.98 64.64 78.96 10 (a) OpenEarthMap model (b) OpenEarthMap Japan model Figure 8: Normalized confusion matrices of the (a) OpenEarthMap model and (b) OpenEarthMap Japan model. Input Ground truth OpenEarthMap Japan OpenEarthMap Figure 9: Examples of mapping results. From top to bottom, the rows display input, results of the OpenEarthMap model, results of the OpenEarthMap Japan model, and ground truth. 4 Discussion This study presents an efficient and effective approach that utilizes high-resolution OpenEarthMap data to create country-scale submeter-level land cover maps for applications such as environmental monitoring, urban planning, 11 Figure 10: (top) Aerial imagery, (middle) error map of buildings, and (bottom) error map of agriculture land. White, black, red, and green represent true positives, true negatives, false positives, and false negatives, respectively. agriculture monitoring, and disaster management. The annotation cost (including time, labor, and computing) of remote sensing imagery poses a significant challenge in creating a country-scale land cover mapping. Adopting a human-in-the-loop framework represents a promising approach to utilize the OpenEarthMap data with a small amount of additional labeled data, to create a large-scale high-resolution land cover mapping countrywide at a low annotation cost. Here, we discuss the factors that affect the accuracy assessment, comparison with other land cover products of Japan, and the limitations and prospects of this work. 12 Table 4: The OpenEarthMap land cover types in relation to the JHR LULC Map version 21.03 classification categories. No. 
JHR LULC Map v21.03 OpenEarthMap 1 Water bodies Water 2 Built-up Building Developed space Road 3 Paddy field Agriculture land 4 Cropland 5 Grassland Rangeland 6 Deciduous broad-leaf forest Tree 7 Deciduous needle-leaf forest 8 Evergreen broad-leaf forest 9 Evergreen needle-leaf forest 10 Bare Bareland 11 Bamboo forest Tree 12 Solar panel Developed space 4.1 Factors that Affect the Accuracy Assessment Accuracy assessment of land-cover products is an important step in quantifying the uncertainty of the products before they are used for related real-world applications [29]. The accuracy assessment of the proposed OpenEarthMap Japan was based on class labels of the OpenEarthMap data, and the building and agriculture land footprints provided by the GSI and the MAFF, respectively, using two numerical metrics at the pixel level. Regarding the OpenEarthMap class labels, there might be some label noise that affects the metrics due to the fact that in cases where visual interpretation from single temporal imagery is hard, the disagreement between human annotators can affect the quality of the annotation (label). The lack of a classification category system that aligns the class labels of the OpenEarthMap data with that of the GSI aerial imagery data would have affected the accuracy assessment as well. Furthermore, there are time differences between the GSI images and that of the building and agriculture land footprints, this could have caused some land cover changes and that might affected the metrics. Another factor could be spatial misalignment between the aerial images and the footprints due to errors in the geometric correction of the aerial images. 4.2 Comparison with other Land Cover Products Since 2010, the JAXA Earth Observation Research Center (EORC) has been producing land use and land cover maps of Japan, called the JAXA High-Resolution Land Use and Land Cover Map (JHR LULC Map)7, at 10\u201350m spatial resolution using both optical sensors and synthetic aperture radar (SAR) images. The optical images are acquired by the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2) from the Advanced Land Observation Satellite (ALOS) and some selected from Sentinel-2 L1C, while the SAR data are from ALOS-2/Phased Array Lband Synthetic Aperture Radar-2 (PALSAR-2) [30, 31]. In this study, we provided a submeter-level LULC map with 8 classification categories (bareland, rangeland, developed space, road, tree, water, agriculture land, and building) at 0.25\u20130.5m spatial resolution using aerial images from GSI, which reflects Japan from 2007\u20132022 (see Figure 2), with a deep neural network model trained on the OpenEarthMap data achieving 80% overall accuracy. In comparison, Takahashi et al. [30] adopted a decision tree method to produce version 13.02 of the JHR LULC Map in 2013, reflecting Japan as of 2006\u20132011 at a spatial resolution of 50m with 9 classification categories (water, urban, paddy, crop, grass, deciduous forest, evergreen forest, bare land, and snow and ice), which the overall accuracy was 89.3%. In 2016, Sharma et al. [13] produced a 30m resolution land cover map of Japan (JpLC-30) of 2013\u20132015 which consists of 7 land cover categories (water bodies, deciduous forests, evergreen forests, croplands, barelands, built-up areas and herbaceous). The JpLC-30 map was produced with a random forests classifier which achieved an overall accuracy of 88.62%. 
More recently, version 21.03 of the JHR LULC Map (10m spatial resolution) which reflects the average cover in 2018-2020 was produced by Hirayama et al. [31] using a deep neural network method, and overall accuracy of 88.85% was achieved in 12 classification categories (water bodies, built-up, paddy field, cropland, grassland, deciduous broad-leaf forest, deciduous needle-leaf forest, evergreen broad-leaf forest, evergreen needle-leaf forest, bare, bamboo forest, and solar panel). Because the existing LULC map products of Japan have different spatial resolutions in different classification categories, it is difficult to have a fair comparison in terms of their overall accuracy. In Table 4, we juxtaposed the 12 7https://earth.jaxa.jp/en/data/2562/index.html 13 classification categories of the current JHR LULC Map version 21.03 with the 8 land cover types of the OpenEarthMap to construct semantically related classification categories among the classes of the two LULC maps. 4.3 Limitations and Prospect Although submeter-level land cover mapping with high accuracy is great, the limitation of this work is that the accuracy evaluation is only limited to the 8 classification categories (i.e., bareland, rangeland, developed space, road, tree, water, agriculture land, and building) of the OpenEarthMap, which fails to completely correspond to the 12 subdivision of Japan\u2019s land use and land cover types, called the JAXA High-Resolution Land Use and Land Cover Map, that has been producing by the JAXA Earth Observation Research Center since 2010 as shown in Table 4. We believe that the proposed framework has great potential to augment the JAXA Earth Observation Research Center map product of Japan. In future work, it is worth understanding how to verify and match the differentiated classification categories of different land use and land cover datasets, and how to extend the land types of the OpenEarthMap to all subdivisions of Japan. Also, another possibility is offered by applying the proposed framework to other countries, in particular, countries with limited available high-resolution remote sensing data such as African countries. 5 Conclusion In conclusion, this work presents the first submeter-level land cover mapping of the entire country of Japan. We used aerial images provided by GSI and classified them using the U-Net-EfficientNet-B4 model, achieving an overall accuracy (OA) of 80.20% and an average accuracy (AA) of 75.59% across eight land cover classes. Leveraging the OpenEarthMap dataset, a benchmark for global high-resolution land cover classification mapping, we introduced a human-in-the-loop mapping framework. This framework efficiently handles challenging scenes that cannot be accurately classified by the OpenEarthMap pretrained model, requiring only a small additional label dataset. We anticipate that our framework will serve as a valuable guideline for national-scale land cover mapping. Acknowledgments This work was supported in part by the Council for Science, Technology and Innovation (CSTI), the Cross-ministerial Strategic Innovation Promotion Program (SIP), Development of a Resilient Smart Network System against Natural Disasters (Funding agency: NIED), and JST, FOREST under Grant Number JPMJFR206S. Data availability The OpenEarthMap dataset is publicly available at https://zenodo.org/records/7223446. The label data of the OpenEarthMap are provided under the same license as the original RGB images, which varies with each source dataset. 
For more details, please see the attribution of source data here. The label data for regions where the original RGB images are in the public domain or where the license is not explicitly stated are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 international license. The Geographical Survey Institute (GSI) of the Geospatial Information Authority of Japan web portal for the aerial imagery used in the study can be viewed at https://maps.gsi.go.jp/development/ichiran.html. Here, the user can find a detailed help document to navigate the site and download data. For building footprint and agricultural land footprint reference data, the GSI and the Ministry of Agriculture, Forestry and Fisheries of Japan (MAFF) distribute them via the web portals https://fgd.gsi.go.jp/download/mapGis.php and https://open.fude.maff.go.jp/, respectively. Code availability The source code used in this study is publicly available at https://open-earth-map.org/code.html.", "introduction": "Land cover mapping is the process of categorizing and mapping the different types of land cover or land use, such as forests, urban areas, water bodies, and agricultural fields, that can be found in a given geographic area. It involves analyzing remote sensing data, such as satellite imagery, and assigning a land cover class label to each pixel. The goal is to provide valuable information about the spatial distribution and composition of land cover types, which is helpful for various applications, including environmental monitoring, urban planning, agriculture monitoring, and disaster management. Supported by advancements in data accessibility and data analysis techniques, significant efforts have been made in the creation of global-scale land cover maps. For example, the MODIS Land Cover Type Product (MCD12Q1) is a land cover map product with a resolution of 500 meters and 17 classes, created from the MODIS satellite data using techniques such as unmixing and machine learning [1]. It includes yearly maps from 2001 to 2020 that have been made publicly available. Over the past decade, the development of medium-resolution land cover maps, such as GlobeLand30 [2] and FROM-GLC30 [3], using Landsat satellite data, has significantly advanced. With the introduction of Sentinel-2 satellites, the resolution of global land cover maps has been increased to 10 meters. Representative 10-meter resolution global land cover products include FROM-GLC10 [4], Esri Land Cover Map, and WorldCover 10m [5]. In data processing, the mainstream approach involves using machine learning to classify the time-series spectral information at \u2217Corresponding author: yokoya@k.u-tokyo.ac.jp arXiv:2311.11252v1 [cs.CV] 19 Nov 2023 each pixel. Machine learning techniques such as maximum likelihood, decision trees, random forests [6], and support vector machines [7], among others, have been widely used for global land cover classification using pixel-wise features on meter-level satellite image data. Each country or region (e.g., Europe) has been developing detailed and accurate land cover maps. For instance, the European Union\u2019s CORINE Land Cover Map utilizes more reliable labels from field surveys, enabling a highly accurate and detailed classification with 44 classes [8, 9]. The European Urban Atlas [10] produced by the GMES Urban Services project covers more than 300 major European cities using SPOT 5 satellite imagery. 
The cities are mapped using 20 classes, of which 17 urban classes have a minimum mapping unit (MMU) of 0.25 ha and 3 non-urban classes of 1 ha. In Japan, Deng et al. produced a land cover map of 6 classes for the Gunma Prefecture by combining Landstat-5 TM, SPOT, and aerial imagery [11]. The Japan Aerospace Exploration Agency (JAXA) and the University of Tsukuba have regularly maintained land cover maps with a ground sampling distance of 10 to 30 meters [12]. There are three versions for the time periods of 2006\u20132011, 2014\u20132016, and 2018\u20132020, using time-series multispectral data from ALOS, Landsat-8, and Sentinel-2, respectively. For classification methods, kernel density estimation was used for the first and second versions; the latest version was made using convolutional neural networks and exploiting spectral-temporal features. Sharma et al. adopted a random forest technique with bootstrap aggregating (bagging) to produce the Japan 30-meter resolution land cover map with 7 land cover types, using Landsat-8 Operational Land Imager and Thermal Infrared Sensor scenes over Japan from 2013 to 2015 [13]. The improvement in resolution is crucial for obtaining the more detailed spatial information that is essential for environmental conservation and land use planning. The Chesapeake Bay Program in the US created 2013/14 and 2017/18 high-resolution land use and land cover maps that classify 54 detailed classes at 1-meter resolution by rule-based classification using National Agriculture Imagery Program (NAIP) images and above-ground height information derived from LiDAR data [14, 15]. Spatial patterns and features are informative for land cover classification of high-resolution imagery, and deep learning is a powerful tool for automating map production [16]. For example, the 1-meter resolution database named UrbanWatch containing images from 22 major cities across the continental United States, was generated by a teacher-student deep learning method using 2014 to 2017 NAIP airborne imagery [17]. [18] adopted a deep learning method consisting of a generator and a discriminator, known as generative adversarial network (GAN), to generate 1-meter urban green space (UGS-1m) maps of 34 major cities and areas in China with global urban boundaries [19] and Google Earth imagery [20]. The GlobalUrbanNet automatic multi-city mapping and analysis framework introduced by [21] adopted both a convolutional neural network and a vision transformer to generate land cover and land use mapping of global cities automatically, at a 0.5-m resolution based on open-source data from OpenStreetMap and very high-resolution images that were purchased or collected from various open-source sites. The generalization performance of deep learning models depends on the quantity and quality of labels. Various data has been developed for submeter-level image segmentation over the past decade, mostly for building detection [22, 23]. However, datasets for land cover mapping are much less available than those for building detection. Benchmark datasets for land cover mapping at the submeter level include DeepGlobe [24] and LoveDA [25], but there is little diversity of the regions and countries covered by these datasets, and the geographic coverage of models trained on them is limited. It has been difficult to obtain high-resolution training data covering the entire globe due to the high cost of image acquisition and labeling. 
OpenEarthMap solved this problem, improving global generalization performance and eliminating geographic unfairness in model performance [26]. The data was collected from existing benchmark data as well as from open data in data-poor regions in a geographically balanced manner, resulting in a consistent 8-class labeling of imagery covering 44 countries and 97 regions. The labeling was very fine-grained and resulted in the construction of a submeter-level land cover mapping model that is globally applicable. This paper presents a submeter-level land cover mapping of the entirety of Japan using OpenEarthMap. In particular, we present a human-in-the-loop deep learning framework for building a customized model when there are unseen images that are out of the distribution of the OpenEarthMap dataset. The country-scale submeter-level aerial imagery used in this study is a collection of images from multiple time periods, so it contains many \u201cwild\u201d cases, such as image mosaic boundaries, heavy shadows, and low-quality old images. Since there are many cases where simply applying the OpenEarthMap pretrained model does not work, we created a Japan-specific model by additionally labeling the scenes where the estimation of the model is poor, and retraining the model with the newly labeled data. We confirm the high performance of mapping through the use of both numerical and visual evaluations. Our map will be made publicly available." } ], "Zhen Zhao": [ { "url": "http://arxiv.org/abs/2402.00440v1", "title": "Optimal investment, consumption and life insurance decisions for households with consumption habits under the health shock risk", "abstract": "This paper investigates the optimal investment, consumption, and life\ninsurance strategies for households under the impact of health shock risk.\nConsidering the uncertainty of the future health status of family members, a\nnon-homogeneous Markov process is used to model the health status of the\nbreadwinner. Drawing upon the theory of habit formation, we investigate the\ninfluence of different consumption habits on households' investment,\nconsumption, and life insurance strategies. Based on whether the breadwinner is\nalive or not, we formulate and solve the corresponding Hamilton-Jacobi-Bellman\n(HJB) equations for the two scenarios of breadwinner survival and breadwinner's\ndemise, respectively, and obtain explicit expressions for the optimal\ninvestment, consumption, and life insurance strategies. Through sensitivity\nanalysis, it has been shown that the presence of health shocks within\nhouseholds has a negative impact on investment and consumption decisions, while\nthe formation of consumption habits increases household propensity for\nprecautionary savings.", "authors": "Zhen Zhao, Wei Liu, Xiaoyi Tang", "published": "2024-02-01", "updated": "2024-02-01", "primary_cat": "stat.AP", "cats": [ "stat.AP" ], "main_content": "This paper delves into the investment, consumption, and life insurance problem of a household over a finite time period [0, T], where T < \u221eis a constant. Consider a complete probability space (\u2126, F, F, P), where P is a probability measure, and F = {Ft}t\u2208[0,T] is a filtration generated by a standard Brownian motion W(t) and a non-homogeneous Markov chain \u03b7(t). The state space of {\u03b7(t), t \u2208[0, T]} is given by S = {0, 1, 2, \u00b7 \u00b7 \u00b7 , N}. 
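As a concrete illustration of such a health-status process, the sketch below simulates a non-homogeneous Markov chain with a time-dependent intensity matrix Q(t); the two-state restriction and the specific intensity values are illustrative assumptions of ours, not quantities from the paper.

```python
import numpy as np

def q_matrix(t):
    """Illustrative time-dependent transition intensity matrix Q(t) for S = {0, 1}.

    State 0 = healthy, state 1 = non-healthy; each row sums to zero.
    The shock intensity grows mildly with time to mimic increasing health risk.
    """
    q01 = 0.05 + 0.02 * t          # healthy -> non-healthy
    q10 = 0.30                     # recovery intensity
    return np.array([[-q01, q01],
                     [q10, -q10]])

def simulate_health(T=30.0, dt=1e-2, state0=0, seed=0):
    """Discretized simulation on [0, T]: in each small step, jump from state i
    to state j with probability q_ij(t) * dt."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    path = np.empty(n_steps + 1, dtype=int)
    path[0] = state0
    for k in range(n_steps):
        t = k * dt
        i = path[k]
        rates = q_matrix(t)[i].copy()
        rates[i] = 0.0                          # keep only off-diagonal intensities
        jump_probs = rates * dt
        u = rng.random()
        path[k + 1] = i
        if u < jump_probs.sum():                # a jump occurs in this step
            path[k + 1] = int(np.argmax(np.cumsum(jump_probs) > u))
    return path

if __name__ == "__main__":
    path = simulate_health()
    print("fraction of time spent non-healthy:", path.mean())
```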
Suppose that the household consists of a breadwinner with income and a dependent without income. $T$ represents the time limit of the breadwinner's maintenance obligation. The non-homogeneous Markov chain $\eta(t)$ is used to describe the breadwinner's health status. When it takes the value $0$, the breadwinner is healthy. When $\eta(t)$ takes a value in the set $S\setminus\{0\}$, the breadwinner is in a state of accident, disability, illness, or another health-related condition. This can be referred to as a non-healthy state (which encompasses general health risks including the risk of death, although this study focuses on the impact of life insurance for mitigating the risk of death and therefore does not include the death state in the health status). The health status $\eta(t)$ is not always $0$, reflecting the risk of the breadwinner experiencing health shocks. The matrix $Q(t) = (q_{ij}(t))_{i,j\in S}$ is the transition intensity matrix, where $q_{ij}(t)$ represents the transition intensity from state $i$ to state $j$ at time $t$. 2.1 Insurance market Assume that $\tau_z$ is the remaining lifetime of the breadwinner whose current state is $z$ ($z \in S$); then $\tau_z$ is a non-negative random variable. Suppose $\tau_z$ has a probability density function $f(t,z)$ and a distribution function $F(t,z) = P(\tau_z < t) = \int_0^t f(u,z)\,du$. Let $\lambda(t,z)$ denote the force of mortality of a breadwinner in state $z$ at time $t$; then
$$\lambda(t,z) = \lim_{\varepsilon \to 0} \frac{P(t \le \tau_z < t+\varepsilon \mid \tau_z \ge t)}{\varepsilon}.$$
Thus, $\bar{F}(t,z) = 1 - F(t,z) = \exp\{-\int_0^t \lambda(u,z)\,du\}$ and $f(t,z) = \lambda(t,z)\exp\{-\int_0^t \lambda(u,z)\,du\}$. Suppose an insurance company sells an instantaneous (infinitely short period of time) life insurance, which is priced according to the state of the insured and charges a life insurance rate denoted as $\theta(t,z)$. Taking into account the operating costs of the insurance company, usually $\theta(t,z) \ge \lambda(t,z)$. For convenience, if a breadwinner spends $p(t,z)$ at time $t$ on this life insurance and dies immediately, the compensation received by the household is recorded as $p(t,z)/\lambda(t,z)$; that is, the insurance market is frictionless, e.g., Shen and Sherris (2018), Wang et al. (2021). 2.2 Consumption habits When households make consumption decisions, their current consumption behaviour is influenced by their previous consumption levels. Let $c(t)$ denote the consumption rate at $t$ and $h(t)$ the consumption habit function. According to the model of Detemple and Zapatero (1992), the consumption habit is given by
$$h(t) = e^{-\beta t}h_0 + \alpha \int_0^t e^{\beta(s-t)}c(s)\,ds,$$
where $\alpha$, $\beta$ and $h_0$ are constants. $\alpha$ measures the impact of historical consumption levels on the current consumption level: the larger $\alpha$ is, the more important it is for households to maintain their current level of consumption. $\beta$ measures the extent to which households forget their past consumption: the larger $\beta$ is, the less influence past consumption has on the current consumption level. Its differential form is
$$dh(t) = [\alpha c(t) - \beta h(t)]\,dt, \qquad (2.1)$$
2.3 Financial market Assume that the financial market consists of a risk-free asset and a risky asset.
The price of the risk-free asset S0(t) and the price of the risky asset S1(t) are given by 5 dS0(t) = rS0(t)dt, (2.2) dS1(t) = S1(t)[\u00b5dt + \u03c3dW(t)], (2.3) where r stands for the interest rate on risk-free assets. \u00b5 > r is the expected return on risky assets. \u03c3 is the volatility of a risky asset. W(t) is the standard Brownian motion. 2.4 Wealth process The income of a breadwinner depends on his/her health status. Denote the wage rate of a breadwinner in a health state as y(t, 0). In the case of unhealthy state, the income is denoted by y(t, k) = 1 \u03bek y(t, 0), k \u2208S\\{0}. Where \u03bek > 1 is a constant related to state and its value reflects the intensity of the health shock. Considering the social welfare and security measures, it is set to \u03bek < \u221e, i.e., when the breadwinner is in a non-healthy state, the household still has income. Let X(t) denote the wealth of the household at time t. Let \u03bd = (\u03c0(t), c(t), p(t, \u03b7(t))) denote the strategy of the household, where \u03c0(t) is the amount invested in the risky assets, c(t) is the consumption rate and p(t, \u03b7(t)) is the life insurance premium rate. Given the strategy \u03bd, the wealth process of the household is as follows \uf8f1 \uf8f2 \uf8f3 dX(t) = \u03c0(t)dS1(t) S1(t) + [X(t) \u2212\u03c0(t)]dS0(t) S0(t) + I{t<\u03c4\u03b7(t)}[y(t, \u03b7(t)) \u2212p(t, \u03b7(t))]dt \u2212c(t)dt, X(0) = x0, \u03b7(0) = \u03b70. (2.4) Thus, \u001a dX(t) = {rX(t) + \u03c0(t)(\u00b5 \u2212r) + I{t<\u03c4\u03b7(t)}[y(t, \u03b7(t)) \u2212p(t, \u03b7(t))] \u2212c(t)}dt + \u03c3\u03c0(t)dW(t), X(0) = x0, \u03b7(0) = \u03b70. (2.5) Definition 2.1. (Admissible Strategy). For any t \u2208[0, T], a strategy \u03bd = (\u03c0(s), c(s), p(s, \u03b7(s)))s\u2208[t,T] is admissible if (i) \u03bd is Ft adaptive; (ii) For any s \u2208[t, T], c(t) \u22650 and E \u0014Z T t [\u03c0(s)2 + c(s)2 + p(s, \u03b7(s))2]ds \u0015 < +\u221e; (iii) (X\u03bd, \u03bd) is the unique solution of the equation (2.5). The admissible set is expressed as \u03a0. The objective function is J(t, x, h, i; \u03bd(\u00b7)) = Et,x,h,i \u0014Z T t U(s, c(s), h(s), \u03b7(s))ds + \u03a8(T, X(T), \u03b7(T)) \u0015 , (2.6) where U(t, c, h, i) denotes the consumption utility of a household with a habit level of h in state i at time t, \u03a8(T, X(T), \u03b7(T)) denotes the utility function of terminal wealth. Referring to Tao et al. (2023), the value function can be defined as ( V (t, x, h, i) = sup \u03bd\u2208\u03a0 J(t, x, h, i; \u03bd(\u00b7)), V (T, x, h, i) = \u03a8(T, x, i), ((t, x, h, i) \u2208[0, T] \u00d7 R \u00d7 R \u00d7 S). (2.7) The optimal strategy is \u03bd\u2217(t) = (\u03c0\u2217(t), c\u2217(t), p\u2217(t, i)). 6 3 Determination of optimal solutions 3.1 Optimization problem after the death of the breadwinner If the breadwinner dies before T, there is no change of state afterwards. When t \u2208[\u03c4i, T], dependent use wealth for investment and consumption. The strategy is \u03bdd(t) = (\u03c0d(t), cd(t)), where \u03c0d(t) is the amount invested in risky assets and cd(t) is consumption rate. The corresponding set of admissible is denoted as \u03a0d. In this stage, the objective function is Jd(t, xd, hd; \u03bdd(\u00b7)) = Et,xd,hd \u0014Z T t Ud(s, cd(s), hd(s))ds + \u03a8d(T, Xd(T)) \u0015 , (3.1) where, Xd(t) is the household wealth process under the strategy \u03bdb. dXd(t) = [rXd(t) + \u03c0d(t)(\u00b5 \u2212r) \u2212cd(t)]dt + \u03c3\u03c0d(t)dW(t). 
Here $h_d(t)$ denotes the consumption habit level, which satisfies
\[
dh_d(t) = [\alpha c_d(t) - \beta h_d(t)]\,dt. \quad (3.3)
\]
The value function corresponding to (3.1) is defined by
\[
\begin{cases}
V_d(t, x_d, h_d) = \sup_{\nu_d\in\Pi_d} J_d(t, x_d, h_d; \nu_d(\cdot)), \\
V_d(T, x_d, h_d) = \Psi_d(T, x_d).
\end{cases} \quad (3.4)
\]
Suppose the utility functions are
\[
U_d(t, c_d, h_d) = k_d e^{-\rho t}\frac{(c_d - h_d)^{1-\gamma}}{1 - \gamma}, \qquad \Psi_d(t, x_d) = \omega_d e^{-\rho t}\frac{x_d^{1-\gamma}}{1 - \gamma},
\]
where $\rho$ is the discount rate, $\gamma$ is the risk preference parameter, and $k_d$, $\omega_d$ are weight coefficients. Note that the consumption habit can be regarded as the lowest level of consumption, and only consumption beyond the basic needs of life yields satisfaction. Therefore, instead of considering the utility of all consumption, we consider the utility of consumption above the habit level; this requires consumption to remain above the habit level. Using the dynamic programming approach, the HJB equation of optimization problem (3.4) is
\[
\sup_{\nu_d\in\Pi_d}\Big\{ U_d(t, c_d, h_d) + V_{d,t} + [r x_d + \pi_d(\mu - r) - c_d]V_{d,x} + (\alpha c_d - \beta h_d)V_{d,h} + \tfrac{1}{2}\sigma^2\pi_d^2 V_{d,xx} \Big\} = 0, \quad (3.5)
\]
where $V_{d,t}$, $V_{d,x}$ and $V_{d,h}$ denote the first-order partial derivatives of $V_d$ with respect to $t$, $x$, $h$, and $V_{d,xx}$ denotes the second-order partial derivative of $V_d$ with respect to $x$.

Theorem 3.1 (Verification Theorem). Let $V_d(t, x_d, h_d)$ be a solution of the HJB equation. Then the inequality $V_d(t, x_d, h_d) \ge J_d(t, x_d, h_d; \nu_d(\cdot))$ holds for every $\nu_d(\cdot) \in \Pi_d$ and $(t, x_d, h_d) \in [0, T) \times \mathbb{R} \times \mathbb{R}$. Furthermore, an admissible pair $(X_d^*(t), h_d^*(t), \pi_d^*(t), c_d^*(t))$ is optimal if and only if the equality
\[
U_d(t, c_d^*(t), h_d^*(t)) + V_{d,t} + [r X_d^*(t) + \pi_d^*(t)(\mu - r) - c_d^*(t)]V_{d,x} + (\alpha c_d^*(t) - \beta h_d^*(t))V_{d,h} + \tfrac{1}{2}\sigma^2(\pi_d^*(t))^2 V_{d,xx} = 0
\]
holds for a.e. $t \in [s, T]$ and $P$-a.s.

Theorem 3.2. The value function of the optimal control problem (3.4) is
\[
V_d(t, x_d, h_d) = \frac{1}{1-\gamma}[g(t)]^{\gamma}[x_d - h_d B(t)]^{1-\gamma}. \quad (3.6)
\]
The corresponding optimal strategy is
\[
\begin{cases}
\pi_d^*(t) = \dfrac{\mu - r}{\sigma^2\gamma}[x_d - h_d B(t)], \\[1ex]
c_d^*(t) = h_d + \dfrac{x_d - h_d B(t)}{g(t)}[1 + \alpha B(t)]^{-\frac{1}{\gamma}}(k_d e^{-\rho t})^{\frac{1}{\gamma}},
\end{cases} \quad (3.7)
\]
where
\[
B(t) = \frac{1}{r + \beta - \alpha}\big[1 - e^{-(r+\beta-\alpha)(T-t)}\big],
\]
\[
g(t) = \int_t^T e^{N(s-t)}(k_d e^{-\rho s})^{\frac{1}{\gamma}}[1 + \alpha B(s)]^{1-\frac{1}{\gamma}}\,ds + e^{N(T-t)}(\omega_d e^{-\rho T})^{\frac{1}{\gamma}},
\]
\[
N = \frac{1-\gamma}{\gamma}\Big[r + \frac{(\mu - r)^2}{2\sigma^2\gamma}\Big]. \quad (3.8)
\]
See Appendix for the proof.
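The closed-form quantities in Theorem 3.2 are straightforward to evaluate numerically. The sketch below computes $B(t)$, $g(t)$, $N$ and the optimal controls (3.7); the parameter values are hypothetical and the helper names (`B`, `g`, `optimal_controls`) are ours, not the paper's.

```python
import numpy as np
from scipy.integrate import quad

# Evaluate B(t), g(t), N from (3.8) and the optimal controls (3.7)
# for one hypothetical parameter set.
r, mu, sigma = 0.02, 0.06, 0.2
alpha, beta = 0.2, 0.4
gamma, rho = 2.0, 0.03
k_d, omega_d = 1.0, 1.0
T = 20.0

N = (1 - gamma) / gamma * (r + (mu - r) ** 2 / (2 * sigma ** 2 * gamma))

def B(t):
    kappa = r + beta - alpha
    return (1.0 - np.exp(-kappa * (T - t))) / kappa

def g(t):
    integrand = lambda s: (np.exp(N * (s - t)) * (k_d * np.exp(-rho * s)) ** (1 / gamma)
                           * (1 + alpha * B(s)) ** (1 - 1 / gamma))
    integral, _ = quad(integrand, t, T)
    return integral + np.exp(N * (T - t)) * (omega_d * np.exp(-rho * T)) ** (1 / gamma)

def optimal_controls(t, x_d, h_d):
    surplus = x_d - h_d * B(t)                      # wealth above the habit floor
    pi_star = (mu - r) / (sigma ** 2 * gamma) * surplus
    c_star = h_d + (surplus / g(t)
                    * (1 + alpha * B(t)) ** (-1 / gamma)
                    * (k_d * np.exp(-rho * t)) ** (1 / gamma))
    return pi_star, c_star

print(optimal_controls(0.0, x_d=10.0, h_d=1.0))
```

As long as the surplus $x_d - h_d B(t)$ is positive, the formula keeps consumption strictly above the habit level, consistent with the interpretation of the habit as a consumption floor.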
3.2 Optimization problem when the breadwinner is alive

When $t \in [0, \tau_i \wedge T]$, the household needs to make optimal investment, consumption and life insurance decisions. The strategy is $\nu_a(t) = (\pi_a(t), c_a(t), p_a(t, i))$, where $\pi_a(t)$ is the amount invested in the risky asset, $c_a(t)$ is the consumption rate of the household, and $p_a(t, i)$ is the life insurance premium rate; the corresponding set of admissible strategies is denoted by $\Pi_a$.

The objective function (2.6) of the household can be rewritten as
\[
J(t, x_a, h_a, i; \nu_a(\cdot)) = E_{t,x_a,h_a,i}\Big[\int_t^{\tau_i\wedge T} U_a(s, c_a(s), h_a(s), \eta(s))\,ds + I_{\{\tau_i>T\}}\Psi_a(T, X_a(T), \eta(T)) + I_{\{\tau_i\le T\}} V_d\Big(\tau_i,\, X_a(\tau_i) + \tfrac{p_a(\tau_i, \eta(\tau_i))}{\lambda(\tau_i, \eta(\tau_i))},\, h_a(\tau_i)\Big)\Big],
\]
where the last term reflects that, if the breadwinner dies before $T$, the household receives the insurance compensation and continues with the post-death problem of Section 3.1. For the running-utility term,
\[
\begin{aligned}
E_{t,x_a,h_a,i}\Big[\int_t^{\tau_i\wedge T} U_a(s, c_a, h_a, \eta(s))\,ds\Big]
&= E_{t,x_a,h_a,i}\Big[I_{\{\tau_i\le T\}}\int_t^{\tau_i} U_a(s, c_a, h_a, \eta(s))\,ds + I_{\{\tau_i>T\}}\int_t^{T} U_a(s, c_a, h_a, \eta(s))\,ds\Big] \\
&= E_{t,x_a,h_a,i}\Big\{\int_t^T f(u, t, \eta(u))\Big[\int_t^{u} U_a(s, c_a, h_a, \eta(s))\,ds\Big]du + \Big[1 - \int_t^T f(u, t, \eta(u))\,du\Big]\int_t^T U_a(s, c_a, h_a, \eta(s))\,ds\Big\} \\
&= E_{t,x_a,h_a,i}\Big[\int_t^T \bar F(s, t, i)\, U_a(s, c_a, h_a, \eta(s))\,ds\Big].
\end{aligned}
\]
In a similar way,
\[
E_{t,x_a,h_a,i}\big[I_{\{\tau_i>T\}}\Psi_a(T, X_a(T), \eta(T))\big] = E_{t,x_a,h_a,i}\big[\bar F(T, t, i)\,\Psi_a(T, X_a(T), \eta(T))\big].
\]

Figure 3. Pipeline of our E2STR. Top: E2STR is trained with our in-context training strategy to obtain the ICL capability. Bottom: during inference, E2STR selects in-context prompts based on a kNN strategy, and the test sample then grasps context information from the prompts to assist recognition. Specifically, the ambiguous character "a" in the test sample is easily misrecognized as "q". With the vision-language context produced by the in-context prompts (i.e., "a" in the first in-context prompt), E2STR rectifies the result. Note that in practice the in-context pool maintains image tokens and thus does not need to go through the vision encoder.

Figure 4. Illustration of the split strategy, the transform strategy, and how we hybridize them in practice.

The input to the vision encoder is $x$ and the initial input to the language decoder is a start token. The training in this phase makes use of the next-token prediction loss:
\[
\mathcal{L} = E_{(x,y)\sim D}\Big[-\sum_{l=1}^{L}\log p(y_l \mid y_{<l}, x)\Big].
\]
In the in-context training stage, samples are concatenated into interleaved image-text sequences, and a dedicated token is added in the text for each image; this serves to make the language decoder distinguish between different samples, following [1]. In this stage, we propose two strategies to generate context-rich scene text sequences: the Split Strategy and the Transform Strategy (the ST strategy).

The Split Strategy. As shown in Figure 4 (a), when presented with a training tuple $(x, y)$, we split the sample, generating a set of "sub-samples". The sub-samples exhibit a strong connection to the original training sample, and they are also interconnected with one another since they overlap. We then concatenate the sub-samples with $(x, y)$ and additional randomly selected samples to form a context-rich sample sequence, and randomly shuffle the whole sequence before generating the actual input text (i.e., interleaving the dedicated token into the text sequence). In practice, to accurately split the training samples, we synthesize 600k scene text images based on [4] and record the accurate bounding boxes of every single character.
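As an illustration of how such a split could be realized, the following sketch crops overlapping sub-samples from an image using per-character bounding boxes and assembles a shuffled, context-rich sequence. The function names, the random span selection, and the data layout are our own assumptions, not the authors' implementation.

```python
import random

# Hedged sketch of the Split Strategy: crop sub-samples from an image (e.g. a
# numpy array) using per-character bounding boxes, then build one shuffled
# training sequence out of the original sample, its sub-samples, and extras.
def split_subsamples(image, label, char_boxes, num_subsamples=2):
    """char_boxes[i] = (x0, y0, x1, y1), integer pixel box of the i-th character."""
    subs = []
    n = len(label)
    for _ in range(num_subsamples):
        # pick a random contiguous span of characters
        start = random.randint(0, max(n - 2, 0))
        end = random.randint(min(start + 1, n - 1), n - 1)
        x0 = min(b[0] for b in char_boxes[start:end + 1])
        y0 = min(b[1] for b in char_boxes[start:end + 1])
        x1 = max(b[2] for b in char_boxes[start:end + 1])
        y1 = max(b[3] for b in char_boxes[start:end + 1])
        subs.append((image[y0:y1, x0:x1], label[start:end + 1]))
    return subs

def build_context_sequence(sample, char_boxes, extra_samples):
    """Return a shuffled list of (image, text) pairs forming one sequence."""
    image, label = sample
    sequence = [sample] + split_subsamples(image, label, char_boxes) + list(extra_samples)
    random.shuffle(sequence)
    return sequence
```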
Our subsequent experiments show that the synthesized data does not change E2STR's non-context text recognition ability, but the Split Strategy built on it equips E2STR with a strong capability of in-context learning.

The Transform Strategy. As shown in Figure 4 (b), given a training tuple $(x, y)$ (whether with character-wise bounding boxes or not), we perform data augmentation (a set of image transformations, e.g., color/direction transformations) on $x$. In this way, we also generate a set of sub-samples with the same label but different image patterns from the original sample.

In practice, as depicted in Figure 4 (c), we hybridize the above strategies. The training set is formed by concatenating the synthesized data and the original training data used in the first training phase. For the synthesized data with character-wise bounding boxes, both the Split Strategy and the Transform Strategy are utilized; for the original training data, only the Transform Strategy is implemented. Finally, after generating the sample sequence $(X, Y)$, where $X$ is the image sequence and $Y$ is the text sequence, $X$ is fed into the vision encoder, while $Y$ is processed by the language decoder under the auto-regressive framework. The loss function is formulated as
\[
\mathcal{L}_{(X, Y)} = -\sum_{l=1}^{L}\log p(Y_l \mid Y_{<l}, X).
\]
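Below is a minimal sketch of this auto-regressive sequence loss, assuming a decoder that returns per-position logits conditioned on the image features (e.g., via cross attention) and the shifted text tokens. The `decoder` interface, tensor shapes, and padding convention are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

# Sequence-level negative log-likelihood: L = -sum_l log p(Y_l | Y_{<l}, X).
def sequence_nll(decoder, image_features, text_tokens, pad_id=0):
    # image_features: (B, num_visual_tokens, D); text_tokens: (B, L)
    inputs = text_tokens[:, :-1]               # Y_{<l}: teacher-forced decoder input
    targets = text_tokens[:, 1:]               # Y_l: next-token targets
    logits = decoder(inputs, image_features)   # (B, L-1, vocab_size), hypothetical API
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,                   # skip padded positions
        reduction="sum",
    )
    return loss
```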