diff --git "a/abs_29K_G/test_abstract_long_2405.04496v1.json" "b/abs_29K_G/test_abstract_long_2405.04496v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.04496v1.json" @@ -0,0 +1,50 @@ +{ + "url": "http://arxiv.org/abs/2405.04496v1", + "title": "Edit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing", + "abstract": "Existing diffusion-based video editing methods have achieved impressive\nresults in motion editing. Most of the existing methods focus on the motion\nalignment between the edited video and the reference video. However, these\nmethods do not constrain the background and object content of the video to\nremain unchanged, which makes it possible for users to generate unexpected\nvideos. In this paper, we propose a one-shot video motion editing method called\nEdit-Your-Motion that requires only a single text-video pair for training.\nSpecifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to\ndecouple spatio-temporal features in space-time diffusion models. DPL separates\nlearning object content and motion into two training stages. In the first\ntraining stage, we focus on learning the spatial features (the features of\nobject content) and breaking down the temporal relationships in the video\nframes by shuffling them. We further propose Recurrent-Causal Attention\n(RC-Attn) to learn the consistent content features of the object from unordered\nvideo frames. In the second training stage, we restore the temporal\nrelationship in video frames to learn the temporal feature (the features of the\nbackground and object's motion). We also adopt the Noise Constraint Loss to\nsmooth out inter-frame differences. Finally, in the inference stage, we inject\nthe content features of the source object into the editing branch through a\ntwo-branch structure (editing branch and reconstruction branch). With\nEdit-Your-Motion, users can edit the motion of objects in the source video to\ngenerate more exciting and diverse videos. Comprehensive qualitative\nexperiments, quantitative experiments and user preference studies demonstrate\nthat Edit-Your-Motion performs better than other methods.", + "authors": "Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, Yuwei Guo", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Existing diffusion-based video editing methods have achieved impressive\nresults in motion editing. Most of the existing methods focus on the motion\nalignment between the edited video and the reference video. However, these\nmethods do not constrain the background and object content of the video to\nremain unchanged, which makes it possible for users to generate unexpected\nvideos. In this paper, we propose a one-shot video motion editing method called\nEdit-Your-Motion that requires only a single text-video pair for training.\nSpecifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to\ndecouple spatio-temporal features in space-time diffusion models. DPL separates\nlearning object content and motion into two training stages. In the first\ntraining stage, we focus on learning the spatial features (the features of\nobject content) and breaking down the temporal relationships in the video\nframes by shuffling them. 
We further propose Recurrent-Causal Attention\n(RC-Attn) to learn the consistent content features of the object from unordered\nvideo frames. In the second training stage, we restore the temporal\nrelationship in video frames to learn the temporal feature (the features of the\nbackground and object's motion). We also adopt the Noise Constraint Loss to\nsmooth out inter-frame differences. Finally, in the inference stage, we inject\nthe content features of the source object into the editing branch through a\ntwo-branch structure (editing branch and reconstruction branch). With\nEdit-Your-Motion, users can edit the motion of objects in the source video to\ngenerate more exciting and diverse videos. Comprehensive qualitative\nexperiments, quantitative experiments and user preference studies demonstrate\nthat Edit-Your-Motion performs better than other methods.", + "main_content": "INTRODUCTION Diffusion-based [22, 41, 44, 49, 53] video motion editing aims to control the motion (e.g., standing, dancing, running) of objects in the source video based on text prompts or other conditions (e.g., depth maps, visible edges, human poses, etc.), while preserving the integrity of the source background and object's content. This technique is especially valuable in multimedia [6, 10, 21, 33, 52, 56, 58, 63], including advertising, artistic creation, and film production. It allows users to effortlessly modify the motion of objects in videos using a video motion editing model, eliminating the necessity for complex software. In prior studies, researchers primarily utilized generative methods to create videos featuring specific actions, with few efforts focusing on editing motions within a specific video. For example, several prior studies [26, 64, 65] have focused on pose-guided video generation, which involves creating videos that align with specified human poses. Other studies [9, 17, 25, 35, 57, 66] aim to generate videos with the same motion by learning the motion features of the source video. These studies operate within the text-driven space-time diffusion model framework, engineered to learn the link between textual prompt inputs and corresponding video outputs. However, the spatial and temporal features of the video are not separated during training, which makes them entangled. The spatial features are usually represented as the object's content, and the temporal features are usually represented as the background and motion. This entangled state leads to overlapping object content, background and motion in the space-time diffusion model. As a result, it is challenging to generate videos that are highly aligned with the fine-grained foreground and background of the source video, even when detailed text descriptions are used.
Intuitively, the key to video motion editing lies in decoupling [8, 54, 60] the temporal and spatial features of the space-time diffusion model. MotionEditor [45] first explored this problem by utilizing a two-branch structure in the inference stage to decouple the object's content and background in the feature layer by the object's segmentation mask. However, since the MotionEditor model learns the relationship between the prompt and the entire video during the training stage, the features of objects and the background overlap in the feature layer. This overlap makes it challenging to distinguish between the background and the objects using only the segmentation mask [23, 39, 50]. In this paper, we explore methods to separate the learning of temporal and spatial features in space-time diffusion models. To this end, we propose a one-shot video motion editing method named Edit-Your-Motion that requires only a single text-video pair for training. Specifically, we propose the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage learning strategy designed to separate spatio-temporal features within space-time diffusion models. Furthermore, we propose Recurrent-Causal Attention (RC-Attn) as an enhancement over Sparse-Causal Attention. Recurrent-Causal Attention allows early frames in a video to receive information from subsequent frames, ensuring consistent content of objects throughout the video without adding computational burden. Additionally, we construct the Noise Constraint Loss [31] to minimize inter-frame differences of the edited video during the second training stage. During DPL, we use the space-time diffusion model (inflated UNet [37]) as the backbone and integrate ControlNet [61] to control the generation of motion. In the first training stage, we activate Recurrent-Causal Attention and freeze the other parameters. Then, we randomly disrupt the order of frames in the source video and mask the background to guide Recurrent-Causal Attention to focus on learning the content features of objects. In the second training stage, we activate Temporal Attention [48] and freeze other parameters to learn motion and background features from ordered video frames. Concurrently, Noise Constraint Loss is used to minimize the difference between frames. In the inference stage, we first perform a DDIM [42] inversion of the source video to introduce latent noise and facilitate the smoothness of the edited video. Then, the pose information of the reference video is introduced via ControlNet. Next, to ensure that the content of the objects in the edited video remains consistent with that of the source video, we utilize a two-branch structure (edit branch and reconstruction branch) similar to [45]. However, unlike MotionEditor, DPL distinctly decouples spatial and temporal features into Recurrent-Causal Attention and Temporal Attention, respectively. Therefore, we only inject the key and value of Recurrent-Causal Attention from the reconstruction branch into the editing branch, eliminating the need for the segmentation mask. In conclusion, our contributions are as follows: • We further explored how to explicitly decouple spatio-temporal features in video motion editing and proposed a one-shot video motion editing method named Edit-Your-Motion. • We designed the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage training method.
It can decouple the space-time diffusion model's overlapping spatial and temporal features, thereby avoiding interference from background features when editing the object's motion. • We designed Recurrent-Causal Attention to assist DPL in learning more comprehensive object content in the first training stage. In addition, we constructed the Noise Constraint Loss to smooth out inter-frame differences in the second training stage. • We conduct experiments on in-the-wild videos, where the results show the superiority of our method compared with the state-of-the-art. 2 RELATED WORK In this section, we provide a brief overview of the fields related to video motion editing and point out the connections and differences between them and video motion editing. 2.1 Image Editing Recently, a large amount of work has been done on image editing using diffusion models [7, 30, 36]. SDEdit [28] is the first method for image synthesis and editing based on diffusion models. Prompt-to-Prompt [13] edits images by referencing cross-attention in the diffusion process. Plug-and-Play [46] provides fine-grained control over the generative structure by manipulating spatial features during generation. UniTune [47] completes text-conditioned image editing tasks by fine-tuning. For non-rigidly transformed image editing, Imagic [19] preserves the overall structure and composition of the image by linearly interpolating between texts, thus accomplishing non-rigid editing. MasaCtrl [4] converts self-attention to mutual self-attention for non-rigid image editing. On the other hand, InstructPix2Pix [3] devises a method of editing images by written instructions rather than textual descriptions of image content. Unlike text-driven image editing, DreamBooth [38] generates new images with theme attributes by using several different images of a given theme. However, these methods lack temporal modeling, and it is difficult to maintain consistency between frames when generating video. 2.2 Pose-guided and Motion-Customization Video Generation Pose-guided image and video generation controls image and video generation by adding additional human-pose conditions. ControlNet [61] references additional conditions via auxiliary branches to produce images consistent with the condition map. Follow-Your-Pose [26] controls video generation given human skeletons. It uses two-stage training to learn pose control and temporal consistency. ControlVideo [64] is adapted from ControlNet and uses cross-frame interaction to constrain appearance coherence between frames. Control-A-Video [65] enhances faithfulness and temporal consistency by fine-tuning the attention modules in both the diffusion model and ControlNet. Unlike pose-guided video generation models, motion-customization video generation models generate videos with the same motion by learning the motion features of a source video. Customize-A-Video [35] designed an Appearance Absorber module to decompose the spatial information of motion, thus directing the Temporal LoRA [16] to learn the motion information. MotionCrafter [66] customizes the content and motion of the video by injecting motion information into U-Net's temporal attention module through a parallel spatial-temporal architecture. VMC [17] fine-tunes only the temporal attention layer in the video diffusion model to achieve successful motion customization.
Unlike these methods, video motion editing requires controlling the motion of the source video object while maintaining its content and background. 2.3 Video Editing The current video editing models can be divided into two categories: video content editing models [1, 5, 20, 24, 32, 51, 67] and video motion editing models [45]. The video content editing model is designed to modify the background and object's content (e.g., the scene in the background, the clothes' colour, the vehicle's shape, etc.) in the source video. In video content editing, Tune-A-Video [51] introduces the One-Shot Video Tuning task for the first time, which trains the space-time diffusion model with a single text-video pair. FateZero [32] uses cross-attention maps to edit the content of videos without any training. Mix-of-Show [12] fine-tunes the model through low-rank adaptations (LoRA) [16] to prevent the collapse of knowledge learned by the pre-trained model. Some other approaches [2, 5, 20] use NLA [18] mapping to map the video to a 2D atlas, decoupling the object content from the background to edit the content of the object effectively. In video motion editing, MotionEditor [45] uses the object's segmentation mask to decouple the content and background in the feature layer. Content features are then injected into the editing branch to maintain content consistency. Since the object and the background overlap in the feature layer, it is difficult to accurately separate the object's content from the background features with the segmentation mask. Our approach decouples the object from the background during the training stage and directs RC-Attn and Temporal Attention to learn spatial and temporal features, respectively. This ensures that the source video content is accurately injected. 3 METHOD In video motion editing, the focus is on decoupling the spatio-temporal features of the diffusion model. To this end, we propose Edit-Your-Motion, a one-shot video motion editing method trained only on a pair of source and reference videos. Specifically, we design the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage learning strategy capable of decoupling spatio-temporal features in the space-time diffusion model. In the first training stage, we shuffle the video frames to disrupt the temporal relationship of the video. Then, we mask the background and focus on learning the spatial features (object content) from the unordered frames. We further propose Recurrent-Causal Attention (RC-Attn) instead of Sparse-Causal Attention to construct consistent features of objects over the whole sequence. In the second training stage, we recover the temporal relationships in the video frames to learn the temporal features (the background and object motion). To smooth out the inter-frame differences, we also construct the Noise Constraint Loss. Finally, in the inference stage, we use a two-branch structure [66] (reconstruction branch and editing branch). Since the spatial and temporal features have been decoupled in the training stage, we obtain the background and motion features in the editing branch and inject the content features of the objects from the reconstruction branch into the editing branch. Fig. 2 illustrates the pipeline of Edit-Your-Motion. To introduce our proposed Edit-Your-Motion, we first introduce the basics of the text-video diffusion model in Sec. 3.1. Then, Sec.
3.2 introduces our proposed Recurrent-Causal Attention (RC-Attn). After that, in Sec. 3.3, our proposed Detailed Prompt-Guided Learning Strategy and Noise Constraint Loss are described. Finally, we introduce the inference stage in Sec. 3.4. 3.1 Preliminaries Denoising Diffusion Probabilistic Models. Denoising diffusion probabilistic models [11, 14, 27, 55] (DDPMs) consist of a forward diffusion process and a reverse denoising process. During the forward diffusion process, noise ε is gradually added to a clean image x_0 ∼ q(x_0) over time steps t, obtaining a noisy sample x_t. The process of adding noise can be represented as: q(x_t | x_{t-1}) = N(x_t | √(1 - β_t) x_{t-1}, β_t I), (1) where β_t ∈ (0, 1) is a variance schedule. The entire forward process of the diffusion model can be represented as a Markov chain from time t to time T: q(x_{1:T}) = q(x_0) ∏_{t=1}^{T} q(x_t | x_{t-1}). (2) Then, in the reverse process, noise is removed through a denoising autoencoder ε_θ(x_t, t) to generate a clean image. The corresponding objective can be simplified to: L_DDM = E_{x, ε∼N(0,1), t} [ ‖ε - ε_θ(x_t, t)‖_2^2 ]. (3) Latent Diffusion Models. Latent diffusion models (LDMs) [29, 36, 59] are a variant of DDPMs that operates in the latent space of an autoencoder. Specifically, the encoder E compresses the image into latent features z = E(x). The diffusion process is then performed over z, and the latent features are finally reconstructed back into pixel space using the decoder D. The corresponding objective can be represented as: L_LDM = E_{E(x), ε∼N(0,1), t} [ ‖ε - ε_θ(z_t, t)‖_2^2 ]. (4) Text-to-Video Diffusion Models. Text-to-video diffusion models [43] train a 3D UNet ε_θ^{3D} with text prompts c as a condition to generate videos. Given the F frames x^{1...F} of a video, the 3D UNet is trained by L_T2V = E_{E(x^{1...F}), ε∼N(0,1), t, c} [ ‖ε - ε_θ^{3D}(z_t^{1...F}, t, c)‖_2^2 ], (5) where z_t^{1...F} is the noisy version of the latent features of x^{1...F}, with z^{1...F} = E(x^{1...F}). 3.2 Recurrent-Causal Attention Like Tune-A-Video [51], we use the inflated U-Net network (space-time diffusion model) as the backbone of Edit-Your-Motion, consisting of stacked 3D convolutional residual blocks and transformer blocks. Each transformer block consists of Sparse-Causal Attention, Cross Attention, Temporal Attention, and a Feed-Forward Network (FFN).
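For concreteness, the following PyTorch-style sketch spells out one optimisation step of the L_T2V objective in Eq. (5). All names (unet3d, encoder, alphas_cumprod) are illustrative placeholders under a standard DDPM noise schedule, not the paper's released implementation.

```python
import torch

def t2v_training_step(unet3d, encoder, frames, text_emb, alphas_cumprod):
    # One step of the L_T2V objective in Eq. (5), written as a rough sketch.
    # frames:         (B, F, C, H, W) pixel-space video clip x^{1..F}
    # text_emb:       (B, L, D) text-prompt embeddings (condition c)
    # alphas_cumprod: (T,) cumulative products of the noise schedule
    b, f, c, h, w = frames.shape

    # Encode every frame with the autoencoder: z^{1..F} = E(x^{1..F}).
    latents = encoder(frames.reshape(b * f, c, h, w))
    latents = latents.reshape(b, f, *latents.shape[1:])

    # Sample a timestep and Gaussian noise, then build the noisy latents
    # z_t = sqrt(a_t) * z + sqrt(1 - a_t) * eps (standard DDPM forward process).
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=frames.device)
    eps = torch.randn_like(latents)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1, 1)
    noisy = a_t.sqrt() * latents + (1.0 - a_t).sqrt() * eps

    # The inflated 3D UNet predicts the noise; the loss is the MSE of Eq. (5).
    pred = unet3d(noisy, t, text_emb)
    return ((pred - eps) ** 2).mean()
```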
To save computational overhead, Tune-A-Video uses the current frame latent z_{v_i} ∈ {z_{v_1}, . . . , z_{v_{i_max}}} as the query for Sparse-Causal Attention. Meanwhile, the previous frame latent z_{v_{i-1}} is combined with the first frame latent z_{v_1} to obtain the key and value. The specific formula is as follows: Q = W_Q z_{v_i}, K = W_K [z_{v_1}, z_{v_{i-1}}], V = W_V [z_{v_1}, z_{v_{i-1}}], (6) where [·] denotes the concatenation operation and W_Q, W_K and W_V are projection matrices. However, because there is less information in the early frames of a video, Sparse-Causal Attention does not consider the connection with the subsequent frames. As a result, it may lead to inconsistencies between the content at the beginning and the end of the video. To solve this problem, we propose a simple Recurrent-Causal Attention with no increase in computational complexity. In Recurrent-Causal Attention, the key and value are obtained by combining the previous frame latent z_{v_{i-1}} with the current frame latent z_{v_i}, rather than z_{v_1} with z_{v_{i-1}}. Notably, the key and value of the first frame latent z_{v_1} are obtained from the last frame latent z_{v_{i_max}} together with the first frame latent z_{v_1}. This allows the object's content to propagate throughout the video sequence without adding any computational complexity. The formula for Recurrent-Causal Attention is as follows: Q = W_Q z_{v_i}, (7) K = W_K [z_{v_{i-1}}, z_{v_i}] for i > 1, and K = W_K [z_{v_{i_max}}, z_{v_1}] for i = 1, (8) V = W_V [z_{v_{i-1}}, z_{v_i}] for i > 1, and V = W_V [z_{v_{i_max}}, z_{v_1}] for i = 1. (9)
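A minimal sketch of how the Recurrent-Causal keys and values in Eqs. (7)-(9) can be assembled, following the wrap-around behaviour described above (the first frame is paired with the last). The tensor layout and module names are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class RecurrentCausalAttention(nn.Module):
    # Sketch of Recurrent-Causal Attention (Eqs. 7-9): each frame queries the
    # concatenation of its previous frame and itself, and the first frame wraps
    # around to the last frame so object content propagates through the sequence.
    def __init__(self, dim, heads=8):
        super().__init__()
        # The module's internal input projections play the role of W_Q, W_K, W_V.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, z):                       # z: (F, N, D) frames x tokens x dim
        prev = torch.roll(z, shifts=1, dims=0)  # frame i-1; the first frame receives the last
        kv_src = torch.cat([prev, z], dim=1)    # [z_{v_{i-1}}, z_{v_i}] along the token axis
        out, _ = self.attn(query=z, key=kv_src, value=kv_src)
        return out
```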
Figure 2: The overall pipeline of Edit-Your-Motion. Edit-Your-Motion decouples spatial features (object appearance) from temporal features (background and motion information) of the source video using the Detailed Prompt-Guided Learning Strategy (DPL). In the first training stage, Recurrent-Causal Attention (RC-Attn) is guided to learn spatial features. In the second training stage, Temporal Attention (Temp-Attn) is guided to learn temporal features. During inference, the spatial features of the source video are injected into the editing branch through the key and value of Recurrent-Causal Attention, thus keeping the source content and background unchanged. Overall, Recurrent-Causal Attention enables early frames to acquire more comprehensive content information compared to Sparse-Causal Attention by establishing a link from the first frame to the last frame. 3.3 The Detailed Prompt-Guided Learning Strategy The purpose of diffusion-based video motion editing is to control the motion of objects in the source video based on a reference video with a prompt and to ensure that the content and background of the objects remain unchanged. The key lies in decoupling the diffusion model's overlapping temporal and spatial features. MotionEditor uses the object's segmentation mask to decouple the object content and the background in the feature layer. However, the decoupled features still overlap since the spatio-temporal features have already been entangled in the model. In order to decouple the overlapping spatio-temporal features, we design the Detailed Prompt-Guided Learning Strategy (DPL). DPL is divided into two training stages: (1) The First Training Stage: Learning Spatial Features from Shuffled Images, and (2) The Second Training Stage: Learning Temporal Features from Ordered Video Frames. Next, we describe the two stages in detail. The First Training Stage: Learning Spatial Features from Shuffled Images. In this stage, the space-time diffusion model focuses on learning the spatial features of the source object. First, we disrupt the order of the video frames to destroy their temporal information and generate unordered video frames U = {u_i | i ∈ [1, n]}, where n is the length of the video. If we train the model directly using unordered frames, the features of the object and the background will overlap. Such overlapping spatio-temporal features are challenging to decouple later and will lead to interference from background features when controlling object motion.
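The shuffling-and-masking preparation used in this first stage can be sketched as follows (the masking step corresponds to Eq. (10) in the next paragraph). The mask source and tensor shapes are assumptions for illustration.

```python
import torch

def prepare_stage1_batch(frames, masks, generator=None):
    # Stage-1 data preparation sketch: shuffle the frame order to destroy temporal
    # structure, then mask out the background with per-frame object masks so that
    # only the object's (spatial) content is left for RC-Attn to learn from.
    # frames: (F, C, H, W) source video frames
    # masks:  (F, 1, H, W) binary object masks from an off-the-shelf segmenter
    f = frames.shape[0]
    perm = torch.randperm(f, generator=generator)   # random frame order U
    shuffled = frames[perm]
    shuffled_masks = masks[perm]
    masked = shuffled * shuffled_masks              # U^M = U * M (background zeroed out)
    return masked, shuffled_masks
```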
Therefore, we use an existing segmentation network to extract the segmentation mask M for the unordered video frames and mask out the background as: U^M = U · M, (10) Z^M_t = E(U^M), (11) where Z^M_t is the latent features of U^M, and E(·) is the encoder. Then, we utilize an existing skeleton extraction network to obtain the human skeleton S_sr in the source video and feed it into ControlNet along with the prompt P_a: C_sr = ControlNet(S_sr, P_a), (12) where C_sr is the pose feature of the source video. Next, we freeze the other parameters and only activate Recurrent-Causal Attention. Finally, we feed P_a and C_sr into the space-time diffusion model for training. The reconstruction loss can be written as follows: L_rec = E_{z^m_t, ε∼N(0,1), t, P_a, C_sr} [ ‖ε - ε_θ^{3D}(z^m_t, t, P_a, C_sr)‖_2^2 ]. (13) The Second Training Stage: Learning Temporal Features from Ordered Video Frames. Unlike the first training stage, we restore the temporal relationship of the video frames. Then, we guide the space-time diffusion model to learn the temporal features of motion and background from ordered video frames V = {v_i | i ∈ [1, n]}. Specifically, we construct a new prompt P_s, which adds a description of the motion to P_a. Then, Temporal Attention is activated to learn motion features while the other parameters are frozen. To smooth the video, we add the Noise Constraint Loss [31]. The noise constraint loss can be written as follows: L_noise = 1/(n-1) Σ_{i=1}^{n-1} ‖ε_{z_t}^{f_i} - ε_{z_t}^{f_{i+1}}‖_2^2, (14) where f_i denotes the i-th frame of the video and ε_{z_t}^{f_i} is the noise prediction at timestep t. The total loss for the second training stage is constructed as follows: L_Total = (1 - λ) L_noise + λ L_rec, (15) where L_rec is constructed from the ordered video frames V without the segmentation mask M, and λ is set to 0.9.
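To make Eqs. (13)-(15) concrete, here is a minimal sketch of the second-stage losses, assuming the per-frame noise predictions of the 3D UNet are stacked along a frame axis; shapes and names are illustrative, not the released implementation.

```python
import torch

def noise_constraint_loss(eps_pred):
    # Eq. (14): penalise differences between noise predictions of adjacent frames.
    # eps_pred: (B, F, C, H, W) per-frame noise predictions at timestep t.
    diff = eps_pred[:, 1:] - eps_pred[:, :-1]
    return diff.pow(2).mean()

def stage2_total_loss(eps_pred, eps_target, lam=0.9):
    # Eq. (15): L_Total = (1 - lam) * L_noise + lam * L_rec, with lam = 0.9.
    # L_rec is the usual noise-prediction MSE computed on ordered, unmasked frames.
    l_rec = (eps_pred - eps_target).pow(2).mean()
    l_noise = noise_constraint_loss(eps_pred)
    return (1.0 - lam) * l_noise + lam * l_rec
```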
3.4 Inference Pipelines In the inference stage, we first extract the human skeleton S_rf from the reference video to guide motion generation. Then, to ensure that the object's content and background are unchanged, we use a two-branch architecture (reconstruction branch and editing branch) similar to [45] to inject the object's content and background features into the editing branch. Specifically, we first input the latent noise z_s obtained from DDIM inversion of the source video, together with P_a, into the reconstruction branch. Simultaneously, we input z_s and P_t into the editing branch. Then, we input the human skeleton S_rf from the reference video and P_t into ControlNet to obtain the feature C_rf as: C_rf = ControlNet(S_rf, P_t), (16) where C_rf is the pose feature of the reference video, used to guide the generation of motion in the editing branch. Next, we inject the spatial features from the reconstruction branch into the editing branch. Because the temporal relationship was disrupted and the background was masked in the first training stage of DPL, we directly inject the keys and values of RC-Attn in the reconstruction branch into the editing branch without needing segmentation masks. The specific formula can be written as: K^r = W_K z^s_{v_i}, V^r = W_V z^s_{v_i}, (17) K^e = [W_K z^e_{v_{i-1}}, W_K z^e_{v_i}, K^r], V^e = [W_V z^e_{v_{i-1}}, W_V z^e_{v_i}, V^r], (18) where e denotes the editing branch and r denotes the reconstruction branch. In the end, we obtain the edited video.
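A minimal sketch of the key/value injection in Eqs. (17)-(18): the editing branch's RC-Attn keys and values are extended with keys and values computed from the reconstruction-branch (source) latents. Shapes and names are illustrative assumptions.

```python
import torch

def inject_reconstruction_kv(w_k, w_v, z_edit_cur, z_edit_prev, z_src_cur):
    # Build the editing-branch keys/values of Eqs. (17)-(18).
    # w_k, w_v:    shared key/value projections (e.g. nn.Linear modules)
    # z_edit_cur:  (N, D) editing-branch tokens of the current frame
    # z_edit_prev: (N, D) editing-branch tokens of the previous frame
    # z_src_cur:   (N, D) reconstruction-branch tokens of the current source frame
    k_r = w_k(z_src_cur)                                              # K^r (Eq. 17)
    v_r = w_v(z_src_cur)                                              # V^r (Eq. 17)
    k_e = torch.cat([w_k(z_edit_prev), w_k(z_edit_cur), k_r], dim=0)  # K^e (Eq. 18)
    v_e = torch.cat([w_v(z_edit_prev), w_v(z_edit_cur), v_r], dim=0)  # V^e (Eq. 18)
    return k_e, v_e
```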
4 EXPERIMENTAL 4.1 Implementation Details Our proposed Edit-Your-Motion is based on the Latent Diffusion Model [36] (Stable Diffusion). The data in this article comes from the TaichiHD [40] and YouTube video datasets, in which each video has a minimum of 70 frames. During training, we fine-tune for 300 steps in each of the two training stages at a learning rate of 3 × 10^-5. For inference, we use the DDIM sampler [42] with no classifier guidance [15] in our experiments. For each video, the fine-tuning takes about 15 minutes on a single NVIDIA A100 GPU. 4.2 Comparison Methods To demonstrate the superiority of our Edit-Your-Motion, we select methods from motion customization, pose-guided video generation, video content editing, and video motion editing as comparison methods. (1) Tune-A-Video [51]: the first work on one-shot video editing; it inflates a pre-trained T2I diffusion model to 3D to handle the video task. (2) MotionEditor [45]: the first work on video motion editing that keeps the object content and background unchanged; since its code is not provided, the experimental results in this paper are obtained by replication. (3) Follow-Your-Pose [26]: generates pose-controllable videos using two-stage training. (4) MotionDirector [66]: generates motion-aligned videos for video motion customization by decoupling appearance and motion in reference videos. 4.3 Evaluation Our method can edit the motion of objects in the source video by using the reference video and a prompt without changing the object content and the background. Fig. 4 shows some of our examples. As can be seen, our proposed Edit-Your-Motion accurately controls the motion and preserves the object's content and background well. More cases are provided in the appendix. Qualitative Results. Fig. 3 shows the visual comparison of Edit-Your-Motion with the other comparison methods on 25 in-the-wild cases. Figure 3: Qualitative comparison with state-of-the-art methods. Compared to other baselines, Edit-Your-Motion successfully achieves motion alignment with the reference video and maintains the content consistency of the background and objects. Although Follow-Your-Pose and MotionDirector can align well with the motion of the reference video, it is difficult for them to maintain consistency between the object content and background in both the source and reference videos. This demonstrates that generating a specific background and content using only text prompts is difficult. Tune-A-Video and MotionEditor show noticeable content changes. In addition, MotionEditor shows motion overlap (arms) caused by using the segmentation mask to decouple overlapping features. In contrast, our proposed Edit-Your-Motion aligns the motion of the edited video with the reference video well and preserves the content and background of the objects in the source video intact. This also demonstrates the effectiveness of our method in video motion editing. Quantitative Results. We evaluate the methods with automatic evaluations and human evaluations on 25 in-the-wild cases. Automatic Evaluations. To quantitatively assess the differences between our proposed Edit-Your-Motion and the other comparative methods, we use the following metrics: (1) Text Alignment (TA): we use CLIP [34] to compute the average cosine similarity between the prompt and the edited frames. (2) Temporal Consistency (TC): we use CLIP to obtain image features and compute the average cosine similarity between neighbouring video frames. (3) LPIPS-N (L-N): we calculate the Learned Perceptual Image Patch Similarity [62] between neighbouring edited frames. (4) LPIPS-S (L-S): we calculate the Learned Perceptual Image Patch Similarity between edited frames and source frames.
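As one plausible way to compute these metrics, the sketch below uses the Hugging Face CLIP model and the lpips package; the exact CLIP backbone and preprocessing used in the paper are not specified, so the checkpoint name and details here are assumptions.

```python
import torch
import lpips
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
proc = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')
lpips_fn = lpips.LPIPS(net='alex')           # expects image tensors scaled to [-1, 1]

@torch.no_grad()
def clip_scores(frames_pil, prompt):
    # Text Alignment (TA) and Temporal Consistency (TC) for one edited clip.
    inputs = proc(text=[prompt], images=frames_pil, return_tensors='pt', padding=True)
    img = clip.get_image_features(pixel_values=inputs['pixel_values'])
    txt = clip.get_text_features(input_ids=inputs['input_ids'],
                                 attention_mask=inputs['attention_mask'])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    ta = (img @ txt.T).mean().item()                  # frame-to-prompt cosine similarity
    tc = (img[:-1] * img[1:]).sum(-1).mean().item()   # neighbouring-frame cosine similarity
    return ta, tc

@torch.no_grad()
def lpips_neighbour(frames):
    # LPIPS-N: perceptual distance between neighbouring edited frames.
    # frames: (F, 3, H, W) tensor in [-1, 1].
    return lpips_fn(frames[:-1], frames[1:]).mean().item()
```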
Table 1: Quantitative evaluation using CLIP and LPIPS. TA, TC, L-N and L-S denote Text Alignment (higher is better), Temporal Consistency (higher is better), LPIPS-N (lower is better) and LPIPS-S (lower is better), respectively. Follow-Your-Pose [26]: TA 0.236, TC 0.913, L-N 0.213, L-S 0.614. MotionDirector [66]: TA 0.239, TC 0.872, L-N 0.141, L-S 0.430. Tune-A-Video [51]: TA 0.278, TC 0.934, L-N 0.137, L-S 0.359. MotionEditor [45]: TA 0.286, TC 0.948, L-N 0.102, L-S 0.300. Ours: TA 0.289, TC 0.950, L-N 0.109, L-S 0.276. Table 1 shows the quantitative results of Edit-Your-Motion and the other comparative methods. Edit-Your-Motion achieves the best TA, TC and L-S scores and is competitive with MotionEditor on L-N. User Study. We invited 70 participants to take part in the user study. Each participant could see the source video, the reference video, and the results of our method and the other comparison methods. For each case, we paired the result of Edit-Your-Motion with the result of each of the four comparison methods. Then, we set three questions to evaluate Text Alignment, Content Alignment and Motion Alignment. The three questions are \"Which is more aligned to the text prompt?\", \"Which is more content aligned to the source video?\" and \"Which is more motion aligned to the reference video?\". Table 2 shows that our method outperforms the other compared methods in all three aspects. Figure 4: Some examples of motion editing results for Edit-Your-Motion. Table 2: User study. Higher values indicate stronger user preference for our Edit-Your-Motion. TA, CA and MA represent Text Alignment, Content Alignment and Motion Alignment, respectively. Follow-Your-Pose [26]: TA 87.142%, CA 96.663%, MA 90.953%. MotionDirector [66]: TA 94.522%, CA 96.190%, MA 86.188%. Tune-A-Video [51]: TA 78.810%, CA 82.145%, MA 84.047%. MotionEditor [45]: TA 76.428%, CA 82.380%, MA 80.950%. 4.4 Ablation Study To verify the effectiveness of the proposed modules, we show the results of the ablation experiments in Fig. 5. In column 3, we replace RC-Attn with Sparse Attention, which makes the first frame inconsistent with the object content in the subsequent frames. This shows that RC-Attn establishes content consistency over the entire sequence better than Sparse Attention. In column 4, removing the Noise Constraint Loss (NCL) affects the smoothness between frames, causing the background to be inconsistent between frames. In column 5, we train RC-Attn and Temporal Attention in a single training stage. The lack of spatio-temporal decoupling results in background and object content interfering with each other, generating undesirable edited videos. At the same time, this also demonstrates the effectiveness of DPL in decoupling time and space. Figure 5: Some examples of video motion editing results for Edit-Your-Motion (columns: source video, reference video, w/o RCA, w/o NCL, w/o DPL, Edit-Your-Motion, for the prompt 'A girl with a black top and black shorts is waving her hand dancing.').
5", + "additional_graph_info": { + "graph": [], + "node_feat": { + "Yi Zuo": [ + { + "url": "http://arxiv.org/abs/2405.04496v1", + "title": "Edit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing", + "abstract": "Existing diffusion-based video editing methods have achieved impressive\nresults in motion editing. Most of the existing methods focus on the motion\nalignment between the edited video and the reference video. However, these\nmethods do not constrain the background and object content of the video to\nremain unchanged, which makes it possible for users to generate unexpected\nvideos. In this paper, we propose a one-shot video motion editing method called\nEdit-Your-Motion that requires only a single text-video pair for training.\nSpecifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to\ndecouple spatio-temporal features in space-time diffusion models. DPL separates\nlearning object content and motion into two training stages. In the first\ntraining stage, we focus on learning the spatial features (the features of\nobject content) and breaking down the temporal relationships in the video\nframes by shuffling them. We further propose Recurrent-Causal Attention\n(RC-Attn) to learn the consistent content features of the object from unordered\nvideo frames. In the second training stage, we restore the temporal\nrelationship in video frames to learn the temporal feature (the features of the\nbackground and object's motion). We also adopt the Noise Constraint Loss to\nsmooth out inter-frame differences. Finally, in the inference stage, we inject\nthe content features of the source object into the editing branch through a\ntwo-branch structure (editing branch and reconstruction branch). With\nEdit-Your-Motion, users can edit the motion of objects in the source video to\ngenerate more exciting and diverse videos. Comprehensive qualitative\nexperiments, quantitative experiments and user preference studies demonstrate\nthat Edit-Your-Motion performs better than other methods.", + "authors": "Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, Yuwei Guo", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Diffusion-based [22, 41, 44, 49, 53] video motion editing aims to control the motion (e.g., standing, dancing, running) of objects in the source video based on text prompts or other conditions (e.g., depth map, visible edges, human poses, etc), while preserving the integrity of the source background and object\u2019s content. This technique is especially valuable in multimedia [6, 10, 21, 33, 52, 56, 58, 63], including advertising, artistic creation, and film production. It allows users to effortlessly modify the motion of objects in videos Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. ACM MM, 2024, Melbourne, Australia \u00a9 2024 Copyright held by the owner/author(s). 
Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn using a video motion editing model, eliminating the necessity for complex software. In prior studies, researchers primarily utilized generative methods to create videos featuring specific actions, with few efforts focusing on editing motions within a specific video. For example, several prior studies [26, 64, 65] have focused on pose-guided video generation, which involves creating videos that align with specified human poses. Other studies [9, 17, 25, 35, 57, 66] to generate videos with the same motion by learning the motion features in the source video. These studies operate within the text-driven space-time diffusion model framework, engineered to learn the link between textual prompt inputs and corresponding video outputs. However, the spatial and temporal features of the video are not separated during the training, which makes them entangled. The spatial features are usually represented as the object\u2019s content, and the temporal features are usually represented as the background and motion. This entangled state leads to overlapping object content, background and motion in the space-time diffusion model. As a result, it is challenging to generate highly aligned videos with the fine-grained foreground and background of the source video, even when detailed text descriptions are used. Intuitively, the key to video motion editing lies in decoupling [8, 54, 60] the temporal and spatial features of the space-time diffusion model. MotionEditor [45] first explored this problem by utilizing a twobranch structure in the inference stage to decouple the object\u2019s content and background in the feature layer by the object\u2019s segmentation mask. However, since the MotionEditor\u2019s model learns the relationship between the prompt and the entire video during the training stage, the features of objects and the background overlap in the feature layer. This overlap makes it challenging to distinguish between the background and the objects using only the segmentation mask [23, 39, 50]. In this paper, we explore methods to separate the learning of temporal and spatial features in space-time diffusion models. To this end, we propose a one-shot video motion editing method named Edit-Your-Motion that requires only a single text-video pair for training. Specifically, we propose the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage learning strategy designed to separate spatio-temporal features within space-time diffusion models. Furthermore, we propose Recurrent-Causal Attention (RC-Attn) as an enhancement over Sparse-Causal Attention. The RecurrentCausal Attention allows early frames in a video to receive information from subsequent frames, ensuring consistent content of objects throughout the video without adding computational burden. Additionally, we construct the Noise Constraint Loss [31] to minimize inter-frame differences of the edited video during the second training stage. During DPL, we use the space-time diffusion model (inflated UNet [37]) as the backbone and integrate ControlNet [61] to control the generation of motion. In the first training stage, we activate Recurrent-Causal Attention and freeze the other parameters. Then, we randomly disrupt the order of frames in the source video and mask the background to guide Recurrent-Causal Attention to focus on learning the content features of objects. 
In the second training stage, we activate Temporal Attention [48] and freeze other parameters to learn motion and background features from ordered video \fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia frames. Concurrently, Noise Constraint Loss is used to minimize the difference between frames. In the inference stage, we first perform a DDIM [42] inversion for the source video to introduce latent noise and facilitate the smoothness of the edited video. Then, the pose information of the reference video is introduced via ControlNet. Next, to ensure that the content of the objects in the edited video remains consistent with that of the source video, we utilize a two-branch structure (edit branch and reconstruction branch) similar to [45]. However, unlike MotionEditor, DPL distinctly decoupled spatial and temporal features into Recurrent-Causal Attention and Temporal Attention, respectively. Therefore, we only inject the key and value of Recurrent-Causal Attention from the reconstruction branch into the editing branch, eliminating the need for the segmentation mask. In conclusion, our contributions are as follows: \u2022 We further explored how to decouple spatio-temporal features in video motion editing explicitly and proposed a oneshot video motion editing method named Edit-Your-Motion. \u2022 We designed the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage training method. It can decouple the space-time diffusion model\u2019s overlapping spatial and temporal features, thereby avoiding interference from background features during the editing object\u2019s motion. \u2022 We designed Recurrent-Causal Attention to assist DPL in learning the more comprehensive content of objects in the first training stage. In addition, We constructed the Noise Constraint Loss to smooth out inter-frame differences in the second training stage. \u2022 We conduct experiments on in-the-wild videos, where the results show the superiority of our method compared with the state-of-the-art. 2 RELATED WORK In this section, we provide a brief overview of the fields related to video motion editing and point out the connections and differences between them and video motion editing. 2.1 Image Editing Recently, a large amount of work has been done on image editing using diffusion models [7, 30, 36]. SDEdit [28] is the first method for image synthesis and editing based on diffusion models. Promptto-Prompt [13] edits images by referencing cross-attention in the diffusion process. Plug-and-play [46] provides fine-grained control over the generative structure by manipulating spatial features during generation. UniTune [47] completes text-conditioned image editing tasks by fine-tuning. For non-rigidly transformed image editing, Imagic [19] preserves the overall structure and composition of the image by linearly interpolating between texts, thus accomplishing non-rigid editing while. Masactrl [4] converts selfattention to mutual self-attention for non-rigid image editing. On the other hand, InstructPix2Pix [3] has devised a method of editing images by written instructions rather than textual descriptions of image content. Unlike text-driven image editing, DreamBooth [38] generates new images with theme attributes by using several different images of a given theme. However, these methods lack temporal modeling, and it is difficult to maintain consistency between frames when generating video. 
2.2 Pose-guided and Motion-Customization Video Generation Pose-guided image and video generation is a method to control image and video generation by adding additional human poses. ControlNet [61] references additional conditions via auxiliary branches to produce images consistent with the condition map. Follow-YourPose [26] controls video generation given human skeletons. It uses a two-stage training to learn to pose and control temporal consistency. ControlVideo [64] is adapted from ControlNet and uses cross-frame interaction to constrain appearance coherence between frames. Control-A-Video [65] enhances faithfulness and temporal consistency by fine-tuning the attention modules in both the diffusion models and ControlNet. Unlike the pose-guided video generation model, the motioncustomization video generation model generates videos with the same motion by learning the motion features in the source video. Customize-A-Video [35] designed an Appearance Absorber module to decompose the spatial information of motion, thus directing the Temporal LoRA [16] to learn the motion information. MotionCrafter [66] customizes the content and motion of the video by injecting motion information into U-Net\u2019s temporal attention module through a parallel spatial-temporal architecture. VMC [17] fine-tunes only the temporal attention layer in the video diffusion model to achieve successful motion customization. Unlike these methods, video motion editing requires controlling the motion of the source video object while maintaining its content and background. 2.3 Video Editing The current video editing models can be divided into two categories: video content editing models [1, 5, 20, 24, 32, 51, 67] and video motion editing models [45]. The video content editing model is designed to modify the background and object\u2019s content (e.g., the scene in the background, the clothes colour, the vehicle\u2019s shape, etc.) in the source video. In video content editing, Tune-A-Video [51] introduces the OneShot Video Tuning task for the first time, which trains the spacetime diffusion model by a single text-video pair. FateZero [32] uses cross-attention maps to edit the content of videos without any training. Mix-of-show [12] fine-tune the model through low-rank adaptions [16] (LoRA) to prevent the crash of knowledge learned by the pre-trained model. Some other approaches [2, 5, 20] use NLA [18] mapping to map the video to a 2D atlas to decouple the object content from the background to edit the content of the object effectively. In video motion editing, MotionEditor [45] uses the object\u2019s segmentation mask to decouple the content and background in the feature layer. Content features are then injected into the editing branch to maintain content consistency. Since the object and the background overlap in the feature layer, it is difficult to accurately separate the object\u2019s content from the background features with the segmentation mask. \fACM MM, 2024, Melbourne, Australia Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, and Yuwei Guo Our approach decouples the object from the background during the training stage and directs RC-Attn and Temporal Attention to learn spatial and temporal features, respectively. This ensures that the source video content is accurately injected. 3 METHOD In video motion editing, the focus is on decoupling the spatiotemporal features of the diffusion model. 
To this end, we propose Edit-Your-Motion, a one-shot video motion editing method trained only on a pair of source and reference videos. Specifically, we design the Detailed Prompt-Guided Learning strategy (DPL), a two-stage learning strategy capable of decoupling spatio-temporal features in the space-time diffusion model. In the first training stage, we shuffle the video frames to disrupt the temporal relationship of the video. Then, mask the background and learn intently spatial features (object content) from the unordered frames. We further propose Recurrent-Causal Attention (RC-Attn) instead of Sparse-Causal Attention to construct consistent features of objects over the whole sequence. In the second training stage, we recover the temporal relationships in the video frames to learn the temporal features (the background and object motion). To smooth out the inter-frame differences, we also construct Noise Constraint Loss. Finally, in the inference stage, we use the deconstruction with a two-branch structure [66] (reconstruction branch and editing branch). Since the spatial and temporal features have been decoupled in the training stage, we obtain the background and motion features in the editing branch and inject the content features of the objects in the reconstruction branch into the editing branch. Fig. 2 illustrates the pipeline of Edit-Your-Motion. To introduce our proposed Edit-Your-Motion, we first introduce the basics of the text-video diffusion model in Sec. 3.1. Then, Sec. 3.2 introduces our proposed Recurrent-Causal Attention (RC-Attentio). After that, in Sec. 3.3, our proposed Detailed Prompt-Guided Learning strategy and Noise Constraint Loss are described. Finally, we will introduce the inference stage in Sec. 3.4. 3.1 Preliminaries Denoising Diffusion Probabilistic Models. The denoising diffusion probabilistic models [11, 14, 27, 55] (DDPMs) consists of a forward diffusion process and a reverse denoising process. During the forward diffusion process, it gradually adds noise \ud835\udf16to a clean image \ud835\udc990 \u223c\ud835\udc5e(\ud835\udc990) with time step \ud835\udc61, obtaining a noisy sample \ud835\udc65\ud835\udc61. The process of adding noise can be represented as: \ud835\udc5e(\ud835\udc99\ud835\udc61|\ud835\udc99\ud835\udc61\u22121) = N (\ud835\udc99\ud835\udc61| \u221a\ufe01 1 \u2212\ud835\udefd\ud835\udc61\ud835\udc99\ud835\udc61\u22121, \ud835\udefd\ud835\udc61I), (1) where \ud835\udefd\ud835\udc61\u2208(0, 1) is a variance schedule. The entire forward process of the diffusion model can be represented as a Markov chain from time \ud835\udc61to time \ud835\udc47, \ud835\udc5e(\ud835\udc991:\ud835\udc47) = \ud835\udc5e(\ud835\udc990) \ud835\udc47 \u00d6 \ud835\udc61=1 \ud835\udc5e(\ud835\udc99\ud835\udc61|\ud835\udc99\ud835\udc61\u22121) . (2) Then, in reverse processing, noise is removed through a denoising autoencoders \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61) to generate a clean image. The corresponding objective can be simplified to: \ud835\udc3f\ud835\udc37\ud835\udc40= E\ud835\udc65,\ud835\udf16\u223cN(0,1),\ud835\udc61 \u0002 \u2225\ud835\udf16\u2212\ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61)\u22252 2 \u0003 . (3) Latent Diffusion Models. Latent Diffusion models (LDM) [29, 36, 59] is a newly introduced variant of DDPM that operates in the latent space of the autoencoder. Specifically, the encoder E compresses the image to latent features \ud835\udc9b= E(\ud835\udc99). 
Then performs a diffusion process over \ud835\udc67, and finally reconstructs latent features back into pixel space using the decoder D. The corresponding objective can be represented as: \ud835\udc3f\ud835\udc3f\ud835\udc37\ud835\udc40= EE(\ud835\udc65),\ud835\udf16\u223cN(0,1),\ud835\udc61 h \u2225\ud835\udf16\u2212\ud835\udf16\ud835\udf03(\ud835\udc67\ud835\udc61,\ud835\udc61)\u22252 2 i . (4) Text-to-Video Diffusion Models. Text-to-Video Diffusion Models [43] train a 3D UNet \ud835\udf163\ud835\udc37 \ud835\udf03 with text prompts \ud835\udc50as a condition to generate videos using the T2V model. Given the \ud835\udc39frames \ud835\udc991...\ud835\udc39of a video, the 3D UNet is trained by \ud835\udc3f\ud835\udc472\ud835\udc49= EE(\ud835\udc651...\ud835\udc39),\ud835\udf16\u223cN(0,1),\ud835\udc61,\ud835\udc50 \u0014\r \r \r\ud835\udf16\u2212\ud835\udf163\ud835\udc37 \ud835\udf03 (\ud835\udc671...\ud835\udc39 \ud835\udc61 ,\ud835\udc61,\ud835\udc50) \r \r \r 2 2 \u0015 , (5) where \ud835\udc671...\ud835\udc39 \ud835\udc61 is the latent features of \ud835\udc991...\ud835\udc39, \ud835\udc671...\ud835\udc39 \ud835\udc61 = E(\ud835\udc991...\ud835\udc39). 3.2 Recurrent-Causal Attention Like Tune-A-Video [51], we use the inflated U-Net network (spacetime diffusion model) as the backbone of Edit-Your-Motion, consisting of stacked 3D convolutional residual blocks and transform blocks. Each transformer block consists of Sparse-Causal Attention, Cross Attention, Temporal Attention, and a Feed-Forward Network (FFN). To save computational overhead, Tune-A-Video uses the current frame latent \ud835\udc67\ud835\udc63\ud835\udc56\u2208 \b \ud835\udc67\ud835\udc630, . . . ,\ud835\udc67\ud835\udc63\ud835\udc56\ud835\udc5a\ud835\udc4e\ud835\udc65 \t as the query for Sparse-Causal Attention. Meanwhile, the previous frame latent \ud835\udc67\ud835\udc63\ud835\udc56\u22121 is combined with the first frame latent \ud835\udc67\ud835\udc631 to obtain the key and value. The specific formula is as follows: \ud835\udc44= \ud835\udc4a\ud835\udc44\ud835\udc67\ud835\udc63\ud835\udc56, \ud835\udc3e= \ud835\udc4a\ud835\udc3e\u0002 \ud835\udc67\ud835\udc631,\ud835\udc67\ud835\udc63\ud835\udc56\u22121 \u0003 ,\ud835\udc49= \ud835\udc4a\ud835\udc49\u0002 \ud835\udc67\ud835\udc631,\ud835\udc67\ud835\udc63\ud835\udc56\u22121 \u0003 , (6) where [\u00b7] denotes concatenation operation. where \ud835\udc4a\ud835\udc44, \ud835\udc4a\ud835\udc3eand \ud835\udc4a\ud835\udc49are projection matrices. However, because there is less information in the early frames of a video, Sparse-Causal Attention does not consider the connection with the subsequent frames. As a result, it may lead to inconsistencies between the content at the beginning and the end of the video. To solve this problem, we propose a simple Recurrent-Causal Attention with no increase in computational complexity. In RecurrentCausal Attention, key and value are obtained by combining the previous frame latent \ud835\udc67\ud835\udc63\ud835\udc56\u22121 with the current frame latent \ud835\udc67\ud835\udc63\ud835\udc56, not \ud835\udc67\ud835\udc631 with \ud835\udc67\ud835\udc63\ud835\udc56\u22121. Notably, the key and value of the first frame latent \ud835\udc67\ud835\udc631 are obtained from the last frame latent \ud835\udc67\ud835\udc63\ud835\udc56\ud835\udc5a\ud835\udc4e\ud835\udc65with the first frame latent \ud835\udc67\ud835\udc631. This allows the object\u2019s content to propagate throughout the video sequence without adding any computational complexity. 
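For illustration, the wrap-around frame pairing described above can be sketched in a few lines of PyTorch before the formal definition in Eqs. (7)-(9) below. The tensor layout (frames, tokens, channels), the module name, and the single-head formulation are assumptions made for this sketch, not details taken from the authors' implementation.

import torch
import torch.nn as nn


class RecurrentCausalAttention(nn.Module):
    # Single-head sketch; in the full model this block sits inside a transformer layer.
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)

    def forward(self, z):
        # z: (frames, tokens, channels) latent features, one token grid per frame.
        prev = torch.roll(z, shifts=1, dims=0)   # frame i-1; frame 0 is paired with the last frame
        kv = torch.cat([prev, z], dim=1)         # concatenate [z_{i-1}, z_i] along the token axis
        q, k, v = self.to_q(z), self.to_k(kv), self.to_v(kv)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v                          # (frames, tokens, channels)

Here torch.roll pairs each frame with its previous frame and pairs the first frame with the last, so content information can circulate through the whole sequence without extra attention cost.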
The formula for Recurrent-Causal Attention is as follows:

$$Q = W^{Q} z_{v_i}, \qquad (7)$$

$$K = \begin{cases} W^{K}\,[\,z_{v_{i-1}}, z_{v_i}\,] & \text{if } i < i_{max} \\ W^{K}\,[\,z_{v_0}, z_{v_i}\,] & \text{otherwise,} \end{cases} \qquad (8)$$

$$V = \begin{cases} W^{V}\,[\,z_{v_{i-1}}, z_{v_i}\,] & \text{if } i < i_{max} \\ W^{V}\,[\,z_{v_0}, z_{v_i}\,] & \text{otherwise.} \end{cases} \qquad (9)$$

Figure 2: The overall pipeline of Edit-Your-Motion. Edit-Your-Motion decouples spatial features (object appearance) from temporal features (background and motion information) of the source video using the Detailed Prompt-Guided Learning Strategy (DPL). In the first training stage, Recurrent-Causal Attention (RC-Attn) is guided to learn spatial features. In the second training stage, Temporal Attention (Temp-Attn) is guided to learn temporal features. During inference, the spatial features of the source video are injected into the editing branch through the key and value of Recurrent-Causal Attention, thus keeping the source content and background unchanged.

Overall, Recurrent-Causal Attention enables early frames to acquire more comprehensive content information than Sparse-Causal Attention by establishing a link between the first frame and the last frame.

3.3 The Detailed Prompt-Guided Learning Strategy

The purpose of diffusion-based video motion editing is to control the motion of objects in the source video according to a reference video and a prompt, while ensuring that the objects' content and the background remain unchanged. The key lies in decoupling the diffusion model's overlapping temporal and spatial features. MotionEditor uses the object's segmentation mask to decouple the object content and the background in the feature layer. However, the decoupled features still overlap because the spatio-temporal features are already entangled in the model. To decouple these overlapping spatio-temporal features, we design the Detailed Prompt-Guided Learning Strategy (DPL). DPL is divided into two training stages: (1) The First Training Stage: Learning Spatial Features from Shuffled Images, and (2) The Second Training Stage: Learning Temporal Features from Ordered Video Frames.
Next, we describe the two stages in detail.

The First Training Stage: Learning Spatial Features from Shuffled Images. In this stage, the space-time diffusion model focuses on learning the spatial features of the source object. First, we shuffle the order of the video frames to destroy their temporal information, generating unordered video frames $\mathcal{U} = \{u_i \mid i \in [1, n]\}$, where $n$ is the length of the video. If we trained the model directly on unordered frames, the features of the object and the background would overlap. Such overlapping spatio-temporal features are difficult to decouple later and lead to interference from background features when controlling object motion. Therefore, we use an existing segmentation network to extract the segmentation mask $M$ for the unordered video frames and mask out the background as:

$$\mathcal{U}^{M} = \mathcal{U} \cdot M, \qquad (10)$$
$$Z^{M}_{t} = \mathcal{E}(\mathcal{U}^{M}), \qquad (11)$$

where $Z^{M}_{t}$ are the latent features of $\mathcal{U}^{M}$, and $\mathcal{E}(\cdot)$ is the encoder. Then, we utilize an existing skeleton extraction network to obtain the human skeleton $S_{sr}$ in the source video and feed it into ControlNet along with the prompt $P_a$:

$$C_{sr} = \mathrm{ControlNet}(S_{sr}, P_a), \qquad (12)$$

where $C_{sr}$ is the pose feature of the source video. Next, we freeze the other parameters and activate only Recurrent-Causal Attention. Finally, we feed $P_a$ and $C_{sr}$ into the space-time diffusion model for training. The reconstruction loss can be written as follows:

$$L_{rec} = \mathbb{E}_{z^{m}_{t},\, \epsilon \sim \mathcal{N}(0,1),\, t,\, P_a,\, C_{sr}} \left[ \left\| \epsilon - \epsilon^{3D}_{\theta}\big(z^{m}_{t}, t, P_a, C_{sr}\big) \right\|_{2}^{2} \right]. \qquad (13)$$

The Second Training Stage: Learning Temporal Features from Ordered Video Frames. Unlike the first training stage, we restore the temporal relationship of the video frames and guide the space-time diffusion model to learn the temporal features of motion and background from ordered video frames $\mathcal{V} = \{v_i \mid i \in [1, n]\}$. Specifically, we construct a new prompt $P_s$, which adds a description of the motion to $P_a$. Then, Temporal Attention is activated to learn motion features while the other parameters are frozen. To smooth the video, we add the Noise Constraint Loss [31], which can be written as follows:

$$L_{noise} = \frac{1}{n-1}\sum_{i=1}^{n-1} \left\| \epsilon^{f_i}_{z_t} - \epsilon^{f_{i+1}}_{z_t} \right\|_{2}^{2}, \qquad (14)$$

where $f_i$ denotes the $i$-th frame of the video.
\ud835\udf16\ud835\udc53\ud835\udc56 \ud835\udc9b\ud835\udc61is the noise prediction at timestep \ud835\udc61. The total loss for the second training stage is constructed as follows: \ud835\udc3f\ud835\udc47\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc59= (1 \u2212\ud835\udf06)\ud835\udc3f\ud835\udc5b\ud835\udc5c\ud835\udc56\ud835\udc60\ud835\udc52+ \ud835\udf06\ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50, (15) where \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50is constructed from ordered video frames V without segmentation mask \ud835\udc40. \ud835\udf06is set to 0.9. 3.4 Inference Pipelines In the inference stage, we first extract the human skeleton \ud835\udc46\ud835\udc5f\ud835\udc53from the reference video to guide motion generation. Then, to ensure that the object\u2019s content and background are unchanged, we use a two-branch architecture (reconstruction branch and editing branch) similar to [45] to inject the object\u2019s content and background features into the editing branch. Specifically, we first input the latent noise \ud835\udc67\ud835\udc60from the source video DDIM inversion and \ud835\udc43\ud835\udc4einto the reconstruction branch. Simultaneously input \ud835\udc67\ud835\udc60and \ud835\udc43\ud835\udc61into the editing branch. Then, we will input the human skeleton \ud835\udc46\ud835\udc5f\ud835\udc53from the reference video and \ud835\udc43\ud835\udc61 into ControlNet to obtain feature \ud835\udc36\ud835\udc5f\ud835\udc53as: \ud835\udc36\ud835\udc5f\ud835\udc53= \ud835\udc36\ud835\udc5c\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc5c\ud835\udc59\ud835\udc41\ud835\udc52\ud835\udc61(\ud835\udc46\ud835\udc5f\ud835\udc53, \ud835\udc43\ud835\udc61), (16) where \ud835\udc36\ud835\udc5f\ud835\udc53is the pose feature of the reference video to be used to guide the generation of motion in the editing branch. Next, we will inject the spatial features from the reconstruction branch into the editing branch. Due to disrupting the time relationship and mask the background in the first training stage of DPL. Therefore, we directly inject the keys and values of the RC-Attn in the reconstruction branch into the editing branch without needing segmentation masks. The specific formula can be written as: \ud835\udc3e\ud835\udc5f= \ud835\udc4a\ud835\udc3e\ud835\udc67\ud835\udc60 \ud835\udc63\ud835\udc56,\ud835\udc49\ud835\udc5f= \ud835\udc4a\ud835\udc49\ud835\udc67\ud835\udc60 \ud835\udc63\ud835\udc56, (17) \ud835\udc3e\ud835\udc52= h \ud835\udc4a\ud835\udc3e\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56\u22121,\ud835\udc4a\ud835\udc3e\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56, \ud835\udc3e\ud835\udc5fi ,\ud835\udc49\ud835\udc52= h \ud835\udc4a\ud835\udc49\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56\u22121,\ud835\udc4a\ud835\udc49\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56,\ud835\udc49\ud835\udc5fi , (18) \ud835\udc49\ud835\udc5fwhere \ud835\udc52represents the editing branch. \ud835\udc5frepresents the reconstruction branch. In the end, we obtained the edited video. 4 EXPERIMENTAL 4.1 Implementation Details Our proposed Edit-Your-Motion is based on the Latent Diffusion Model [36] (Stabel Diffusion). The data in this article comes from TaichiHD [40] and YouTube video datasets, in which each video has a minimum of 70 frames. During training, we finetune 300 steps for each of the two training stages at a learning rate of 3 \u00d7 10\u22125. For inference, we used the DDIM sampler [42] with no classifier guidance [15] in our experiments. 
For each video, the fine-tuning takes about 15 minutes with a single NVIDIA A100 GPU. 4.2 Comparisons Method To demonstrate the superiority of our Edit-Your-Motion, we have selected methods from motion customization, pose-guided video generation, video content editing, and video motion editing as comparison methods. (1) Tune-A-Video [51]: The first presents the work of one-shot video editing. It inflates a pre-trained T2I diffusion model to 3D to handle the video task. (2) MotionEditor1 [45]: The first examines the work of video motion editing while maintaining the object content and background unchanged. (3) Follow-YourPose [26]: Generating pose-controllable videos using two-stage training. (4) MotionDirector [66]: Generate motion-aligned videos by decoupling appearance and motion in reference videos for videomotion-customization. 4.3 Evaluation Our method can edit the motion of objects in the source video by using the reference video and prompting without changing the object content and the background. Fig. 4 shows some of our examples. As can be seen, our proposed Edit-Your-Motion accurately controls the motion and preserves the object\u2019s content and background well. The more cases are in the appendix. Qualitative Results. Fig. 3 shows the results of the visual comparison of Edit-Your-Motion with other comparison methods on 25 in-the-wild cases. Although Follow-Your-Pose and MotionDirector can align well with the motion of the reference video, it is difficult to 1Since the article\u2019s code is not provided, the experimental results in this paper are obtained by replication. \fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia Source video Reference video Tune-A-Video Ours MotionEditor 4 8 12 16 0 22 6 2 Follow-Your-Pose MotionDirector Source video Reference video Tune-A-Video Ours MotionEditor 0 22 6 2 Follow Your Pose MotionDirector A girl in a plaid top and black skirt is dancing practicing wugong. A boy with a black top and gray pants is playing basketball dancing. Figure 3: Qualitative comparison with state-of-the-art methods. Compared to other baselines, Edit-Your-Motion successfully achieves motion alignment with the reference video and maintains the content consistency of the background and objects. maintain consistency between the object content and background in both the source and reference videos. It demonstrates that generating specific background and content using only text prompts is difficult. Tune-A-Video and MotionEditor show noticeable content changes. In addition, MotionEditor shows motion overlap (arms) caused by using of the segmentation mask to decouple overlapping features. In contrast to the above, our proposed Edit-Your-Motion aligns the motion of the edited video and the reference video well and preserves the content and background of the objects in the source video intact. This also demonstrates the effectiveness of our method in video motion editing. Quantitative results. We evaluate the methods with automatic evaluations and human evaluations on 25 in-the-wild cases. Automatic Evaluations. To quantitatively assess the differences between our proposed Edit-Your-Motion and other comparative methods, we use the following metrics to measure the results: (1) Text Alignment (TA). We use CLIP [34] to compute the average cosine similarity between the prompt and the edited frames. (2) Temporal Consistency (TC). 
We use CLIP to obtain image features and compute the average cosine similarity between neighbouring video frames. (3) LPIPS-N (L-N): We calculate Learned Perceptual Image Patch Similarity [62] between edited neighbouring frames. (4) LPIPS-S (L-S): We calculate Learned Perceptual Image Patch Table 1: Quantitative evaluation using CLIP and LPIPS. TA, TC, L-N, L-S represent Text Alignment, Temporal Consistency, LPIPS-N and LPIPS-S, respectively. Method TA \u2191 TC \u2191 L-N \u2193 L-S \u2193 Follow-Your-Pose [26] 0.236 0.913 0.213 0.614 MotionDirector [66] 0.239 0.872 0.141 0.430 Tune-A-Video [51] 0.278 0.934 0.137 0.359 MotionEditor [45] 0.286 0.948 0.102 0.300 Ours 0.289 0.950 0.109 0.276 Similarity between edited frames and source frames. Table 1 shows the quantitative results of Edit-Your-Motion with other comparative methods. The results show that Edit-Your-Motion outperforms the other methods on all metrics. User Study. We invited 70 participants to participate in the user study. Each participant could see the source video, the reference video, and the results of our and other comparison methods. For each case, we combined the results of Edit-Your-Motion with the results of each of the four comparison methods. Then, we set three \fACM MM, 2024, Melbourne, Australia Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, and Yuwei Guo A boy wearing black clothes and gray pants is playing basketball dancing. A woman in a blue top and white skirt is waving her hand dancing. A girl with a black top and black skirt is dancing practicing Tai Chi. A man with a dark green top and black pants is standing practicing Tai Chi. 0 6 12 18 22 Figure 4: Some examples of motion editing results for Edit-Your-Motion. Table 2: User Study. Higher indicates the users prefer more to our MotionEditor. TA, CA, and MA represent Text Alignment, Content Alignment, and Motion Alignment, respectively. Method TA CA MA Follow-Your-Pose [26] 87.142% 96.663% 90.953% MotionDirector [66] 94.522% 96.190% 86.188% Tune-A-Video [51] 78.810% 82.145% 84.047% MotionEditor [45] 76.428% 82.380% 80.950% questions to evaluate Text Alignment, Content Alignment and Motion Alignment. The three questions are \"Which is more aligned to the text prompt?\", \"Which is more content aligned to the source video?\" and \"Which is more motion aligned to the reference video?\". Table 2 shows that our method outperforms the other compared methods in all three aspects. 4.4 Ablation Study To verify the effectiveness of the proposed module, we show the results of the ablation experiments in Fig. 5. In column 3, we replace RC-Attn with Sparse Attention, which makes the first frame inconsistent with the object content in the subsequent frames. This shows that RC-Attn can better establish content consistency over the entire sequence than with Sparse Attention. In column 4, w/o Noise Constraint Loss (NCL) affects the smoothness between frames, causing the background to be inconsistent between frames. In column 5, we train RC-Attn and Temporal Attention in a training stage. However, the lack of spatio-temporal decoupling results in background and object content interfering, generating undesirable edited videos. At the same time, it also demonstrates the effectiveness of DPL in decoupling time and space. 
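For reference, the two CLIP-based metrics in Section 4.3 (Text Alignment and Temporal Consistency) reduce to simple cosine-similarity averages. The sketch below assumes the frame and prompt embeddings have already been extracted with a CLIP image/text encoder (tensors of shape (F, D) and (D,)); it is an illustration and makes no claim about the authors' exact evaluation code.

import torch
import torch.nn.functional as F


def text_alignment(frame_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    # Mean cosine similarity between the prompt embedding and every edited frame.
    return F.cosine_similarity(frame_emb, text_emb.unsqueeze(0), dim=-1).mean()


def temporal_consistency(frame_emb: torch.Tensor) -> torch.Tensor:
    # Mean cosine similarity between neighbouring frames.
    return F.cosine_similarity(frame_emb[:-1], frame_emb[1:], dim=-1).mean()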
\fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia 40 44 48 58 Source video Reference video w/o RCA w/o NCL w/o DPT Edit-Your-Motion A girl with a black top and black shorts is waving her hand dancing. Figure 5: Some examples of video motion editing results for Edit-Your-Motion. 5" + }, + { + "url": "http://arxiv.org/abs/2012.07941v2", + "title": "Variable Selection with Second-Generation P-Values", + "abstract": "Many statistical methods have been proposed for variable selection in the\npast century, but few balance inference and prediction tasks well. Here we\nreport on a novel variable selection approach called Penalized regression with\nSecond-Generation P-Values (ProSGPV). It captures the true model at the best\nrate achieved by current standards, is easy to implement in practice, and often\nyields the smallest parameter estimation error. The idea is to use an l0\npenalization scheme with second-generation p-values (SGPV), instead of\ntraditional ones, to determine which variables remain in a model. The approach\nyields tangible advantages for balancing support recovery, parameter\nestimation, and prediction tasks. The ProSGPV algorithm can maintain its good\nperformance even when there is strong collinearity among features or when a\nhigh dimensional feature space with p > n is considered. We present extensive\nsimulations and a real-world application comparing the ProSGPV approach with\nsmoothly clipped absolute deviation (SCAD), adaptive lasso (AL), and mini-max\nconcave penalty with penalized linear unbiased selection (MC+). While the last\nthree algorithms are among the current standards for variable selection,\nProSGPV has superior inference performance and comparable prediction\nperformance in certain scenarios. Supplementary materials are available online.", + "authors": "Yi Zuo, Thomas G. Stewart, Jeffrey D. Blume", + "published": "2020-12-14", + "updated": "2021-06-15", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME" + ], + "main_content": "Introduction Data are typically comprised of an outcome and features (predictors or covariates). A common scienti\ufb01c task is to separate important features (signals) from unrelated features (statistical noise) to facilitate modeling, learning, clinical diagnosis, and decision-making. Statistical models are selected for a variety of reasons: predictive ability, interpretability, ability to perform parameter inference, and ease of computation. A model\u2019s set of features is called its \u201csupport\u201d and the task of recovering the model\u2019s true support from observed data is called \u201csupport recovery\u201d. A desirable variable selection method will tend to return the set of true predictors i.e. those features with truly non-zero coe\ufb03cients with high probability. Support recovery aids inference, because knowing the model\u2019s true support bene\ufb01ts parameter estimation by reducing bias and improving e\ufb03ciency. While an incorrectly speci\ufb01ed model can sometimes have better predictive performance than a correctly speci\ufb01ed model (Shmueli et al. (2010)), having the correct support is essential for achieving optimal statistical inference (Zhang et al. (2009); Shortreed and Ertefaie (2017)). Penalized likelihood procedures, originally optimized for prediction tasks, are widely used for variable selection. The lasso, an \u21131 penalization method, produces models with strong predictive ability (Tibshirani (1996)). 
However, the lasso solution that maximizes predictive ability does not always lead to consistent support recovery (Leng et al. (2006); Meinshausen et al. (2006); Shmueli et al. (2010); Bogdan et al. (2015)). This is because noise variables are often included in the lasso solution that maximizes predictive ability (Meinshausen et al. (2006)). The adaptive lasso (AL), which introduces weights in the \u21131 penalty, was proposed to resolve the issue that lasso solutions can be variable selection inconsistent (Zou (2006)). With clever choice of tuning parameters, and in large samples, the adaptive lasso can recover the true support with high probability and yield parameter estimates that converge properly (Zou (2006)). Smoothly clipped absolute deviation (SCAD) (Fan and Li (2001)) and minimax concave penalty with penalized linear unbiased selection (MC+) (Zhang et al. (2010)) make use of distinctive piecewise linear thresholding functions to bridge the gap between the \u21130 and \u21131 algorithms. Both SCAD and MC+ seek to preserve large coe\ufb03cients, like the \u21130 penalty does, and shrink small coe\ufb03cients, like the \u21131 penalty does. While their variable selection properties have been well established, these 2 \fmethods are still not widely used in routine practice. All of the above approaches place a strong emphasis on predictive ability, at the cost of subsequent inference tasks. Because inference is an essential component of scienti\ufb01c investigations, a variable selection approach that balances prediction and inference tasks is highly desirable. Since traditional p-values do not re\ufb02ect whether a variable is scienti\ufb01cally relevant or not (Heinze et al. (2018)), we investigated whether using second-generation p-values (SGPV) (Blume et al. (2018, 2019)) would lead to good support recovery and subsequent parameter estimation and prediction. SGPVs emphasize scienti\ufb01c relevance in addition to statistical signi\ufb01cance, and thus they are a good tool for screening out noise features and identifying the true signals in a set of candidate variables. Following this idea, we propose a variable selection algorithm based on an \u21130-Penalized regression with SGPVs (ProSGPV). The ProSGPV algorithm has a high support recovery rate and low parameter estimation bias, while maintaining good prediction performance even in the high-dimensional setting where p > n. In a series of comprehensive simulations and a real-world application, the ProSGPV algorithm is shown to be a viable alternative to, and often a noticeable improvement on, current variable selection standards such as AL, SCAD and MC+. While only linear models are discussed in this paper, forthcoming work will show that the ProSGPV approach generalizes to models of other classes, including logistic regression, Poisson regression, Cox proportional hazards model, etc. The structure of this paper is as follows. Section 2 provides a brief background. Section 3 describes the proposed ProSGPV algorithm. Section 4 presents simulation studies comparing ProSGPV to AL, SCAD, and MC+ under various feature correlation structures and signal-to-noise ratios. Section 5 illustrates the ProSGPV algorithm using a real-world data application. Section 6 discusses the practical implications of the simulation results and some limitations of ProSGPV, and summarizes key \ufb01ndings in the paper. 
2 Background material We review some fundamental ideas related to shrinkage, thresholding, inference, and prediction in the variable selection context to facilitate subsequent discussions about ProSGPV. Readers familiar with standard variable selection notation, lasso (section 2.1), adaptive 3 \flasso (section 2.2), SCAD and MC+ (section 2.3), and second-generation p-values (section 2.4) may skip to section 3 for the development of the ProSGPV algorithm. 2.1 Lasso The lasso is an \u21131 penalization procedure and one of the most widely used regularization methods for prediction modeling (Tibshirani (1996)). It reduces the feature space and identi\ufb01es a subset of features that maximize predictive accuracy subject to a sparsity condition induced by the \u21131 penalty. The set of features selected by lasso is called the active set. Let Y = (Y1, Y2, ...Yn) denote the response vector, X denote the n \u00d7 p design matrix, and \u03b2 \u2208Rp denote the coe\ufb03cient vector. \u03bb > 0 is a regularization parameter. || \u00b7 ||2 2 is the squared \u21132-norm and || \u00b7 ||1 is the \u21131-norm. Formally, the lasso solution is written as \u02c6 \u03b2 = arg min \u03b2 {1 2||Y \u2212X\u03b2||2 2 + \u03bb||\u03b2||1} (1) The lasso is often used for variable selection because its solution encourages sparsity in the active set. However, even in the classical setting of a \ufb01xed p and a growing n, the lasso active set tends to be di\ufb00erent from the set of true signals. An exception to this is when true feature columns are roughly orthogonal to noise feature columns (Knight and Fu (2000)), which unfortunately, is seldom seen in practice. Wainwright (2009b) improved the ability of the lasso solution to recover the true support under random Gaussian designs and showed that lasso can recover the true support when the e\ufb00ect size is su\ufb03ciently large and when no noise variables are highly correlated with true features. However, these conditions are strong and hard to apply in practice. In addition, even when they are met, there is no explicit way to implement the procedure because the shrinkage factor \u03bb that yields the correct support recovery is unknown (Wang et al. (2013)). Lastly, the soft thresholding function in lasso shrinks large e\ufb00ects and results in biased parameter estimates that are ideal for prediction tasks, but not necessarily optimal for inference tasks. 2.2 Adaptive lasso The adaptive lasso (AL) uses weights in the \u21131 penalty to address the inconsistent variable selection property of the lasso (Zou (2006)). With the right shrinkage parameter, initial 4 \fweights, and weight moments, the adaptive lasso can recover the true support with high probability while preserving prediction performance. Formally, the solution to the adaptive lasso is: \u02c6 \u03b2 n = arg min \u03b2 {1 2||Y \u2212X\u03b2||2 2 + \u03bbn||\u02c6 \u03c9\u03b2||1} (2) where \u02c6 \u03c9 = 1/|\u02c6 \u03b2 \u2217|\u03b3. Here \u03b3 > 0 is a tuning parameter and \u02c6 \u03b2 \u2217is any root-n-consistent estimator of the parameter \u03b2, for example, an OLS estimator, or a lasso estimator. Zou (2006) showed that AL has large-sample oracle (optimal) properties for support recovery and parameter estimation as \u03bbn/\u221an \u21920 and \u03bbnn(\u03b3\u22121)/2 \u2192\u221e. 
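For intuition, the weighted l1 problem in (2) can be solved with any plain lasso solver by rescaling the columns of X by the adaptive weights and back-transforming the fitted coefficients. The sketch below uses scikit-learn with an OLS initial estimate; the function name, defaults, and penalty parameterization (scikit-learn divides the squared-error term by the sample size, so its alpha is not the lambda_n of Eq. (2)) are illustrative assumptions rather than the authors' implementation.

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression


def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    beta_init = LinearRegression().fit(X, y).coef_    # root-n-consistent initial estimator (OLS)
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)     # adaptive weights w_j = 1/|beta*_j|^gamma
    b = Lasso(alpha=alpha).fit(X / w, y).coef_        # plain lasso on columns rescaled by the weights
    return b / w                                      # back-transform to the original scale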
However, when the sample size is \ufb01nite, it can be hard to \ufb01nd a combination of \u02c6 \u03b2 \u2217, \u03b3, and \u03bbn such that the resulting active set matches the true support and the estimated coe\ufb03cients have low bias. 2.3 SCAD and MC+ SCAD and MC+ were designed to bridge \u21130 and \u21131 penalization schemes. As a result, both algorithms use nonconvex penalties. There are considerable advantages that come with using nonconvex penalization, such as a sparse solution and reduced parameter estimation bias, see Fan and Lv (2011, 2013); Zheng et al. (2014); Loh and Wainwright (2015). The penalty function in the SCAD corresponds to a quadratic spline function with knots at \u03bb and \u03b3\u03bb (Fan and Li (2001)). With proper choice of regularization parameters, SCAD can yield consistent variable selection in large samples (Fan and Li (2001)). MC+ has two components: a minimax concave penalty (MCP) and a penalized linear unbiased selection (PLUS) algorithm (Zhang et al. (2010)). MC+ returns a continuous piecewise linear path for each coe\ufb03cient as the penalty increases from zero (least squares) to in\ufb01nity (null model). When the penalty level is set to \u03bb = \u03c3 p (2/n) log(p), the MC+ algorithm has a high probability of support recovery and does not need to assume the strong irrepresentable condition (Wainwright (2009b)) that is required by lasso for support recovery (Zhang et al. (2010)). For visualization, Figure 1 displays the thresholding functions of \u21130 and \u21131 penalties, SCAD, and MC+ when the feature columns are orthogonal. 5 \f2.4 Second-generation p-values Second-generation p-values (SGPV), denoted as p\u03b4, were proposed for use in high dimensional multiple testing contexts (Blume et al. (2018, 2019)). SGPVs attempt to resolve some of the de\ufb01ciencies of traditional p-values by replacing the point null hypothesis with a pre-speci\ufb01ed interval null H0 = [\u2212\u03b4, \u03b4]. The idea is to use the interval as a bu\ufb00er region between \u201cnull\u201d and \u201cnon-null\u201d e\ufb00ects. The interval represents the set of e\ufb00ects that are scienti\ufb01cally indistinguishable or immeasurable from the point null due to limited precision or practicality. SGPV are essentially the fraction of data-supported hypotheses that are null, or nearly null, hypotheses. Formally, let \u03b8 be a parameter of interest, and let I = [\u03b8l, \u03b8u] be an interval estimate of \u03b8 whose length is given by |I| = \u03b8u \u2212\u03b8l. In this paper we will use a 95% CI for I, but any type of the uncertainty interval can be used. If we denote the length of the interval null by |H0|, then the SGPV p\u03b4 is de\ufb01ned as p\u03b4 = |I \u2229H0| |I| \u00d7 max \u001a |I| 2|H0|, 1 \u001b (3) where I \u2229H0 is the intersection of two intervals. The correction term max{|I|/(2|H0|), 1} applies when the interval estimate is very wide, i.e., when |I| > 2|H0|. In that case, the data are often inconclusive and the correction term shrinks the SGPV back to 1/2. As such, SGPVs indicate when data are compatible with null hypotheses (p\u03b4 = 1), or with alternative hypotheses (p\u03b4 = 0), or when data are inconclusive (0 < p\u03b4 < 1). By design, SGPVs emphasize e\ufb00ects that are scienti\ufb01cally meaningful as de\ufb01ned by exceeding a pre-speci\ufb01ed e\ufb00ect size \u03b4. Empirical studies have shown that SGPVs have the potential for identifying feature importance in high dimensional settings (Blume et al. (2018, 2019)). 
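For concreteness, the SGPV in (3) is straightforward to compute from an interval estimate I = [lo, hi] and an interval null H0 = [-delta, delta]; the following sketch assumes both intervals have positive length.

def sgpv(lo, hi, delta):
    # Second-generation p-value for the interval estimate I = [lo, hi]
    # and the interval null H0 = [-delta, delta].
    overlap = max(0.0, min(hi, delta) - max(lo, -delta))      # |I intersect H0|
    interval_len, null_len = hi - lo, 2.0 * delta             # |I| and |H0|
    return (overlap / interval_len) * max(interval_len / (2.0 * null_len), 1.0)

For example, sgpv(0.10, 0.50, 0.05) returns 0 (the interval estimate lies entirely outside the null region, so the data support the alternative), while an interval lying entirely inside the null region returns 1.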
This idea dovetails well with the natural tendency in variable selection to keep variables whose e\ufb00ects are above some threshold, say \u03b4. One extension here is that we will let the null bound \u03b4 shrink to zero at a pre-speci\ufb01ed rate. This slight modi\ufb01cation of the basic SGPV idea makes variable selection by SPGVs much more e\ufb00ective. Sensitivity to the choice of the null bound is assessed in Section 3.3. 6 \f3 The ProSGPV algorithm The ProSGPV algorithm is a two-stage algorithm. In the \ufb01rst stage, a candidate set of variables is acquired. In the second stage, an SGPV-based thresholding is applied to select variables from the candidate set that are meaningfully associated with the outcome. 3.1 Steps The steps of the ProSGPV algorithm are shown below in Algorithm 1. Algorithm 1 ProSGPV 1: procedure ProSGPV(X, Y ) 2: Stage one: Find a candidate set 3: Standardize all inputs (the outcome and features) 4: Fit a lasso and \ufb01nd \u03bbgic using generalized information criterion 5: Fit an OLS model on the lasso active set 6: Stage two: SGPV screening 7: Extract the con\ufb01dence intervals of all variables from the previous OLS model 8: Calculate the mean coe\ufb03cient standard error SE of standardized features 9: Get the SGPV for each variable k with Ik = \u02c6 \u03b2k\u00b11.96\u00d7SEk and H0 = [\u2212SE, SE] 10: Keep variables with SGPV of zero 11: Re-run the OLS model with selected variables on the original scale 12: end procedure Note that the outcome and features are standardized except for the \ufb01nal step. Generalized information criterion (GIC) (Fan and Tang (2013)) is used to \ufb01nd the shrinkage parameter \u03bbgic that leads to a fully-relaxed lasso (Meinshausen (2007)) in the \ufb01rst stage. \u03bb could also be found through cross-validation, as there is evidence that a range of \u03bbs will lead to the true support (Fan and Li (2001); Zou (2006); Wang et al. (2013); Sun et al. (2019)). That adds to the \ufb02exibility of the algortihm. In the second stage, SGPVs are used to screen variables in the candidate set, where the null bound \u03b4 is derived from coe\ufb03cient standard errors. Sensitivity to the choice of the null bound \u03b4 is 7 \fassessed in Section 3.3. We have implemented the ProSGPV algorithm in the ProSGPV R package, which is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=ProSGPV. 3.2 Solution The solution to the ProSGPV algorithm \u02c6 \u03b2 pro is \u02c6 \u03b2 pro = \u02c6 \u03b2 ols |S \u2208Rp, where S = {k \u2208C : |\u02c6 \u03b2ols k | > \u03bbk}, C = {j \u2208{1, 2, ..., p} : |\u02c6 \u03b2lasso j | > 0} (4) where \u02c6 \u03b2 ols |S is a vector of length p with non-zero elements being the OLS coe\ufb03cient estimates from the model with variables only in the set S, the \ufb01nal selection set. C is the candidate set from the \ufb01rst-stage screening. \u02c6 \u03b2lasso j is the jth lasso solution evaluated at \u03bbgic in the \ufb01rst stage. In the second stage, the cuto\ufb00is \u03bbk = 1.96 \u00d7 SEk + SE and \u03bbk is constant over k when the features are all centered and standardized. In that case, the coe\ufb03cient standard errors are identical. The ProSGPV algorithm is e\ufb00ectively a hard thresholding function. In the \ufb01rst stage, variables not selected by lasso are shrunk to zero. In the second stage, ProSGPV relaxes the coe\ufb03cients and shrinks e\ufb00ects smaller than SE to zero while preserving large e\ufb00ects. 
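To make Algorithm 1 concrete, a minimal Python sketch of the two-stage procedure is given below. It substitutes cross-validated lasso for the GIC-tuned first stage (the text above notes that cross-validation is an admissible alternative) and uses statsmodels for the fully relaxed OLS fit; variable names and defaults are illustrative, and the CRAN ProSGPV package remains the reference implementation.

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler


def prosgpv(X, y):
    Xs = StandardScaler().fit_transform(X)            # standardize features
    ys = (y - y.mean()) / y.std()                     # standardize outcome

    # Stage one: lasso screen (CV here as a stand-in for GIC) -> candidate set C.
    candidate = np.flatnonzero(LassoCV(cv=5).fit(Xs, ys).coef_ != 0)
    if candidate.size == 0:
        return candidate

    # Stage two: fully relaxed OLS on C, then SGPV-based screening.
    ols = sm.OLS(ys, sm.add_constant(Xs[:, candidate])).fit()
    beta, se = ols.params[1:], ols.bse[1:]            # drop the intercept
    lo, hi = beta - 1.96 * se, beta + 1.96 * se       # 95% confidence intervals
    null = se.mean()                                  # null bound: average coefficient SE
    keep = (lo > null) | (hi < -null)                 # SGPV == 0  <=>  CI misses [-null, null]
    return candidate[keep]

The returned indices correspond to the final selection set S; refitting OLS on these columns of the unstandardized data gives the ProSGPV estimate in (4).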
Because this is a two-stage algorithm, there does not appear to be a simple closed-form solution for the implied thresholding without conditioning on the \ufb01rst stage. However, this would-be threshold, call it \u03bbnew, tends to be larger than \u03bbgic from lasso. The only routine exception to this is when data have weak signals or high correlation. But in that case, no algorithm can fully recover the true support (Zhao and Yu (2006); Wainwright (2009a)) . A visualization of thresholding functions for several penalization methods (assuming orthogonal features) is displayed in Figure 1. The hard thresholding function in the panel (1) shrinks the coe\ufb03cient estimates to zero when the e\ufb00ects are less than \u03bb and preserves them otherwise. The lasso in (2) shrinks small e\ufb00ects to zero and shrinks large e\ufb00ects by \u03bb. SCAD and MCP in (3) bridge the gap between a hard thresholding function seen in (1) and a soft thresholding function in (2). When the coe\ufb03cient is small (|\u02c6 \u03b8| \u2264\u03bb), both methods have the same behavior as the lasso because the coe\ufb03cient is shrunk to zero in all 8 \fcases. When the coe\ufb03cient is large (|\u02c6 \u03b8| \u2265\u03b3\u03bb), SCAD and MCP have the same behavior as the hard thresholding (no shrinkage is applied). What distinguishes SCAD and MCP is the shape of its thresholding function between \u03bb and \u03b3\u03bb. As mentioned earlier, ProSGPV amounts to a hard thresholding function whose cuto\ufb00is usually larger than \u03bb. Figure 1: Thresholding functions from \ufb01ve algorithms when features are orthogonal. 9 \f3.3 Null bound The null bound in ProSGPV is set to be the average coe\ufb03cient standard error, say SE, from the OLS model on the lasso candidate set. Because of the scaling, this is equivalent to hard-thresholding variables whose absolute coe\ufb03cients are below 1.96\u00d7SEk+SE \u22483\u00d7SE. This is in line with variable selection ideas from literature. Fan and Li (2006) argued that in order to achieve optimal properties of variable selection, the amount of lasso shrinkage must be proportional to the standard error of the maximum likelihood estimates of coe\ufb03cients. Intuitively, the interval null acts as a bu\ufb00er zone to screen out e\ufb00ects that are likely false discoveries. By de\ufb01nition, the sampling distribution of false discoveries will be near the point null (since they are \u201cfalse\u201d discoveries) and the variance of this distribution shrinks at a rate proportional to the information in the sample. Hence, using the SE to delineate the smallest e\ufb00ect size of interest is natural. It is possible that a constant multiplier of the SE might yield a better Type I-Type II error tradeo\ufb00, but after trying some obvious variations we did not \ufb01nd anything better. A sensitivity analysis on the choice of the null bound was conducted and is summarized in Supplementary Figure 1. We compared the support recovery performance of ProSGPV using di\ufb00erent null bounds when signal-to-noise ratio (SNR) is medium or high and when n > p. Choices of null bounds include the original bound SE, SE \u00d7 p log(n/p), SE/ p log(n/p), \u02c6 \u03c3/12, and 0. When the null bound is constant, e.g., \u02c6 \u03c3/12, the support recovery performance is poor. When the null bound is scaled by p log(n/p), performance appears to be slightly improved in the high correlation case, but, importantly, is inferior in all other cases. 
When the null bound is set at 0, ProSGPV amounts to selecting variables using traditional p-values. In this case, the support recovery performance is expectedly poor even when SNR is high, because the null bound of 0 leads to many false positives (Kaufman and Rosset (2014); Janson et al. (2015)). When p > n, the above observations hold because the null bound is calculated from a model with a reduced number of features (same order as s << p, where s is the number of true signals). This sparsity assumption is necessary for successful high-dimensional support recovery (Meinshausen et al. (2006); Zhao and Yu (2006); Wainwright (2009a)). Hence, allowing the null bound, which acts as a thresholding function, to shrink at a \u221an-rate, appears to o\ufb00er the best performance across 10 \fthe widest range of scenarios. 3.4 Example Figure 2 shows the e\ufb00ect of the ProSGPV algorithm on the regression coe\ufb03cients in our simulated setting. Suppose that the true data-generating model is y = X\u03b2+\u03f5 where y is a vector of length 400. The design matrix X has \ufb01ve columns with mean zero and covariance matrix \u03a3i,j = 0.5|i\u2212j|. The coe\ufb03cient vector \u03b2 is zero everywhere except \u03b23 = 0.28. The errors are i.i.d. N(0, 1). We see in Figure 2 that the ProSGPV algorithm succeeds by selecting V3 whereas the lasso and relaxed lasso select V3 and V5 at \u03bbgic. Figure 2: Illustration of the ProSGPV algorithm. Panel (1) presents the colored lasso solution path where the vertical dotted line is the \u03bbgic. Panel (2) shows the fully-relaxed lasso path with point estimates only. Panel (3) shows the same path plus 95% con\ufb01dence intervals in light colors. Panel (4) is the proposed two-stage algorithm\u2019s selection path. The shaded area is the null region and only the 95% con\ufb01dence bound that is closer to zero is shown for each variable. 11 \f3.5 Similar algorithms from the literature Other two-stage algorithms have been proposed for pre-screening features (Meinshausen et al. (2009); Zhang et al. (2009); Wasserman and Roeder (2009); Zhou (2009, 2010); Sun et al. (2019); Weng et al. (2019); Wang et al. (2020)). Meinshausen et al. (2009) proposed a two-stage thresholded lasso, where a lasso model is \ufb01t and features are kept if they pass a data-dependent coe\ufb03cient threshold. Because of this, the resulting coe\ufb03cient estimates are biased even when the correct support is recovered. Wasserman and Roeder (2009) proposed using variable selection methods (lasso, marginal regression, forward stepwise regression, etc.) with cross-validation to pre-screen candidate variables before using Bonferroni corrected t-tests to identify and remove noise features. Wasserman\u2019s method controls the Type I error rate across all features, but pays a higher price in false negatives. ProSGPV, however, allows the Type I error rate to shrink towards zero and yields fewer false positives (See Supplementary Figure 3). Zhang et al. (2009) identi\ufb01ed relevant and irrelevant features from lasso in the \ufb01rst stage and \ufb01t another \u21131-penalized regression using only irrelevant features afterwards. However, Zhang et al. (2009) emphasizes parameter estimation and neglects support recovery. In addition, their algorithm needs to run multiple cross-validations while ProSGPV uses GIC to tune \u03bb and is therefore much faster to compute. Sun et al. (2019) proposed the hard thresholding regression (HRS). 
When lasso is used to derive initial weights, the HRS reduces to the fully relaxed lasso, which is the \ufb01rst stage of our two-stage ProSGPV algorithm. Unlike our algorithm, HRS keeps all variables that survive the \ufb01rst stage. Lastly, Zhou (2009, 2010) used lasso or the Dantzig selector to pre-screen and then used a fully relaxed model on thresholded coe\ufb03cients with a data-driven bound; Weng et al. (2019) selected important variables and penalized only the unselected variables for the \ufb01nal variable selection; Wang et al. (2020) used a bridge regression in the \ufb01rst stage and thresholded variables in the second stage. 3.6 Special case: one-stage ProSGPV algorithm When \u03bbgic is replaced with zero in the \ufb01rst stage of lasso, ProSGPV reduces to a onestage algorithm. That amounts to calculating the SGPV for each variable in the full OLS model and selecting ones that are above the threshold. The one-stage ProSGPV is faster to 12 \fcompute, as no lasso solution path is required. However, it does not appear to be variable selection consistent in the limit, and its inferential performance is inferior to that of the twostage ProSGPV when data do not contain strong signals or features are highly correlated. Moreover, it is not applicable when p > n, i.e., when the OLS model is not identi\ufb01able. For completeness, the support recovery performance of the one-stage algorithm can be found in Supplementary Figure 2. Its performance is very close to the two-stage algorithm when explanatory variables are independent. 3.7 Summary The ideas behind the ProSGPV algorithm are intuitive: exclude small e\ufb00ects using a datadependent threshold for noise and keep large e\ufb00ects. ProSGPV is essentially an \u21130-penalized regression. Unlike the \u21131 penalty, \u21130 optimization is nonconvex, so it is harder to compute and less popular in practice. However, our algorithm avoids enumerating all possible combinations of variables by leveraging the lasso solution in the \ufb01rst stage and threshold e\ufb00ects with an explicit bound afterwards. That translates into less computational cost than other convex optimization algorithms (as seen in Supplementary Figure 6). ProSGPV can also be thought of as a variation of the thresholded lasso with re\ufb01tting. van de Geer et al. (2011) showed that the thresholded lasso with re\ufb01tting requires less severe minimal signal conditions for successful support recovery than adaptive lasso. While lasso is used in the \ufb01rst stage screening, other variable selection methods, such as Sure Independence Screening (SIS) (Fan and Lv (2008)), can be used there. This adds the /\ufb02exibility to our algorithm. Lastly, in terms of post-selection inference, the point estimates and corresponding con\ufb01dence intervals derived from our algorithm are best when the selected model matches the true underlying model. Even when ProSGPV misses true signals, those missed variables often have small e\ufb00ects, which results in minimal impact on the inference of the other larger e\ufb00ects. 13 \f4 Simulation studies Extensive simulation studies were conducted to evaluate the inferential and prediction performance of the ProSGPV algorithm and compare it to existing methods. We investigated both traditional n > p and high-dimensional p > n settings. 4.1 Design The simulation setup is motivated by similar investigations such as Hastie et al. (2020). 
We set sample size n, dimension of explanatory variables p, sparsity level s (number of true signals), true coe\ufb03cient vector \u03b20 \u2208Rp, autocorrelation level \u03c1 within explanatory variables, and signal-to-noise ratio (SNR) \u03bd. In the traditional n > p setting, p is \ufb01xed at 50 and n ranges from 100 to 2000 with an increment of 50. The number of true signals s is \ufb01xed at 10. In the high-dimensional setting, n is \ufb01xed at 200 and p ranges from 200 to 2000 with an increment of 20. Here, the number of true signals is \ufb01xed at 4. \u03b20 has s non-zero values equally-spaced between one and \ufb01ve, at random positions, and the rest are zero. The coe\ufb03cients are half positive and half negative. \u03c1 can take the value of 0 (independent), 0.35 (medium autocorrelation), and 0.7 (high autocorrelation). SNR is de\ufb01ned as SNR = V ar(f(x))/V ar(\u03f5), where data are generated from a probabilistic distribution. SNR take the value of 0.7 (moderate SNR), and 2 (high SNR) (Hastie et al. (2020)). We evaluated the performance of each algorithm using standard metrics: support recovery rate, Type I error rate, power, false discovery rate, false non-discovery rate, along with the mean absolute error (de\ufb01ned below) for parameter estimation, prediction accuracy in a separate test set, and running time. See Supplement Table 1 for detailed de\ufb01nitions of the metrics for inference. Step 1: Draw n rows of the matrix X \u2208Rn\u00d7p i.i.d. from Np(0, \u03a3), where \u03a3 \u2208Rp\u00d7p has entry (i, j) equal to \u03c1|i\u2212j|. Step 2: Generate the response vector Y \u2208Rn from Nn(X\u03b20, \u03c32I), with \u03c32 de\ufb01ned to meet the desired SNR level \u03bd, i.e., \u03c32 = \u03b2T 0 \u03a3\u03b20/\u03bd. Step 3: Run SCAD, MC+, AL, and ProSGPV on the training set with n observations; record the active set from each algorithm; compute evaluation metrics in Supplementary 14 \fTable 1 plus capture rate of the exact true model, absolute bias in parameter estimation, and running time; use a separate test set to compute prediction accuracy. Note that the test set was generated in Step 1, and set aside for later use by in\ufb02ating the target sample size n. Step 4: Repeat the previous steps 1000 times and aggregate the results. SCAD was implemented using the ncvreg package in R and \u03b3 was \ufb01xed at 3.7, MC+ was implemented using the plus package. Adaptive lasso was implemented using the glmnet package and the initial weights are the inverse of absolute value of lasso estimates. For a fair comparison, GIC was used to select \u03bb in all algorithms. The ProSGPV algorithm was implemented using the ProSGPV package. The R code to replicate simulation results can be found at https://github.com/zuoyi93/r-code-prosgpv-linear. 4.2 Results and \ufb01ndings We recorded whether or not each algorithm captured the exact true model in each iteration and compared the average capture rates over 1000 iterations in Figure 3. We also compared the mean absolute error (MAE) of all coe\ufb03cient estimates, de\ufb01ned as 1 p Pp j=1 |\u02c6 \u03b2j \u2212\u03b20,j|, in Figure 4, where \u03b20,j is the jth true coe\ufb03cient. We compared the prediction accuracy of each algorithm, as measured by root mean square error (RMSE) in an independent test set in Figure 5. Power and Type I error rates are presented in Supplementary Figure 3. False discovery proportions (pFDR) and false non-discovery proportions (pFNR) are presented in Supplementary Figure 4. 
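For reproducibility, Steps 1 and 2 of the design in Section 4.1 amount to a short data-generating function. The sketch below follows the AR(1) covariance, random signal positions, and SNR-matched noise variance described above; the function name and seed handling are illustrative choices.

import numpy as np


def simulate(n, p, s, rho, snr, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])     # AR(1) correlation, entry (i, j) = rho^|i-j|
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    vals = np.linspace(1.0, 5.0, s)
    vals[: s // 2] *= -1.0                                  # half positive, half negative
    beta[rng.choice(p, size=s, replace=False)] = vals       # random positions for the s signals
    sigma2 = beta @ Sigma @ beta / snr                      # Var(eps) chosen to hit the target SNR
    y = X @ beta + rng.normal(0.0, np.sqrt(sigma2), size=n)
    return X, y, beta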
The e\ufb00ect of di\ufb00erent parameter tuning methods on MC+ is illustrated in Supplementary Figure 5. The comparison of computation time is shown in Supplementary Figure 6. In Figure 3, capture rates of the exact true model are compared under combinations of SNR and autocorrelation levels within the design matrix, when both n > p and n < p. When n > p, ProSGPV\u2019s support recovery rate increases as n grows. It generally has the highest support recovery rate except when the SNR is medium and correlation is high. MC+ and AL have similar capture rates, while SCAD is the worst among the four. When p > n, support recovery rates are low for all methods and decrease as p increases in the data. ProSGPV again is the highest, followed by SCAD, AL, and MC+. We investigated 15 \fFigure 3: Capture rate of the exact true model under combinations of autocorrelation level, signal-noise-ratios, and (n, p, s). In each panel, one algorithm has a colored solid line representing the average capture rate surrounded by the shaded 95% Wald interval over 1000 simulations. factors driving the support recovery performance in Supplementary Figure 3 and 4. When n > p, we see that all algorithms have decreasing Type I error rates, pFDR, pFNR, and increasing power. When p > n and data are not highly correlated, GIC-based MC+ has notably higher pFDR than the others, indicating that it over\ufb01ts the training data and 16 \fincludes many noise variables. Mean absolute error (MAE) is used to assess the parameter estimation error. When n > p, we used relative MAE which is de\ufb01ned as the ratio of an algorithm\u2019s MAE to that of the OLS model with only true features. A good estimator would have an asymptotic relative MAE of one. When p > n, absolute MAE is used because no OLS \ufb01t is possible. Figure 4 displays the median (relative) MAE of four algorithms under various scenarios. The shading shows the \ufb01rst and third quartiles of the empirical (relative) MAE distribution. In both n > p and n < p cases, ProSGPV has the lowest parameter estimation error. This should not be surprising for sparse settings with well-de\ufb01ned signals, as ProSGPV is e\ufb00ectively an \u21130 penalization derivative and \u21130 penalization drops small e\ufb00ects while keeping large ones. Johnson et al. (2015) showed that the parameter estimation risk of \u21130-penalized regression can be in\ufb01nitely better than that of the \u21131-penalized regression under certain conditions and this is a practical example. The shape of the relative MAE from ProSGPV generally follows what would be expected from a rate of p log(n)/n. This rate matches the ideal rate of parameter estimation in the optimal model from any hard thresholding function, as suggested by Theorem 1 of Zheng et al. (2014). When n > p, AL and MC+ have very close performance and SCAD has the slowest rate of convergence. When n < p, the order stays the same for all except MC+. As p passes 600, MC+ with GIC-based tunning selects more noise variables in the model and the parameter estimation performance is compromised. This can be remedied by using a universal \u03bb = \u03c3 p (2/n) log p in MC+ (see Supplementary Figure 5). Fan and Tang (2013) argued that MC+ has the same performance as SCAD when GIC is used. However, in their setting, s, p, and n are allowed to grow together. In our case, s is \ufb01xed at 4, n is \ufb01xed at 200, and only p grows. 
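The two estimation summaries reported in Figures 3 and 4 are simple functions of the estimated and true coefficient vectors; a minimal sketch, assuming mae_oracle is the MAE of the OLS fit on the true support, is:

import numpy as np


def exact_capture(beta_hat, beta0):
    # 1 if the selected support matches the true support exactly, else 0.
    return float(np.array_equal(beta_hat != 0, beta0 != 0))


def relative_mae(beta_hat, beta0, mae_oracle):
    # MAE of the estimates, scaled by the MAE of the oracle OLS fit on the true support.
    return np.mean(np.abs(beta_hat - beta0)) / mae_oracle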
In Figure 5, the prediction RMSE is calculated in an independent test set (40%) using models built with a training set (60%). Again, when n > p the relative RMSE is used while when n < p the absolute RMSE is used. Relative RMSE is de\ufb01ned as the ratio of the prediction RMSE from one algorithm to that from the OLS model with true signals only. When n > p, all algorithms have worse prediction performance than the true OLS model unless n is really large. But their prediction RMSEs converge to the true OLS RMSE from above as n increases. While ProSGPV is not optimized for prediction tasks, 17 \fFigure 4: Parameter estimation error of all algorithms under combinations of autocorrelation level, signal-to-noise ratio, and (n, p, s). In each panel, one algorithm has a colored solid line representing the median (relative) mean absolute errors surrounded by the shaded \ufb01rst and third quartiles over 1000 simulations. its predictive ability quickly catches up with other algorithms when n/p > 6. When n < p, ProSGPV and AL have the best performance followed by SCAD. SCAD can have better prediction performance when \u03bb is selected by cross-validation. However, in that case, its support recovery is worse than that from the GIC-based SCAD. MC+ has much higher 18 \fFigure 5: Comparison of prediction accuracy of all algorithms under combinations of autocorrelation level, signal-to-noise ratio, and (n, p, s). Median (relative) root mean square errors are surrounded by their \ufb01rst and third quartiles over 1000 simulations. prediction error than the others when p > 600. That is because \u03bb selected by GIC leads to a dense model which includes many noise variables. The over\ufb01tted model has poor prediction performance in an external data set. However, this can be remedied by using a universal \u03bb in MC+, as shown in Supplementary Figure 5. In Supplementary Figure 6, the running time in seconds from all algorithms are compared. The computing environment was 2.6 GHz Dual-Core Intel Core i7 processor and 32 19 \fGB memory. ProSGPV and AL have the shortest computation time, followed by SCAD. MC+ is more time-consuming when data are highly correlated, or when n < p. 5 Real-world example We illustrate our approach using the Tehran housing data (Ra\ufb01ei and Adeli (2016)), which was high SNR (R2 = 0.98) in the OLS model with all variables. We also explored the medium SNR case by removing potentially redundant variables until R2 = 0.4. The Tehran housing data are available as a data object t.housing in the ProSGPV package. The data set contains 26 features and 372 records (see Supplementary Table 2 for the variable description). The goal is to predict the sale price (variable 9 or V9). The explanatory variables consist of seven project physical and \ufb01nancial variables, 19 economic variables, all at baseline. Clustering and correlation patterns are displayed in Supplementary Figure 7. We see that several explanatory variables form prominent clusters and that there is high pairwise correlation among the features. In particular, the price per square meter of the unit at the beginning of the project (V8) has high correlation (\u03c1 = 0.98) with the sale price (V9). We repeatedly split the data into a training set (70%) and a test set (30%). We applied AL, SCAD, MC+, and ProSGPV algorithms on the training set (n=260) with all the covariates. Prediction RMSE was calculated on the test set (n=112). 
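The repeated 70/30 evaluation described above can be sketched as follows, assuming a generic fit_and_select(X_train, y_train) wrapper around whichever selection routine is being compared (the actual comparison used the R packages cited in Section 4.1); the helper name and defaults are hypothetical.

import numpy as np
from sklearn.model_selection import train_test_split


def repeated_test_rmse(X, y, fit_and_select, n_splits=1000, seed=0):
    rmses = []
    for b in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed + b)
        model = fit_and_select(X_tr, y_tr)                  # any of the compared selection routines
        rmses.append(np.sqrt(np.mean((y_te - model.predict(X_te)) ** 2)))
    return np.median(rmses)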
We summarized the sparsity of the solutions (Supplementary Figure 8) and the prediction accuracy (Supplementary Figure 9) over 1000 training-test split repetitions. SCAD overfits the training data and has the largest selection set. AL yields the sparsest model, followed by ProSGPV. GIC-based MC+ yields a constant model size. Regarding prediction performance, ProSGPV has the lowest median prediction error, closely followed by AL and MC+, while SCAD has the largest test error because of overfitting. All algorithms select the duration of construction (V7) and the initial price per square meter (V8) with high frequency; there is no consensus as to which other variables to include because of the high correlation and clustering.

To refine the analysis, we removed variables that had an absolute correlation with the outcome of 0.45 or greater. The remaining covariates explain 40% of the variability in the response, which represents a medium SNR. Of the remaining nine variables, MC+ always selects zero variables. ProSGPV selects four or five variables with high frequency. AL selects six or seven variables with high frequency. SCAD selects more variables than ProSGPV and AL. ProSGPV, SCAD, and AL have similar prediction performance, while MC+ performs worse because of the null model it selects. The commonly selected variables include the total floor area of the building (V2), the lot area (V3), the price per square meter of the unit at the beginning of the project (V8), and the number of building permits issued (V11). The R code to replicate the results is available at https://github.com/zuoyi93/r-code-prosgpv-linear.

6 Practical implications, limitations, and comments

A naive way to perform variable selection is to screen variables by p-values. Such methods include forward selection, backward selection, and stepwise selection (Efroymson (1966)). However, these methods have serious drawbacks: they have poor capture rates of the true underlying model (Wang (2009); Kozbur (2018)) and larger effective degrees of freedom (Kaufman and Rosset (2014); Janson et al. (2015)). In addition, the standard errors of the coefficient estimates are too small, which leads to over-optimistic discoveries (Harrell Jr (2015)). Better approaches do exist, but they are more complex, require specialized software, and are not fully adopted in routine applied practice. SGPVs, however, offer a simple and effective option, with excellent statistical properties in both inference and prediction tasks and no increase in computation time.

Our simulation studies reinforce the notion that a model with good prediction ability does not necessarily lead to good inference. Comparing Figure 3 with Figure 5, we see that models optimized for prediction tend not to be optimized for inferential tasks, even when a different parameter tuning approach is used for each algorithm. This corroborates findings in the literature (Leng et al. (2006); Meinshausen et al. (2006); Wasserman and Roeder (2009); Zheng et al. (2014); Giacobino et al. (2017); Shortreed and Ertefaie (2017)). This point is important and bears repeating: models optimized for prediction tasks do not necessarily support good inference. Similar observations can be made by comparing the parameter estimation in Figure 4 with the prediction performance in Figure 5. ProSGPV does a better job by yielding a model that is primed for inference and also has good prediction properties.
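The correlation-based screening used in the refined analysis of Section 5 above can be written in a few lines of base R. The sketch below is our own hedged illustration, not the authors' replication code; it again assumes the t.housing layout described earlier, with the sale price in column V9 and all covariates numeric.

```r
# Sketch of the refined-analysis screening step: drop covariates whose absolute
# correlation with the outcome (V9) is 0.45 or greater, leaving a medium-SNR subset.
library(ProSGPV)
data(t.housing)

outcome    <- "V9"
covariates <- setdiff(names(t.housing), outcome)
cor.with.y <- sapply(covariates, function(v) cor(t.housing[[v]], t.housing[[outcome]]))

keep    <- covariates[abs(cor.with.y) < 0.45]
reduced <- t.housing[, c(outcome, keep)]

# R^2 of the full OLS model on the reduced covariate set (about 0.4 per the text).
summary(lm(V9 ~ ., data = reduced))$r.squared
```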
There is a link between the SNR and the proportion of variance explained (PVE):

PVE(f) = 1 - E[(y - f(x))^2] / Var(y) = 1 - Var(ε) / Var(y) = SNR / (1 + SNR)    (5)

where f is the mean function and x is independent of ε. Inverting (5), an R² of 0.40 corresponds to an SNR of about 0.67, and an R² of 0.66 corresponds to an SNR of about 2. Practically, when the R² of the full model is around 0.40, which is equivalent to a medium SNR in our simulation, ProSGPV has comparable support recovery performance and slightly better parameter estimation when n is large; when the R² is above 0.66, which corresponds to a high SNR, ProSGPV has better inference properties than the other algorithms.

There are some limitations. Sensitivity to the tuning-parameter specification is an issue for both the implementation and the generalizability of the results. However, we found the findings to be fairly robust to the tuning-parameter specification: in results not shown here, we repeated the experiment using each algorithm's preferred method for choosing a tuning parameter, and the general ordering of the results remained stable. Another limitation arises when the design matrix has high within-correlation, which is a challenging problem for any algorithm. Not unexpectedly, ProSGPV does not do well in support recovery in that setting, and its parameter estimation and prediction performance suffer. We also note that the exact threshold function of the two-stage ProSGPV algorithm is difficult to conceptualize, as the null bound in the fully relaxed lasso has a different feature space than the full feature space. We are actively working on formulating solutions for the two-stage algorithm. Despite these relatively minor limitations, the ProSGPV algorithm looks very promising. It gives up little in terms of prediction, and it offers improved support recovery and parameter estimation compared to the class of standard procedures currently in use. Moreover, the ProSGPV algorithm does not depend on tuning parameters that are hard to specify. It is fair to say that, unlike traditional p-values, second-generation p-values can be used for variable selection and subsequent statistical inference." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file