librarian-bot committed on
Commit bbc7449 · verified · 1 Parent(s): 0b74d21

Scheduled Commit

data/2511.11007.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.11007", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Bridging Hidden States in Vision-Language Models](https://huggingface.co/papers/2511.11526) (2025)\n* [Rethinking Visual Information Processing in Multimodal LLMs](https://huggingface.co/papers/2511.10301) (2025)\n* [Causally-Grounded Dual-Path Attention Intervention for Object Hallucination Mitigation in LVLMs](https://huggingface.co/papers/2511.09018) (2025)\n* [PROPA: Toward Process-level Optimization in Visual Reasoning via Reinforcement Learning](https://huggingface.co/papers/2511.10279) (2025)\n* [VLURes: Benchmarking VLM Visual and Linguistic Understanding in Low-Resource Languages](https://huggingface.co/papers/2510.12845) (2025)\n* [CoCoVa: Chain of Continuous Vision-Language Thought for Latent Space Reasoning](https://huggingface.co/papers/2511.02360) (2025)\n* [Visual Jigsaw Post-Training Improves MLLMs](https://huggingface.co/papers/2509.25190) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.13081.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.13081", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [There is More to Attention: Statistical Filtering Enhances Explanations in Vision Transformers](https://huggingface.co/papers/2510.06070) (2025)\n* [Concept Regions Matter: Benchmarking CLIP with a New Cluster-Importance Approach](https://huggingface.co/papers/2511.12978) (2025)\n* [TextCAM: Explaining Class Activation Map with Text](https://huggingface.co/papers/2510.01004) (2025)\n* [A Quantitative Evaluation Framework for Explainable AI in Semantic Segmentation](https://huggingface.co/papers/2510.24414) (2025)\n* [A Multimodal XAI Framework for Trustworthy CNNs and Bias Detection in Deep Representation Learning](https://huggingface.co/papers/2510.12957) (2025)\n* [From Attribution to Action: Jointly ALIGNing Predictions and Explanations](https://huggingface.co/papers/2511.06944) (2025)\n* [EVO-LRP: Evolutionary Optimization of LRP for Interpretable Model Explanations](https://huggingface.co/papers/2509.23585) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.13593.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.13593", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [LiCoMemory: Lightweight and Cognitive Agentic Memory for Efficient Long-Term Reasoning](https://huggingface.co/papers/2511.01448) (2025)\n* [MR.Rec: Synergizing Memory and Reasoning for Personalized Recommendation Assistant with LLMs](https://huggingface.co/papers/2510.14629) (2025)\n* [LightMem: Lightweight and Efficient Memory-Augmented Generation](https://huggingface.co/papers/2510.18866) (2025)\n* [Preference-Aware Memory Update for Long-Term LLM Agents](https://huggingface.co/papers/2510.09720) (2025)\n* [ENGRAM: Effective, Lightweight Memory Orchestration for Conversational Agents](https://huggingface.co/papers/2511.12960) (2025)\n* [Mem-\u03b1: Learning Memory Construction via Reinforcement Learning](https://huggingface.co/papers/2509.25911) (2025)\n* [AssoMem: Scalable Memory QA with Multi-Signal Associative Retrieval](https://huggingface.co/papers/2510.10397) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.14899.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.14899", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Coupled Diffusion Sampling for Training-Free Multi-View Image Editing](https://huggingface.co/papers/2510.14981) (2025)\n* [Training-Free Multi-View Extension of IC-Light for Textual Position-Aware Scene Relighting](https://huggingface.co/papers/2511.13684) (2025)\n* [MVCustom: Multi-View Customized Diffusion via Geometric Latent Rendering and Completion](https://huggingface.co/papers/2510.13702) (2025)\n* [EditCast3D: Single-Frame-Guided 3D Editing with Video Propagation and View Selection](https://huggingface.co/papers/2510.13652) (2025)\n* [FlashWorld: High-quality 3D Scene Generation within Seconds](https://huggingface.co/papers/2510.13678) (2025)\n* [CloseUpShot: Close-up Novel View Synthesis from Sparse-views via Point-conditioned Diffusion Model.](https://huggingface.co/papers/2511.13121) (2025)\n* [RapidMV: Leveraging Spatio-Angular Representations for Efficient and Consistent Text-to-Multi-View Synthesis](https://huggingface.co/papers/2509.24410) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.15299.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.15299", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ReCon: Region-Controllable Data Augmentation with Rectification and Alignment for Object Detection](https://huggingface.co/papers/2510.15783) (2025)\n* [High-Quality Proposal Encoding and Cascade Denoising for Imaginary Supervised Object Detection](https://huggingface.co/papers/2511.08018) (2025)\n* [Data Factory with Minimal Human Effort Using VLMs](https://huggingface.co/papers/2510.05722) (2025)\n* [Synthetic Object Compositions for Scalable and Accurate Learning in Detection, Segmentation, and Grounding](https://huggingface.co/papers/2510.09110) (2025)\n* [SpotDiff: Spotting and Disentangling Interference in Feature Space for Subject-Preserving Image Generation](https://huggingface.co/papers/2510.07340) (2025)\n* [Salient Concept-Aware Generative Data Augmentation](https://huggingface.co/papers/2510.15194) (2025)\n* [S3OD: Towards Generalizable Salient Object Detection with Synthetic Data](https://huggingface.co/papers/2510.21605) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.15462.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.15462", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [What Drives Paper Acceptance? A Process-Centric Analysis of Modern Peer Review](https://huggingface.co/papers/2509.25701) (2025)\n* [ReviewerToo: Should AI Join The Program Committee? A Look At The Future of Peer Review](https://huggingface.co/papers/2510.08867) (2025)\n* [ReviewGuard: Enhancing Deficient Peer Review Detection via LLM-Driven Data Augmentation](https://huggingface.co/papers/2510.16549) (2025)\n* [LLM-REVal: Can We Trust LLM Reviewers Yet?](https://huggingface.co/papers/2510.12367) (2025)\n* [Paper Copilot: Tracking the Evolution of Peer Review in AI Conferences](https://huggingface.co/papers/2510.13201) (2025)\n* [Gen-Review: A Large-scale Dataset of AI-Generated (and Human-written) Peer Reviews](https://huggingface.co/papers/2510.21192) (2025)\n* [From Authors to Reviewers: Leveraging Rankings to Improve Peer Review](https://huggingface.co/papers/2510.21726) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.16110.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.16110", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SAID: Empowering Large Language Models with Self-Activating Internal Defense](https://huggingface.co/papers/2510.20129) (2025)\n* [VisualDAN: Exposing Vulnerabilities in VLMs with Visual-Driven DAN Commands](https://huggingface.co/papers/2510.09699) (2025)\n* [CrossGuard: Safeguarding MLLMs against Joint-Modal Implicit Malicious Attacks](https://huggingface.co/papers/2510.17687) (2025)\n* [SoK: Taxonomy and Evaluation of Prompt Security in Large Language Models](https://huggingface.co/papers/2510.15476) (2025)\n* [VERA-V: Variational Inference Framework for Jailbreaking Vision-Language Models](https://huggingface.co/papers/2510.17759) (2025)\n* [Black-box Optimization of LLM Outputs by Asking for Directions](https://huggingface.co/papers/2510.16794) (2025)\n* [RAID: Refusal-Aware and Integrated Decoding for Jailbreaking LLMs](https://huggingface.co/papers/2510.13901) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.16334.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.16334", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [VOLD: Reasoning Transfer from LLMs to Vision-Language Models via On-Policy Distillation](https://huggingface.co/papers/2510.23497) (2025)\n* [ARM2: Adaptive Reasoning Model with Vision Understanding and Executable Code](https://huggingface.co/papers/2510.08163) (2025)\n* [DeepEyesV2: Toward Agentic Multimodal Model](https://huggingface.co/papers/2511.05271) (2025)\n* [ARES: Multimodal Adaptive Reasoning via Difficulty-Aware Token-Level Entropy Shaping](https://huggingface.co/papers/2510.08457) (2025)\n* [ACPO: Adaptive Curriculum Policy Optimization for Aligning Vision-Language Models in Complex Reasoning](https://huggingface.co/papers/2510.00690) (2025)\n* [RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning](https://huggingface.co/papers/2510.02240) (2025)\n* [Reward and Guidance through Rubrics: Promoting Exploration to Improve Multi-Domain Reasoning](https://huggingface.co/papers/2511.12344) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.16931.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.16931", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [A Survey of AI Scientists](https://huggingface.co/papers/2510.23045) (2025)\n* [Deep Ideation: Designing LLM Agents to Generate Novel Research Ideas on Scientific Concept Network](https://huggingface.co/papers/2511.02238) (2025)\n* [HIKMA: Human-Inspired Knowledge by Machine Agents through a Multi-Agent Framework for Semi-Autonomous Scientific Conferences](https://huggingface.co/papers/2510.21370) (2025)\n* [A Survey of Data Agents: Emerging Paradigm or Overstated Hype?](https://huggingface.co/papers/2510.23587) (2025)\n* [AgentExpt: Automating AI Experiment Design with LLM-based Resource Retrieval Agent](https://huggingface.co/papers/2511.04921) (2025)\n* [Democratizing AI scientists using ToolUniverse](https://huggingface.co/papers/2509.23426) (2025)\n* [MirrorMind: Empowering OmniScientist with the Expert Perspectives and Collective Knowledge of Human Scientists](https://huggingface.co/papers/2511.16997) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.17074.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.17074", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ScaleWeaver: Weaving Efficient Controllable T2I Generation with Multi-Scale Reference Attention](https://huggingface.co/papers/2510.14882) (2025)\n* [Towards Better & Faster Autoregressive Image Generation: From the Perspective of Entropy](https://huggingface.co/papers/2510.09012) (2025)\n* [Not All Tokens are Guided Equal: Improving Guidance in Visual Autoregressive Models](https://huggingface.co/papers/2509.23876) (2025)\n* [Dynamic Mixture-of-Experts for Visual Autoregressive Model](https://huggingface.co/papers/2510.08629) (2025)\n* [ActVAR: Activating Mixtures of Weights and Tokens for Efficient Visual Autoregressive Generation](https://huggingface.co/papers/2511.12893) (2025)\n* [SoftCFG: Uncertainty-guided Stable Guidance for Visual Autoregressive Model](https://huggingface.co/papers/2510.00996) (2025)\n* [EchoGen: Generating Visual Echoes in Any Scene via Feed-Forward Subject-Driven Auto-Regressive Model](https://huggingface.co/papers/2509.26127) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.17344.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.17344", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Birth of a Painting: Differentiable Brushstroke Reconstruction](https://huggingface.co/papers/2511.13191) (2025)\n* [AvatarTex: High-Fidelity Facial Texture Reconstruction from Single-Image Stylized Avatars](https://huggingface.co/papers/2511.06721) (2025)\n* [UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback](https://huggingface.co/papers/2511.01678) (2025)\n* [X2Video: Adapting Diffusion Models for Multimodal Controllable Neural Video Rendering](https://huggingface.co/papers/2510.08530) (2025)\n* [VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning](https://huggingface.co/papers/2510.08555) (2025)\n* [PickStyle: Video-to-Video Style Transfer with Context-Style Adapters](https://huggingface.co/papers/2510.07546) (2025)\n* [Enhancing Video Inpainting with Aligned Frame Interval Guidance](https://huggingface.co/papers/2510.21461) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.17450.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.17450", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation](https://huggingface.co/papers/2510.04290) (2025)\n* [From Seeing to Predicting: A Vision-Language Framework for Trajectory Forecasting and Controlled Video Generation](https://huggingface.co/papers/2510.00806) (2025)\n* [Object-Aware 4D Human Motion Generation](https://huggingface.co/papers/2511.00248) (2025)\n* [TGT: Text-Grounded Trajectories for Locally Controlled Video Generation](https://huggingface.co/papers/2510.15104) (2025)\n* [Enhancing Physical Plausibility in Video Generation by Reasoning the Implausibility](https://huggingface.co/papers/2509.24702) (2025)\n* [VChain: Chain-of-Visual-Thought for Reasoning in Video Generation](https://huggingface.co/papers/2510.05094) (2025)\n* [PhysCorr: Dual-Reward DPO for Physics-Constrained Text-to-Video Generation with Automated Preference Selection](https://huggingface.co/papers/2511.03997) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.17487.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.17487", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Decoupling Reasoning and Perception: An LLM-LMM Framework for Faithful Visual Reasoning](https://huggingface.co/papers/2509.23322) (2025)\n* [BLINK-Twice: You see, but do you observe? A Reasoning Benchmark on Visual Perception](https://huggingface.co/papers/2510.09361) (2025)\n* [VTPerception-R1: Enhancing Multimodal Reasoning via Explicit Visual and Textual Perceptual Grounding](https://huggingface.co/papers/2509.24776) (2025)\n* [Diagnosing Visual Reasoning: Challenges, Insights, and a Path Forward](https://huggingface.co/papers/2510.20696) (2025)\n* [From Perception to Cognition: A Survey of Vision-Language Interactive Reasoning in Multimodal Large Language Models](https://huggingface.co/papers/2509.25373) (2025)\n* [Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training](https://huggingface.co/papers/2509.26625) (2025)\n* [Agentic Jigsaw Interaction Learning for Enhancing Visual Perception and Reasoning in Vision-Language Models](https://huggingface.co/papers/2510.01304) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2511.17502.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2511.17502", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [dVLA: Diffusion Vision-Language-Action Model with Multimodal Chain-of-Thought](https://huggingface.co/papers/2509.25681) (2025)\n* [Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process](https://huggingface.co/papers/2511.01718) (2025)\n* [UniCoD: Enhancing Robot Policy via Unified Continuous and Discrete Representation Learning](https://huggingface.co/papers/2510.10642) (2025)\n* [Embodiment Transfer Learning for Vision-Language-Action Models](https://huggingface.co/papers/2511.01224) (2025)\n* [VITA-VLA: Efficiently Teaching Vision-Language Models to Act via Action Expert Distillation](https://huggingface.co/papers/2510.09607) (2025)\n* [XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations](https://huggingface.co/papers/2511.02776) (2025)\n* [Spatial Forcing: Implicit Spatial Representation Alignment for Vision-language-action Model](https://huggingface.co/papers/2510.12276) (2025)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}