- Sequence Parallelism: Long Sequence Training from System Perspective
  Paper • 2105.13120 • Published • 6
- Ring Attention with Blockwise Transformers for Near-Infinite Context
  Paper • 2310.01889 • Published • 13
- Striped Attention: Faster Ring Attention for Causal Transformers
  Paper • 2311.09431 • Published • 4
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
  Paper • 2309.14509 • Published • 19
Maozhou Ge (Gmc2)