Dataset Viewer

Columns: id (int64, 398 to 565) | name (string, 7 values) | deadline (timestamp[ns], 2025-09-02 00:00:00 to 2025-12-30 00:00:00) | lang (string, 1 value) | description (string, 7 values) | reference (string, 7 values) | gpu_types (list, length 1)
id: 399 | name: amd-fp8-mm | deadline: 2025-09-02T00:00:00 | lang: py
description:
You will implement a custom fp8-blockwise matmul kernel optimized for MI300.
You will be given single-precision scaling factors for your matrices.
The shapes of all outer and inner dimensions of tensors are from DeepSeek-R1.
To be explicit, you will be given a tuple of tensors:
```
(a, b, a_scale, b_scale, c)
```
where `a` and `b` are the input matrices, `a_scale` and `b_scale` are the scaling factors for `a` and `b` respectively,
and `c` is the output matrix:
* `a` is M x K in column-major order in e4m3fnuz
* `b` is N x K in column-major order in e4m3fnuz
* `a_scale` is M x K // 128 in column-major order in fp32
* `b_scale` is N // 128 x K // 128 in column-major order in fp32
* `c` is M x N in ROW-major order in bf16
Matrix sizes `m` and `n` are divisible by 64, `k` is divisible by 128.
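Conceptually, the kernel must reproduce the matmul of the blockwise-dequantized operands: every 1 x 128 slice of `a` shares one scale and every 128 x 128 block of `b` shares one scale. The sketch below restates that identity in plain PyTorch; it should match the reference kernel further below up to floating-point ordering, and is the slow dequantize-then-matmul path rather than something you would submit.
```python
# Blockwise dequantization identity (a sketch to pin down the math, not a fast kernel).
import torch

def dequant_matmul(a, b, a_scale, b_scale, c):
    M, K = a.shape
    N = b.shape[0]
    a_deq = a.to(torch.float32) * a_scale.repeat_interleave(128, dim=1)[:, :K]           # [M, K]
    b_exp = b_scale.repeat_interleave(128, dim=0).repeat_interleave(128, dim=1)[:N, :K]  # [N, K]
    b_deq = b.to(torch.float32) * b_exp
    c[...] = (a_deq @ b_deq.T).to(torch.bfloat16)                                        # [M, N]
    return c
```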
The ranking criterion is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against the speed-of-light analysis,
and the solution closest to the speed of light will be awarded the grand prize.
```
The speed of light analysis is:
M N K time[us]
1024 1536 7168 8.63
1024 4608 7168 25.89
6144 1536 7168 51.78
6144 4608 7168 155.30
1024 7168 256 3.17
6144 7168 256 17.27
```
reference:
import torch
from task import input_t, output_t
from utils import make_match_reference
block_shape = (128, 128)
def generate_input(m: int, n: int, k: int, seed: int) -> input_t:
"""
Generate random input and weights for Blockwise W8A8 Matmul scaled to FP32.
Returns:
Tuple of (
a: torch.Tensor[float8_e4m3fnuz] of shape [m, k],
b: torch.Tensor[float8_e4m3fnuz] of shape [n, k],
a_scale: torch.Tensor[float32] of shape [m, k // 128],
b_scale: torch.Tensor[float32] of shape [n // 128, k // 128],
c: torch.Tensor[bfloat16] of shape [m, n]
)
"""
gen = torch.Generator(device='cuda')
gen.manual_seed(seed)
block_shape_n, block_shape_k = block_shape
scale_n = (n + block_shape_n - 1) // block_shape_n
scale_k = (k + block_shape_k - 1) // block_shape_k
# Generate random inputs with FP8 quantization
a = (torch.randn((k, m), dtype=torch.bfloat16, device="cuda", generator=gen)).to(torch.float8_e4m3fnuz)
b = (torch.randn((k, n), dtype=torch.bfloat16, device="cuda", generator=gen)).to(torch.float8_e4m3fnuz)
# Generate scaling factors with FP32
a_scale = torch.randn([scale_k, m], dtype=torch.float32, device="cuda", generator=gen)
b_scale = torch.randn([scale_k, scale_n], dtype=torch.float32, device="cuda", generator=gen)
c = torch.zeros((m, n), dtype=torch.bfloat16, device="cuda")
return (a.T, b.T, a_scale.T, b_scale.T, c)
def ref_kernel(data: input_t) -> output_t:
"""
Highly inefficient torch reference implementation of FP8 GEMM.
You can use this as a reference / starting template for your implementation.
"""
# c: [m, n] is pre-allocated memory to help remove allocation overhead.
a, b, a_scale, b_scale, c = data
# a is M x K in column-major order, we convert here for simplicity.
a = a.contiguous()
a_scale = a_scale.contiguous()
b_scale = b_scale.contiguous()
# constants
m = a.shape[0]
n = b.shape[0]
k = a.shape[1]
block_shape_n = 128
block_shape_k = 128
scale_n = b_scale.shape[0]
scale_k = b_scale.shape[1]
# Apply blockwise scaling to input 'a'
a_scale = a_scale.unsqueeze(-1).repeat(1, 1, block_shape_k) # Shape: [m, scale_k, block_shape_k]
a_scale = a_scale.reshape(m, scale_k * block_shape_k)
a_scale = a_scale[:, :k]
# Dequantize 'a', in your implementation you should do this at the end.
a = a.to(a_scale.dtype) * a_scale
# Apply blockwise scaling to input 'b'
b_scale = (
b_scale.view(-1, 1)
.repeat(1, block_shape_n * block_shape_k)
.view(scale_n, scale_k, block_shape_n, block_shape_k)
.permute(0, 2, 1, 3) # Reorder dimensions: [scale_n, blk_n, scale_k, blk_k]
.reshape(scale_n * block_shape_n, scale_k * block_shape_k)
)
b_scale = b_scale[:n, :k]
# Dequantize 'b', in your implementation you should do this at the end.
b = b.to(b_scale.dtype) * b_scale
# Compute FP8 GEMM and write to 'c'.
c[...] = (a @ b.T).to(torch.bfloat16)
return c
check_implementation = make_match_reference(ref_kernel, rtol=2e-02, atol=1e-03)
gpu_types: ["MI300"]

id: 398 | name: amd-identity | deadline: 2025-12-30T00:00:00 | lang: py
description:
This task is purely for testing the submission system. There will be *no* points.
> Input: (input_tensor, output_tensor)
> - input_tensor: Input data
> - output_tensor: Pre-allocated empty tensor of the same shape as `input_tensor`
> Output: Should return `output_tensor` after it has been filled with the values from `input_tensor`.
reference:
import torch
from task import input_t, output_t
from utils import make_match_reference
def generate_input(size: int, seed: int) -> input_t:
gen = torch.Generator(device='cuda')
gen.manual_seed(seed)
data = torch.empty(size, device='cuda', dtype=torch.float16)
data.uniform_(0, 1, generator=gen)
return data, torch.empty_like(data)
def ref_kernel(data: input_t) -> output_t:
input, output = data
output[...] = input
return output
check_implementation = make_match_reference(ref_kernel)
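The reference above simply copies the input into the pre-allocated output. A submission for this test task could be as small as the sketch below; the entry-point name `custom_kernel` is an assumption based on the competition's reference-kernels harness conventions and is not spelled out in this description.
```python
# submission.py, a minimal sketch; the `custom_kernel` entry point is assumed, not confirmed here.
from task import input_t, output_t

def custom_kernel(data: input_t) -> output_t:
    input_tensor, output_tensor = data
    output_tensor.copy_(input_tensor)  # device-side copy into the pre-allocated buffer
    return output_tensor
```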
gpu_types: ["MI300"]

id: 430 | name: amd-mixture-of-experts | deadline: 2025-09-02T00:00:00 | lang: py
description:
For a more complete description, see: https://tinyurl.com/amd-comp-moe
Implement a DeepSeek-style Mixture of Experts (MoE) layer for efficient transformer models
on a single MI300X device.
MoE is a technique that allows scaling model capacity without proportionally increasing computational costs
by using a routing mechanism to selectively activate only a subset of parameters for each token.
Your task:
- Implement token routing using a simple softmax-based learned router
- Route tokens to the top-k experts based on router probabilities
- Process tokens through their assigned experts
- Combine expert outputs weighted by router probabilities
- Calculate appropriate auxiliary losses for training stability
Input:
- `data`: Tuple of (input: torch.Tensor, weights: Dict[str, torch.Tensor], config: Dict)
- input: Input tensor of shape [bs, seq_len, d_hidden]
- weights: Dictionary containing model weights
- config: Dictionary containing model configuration parameters
Output:
- Tuple containing:
- output: Processed tensor [bs, seq_len, d_model]
- aux_data: Dictionary with auxiliary data like router probabilities and losses
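To make the routing steps listed above concrete, here is a toy, single-token illustration of the gating math (softmax over router logits, top-k selection, weighted combination of expert outputs). The sizes are made up; it is only an illustration, not part of the reference below.
```python
# Toy illustration of softmax routing + top-k + weighted combine for one token.
import torch

d_hidden, n_experts, top_k = 8, 4, 2
logits = torch.randn(n_experts)                    # router logits for this token
probs = logits.softmax(dim=-1)                     # router probabilities
topk_p, topk_idx = probs.topk(k=top_k)             # pick the top-k experts
expert_outputs = torch.randn(top_k, d_hidden)      # stand-in for the selected experts' outputs
combined = (topk_p.unsqueeze(-1) * expert_outputs).sum(dim=0)   # [d_hidden]
```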
reference:
from utils import make_match_reference
from task import input_t, output_t
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Dict, Tuple, List, Optional
import math
# Reference code in PyTorch
class Expert(nn.Module):
def __init__(self, config: Dict, d_expert: Optional[int] = None):
super().__init__()
self.config = config
self.act_fn = nn.SiLU()
self.d_hidden: int = config["d_hidden"]
self.d_expert: int = config["d_expert"] if d_expert is None else d_expert
self.W_gate = nn.Linear(self.d_hidden, self.d_expert, bias=False)
self.W_up = nn.Linear(self.d_hidden, self.d_expert, bias=False)
self.W_down = nn.Linear(self.d_expert, self.d_hidden, bias=False)
def forward(self, x: torch.Tensor) -> torch.Tensor:
gate = self.act_fn(self.W_gate(x))
out = self.W_down(gate * self.W_up(x))
return out
class MoEGate(nn.Module):
def __init__(self, config: Dict):
super().__init__()
self.top_k: int = config["n_experts_per_token"]
self.num_experts: int = config["n_routed_experts"]
self.d_hidden: int = config["d_hidden"]
self.W_g = nn.Linear(self.d_hidden, self.num_experts, bias=False)
def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
logits = self.W_g(x)
scores = logits.softmax(dim=-1)
topk_scores, topk_indices = torch.topk(scores, k=self.top_k, dim=-1, sorted=False)
return topk_indices, topk_scores
class MoE(nn.Module):
def __init__(self, config: Dict):
super().__init__()
self.config = config
self.experts = nn.ModuleList([
Expert(config)
for _ in range(config["n_routed_experts"])
])
self.gating_network = MoEGate(config)
shared_expert_dim = config["d_expert"] * config["n_shared_experts"]
self.shared_expert = Expert(config=config, d_expert=shared_expert_dim)
def forward(self, x: torch.Tensor) -> torch.Tensor:
shared_output = self.shared_expert(x)
expert_indices, expert_scores = self.gating_network(x)
batch_size, seq_len, hidden_dim = x.shape
orig_shape = x.shape
x_flat = x.view(-1, hidden_dim)
flat_expert_indices = expert_indices.view(-1)
flat_expert_weights = expert_scores.view(-1, 1)
routed_output_flat = self.moe_infer(x_flat,
flat_expert_indices,
flat_expert_weights)
routed_output = routed_output_flat.view(*orig_shape)
return routed_output + shared_output
@torch.no_grad()
def moe_infer(self,
x: torch.Tensor,
flat_expert_indices: torch.Tensor,
flat_expert_weights: torch.Tensor
) -> torch.Tensor:
expert_cache = torch.zeros_like(x)
idxs = flat_expert_indices.argsort()
counts = flat_expert_indices.bincount().cpu().numpy()
tokens_per_expert = counts.cumsum()
num_per_tok = self.config["n_experts_per_token"]
token_idxs = idxs // num_per_tok
for expert_id, end_idx in enumerate(tokens_per_expert):
start_idx = 0 if expert_id == 0 else tokens_per_expert[expert_id - 1]
if start_idx == end_idx:
continue
expert = self.experts[expert_id]
exp_token_idxs = token_idxs[start_idx:end_idx]
expert_tokens = x[exp_token_idxs]
expert_out = expert(expert_tokens)
expert_out.mul_(flat_expert_weights[idxs[start_idx:end_idx]])
expert_cache.scatter_reduce_(
0,
exp_token_idxs.view(-1, 1).repeat(1, x.shape[-1]),
expert_out,
reduce='sum'
)
return expert_cache
def ref_kernel(data: input_t) -> output_t:
"""
Reference implementation of DeepSeek-style Mixture of Experts using PyTorch.
Args:
data: Tuple of (input: torch.Tensor, weights: Dict[str, torch.Tensor], config: Dict)
- input: Input tensor of shape [batch_size, seq_len, hidden_dim]
- weights: Dictionary containing model weights
- config: Dictionary containing model configuration parameters
Returns:
Tuple containing:
- output: Processed tensor [batch_size, seq_len, d_model]
- aux_data: Dictionary with auxiliary data
"""
input_tensor, weights, config = data
num_experts = config["n_routed_experts"]
moe = MoE(config)
# Fill in the given weights of the model
moe.gating_network.W_g.weight = nn.Parameter(weights['router.weight'])
for i in range(num_experts):
gate_proj_weight = weights[f'experts.{i}.0.weight']
up_proj_weight = weights[f'experts.{i}.1.weight']
down_proj_weight = weights[f'experts.{i}.2.weight']
# Transpose weights to match expected shape for nn.Linear
moe.experts[i].W_gate.weight = nn.Parameter(gate_proj_weight.t())
moe.experts[i].W_up.weight = nn.Parameter(up_proj_weight.t())
moe.experts[i].W_down.weight = nn.Parameter(down_proj_weight.t())
moe.shared_expert.W_gate.weight = nn.Parameter(weights['shared_experts.0.weight'].t())
moe.shared_expert.W_up.weight = nn.Parameter(weights['shared_experts.1.weight'].t())
moe.shared_expert.W_down.weight = nn.Parameter(weights['shared_experts.2.weight'].t())
output = moe(input_tensor)
return output
# Input generation for the reference code
def generate_input(
dhidden: int,
dexpert: int,
nroutedexperts: int,
nsharedexperts: int,
nexpertspertoken: int,
bs: int,
seqlen: int,
seed: int
) -> input_t:
# The harness does not currently parse parameter names containing underscores, so remap them here.
d_hidden = dhidden
d_expert = dexpert
n_routed_experts = nroutedexperts
n_shared_experts = nsharedexperts
n_experts_per_token = nexpertspertoken
batch_size = bs
seq_len = seqlen
config = {
"d_hidden": d_hidden,
"d_expert": d_expert,
"n_routed_experts": n_routed_experts,
"n_shared_experts": n_shared_experts,
"n_experts_per_token": n_experts_per_token,
"batch_size": batch_size,
"seq_len": seq_len,
}
gen = torch.Generator(device='cuda')
gen.manual_seed(seed)
num_experts = n_routed_experts
expert_dim = d_expert
weights = {}
input_tensor = torch.randn(
(batch_size, seq_len, d_hidden),
device='cuda',
dtype=torch.float16,
generator=gen
).contiguous()
# Initialize router weights
weights['router.weight'] = torch.randn(
(num_experts, d_hidden),
device="cuda",
dtype=torch.float16,
generator=gen
) / math.sqrt(d_hidden)
for i in range(num_experts):
weights[f'experts.{i}.0.weight'] = torch.randn(
(d_hidden, expert_dim),
device='cuda',
dtype=torch.float16,
generator=gen
) / math.sqrt(expert_dim)
weights[f'experts.{i}.1.weight'] = torch.randn(
(d_hidden, expert_dim),
device='cuda',
dtype=torch.float16,
generator=gen
) / math.sqrt(expert_dim)
weights[f'experts.{i}.2.weight'] = torch.randn(
(expert_dim, d_hidden),
device='cuda',
dtype=torch.float16,
generator=gen
) / math.sqrt(d_hidden)
weights['shared_experts.0.weight'] = torch.randn(
(d_hidden, expert_dim * n_shared_experts),
device='cuda',
dtype=torch.float16,
generator=gen
) / math.sqrt(expert_dim * n_shared_experts)
weights['shared_experts.1.weight'] = torch.randn(
(d_hidden, expert_dim * n_shared_experts),
device='cuda',
dtype=torch.float16,
generator=gen
) / math.sqrt(expert_dim * n_shared_experts)
weights['shared_experts.2.weight'] = torch.randn(
(expert_dim * n_shared_experts, d_hidden),
device='cuda',
dtype=torch.float16,
generator=gen
) / math.sqrt(d_hidden)
return (input_tensor, weights, config)
check_implementation = make_match_reference(ref_kernel, rtol=1e-2, atol=1e-2)
gpu_types: ["MI300"]

id: 463 | name: amd-mla-decode | deadline: 2025-09-02T00:00:00 | lang: py
description:
You will implement a custom MLA decode kernel optimized for MI300. A few things are simplified here:
1. Q, K, V data types are bfloat16
2. decode only, with a pre-allocated, non-paged latent KV cache
3. return the updated KV cache along with the MLA output
The shapes of all outer and inner dimensions of tensors are from DeepSeek-R1, with the number of heads split to fit on one GPU.
To be explicit, you will be given a tuple of tensors:
```yml
input [bs, sq, dim]
attn_output [bs, n_heads, sq, v_head_dim]
kv_cache [bs, sq, kv_lora_rank + qk_rope_head_dim]
```
where
0. bs::128 # batch size
1. prefill::[512, 2048, 4096, 6144] # as kv length
2. sq::1 # as only consider decoding
3. dim::7168 # hidden size of deepseek v3
4. kv_lora_rank::[512] # kv lora rank of deepseek v3
5. qk_rope_head_dim::[64] # rope embedding dimension
6. v_head_dim::128 # head size
7. n_heads::128 # num of attn heads
The ranking criteria is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against the speed of light analysis
and the solution closest to the speed of light will be awarded the grand prize.
The speed of light analysis is:
| bs | prefill | sq | dtype | roofline time(us) |
|---|---|---|---|---|
| 128 | 512 | 1 | bf16 | 54.62 |
| 128 | 2048 | 1 | bf16 | 141.16 |
| 128 | 4096 | 1 | bf16 | 210.75 |
| 128 | 6144 | 1 | bf16 | 280.87 |
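As a rough shape walk-through of a single decode step (bs=128, prefill=512, so kv_len=513), using the dimensions listed above; variable names mirror the reference code that follows, and the projections that build q, k, v are omitted:
```python
# Shape walk-through of one MLA decode step; sizes are the benchmark values listed above.
import math
import torch

bs, n_heads, nope, rope, v_dim = 128, 128, 128, 64, 128
kv_len = 512 + 1                                                    # prefilled cache plus the new token
q = torch.randn(bs, n_heads, 1, nope + rope, dtype=torch.bfloat16)
k = torch.randn(bs, n_heads, kv_len, nope + rope, dtype=torch.bfloat16)
v = torch.randn(bs, n_heads, kv_len, v_dim, dtype=torch.bfloat16)
scores = q @ k.transpose(-1, -2) / math.sqrt(nope + rope)           # [bs, n_heads, 1, kv_len]
attn = scores.softmax(dim=-1).to(torch.bfloat16)
y = (attn @ v).reshape(bs, 1, n_heads * v_dim)                      # fed to wo -> [bs, 1, 7168]
```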
reference:
import math
from dataclasses import dataclass
import torch
from torch import nn
import torch.nn.functional as F
from task import input_t, output_t
from utils import make_match_reference
class RoPE(nn.Module):
def __init__(self, d_model: int):
super().__init__()
self.d_model = d_model
theta = 10000 ** (-torch.arange(0, d_model//2,dtype=torch.bfloat16) / (d_model//2))
self.register_buffer("theta", theta)
def rotate_half(self, x: torch.Tensor) -> torch.Tensor:
x1, x2 = x.chunk(2, dim=-1)
return torch.cat((-x2, x1), dim=-1)
def forward(self, x: torch.Tensor, start_pos: int = 0) -> torch.Tensor:
seq_len = x.size(-2)
d_model = x.size(-1)
assert d_model == self.d_model
seq_idx = torch.arange(start_pos, start_pos + seq_len, device=x.device)
idx_theta = torch.einsum('s,d->sd', seq_idx, self.theta)
idx_theta2 = torch.cat([idx_theta, idx_theta], dim=-1)
cos = idx_theta2.cos().to(torch.bfloat16)
sin = idx_theta2.sin().to(torch.bfloat16)
return x * cos + self.rotate_half(x) * sin
class KVCache(nn.Module):
def __init__(self, kv_cache_shape: tuple, **kwargs) -> None:
super().__init__(**kwargs)
self.register_buffer('data', torch.zeros(kv_cache_shape, dtype=torch.bfloat16))
self.seq_len = 0
self.zero()
def zero(self) -> None:
self.data.zero_()
def get_data(self) -> torch.Tensor:
return self.data
def forward(self, c_kv: torch.Tensor) -> torch.Tensor:
assert self.seq_len + c_kv.size(1) <= self.data.size(1), "KV Cache Exceeded"
self.data = self.data.to(c_kv.dtype)
self.data[
:, self.seq_len : self.seq_len + c_kv.size(1), :
] = c_kv
self.seq_len += c_kv.size(1)
return self.data[:, :self.seq_len], self.seq_len
@dataclass
class Config:
batch_size: int
dim: int
n_heads: int
q_lora_rank: int
kv_lora_rank: int
qk_nope_head_dim: int
qk_rope_head_dim: int
v_head_dim: int
seq_len: int
max_seq_len: int
kv_cache_shape: tuple
Q_proj_down_weight: torch.Tensor
Q_proj_up_weight: torch.Tensor
KV_proj_down_weight: torch.Tensor
KV_proj_up_weight: torch.Tensor
wo_weight: torch.Tensor
class MLA(nn.Module):
def __init__(self, config: Config):
super().__init__()
self.dim = config.dim
self.n_heads = config.n_heads
self.q_lora_rank = config.q_lora_rank
self.kv_lora_rank = config.kv_lora_rank
self.nope_head_dim = config.qk_nope_head_dim
self.rope_head_dim = config.qk_rope_head_dim
self.v_head_dim = config.v_head_dim
# Down-projection matrices
self.Q_proj_down = nn.Linear(self.dim, self.q_lora_rank, dtype=torch.bfloat16, bias=False)
self.KV_proj_down = nn.Linear(self.dim, self.kv_lora_rank + self.rope_head_dim, dtype=torch.bfloat16, bias=False)
# Up-projection and rope projection matrices
self.Q_proj_up = nn.Linear(self.q_lora_rank, (self.nope_head_dim + self.rope_head_dim) * self.n_heads, dtype=torch.bfloat16, bias=False)
self.KV_proj_up = nn.Linear(self.kv_lora_rank, (self.nope_head_dim + self.v_head_dim) * self.n_heads, dtype=torch.bfloat16, bias=False)
# RoPE on half embeddings
self.q_rope = RoPE(self.rope_head_dim)
self.k_rope = RoPE(self.rope_head_dim)
# Output projection
self.wo = nn.Linear(self.v_head_dim * self.n_heads, self.dim, dtype=torch.bfloat16, bias=False)
self.eps = 1e-6
def forward(self, x: torch.Tensor, kv_cache: KVCache) -> torch.Tensor:
# seq_len = 1 always here
batch_size, seq_len, model_dim = x.size()
################################################################################
# Step 1: Handle down-projection + KV cache #
################################################################################
q_lora = self.Q_proj_down(x)
kv_lora = self.KV_proj_down(x)
kv_lora, kv_len = kv_cache(kv_lora)
query_pos = kv_len - 1
################################################################################
# Step 2: Up-project and prepare NoPE + RoPE #
################################################################################
# Handle queries Q first
q_nope_and_rope = self.Q_proj_up(q_lora).view(
batch_size, seq_len, self.n_heads, self.nope_head_dim + self.rope_head_dim)
q_nope, q_rope = torch.split(q_nope_and_rope, [self.nope_head_dim, self.rope_head_dim], dim=-1)
# Handle keys and values K/V. V does not need RoPE
kv_nope, k_rope = torch.split(kv_lora, [self.kv_lora_rank, self.rope_head_dim], dim=-1)
kv_nope = self.KV_proj_up(kv_nope).view(
batch_size, kv_len, self.n_heads, self.nope_head_dim + self.v_head_dim)
k_nope, v = torch.split(kv_nope, [self.nope_head_dim, self.v_head_dim], dim=-1)
################################################################################
# Step 3: Handle RoPE Stream #
################################################################################
# Compute RoPE for queries and combine with no-RoPE part
q_rope = q_rope.permute(0, 2, 1, 3) # bs x n_heads x seq_len x rope_head_dim
q_rope = self.q_rope(q_rope, start_pos=query_pos)
q_nope = q_nope.permute(0, 2, 1, 3) # bs x n_heads x seq_len x nope_head_dim
q = torch.concat([q_nope, q_rope], dim=-1)
# Compute RoPE for keys and combine with no-RoPE part
k_rope = k_rope[:, None, :, :]
k_rope = self.k_rope(k_rope).expand(-1,self.n_heads,-1,-1)
k_nope = k_nope.permute(0, 2, 1, 3) # bs x n_heads x kv_len x nope_head_dim
k = torch.concat([k_nope, k_rope], dim=-1)
################################################################################
# Compute Multi-head Attention #
################################################################################
v = v.permute(0, 2, 1, 3) # bs x n_heads x kv_len x v_head_dim
scores = torch.matmul(q, k.transpose(-1, -2)) / math.sqrt(self.rope_head_dim + self.nope_head_dim)
attn = F.softmax(scores, dim=-1).to(torch.bfloat16)
y = torch.matmul(attn, v).view(batch_size, 1, -1)
y = self.wo(y)
return y, kv_cache.get_data()
def generate_input(batchsize, dim, dq, prefill, seed):
# Sizes derived from: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/model.py
gen = torch.Generator(device='cuda')
gen.manual_seed(seed)
# Generate weights for linear layers
Q_proj_down_weight = torch.randn((dq, dim), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(dim)
KV_proj_down_weight = torch.randn((512 + 64, dim), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(dim)
Q_proj_up_weight = torch.randn(((128 + 64) * 128, dq), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(dq)
KV_proj_up_weight = torch.randn(((128 + 128) * 128, 512), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(512)
wo_weight = torch.randn((dim, 128 * 128), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(128 * 128)
config = Config(
batch_size=batchsize,
dim=dim,
q_lora_rank=dq,
n_heads=128,
kv_lora_rank=512,
qk_nope_head_dim=128,
qk_rope_head_dim=64,
v_head_dim=128,
seq_len=1,
max_seq_len=8192,
kv_cache_shape=(batchsize, 8192, 512 + 64),
Q_proj_down_weight=Q_proj_down_weight,
Q_proj_up_weight=Q_proj_up_weight,
KV_proj_down_weight=KV_proj_down_weight,
KV_proj_up_weight=KV_proj_up_weight,
wo_weight=wo_weight,
)
x = torch.randn((config.batch_size, 1, config.dim), dtype=torch.bfloat16, generator=gen, device='cuda')
# Pre-fill KV cache
kv_cache = KVCache((config.batch_size, config.max_seq_len, config.kv_lora_rank + config.qk_rope_head_dim)).to('cuda')
pre_filled_cache = torch.randn((config.batch_size, prefill, config.kv_lora_rank + config.qk_rope_head_dim),
dtype=torch.bfloat16, generator=gen, device='cuda')
kv_cache(pre_filled_cache)
return config, x, kv_cache
def ref_kernel(data: input_t) -> output_t:
config, x, kv_cache = data
# Load in model weights
model = MLA(config).to('cuda')
model.Q_proj_down.weight = nn.Parameter(config.Q_proj_down_weight)
model.Q_proj_up.weight = nn.Parameter(config.Q_proj_up_weight)
model.KV_proj_down.weight = nn.Parameter(config.KV_proj_down_weight)
model.KV_proj_up.weight = nn.Parameter(config.KV_proj_up_weight)
model.wo.weight = nn.Parameter(config.wo_weight)
output, kv_cache = model(x, kv_cache)
return output, kv_cache
check_implementation = make_match_reference(ref_kernel, rtol=2e-02, atol=8e-03)
def time_mla(model, x, kv_cache, num_warmup=3, num_trials=5):
# Warmup runs
for _ in range(num_warmup):
output, _ = model(x, kv_cache)
torch.cuda.synchronize()
# Timed runs
times = []
for _ in range(num_trials):
kv_cache = KVCache((config.batch_size, config.max_seq_len, config.kv_lora_rank + config.qk_rope_head_dim)).to('cuda')
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
output, updated_kv = model(x, kv_cache)
end.record()
torch.cuda.synchronize()
times.append(start.elapsed_time(end))
avg_time = sum(times) / len(times)
return output, updated_kv, avg_time, times
if __name__ == "__main__":
# Generate test input
batchsize = 128
dim = 7168
dq = 1536
prefill = 512
seed = 97
# Create model and inputs
config, x, kv_cache = generate_input(batchsize, dim, dq, prefill, seed)
model = MLA(config).to('cuda')
# Run model with timing
output, updated_kv, avg_time, times = time_mla(model, x, kv_cache)
# Test reference kernel
ref_output, ref_kv = ref_kernel((config, x, kv_cache))
print("\nReference kernel output:")
print(f"Output shape: {ref_output.shape}")
print(f"KV cache shape: {ref_kv.shape}")
print("\nFirst few values of reference output:")
print(ref_output[0, :10])
# Compare outputs
print("\nOutput difference:")
print(f"Max absolute difference: {torch.max(torch.abs(output - ref_output))}")
print(f"Mean absolute difference: {torch.mean(torch.abs(output - ref_output))}")
print(f"Input shape: {x.shape}")
print(f"Output shape: {output.shape}")
print(f"Updated KV cache shape: {updated_kv.shape}")
print("\nFirst few values of output:")
print(output[0, :10])
print(f"\nTiming results over {len(times)} runs (ms):")
print(f"Average: {avg_time:.2f}")
print(f"Individual times: {[f'{t:.2f}' for t in times]}")
gpu_types: ["MI300"]

id: 565 | name: amd-ag-gemm | deadline: 2025-10-15T07:00:00 | lang: py
description:
Implement an AllGather-Gemm kernel on a single MI300X node.
AllGather-Gemm (AG-Gemm) is a technique that combines the AllGather communication
pattern with General Matrix Multiplication (GEMM) to optimize the performance
of transformer models on GPUs.
Your task:
- Implement the AG-Gemm kernel to perform matrix multiplications
in a distributed manner, leveraging the AllGather operation to collect
data from multiple GPUs.
- Ensure that the implementation is optimized for the MI300X architecture,
taking advantage of its specific hardware features for maximum performance.
Input:
- `data`: Tuple of (input: torch.Tensor, weight: torch.Tensor,
bias: Optional[torch.Tensor])
- input: Local input tensor of shape [local_M, K].
- weight: Weight tensor of shape [local_N, K].
- bias: Bias tensor of shape [local_N], or None.
Output:
- Tuple containing:
- output: Resulting tensor of shape [local_M * world_size, local_N]
The ranking criterion is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against the speed-of-light
analysis and AMD implementations; the solution closest to the speed of light
and the AMD implementations will be awarded the grand prize.
```
The speed of light analysis is:
m n k has_bias time[us]
64 18432 7168 False 6.46
512 12288 4096 True 24.58
2048 2880 2880 True 23.04
4096 4096 4096 False 65.54
8192 14336 4096 True 458.75
8192 29568 8192 False 946.18
```
reference:
from task import input_t, output_t
import torch
def generate_input(rank: int, world_size: int, m: int, n: int, k: int, has_bias: bool, seed: int) -> input_t:
"""
Generate random input and weights for the Allgather-Gemm operation.
Returns:
Tuple of (
input: torch.Tensor, # [local_M, k]
weight: torch.Tensor, # [local_N, K]
bias: Optional[torch.Tensor], # [local_N] or None
)
"""
device = torch.device(f"cuda:{rank}")
gen = torch.Generator(device=device)
gen.manual_seed(seed + rank)
assert m % world_size == 0, "m must be divisible by world_size"
assert n % world_size == 0, "n must be divisible by world_size"
local_m = m // world_size
local_n = n // world_size
# Generate random inputs and weights
input = (torch.rand((local_m, k), dtype=torch.bfloat16, device=device, generator=gen) * 2 - 1) * 0.01
weight = (torch.rand((local_n, k), dtype=torch.bfloat16, device=device, generator=gen) * 2 - 1) * 0.01
bias = None
if has_bias:
bias = (torch.rand((local_n,), dtype=torch.bfloat16, device=device, generator=gen) * 2 - 1) * 0.01
return (input, weight, bias)
def ref_kernel(data: input_t) -> output_t:
"""
Reference kernel for AG-GEMM operation.
Args:
data: Tuple of (input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor])
- input: Local input tensor of shape [local_M, K].
- weight: Weight tensor of shape [local_N, K].
- bias: Optional bias tensor of shape [local_N] or None.
Returns:
output: Resulting tensor of shape [local_M * world_size, local_N].
"""
input, weight, bias = data
local_M, K = input.shape
world_size = torch.distributed.get_world_size()
full_input = torch.empty((local_M * world_size, K), dtype=input.dtype, device=input.device)
# allgather
torch.distributed.all_gather_into_tensor(full_input, input)
# matmul
output = torch.matmul(full_input, weight.T)
if bias is not None:
output = output + bias
return output
def check_implementation(data: input_t, output: output_t):
expected = ref_kernel(data)
if output.device != expected.device:
return False, f"Output device mismatch: {output.device} != {expected.device}"
res = torch.allclose(output, expected, rtol=1e-2, atol=1e-2)
if not res:
return False, f"Output values mismatch, {output} != {expected}"
return True, ""
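The competition harness sets up the process group and launches one process per GPU. For local experimentation with this reference (the gemm-rs task further below has the same function signatures), a driver along the lines of the sketch below could be used; the module name `reference` is an assumption, and this is not part of the harness.
```python
# Local multi-GPU driver sketch (not part of the harness); assumes 8 visible GPUs.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

from reference import generate_input, ref_kernel, check_implementation  # assumed module name

def worker(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    data = generate_input(rank, world_size, m=4096, n=4096, k=4096, has_bias=False, seed=0)
    ok, msg = check_implementation(data, ref_kernel(data))
    if rank == 0:
        print(ok, msg)
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(worker, args=(8,), nprocs=8)
```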
gpu_types: ["MI300x8"]

id: 563 | name: amd-all2all | deadline: 2025-10-15T07:00:00 | lang: py
description:
You are expected to implement dispatch, simulated MoE, and combine kernels using intra-node communication, referring to reference.py; together they form a custom single-node all2all kernel optimized for 8xMI300.
You will be given a MoEConfig holding the main hyperparameters: number of experts, experts per token, hidden dim, max number of tokens per DP rank, and the input/output dtypes.
To be explicit, you will be given the data of all ranks, named all_rank_data.
Each rank's data includes:
```
num_tokens, indices, weights, x
```
The input arguments, one by one:
* `x` is the token data on each rank, with shape (num_tokens, hidden_dim)
* `num_tokens` is the number of tokens on each rank, a scalar bounded by the max number of tokens defined in MoEConfig
* `indices` is the token-to-expert map, indicating which experts each token is dispatched to, with shape (num_tokens, experts_per_token)
* `weights` holds the weights of the top-k experts, used in combine, with shape (num_tokens, experts_per_token)
The ranking criterion is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against the speed-of-light analysis and AMD implementations;
the solution closest to the speed of light and the AMD implementations will be awarded the grand prize.
```
The speed of light analysis is:
num_experts experts_per_token hidden_dim max_num_tokens time[us]
8 2 6144 16 6.33
64 6 2048 32 7.37
128 4 2880 128 14.98
128 8 4096 256 61.78
256 8 7168 256 104.36
```
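Because the simulated "expert" in the reference simply scales each routed token by (1 + owner_rank), the combined output has a closed form that can be used to sanity-check a custom dispatch/combine pair. The sketch below is an observation about the reference, not part of the task; it uses the tuple (cfg, rank_data, rank, world_size) returned by generate_input further down, and matches the reference only up to fp16 accumulation rounding.
```python
# Closed-form sanity check implied by the reference kernel (an observation, not part of the task).
import torch

def expected_output(cfg, rank_data, world_size):
    owner = rank_data.indices // (cfg.num_experts // world_size)        # rank owning each chosen expert
    scale = (rank_data.weights * (1 + owner)).sum(dim=1, keepdim=True)  # [num_tokens, 1], float32
    return (rank_data.x.float() * scale).to(cfg.out_dtype)              # ~ ref_kernel output, up to fp16 rounding
```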
reference:
# pytorch_all2all.py
import os
import torch
import torch.distributed as dist
import dataclasses
from task import input_t, output_t
# ---------------- MoE config ----------------
@dataclasses.dataclass
class MoEConfig:
num_experts: int
experts_per_token: int
hidden_dim: int
max_num_tokens: int
in_dtype: torch.dtype = torch.float16
out_dtype: torch.dtype = torch.float16
# ---------------- data per dp rank ----------------
class RankTestData:
def __init__(self, cfg: MoEConfig, rng: torch.Generator, rank: int):
device = torch.device(f"cuda:{rank}")
self.num_tokens = int(
torch.randint(
1, cfg.max_num_tokens, [1], generator=rng, device=device
).item()
)
# token expert map
self.indices = torch.empty(
self.num_tokens, cfg.experts_per_token, dtype=torch.int32, device=device
)
for i in range(self.num_tokens):
perm = torch.randperm(cfg.num_experts, generator=rng, device=device)
self.indices[i] = perm[: cfg.experts_per_token]
# topk weights
self.weights = torch.rand(
self.num_tokens,
cfg.experts_per_token,
dtype=torch.float32,
generator=rng,
device=device,
)
# dp tokens, input of dispatch
self.x = torch.randn(
self.num_tokens,
cfg.hidden_dim,
dtype=cfg.in_dtype,
generator=rng,
device=device,
)
# ---------------- All2All pytorch impl ----------------
class PyTorchAllToAll:
META_DIM = 5 # global_exp, src_rank, src_token, src_k, pad
def __init__(self, cfg: MoEConfig, rank: int, world_size: int):
self.cfg = cfg
self.rank = rank
self.world_size = world_size
# num experts per rank
self.num_local_experts = cfg.num_experts // world_size
# max recv tokens per rank
self.max_recv = cfg.max_num_tokens * world_size
# ---------- dispatch ----------
def dispatch(self, dp_x: torch.Tensor, indices: torch.Tensor):
device = dp_x.device
cfg = self.cfg
# ---------1. get counts of send and recv for each rank -----------
# 1.1 token nums to send to each rank
send_counts = [0] * self.world_size
# 1.2 token id to send to each rank
token_map = [[] for _ in range(self.world_size)]
# 1.3 token meta data, need update for combine
meta_map = [[] for _ in range(self.world_size)]
for t, expert_list in enumerate(indices.tolist()):
for k, e in enumerate(expert_list):
dst_rank = e // self.num_local_experts
send_counts[dst_rank] += 1
token_map[dst_rank].append(t)
meta_map[dst_rank].extend(
[e, self.rank, t, k, 0]
) # srcGlobalExpert, srcRank, srcTokenIndex, srcK, pad
send_counts_t = torch.tensor(send_counts, dtype=torch.long, device=device)
# 1.3 token nums to recv from each rank
recv_counts_t = torch.empty(self.world_size, dtype=torch.long, device=device)
dist.all_to_all_single(recv_counts_t, send_counts_t)
# ---------2. send and recv buffer, order by tokens on each rank ----------
send_buf = torch.cat([dp_x[idx_list] for idx_list in token_map], dim=0)
total_recv = int(recv_counts_t.sum().item())
recv_buf = torch.empty(
total_recv, cfg.hidden_dim, dtype=cfg.in_dtype, device=device
)
# 2.1 meta buf for send and recv
send_meta = torch.tensor(
[v for sub in meta_map for v in sub], dtype=torch.int32, device=device
).view(-1, self.META_DIM)
recv_meta = torch.empty(
total_recv, self.META_DIM, dtype=torch.int32, device=device
)
# ---------3. dispatch send_buf to recv_buf by recv and send counts--------------
dist.all_to_all_single(
recv_buf,
send_buf,
output_split_sizes=recv_counts_t.tolist(),
input_split_sizes=send_counts_t.tolist(),
)
dist.all_to_all_single(
recv_meta.view(-1),
send_meta.view(-1),
output_split_sizes=[c * self.META_DIM for c in recv_counts_t.tolist()],
input_split_sizes=[c * self.META_DIM for c in send_counts_t.tolist()],
)
recv_meta = recv_meta.view(-1, self.META_DIM)
# ---------4. define output tensor of dispatch ------------
# 4.1 num tokens per expert
expert_num_tokens = torch.zeros(
self.num_local_experts, dtype=torch.int32, device=device
)
# 4.2 token tensor on each expert
expert_x = torch.empty(
(self.num_local_experts, self.max_recv, cfg.hidden_dim),
dtype=cfg.in_dtype,
device=device,
)
expert_meta = torch.empty(
(self.num_local_experts, self.max_recv, self.META_DIM),
dtype=torch.int32,
device=device,
)
# ---------5. dispatch send_meta to recv_meta by recv and send counts------
# ---------6. write tokens to each expert on each rank ------
# 6.1 fetch the local expert id of corresponding token i
for i in range(total_recv):
global_eid = int(recv_meta[i, 0].item())
local_eid = global_eid % self.num_local_experts
# output, store token buf and token meta and token nums of each expert
expert_x[local_eid, expert_num_tokens[local_eid]] = recv_buf[i]
expert_meta[local_eid, expert_num_tokens[local_eid]] = recv_meta[i]
expert_num_tokens[local_eid] += 1
# 6.2 after dispatch, token nums and token and meta of token on expert
return expert_num_tokens, expert_x, expert_meta
# ---------- combine ----------
def combine(
self,
out_tokens: torch.Tensor, # output, (max num tokens, token dim)
weights: torch.Tensor, # topk weight
expert_meta: torch.Tensor, # input
expert_y: torch.Tensor, # input, (num_local_experts, max_num_tokens * num_dp, token_dim)
expert_num_tokens: torch.Tensor,
): # input
device = out_tokens.device
cfg = self.cfg
# 1. count send-back tokens in cur rank
send_counts = [0] * self.world_size
# 1.1 token that will send back
y_map = [[] for _ in range(self.world_size)]
# 1.2 meta info of each token that send back to its src rank
meta_map = [[] for _ in range(self.world_size)]
# 2. traverse each token of each local expert of each rank, fill into send_counts and y_map and meta_map
for local_eid in range(self.num_local_experts):
cnt = int(expert_num_tokens[local_eid].item())
for j in range(cnt):
# meta info token j of local eid
meta = expert_meta[local_eid, j]
dst_rank = int(meta[1].item())
send_counts[dst_rank] += 1
# token j and its meta that send back to dst rank/local eid
y_map[dst_rank].append(expert_y[local_eid, j].unsqueeze(0))
meta_map[dst_rank].extend(meta.tolist())
# token nums that cur rank plan to send to other ranks
send_counts_t = torch.tensor(send_counts, dtype=torch.long, device=device)
# token nums that will recv from other ranks
recv_counts_t = torch.empty(self.world_size, dtype=torch.long, device=device)
# call all2all to send send counts and recv recv_counts_t at each rank by all2all
dist.all_to_all_single(recv_counts_t, send_counts_t)
# 3.send buffers of each rank, that is, the tokens at its experts
y_map_tensors = []
for sub_list in y_map:
if sub_list:
y_map_tensors.append(torch.cat(sub_list, dim=0))
else:
y_map_tensors.append(
torch.empty((0, cfg.hidden_dim), dtype=cfg.out_dtype, device=device)
)
send_buf = torch.cat(y_map_tensors, dim=0)
# 4. flatten send meta by tokens
send_meta = torch.tensor(
[v for sub in meta_map for v in sub], dtype=torch.int32, device=device
).view(-1, self.META_DIM)
# 5. total recv tokens of cur rank
total_recv = int(recv_counts_t.sum().item())
# 6. recv buffer of cur rank
recv_buf = torch.empty(
total_recv, cfg.hidden_dim, dtype=cfg.out_dtype, device=device
)
recv_meta = torch.empty(
total_recv, self.META_DIM, dtype=torch.int32, device=device
)
# 7. call all2all to send and recv for each rank
dist.all_to_all_single(
recv_buf,
send_buf,
output_split_sizes=recv_counts_t.tolist(),
input_split_sizes=send_counts_t.tolist(),
)
# 8. call all2all to send meta and recv meta for each rank
dist.all_to_all_single(
recv_meta.view(-1),
send_meta.view(-1),
output_split_sizes=[c * self.META_DIM for c in recv_counts_t.tolist()],
input_split_sizes=[c * self.META_DIM for c in send_counts_t.tolist()],
)
# 9. restore recv meta
recv_meta = recv_meta.view(-1, self.META_DIM)
# 10. write back tokens from recv buf, per meta info, and do weighted sum
for i in range(total_recv):
src_token = int(recv_meta[i, 2].item())
src_k = int(recv_meta[i, 3].item())
src_rank = int(recv_meta[i, 1].item())
w = weights[src_token, src_k].to(torch.float32)
out_tokens[src_token] += recv_buf[i].to(torch.float32) * w
return out_tokens
def generate_input(
num_experts, experts_per_token, hidden_dim, max_num_tokens, seed, rank, world_size
):
device = torch.device(f"cuda:{rank}")
gen = torch.Generator(device=device)
gen.manual_seed(seed + rank)
cfg = MoEConfig(
num_experts=num_experts,
experts_per_token=experts_per_token,
hidden_dim=hidden_dim,
max_num_tokens=max_num_tokens,
in_dtype=torch.float16,
out_dtype=torch.float16,
)
rank_data = RankTestData(cfg, gen, rank)
return cfg, rank_data, rank, world_size
def ref_kernel(data: input_t) -> output_t:
cfg, rank_data, rank, world_size = data
ata = PyTorchAllToAll(cfg, rank, world_size)
expert_num, expert_x, expert_meta = ata.dispatch(rank_data.x, rank_data.indices)
expert_y = expert_x.to(cfg.out_dtype) * (1 + rank)
y = torch.zeros(
cfg.max_num_tokens,
cfg.hidden_dim,
dtype=cfg.out_dtype,
device=rank_data.x.device,
)
ata.combine(y, rank_data.weights, expert_meta, expert_y, expert_num)
return y[: rank_data.num_tokens]
def check_implementation(data: input_t, output: output_t):
expected = ref_kernel(data)
if output.device != expected.device:
return False, f"Output device mismatch: {output.device} != {expected.device}"
res = torch.allclose(output, expected, rtol=1e-2, atol=5e-3)
if not res:
return False, f"Output values mismatch, {output} != {expected}"
return True, ""
gpu_types: ["MI300x8"]

id: 564 | name: amd-gemm-rs | deadline: 2025-10-15T07:00:00 | lang: py
description:
Implement a Gemm-ReduceScatter kernel on a single MI300X node.
Gemm-ReduceScatter is a technique that combines the ReduceScatter
communication pattern with General Matrix Multiplication (GEMM) to optimize
the performance of transformer models on GPUs. It is particularly useful for
handling large models that exceed the memory capacity of a single GPU by
distributing the model across multiple GPUs and efficiently scattering the
results of matrix multiplications.
Your task:
- Implement the Gemm-RS kernel to perform matrix multiplications in a
distributed manner, leveraging the ReduceScatter operation to distribute
data across multiple GPUs.
- Ensure that the implementation is optimized for the MI300X architecture,
taking advantage of its specific hardware features for maximum performance.
Input:
- `data`: Tuple of (input: torch.Tensor, weight: torch.Tensor,
bias: Optional[torch.Tensor])
- input: Local input tensor of shape [M, local_K].
- weight: Weight tensor of shape [N, local_K].
- bias: Bias tensor of shape [N], or None.
Output:
- Tuple containing:
- output: Resulting tensor of shape [M // world_size, N]
The ranking criterion is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against the speed-of-light
analysis and AMD implementations; the solution closest to the speed of light
and the AMD implementations will be awarded the grand prize.
```
The speed of light analysis is:
m n k has_bias time[us]
64 7168 18432 False 6.46
512 4096 12288 True 8.19
2048 2880 2880 True 23.04
4096 4096 4096 False 65.54
8192 4096 14336 True 131.07
8192 8192 29568 False 379.43
```
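Independent of how the GEMM and the communication are overlapped, the result a submission must reproduce is just the local partial GEMM over this rank's K-shard followed by a reduce-scatter over rows. A compact restatement of that math, assuming the process group is already initialized as the harness does:
```python
# Compact restatement of the Gemm-RS target math (a sketch, not an optimized kernel).
import torch
import torch.distributed as dist

def gemm_rs(input: torch.Tensor, weight: torch.Tensor, bias=None) -> torch.Tensor:
    world_size = dist.get_world_size()
    partial = input @ weight.T                       # [M, N] partial sums over this rank's K-shard
    if bias is not None:
        partial = partial + bias                     # note: the reference adds bias on every rank
    out = torch.empty(input.shape[0] // world_size, weight.shape[0],
                      dtype=partial.dtype, device=partial.device)
    dist.reduce_scatter_tensor(out, partial)         # sum across ranks, keep this rank's row block
    return out
```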
reference:
from task import input_t, output_t
import torch
def generate_input(rank: int, world_size: int, m: int, n: int, k: int, has_bias: bool, seed: int) -> input_t:
"""
Generate random input and weights for the Gemm-ReduceScatter operation.
Returns:
Tuple of (
input: torch.Tensor, # [M, local_K]
weight: torch.Tensor, # [N, local_K]
bias: Optional[torch.Tensor], # [N] or None
)
"""
device = torch.device(f'cuda:{rank}')
gen = torch.Generator(device=device)
gen.manual_seed(seed + rank)
assert m % world_size == 0, "m must be divisible by world_size"
assert k % world_size == 0, "k must be divisible by world_size"
local_k = k // world_size
# Generate random inputs and weights
input = (torch.rand((m, local_k), dtype=torch.bfloat16, device=device, generator=gen) * 2 - 1) * 0.01
weight = (torch.rand((n, local_k), dtype=torch.bfloat16, device=device, generator=gen) * 2 - 1) * 0.01
bias = None
if has_bias:
gen.manual_seed(seed)
bias = (torch.rand((n,), dtype=torch.bfloat16, device=device, generator=gen) * 2 - 1) * 0.01
return (input, weight, bias)
def ref_kernel(data: input_t) -> output_t:
"""
Reference kernel for Gemm-ReduceScatter operation.
Args:
data: Tuple of (input: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor])
- input: Local input tensor of shape [M, local_K].
- weight: Weight tensor of shape [N, local_K].
- bias: Optional bias tensor of shape [N] or None.
Returns:
Tuple containing:
- output: Resulting tensor of shape [M // world_size, N].
"""
input, weight, bias = data
M, local_K = input.shape
N = weight.shape[0]
world_size = torch.distributed.get_world_size()
# matmul
output = torch.matmul(input, weight.T)
if bias is not None:
output = output + bias
# reduce scatter
rs_output = torch.empty((M // world_size, N), dtype=output.dtype, device=input.device)
torch.distributed.reduce_scatter_tensor(rs_output, output)
return rs_output
def check_implementation(data: input_t, output: output_t):
expected = ref_kernel(data)
if output.device != expected.device:
return False, f"Output device mismatch: {output.device} != {expected.device}"
res = torch.allclose(output, expected, rtol=1e-2, atol=1e-2)
if not res:
return False, f"Output values mismatch, {output} != {expected}"
return True, ""
gpu_types: ["MI300x8"]
This is the dataset that was created from the first and second AMD $100K kernel competitions, containing roughly 110K kernels for fp8-gemm, moe, mla, all2all, gemm+reducescatter, and allgather+gemm optimized to run on MI300. Learn more at gpumode.com/v2/news
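For example, the rows can be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder (substitute the actual id from this dataset's page); the column names follow the viewer above.
```python
# Loading sketch with the `datasets` library; the repo id is a placeholder, not the real one.
from datasets import load_dataset

ds = load_dataset("GPUMODE/<dataset-name>", split="train")  # substitute the actual repo id
row = ds[0]
print(row["name"], row["deadline"], row["gpu_types"])
print(row["reference"][:400])  # first lines of the reference implementation for this task
```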
To see the full list of kernel competitions we've run and are running, check out https://github.com/gpu-mode/reference-kernels, which also contains details on the reference kernels and their input shapes and distributions.
We are planning to add kernels optimized for NVFP4 on Blackwell next.
If you use this dataset in your work, please cite:
@inproceedings{
zhang2025kernelbot,
title={KernelBot: A Competition Platform for Writing Heterogeneous {GPU} Code},
author={Alex L Zhang and Matej Sirovatka and Erik Schultheis and Benjamin Horowitz and Mark Saroufim},
booktitle={Championing Open-source DEvelopment in ML Workshop @ ICML25},
year={2025},
url={https://openreview.net/forum?id=bq9U4dmuyJ}
}