---
license: apache-2.0
pipeline_tag: image-text-to-text
base_model:
  - Qwen/Qwen2.5-VL-72B-Instruct
language:
  - multilingual
---

# SafeWork-R1

[📂 GitHub](https://github.com/AI45Lab/SafeWork-R1) · [📜 Technical Report](https://arxiv.org/abs/2507.18576) · [💬 Online Chat](https://safework-r1.ai45.shlab.org.cn/)

<div align="center">
  <img alt="image" src="/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F666fe1a5b07525f0bde69c27%2F9VqjAkK1Lshl3TVpMFV9-.png%26quot%3B%3C%2Fspan%3E%26gt%3B%3C%2Fspan%3E%3C%2Fspan%3E
</div>

## Overview

We introduce SafeWork-R1, a cutting-edge multimodal reasoning model demonstrating the coevolution of safety and general intelligence under the guiding principle of the AI-45° Law.

SafeWork-R1 is built upon the SafeLadder framework, which integrates large-scale, progressive, safety-oriented reinforcement learning post-training supported by multi-principled verifiers. Unlike conventional RLHF that simply learns human preferences, SafeLadder enables SafeWork-R1 to develop intrinsic safety reasoning and self-reflection abilities, leading to emergent safety “aha” moments.

<div align="center">

![ai45](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F666fe1a5b07525f0bde69c27%2F9UP0ze3exhEHJXanUTyXk.png%3C%2Fspan%3E)

</div>

## Model Zoo

<table>
  <tr>
    <th>Model Variant</th>
    <th>Parameters</th>
    <th>Base Model</th>
    <th>Link</th>
  </tr>
  <tr>
    <td>SafeWork-R1</td>
    <td>72B</td>
    <td>Qwen2.5-VL-72B</td>
    <td><a href="https://huggingface.co/AI45Research/SafeWork-R1">🤗 link</a></td>
  </tr>
  <tr>
    <td>SafeWork-R1-InternVL3-78B</td>
    <td>78B</td>
    <td>InternVL3-78B</td>
    <td><a href="https://huggingface.co/AI45Research/SafeWork-R1-InternVL3-78B">🤗 link</a></td>
  </tr>
  <tr>
    <td>SafeWork-R1-DeepSeek-70B</td>
    <td>70B</td>
    <td>DeepSeek-R1-Distill-Llama-70B</td>
    <td><a href="https://huggingface.co/AI45Research/SafeWork-R1-DeepSeek-70B">🤗 link</a></td>
  </tr>
  <tr>
    <td>SafeWork-R1-Qwen2.5VL-7B</td>
    <td>7B</td>
    <td>Qwen2.5-VL-7B</td>
    <td><a href="https://huggingface.co/AI45Research/SafeWork-R1-Qwen2.5VL-7B">🤗 link</a></td>
  </tr>
</table>
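
Each variant is a separate Hugging Face repository, so switching between them only requires changing the repo id from the table above. The sketch below is illustrative (the helper name `load_safework` and the `VARIANTS` mapping are not part of an official recipe) and shows how the two Qwen2.5-VL-based checkpoints could be selected, e.g. picking the 7B model when GPU memory is tight:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Repo ids taken from the Model Zoo table above. Both are Qwen2.5-VL-based,
# so they share the same processor and model class (a sketch, not an official recipe).
VARIANTS = {
    "72b": "AI45Research/SafeWork-R1",
    "7b": "AI45Research/SafeWork-R1-Qwen2.5VL-7B",
}

def load_safework(size: str = "7b"):
    """Load a SafeWork-R1 variant; the 7B model fits more easily on a single GPU."""
    repo_id = VARIANTS[size]
    processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        repo_id, device_map="auto", torch_dtype="auto", trust_remote_code=True
    )
    return processor, model
```

The DeepSeek- and InternVL-based variants follow the loading conventions of their respective base models rather than the Qwen2.5-VL classes used here.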

## Performance

### Safety Benchmarks

| Model          | MM-SafetyBench | MSSBench | XSTest-Safe | SIUO  | Avg.  |
|----------------|----------------|-----------|--------------|-------|-------|
| Gemini 2.5 Pro | 79.3           | 70.5      | **100.0**    | 76.7  | 81.6  |
| Claude Opus 4  | 82.1           | 59.6      | 96.8         | 62.8  | 75.3  |
| GPT-4.1        | 78.2           | 69.1      | 96.4         | **92.9** | 84.1  |
| GPT-4o         | 70.2           | 58.8      | 94.0         | 51.8  | 68.7  |
| Qwen2.5-VL-72B | 70.4           | 53.8      | 91.2         | 38.2  | 63.4  |
| **SafeWork-R1**| **92.0**<sup>↑21.6</sup> | **74.8**<sup>↑21.0</sup> | **99.2**<sup>↑8.0</sup> | **90.5**<sup>↑52.3</sup> | **89.2**<sup>↑25.8</sup> |

### Value Benchmarks

| Model          | FLAMES | M³oralBench (Judge) | M³oralBench (Classification) | M³oralBench (Response) | Avg.  |
|----------------|---------|---------------------|------------------------------|------------------------|-------|
| Gemini 2.5 Pro | 16.8    | 70.0                | 66.2                         | **86.8**               | 44.7  |
| Claude Opus 4  | 38.1    | 70.7                | **74.7**                     | 72.5                   | 52.2  |
| GPT-4.1        | 33.3    | **74.4**            | 62.7                         | 61.7                   | 53.0  |
| GPT-4o         | 36.6    | 72.4                | 65.9                         | 79.7                   | 55.5  |
| Qwen2.5-VL-72B | 39.1    | 58.4                | 48.1                         | 75.7                   | 49.9  |
| **SafeWork-R1**| **65.3**<sup>↑26.2</sup> | **68.1**<sup>↑9.7</sup> | **54.6**<sup>↑6.5</sup> | 70.9<sup>↓4.8</sup> | **64.9**<sup>↑15.0</sup> |

### General Benchmarks

| Model          | MMMU | MathVista | Olympiad | GPQA Diamond | GAOKAO-MM | Avg.  |
|----------------|------|------------|-----------|---------------|------------|-------|
| Gemini 2.5 Pro | **82.0** | **83.0** | **81.8** | **86.9** | **87.2** | **84.2** |
| Claude Opus 4  | 73.0 | 73.0 | 68.5 | 74.7 | 73.7 | 72.6 |
| GPT-4.1        | 72.4 | 72.0 | 49.0 | 69.2 | 60.2 | 64.6 |
| GPT-4o         | 70.6 | 61.6 | 33.7 | 46.9 | 33.8 | 49.3 |
| Qwen2.5-VL-72B | 67.2 | 74.8 | 40.4 | 50.5 | 73.1 | 61.2 |
| **SafeWork-R1**| **70.9**<sup>↑3.7</sup> | **76.1**<sup>↑1.3</sup> | **59.9**<sup>↑19.5</sup> | **59.6**<sup>↑9.1</sup> | **78.2**<sup>↑5.1</sup> | **68.9**<sup>↑7.7</sup> |

## Quick Start

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils
import torch

model_name = "AI45Research/SafeWork-R1"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
# SafeWork-R1 is built on Qwen2.5-VL, so it loads with the vision-language model class.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/image",
            },
            {"type": "text", "text": "Prompt containing harmful content."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=8192)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
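
The same processor and model also handle text-only prompts; in that case `process_vision_info` and the `images`/`videos` arguments can be skipped. Below is a minimal sketch reusing the `processor` and `model` objects from above (the question is only an illustrative example):

```python
# Text-only inference with the objects created in the Quick Start above.
# The example question is illustrative; any chat-style prompt works.
messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Is it safe to mix bleach and ammonia for cleaning?"}],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], padding=True, return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=1024)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```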

## License

This project is released under the Apache 2.0 license.

## Citation

If you find this work useful, please consider citing it.

```bibtex
@misc{lab2025safework,
  title={SafeWork-R1: Coevolving Safety and Intelligence under the AI-45 Law},
  author={Lab, Shanghai AI and Bao, Yicheng and Chen, Guanxu and Chen, Mingkang and Chen, Yunhao and Chen, Chiyu and Chen, Lingjie and Chen, Sirui and Chen, Xinquan and Cheng, Jie and others},
  journal={arXiv preprint arXiv:2507.18576},
  year={2025}
}
```