ajkhjjha #41
opened by djhajhk

README.md CHANGED
````diff
@@ -8,7 +8,6 @@ tags:
 - ocr
 - custom_code
 license: mit
-library_name: transformers
 ---
 <div align="center">
 <img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek AI" />
@@ -45,14 +44,14 @@ library_name: transformers
 </p>
 <h2>
 <p align="center">
-<a href="
+<a href="">DeepSeek-OCR: Contexts Optical Compression</a>
 </p>
 </h2>
 <p align="center">
 <img src="assets/fig1.png" style="width: 1000px" align=center>
 </p>
 <p align="center">
-<a href="
+<a href="">Explore the boundaries of visual-text compression.</a>
 </p>
 
 ## Usage
@@ -178,7 +177,7 @@ We also appreciate the benchmarks: [Fox](https://github.com/ucaslcl/Fox), [Omini
 
 ## Citation
 ```bibtex
-@article{
+@article{wei2024deepseek-ocr,
 title={DeepSeek-OCR: Contexts Optical Compression},
 author={Wei, Haoran and Sun, Yaofeng and Li, Yukun},
 journal={arXiv preprint arXiv:2510.18234},
````