---
license: apache-2.0
datasets:
- open-r1/codeforces-cots
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
---

# Model Card for OlympicCoder-32B

OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.

* Repository: https://github.com/huggingface/open-r1
* Blog post: https://huggingface.co/blog/open-r1/update-3

## Model description

- **Model type:** A 32B parameter model fine-tuned on a decontaminated version of the Codeforces dataset.
- **Language(s) (NLP):** Primarily English
- **License:** apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)

## Evaluation

We compare the performance of OlympicCoder models on two main benchmarks for competitive coding:

* **[IOI'2024:](https://github.com/huggingface/ioi)** 6 very challenging problems from the 2024 International Olympiad in Informatics. Models are allowed up to 50 submissions per problem.
* **[LiveCodeBench:](https://livecodebench.github.io)** Python programming problems sourced from platforms like CodeForces and LeetCode. We use the `v4_v5` subset of [`livecodebench/code_generation_lite`](https://huggingface.co/datasets/livecodebench/code_generation_lite), which corresponds to 268 problems. We use `lighteval` to evaluate models on LiveCodeBench with the sampling parameters described [here](https://github.com/huggingface/open-r1?tab=readme-ov-file#livecodebench).

> [!NOTE]
> The OlympicCoder models were post-trained exclusively on C++ solutions generated by DeepSeek-R1. As a result, performance on LiveCodeBench should be considered partially _out-of-domain_, since that benchmark expects models to output solutions in Python.

### IOI'24

### LiveCodeBench

## Usage

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install transformers
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="open-r1/OlympicCoder-32B", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
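# Generate with sampling; the large max_new_tokens budget leaves room for the
# model's long chain-of-thought before it prints the final program.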
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```

> [!IMPORTANT]
> To ensure that the model consistently outputs a long chain-of-thought, we have edited the chat template to prefill the first assistant turn with a `<think>` token. As a result, the outputs from this model will not show the opening `<think>` token if you use the model's `generate()` method. To apply reinforcement learning with a format reward, either prepend the `<think>` token to the model's completions or amend the chat template to remove the prefill. Check out our [blog post](https://huggingface.co/blog/open-r1/update-3#lesson-4-prefill-with-think-to-consistently-enable-long-cot) for more details.
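
Below is a minimal, illustrative sketch of how you might check the prefill and restore the `<think>` token before scoring completions with a format reward. It assumes the published tokenizer ships the edited chat template described above; the `format_reward` helper is a hypothetical example, not part of the open-r1 codebase.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("open-r1/OlympicCoder-32B")

messages = [
    {"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Per the note above, the rendered prompt should end with the prefilled <think> token,
# so raw generations continue the thought without re-emitting the opening tag.
print(prompt.endswith("<think>"))

# Hypothetical reward: prepend the missing <think> token so the <think>...</think>
# pair can be matched in the completion text.
def format_reward(completion: str) -> float:
    completion = "<think>" + completion
    return 1.0 if "</think>" in completion else 0.0
```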

## Training procedure

### Training hyper-parameters

The following hyperparameters were used during training on 16 H100 nodes (a configuration sketch follows the list):

- dataset: open-r1/codeforces-cots_decontaminated
- learning_rate: 4.0e-5
- train_batch_size: 1
- seed: 42
- packing: false
- distributed_type: fsdp
- num_devices: 128
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
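
For illustration only, here is a minimal sketch of how these settings might map onto TRL's `SFTConfig`/`SFTTrainer`. This is an assumption, not the actual open-r1 training script: the multi-node FSDP launch is not reproduced, and the dataset config name and output directory are placeholders.

```python
# Hypothetical reproduction sketch with TRL; the real run used FSDP across
# 16 H100 nodes (128 GPUs), which is not configured here.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# The list above names "open-r1/codeforces-cots_decontaminated"; the exact Hub id
# and config of the decontaminated split are assumptions - check the dataset card.
dataset = load_dataset("open-r1/codeforces-cots", "solutions_decontaminated", split="train")

config = SFTConfig(
    output_dir="olympiccoder-32b-sft",      # placeholder
    learning_rate=4.0e-5,
    per_device_train_batch_size=1,          # train_batch_size: 1
    gradient_accumulation_steps=1,
    num_train_epochs=10.0,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr_rate": 0.1},
    warmup_ratio=0.03,
    packing=False,
    seed=42,
    bf16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```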