Quentin Gallouédec committed on
Commit 86ffb20 · 1 Parent(s): a3397a4

leaderboard dataset

Files changed (1)
  1. texts/about.md +1 -1
texts/about.md CHANGED
@@ -14,7 +14,7 @@ That's it!
 
 ## 🕵 How are the models evaluated?
 
-The evaluation is done by running the agent on the environment for 50 episodes.
+The evaluation is done by running the agent on the environment for 50 episodes. You can get the raw evaluation scores in the [Leaderboard dataset](https://huggingface.co/datasets/open-rl-leaderboard/results).
 
 For further information, please refer to the [Open RL Leaderboard evaluation script](https://huggingface.co/spaces/open-rl-leaderboard/leaderboard/blob/main/src/evaluation.py).
 
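The changed line states that each agent is evaluated over 50 episodes. A minimal sketch of how a per-model score could be aggregated from such a run is shown below; the episode count is taken from the diff, but the function name and the idea of averaging returns are illustrative assumptions, not the leaderboard's actual evaluation code (see the linked `src/evaluation.py` for that).

```python
# Hypothetical sketch: aggregating one score from a 50-episode evaluation.
# NUM_EPISODES comes from the diff; everything else is an assumption.

NUM_EPISODES = 50

def mean_episodic_return(episodic_returns):
    """Average the returns collected over the evaluation episodes."""
    if len(episodic_returns) != NUM_EPISODES:
        raise ValueError(
            f"expected {NUM_EPISODES} episodes, got {len(episodic_returns)}"
        )
    return sum(episodic_returns) / len(episodic_returns)
```

The raw per-episode scores themselves are published in the linked Leaderboard dataset, so an aggregate like this can be recomputed independently.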