I was trying to train on the SpaceInvadersNoFrameskip-v4 environment locally, but I only have an entry-level GPU with limited memory, so a full run in one go wasn't feasible. I tried to work around this with a simple bash script that reloaded the previously trained agent every ~125k steps, passing the `-i` flag to load it. However, the results looked the same as if a new agent were being created every time. Why didn't this work?
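For reference, the loop was roughly this shape (a sketch, not my exact file: the algo, log paths, and run numbers are illustrative, and I'm assuming rl-baselines3-zoo's `train.py`, where `-i` is short for `--trained-agent`; commands are echoed here as a dry run):

```shell
#!/bin/sh
# Sketch of the chunked-training loop (dry run; paths illustrative).
ALGO=dqn
ENV=SpaceInvadersNoFrameskip-v4

# First chunk: train a fresh agent for ~125k steps.
echo "python train.py --algo $ALGO --env $ENV -n 125000"

# Subsequent chunks: reload the previously saved agent with -i and continue.
for run in 1 2 3; do
  echo "python train.py --algo $ALGO --env $ENV -n 125000 -i logs/$ALGO/${ENV}_${run}/${ENV}.zip"
done
```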