Model save

- README.md +58 -0
- all_results.json +8 -0
- generation_config.json +14 -0
- train_results.json +8 -0
- trainer_state.json +243 -0

README.md
ADDED
@@ -0,0 +1,58 @@
+---
+base_model: Qwen/Qwen2.5-7B-Instruct
+library_name: transformers
+model_name: Agentic-Qwen2.5-7B-e7-lr4-b128-user-role
+tags:
+- generated_from_trainer
+- trl
+- sft
+licence: license
+---
+
+# Model Card for Agentic-Qwen2.5-7B-e7-lr4-b128-user-role
+
+This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
+It has been trained using [TRL](https://github.com/huggingface/trl).
+
+## Quick start
+
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="akseljoonas/Agentic-Qwen2.5-7B-e7-lr4-b128-user-role", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
+
+## Training procedure
+
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/akseljoonas-university-of-groningen/huggingface/runs/e65mwp2u)
+
+
+This model was trained with SFT.
+
+### Framework versions
+
+- TRL: 0.16.0
+- Transformers: 4.52.4
+- Pytorch: 2.6.0
+- Datasets: 3.6.0
+- Tokenizers: 0.21.1
+
+## Citations
+
+
+
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title        = {{TRL: Transformer Reinforcement Learning}},
+    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+    year         = 2020,
+    journal      = {GitHub repository},
+    publisher    = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
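The Quick start in the README uses the `pipeline` helper. As a point of comparison, here is a minimal sketch of the same call made through `AutoTokenizer`/`AutoModelForCausalLM` and the chat template. It assumes the repository ships the standard Qwen2.5 tokenizer and chat-template files; everything beyond the model id and the prompt (dtype, device placement) is illustrative rather than taken from this commit.

```python
# Illustrative sketch, not part of the committed README.
# Assumes the repo contains the standard Qwen2.5 tokenizer and chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akseljoonas/Agentic-Qwen2.5-7B-e7-lr4-b128-user-role"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

question = (
    "If you had a time machine, but could only go to the past or the future "
    "once and never return, which would you choose and why?"
)
messages = [{"role": "user", "content": question}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings come from the generation_config.json added in this commit
# (temperature 0.7, top_p 0.8, top_k 20, repetition_penalty 1.05).
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```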
    	
all_results.json
ADDED
@@ -0,0 +1,8 @@
+{
+    "total_flos": 130497026588672.0,
+    "train_loss": 0.3111057321407965,
+    "train_runtime": 2546.477,
+    "train_samples": 1953,
+    "train_samples_per_second": 5.369,
+    "train_steps_per_second": 0.044
+}
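The throughput figures above are mutually consistent: 1953 training samples for 7 epochs in roughly 2546 s works out to the reported 5.369 samples per second, and the 112 optimizer steps recorded in trainer_state.json over the same runtime give 0.044 steps per second. A quick sanity-check sketch; the epoch and step counts are taken from trainer_state.json further down, and the local file path is assumed:

```python
import json

# Assumed local path to the file added in this commit.
with open("all_results.json") as f:
    results = json.load(f)

epochs = 7        # "num_train_epochs" in trainer_state.json
max_steps = 112   # "max_steps" in trainer_state.json

samples_per_s = results["train_samples"] * epochs / results["train_runtime"]
steps_per_s = max_steps / results["train_runtime"]

print(round(samples_per_s, 3))  # 5.369, matches "train_samples_per_second"
print(round(steps_per_s, 3))    # 0.044, matches "train_steps_per_second"
```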
    	
generation_config.json
ADDED
@@ -0,0 +1,14 @@
+{
+  "bos_token_id": 151643,
+  "do_sample": true,
+  "eos_token_id": [
+    151645,
+    151643
+  ],
+  "pad_token_id": 151643,
+  "repetition_penalty": 1.05,
+  "temperature": 0.7,
+  "top_k": 20,
+  "top_p": 0.8,
+  "transformers_version": "4.52.4"
+}
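These are the decoding defaults that `generate` (and therefore the README's `pipeline` example) falls back to when no sampling arguments are passed: top-p/top-k sampling at temperature 0.7 with a mild repetition penalty, stopping on either of the two Qwen end-of-sequence tokens. A short sketch of inspecting and overriding them; the override values are illustrative, not from this commit:

```python
from transformers import GenerationConfig

model_id = "akseljoonas/Agentic-Qwen2.5-7B-e7-lr4-b128-user-role"

# Pull only generation_config.json from the Hub and inspect the defaults.
gen_cfg = GenerationConfig.from_pretrained(model_id)
print(gen_cfg.temperature, gen_cfg.top_p, gen_cfg.top_k, gen_cfg.repetition_penalty)
# -> 0.7 0.8 20 1.05

# Per-call keyword arguments take precedence over these defaults, e.g.
#   model.generate(input_ids, do_sample=False, max_new_tokens=64)
# would decode greedily regardless of what the file says.
```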
    	
train_results.json
ADDED
@@ -0,0 +1,8 @@
+{
+    "total_flos": 130497026588672.0,
+    "train_loss": 0.3111057321407965,
+    "train_runtime": 2546.477,
+    "train_samples": 1953,
+    "train_samples_per_second": 5.369,
+    "train_steps_per_second": 0.044
+}
    	
trainer_state.json
ADDED
@@ -0,0 +1,243 @@
+{
+  "best_global_step": null,
+  "best_metric": null,
+  "best_model_checkpoint": null,
+  "epoch": 7.0,
+  "eval_steps": 500,
+  "global_step": 112,
+  "is_hyper_param_search": false,
+  "is_local_process_zero": true,
+  "is_world_process_zero": true,
+  "log_history": [
+    {
+      "epoch": 0.32653061224489793,
+      "grad_norm": 3.1376605701360365,
+      "learning_rate": 1.3333333333333333e-05,
+      "loss": 1.4086,
+      "mean_token_accuracy": 0.6991279818117618,
+      "num_tokens": 3032788.0,
+      "step": 5
+    },
+    {
+      "epoch": 0.6530612244897959,
+      "grad_norm": 2.886401259073607,
+      "learning_rate": 3.0000000000000004e-05,
+      "loss": 0.7103,
+      "mean_token_accuracy": 0.8366268374025821,
+      "num_tokens": 6108050.0,
+      "step": 10
+    },
+    {
+      "epoch": 0.9795918367346939,
+      "grad_norm": 3.8085762872209985,
+      "learning_rate": 3.9960534568565436e-05,
+      "loss": 0.4951,
+      "mean_token_accuracy": 0.8857219524681568,
+      "num_tokens": 9153481.0,
+      "step": 15
+    },
+    {
+      "epoch": 1.2612244897959184,
+      "grad_norm": 0.7657669831409691,
+      "learning_rate": 3.951833523877495e-05,
+      "loss": 0.4221,
+      "mean_token_accuracy": 0.8977213052735813,
+      "num_tokens": 11636574.0,
+      "step": 20
+    },
+    {
+      "epoch": 1.5877551020408163,
+      "grad_norm": 0.2877956756473211,
+      "learning_rate": 3.859552971776503e-05,
+      "loss": 0.4363,
+      "mean_token_accuracy": 0.895506302267313,
+      "num_tokens": 14619147.0,
+      "step": 25
+    },
+    {
+      "epoch": 1.9142857142857141,
+      "grad_norm": 0.28656112123732563,
+      "learning_rate": 3.721484054007888e-05,
+      "loss": 0.3936,
+      "mean_token_accuracy": 0.9049416035413742,
+      "num_tokens": 17710131.0,
+      "step": 30
+    },
+    {
+      "epoch": 2.195918367346939,
+      "grad_norm": 0.2823103236835904,
+      "learning_rate": 3.541026485551579e-05,
+      "loss": 0.3158,
+      "mean_token_accuracy": 0.9200559068417203,
+      "num_tokens": 20112186.0,
+      "step": 35
+    },
+    {
+      "epoch": 2.522448979591837,
+      "grad_norm": 0.2655359257585452,
+      "learning_rate": 3.322623730647304e-05,
+      "loss": 0.3448,
+      "mean_token_accuracy": 0.9147586159408092,
+      "num_tokens": 23165264.0,
+      "step": 40
+    },
+    {
+      "epoch": 2.8489795918367347,
+      "grad_norm": 0.20795518587914283,
+      "learning_rate": 3.0716535899579936e-05,
+      "loss": 0.2919,
+      "mean_token_accuracy": 0.9269049897789955,
+      "num_tokens": 26259224.0,
+      "step": 45
+    },
+    {
+      "epoch": 3.130612244897959,
+      "grad_norm": 0.2884355032278423,
+      "learning_rate": 2.7942957812695613e-05,
+      "loss": 0.282,
+      "mean_token_accuracy": 0.9283535532329393,
+      "num_tokens": 28739676.0,
+      "step": 50
+    },
+    {
+      "epoch": 3.4571428571428573,
+      "grad_norm": 0.21897940738440588,
+      "learning_rate": 2.4973797743297103e-05,
+      "loss": 0.2199,
+      "mean_token_accuracy": 0.9427422039210797,
+      "num_tokens": 31756738.0,
+      "step": 55
+    },
+    {
+      "epoch": 3.783673469387755,
+      "grad_norm": 0.23860764801694623,
+      "learning_rate": 2.1882166266370292e-05,
+      "loss": 0.2215,
+      "mean_token_accuracy": 0.9424595959484577,
+      "num_tokens": 34883372.0,
+      "step": 60
+    },
+    {
+      "epoch": 4.0653061224489795,
+      "grad_norm": 0.46858487289456846,
+      "learning_rate": 1.8744189609413733e-05,
+      "loss": 0.1972,
+      "mean_token_accuracy": 0.9491587445355844,
+      "num_tokens": 37296561.0,
+      "step": 65
+    },
+    {
+      "epoch": 4.391836734693878,
+      "grad_norm": 0.2511271428576979,
+      "learning_rate": 1.5637135172069155e-05,
+      "loss": 0.1837,
+      "mean_token_accuracy": 0.9537528082728386,
+      "num_tokens": 40373787.0,
+      "step": 70
+    },
+    {
+      "epoch": 4.718367346938775,
+      "grad_norm": 0.2967198621367196,
+      "learning_rate": 1.2637508946306443e-05,
+      "loss": 0.165,
+      "mean_token_accuracy": 0.9580980874598026,
+      "num_tokens": 43363161.0,
+      "step": 75
+    },
+    {
+      "epoch": 5.0,
+      "grad_norm": 0.23363519084392373,
+      "learning_rate": 9.819171684992575e-06,
+      "loss": 0.1312,
+      "mean_token_accuracy": 0.9628054084985153,
+      "num_tokens": 45860256.0,
+      "step": 80
+    },
+    {
+      "epoch": 5.326530612244898,
+      "grad_norm": 0.23534750602445398,
+      "learning_rate": 7.251520205026206e-06,
+      "loss": 0.1269,
+      "mean_token_accuracy": 0.967383312433958,
+      "num_tokens": 48879333.0,
+      "step": 85
+    },
+    {
+      "epoch": 5.653061224489796,
+      "grad_norm": 0.2799375041862015,
+      "learning_rate": 4.997778607390809e-06,
+      "loss": 0.1221,
+      "mean_token_accuracy": 0.9683701202273369,
+      "num_tokens": 51946379.0,
+      "step": 90
+    },
+    {
+      "epoch": 5.979591836734694,
+      "grad_norm": 0.1875365707782748,
+      "learning_rate": 3.1134414899597033e-06,
+      "loss": 0.1279,
+      "mean_token_accuracy": 0.9681350864470005,
+      "num_tokens": 54995950.0,
+      "step": 95
+    },
+    {
+      "epoch": 6.261224489795918,
+      "grad_norm": 0.2033347485797425,
+      "learning_rate": 1.6449074863203773e-06,
+      "loss": 0.1222,
+      "mean_token_accuracy": 0.972901335660962,
+      "num_tokens": 57369628.0,
+      "step": 100
+    },
+    {
+      "epoch": 6.587755102040816,
+      "grad_norm": 0.1697720175472823,
+      "learning_rate": 6.283367774273785e-07,
+      "loss": 0.105,
+      "mean_token_accuracy": 0.9732269406318664,
+      "num_tokens": 60481208.0,
+      "step": 105
+    },
+    {
+      "epoch": 6.914285714285715,
+      "grad_norm": 0.18026706851340263,
+      "learning_rate": 8.876070793840008e-08,
+      "loss": 0.0986,
+      "mean_token_accuracy": 0.9752129040658474,
+      "num_tokens": 63509425.0,
+      "step": 110
+    },
+    {
+      "epoch": 7.0,
+      "mean_token_accuracy": 0.9745120831898281,
+      "num_tokens": 64131339.0,
+      "step": 112,
+      "total_flos": 130497026588672.0,
+      "train_loss": 0.3111057321407965,
+      "train_runtime": 2546.477,
+      "train_samples_per_second": 5.369,
+      "train_steps_per_second": 0.044
+    }
+  ],
+  "logging_steps": 5,
+  "max_steps": 112,
+  "num_input_tokens_seen": 0,
+  "num_train_epochs": 7,
+  "save_steps": 500,
+  "stateful_callbacks": {
+    "TrainerControl": {
+      "args": {
+        "should_epoch_stop": false,
+        "should_evaluate": false,
+        "should_log": false,
+        "should_save": true,
+        "should_training_stop": true
+      },
+      "attributes": {}
+    }
+  },
+  "total_flos": 130497026588672.0,
+  "train_batch_size": 1,
+  "trial_name": null,
+  "trial_params": null
+}
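trainer_state.json records an entry every 5 steps (`logging_steps: 5`); the 112 total steps are consistent with 1953 samples at the effective batch size of 128 suggested by the run name (16 optimizer steps per epoch for 7 epochs). The loss, learning-rate, and token-accuracy curves can be read straight out of `log_history`; a minimal sketch, assuming the file is in the working directory:

```python
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# Every periodic entry carries loss, learning rate and mean token accuracy;
# the final summary entry (step 112) has "train_loss" instead of "loss" and is skipped.
for entry in state["log_history"]:
    if "loss" in entry:
        print(f"step {entry['step']:>3}  "
              f"loss {entry['loss']:.4f}  "
              f"lr {entry['learning_rate']:.2e}  "
              f"acc {entry['mean_token_accuracy']:.3f}")
```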
