---
license: other
library_name: transformers
datasets:
  - HuggingFaceH4/ultrachat_200k
base_model: google/gemma-2b
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
model-index:
  - name: gemma-2b-zephyr-sft
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 49.74
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 72.38
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 41.37
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 34.42
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 66.93
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 18.27
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft
          name: Open LLM Leaderboard
---

# Gemma 2B Zephyr SFT

The Zephyr SFT recipe applied on top of Gemma 2B.
## Model description
- Model type: A 2.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- Finetuned from model: [google/gemma-2b](https://huggingface.co/google/gemma-2b)
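
For reference, here is a minimal usage sketch via the standard `transformers` chat-template API. The repo id `wandb/gemma-2b-zephyr-sft` is taken from the leaderboard links in the metadata above; the generation settings are illustrative assumptions, not recommended values.

```python
# Minimal usage sketch: assumes the tokenizer ships a chat template
# and that the repo id matches the leaderboard query URLs above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wandb/gemma-2b-zephyr-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain overfitting in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```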
## Recipe
We trained using the [alignment handbook](https://github.com/huggingface/alignment-handbook) SFT recipe, logging to W&B.

Visit the W&B workspace here.
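
As a rough illustration of what the SFT step looks like, below is a minimal sketch using `trl`'s `SFTTrainer` on the UltraChat dataset listed in the metadata. The hyperparameters are illustrative assumptions rather than the exact recipe values, and depending on your `trl` version the `dataset_text_field`/`max_seq_length` arguments may live on `SFTConfig` instead; see the alignment handbook for the real configs.

```python
# Illustrative SFT sketch (not the exact recipe): google/gemma-2b on
# HuggingFaceH4/ultrachat_200k, flattened through the chat template.
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments
from trl import SFTTrainer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

def to_text(example):
    # Render the multi-turn "messages" into a single training string.
    # Assumes the tokenizer carries a chat template (the handbook sets
    # one on the tokenizer if it is missing).
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

train_dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft").map(to_text)

trainer = SFTTrainer(
    model="google/gemma-2b",
    train_dataset=train_dataset,
    dataset_text_field="text",   # train on the rendered conversations
    max_seq_length=2048,         # assumed training context length
    args=TrainingArguments(
        output_dir="gemma-2b-zephyr-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```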
## License

This model has the same license as the original Gemma model collection.

## Compute

Compute was provided by Lambda Labs: a single 8x A100 (80GB) node. Training took around 2 hours.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [on the Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=wandb/gemma-2b-zephyr-sft).
| Metric | Value | 
|---|---|
| Avg. | 47.18 | 
| AI2 Reasoning Challenge (25-Shot) | 49.74 | 
| HellaSwag (10-Shot) | 72.38 | 
| MMLU (5-Shot) | 41.37 | 
| TruthfulQA (0-shot) | 34.42 | 
| Winogrande (5-shot) | 66.93 | 
| GSM8k (5-shot) | 18.27 | 
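
To sanity-check one of these numbers locally, here is a hedged sketch using the `lm-evaluation-harness` Python API (v0.4+). The leaderboard runs its own harness configuration, so locally reproduced scores may differ slightly from the table above.

```python
# Sketch: re-run the 25-shot ARC-Challenge evaluation locally.
# Scores may differ slightly from the leaderboard's own harness setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wandb/gemma-2b-zephyr-sft,dtype=bfloat16",
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,            # matches the 25-shot leaderboard setting
    batch_size=8,
)
print(results["results"]["arc_challenge"])  # includes acc_norm
```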