# IFEval-Hi Evaluation Framework
## Overview
IFEval-Hi is a Hindi-language adaptation of the IFEval (Instruction Following Evaluation) benchmark, designed to evaluate the instruction-following capabilities of Large Language Models (LLMs) in Hindi. This implementation maintains the core evaluation methodology of the original English IFEval while incorporating language-specific modifications to ensure accurate and fair assessment of Hindi language models.
## Getting Started
You have two options for using this evaluation framework:
1. **Option 1: Use the Ready-to-Use Fork** (Recommended)
   - Fork or clone the repository directly from: https://github.com/anushaknvidia/lm-evaluation-harness
   - This fork already includes all the Hindi-specific configurations and modifications
   - Skip to [Step 3: Run Evaluation](#step-3-run-evaluation)
2. **Option 2: Manual Setup**
   - Follow the step-by-step instructions below to set up IFEval-Hi from scratch
   - This is useful if you want to customize or understand the implementation details
## Setup and Usage
### Step 1: Create Task Configuration
1. Navigate to the English IFEval task directory in lm-evaluation-harness:
   ```
   https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/ifeval
   ```
2. Create a copy of the English IFEval directory and rename it to `ifevalhi`
3. Rename the task file in the copied folder to `ifevalhi.yaml` for the Hindi-specific configuration (a scripted version of these steps is sketched below)
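If you prefer to script these steps, a minimal sketch is shown below; it assumes a local clone of lm-evaluation-harness in the working directory and that the English task config is named `ifeval.yaml` (check the actual file names in your checkout).

```python
# Sketch: copy the English IFEval task folder to a new ifevalhi folder and
# rename the task config. Paths assume the default repository layout.
import shutil
from pathlib import Path

repo_root = Path("lm-evaluation-harness")           # local clone of the harness
src = repo_root / "lm_eval" / "tasks" / "ifeval"    # English IFEval task
dst = repo_root / "lm_eval" / "tasks" / "ifevalhi"  # new Hindi task directory

shutil.copytree(src, dst)                            # copy the whole task folder
(dst / "ifeval.yaml").rename(dst / "ifevalhi.yaml")  # rename the task config
print(f"Created {dst} with ifevalhi.yaml")
```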
### Step 2: Configure Parameters
Update the `ifevalhi.yaml` configuration file with the following Hindi-specific parameters:
```yaml
# Dataset Configuration
dataset_path: nvidia/IFEval-Hi

# Generation Parameters
max_gen_toks: 4096  # Increased from 1280 to accommodate Hindi morphology

# Additional Hindi-specific settings
# (Include language-specific preprocessing and normalization settings as needed)
```
**Key Configuration Changes:**
- **`dataset_path`**: Changed from `google/IFEval` to `nvidia/IFEval-Hi`
- **`max_gen_toks`**: Increased to 4,096 tokens to handle Hindi's linguistic complexity
### Step 3: Run Evaluation
Execute the evaluation using the lm-eval-harness framework with the Hindi task configuration:
```bash
# Basic evaluation command; add other arguments as needed per the lm-eval-harness documentation
lm-eval --model hf \
    --model_args pretrained=<model_name_or_path> \
    --tasks ifevalhi \
    --batch_size auto \
    --output_path ./results/
```
### Expected Output
The evaluation will generate results including the following metrics (a sketch for reading them from the results file follows this list):
- **prompt_level_strict_acc**: Primary accuracy metric
- **normalised_acc**: Normalized accuracy with text preprocessing
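The exact layout under `--output_path` depends on the harness version, but the run typically writes a JSON file containing a `results` section keyed by task name (metric keys may carry a filter suffix such as `,none`). A rough sketch for pulling out the IFEval-Hi metrics:

```python
# Sketch: locate the newest results JSON written by lm-eval under ./results/
# and print the ifevalhi metrics. The file layout is an assumption; adjust
# paths to match your harness version.
import glob
import json
import os

files = glob.glob("results/**/*.json", recursive=True)
latest = max(files, key=os.path.getmtime)  # most recently written results file

with open(latest, encoding="utf-8") as f:
    data = json.load(f)

for metric, value in data["results"]["ifevalhi"].items():
    print(f"{metric}: {value}")
```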
## Key Differences from English IFEval
### 1. Configuration Parameters
#### Maximum Generation Token Limit
- **English IFEval**: 1,280 tokens
- **IFEval-Hi**: 4,096 tokens

The increased token limit accommodates the morphological and syntactic properties of Hindi text, which often requires more tokens to express equivalent content than English.
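The token inflation is easy to see with any subword tokenizer. The snippet below is only an illustration; the tokenizer choice and counts are examples, not part of the benchmark configuration.

```python
# Sketch: compare token counts for roughly equivalent English and Hindi text.
# The GPT-2 tokenizer is used only because it is small and ungated; exact
# counts vary by tokenizer, but Hindi typically needs noticeably more tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

en = "My answer is yes."
hi = "मेरा जवाब हाँ है।"

print(len(tok(en)["input_ids"]), len(tok(hi)["input_ids"]))
```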
### 2. Language-Specific Processing
#### Tokenization and Segmentation
- **English Implementation**: Uses a standard tokenizer for sentence and word segmentation
- **IFEval-Hi**: Incorporates Hindi-specific punctuation handling (see the sketch after this list), including:
  - Sentence delimitation using the danda (।), the Devanagari full stop that resembles a vertical bar (`|`)
  - Custom punctuation rules tailored to Hindi text structure
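As an illustration (not the harness's exact implementation), a simple Hindi-aware sentence splitter can treat the danda and double danda as terminators alongside standard punctuation:

```python
# Illustrative sentence splitter for Hindi: splits on the danda (।),
# double danda (॥), and standard sentence-ending punctuation.
import re

SENTENCE_END = re.compile(r"[।॥.!?]+")

def split_hindi_sentences(text: str) -> list[str]:
    """Split text into sentences, dropping empty fragments."""
    return [part.strip() for part in SENTENCE_END.split(text) if part.strip()]

print(split_hindi_sentences("मेरा जवाब हाँ है। आप कैसे हैं? धन्यवाद।"))
# ['मेरा जवाब हाँ है', 'आप कैसे हैं', 'धन्यवाद']
```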
### 3. Constrained Response Categories
IFEval-Hi expands the constrained response category with Hindi-specific response patterns:
```
- मेरा जवाब हाँ है (My answer is yes)
- मेरा जवाब नहीं है (My answer is no)
- मेरा जवाब शायद है (My answer is maybe)
- हाँ (Yes)
- नहीं (No)
- शायद (Maybe)
```
These additions ensure fair evaluation for Hindi responses and align with natural Hindi language usage patterns.
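A minimal sketch of how such a check could work (the fork's `instructions.py` is the source of truth; this only illustrates the idea) is to test whether the stripped response contains one of the allowed Hindi patterns:

```python
# Illustrative constrained-response check: passes if the response contains
# one of the allowed Hindi answer patterns listed above.
CONSTRAINED_RESPONSES_HI = (
    "मेरा जवाब हाँ है",   # My answer is yes
    "मेरा जवाब नहीं है",  # My answer is no
    "मेरा जवाब शायद है",  # My answer is maybe
    "हाँ",                 # Yes
    "नहीं",                # No
    "शायद",                # Maybe
)

def check_constrained_response(response: str) -> bool:
    """Return True if the response uses one of the allowed Hindi answers."""
    stripped = response.strip()
    return any(pattern in stripped for pattern in CONSTRAINED_RESPONSES_HI)

print(check_constrained_response("मेरा जवाब नहीं है।"))  # True
```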
### 4. Text Normalization
IFEval-Hi implements comprehensive normalization procedures for model-generated Hindi text and evaluation parameters:
#### Character Normalization
- **Consonant Unification**: Characters like क़ and क are unified to maintain consistency
- **Diacritic Removal**: Diacritical marks such as "ँ" (chandrabindu) are stripped
- **Symbol Cleanup**: Redundant symbols and spacing irregularities are removed
- **Orthographic Standardization**: Variations in Hindi script representation are normalized

These normalization steps ensure consistent processing across input prompts and model-generated outputs, reducing evaluation bias from orthographic variations.
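As a rough sketch of what such normalization can look like (the exact rules live in the fork's utility code; the steps below are illustrative assumptions), the nukta and chandrabindu can be dropped after Unicode decomposition, which also unifies precomposed forms such as क़ with क:

```python
# Illustrative Hindi normalizer: unifies nukta consonant forms (क़ -> क),
# strips the chandrabindu, and collapses irregular whitespace.
import re
import unicodedata

NUKTA = "\u093c"         # combining nukta sign
CHANDRABINDU = "\u0901"  # combining chandrabindu

def normalize_hindi(text: str) -> str:
    # Decomposition splits precomposed nukta letters (e.g. U+0958 क़) into
    # base consonant + nukta, so removing the nukta unifies the variants.
    text = unicodedata.normalize("NFD", text)
    text = text.replace(NUKTA, "")             # consonant unification
    text = text.replace(CHANDRABINDU, "")      # diacritic removal
    text = unicodedata.normalize("NFC", text)  # recompose remaining characters
    return re.sub(r"\s+", " ", text).strip()   # symbol/spacing cleanup

print(normalize_hindi("क़िला   की  कहानी"))  # 'किला की कहानी'
```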
### 5. Validation Logic Updates
#### Letter Frequency Checker
- **English IFEval**: Includes validation logic restricted to the English alphabet
- **IFEval-Hi**: The English-alphabet-only validation has been removed from `instructions.py` to align with Hindi-specific evaluation requirements

This modification ensures that character-level constraints are appropriately evaluated for the Devanagari script used in Hindi.
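For illustration only (the fork's `instructions.py` defines the actual interface), a letter-frequency check that works for Devanagari simply drops the English-alphabet guard and counts occurrences of the target character directly:

```python
# Illustrative letter-frequency check that accepts Devanagari characters
# instead of restricting the target letter to [a-zA-Z].
def check_letter_frequency(response: str, letter: str, min_count: int) -> bool:
    """Return True if `letter` occurs at least `min_count` times in the response."""
    return response.count(letter) >= min_count

print(check_letter_frequency("कमल का कमाल", "क", 3))  # True
```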
## Execution Pipeline
IFEval-Hi follows the same execution pipeline as the English variant within the lm-eval-harness repository:
```
Pipeline Structure:
1. Load dataset from nvidia/IFEval-Hi
2. Generate model responses with Hindi-specific configurations
3. Apply Hindi text normalization
4. Evaluate instruction-following accuracy
5. Report metrics
```
Both implementations use the same core Python utility modules, ensuring consistency in evaluation methodology while supporting language-specific adaptations.

The fork of the evaluation repository with all of the above changes is available here: https://github.com/anushaknvidia/lm-evaluation-harness