---
language:
  - de
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
  - t5
  - german
  - scientific
datasets:
  - unpaywall-scientific
---

# DE-T5-Base-15k

Continued pretraining of `GermanT5/t5-efficient-gc4-german-base-nl36` for 15,000 steps on the German portion of the scientific corpus (same preprocessing as EN). Checkpoint: `cross_lingual_transfer/logs/native_baseline/.../step-step=015000.ckpt`.
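
A minimal loading and generation sketch, assuming the checkpoint is published to the Hub (the repo id below is a placeholder, not the actual repository name):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder repo id -- substitute the actual Hub repository name.
model_name = "your-org/de-t5-base-15k"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Span-corruption-style probe: the model fills in the sentinel token.
inputs = tokenizer(
    "Die Ergebnisse der <extra_id_0> wurden in der Studie veröffentlicht.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```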

## Model Details

- **Base:** GermanT5 (same architecture as T5-base, German tokenizer)
- **Optimizer:** Adafactor, lr=1e-3, inverse-sqrt schedule, warmup=1.5k steps, grad clip=1.0 (see the sketch after this list)
- **Effective batch size:** 48 (per-GPU 48, no gradient accumulation)
- **Objective:** span corruption (15% masking, mean span length 3)
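
The optimizer settings above map onto the Hugging Face `Adafactor` implementation roughly as follows. This is a sketch, not the exact training script; it assumes a recent `transformers` version (for `get_inverse_sqrt_schedule`) and a standard PyTorch training loop:

```python
import torch
from transformers.optimization import Adafactor, get_inverse_sqrt_schedule

# Assumes `model` is the loaded T5 model (see the loading sketch above).
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,  # use the fixed lr above instead of relative steps
    relative_step=False,
    warmup_init=False,
)
scheduler = get_inverse_sqrt_schedule(optimizer, num_warmup_steps=1_500)

# Inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # grad clip = 1.0
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```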

## Training Data

German split of the Unpaywall-derived corpus, packed into continued-pretraining windows of 512 tokens with 50% overlap (sketched below).
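
The overlapping windows can be produced with simple stride-based chunking. The helper below is an illustrative sketch, not the actual preprocessing script; the tail-fragment cutoff is an assumption:

```python
def overlapping_windows(token_ids, window=512, overlap=0.5):
    """Split a token-id sequence into fixed-size windows with fractional overlap."""
    stride = int(window * (1 - overlap))  # 256 tokens for 50% overlap
    windows = []
    for start in range(0, len(token_ids), stride):
        chunk = token_ids[start : start + window]
        if len(chunk) < window // 2:  # drop tiny trailing fragments (assumption)
            break
        windows.append(chunk)
    return windows

# Example: windows = overlapping_windows(tokenizer(document_text)["input_ids"])
```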

## Evaluation (Global-MMLU, zero-shot)

| Metric           | EN     | DE     |
|------------------|--------|--------|
| Overall accuracy | 0.2295 | 0.2295 |
| Humanities       | 0.2421 | 0.2421 |
| STEM             | 0.2125 | 0.2125 |
| Social Sciences  | 0.2171 | 0.2171 |
| Other            | 0.2398 | 0.2398 |
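
For reference, zero-shot multiple-choice scoring with a seq2seq model can be done by scoring each answer option as the decoder target and taking the argmax. This is a generic sketch, not the exact harness used for the numbers above; the prompt format is an assumption:

```python
import torch

@torch.no_grad()
def score_option(model, tokenizer, question, option):
    """Return the (negated) label loss of `option` given `question`; higher = more likely."""
    enc = tokenizer(question, return_tensors="pt")
    labels = tokenizer(option, return_tensors="pt").input_ids
    out = model(**enc, labels=labels)
    return -out.loss.item()

def predict(model, tokenizer, question, options):
    """Pick the index of the answer option with the highest score."""
    scores = [score_option(model, tokenizer, question, o) for o in options]
    return max(range(len(options)), key=scores.__getitem__)
```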

## Intended Use

Intended as a baseline for German scientific NLP: compare it against WECHSEL-based transfer models, or use it as a starting point for fine-tuning on German downstream datasets (see the sketch below).
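
A minimal fine-tuning sketch with `Seq2SeqTrainer`. The repo id, data file, and the `"text"`/`"summary"` columns are placeholders for whatever German task you fine-tune on:

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
)

model_name = "your-org/de-t5-base-15k"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical German dataset with "text" and "summary" columns.
raw = load_dataset("json", data_files="train.jsonl")["train"]

def preprocess(batch):
    inputs = tokenizer(batch["text"], max_length=512, truncation=True)
    inputs["labels"] = tokenizer(batch["summary"], max_length=128, truncation=True)["input_ids"]
    return inputs

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="de-t5-finetuned", per_device_train_batch_size=8),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```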

## Limitations

- Only 15k pretraining steps, so improvements over the base GermanT5 checkpoint are modest.
- Uses the German SentencePiece vocabulary; incompatible with an English tokenizer out of the box.