Llama 3 SatCom 70B

Llama 3 SatCom 70B is a high-capacity fine-tuned Large Language Model (LLM) developed under the ESA ARTES programme as part of the SatcomLLM / SCEVA (SatCom Expert Virtual Assistant) project.
It is the largest and most capable model in the SCEVA family of SatCom-specialised LLMs, purpose-built to support satellite communications (SatCom) professionals through advanced reasoning, multi-step problem solving, and expert-level Q&A.


Model Description

  • Base model: meta-llama/Llama-3.3-70B-Instruct
  • Fine-tuning type: Instruction fine-tuning (IFT)
  • Training data: Curated and synthetic SatCom QA, including chain-of-thought annotated datasets
  • Architecture: Decoder-only transformer, 70 billion parameters
  • Languages: English
  • License: Llama 3.3 Community License Agreement

This large-scale variant extends the reasoning and factual accuracy of the 8B model, providing enhanced comprehension of technical systems, higher stability in multi-step mathematical reasoning, and greater contextual understanding across complex SatCom documents.
It excels in tasks such as link budget evaluation, propagation modeling, 5G/6G NTN design, and mission architecture analysis. Thanks to its larger parameter count and extended context window, it can process longer technical passages, integrate multiple data sources, and maintain coherence across complex analytical workflows.
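To give a flavour of the link-budget arithmetic the model assists with, here is a minimal worked sketch; all figures (EIRP, G/T, losses) are illustrative assumptions, not values from any ESA mission.

```python
# Illustrative downlink budget: free-space path loss and carrier-to-noise
# density for an assumed GEO Ku-band link. Numbers are made up for the example.
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

eirp_dbw = 52.0                       # satellite EIRP (assumed)
gt_dbk = 20.0                         # ground-station G/T (assumed)
path_loss = fspl_db(38_000, 12.0)     # GEO slant range, Ku-band downlink
misc_losses_db = 2.0                  # rain, pointing, polarisation (assumed)
k_dbw = -228.6                        # Boltzmann constant in dBW/K/Hz

cn0_dbhz = eirp_dbw + gt_dbk - path_loss - misc_losses_db - k_dbw
print(f"FSPL = {path_loss:.1f} dB, C/N0 = {cn0_dbhz:.1f} dB-Hz")
```

Queries of this shape (given EIRP, G/T, and geometry, derive C/N0 and margin) are representative of the "link budget evaluation" tasks described above.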


Training Datasets

| Dataset | Description |
|---|---|
| esa-sceva/satcom-synth-qa | Synthetic QA data generated with domain-validated prompts and expert-reviewed teacher models |
| esa-sceva/satcom-synth-qa-cot | Chain-of-thought annotated QA to strengthen reasoning accuracy and factual traceability |

Intended Use

Primary use cases:

  • Advanced reasoning and Q&A for SatCom system design and analysis
  • Automated support for link budget calculations and RF engineering tasks
  • Conceptual guidance for 5G/6G NTN and inter-satellite network operations
  • Mission design evaluation and anomaly diagnosis assistance
  • Research, education, and technical documentation in satellite communications
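As an example of the NTN-flavoured calculations in the list above, the sketch below estimates the worst-case Doppler shift a ground user sees from a LEO satellite; the altitude and carrier frequency are assumed study values, not project data.

```python
# Illustrative 5G/6G NTN calculation: maximum Doppler shift from a circular
# LEO orbit, with the satellite at the user's horizon. Inputs are assumptions.
import math

C = 299_792_458.0          # speed of light, m/s
MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def max_doppler_hz(altitude_km: float, carrier_hz: float) -> float:
    """Worst-case Doppler for a circular orbit, seen at 0 deg elevation."""
    r = R_EARTH + altitude_km * 1e3
    v = math.sqrt(MU / r)              # orbital speed
    radial = v * R_EARTH / r           # line-of-sight component at the horizon
    return radial / C * carrier_hz

# 600 km LEO, 2 GHz S-band carrier: roughly 46 kHz of Doppler
print(f"{max_doppler_hz(600, 2e9) / 1e3:.1f} kHz")
```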

Intended users:

  • ESA engineers and mission planners
  • SatCom and aerospace researchers
  • System architects and technical operators
  • Academic institutions and educational users

Limitations

  • The model does not access live telemetry or proprietary ESA mission data.
  • Generated answers should be validated by domain experts before operational use.
  • It is not suitable for safety-critical or real-time decision-making.

Technical Details

| Parameter | Value |
|---|---|
| Base model | Llama 3.3 70B Instruct |
| Parameters | 70 billion |
| Context length | 128k tokens |
| Precision | bfloat16 / fp16 |
| Framework | Lit-GPT (Lightning AI) |
| Training infrastructure | EuroHPC MareNostrum5 + AWS EC2 |
| Optimisation | LoRA fine-tuning, cosine LR schedule |
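A LoRA fine-tuning run with Lit-GPT could look roughly like the following invocation; the rank, alpha, dataset path, and output directory are illustrative assumptions, not the project's actual recipe, and the exact CLI syntax may differ by litgpt version.

```shell
# Hypothetical Lit-GPT LoRA fine-tuning sketch; all hyperparameters and
# paths below are assumptions for illustration only.
litgpt finetune_lora meta-llama/Llama-3.3-70B-Instruct \
  --data JSON \
  --data.json_path data/satcom_synth_qa.json \
  --lora_r 16 \
  --lora_alpha 32 \
  --lora_dropout 0.05 \
  --precision bf16-true \
  --out_dir out/llama3-satcom-70b
```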

Evaluation

Evaluation Datasets

The model was evaluated across SatCom-specific and general-domain benchmarks, focusing on mathematical reasoning, conceptual understanding, and applied problem solving.

| Dataset | Subset | Description |
|---|---|---|
| esa-sceva/satcom-qa | Open SatCom QA | Conceptual and reasoning-based questions on SatCom workflows, regulation, and mission/system design |
| | Math SatCom QA | Quantitative and multi-step engineering problems derived from orbital mechanics and RF analysis |
| esa-sceva/satcom-mcqa | Open MCQA | Conceptual multiple-choice questions on protocols, architectures, and communication systems |
| | Math MCQA | Numerical link-budget and propagation-focused multiple-choice problems |

Results

| Evaluation Subset | Metric | Base Model | Llama 3 SatCom 70B | Notes |
|---|---|---|---|---|
| Math SatCom MCQA | Accuracy | 0.664 | 0.695 | Strong quantitative reasoning in link budgets and RF propagation tasks |
| Open SatCom MCQA | Accuracy | 0.944 | 0.947 | High factual precision in conceptual multiple-choice questions |
| Math SatCom QA | LLM-as-a-judge | 0.944 | 0.955 | Excellent performance on expert-reviewed mathematical reasoning tasks |
| Open SatCom QA | LLM-as-a-judge | 0.901 | 0.911 | Robust understanding of terminology, architecture, and system design reasoning |
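The MCQA accuracy figures above reduce to a simple exact-match metric; the sketch below shows the computation with made-up placeholder predictions, not actual evaluation outputs.

```python
# Minimal MCQA accuracy metric: fraction of questions where the model's
# chosen option matches the answer key. Data here is illustrative only.
def mcqa_accuracy(predictions, answers):
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

preds = ["B", "C", "A", "D", "B", "A"]   # hypothetical model choices
keys  = ["B", "C", "A", "A", "B", "C"]   # hypothetical answer key
print(f"{mcqa_accuracy(preds, keys):.3f}")  # 4 of 6 correct
```

The open-ended QA subsets use an LLM-as-a-judge score instead, since free-text answers cannot be graded by exact match.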

The 70B model delivers improvements in structured reasoning, numerical consistency, and contextual accuracy, making it highly suited for advanced SatCom applications requiring technical depth and reliability.


Summary

Llama 3 SatCom 70B combines the linguistic precision of Llama 3.3 with domain-specialised fine-tuning from ESA’s SCEVA project, achieving strong reasoning and analytical performance in satellite communications.
It represents a step toward intelligent, domain-grounded AI assistants capable of supporting complex engineering and research workflows across the space communications ecosystem.

