language:
  - en
library_name: transformers
pipeline_tag: text-generation
tags:
  - esper
  - esper-3.1
  - esper-3
  - valiant
  - valiant-labs
  - gpt
  - gpt-oss
  - gpt-oss-20b
  - openai
  - 20b
  - reasoning
  - code
  - code-instruct
  - python
  - javascript
  - dev-ops
  - jenkins
  - terraform
  - ansible
  - docker
  - kubernetes
  - helm
  - grafana
  - prometheus
  - shell
  - bash
  - azure
  - aws
  - gcp
  - cloud
  - scripting
  - powershell
  - problem-solving
  - architect
  - engineer
  - developer
  - creative
  - analytical
  - expert
  - rationality
  - conversational
  - chat
  - instruct
base_model: openai/gpt-oss-20b
datasets:
  - sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
  - sequelbox/Tachibana3-Part2-DeepSeek-V3.2
  - sequelbox/Titanium3-DeepSeek-V3.1-Terminus
  - sequelbox/Mitakihara-DeepSeek-R1-0528
license: apache-2.0

Support our open-source dataset and model releases!


Esper 3.1: Qwen3-4B-Thinking-2507, gpt-oss-20b

Esper 3.1 is a coding, architecture, and DevOps reasoning specialist built on gpt-oss-20b.

  • Your dedicated DevOps expert: Esper 3.1 maximizes DevOps and architecture helpfulness, powered by high-difficulty DevOps and architecture data generated with DeepSeek-V3.1-Terminus!
  • Improved coding performance: challenging code-reasoning datasets stretch DeepSeek-V3.1-Terminus and DeepSeek-V3.2 to the limits, allowing Esper 3.1 to tackle harder coding tasks!
  • AI to build AI: our high-difficulty AI expertise data boosts Esper 3.1's MLOps, AI architecture, AI research, and general reasoning skills.
  • Small model sizes allow Esper 3.1 to run on local desktop and mobile hardware, plus super-fast server inference!

Prompting Guide

Esper 3.1 uses the gpt-oss-20b prompt format.

Esper 3.1 is a reasoning finetune; reasoning level high is generally recommended.
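In the gpt-oss prompt format, the reasoning level is typically selected via the system message (e.g. "Reasoning: high"). A minimal sketch of building a message list that way — the helper name `build_messages` is ours, and the exact convention may vary by transformers version, so check the gpt-oss-20b model card:

```python
# Sketch: select the reasoning level through the system message,
# following the gpt-oss prompt format. Helper name is illustrative.
def build_messages(user_prompt, reasoning_level="high"):
    """Prepend a system message requesting the given reasoning level."""
    return [
        {"role": "system", "content": f"Reasoning: {reasoning_level}"},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Write a Terraform module for a private S3 bucket.")
```

The resulting list can be passed to the pipeline in place of a plain user-only message list.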

NOTE: This release of Esper 3.1 uses bf16 for all parameters. Consider quantized models if you're not looking to use bf16.
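As a rough guide to what bf16 weights cost in memory (weights only — activations and KV cache add more; gpt-oss-20b has roughly 21B total parameters):

```python
# Back-of-envelope weight-memory estimate. Parameter count is approximate;
# this ignores activations, KV cache, and framework overhead.
def weight_memory_gb(n_params, bytes_per_param):
    """Memory for model weights alone, in gigabytes."""
    return n_params * bytes_per_param / 1e9

n = 21e9                          # ~21B total parameters
bf16 = weight_memory_gb(n, 2)     # bf16 is 2 bytes/param -> ~42 GB
int4 = weight_memory_gb(n, 0.5)   # 4-bit quantization -> ~10.5 GB
```

This is why quantized variants are worth considering on consumer hardware.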

Example inference script, adapted from the gpt-oss-20b model card, to get started:

from transformers import pipeline

model_id = "ValiantLabs/gpt-oss-20b-Esper3.1"

# device_map="auto" spreads weights across available devices;
# torch_dtype="auto" keeps the checkpoint's native dtype.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Design a serverless architecture for a real-time image processing application using AWS Lambda and Amazon S3."},
]

# Reasoning models emit long chains of thought, so allow a generous token budget.
outputs = pipe(
    messages,
    max_new_tokens=15000,
)
# The last entry in generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])


Esper 3.1 is created by Valiant Labs.

Check out our HuggingFace page to see all of our models!

We care about open source. For everyone to use.