FinSight LLM (Domain-FT Qwen 3B)

TL;DR: A domain-tuned finance Q&A model (Qwen 3B) for ratios, filings, and valuation topics.
Deployed via Text Generation Inference (TGI); frontend: Next.js (Vercel).
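
Since serving goes through TGI, a running TGI instance for this model can also be queried directly over HTTP. A minimal sketch follows; the endpoint URL and port are placeholders, not the project's actual deployment:

import requests

# Placeholder endpoint: point this at your own TGI deployment.
TGI_URL = "http://localhost:8080"

payload = {
    "inputs": "Explain PEG vs P/E with a 1-liner example.",
    "parameters": {"max_new_tokens": 256, "do_sample": True, "temperature": 0.2},
}

# TGI's /generate route accepts {"inputs": ..., "parameters": {...}}
# and returns a JSON object with a "generated_text" field.
resp = requests.post(f"{TGI_URL}/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["generated_text"])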

Intended use

  • Educational finance Q&A, ratio explanations, and simplified summaries of filings.
  • Not for investment advice or execution. See Limitations & Safety.

How to use

Python (Transformers)

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "willckim/domain-ft-qwen3b"
tok = AutoTokenizer.from_pretrained(model_id)
# Load weights in fp16 and spread layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain PEG vs P/E with a 1-liner example."
x = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to take effect; otherwise decoding is greedy.
y = model.generate(**x, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tok.decode(y[0], skip_special_tokens=True))
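
If the fine-tune keeps the Qwen chat template (an assumption; check the tokenizer config for a chat_template), wrapping the question as a chat message usually yields cleaner answers. A minimal sketch reusing tok and model from above:

# Assumes the tokenizer inherited a chat template from Qwen2.5.
messages = [{"role": "user", "content": "Explain PEG vs P/E with a 1-liner example."}]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
x = tok(text, return_tensors="pt").to(model.device)
y = model.generate(**x, max_new_tokens=256, do_sample=True, temperature=0.2)
# Slice off the prompt tokens so only the model's reply is printed.
print(tok.decode(y[0][x["input_ids"].shape[1]:], skip_special_tokens=True))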

Weights: safetensors, 3B parameters, F32 tensors.

Model tree for willckim/domain-ft-qwen3b

Base model: Qwen/Qwen2.5-3B (this model is a fine-tune of that base).