# EdgePulse Coder 14B (LoRA)
EdgePulse Coder 14B is a production-grade coding assistant fine-tuned with LoRA on top of Qwen2.5-Coder-14B.
It is designed for real-world software engineering workflows where reliability and correctness matter.
## Model Details

### Model Description
EdgePulse Coder 14B focuses on practical developer tasks, trained on a large, strictly validated dataset covering:
- Bug fixing
- Code explanation
- Refactoring
- Optimization
- Async & concurrency correction
- Logging & observability
- Security & defensive coding
- Networking & I/O handling
- Multi-file context reasoning
- Test generation and impact analysis
The model is optimized for IDE usage, CLI workflows, and Cursor-like streaming environments.
- Developed by: EdgePulseAI
- Shared by: EdgePulseAI
- Model type: Large Language Model (Code-focused)
- Language(s): Python, JavaScript, TypeScript, and Bash (primary), plus general programming concepts
- License: Apache-2.0
- Finetuned from: Qwen/Qwen2.5-Coder-14B
### Model Sources
- Base Model: https://huggingface.co/Qwen/Qwen2.5-Coder-14B
- Website: https://EdgePulseAi.com
## Uses

### Direct Use
EdgePulse Coder 14B can be used directly for:
- Code explanation
- Bug fixing
- Refactoring existing code
- Generating tests
- Improving logging and error handling
- Fixing async / concurrency bugs
- Secure coding suggestions
- Network & I/O robustness
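
Each of these tasks maps to a plain text prompt. The framings below are illustrative examples of our own, not templates the model requires:

```python
# Illustrative prompt framings for a few direct-use tasks; the
# wording is our own example, not a format the model enforces.
example_prompts = {
    "bug_fix": "Fix this bug:\n\ndef last(xs):\n    return xs[len(xs)]",
    "explain": "Explain what this command does:\n\ngrep -rn 'TODO' src/",
    "tests": "Write pytest tests for:\n\ndef clamp(x, lo, hi):\n    return max(lo, min(x, hi))",
}
```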
### Downstream Use
- IDE assistants (VS Code / Cursor-style tools)
- CI/CD automation
- Code review bots
- Developer copilots
- Internal engineering tools
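
IDE and Cursor-style integrations typically stream tokens as they are generated. Below is a minimal sketch using transformers' `TextIteratorStreamer`; it assumes `model` and `tokenizer` are loaded as shown in the Get Started section, and the prompt is an arbitrary example:

```python
from threading import Thread
from transformers import TextIteratorStreamer

# Minimal streaming sketch for IDE-style frontends. Assumes `model`
# and `tokenizer` are already loaded as in the Get Started section.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Refactor this to use a context manager:\n\nf = open('data.txt')\ndata = f.read()\nf.close()"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# generate() blocks until completion, so run it in a worker thread
# and consume decoded text chunks as they arrive.
thread = Thread(target=model.generate, kwargs={**inputs, "streamer": streamer, "max_new_tokens": 256})
thread.start()
for text_chunk in streamer:
    print(text_chunk, end="", flush=True)
thread.join()
```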
### Out-of-Scope Use
- Medical or legal advice
- Autonomous system control
- High-risk decision making without human review
## Bias, Risks, and Limitations
- The model may occasionally produce syntactically correct but logically incorrect code.
- Security-sensitive code should always be reviewed by humans.
- Performance depends on correct prompt framing and context size.
### Recommendations
- Use human review for production deployments.
- Combine with static analysis and testing tools.
- Prefer structured prompts for multi-file tasks.
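
For multi-file tasks, a prompt that labels each file tends to work better than pasting code inline. The `### File:` markers below are an illustrative convention of our own, not a format the model requires:

```python
# Illustrative structured prompt for a multi-file task. The
# "### File:" markers are our own convention, not a required format.
prompt = (
    "Fix the bug that makes main.py crash.\n\n"
    "### File: utils.py\n"
    "def greet(name):\n"
    "    return f'Hello, {name}!'\n\n"
    "### File: main.py\n"
    "from util import greet\n"
    "print(greet('world'))\n"
)
```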
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "Qwen/Qwen2.5-Coder-14B"
adapter_model = "edgepulse-ai/EdgePulse-Coder-14B-LoRA"

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)
model = PeftModel.from_pretrained(model, adapter_model)
model.eval()

# Run a simple bug-fix prompt.
prompt = "Fix this bug:\n\ndef add(a,b): return a-b"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
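
For deployment, the LoRA adapter can optionally be merged into the base weights so inference no longer depends on peft. A short sketch, where the output directory name is an arbitrary example:

```python
# Fold the LoRA deltas into the base weights and save a standalone
# checkpoint; the output path here is just an example.
merged = model.merge_and_unload()
merged.save_pretrained("edgepulse-coder-14b-merged")
tokenizer.save_pretrained("edgepulse-coder-14b-merged")
```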