---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- llm
- qwen3
library_name: transformers
base_model:
- Qwen/Qwen3-14B-Base
---

# Xinyuan-LLM-14B-0428

🤗 Hugging Face   |   🤖 ModelScope
## Xinyuan-LLM-14B-0428 Highlights

Xinyuan-LLM-14B-0428 is the first foundation model for the mental health industry, launched by Cylingo Group. Built upon the robust capabilities of Qwen3-14B, this model has been fine-tuned on millions of data points across diverse scenarios within the field.

1. **The First All-Scenario Mental Health Support Foundation Model with 24/7 Intelligent Capabilities**
2. **Covering Diverse Mental Health Scenarios and Building Personalized Psychological Profiles**
3. **Resolving Multiple Parenting Challenges with Customized Family Companion Solutions**

## Quickstart

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:

- SGLang:
  ```shell
  python -m sglang.launch_server --model-path Cylingo/Xinyuan-LLM-14B-0428
  ```
- vLLM:
  ```shell
  vllm serve Cylingo/Xinyuan-LLM-14B-0428
  ```

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.8`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` to 2.0.

> [!NOTE]
> **Xinyuan-LLM-14B-0428** does not include a hybrid Thinking mode similar to Qwen3's. For now, we recommend that users stick to the standard mode. We plan to gradually introduce related features to the community in the future.
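The YaRN note above can be applied via the model's `config.json`, following the convention used by upstream Qwen3; the `original_max_position_embeddings` value of 32,768 is an assumption taken from the Qwen3-14B base model, and `factor: 2.0` matches the 65,536-token example in the note.

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 2.0,
    "original_max_position_embeddings": 32768
  }
}
```

Remember to remove this block again when working mostly with short contexts, since static YaRN applies the scaling factor to every input.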
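As a minimal sketch of querying the endpoint started in the Quickstart with the suggested non-thinking-mode sampling parameters: the `localhost:8000` address, the helper name `build_request`, and the example prompt are assumptions, not part of this model card; adjust them to your deployment.

```python
# Recommended non-thinking-mode sampling settings from the model card.
SAMPLING = {
    "temperature": 0.8,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0,
}

def build_request(prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat-completions payload (hypothetical helper)."""
    return {
        "model": "Cylingo/Xinyuan-LLM-14B-0428",
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING,
    }

payload = build_request("I've been having trouble sleeping lately. What can I do?")

# Sending it requires a running SGLang/vLLM server from the Quickstart, e.g.:
# import json, urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.loads(urllib.request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```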