Dell Pro AI Studio
Models for Dell Pro AI Studio
Note: Sample Python script for running Whisper-Base-Small on CPU and NPU. This model is intended for RAI 1.5. It is multilingual and is not intended for production. The officially supported ASR can be found here: https://github.com/amd/RyzenAI-SW/tree/main/demo/ASR/Whisper
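A minimal sketch of how a script like this can target CPU or NPU with the same code path, assuming ONNX Runtime is used and the Ryzen AI NPU is exposed through the `VitisAIExecutionProvider`. The model path in the commented usage is a placeholder, not a file from the repo.

```python
def select_providers(available):
    """Prefer the Ryzen AI NPU provider when present; fall back to CPU.

    `available` is the list returned by ort.get_available_providers().
    """
    preferred = ["VitisAIExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# Typical use (requires onnxruntime and a real model file):
# import onnxruntime as ort
# session = ort.InferenceSession(
#     "whisper_encoder.onnx",  # placeholder path
#     providers=select_providers(ort.get_available_providers()),
# )
```

With this fallback, the same script runs unchanged on machines without the NPU stack installed.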
amd/NPU-Nomic-embed-text-v1.5-ryzen-strix-cpp
Note: C++ implementation to test Nomic inference latency. The ONNX model and caches are compatible with RAI 1.4. This means the RAI 1.4 conda environment needs to be activated, and the RYZEN_AI_INSTALLATION_PATH and XLNX_VART_FIRMWARE environment variables must be set.
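A sketch of setting the two required environment variables from Python before any inference session is created. The paths shown are placeholders; substitute the values from your own RAI 1.4 installation.

```python
import os

# Placeholder paths -- replace with the locations from your RAI 1.4 install.
os.environ["RYZEN_AI_INSTALLATION_PATH"] = r"C:\path\to\RyzenAI-1.4"
os.environ["XLNX_VART_FIRMWARE"] = r"C:\path\to\firmware.xclbin"

# Both variables must be set in the process environment before the ONNX
# session is created, or the NPU flow will not initialize.
```

Setting them in the activated conda environment (or the shell profile) works equally well; the point is that they must be visible to the process that loads the model.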
amd/NPU-ESRGAN-ryzen-strix-cpp
Note: C++ script to run ESRGAN inference (upsampling). The ONNX models and caches are compatible with RAI 1.4. No performance measurements. The RAI 1.4 conda environment needs to be activated, and the RYZEN_AI_INSTALLATION_PATH and XLNX_VART_FIRMWARE environment variables must be set. ESRGAN inference on a 1x250x250x3 PNG produces a 1x1000x1000x3 PNG.
amd/NPU-CLIP-Python
Note: Python implementation of CLIP inference. The ONNX models and caches are compatible with RAI 1.5.
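For context on what CLIP inference produces: the model emits an image embedding and a text embedding, and matching is scored by cosine similarity between them. A minimal, dependency-free sketch of that scoring step (the embedding vectors themselves would come from the ONNX models above):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

In practice the image and text embeddings are L2-normalized first, after which the dot product alone gives the same score.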
amd/RyzenAI-1.5-ESRGAN-Inference-ryzen-strix-cpp
Note: Optimized RAI 1.5 ESRGAN inference. Timers included. OpenCV is used for image manipulation. Expect 95 to 105 ms per inference.
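A sketch of the kind of timing harness used to report per-inference latency like the 95 to 105 ms figure above: warm up first (the first NPU runs include compilation/cache loading), then average wall-clock time over several iterations. The helper name and iteration counts are illustrative, not taken from the repo.

```python
import time

def avg_latency_ms(run, warmup=3, iters=20):
    """Average wall-clock latency of run() in milliseconds.

    Warmup iterations are excluded so one-time setup cost
    (e.g. cache loading) does not skew the average.
    """
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - start) / iters * 1000.0

# Usage: avg_latency_ms(lambda: session.run(None, inputs))
```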
amd/RAI_1.6_CS_ORT_Infernce
Note: Application code, ONNX models, and caches for the YOLO, Whisper-small-en, and CLIP models, for validation of the AMD C# ORT bindings. Inference can be run independently of any RAI install using NuGet packages, noting that this corresponds to RAI 1.6.
amd/RAI_1.6_CS_OGA_Inference
Note: C# implementation of an interactive LLM prompt running Phi-4-mini-instruct. Uses the 1.6.0 NuGet package.