Enhance dataset card: Add task category, license, links, and sample usage
#2, opened by nielsr (HF Staff)

README.md CHANGED
@@ -40,4 +40,49 @@ configs:
  data_files:
  - split: train
    path: data/train-*
+ task_categories:
+ - text-to-audio
+ license: cc-by-nc-nd-4.0
 ---
# SingingSDS Dataset

This repository contains the dataset for **SingingSDS: A Singing-Capable Spoken Dialogue System for Conversational Roleplay Applications**.

SingingSDS is a role-playing singing dialogue system that converts natural speech input into character-based singing output. It integrates automatic speech recognition (ASR), large language models (LLMs), and singing voice synthesis (SVS) to create conversational singing experiences. This dataset provides structured annotations, including segment IDs, transcriptions, labels, tempo, MIDI notes, phonemes, lyrics, and their timing information, which are needed for training and evaluating the SVS components of the SingingSDS system.
* **Paper**: [SingingSDS: A Singing-Capable Spoken Dialogue System for Conversational Roleplay Applications](https://huggingface.co/papers/2511.20972)
* **Code**: [https://github.com/SingingSDS/SingingSDS](https://github.com/SingingSDS/SingingSDS)
* **Demo Space**: [https://huggingface.co/spaces/espnet/SingingSDS](https://huggingface.co/spaces/espnet/SingingSDS)
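To take a quick look at the annotations described above, the dataset can be loaded with the `datasets` library. This is a minimal sketch rather than an official loading recipe: the repository ID below is a placeholder for this dataset's actual Hub ID, and the exact column names may differ from the annotation list in the description.

```python
from datasets import load_dataset

# Placeholder repository ID; replace with this dataset's actual Hub ID.
ds = load_dataset("espnet/SingingSDS-dataset", split="train")

# Print the schema and one example to see which annotation fields
# (segment ID, transcription, tempo, MIDI notes, phonemes, lyrics, timing)
# are present and how they are typed.
print(ds.features)
print(ds[0])
```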
## Sample Usage (SingingSDS System)

The following examples demonstrate how to use the SingingSDS system via its command-line interface (CLI), showing how models trained on datasets like this one can be applied at inference time.

### Example Usage
```bash
python cli.py \
    --query_audio tests/audio/hello.wav \
    --config_path config/cli/yaoyin_default.yaml \
    --output_audio outputs/yaoyin_hello.wav \
    --eval_results_csv outputs/yaoyin_test.csv
```
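If you prefer to drive the CLI from Python (for example, to batch over many query files), a thin `subprocess` wrapper such as the sketch below works. It only shells out to the same `cli.py` command with the documented flags and assumes it is run from the SingingSDS repository root.

```python
import subprocess
from typing import Optional

def run_singingsds(query_audio: str,
                   output_audio: str,
                   config_path: str = "config/cli/yaoyin_default.yaml",
                   eval_results_csv: Optional[str] = None) -> None:
    """Shell out to the SingingSDS CLI with the documented flags."""
    cmd = [
        "python", "cli.py",
        "--query_audio", query_audio,
        "--config_path", config_path,
        "--output_audio", output_audio,
    ]
    if eval_results_csv is not None:
        cmd += ["--eval_results_csv", eval_results_csv]
    subprocess.run(cmd, check=True)

# Mirrors the example command above; run from the repository root.
run_singingsds("tests/audio/hello.wav", "outputs/yaoyin_hello.wav",
               eval_results_csv="outputs/yaoyin_test.csv")
```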
### Inference-Only Mode

Run minimal inference without evaluation:
```bash
python cli.py \
    --query_audio tests/audio/hello.wav \
    --config_path config/cli/yaoyin_default_infer_only.yaml \
    --output_audio outputs/yaoyin_hello.wav
```
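Either run writes the synthesized reply to the path given by `--output_audio`. As a quick sanity check (not part of the SingingSDS tooling), the output can be inspected with a generic audio library such as `soundfile`:

```python
import soundfile as sf

# Inspect the synthesized reply written by --output_audio in the runs above.
audio, sample_rate = sf.read("outputs/yaoyin_hello.wav")
print(f"duration: {len(audio) / sample_rate:.2f} s, sample rate: {sample_rate} Hz")
```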
### Parameter Description

* `--query_audio`: Input audio file path (required)
* `--config_path`: Configuration file path (default: `config/cli/yaoyin_default.yaml`)
* `--output_audio`: Output audio file path (required)
* `--eval_results_csv`: Output CSV file path for evaluation results (optional; used in the Example Usage above)
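When `--eval_results_csv` is supplied, the CLI writes its evaluation results to that CSV. The exact columns are not documented here, so the sketch below simply loads the file with `pandas` and prints whatever it contains:

```python
import pandas as pd

# Load the evaluation results written by --eval_results_csv in the example run.
# Column names depend on the SingingSDS evaluation configuration.
results = pd.read_csv("outputs/yaoyin_test.csv")
print(results.columns.tolist())
print(results.head())
```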