admin committed 82b24f1 (parent d9e7f94), changing README.md (+207 -32); the updated file follows:

---
license: cc-by-nc-nd-4.0
viewer: true
dataset_info:
- config_name: VGMIDI
  features:
  - name: prompt
    dtype: string
  - name: data
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Q1
          '1': Q2
          '2': Q3
          '3': Q4
  splits:
  - name: train
    num_bytes: 6029629
    num_examples: 8383
  - name: test
    num_bytes: 673336
    num_examples: 932
  download_size: 7109915
  dataset_size: 6702965
- config_name: EMOPIA
  features:
  - name: prompt
    dtype: string
  - name: data
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Q1
          '1': Q2
          '2': Q3
          '3': Q4
  splits:
  - name: train
    num_bytes: 18731226
    num_examples: 19332
  - name: test
    num_bytes: 2102303
    num_examples: 2148
  download_size: 21846539
  dataset_size: 20833529
- config_name: Rough4Q
  features:
  - name: prompt
    dtype: string
  - name: data
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Q1
          '1': Q2
          '2': Q3
          '3': Q4
  splits:
  - name: train
    num_bytes: 133211901
    num_examples: 468605
  - name: test
    num_bytes: 14831382
    num_examples: 52068
  download_size: 172425554
  dataset_size: 148043283
- config_name: Analysis
  features:
  - name: label
    dtype:
      class_label:
        names:
          '0': Q1
          '1': Q2
          '2': Q3
          '3': Q4
  - name: valence
    dtype:
      class_label:
        names:
          '0': low
          '1': high
  - name: arousal
    dtype:
      class_label:
        names:
          '0': low
          '1': high
  - name: key
    dtype:
      class_label:
        names:
          '0': "C"
          '1': "C#"
          '2': "D"
          '3': "Eb"
          '4': "E"
          '5': "F"
          '6': "F#"
          '7': "G"
          '8': "G#/Ab"
          '9': "A"
          '10': "Bb"
          '11': "B"
  - name: mode
    dtype:
      class_label:
        names:
          '0': minor
          '1': major
  - name: pitch
    dtype: float32
  - name: range
    dtype: float32
  - name: pitchSD
    dtype: float32
  - name: direction
    dtype: int8
  - name: tempo
    dtype: float32
  - name: volume
    dtype: float32
  splits:
  - name: train
    num_bytes: 77958
    num_examples: 1278
  download_size: 333534
  dataset_size: 77958
configs:
- config_name: VGMIDI
  data_files:
  - split: train
    path: VGMIDI/train/data-*.arrow
  - split: test
    path: VGMIDI/test/data-*.arrow
- config_name: EMOPIA
  data_files:
  - split: train
    path: EMOPIA/train/data-*.arrow
  - split: test
    path: EMOPIA/test/data-*.arrow
- config_name: Rough4Q
  data_files:
  - split: train
    path: Rough4Q/train/data-*.arrow
  - split: test
    path: Rough4Q/test/data-*.arrow
- config_name: Analysis
  data_files:
  - split: train
    path: Analysis/train/data-*.arrow
---

# EMelodyGen
The EMelodyGen dataset comprises four subsets: Analysis, EMOPIA, VGMIDI, and Rough4Q. The EMOPIA and VGMIDI subsets are derived from the MIDI files of their respective source datasets, with all melodies in the V1 track of each soundtrack converted to ABC notation by a data processing script; both subsets are enriched with enhanced emotion labels. The Analysis subset is a statistical analysis of the original EMOPIA and VGMIDI datasets, intended to guide the enhancement and automatic annotation of musical emotion data. Lastly, the Rough4Q subset merges ABC notation collections from the IrishMAN-XML, EsAC, Wikifonia, Nottingham, JSBach Chorales, and CCMusic datasets; these collections are processed and augmented based on insights from the Analysis subset and then roughly emotion-labeled with the music21 library.
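The exact labeling heuristic lives in the data processing script; purely as an illustration of the idea, here is a minimal sketch of mode/tempo-based quadrant labeling with music21. The quadrant mapping, the `tempo_cut` threshold, and the default BPM are assumptions for demonstration, not values taken from this dataset:

```python
# Illustrative sketch only: maps mode (valence proxy) and tempo (arousal proxy)
# to a Russell-model quadrant. Threshold and defaults are assumptions.
from music21 import converter, tempo

def rough_quadrant(abc_path: str, tempo_cut: float = 100.0) -> str:
    score = converter.parse(abc_path)  # music21 parses ABC notation directly
    key = score.analyze("key")         # estimated key; .mode is major/minor
    marks = list(score.recurse().getElementsByClass(tempo.MetronomeMark))
    bpm = marks[0].number if marks and marks[0].number else 120.0
    high_valence = key.mode == "major"
    high_arousal = bpm >= tempo_cut
    if high_valence:
        return "Q1" if high_arousal else "Q4"
    return "Q2" if high_arousal else "Q3"
```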
 
 
 
## Maintenance
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/monetjoe/EMelodyGen
cd EMelodyGen
```
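`GIT_LFS_SKIP_SMUDGE=1` clones only the LFS pointer files, which keeps the checkout small; if the actual Arrow shards are needed locally, they can be fetched afterwards with `git lfs pull` (optionally restricted to one subset, e.g. `git lfs pull --include="Rough4Q/*"`, assuming the directory layout declared in the `configs` section above).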

## Usage
```python
from datasets import load_dataset

# VGMIDI (default) / EMOPIA / Rough4Q subset
ds = load_dataset("monetjoe/EMelodyGen", name="VGMIDI")
for item in ds["train"]:
    print(item)

for item in ds["test"]:
    print(item)

# Analysis subset
ds = load_dataset("monetjoe/EMelodyGen", name="Analysis", split="train")
for item in ds:
    print(item)
```
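The `label` column is declared as a `ClassLabel` in the YAML header above, so each example stores an integer id. A small sketch using the standard `datasets` API to convert between ids and quadrant names:

```python
from datasets import load_dataset

ds = load_dataset("monetjoe/EMelodyGen", name="VGMIDI", split="test")
label_feature = ds.features["label"]  # ClassLabel with names Q1-Q4

item = ds[0]
print(label_feature.int2str(item["label"]))  # e.g. 0 -> "Q1"
print(label_feature.str2int("Q3"))           # reverse lookup: "Q3" -> 2
```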
 
| Feature   | Distribution chart |
| :-------: | :-------------------------------------------------------------------------------------------: |
| key       | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/key.jpg)        |
| pitch     | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/pitch.jpg)      |
| range     | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/range.jpg)      |
| pitchSD   | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/pitchSD.jpg)    |
| tempo     | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/tempo.jpg)      |
| volume    | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/volume.jpg)     |
| mode      | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/mode.jpg)       |
| direction | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/direction.jpg)  |

## Processed EMOPIA & VGMIDI
The processed EMOPIA and processed VGMIDI datasets are used to evaluate the error-free rate of music scores generated by fine-tuning the backbone on existing emotion-labeled datasets, so the processed data must match the input format required by the pre-trained backbone.
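One plausible operationalization of such an error-free-rate check is to try parsing each generated ABC string and count the failures; the project's actual checker may differ, so the sketch below is an assumption:

```python
# Sketch: treat a generated ABC string as "error-free" if music21 can parse it.
from music21 import converter

def error_free_rate(abc_strings) -> float:
    ok = 0
    for abc in abc_strings:
        try:
            converter.parse(abc, format="abc")
            ok += 1
        except Exception:
            pass  # malformed notation counts as an error
    return ok / max(len(abc_strings), 1)
```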
 
We found that the processed data were highly imbalanced: the quantities of the Q3 and Q4 labels differed from the other categories by an order of magnitude. To address this, we augmented only the Q3 and Q4 categories by transposing them across 15 different keys. The resulting Rough4Q dataset comprises approximately 521K samples in total, split into training and test sets at a 10:1 ratio.
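As a sketch of what this augmentation could look like, assuming "15 different keys" means the original key plus semitone shifts of ±1 to ±7 (the exact transposition set is not spelled out here):

```python
# Minimal sketch of the transposition augmentation, under the assumption that
# "15 different keys" = the original key plus semitone shifts of -7..-1, +1..+7.
from music21 import converter

def augment_by_transposition(abc_path: str):
    """Yield the original melody plus 14 transposed copies (15 keys total)."""
    score = converter.parse(abc_path)
    yield score
    for semitones in range(-7, 8):
        if semitones == 0:
            continue  # the untransposed original is already yielded
        yield score.transpose(semitones)
```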

## Statistics
| Dataset  | Pie chart                                                                                     | Total  | Train  | Test  |
| :------: | :-------------------------------------------------------------------------------------------: | -----: | -----: | ----: |
| Analysis | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/Analysis.jpg)  | 1278   | 1278   | -     |
| VGMIDI   | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/VGMIDI.jpg)    | 9315   | 8383   | 932   |
| EMOPIA   | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/EMOPIA.jpg)    | 21480  | 19332  | 2148  |
| Rough4Q  | ![](https://www.modelscope.cn/datasets/monetjoe/EMelodyGen/resolve/master/figs/Rough4Q.jpg)   | 520673 | 468605 | 52068 |

## Mirror
The data processor is also included in the ModelScope mirror: <https://www.modelscope.cn/datasets/monetjoe/EMelodyGen>

<!-- ## Cite
```bibtex
@article{Zhou2024EMelodyGen,
  title     = {EMelodyGen: Emotion-Conditioned Melody Generation in ABC Notation},
  author    = {Monan Zhou and Xiaobing Li and Feng Yu and Wei Li},
  month     = {Sep},
  year      = {2024},
  publisher = {GitHub},
  version   = {0.1},
  url       = {https://github.com/monetjoe/EMelodyGen}
}
``` -->