Update README.md
README.md
tags:
- Urdu
language:
- ur
pretty_name: 'Munch Hashed Index'
---

# Munch Hashed Index - Lightweight Audio Reference Dataset

[Original dataset: humair025/Munch](https://huggingface.co/datasets/humair025/Munch)
[This index: humair025/hashed_data](https://huggingface.co/datasets/humair025/hashed_data)

## Overview

**Munch Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch Urdu TTS Dataset](https://huggingface.co/datasets/humair025/Munch). Instead of storing 1.27 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:

- ✅ **Fast duplicate detection** across 4.17 million audio samples
- ✅ **Efficient dataset exploration** without downloading terabytes
- ✅ **Quick metadata queries** (voice distribution, text stats, etc.)
- ✅ **Selective audio retrieval** - download only what you need
- ✅ **Storage efficiency** - 99.92% space reduction (1.27 TB → ~1 GB)
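
The sketch below shows what hash-based duplicate detection can look like in practice. It is illustrative rather than official tooling, and assumes the `audio_bytes_hash`, `id`, `voice`, and `parquet_file_name` columns shown in the record schema later in this README:

```python
import pandas as pd
from datasets import load_dataset

# Load the lightweight index (metadata + hashes only, no audio bytes).
df = pd.DataFrame(load_dataset("humair025/hashed_data", split="train"))

# Rows sharing an audio_bytes_hash contain byte-identical audio.
dupes = df[df.duplicated("audio_bytes_hash", keep=False)]
print(f"{len(dupes):,} rows share their hash with at least one other row")

# Inspect the most-repeated clip (assumes at least one duplicate exists).
top_hash = dupes["audio_bytes_hash"].value_counts().idxmax()
print(df.loc[df["audio_bytes_hash"] == top_hash, ["id", "voice", "parquet_file_name"]])
```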

### Related Datasets

- **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/Munch) - Full audio dataset (1.27 TB)
- **This Index**: [humair025/hashed_data](https://huggingface.co/datasets/humair025/hashed_data) - Hashed reference (~1 GB)
- **Munch-1 (v2)**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1) - Newer version (3.28 TB, 3.86M samples)
- **Munch-1 Index**: [humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Index for v2

---

### The Challenge
The original [Munch dataset](https://huggingface.co/datasets/humair025/Munch) contains:
- **4,167,500 audio-text pairs**
- **1.27 TB total size**
- **~8,300 separate parquet files**

This makes it difficult to:
- ❌ Quickly check if specific audio exists

This hashed index provides:
- ✅ **All metadata** (text, voice, timestamps) without audio bytes
- ✅ **SHA-256 hashes** for every audio file (unique fingerprint)
- ✅ **File references** (which parquet contains each audio)
- ✅ **Fast queries** - search 4.17M records in seconds
- ✅ **Retrieve on demand** - download only specific audio when needed
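
As a sketch of the existence-check idea behind "retrieve on demand": hash your local audio bytes and look them up in the index. This assumes the index hashes raw audio bytes exactly as they are stored in the original parquet files; `sample.pcm` is a hypothetical local file:

```python
import hashlib

import pandas as pd
from datasets import load_dataset

df = pd.DataFrame(load_dataset("humair025/hashed_data", split="train"))

# Hash local bytes the same way the index does (assumption: raw stored bytes).
with open("sample.pcm", "rb") as f:  # hypothetical file
    digest = hashlib.sha256(f.read()).hexdigest()

match = df[df["audio_bytes_hash"] == digest]
if match.empty:
    print("Audio not found in the Munch index")
else:
    print(match[["id", "voice", "parquet_file_name"]])
```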

---

Install the dependencies and load the index:

```bash
pip install datasets pandas
```

```python
from datasets import load_dataset
import pandas as pd

# Load the entire hashed index (fast - only ~1 GB!)
ds = load_dataset("humair025/hashed_data", split="train")
df = pd.DataFrame(ds)
```
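
Continuing from the snippet above, a few of the metadata queries the index makes cheap. The `voice` and `audio_size_bytes` columns appear in the record schema below; the transcript column name `text` is an assumption based on the "all metadata (text, voice, timestamps)" description:

```python
# Voice distribution across all 4.17M rows
print(df["voice"].value_counts())

# Audio size statistics (bytes)
print(df["audio_size_bytes"].describe())

# Text length statistics -- assumes the transcript column is named 'text'
print(df["text"].str.len().describe())
```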

Tail of the sample record (earlier fields elided):

```python
    'voice': 'ash',
    'audio_bytes_hash': 'a3f7b2c8e9d1f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9',
    'audio_size_bytes': 52340,
    'timestamp': '2025-12-03T13:03:14.123456',
    'error': None
}
```
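
Given a record like the one above plus the corresponding audio bytes fetched from the original dataset, an integrity check is a one-liner with `hashlib`; a minimal sketch:

```python
import hashlib

def matches_index(record: dict, audio_bytes: bytes) -> bool:
    """True if downloaded audio bytes match the record's stored SHA-256."""
    return hashlib.sha256(audio_bytes).hexdigest() == record["audio_bytes_hash"]
```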

---

| Metric | Original Dataset | Hashed Index | Reduction |
|--------|------------------|--------------|-----------|
| Total Size | 1.27 TB | ~1 GB | **99.92%** |
| Records | 4,167,500 | 4,167,500 | Same |
| Files | ~8,300 parquet | Consolidated | **~8,300× fewer** |
| Download Time (100 Mbps) | ~28 hours | ~90 seconds | **~1,100×** |
| Load Time | Minutes-Hours | Seconds | **~100×** |
| Memory Usage | Cannot fit in RAM | ~2-3 GB RAM | **Fits easily** |

### Content Statistics

```
Dataset Overview:
  Total Records:  4,167,500
  Total Files:    ~8,300 parquet files
  Voices:         13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
  Language:       Urdu (primary)
  Avg Audio Size: ~50-60 KB per sample
  Avg Duration:   ~3-5 seconds per sample
  Total Duration: ~3,500-5,800 hours of audio
```

---

Timing the index load (excerpt from the benchmark script):

```python
import time

# Load index
start = time.time()
ds = load_dataset("humair025/hashed_data", split="train")
df = pd.DataFrame(ds)
print(f"Load time: {time.time() - start:.2f}s")

# Query by hash
```
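
The excerpt ends before the lookup itself; a minimal sketch of what the timed hash query might look like, reusing the hash from the sample record above:

```python
target = "a3f7b2c8e9d1f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9"

start = time.time()
row = df[df["audio_bytes_hash"] == target]
print(f"Hash lookup: {(time.time() - start)*1000:.2f}ms")
```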

**Expected Performance**:
- Load full dataset: 10-30 seconds
- Hash lookup: < 10 milliseconds
- Voice filter: < 50 milliseconds
- Full dataset scan: < 5 seconds
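
A full boolean-mask scan of 4.17M rows may not hit the sub-10 ms figure on every machine. One common trick, sketched below, is to pay a one-time indexing cost and amortize it over many lookups; note that duplicate hashes will return multiple rows:

```python
# One-time cost: index the DataFrame by hash for fast repeated lookups.
by_hash = df.set_index("audio_bytes_hash", drop=False).sort_index()

row = by_hash.loc[target]  # Series if the hash is unique, DataFrame if it repeats
```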

---

Selective audio retrieval - query the index first, then download only the parquet files you need (excerpt):

```python
# 1. Query the index (fast)
df = pd.DataFrame(load_dataset("humair025/hashed_data", split="train"))
target_rows = df[df['voice'] == 'ash'].head(100)

# 2. Get unique parquet files
```
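
The excerpt stops at step 2; below is a hedged sketch of how steps 2-3 might continue using `huggingface_hub`. It assumes `parquet_file_name` matches the file's path inside the humair025/Munch repository:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# 2. Get unique parquet files referenced by the selected rows
files = target_rows["parquet_file_name"].unique()

# 3. Download only those parquet files and keep the requested rows
parts = []
for fname in files:
    local_path = hf_hub_download(
        repo_id="humair025/Munch",
        filename=fname,  # assumption: stored under this path in the repo
        repo_type="dataset",
    )
    part = pd.read_parquet(local_path)
    parts.append(part[part["id"].isin(target_rows["id"])])

audio_df = pd.concat(parts, ignore_index=True)
```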

---

If you use this dataset in your research, please cite both the original dataset and this index.

### BibTeX

```bibtex
@dataset{munch_hashed_index_2025,
  title={Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS},
  author={Munir, Humair},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/hashed_data}},
  note={Index of humair025/Munch dataset with SHA-256 audio hashes}
}

@dataset{munch_urdu_tts_2025,
  title={Munch: Large-Scale Urdu Text-to-Speech Dataset},
  author={Munir, Humair},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/Munch}}
}
```

### APA Format

```
Munir, H. (2025). Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS
[Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data

Munir, H. (2025). Munch: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
Hugging Face. https://huggingface.co/datasets/humair025/Munch
```

### MLA Format

```
Munir, Humair. "Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS."
Hugging Face, 2025, https://huggingface.co/datasets/humair025/hashed_data.

Munir, Humair. "Munch: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
https://huggingface.co/datasets/humair025/Munch.
```

---

## Important Links

- [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/Munch) - Full 1.27 TB audio
- [**This Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data) - Lightweight reference
- [**Munch-1 (v2)**](https://huggingface.co/datasets/humair025/munch-1) - Newer version (3.28 TB)
- [**Munch-1 Index**](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Index for v2
- [**Discussions**](https://huggingface.co/datasets/humair025/hashed_data/discussions) - Ask questions
- [**Report Issues**](https://huggingface.co/datasets/humair025/hashed_data/discussions) - Bug reports

---

## FAQ

### Q: Why use hashes instead of audio?
**A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes vs ~50 KB per audio clip. This enables duplicate detection and fast queries without storing massive audio files.

### Q: Can I reconstruct audio from hashes?
**A:** No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the [Munch dataset](https://huggingface.co/datasets/humair025/Munch) using the file reference provided.

### Q: How do I get the actual audio?
**A:** Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the [original dataset](https://huggingface.co/datasets/humair025/Munch). See the examples above.

### Q: Is this dataset complete?
**A:** Yes, this index covers all 4,167,500 rows across all ~8,300 parquet files from the original Munch dataset.

### Q: What's the difference between this and the Munch-1 Index?
**A:** This indexes the original Munch dataset (1.27 TB, 4.17M samples). The [Munch-1 Index](https://huggingface.co/datasets/humair025/hashed_data_munch_1) indexes the newer Munch-1 dataset (3.28 TB, 3.86M samples).

### Q: Can I contribute?
**A:** Yes! Help verify hashes, report inconsistencies, or suggest improvements via discussions.

---

## Acknowledgments

- **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/Munch)
- **TTS Generation**: OpenAI-compatible models
- **Voices**: 13 high-quality voices (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- **Infrastructure**: HuggingFace Datasets platform
- **Hashing**: SHA-256 cryptographic hash function

## Version History

- **v1.0.0** (December 2025): Initial release
  - Processed all ~8,300 parquet files
  - 4,167,500 audio samples indexed
  - SHA-256 hashes computed for all audio
  - ~99.92% space reduction achieved

---

**Last Updated**: December 2025

**Status**: ✅ Complete

---