humair025 committed
Commit b1a23d7 · verified · 1 Parent(s): b2c9a4a

Update README.md

Files changed (1):
  1. README.md +57 -49

README.md CHANGED
@@ -8,30 +8,32 @@ tags:
   - Urdu
 language:
   - ur
- pretty_name: ' Munch Hashed Index '
+ pretty_name: 'Munch Hashed Index'
 ---
 # Munch Hashed Index - Lightweight Audio Reference Dataset
 
 [![Original Dataset](https://img.shields.io/badge/🤗%20Original-Munch-blue)](https://huggingface.co/datasets/humair025/Munch)
 [![Hashed Index](https://img.shields.io/badge/🤗%20Index-hashed__data-green)](https://huggingface.co/datasets/humair025/hashed_data)
- [![Size](https://img.shields.io/badge/Size-~500MB-brightgreen)]()
- [![Original Size](https://img.shields.io/badge/Original-1+TB-orange)]()
- [![Space Saved](https://img.shields.io/badge/Space%20Saved-99.99%25-success)]()
+ [![Size](https://img.shields.io/badge/Size-~1GB-brightgreen)]()
+ [![Original Size](https://img.shields.io/badge/Original-1.27TB-orange)]()
+ [![Space Saved](https://img.shields.io/badge/Space%20Saved-99.92%25-success)]()
 
 ## 📖 Overview
 
- **Munch Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch Urdu TTS Dataset](https://huggingface.co/datasets/humair025/Munch). Instead of storing 2.17 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:
+ **Munch Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch Urdu TTS Dataset](https://huggingface.co/datasets/humair025/Munch). Instead of storing 1.27 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:
 
- - ✅ **Fast duplicate detection** across 2.5M+ audio samples
+ - ✅ **Fast duplicate detection** across 4.17 million audio samples
 - ✅ **Efficient dataset exploration** without downloading terabytes
 - ✅ **Quick metadata queries** (voice distribution, text stats, etc.)
 - ✅ **Selective audio retrieval** - download only what you need
- - ✅ **Storage efficiency** - 99.99% space reduction (2.17 TB → ~150 MB)
+ - ✅ **Storage efficiency** - 99.92% space reduction (1.27 TB → ~1 GB)
 
 ### 🔗 Related Datasets
 
- - **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/Munch) - Full audio dataset (1+ TB)
- - **This Index**: [humair025/hashed_data](https://huggingface.co/datasets/humair025/hashed_data) - Hashed reference (~500 MB)
+ - **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/Munch) - Full audio dataset (1.27 TB)
+ - **This Index**: [humair025/hashed_data](https://huggingface.co/datasets/humair025/hashed_data) - Hashed reference (~1 GB)
+ - **Munch-1 (v2)**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1) - Newer version (3.28 TB, 3.86M samples)
+ - **Munch-1 Index**: [humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Index for v2
 
 ---
 
@@ -39,9 +41,9 @@ pretty_name: ' Munch Hashed Index '
 
 ### The Challenge
 The original [Munch dataset](https://huggingface.co/datasets/humair025/Munch) contains:
- - 📊 **2.5M+ audio-text pairs**
- - 💾 **2.17 TB total size**
- - 📦 **5,000+ separate parquet files**
+ - 📊 **4,167,500 audio-text pairs**
+ - 💾 **1.27 TB total size**
+ - 📦 **~8,300 separate parquet files**
 
 This makes it difficult to:
 - ❌ Quickly check if specific audio exists
@@ -54,7 +56,7 @@ This hashed index provides:
 - ✅ **All metadata** (text, voice, timestamps) without audio bytes
 - ✅ **SHA-256 hashes** for every audio file (unique fingerprint)
 - ✅ **File references** (which parquet contains each audio)
- - ✅ **Fast queries** - search 2.5M records in seconds
+ - ✅ **Fast queries** - search 4.17M records in seconds
 - ✅ **Retrieve on demand** - download only specific audio when needed
 
 ---
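
Because the index keeps all metadata, the "quick metadata queries" promised above are plain pandas one-liners. A minimal sketch (the `voice` and `audio_size_bytes` fields follow the record schema documented further down):

```python
from datasets import load_dataset
import pandas as pd

# Load the index and answer distribution questions without touching audio.
df = pd.DataFrame(load_dataset("humair025/hashed_data", split="train"))
print(df["voice"].value_counts())                  # samples per voice
print(df["audio_size_bytes"].sum() / 1e12, "TB")   # audio volume referenced
```
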
@@ -73,7 +75,7 @@ pip install datasets pandas
 from datasets import load_dataset
 import pandas as pd
 
- # Load the entire hashed index (fast - only ~150 MB!)
+ # Load the entire hashed index (fast - only ~1 GB!)
 ds = load_dataset("humair025/hashed_data", split="train")
 df = pd.DataFrame(ds)
 
@@ -190,7 +192,7 @@ wav_io = pcm16_to_wav(audio_bytes)
 'voice': 'ash',
 'audio_bytes_hash': 'a3f7b2c8e9d1f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9',
 'audio_size_bytes': 52340,
- 'timestamp': '2024-12-03T13:03:14.123456',
+ 'timestamp': '2025-12-03T13:03:14.123456',
 'error': None
 }
 ```
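
The hunk above shows the record schema and a call site for `pcm16_to_wav`. Two hedged sketches of how those pieces fit together: the hash check follows directly from the `audio_bytes_hash` field, while the WAV helper is only a stand-in for the one defined in the full README (mono 16-bit PCM at 24 kHz is an assumption here, not a documented parameter):

```python
import hashlib
import io
import wave

def verify_audio(audio_bytes: bytes, expected_hash: str) -> bool:
    """True if the bytes match the SHA-256 hex digest stored in the index."""
    return hashlib.sha256(audio_bytes).hexdigest() == expected_hash

def pcm16_to_wav(audio_bytes: bytes, sample_rate: int = 24000) -> io.BytesIO:
    """Wrap raw 16-bit mono PCM in a WAV container (parameters assumed)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)            # mono: an assumption
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(sample_rate)  # 24 kHz: an assumption
        wav.writeframes(audio_bytes)
    buf.seek(0)
    return buf

# Smoke test with dummy bytes; real audio bytes come from the Munch parquets.
pcm = b"\x00\x00" * 240
print(verify_audio(pcm, hashlib.sha256(pcm).hexdigest()))  # True
```
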
@@ -265,21 +267,24 @@ print(f"Similar audio candidates: {len(similar)}")
 
 | Metric | Original Dataset | Hashed Index | Reduction |
 |--------|------------------|--------------|-----------|
- | Total Size | 2.17 TB | ~500 MB | **99%** |
- | Download Time (100 Mbps) | ~X hours | ~12 seconds | **Thousand Time×** |
- | Load Time | Minutes | Seconds | **~100×** |
- | Memory Usage | Cannot fit in RAM | Fit | **Thousands X×** |
+ | Total Size | 1.27 TB | ~1 GB | **99.92%** |
+ | Records | 4,167,500 | 4,167,500 | Same |
+ | Files | ~8,300 parquet | Consolidated | **~8,300× fewer** |
+ | Download Time (100 Mbps) | ~28 hours | ~90 seconds | **~1,100×** |
+ | Load Time | Minutes-Hours | Seconds | **~100×** |
+ | Memory Usage | Cannot fit in RAM | ~2-3 GB RAM | **Fits easily** |
 
 ### Content Statistics
 
 ```
 📊 Dataset Overview:
- Total Records: ~2,500,000
- Unique Audio: [Run analysis to determine]
+ Total Records: 4,167,500
+ Total Files: ~8,300 parquet files
 Voices: 13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- Languages: Urdu (primary), Mixed (some samples)
+ Language: Urdu (primary)
 Avg Audio Size: ~50-60 KB per sample
 Avg Duration: ~3-5 seconds per sample
+ Total Duration: ~3,500-5,800 hours of audio
 ```
 
 ---
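
The new Total Duration line is just the record count multiplied by the quoted 3-5 second average; a quick check of the arithmetic:

```python
records = 4_167_500
low = records * 3 / 3600   # 3 s average -> ~3,473 hours
high = records * 5 / 3600  # 5 s average -> ~5,788 hours
print(f"~{low:,.0f} to ~{high:,.0f} hours")
```
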
@@ -403,7 +408,8 @@ import time
 
 # Load index
 start = time.time()
- df = pd.read_parquet('hashed_0_39.parquet')
+ ds = load_dataset("humair025/hashed_data", split="train")
+ df = pd.DataFrame(ds)
 print(f"Load time: {time.time() - start:.2f}s")
 
 # Query by hash
@@ -418,7 +424,7 @@ print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")
 ```
 
 **Expected Performance**:
- - Load single file: < 1 second
+ - Load full dataset: 10-30 seconds
 - Hash lookup: < 10 milliseconds
 - Voice filter: < 50 milliseconds
 - Full dataset scan: < 5 seconds
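
Those figures assume a fresh boolean-mask scan per query. For repeated lookups it can help to key the frame by hash once, making each probe a dictionary-style lookup; a sketch reusing `df` from the benchmark and the example hash from the record schema:

```python
# Build a hash-keyed index once; subsequent probes avoid a full scan.
by_hash = df.set_index("audio_bytes_hash")

example = "a3f7b2c8e9d1f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9"
if example in by_hash.index:
    print(by_hash.loc[example, ["voice", "parquet_file_name"]])
```
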
@@ -431,7 +437,7 @@ print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")
 
 ```python
 # 1. Query the index (fast)
- df = pd.read_parquet('hashed_index.parquet')
+ df = pd.DataFrame(load_dataset("humair025/hashed_data", split="train"))
 target_rows = df[df['voice'] == 'ash'].head(100)
 
 # 2. Get unique parquet files
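
The later steps of this workflow fall outside the hunk; a hedged sketch of how they might look, using the standard `huggingface_hub` download call and the `parquet_file_name`/`id` fields from the index schema (whether the stored names are repo-relative paths is an assumption):

```python
from huggingface_hub import hf_hub_download
import pandas as pd

# 3. Download only the shards that contain the rows selected above.
for fname in target_rows["parquet_file_name"].unique():
    path = hf_hub_download(
        repo_id="humair025/Munch",  # original audio dataset
        filename=fname,             # assumed to be a repo-relative path
        repo_type="dataset",
    )
    shard = pd.read_parquet(path)

    # 4. Keep only the target rows; the audio bytes live in these shards.
    wanted = shard[shard["id"].isin(target_rows["id"])]
```
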
@@ -463,45 +469,41 @@ If you use this dataset in your research, please cite both the original dataset
 ### BibTeX
 
 ```bibtex
- @dataset{munch_hashed_index_2024,
+ @dataset{munch_hashed_index_2025,
   title={Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS},
- author={humair025},
+ author={Munir, Humair},
   year={2025},
   publisher={Hugging Face},
- howpublished={
- \url{https://huggingface.co/datasets/humair025/hashed_data}
- },
+ howpublished={\url{https://huggingface.co/datasets/humair025/hashed_data}},
   note={Index of humair025/Munch dataset with SHA-256 audio hashes}
 }
 
- @dataset{munch_urdu_tts_2024,
+ @dataset{munch_urdu_tts_2025,
   title={Munch: Large-Scale Urdu Text-to-Speech Dataset},
- author={humair025},
+ author={Munir, Humair},
   year={2025},
   publisher={Hugging Face},
- howpublished={
- \url{https://huggingface.co/datasets/humair025/Munch}
- }
+ howpublished={\url{https://huggingface.co/datasets/humair025/Munch}}
 }
 ```
 
 ### APA Format
 
 ```
- humair025. (2024). Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS
+ Munir, H. (2025). Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS
 [Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data
 
- humair025. (2024). Munch: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
+ Munir, H. (2025). Munch: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
 Hugging Face. https://huggingface.co/datasets/humair025/Munch
 ```
 
 ### MLA Format
 
 ```
- humair025. "Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS."
- Hugging Face, 2024, https://huggingface.co/datasets/humair025/hashed_data.
+ Munir, Humair. "Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS."
+ Hugging Face, 2025, https://huggingface.co/datasets/humair025/hashed_data.
 
- humair025. "Munch: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2024,
+ Munir, Humair. "Munch: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
 https://huggingface.co/datasets/humair025/Munch.
 ```
 
@@ -543,8 +545,10 @@ Under the terms:
 
 ## 🔗 Important Links
 
- - 🎧 [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/Munch) - Full 1.+ TB audio
+ - 🎧 [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/Munch) - Full 1.27 TB audio
 - 📊 [**This Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data) - Lightweight reference
+ - 🔄 [**Munch-1 (v2)**](https://huggingface.co/datasets/humair025/munch-1) - Newer version (3.28 TB)
+ - 📇 [**Munch-1 Index**](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Index for v2
 - 💬 [**Discussions**](https://huggingface.co/datasets/humair025/hashed_data/discussions) - Ask questions
 - 🐛 [**Report Issues**](https://huggingface.co/datasets/humair025/hashed_data/discussions) - Bug reports
 
@@ -553,7 +557,7 @@ Under the terms:
 ## ❓ FAQ
 
 ### Q: Why use hashes instead of audio?
- **A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes vs ~50kb-12MB per audio. This enables duplicate detection and fast queries without storing massive audio files.
+ **A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes vs ~50KB per audio. This enables duplicate detection and fast queries without storing massive audio files.
 
 ### Q: Can I reconstruct audio from hashes?
 **A:** No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the [Munch dataset](https://huggingface.co/datasets/humair025/Munch) using the file reference provided.
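
The duplicate detection this answer refers to is a one-liner once the index is loaded; a sketch, with `df` as in the Quick Start:

```python
# Rows whose audio hash appears more than once carry byte-identical audio.
dupes = df[df.duplicated("audio_bytes_hash", keep=False)]
print(f"{len(dupes):,} rows share their hash with at least one other row")
```
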
@@ -565,7 +569,10 @@ Under the terms:
 **A:** Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the [original dataset](https://huggingface.co/datasets/humair025/Munch). See examples above.
 
 ### Q: Is this dataset complete?
- **A:** This index is continuously updated as new batches are processed. Check the file list to see coverage.
+ **A:** Yes, this index covers all 4,167,500 rows across all ~8,300 parquet files from the original Munch dataset.
+ 
+ ### Q: What's the difference between this and Munch-1 Index?
+ **A:** This indexes the original Munch dataset (1.27 TB, 4.17M samples). The [Munch-1 Index](https://huggingface.co/datasets/humair025/hashed_data_munch_1) indexes the newer Munch-1 dataset (3.28 TB, 3.86M samples).
 
 ### Q: Can I contribute?
 **A:** Yes! Help verify hashes, report inconsistencies, or suggest improvements via discussions.
@@ -576,7 +583,7 @@ Under the terms:
 
 - **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/Munch)
 - **TTS Generation**: OpenAI-compatible models
- - **Voices**: 13 high-quality voices
+ - **Voices**: 13 high-quality voices (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
 - **Infrastructure**: HuggingFace Datasets platform
 - **Hashing**: SHA-256 cryptographic hash function
@@ -584,16 +591,17 @@ Under the terms:
 
 ## 📝 Version History
 
- - **v1.0.0** (December 2025): Initial release with hash index
-   - Processed [X] out of N parquet files
-   - [Y] unique audio hashes identified
-   - [Z]% deduplication achieved
+ - **v1.0.0** (December 2025): Initial release
+   - Processed all ~8,300 parquet files
+   - 4,167,500 audio samples indexed
+   - SHA-256 hashes computed for all audio
+   - ~99.92% space reduction achieved
 
 ---
 
 **Last Updated**: December 2025
 
- **Status**: 🔄 Actively Processing (check file count for latest progress)
+ **Status**: ✅ Complete
 
 ---