---
language:
  - en
task_categories:
  - text-generation
  - conversational
  - instruction-following
size_categories:
  - n<1M
tags:
  - youtube
  - transcripts
  - llm-training
  - fine-tuning
  - whisper
  - conversational-ai
---

# YouTube Transcripts Dataset for LLM Training

This dataset contains high-quality, structured transcripts from YouTube videos, specifically formatted for Large Language Model (LLM) training and fine-tuning.

## Dataset Structure

The dataset is optimized for LLM training with the following structure:

### Core Training Fields

- `text`: Cleaned and normalized transcript text
- `instruction`: An instruction prompt for fine-tuning (e.g., "Provide a transcript of the video titled '...'")
- `response`: The transcript content (the same text as `text`, paired with `instruction` for instruction-response training)
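For illustration, a single record containing these fields might look like the following (all values here are hypothetical, not taken from the dataset):

```json
{
  "text": "Welcome back to the channel. Today we are going to look at ...",
  "instruction": "Provide a transcript of the video titled 'Intro to Neural Networks'",
  "response": "Welcome back to the channel. Today we are going to look at ..."
}
```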

### Content Analysis

- `word_count`: Number of words in the transcript
- `character_count`: Number of characters in the transcript
- `estimated_tokens`: Estimated token count for training
- `quality_score`: Quality score from 0 to 1, based on length, structure, and metadata
- `content_type`: Classified content type (`educational`, `conversational`, `instructional`, `narrative`, or `general`)
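As a rough illustration, these statistics could be derived as follows. Note the "about four characters per token" rule of thumb is our assumption, not necessarily the dataset's exact estimation method:

```python
def content_stats(text: str) -> dict:
    """Compute simple content statistics for a transcript.

    The chars-divided-by-4 token estimate is a common heuristic;
    the dataset's actual estimation method may differ.
    """
    words = text.split()
    return {
        "word_count": len(words),
        "character_count": len(text),
        "estimated_tokens": len(text) // 4,  # rough heuristic
    }

stats = content_stats("Hello world, this is a short transcript.")
```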

### Metadata

- `video_id`: YouTube video ID
- `source`: Always `"youtube"`
- `transcription_method`: `"whisper"` (OpenAI Whisper)
- `language`: `"en"` (English)
- `timestamp`: Processing timestamp
- `video_metadata`: Structured video information
  - `title`: Video title
  - `channel`: Channel name
  - `duration_seconds`: Video duration in seconds
  - `duration_formatted`: Human-readable duration (MM:SS, or HH:MM:SS for videos an hour or longer)
  - `upload_date`: Video upload date
  - `view_count`: Number of views
  - `category`: Auto-classified category (education, business, health, technology, etc.)
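The `duration_formatted` convention (MM:SS under an hour, HH:MM:SS otherwise) can be reproduced with a small helper. This is a sketch of the described format, not the dataset's own formatting code:

```python
def format_duration(seconds: int) -> str:
    """Format a duration in seconds as MM:SS, or HH:MM:SS for an hour or more."""
    hours, rem = divmod(int(seconds), 3600)
    minutes, secs = divmod(rem, 60)
    if hours:
        return f"{hours:02d}:{minutes:02d}:{secs:02d}"
    return f"{minutes:02d}:{secs:02d}"

print(format_duration(754))   # a 12-minute, 34-second video
print(format_duration(3671))  # a video just over one hour
```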

## Loading the Dataset

```python
from datasets import load_dataset

# Load the complete dataset from all shard files
dataset = load_dataset("morka17/rtu-tgn", data_files="data_shard_*.jsonl")
train_data = dataset["train"]

# For instruction fine-tuning
for example in train_data:
    instruction = example["instruction"]
    response = example["response"]
    # Use for instruction-following fine-tuning

# For general language modeling
for example in train_data:
    text = example["text"]
    # Use for general language model training
```

## Filtering and Quality Control

```python
# Filter by quality score
high_quality = dataset.filter(lambda x: x["quality_score"] > 0.7)

# Filter by content type
educational_content = dataset.filter(lambda x: x["content_type"] == "educational")

# Filter by length (a range that works well for training)
optimal_length = dataset.filter(lambda x: 1000 <= x["word_count"] <= 5000)

# Filter by category
business_content = dataset.filter(lambda x: x["video_metadata"]["category"] == "business")
```

## Use Cases

### 1. Instruction Fine-tuning

Use the `instruction` and `response` fields to train models to follow instructions.
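A common way to use these fields is to join each pair into a single training string with a prompt template. The "### Instruction / ### Response" template below is one widespread convention, not a format required by this dataset:

```python
def to_training_example(record: dict) -> str:
    """Join an instruction/response pair into one training string.

    The template used here is illustrative; adapt it to whatever
    format your fine-tuning framework expects.
    """
    return (
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['response']}"
    )

example = {
    "instruction": "Provide a transcript of the video titled 'Demo'",
    "response": "Hello and welcome to the demo.",
}
print(to_training_example(example))
```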

### 2. Conversational AI Training

Filter for `content_type == 'conversational'` to build dialogue training data.

### 3. Domain-specific Training

Filter by `video_metadata.category` for domain-specific fine-tuning.

### 4. Quality-based Training

Use `quality_score` to select high-quality training examples.

## Data Quality

- **Text cleaning**: Transcripts are cleaned to remove artifacts, normalize punctuation, and improve readability
- **Quality scoring**: Each entry has a quality score based on length, structure, punctuation, and metadata
- **Content classification**: Entries are automatically classified into content types for targeted training
- **Metadata enrichment**: Rich metadata supports filtering and analysis
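The exact scoring formula is not published. A heuristic in the same spirit, combining length, punctuation, and metadata presence (the weights below are entirely our assumption), might look like:

```python
def quality_score(text: str, metadata: dict) -> float:
    """Illustrative quality score in [0, 1]; not the dataset's actual formula."""
    score = 0.0
    words = text.split()
    if len(words) >= 1000:
        score += 0.4          # long enough to be broadly useful for training
    elif len(words) >= 200:
        score += 0.2
    if any(ch in text for ch in ".!?"):
        score += 0.3          # has sentence-ending punctuation
    if metadata.get("title") and metadata.get("channel"):
        score += 0.3          # core metadata is present
    return min(score, 1.0)
```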

## Sharding

The dataset is automatically sharded into files of at most 10 MB each (`data_shard_XXXX.jsonl`) for efficient loading and processing.
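If you prefer not to use the `datasets` library, the downloaded shards can also be read directly with the standard library, since each line of a shard is one JSON record (a minimal sketch):

```python
import glob
import json

def read_shards(pattern: str = "data_shard_*.jsonl"):
    """Yield records from every shard file matching the glob pattern."""
    for path in sorted(glob.glob(pattern)):
        with open(path, encoding="utf-8") as f:
            for line in f:
                if line.strip():
                    yield json.loads(line)

records = list(read_shards())  # empty if no shards are present locally
```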

## Last Updated

2025-10-26T14:52:26.835885

## License and Usage

Please ensure compliance with YouTube's Terms of Service when using this dataset. This dataset is intended for research and educational purposes in natural language processing and machine learning.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{youtube_transcripts_llm,
  title={YouTube Transcripts Dataset for LLM Training},
  author={Generated via OpenAI Whisper},
  year={2025},
  url={https://huggingface.co/datasets/morka17/rtu-tgn}
}
```