# BBC News Articles Dataset

## Dataset Description
A collection of 2,225 English-language news articles from BBC News, suitable for text classification, summarization, and other NLP tasks.
### Dataset Summary

| Metric | Value |
|---|---|
| Total Articles | 2,225 |
| Unique Articles | 2,092 |
| Columns | `filename`, `article_text` |
| Language | English |
| Source | BBC News |
## Dataset Structure

### Data Fields

| Field | Type | Description |
|---|---|---|
| `filename` | string | Unique identifier/filename for each article |
| `article_text` | string | Full text content of the news article |
### Text Statistics

| Metric | Min | Max | Mean | Median | Std |
|---|---|---|---|---|---|
| Characters | 470 | 25,453 | 2,232 | 1,935 | 1,364 |
| Words | 84 | 4,428 | 379 | 326 | 238 |
| Sentences | 4 | 248 | 19 | 16 | 13 |
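Per-article statistics like these can be re-derived in a few lines. The sketch below uses simple whitespace and punctuation heuristics, which may not match the tokenizer used to produce the table above, so expect small differences:

```python
import re
from statistics import mean, median, stdev

from datasets import load_dataset

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
texts = dataset['train']['article_text']

# Heuristics: whitespace splitting for words, sentence-final punctuation
# for sentences; the published table may use a different tokenizer.
metrics = {
    'Characters': [len(t) for t in texts],
    'Words': [len(t.split()) for t in texts],
    'Sentences': [len(re.findall(r'[.!?]+', t)) for t in texts],
}
for name, values in metrics.items():
    print(f"{name}: min={min(values)} max={max(values)} "
          f"mean={mean(values):.0f} median={median(values):.0f} std={stdev(values):.0f}")
```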
### Vocabulary Statistics

| Metric | Value |
|---|---|
| Total Words (corpus) | 815,279 |
| Unique Words (vocabulary) | 27,205 |
| Vocabulary (excl. stopwords) | 27,070 |
| Lexical Diversity | 0.0334 |
| Avg Words per Article | 366.4 |
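Lexical diversity here is the type-token ratio: unique words divided by total words (27,205 / 815,279 ≈ 0.0334). A minimal sketch of the computation, assuming lowercased alphabetic tokenization (the exact preprocessing behind the published figures is not documented):

```python
from datasets import load_dataset

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")

# Lowercased, purely alphabetic tokens -- an assumption about the
# original preprocessing
tokens = [w for text in dataset['train']['article_text']
          for w in text.lower().split() if w.isalpha()]
vocab = set(tokens)

print(f"Total words: {len(tokens):,}")
print(f"Unique words: {len(vocab):,}")
print(f"Lexical diversity: {len(vocab) / len(tokens):.4f}")  # unique / total
```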
### Top 10 Most Frequent Words

| Word | Frequency |
|---|---|
| said | 7,253 |
| mr | 2,994 |
| would | 2,628 |
| also | 2,156 |
| people | 2,041 |
| new | 1,898 |
| us | 1,818 |
| year | 1,813 |
| one | 1,752 |
| could | 1,534 |
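Frequencies of this kind can be reproduced with `collections.Counter` after filtering a stopword list. Whether the original counts used NLTK's English stopwords and this tokenization is an assumption, so exact numbers may differ slightly:

```python
from collections import Counter

from datasets import load_dataset
from nltk.corpus import stopwords  # requires a one-time nltk.download('stopwords')

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
stop = set(stopwords.words('english'))

# Lowercased alphabetic tokens with stopwords removed
counts = Counter(
    w
    for text in dataset['train']['article_text']
    for w in text.lower().split()
    if w.isalpha() and w not in stop
)
print(counts.most_common(10))
```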

## Usage

### Loading with Hugging Face Datasets

```python
from datasets import load_dataset

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
train_data = dataset['train']
print(train_data[0]['article_text'][:500])
```
### Loading with Pandas

```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
df = dataset['train'].to_pandas()
print(f"Total articles: {len(df)}")
print(df.head())
```
### Text Classification Example

The dataset ships only `filename` and `article_text`, so the snippet below assumes filenames encode a category prefix (e.g. `business/001.txt`). Adjust the label derivation if the filenames follow a different scheme.

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
df = dataset['train'].to_pandas()

# Derive a label from the filename prefix (assumes paths like "business/001.txt")
df['category'] = df['filename'].apply(lambda x: x.split('/')[0])

X_train, X_test, y_train, y_test = train_test_split(
    df['article_text'], df['category'], test_size=0.2, random_state=42
)

# TF-IDF features + multinomial Naive Bayes as a simple baseline
vectorizer = TfidfVectorizer(max_features=5000, stop_words='english')
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = MultinomialNB()
clf.fit(X_train_vec, y_train)
print(f"Accuracy: {clf.score(X_test_vec, y_test):.2%}")
```
### Summarization Example

```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# BART's input window is limited, so truncate long articles before summarizing
article = dataset['train'][0]['article_text']
summary = summarizer(article[:1024], max_length=130, min_length=30)
print(summary[0]['summary_text'])
```
## Suitable Tasks
This dataset is ideal for:
- Text Classification: Categorize articles by topic
- Summarization: Generate article summaries
- Named Entity Recognition: Extract entities from news
- Keyword Extraction: Identify key topics
- Topic Modeling: Discover latent themes (see the sketch after this list)
- Sentiment Analysis: Analyze article tone
- Text Generation: Fine-tune language models
- Information Retrieval: Build search systems
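As one illustration of the topic-modeling use case, here is a minimal sketch using scikit-learn's `LatentDirichletAllocation`. The number of topics (5) is an arbitrary illustrative choice, not a property of the dataset:

```python
from datasets import load_dataset
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
texts = dataset['train']['article_text']

# Bag-of-words counts; LDA expects raw term counts rather than TF-IDF
vectorizer = CountVectorizer(max_features=5000, stop_words='english')
X = vectorizer.fit_transform(texts)

# 5 topics chosen arbitrarily for illustration
lda = LatentDirichletAllocation(n_components=5, random_state=42)
lda.fit(X)

# Print the top 10 words for each discovered topic
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[:-11:-1]]
    print(f"Topic {i}: {', '.join(top)}")
```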
## Data Quality

| Check | Status |
|---|---|
| Empty/null articles | 0 found |
| Encoding issues | Clean (UTF-8) |
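Both checks are straightforward to re-run locally; a minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("Omarrran/BBC_Eng_News_Articles_dataset")
texts = dataset['train']['article_text']

# Empty/null check
empty = sum(1 for t in texts if t is None or not t.strip())
print(f"Empty/null articles: {empty} found")

# Encoding check: lone surrogates or U+FFFD replacement characters
# are the usual symptoms of mojibake in a text column
def has_encoding_issue(text):
    try:
        text.encode('utf-8')
    except UnicodeEncodeError:
        return True
    return '\ufffd' in text

issues = sum(1 for t in texts if t and has_encoding_issue(t))
print(f"Encoding issues: {issues}")
```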
## Limitations

- Dataset is limited to BBC News articles
- May contain temporal bias based on the collection period
- English language only
- News-domain-specific vocabulary
## Citation

```bibtex
@dataset{bbc_news_articles,
  title        = {BBC_Eng_News_Articles_dataset_hnm},
  author       = {Haq Nawaz Malik},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/Omarrran/BBC_Eng_News_Articles_dataset/}}
}
```
## License
This dataset is provided for research and educational purposes under the MIT License.