# Merged Invoice NER Dataset

## Dataset Description

This dataset consists of processed invoice documents intended for Document Layout Analysis and Named Entity Recognition (NER) tasks (e.g., training LayoutLM, LayoutLMv2, LayoutLMv3, or LiLT). It was created by merging multiple invoice datasets and includes the document images, OCR-extracted words, bounding boxes, and NER tags.
## Supported Tasks

- **Token Classification**: extracting entities such as `INVOICE_NUMBER`, `TOTAL_GROSS_WORTH`, and `SELLER_NAME`.
- **Key Information Extraction (KIE)**
## Data Structure

Each example in the dataset contains the following fields:

- `id`: a unique identifier for the document.
- `image`: the PIL image of the invoice.
- `words`: a list of strings (tokens) obtained via OCR.
- `boxes`: a list of bounding boxes corresponding to the words.
  - Format: `[x_min, y_min, x_max, y_max]`
  - Note: these boxes are likely in absolute pixel coordinates (based on the image size) and NOT normalized to 0-1000. You must normalize them before feeding them into models like LayoutLM.
- `ner_tags`: a list of class IDs (integers) in BIO (Beginning, Inside, Outside) format.
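Because the per-token fields above are parallel lists, `words`, `boxes`, and `ner_tags` must stay aligned one-to-one. A quick sanity check (with illustrative toy values, not real dataset rows) might look like:

```python
# Toy example mirroring the schema above; all values are illustrative
example = {
    "id": "doc_0001",
    "words": ["Invoice", "No:", "12345"],
    "boxes": [[50, 40, 160, 70], [170, 40, 220, 70], [230, 40, 330, 70]],
    "ner_tags": [0, 0, 1],  # e.g. O, O, B-INVOICE_NUMBER
}

def is_aligned(ex):
    # words, boxes, and ner_tags each carry one entry per OCR token
    return len(ex["words"]) == len(ex["boxes"]) == len(ex["ner_tags"])

print(is_aligned(example))  # True for a well-formed example
```

Running a check like this over a split before training catches truncated or corrupted examples early.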
## Important Usage Warnings

### 1. Bounding Box Normalization Required

The boxes in this dataset are likely stored in absolute pixel values (e.g., 0-2500), while models like LayoutLM expect boxes normalized to a 0-1000 scale. When loading the dataset, normalize the boxes relative to the image size:
```python
from datasets import load_dataset

def normalize_box(box, width, height):
    # Scale absolute pixel coordinates to the 0-1000 range expected by LayoutLM
    return [
        int(1000 * (box[0] / width)),
        int(1000 * (box[1] / height)),
        int(1000 * (box[2] / width)),
        int(1000 * (box[3] / height)),
    ]

def preprocess(example):
    w, h = example["image"].size
    # Normalize boxes to 0-1000
    example["boxes"] = [normalize_box(box, w, h) for box in example["boxes"]]
    return example

dataset = load_dataset("your-username/dataset-name")
dataset = dataset.map(preprocess)
```
### 2. Mixed OCR Sources

This dataset was created by merging sources that may have used different OCR engines (e.g., Tesseract, EasyOCR, or commercial APIs). As a result, documents may differ slightly in bounding-box tightness and word-splitting logic, so make sure your training pipeline is robust to these variations in OCR quality.
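One defensive step is to filter the raw OCR output before tokenization. The sketch below assumes the `[x_min, y_min, x_max, y_max]` pixel format described above; `clean_example` is a hypothetical helper, not part of the dataset:

```python
def clean_example(words, boxes, width, height):
    """Drop empty OCR tokens and clamp boxes to the image bounds,
    guarding against minor inconsistencies between OCR engines."""
    kept_words, kept_boxes = [], []
    for word, (x0, y0, x1, y1) in zip(words, boxes):
        if not word.strip():
            continue  # some engines emit blank tokens
        # Clamp coordinates into the image
        x0, x1 = max(0, min(x0, width)), max(0, min(x1, width))
        y0, y1 = max(0, min(y0, height)), max(0, min(y1, height))
        if x1 > x0 and y1 > y0:  # discard degenerate boxes
            kept_words.append(word)
            kept_boxes.append([x0, y0, x1, y1])
    return kept_words, kept_boxes

words, boxes = clean_example(
    ["Total", "", "Gross"],
    [[10, 10, 90, 30], [0, 0, 5, 5], [100, 10, 3000, 30]],
    width=2480, height=3508,
)
# -> (["Total", "Gross"], [[10, 10, 90, 30], [100, 10, 2480, 30]])
```

Applying this before box normalization also prevents out-of-range values after scaling to 0-1000.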
### 3. Missing I- Tags for Some Entities

The label map includes B- (Beginning) tags for all entities, but some entities (such as `BALANCE_DUE`, `TOTAL_VAT`, and `IBAN`) have no corresponding I- (Inside) tag in the schema. If a value spans multiple words (e.g., "Total VAT"), the model may only be able to label the first word, or you may need to map the subsequent words to O or a generic tag during training.
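One way to handle this fallback when assigning labels to continuation tokens is sketched below; the label names and `continuation_label` helper are illustrative, not the dataset's actual label map:

```python
# Illustrative subset of a label map where some entities lack I- tags
label2id = {"O": 0, "B-INVOICE_NUMBER": 1, "I-INVOICE_NUMBER": 2, "B-TOTAL_VAT": 3}

def continuation_label(b_label, label2id):
    # For tokens continuing an entity, prefer the matching I- tag;
    # fall back to O when the schema has no I- tag for that entity
    return label2id.get("I-" + b_label[2:], label2id["O"])

print(continuation_label("B-INVOICE_NUMBER", label2id))  # 2 (I-INVOICE_NUMBER exists)
print(continuation_label("B-TOTAL_VAT", label2id))       # 0 (no I-TOTAL_VAT, map to O)
```

The same rule can be reused when a fast tokenizer splits a word into subwords and the continuation pieces need labels.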
### 4. Privacy / PII

This dataset contains invoice data. While processed, users should handle the data with care regarding Personally Identifiable Information (PII) such as names, addresses, and financial details.