Hallucinations
My testing of the granite-docling model ended prematurely when I noticed that, in the very first sentence of the document's body text, the model had replaced the word "premises" with "Wales", which changed the meaning of the sentence entirely. The original document was a PDF with embedded text, i.e. not scanned.
Could be white text on a white background.
If you select all with Ctrl/Cmd + A and paste the text somewhere, do you see "Wales"?
To my chagrin, the model does indeed hallucinate a lot, even on the provided example command:
docling --to html --to md --pipeline vlm --vlm-model granite_docling "https://arxiv.org/pdf/2501.17887" # accepts files, urls or directories
- Repetitions with errors: 12 (!) identical rows with a broken link: "Repository - https://github.com/D4SD/docling"
- Multiple substitutions in a single sentence:
Original text:
Docling has been already integrated in other popular open-source frameworks (e.g., LangChain, LlamaIndex, spaCy), making it a natural fit for the processing of documents and the development of high-end applications
Processed text:
It has been already integrated in other open-source frameworks (e.g., LangChain, LlamaIndex, spaCy), making it natural for the processing of documents. We present the development ...
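These substitutions are easy to miss by eye. A minimal sketch of how one might surface them automatically, using only `difflib` from the Python standard library to do a word-level diff between source and output (the two strings below are the example from this thread; in practice you would compare text extracted from the PDF against the converter's markdown output):

```python
import difflib

original = (
    "Docling has been already integrated in other popular open-source "
    "frameworks (e.g., LangChain, LlamaIndex, spaCy), making it a natural "
    "fit for the processing of documents and the development of high-end "
    "applications"
)
processed = (
    "It has been already integrated in other open-source frameworks "
    "(e.g., LangChain, LlamaIndex, spaCy), making it natural for the "
    "processing of documents. We present the development ..."
)

def word_diffs(a: str, b: str):
    """Return (op, original_words, processed_words) for each changed span."""
    a_words, b_words = a.split(), b.split()
    matcher = difflib.SequenceMatcher(None, a_words, b_words)
    return [
        (op, a_words[i1:i2], b_words[j1:j2])
        for op, i1, i2, j1, j2 in matcher.get_opcodes()
        if op != "equal"  # keep only replaced, inserted, or deleted spans
    ]

for op, orig_words, proc_words in word_diffs(original, processed):
    print(op, orig_words, "->", proc_words)
```

This flags "Docling" -> "It" and the dropped "popular" as changed spans; it obviously cannot tell a hallucinated substitution from a legitimate rewording, but it makes the substitutions visible for manual review.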
I am still looking for a high-quality VL model for docling. The current solutions fall short at linking footnotes to their references in the main text, merging tables split across two or more consecutive pages, making sense of split cells, tracking header levels, and several other basic document-structure features.
IBM Granite has announced larger, potentially more accurate and feature-rich models, but nothing has appeared on Hugging Face yet.