The Intelligent Document Processing (IDP) Leaderboard is an open benchmark that compares document AI models across OCR, table extraction, key information extraction, and visual QA. Nanonets OCR2+ ranks #1 overall.
| # | Model | Overall | OlmOCR | OmniDoc | IDP | Size |
|---|-------|---------|--------|---------|-----|------|
| 1 | Nanonets OCR2+ | 81.8 | 82.2 | 89.5 | 73.8 | |
| 2 | Gemini-3-Pro | 81.4 | 73.5 | 88.8 | 81.8 | |
| 3 | Claude Sonnet 4.6 | 80.8 | 74.4 | 86.9 | 81.2 | |
| 4 | Claude Opus 4.6 | 80.3 | 73.9 | 85.9 | 81.1 | |
| 5 | Gemini-3-Flash | 79.9 | 69.2 | 90.1 | 80.5 | |
| 6 | GPT-5.2 | 79.2 | 72.2 | 88.0 | 77.4 | |
| 7 | GPT-5-Mini | 70.8 | 56.7 | 82.5 | 73.3 | |
| 8 | GPT-4.1 | 70.0 | 55.5 | 79.9 | 74.7 | |
| 9 | Claude Haiku 4.5 | 69.6 | 56.2 | 79.6 | 72.9 | |
| 10 | Ministral-8B | 68.0 | 57.8 | 78.3 | 67.9 | 8B |

What the benchmarks measure

OlmOCR

Optical character recognition accuracy across diverse document types and layouts.

OmniDoc

End-to-end document understanding covering structure, tables, and formatting.

IDP

Extraction of structured fields from invoices, receipts, and forms.

Overall

The mean of all benchmark scores, giving a single measure of document AI capability.

Methodology

The overall score is the unweighted mean of the individual benchmark scores. The full benchmark suite is open source and fully reproducible.
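As a quick sanity check, the overall score can be reproduced directly from the published sub-scores. A minimal sketch, using the top-ranked model's OlmOCR, OmniDoc, and IDP values from the table above (the dictionary layout here is illustrative, not the leaderboard's actual data format):

```python
# Overall = unweighted mean of the individual benchmark scores,
# illustrated with Nanonets OCR2+'s published sub-scores.
scores = {"OlmOCR": 82.2, "OmniDoc": 89.5, "IDP": 73.8}

overall = round(sum(scores.values()) / len(scores), 1)
print(overall)  # → 81.8, matching the leaderboard's Overall column
```

The same check holds for every row in the table, confirming that the Overall column is a simple arithmetic mean with no hidden weighting.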

View Full Leaderboard

See all models, detailed breakdowns, and methodology on the IDP Leaderboard.