About iqtest.online
The science and people behind our cognitive assessment
Bring rigorous IQ testing out of the clinic
A validated cognitive assessment used to mean an expensive clinical visit. We built iqtest.online so anyone, anywhere, can take a research-grounded IQ test in 20 minutes and walk away with results they can actually trust — not a gamified score designed to sell you a certificate.
Built by people who care about measurement
iqtest.online started with a simple observation: most online IQ tests are either gamified entertainment or crude conversion funnels. Neither respects the century of psychometric research behind real intelligence testing.
We spent over a year calibrating our question pool against established instruments — Raven's Progressive Matrices, WAIS-IV subtests, Cattell Culture Fair. Every item was validated on thousands of respondents before going into the live test, and we continue to retire biased or low-signal items as new data comes in.
Today, people in 180+ countries take our assessment in 18 languages. We publish our methodology openly, iterate in public, and measure ourselves by one standard: does the score correlate with the thing it claims to measure?

Ivan Ivanov
Founder & Product Lead
Ivan leads product and engineering at iqtest.online. He works closely with psychometricians and cognitive researchers to ensure every item on the test is grounded in peer-reviewed methodology, and that scoring maps honestly to real population percentiles — not inflated vanity numbers.
- 10+ years building data-driven web products
- Collaborates with psychometric researchers on item validation
- Led the 18-language localization and norming effort
“A good IQ test is invisible — you notice the questions, not the platform. Our job is to get out of the way and let the science do the work.”
How we build a test you can trust
Research-backed items
Every question is derived from peer-reviewed frameworks covering fluid reasoning, working memory, pattern recognition, and spatial rotation.
Statistical calibration
We run item response theory (IRT) analysis on each question to measure its difficulty and discrimination before it goes live.
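To illustrate what IRT calibration measures, here is a minimal sketch of the two-parameter logistic (2PL) model, a standard IRT form in which each item gets a difficulty and a discrimination parameter. The parameter values below are hypothetical examples, not figures from the live item pool.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that a respondent with ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: moderate difficulty (b=0.5), good discrimination (a=1.4).
# A respondent of exactly average ability (theta=0) succeeds about a third
# of the time, while one whose ability matches the difficulty (theta=b)
# succeeds exactly half the time.
print(round(p_correct(0.0, 1.4, 0.5), 3))  # 0.332
print(p_correct(0.5, 1.4, 0.5))            # 0.5
```

In calibration, these parameters are estimated from respondent data; items whose discrimination is too low (the curve is nearly flat, so the item barely separates abilities) are the "low-signal" items that get retired.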
Continuous validation
Biased or low-signal items are flagged and retired. Norms are re-weighted regularly based on the latest respondent data.
Transparent scoring
Your score maps directly to a population-referenced percentile — no gamified inflation, no vanity metrics.
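As a sketch of what population-referenced scoring means, the conventional deviation-IQ scale treats scores as normally distributed with mean 100 and standard deviation 15 (the standard convention, assumed here rather than taken from the site's own norms), so a score converts to a percentile via the normal CDF:

```python
import math

def iq_percentile(score, mean=100.0, sd=15.0):
    """Map an IQ score to a population percentile under the
    conventional normal model (mean 100, SD 15)."""
    z = (score - mean) / sd
    # Normal CDF expressed via the error function.
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# By construction a score of 100 sits at the 50th percentile,
# and 115 (one SD above the mean) at roughly the 84th.
print(round(iq_percentile(100), 1))  # 50.0
print(round(iq_percentile(115), 1))  # 84.1
```

The percentile is honest exactly when the mean and SD reflect a real, current norming sample, which is why the norms are re-weighted as new respondent data comes in.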