Who decides the scores?
Credentialed expert panels — media scholars, educators, cognitive scientists, and domain specialists — using structured rubrics with defined criteria. Each title is scored by a minimum of 10 reviewers, and inter-rater reliability is measured and published. This is augmented by computational NLP analysis that measures linguistic complexity, information density, and narrative structure.
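Inter-rater reliability can be quantified with standard chance-corrected agreement statistics such as Cohen's kappa. A minimal sketch for two raters — the reviewer names, scores, and scale below are illustrative assumptions, not the platform's actual data or pipeline:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (1-5 scale) from two reviewers on eight titles.
a = [5, 3, 4, 4, 2, 5, 3, 4]
b = [5, 3, 4, 3, 2, 5, 3, 4]
kappa = cohens_kappa(a, b)  # roughly 0.83 here: substantial agreement
```

With more than two raters, the same idea generalizes to statistics like Fleiss' kappa or Krippendorff's alpha; the published reliability figure would come from one of these.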
How is this different from Rotten Tomatoes?
Rotten Tomatoes aggregates binary critic opinion (fresh/rotten) on a single dimension. The IQ Score evaluates three dimensions with twelve sub-metrics, uses credentialed expert panels with published rubrics, and is grounded in cognitive science literature. RT measures whether critics liked something. We measure what it does to the viewer.
Isn't this just opinion?
No. Narrative complexity, dialogue density, factual accuracy, and information transfer rate are measurable properties. Our NLP pipeline quantifies them computationally. The expert panels score against structured rubrics, not personal taste. The methodology is transparent — rubrics and criteria are published.
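Properties like dialogue density and lexical variety are indeed computable from a transcript. A toy illustration of the kind of measurement involved — the metric names and sample text are assumptions for demonstration, not the platform's actual pipeline:

```python
import re

def dialogue_metrics(transcript, runtime_minutes):
    """Toy linguistic metrics over a dialogue transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return {
        # Dialogue density: how much spoken content per minute of runtime.
        "words_per_minute": len(words) / runtime_minutes,
        # Lexical variety: unique words as a fraction of all words.
        "type_token_ratio": len(set(words)) / len(words),
        # Crude complexity proxy: longer words tend to be rarer words.
        "avg_word_length": sum(map(len, words)) / len(words),
    }

sample = "The quantum decoherence argument fails unless we assume locality."
m = dialogue_metrics(sample, runtime_minutes=0.1)
```

A production pipeline would use richer measures (parse-tree depth, entity density, readability indices), but each is a deterministic function of the text — repeatable and auditable, unlike a taste judgment.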
Does a low score mean the show is bad?
No. A low IQ Score means low intellectual demand — not low quality. The Office scores 87 (Passive) and is one of the best comedies ever made. The IQ Score measures cognitive demand; entertainment quality is one dimension of the methodology, not the whole score.
Will you pursue scientific validation?
Yes. Our Phase 3 roadmap includes university neuroscience partnerships for EEG and fMRI correlation studies. But the platform launches with a methodology that already exceeds the rigor of every existing content rating system. Validation deepens the moat — it's not a prerequisite.
How many titles are scored?
1,100+ titles at launch, spanning TV series, films, documentaries, anime, K-drama, reality TV, game shows, and more across all major streaming platforms.