Every design decision traces back to peer-reviewed studies. Here's the evidence that shaped how we build.
A systematic review of 105 randomized controlled trials with 7,776 students found that adding video to existing teaching methods produced large learning gains; replacing existing methods with video showed smaller but still positive effects. Interactive video dramatically outperformed passive formats.
A large-scale field experiment with nearly 1,000 high school math students compared two AI tutors: one mimicking standard ChatGPT and one with pedagogical safeguards. Students with unrestricted AI access performed 17% worse on exams when the tool was removed. The safeguarded version eliminated this harm entirely.
In a randomized controlled trial with 194 Harvard physics students, a carefully designed AI tutor helped students learn more than twice as much as peers in an active-learning classroom, in less time. Students also reported feeling more engaged and motivated.
A review of more than 60 studies comparing human tutors, intelligent tutoring systems, and standard instruction found that step-based tutoring systems—those that check understanding throughout a problem, not just the final answer—were nearly as effective as expert human tutors.
A randomized experiment with 122 students aged 14 to 16 found that Socratic AI chatbots produced more engagement and richer dialogue than direct-answer chatbots, but no measurable improvement in learning or retention.
See the research in action. Upload a question and watch it become a video.
Try Think10x.ai