What makes ApertIQ different?
ApertIQ studies more than final answers. It connects visual reasoning performance with timing, hesitation, answer changes, calibration quality, and biometric attention signals during the assessment.
FAQ
What is ApertIQ?
ApertIQ is a cognitive assessment platform that combines visual reasoning tasks, response behavior, and camera-based biometric attention tracking.
Is ApertIQ a real IQ test?
ApertIQ is an IQ-style cognitive assessment. It uses visual reasoning tasks and generates an estimated reasoning profile, but it should not be treated as a clinical IQ test or a formal psychological evaluation.
Why does ApertIQ use biometric attention tracking?
Reasoning is a process, not only an outcome. Biometric attention tracking lets ApertIQ study how visual attention, search behavior, and focus relate to problem solving.
What data does ApertIQ capture?
ApertIQ captures your assessment answers, response timing, answer changes, item-level behavior, calibration quality, device and browser context, and your biometric attention stream, including real-time gaze position, ocular activity, blink rhythm, and head motion.
What will I learn from my results?
You can unlock a personalized report describing your estimated reasoning range, cognitive style, pacing, hesitation, attention behavior, and performance under difficulty.
Does ApertIQ store my camera footage?
No. ApertIQ does NOT store raw camera images or video. Your camera feed is processed in real time to derive gaze position, attention signals, and calibration data, then discarded. See our full privacy policy for details.
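The derive-then-discard flow described above can be sketched in a few lines. This is an illustrative assumption about the data-handling pattern, not ApertIQ's actual implementation: the `AttentionSample` fields and the placeholder `derive_signals` function are hypothetical, and a real pipeline would run a gaze-estimation model where the placeholder sits.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class AttentionSample:
    # Derived signals only -- no pixel data is retained in a sample.
    gaze_x: float
    gaze_y: float
    blinking: bool

def derive_signals(frame: bytes) -> AttentionSample:
    """Stand-in for real-time gaze estimation (hypothetical).

    A production system would run face/eye landmark detection on the
    frame here; this placeholder returns fixed values only to show
    the shape of the data flow.
    """
    return AttentionSample(gaze_x=0.5, gaze_y=0.5, blinking=False)

def process_stream(frames: Iterable[bytes]) -> List[AttentionSample]:
    samples = []
    for frame in frames:
        samples.append(derive_signals(frame))
        # The raw frame goes out of scope at the end of each loop
        # iteration; only the derived AttentionSample is kept, and
        # nothing pixel-based is written to disk or sent anywhere.
    return samples
```

The point of the pattern is that persistence happens only after derivation: the stored record contains signals, never the frames they came from.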
Is ApertIQ a medical or psychological service?
No. ApertIQ is not a medical diagnostic, clinical evaluation, or psychological treatment tool. It is a cognitive insight and measurement platform.
How can I get in touch?
Contact ApertIQ with any product questions. Research, platform, and ML teams are also welcome to reach out to discuss validation, collaboration, or data-related opportunities.
Experience ApertIQ
Complete the browser-based assessment and unlock a personalized report showing how your reasoning process responds to visual complexity.