Our Methodology

Transparency in how we predict neural responses

What We Do — And What We Don't

Buyology Labs does not simulate brain scans. We do not have access to your audience's brain activity. What we do is predict neural response patterns using machine learning models trained on published neuroscience research data, combined with AI interpretation of your creative assets.

Our predictions are informed by the same research that guides physical neuroscience labs — we make those findings accessible without requiring R4M+ in lab equipment and 6-8 weeks of study time.

Think of it like a weather forecast: meteorologists don't create weather — they predict it using models trained on historical data. We predict neural engagement using models trained on published experimental data.

Our Data Sources

Published Neuroscience Research

Our analysis prompts are grounded in 30+ published studies from journals including Neuron, Journal of Marketing Research, Psychological Science, and Journal of Consumer Psychology. Key researchers include Pieters & Wedel (eye-tracking), Knutson et al. (fMRI purchase prediction), Cahill & McGaugh (emotional memory), and Reber et al. (processing fluency).

DEAP Dataset

1,280 EEG and GSR recordings from 32 participants during emotional stimulus viewing (Koelstra et al., 2012). Used to inform our emotional engagement and arousal models.

GSR Physiological Data

Real galvanic skin response recordings of cognitive workload under controlled conditions. Our GSR model achieves an AUC of 0.70 in distinguishing high from low mental workload states.

NeuMa Neuromarketing Dataset

42 participants with simultaneous EEG and eye-tracking recordings during supermarket brochure viewing with purchase decisions (Georgiadis et al., 2023, Nature Scientific Data). Used for purchase intent model calibration.

Visual Saliency Research

Attention heatmaps generated using a 6-channel saliency computation based on Itti & Koch (1998) visual attention theory: color contrast, edge density, color warmth, intensity contrast, center bias, and text region detection.
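To make the channel-blending idea concrete, here is a minimal sketch of a saliency computation in the spirit of Itti & Koch, reduced to two of the six channels (intensity contrast and center bias); the channel weights, kernel size, and function names are illustrative assumptions, not our production values.

```python
import numpy as np

def center_bias(h, w, sigma=0.35):
    """Gaussian center-bias channel: viewers tend to fixate near the centre."""
    ys = np.linspace(-1.0, 1.0, h)[:, None]
    xs = np.linspace(-1.0, 1.0, w)[None, :]
    return np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma ** 2))

def intensity_contrast(gray, k=5):
    """Intensity-contrast channel: deviation from a local box-blur mean."""
    g = gray.astype(float)
    h, w = g.shape
    pad = k // 2
    padded = np.pad(g, pad, mode="edge")
    blurred = np.zeros((h, w))
    for dy in range(k):          # naive k x k box blur
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return np.abs(g - blurred)

def saliency_map(gray, weights=(0.6, 0.4)):
    """Blend min-max-normalised channels into one heatmap (weights illustrative)."""
    channels = [intensity_contrast(gray), center_bias(*gray.shape)]
    normed = [(c - c.min()) / (c.max() - c.min() + 1e-9) for c in channels]
    return sum(w * c for w, c in zip(weights, normed))
```

The full six-channel version adds color contrast, color warmth, edge density, and text region detection in the same normalise-then-blend pattern.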

Our ML Pipeline

Every analysis runs through a three-stage pipeline:

Stage 1: AI Vision Analysis

Claude AI examines your creative asset and identifies visual elements, emotional triggers, cognitive complexity, and demographic relevance. The AI is prompted with research context from our database of 30+ neuroscience papers relevant to the asset type.

Stage 2: ML Model Calibration

Six ONNX machine learning models independently score the creative based on extracted features. Five models predict attention, emotion, memory, purchase intent, and cognitive load using feature weights derived from published research. A sixth model trained on real GSR physiological data predicts cognitive workload. These scores are blended with the AI analysis at a 30/70 ratio (30% ML, 70% AI).
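The 30/70 blend is a simple per-dimension weighted average. The sketch below shows the arithmetic; the dictionary shape and function name are illustrative assumptions, not our internal API.

```python
def blend_scores(ml_scores, ai_scores, ml_weight=0.30):
    """Per-dimension weighted blend: 30% ML model score, 70% AI analysis score."""
    return {
        dim: round(ml_weight * ml_scores[dim] + (1.0 - ml_weight) * ai_scores[dim], 1)
        for dim in ai_scores
    }

# Example: ML says 50, AI says 80 -> blended 0.3*50 + 0.7*80 = 71.0
blended = blend_scores({"attention": 50}, {"attention": 80})  # {"attention": 71.0}
```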

Our newest model — trained on the NeuMa neuromarketing dataset (Georgiadis et al., 2023, Nature Scientific Data) — predicts purchase intent from EEG brain activity patterns recorded during real supermarket brochure viewing. This model achieves AUC 0.746 in distinguishing Buy from NoBuy decisions, validated on 405 labeled brain recordings from 41 participants.
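For readers unfamiliar with the metric: AUC is the probability that a randomly chosen Buy recording receives a higher model score than a randomly chosen NoBuy recording, so 0.5 is chance and 1.0 is perfect separation. A minimal rank-based computation (standard formula, not our evaluation code) looks like this:

```python
def rank_auc(labels, scores):
    """AUC = probability a random positive (Buy) outscores a random
    negative (NoBuy); tied scores count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```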

Packaging-specific analysis evaluates shelf differentiation, purchase trigger speed, color psychology, typography readability at shelf distance, material perception, and redesign risk — grounded in packaging research from Clement (2007), Orth & Malkewitz (2008), and Silayoi & Speece (2007).

Stage 3: Behavioral Science Audit

Our AI Creative Director — informed by 20 behavioral science principles from Kahneman, Cialdini, Ariely, and Sutherland — critiques the creative, identifies principle violations, and provides specific fixes with expected impact percentages grounded in published research.
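The three stages above compose sequentially. The sketch below shows the control flow with stub stages; every function body, score value, and threshold here is a hypothetical placeholder standing in for the real AI calls and ONNX models.

```python
def ai_vision_analysis(asset_path):
    """Stage 1 stub: in production, the asset is analysed by Claude with
    research context. Returns per-dimension scores and a feature vector."""
    return {"scores": {"attention": 72.0, "emotion": 65.0}, "features": [0.4, 0.9]}

def score_with_onnx_models(features):
    """Stage 2 stub: in production, six ONNX models each emit a 0-100 score."""
    return {"attention": 60.0, "emotion": 70.0}

def behavioral_audit(scores, threshold=70.0):
    """Stage 3 stub: flag dimensions below an illustrative threshold for
    the behavioral-science critique."""
    return sorted(dim for dim, s in scores.items() if s < threshold)

def run_pipeline(asset_path, ml_weight=0.30):
    vision = ai_vision_analysis(asset_path)                 # Stage 1
    ml = score_with_onnx_models(vision["features"])         # Stage 2
    blended = {dim: ml_weight * ml[dim] + (1.0 - ml_weight) * vision["scores"][dim]
               for dim in ml}                               # 30/70 blend
    return blended, behavioral_audit(blended)               # Stage 3
```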

Limitations — What We're Honest About

No predictive model is perfect. Here's what you should know:

  • Our predictions are directional, not absolute. A NeuroScore of 65 vs 72 indicates a meaningful difference; a 1-2 point difference may not.
  • Demographic calibration is inference-based, not empirically measured per segment. We don't have separate neural recordings for each LSM tier or province.
  • Our attention heatmaps use computational saliency models, not actual eye-tracking hardware. They predict where attention is likely to land based on visual properties.
  • Individual variation is real. Our models predict population-level tendencies, not how any single person will respond.
  • We continuously improve. As we collect validation data from customer outcomes, our models get more accurate.

Validation Roadmap

We are actively working on:

  • Hardware validation studies comparing our predictions against Emotiv EEG and Tobii eye-tracking data
  • Published correlation analysis using the NeuMa neuromarketing dataset
  • Customer outcome tracking to measure prediction-to-performance accuracy
  • Expansion to additional neuroscience datasets for model training

Research Citations

  • Pieters, R., & Wedel, M. (2004). Attention capture and transfer in advertising. Journal of Marketing.
  • Knutson, B., et al. (2007). Neural predictors of purchases. Neuron.
  • Cahill, L., & McGaugh, J.L. (1998). Mechanisms of emotional arousal and lasting declarative memory. Trends in Neurosciences.
  • Reber, R., Schwarz, N., & Winkielman, P. (2004). Processing fluency and aesthetic pleasure. Personality and Social Psychology Review.
  • Itti, L., & Koch, C. (1998). A model of saliency-based visual attention. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • Kahneman, D. (2003). A perspective on judgment and choice. American Psychologist.
  • Cialdini, R.B. (2001). Influence: Science and Practice.
  • Georgiadis, K., et al. (2023). NeuMa neuromarketing dataset. Nature Scientific Data.
  • Koelstra, S., et al. (2012). DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing.
  • Wang, H.E., et al. (2024). Virtual brain twins. National Science Review.