Cognitive Paradigm Design

Expert guidance for selecting and parameterizing cognitive psychology experimental paradigms based on research questions

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "Cognitive Paradigm Design" with this command: npx skills add haoxuanlithuai/awesome_cognitive_and_neuroscience_skills/haoxuanlithuai-awesome-cognitive-and-neuroscience-skills-cognitive-paradigm-design

Cognitive Paradigm Design Skill

This skill helps researchers select appropriate experimental paradigms for cognitive psychology research questions, configure their parameters with cited defaults, and design proper controls. It encodes methodological knowledge from the cognitive experimental literature that a non-specialist would not know.

For detailed paradigm parameters, see references/classic-paradigms.md. For design methodology, see references/design-principles.md.


Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

  1. State the research question — What specific cognitive process or phenomenon is being investigated?
  2. Justify the method choice — Why an experimental paradigm (not survey, corpus, modeling)? What alternatives were considered?
  3. Declare expected outcomes — What pattern of results would support vs. refute the hypothesis?
  4. Note assumptions and limitations — What does this paradigm assume? Where could it mislead?
  5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.

⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.

Core Workflow

When given a research question, follow this sequence:

Step 1: Identify the Cognitive Construct

Map the research question to one or more core cognitive domains:

| Domain | Core Constructs | Example Research Questions |
| --- | --- | --- |
| Attention | Selective attention, spatial orienting, temporal attention, attentional capture | "Does emotion capture attention automatically?" |
| Memory | Encoding, retrieval, WM capacity, false memory, recognition vs. recall | "Do older adults show increased false memory?" |
| Decision Making | Risk, reward learning, impulsivity, perceptual decisions | "Are substance users more impulsive in intertemporal choice?" |
| Perception | Thresholds, masking, awareness, object recognition | "What is the contrast threshold for face detection?" |
| Language | Lexical access, sentence parsing, semantic processing | "Does syntactic complexity slow reading at the verb?" |
| Executive Function | Inhibition, task switching, updating, cognitive flexibility | "Is SSRT longer in ADHD children?" |

Step 2: Select a Paradigm

Use this decision tree to narrow paradigm choices:

Attention:

  • Conflict/interference between dimensions -> Stroop task or Flanker task
      • If response-level conflict is key -> Flanker (separates search from conflict)
      • If word-reading automaticity is key -> Stroop
  • Spatial orienting -> Posner cueing
      • Exogenous (reflexive) vs. endogenous (voluntary) -> vary cue type and SOA
  • Search efficiency / feature binding -> Visual search
  • Temporal limits of attention -> Attentional blink

Memory:

  • STM scanning speed -> Sternberg task
  • False memory production -> DRM paradigm
  • Recollection vs. familiarity -> Remember-Know
  • VWM capacity -> Change detection
  • Serial position effects -> Serial position curve

Decision Making:

  • Decision making under ambiguity with learning -> Iowa Gambling Task
  • Impulsivity / temporal discounting -> Delay discounting
  • Sensitivity vs. bias decomposition -> Signal Detection Theory (Yes/No or 2AFC)
  • Perceptual/cognitive discrimination -> 2AFC

Perception:

  • Threshold estimation -> Psychophysical staircase (1-up/2-down or QUEST)
      • Few trials available -> QUEST (converges in ~30-50 trials; Watson & Pelli, 1983)
      • Simple implementation needed -> 1-up/2-down (converges in ~50-80 trials; Levitt, 1971)
  • Subliminal processing / visibility control -> Masking paradigms
      • Vary visibility continuously -> backward masking (SOA manipulation)
      • Prevent conscious identification -> sandwich masking (forward + backward)
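The 1-up/2-down rule named above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a full implementation: the `respond` callback, starting level, and fixed step size are placeholders, and a production staircase would also shrink the step after the first few reversals.

```python
def one_up_two_down(respond, level=1.0, step=0.2, n_reversals=8):
    """1-up/2-down transformed staircase (Levitt, 1971): step up
    after every error, step down after two consecutive correct
    responses; converges on the ~70.7%-correct point.
    `respond(level)` is a caller-supplied function (an assumption
    of this sketch) that runs one trial and returns True if correct.
    Threshold estimate = mean of the last 6 reversal levels."""
    reversals = []
    streak = 0      # consecutive correct responses
    direction = 0   # +1 = last move was up, -1 = down, 0 = none yet
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:              # two correct -> step down
                streak = 0
                if direction == +1:      # direction change = reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                            # one error -> step up
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-6:]) / 6
```

With a deterministic simulated observer whose true threshold is 0.5, the staircase oscillates around that value and the reversal average recovers it.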

Language:

  • Single-word recognition / lexical access -> Lexical decision
  • Spreading activation / semantic networks -> Priming (with lexical decision or naming)
  • Incremental sentence comprehension -> Self-paced reading or eye-tracking
      • Budget-friendly, no specialized equipment -> Self-paced reading
      • Maximum ecological validity and rich temporal data -> Eye-tracking

Executive Function:

  • Simple response inhibition (withholding) -> Go/No-Go
  • Action cancellation (stopping initiated response) -> Stop-signal task
  • Need a latent measure of inhibition speed -> Stop-signal (yields SSRT)
  • Cognitive flexibility / set shifting -> Task switching
  • Working memory updating under continuous load -> N-back

Step 3: Configure Parameters

For each selected paradigm, consult references/classic-paradigms.md for the full parameter reference. Apply these general rules:

Timing Parameters

| Parameter | Default | Adjustment Rule |
| --- | --- | --- |
| Stimulus duration | Until response (RT tasks) or 100-500 ms (brief presentation) | Shorten for masking or iconic memory studies; lengthen for patient populations |
| ISI / ITI | 1000-2000 ms | Increase to 2000-3000 ms for EEG (to separate ERPs); jitter 2-8 s for fMRI (HRF deconvolution) |
| SOA | Paradigm-specific (see reference) | Short SOA (<300 ms) taps automatic processes; long SOA (>500 ms) taps strategic/controlled processes (Neely, 1977) |
| Response deadline | 1500-2000 ms for RT tasks | Tighten for speed emphasis; loosen for accuracy emphasis or elderly/clinical samples |
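The jittered 2-8 s fMRI ITIs mentioned above could be generated like this. The truncated-exponential shape is an assumption of this sketch (short ITIs dominating is a common efficiency heuristic); the exact distribution should come from your design-optimization tool.

```python
import random

def jittered_itis(n, lo=2.0, hi=8.0, seed=0):
    """Draw n jittered inter-trial intervals in [lo, hi] seconds
    from a truncated exponential (rate chosen so the mean lands
    near 4 s before truncation). Seeded for reproducibility."""
    rng = random.Random(seed)
    itis = []
    while len(itis) < n:
        x = rng.expovariate(1 / 2.0) + lo  # shift so minimum is lo
        if x <= hi:                        # reject draws past hi
            itis.append(round(x, 2))
    return itis
```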

Trial Counts

| Scenario | Minimum Trials per Condition | Rationale |
| --- | --- | --- |
| Large effect (d > 0.8) | 40-60 | Stroop, Flanker, AB (Hedge et al., 2018) |
| Medium effect (d ~ 0.5) | 60-100 | Priming, switching, search slopes (McNamara, 2005; Monsell, 2003) |
| Small effect (d ~ 0.3) | 100-200 | Subtle manipulations, individual differences (Baker et al., 2021) |
| SDT measures (d', c) | 100+ total (50+ signal, 50+ noise) | Macmillan & Creelman (2005) |
| Reliability-critical (SSRT, K) | 160-200 total | Verbruggen et al. (2019); Rouder et al. (2011) |
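The SDT measures in the table (d', c) reduce to z-transformed hit and false-alarm rates. A minimal sketch, using a common correction (adding 0.5 to each cell) to avoid infinite z-scores at perfect rates; verify the correction against your lab's convention.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, fas, crs):
    """d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2,
    computed from raw trial counts with 0.5 added to every cell
    (so H and F can never be exactly 0 or 1)."""
    h = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # corrected FA rate
    z = NormalDist().inv_cdf
    return z(h) - z(f), -0.5 * (z(h) + z(f))
```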

Proportion Manipulations

  • Congruency proportion (Stroop, Flanker): 50/50 is the unbiased standard. Deviating introduces list-wide proportion congruency effects that modulate conflict (Logan & Zbrodoff, 1979; Bugg & Crump, 2012). Only deviate if proportion effects are the research question.
  • Cue validity (Posner): 80% valid for endogenous orienting (Posner, 1980); 50% (uninformative) for pure exogenous effects.
  • Stop-signal proportion: 25% is standard. Higher rates induce proactive slowing (Verbruggen et al., 2019).
  • Target prevalence (search, detection): 50% unless studying prevalence effects (Wolfe et al., 2005).
  • Relatedness proportion (priming): Keep at ~25-50% to minimize strategic expectancy; lower RP isolates automatic priming (Neely et al., 1989).
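Fixing these proportions exactly (rather than sampling them) keeps the realized trial counts at the intended ratio. A minimal sketch for the Posner 80%-valid case; the function name and trial-label strings are illustrative, and the run-length constraints from references/design-principles.md are omitted here.

```python
import random

def cue_trials(n_trials=200, p_valid=0.8, seed=1):
    """Build a cueing trial list with exactly the stated cue
    validity: fix the valid/invalid counts, then shuffle.
    Seeded so the sequence is reproducible."""
    n_valid = round(n_trials * p_valid)
    trials = ["valid"] * n_valid + ["invalid"] * (n_trials - n_valid)
    random.Random(seed).shuffle(trials)
    return trials
```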

Step 4: Design Controls

Apply these control procedures:

4.1 Condition Assignment

  • Within-subjects preferred for most cognitive paradigms (maximizes power by eliminating between-subject variance; Maxwell & Delaney, 2004)
  • Between-subjects required when conditions produce carry-over (e.g., training studies, deception manipulations, proportion manipulations)
  • See references/design-principles.md, Section 1 for the full decision framework

4.2 Counterbalancing

  • 2-3 conditions: Full counterbalancing (all k! orders)
  • 4+ conditions: Balanced Latin Square (Williams, 1949); ensures each condition precedes every other condition equally often
  • Always counterbalance: Stimulus-response mappings, response hand assignments
  • Pseudo-randomize within blocks: No more than 3-4 consecutive same-condition trials; equal condition transitions (see references/design-principles.md, Section 5.4)
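The Balanced Latin Square mentioned above can be constructed mechanically. This sketch assumes an even number of conditions; for odd k, the standard remedy is to run the square together with its row-reversed mirror (2k orders).

```python
def williams_square(n):
    """Balanced Latin square (Williams, 1949) for an even number
    of conditions: every condition immediately precedes every
    other condition exactly once across the n row-orders.
    Built from the first row 0, 1, n-1, 2, n-2, ... by adding
    the row index modulo n."""
    first = [0]
    lo, hi = 1, n - 1
    for k in range(1, n):
        first.append(lo if k % 2 == 1 else hi)
        if k % 2 == 1:
            lo += 1
        else:
            hi -= 1
    return [[(c + r) % n for c in first] for r in range(n)]
```

Each row is one participant's condition order; assign participants to rows in rotation.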

4.3 Practice Trials

  • Simple RT tasks: 10-20 practice trials (Luce, 1986)
  • Complex tasks (task switching, N-back): 20-40 practice trials with feedback
  • Adaptive tasks (staircase): 50-100 familiarization trials before data collection (Watson & Pelli, 1983)
  • Require >80% accuracy in practice before advancing
  • Use different stimuli from experimental trials

4.4 Catch Trials and Comprehension Checks

  • Detection tasks: Include 10-20% no-target catch trials (Posner, 1980)
  • Reading tasks: Comprehension probes after 30-50% of sentences (Just et al., 1982)
  • Masked priming: Post-experiment visibility check or 5-10% awareness probes (Forster & Davis, 1984)

Step 5: Specify Dependent Variables and Analysis

Primary DVs by Paradigm Type

| Paradigm Type | Primary DV | Analysis Notes |
| --- | --- | --- |
| Speeded RT tasks | RT (ms) + accuracy (%) | Always report both. Apply RT trimming: remove anticipatory (<200 ms) and slow (>2.5 SD or >2000 ms) responses. Analyze only correct trials for RT. |
| Accuracy-focused tasks | Proportion correct or d' | Use SDT when the signal/noise distinction applies (Macmillan & Creelman, 2005) |
| Memory tasks | Hit rate, false alarm rate, d', K | Cowan's K for change detection; d' for recognition |
| Adaptive threshold | Threshold estimate | Average last 6-8 reversals (staircase); maximum-likelihood estimate (QUEST) |
| Learning/decision tasks | Block-by-block performance | IGT: (C+D)-(A+B) per block of 20; delay discounting: indifference points per delay |
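The RT-trimming rule for speeded tasks (correct trials only, drop <200 ms, then drop >2000 ms or >2.5 SD) can be sketched as below. The `(rt_ms, correct)` tuple format is a hypothetical representation of a trial log, and labs differ on whether the SD cutoff is computed per subject, per condition, or per cell.

```python
from statistics import mean, stdev

def trim_rts(trials, fast=200, slow=2000, sd_crit=2.5):
    """Keep correct trials only; drop anticipations (< fast ms);
    then drop RTs above the absolute slow cutoff or beyond
    sd_crit SDs of the remaining distribution.
    `trials` is a list of (rt_ms, correct) tuples."""
    rts = [rt for rt, ok in trials if ok and rt >= fast]
    m, s = mean(rts), stdev(rts)
    return [rt for rt in rts if rt <= slow and abs(rt - m) <= sd_crit * s]
```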

Recommended Statistical Approaches

  • Repeated-measures ANOVA: Standard for factorial designs; check sphericity (Girden, 1992)
  • Linear mixed-effects models: Preferred for unbalanced designs, missing data, item-level analysis; include random intercepts for subjects and items ("by-subject and by-item" approach; Baayen et al., 2008)
  • Bayesian analysis: Report Bayes factors for key comparisons when sample size is limited or null effects are informative (Rouder et al., 2009)
  • Drift-diffusion modeling: For decomposing RT and accuracy into drift rate, boundary separation, and non-decision time (Ratcliff & McKoon, 2008)

Quick Reference: Paradigm Selection Matrix

| Research Question Type | First-Choice Paradigm | Alternative |
| --- | --- | --- |
| Does X capture attention? | Posner cueing / Visual search | Dot-probe task |
| Does X interfere with processing? | Stroop / Flanker | Simon task |
| What is VWM capacity for X? | Change detection | Continuous report |
| Does X cause false memories? | DRM paradigm | Misinformation paradigm |
| Is recognition based on recollection or familiarity? | Remember-Know | ROC analysis |
| Does X affect inhibitory control? | Stop-signal (SSRT) | Go/No-Go |
| Does X modulate cognitive flexibility? | Task switching | Wisconsin Card Sorting |
| Is X processed without awareness? | Backward masking + priming | Continuous flash suppression |
| What is the perceptual threshold for X? | QUEST / Staircase + 2AFC | Method of constant stimuli |
| Does X affect reading? | Self-paced reading / Eye-tracking | ERP (N400, P600) |
| Does X prime Y? | Semantic priming + LDT | Cross-modal priming |
| Is X related to impulsivity? | Delay discounting | Stop-signal |
| Does X affect decision making under risk? | Iowa Gambling Task | Balloon Analogue Risk Task |
| Does X affect WM updating? | N-back | Operation span |

Domain-Specific Warnings

These are non-obvious pitfalls that require domain expertise:

  1. Stroop: Using fewer than 4 color-response mappings introduces item-specific contingency learning that mimics Stroop effects but is not conflict-based (Schmidt & Besner, 2008). Always use >= 4 colors.

  2. Stop-signal: Never estimate SSRT from mean Go RT alone. The integration method accounts for the Go RT distribution shape. Failed-stop RTs must be faster than Go RTs (independence assumption check; Logan & Cowan, 1984). Use the consensus guide (Verbruggen et al., 2019).

  3. Attentional blink: T1 must be masked (by a trailing distractor). Removing the T1+1 item eliminates the AB entirely (Raymond et al., 1992). Always include T1+1 distractor.

  4. Change detection (VWM): Retention intervals shorter than ~900 ms may allow iconic memory to contribute, inflating K estimates. Use >=900 ms retention interval, and consider articulatory suppression to prevent verbal recoding (Luck & Vogel, 1997; Vogel et al., 2001).

  5. DRM: False recall varies dramatically across lists (10-60%). Always report which word lists were used and their BAS values (Stadler et al., 1999). Roediger et al. (2001) normed 55 lists.

  6. Iowa Gambling Task: Apparent "learning" may reflect frequency-of-loss avoidance rather than long-term value sensitivity. Consider deck-by-deck analysis, not just (C+D)-(A+B) (Steingroever et al., 2013).

  7. Priming: High relatedness proportions (>50%) inflate priming through strategic expectancy, not automatic spreading activation. Use RP <= 25% to isolate automatic priming (Neely et al., 1989).

  8. Task switching: In alternating-runs designs (AABB), the response-stimulus interval (RSI) is confounded with cue-stimulus interval (CSI). Use cued-switching designs to separate preparation time from passive decay (Monsell, 2003; Meiran, 1996).

  9. Psychophysical staircases: Step sizes of <5% lead to staircases that fail to generate enough reversals. Use initial step sizes of at least 10-20% of the expected threshold range, then halve after the first 2-4 reversals (Garcia-Perez, 1998).

  10. N-back: Omission errors are more informative than commission errors (unlike Go/No-Go). Always report d' rather than raw accuracy, as d' separates sensitivity from bias (Haatveit et al., 2010). Include lure trials (n-1 or n+1 matches) to assess interference susceptibility (Gray et al., 2003).
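The integration method from warning 2 can be sketched as follows. This is a simplified illustration only: the consensus guide (Verbruggen et al., 2019) adds further steps (e.g., replacing go omissions with the maximum go RT, checking the independence assumption) that are omitted here.

```python
def ssrt_integration(go_rts, ssds, p_respond):
    """Integration-method SSRT: take the go-RT at the quantile
    equal to p(respond | stop signal), then subtract the mean
    stop-signal delay. Simplified sketch; do not use for
    publication without the full consensus-guide procedure."""
    srt = sorted(go_rts)
    n = max(1, min(len(srt), round(p_respond * len(srt))))
    nth_rt = srt[n - 1]                 # the n-th fastest go RT
    return nth_rt - sum(ssds) / len(ssds)
```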


References

  • Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390-412.
  • Baker, D. H., Vilidaite, G., Lygo, F. A., et al. (2021). Power contours: Optimising sample size and precision in experimental psychology and human neuroscience. Psychological Methods, 26, 295-314.
  • Brysbaert, M., & Stevens, M. (2018). Power analysis and effect size in mixed effects models. Journal of Cognition, 1(1), 9.
  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Erlbaum.
  • Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50, 1166-1186.
  • Macmillan, N. A., & Creelman, C. D. (2005). Detection Theory: A User's Guide (2nd ed.). Erlbaum.
  • Verbruggen, F., Aron, A. R., Band, G. P., et al. (2019). A consensus guide to capturing the ability to inhibit actions and impulsive behaviors in the stop-signal task. eLife, 8, e46323.

