ACT-R Model Builder

Guides ACT-R cognitive model construction: chunk types, production rules, subsymbolic parameters, and model validation

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install, copy the following command and send it to your AI assistant:

```
npx skills add haoxuanlithuai/awesome_cognitive_and_neuroscience_skills/haoxuanlithuai-awesome-cognitive-and-neuroscience-skills-act-r-model-builder
```

ACT-R Model Builder

Purpose

This skill encodes expert knowledge for constructing computational cognitive models within the ACT-R (Adaptive Control of Thought -- Rational) architecture. It provides guidance on chunk type definition, production rule authoring, subsymbolic parameter selection with empirically validated defaults, model fitting workflows, and validation procedures. A general-purpose programmer would not know the architecture constraints, parameter defaults, or model validation standards without specialized cognitive modeling training.

When to Use This Skill

  • Designing a new ACT-R model for a cognitive task (memory retrieval, decision-making, skill acquisition)
  • Setting subsymbolic parameters and understanding their theoretical justification
  • Structuring chunk types and production rules for a specific experimental paradigm
  • Fitting an ACT-R model to behavioral data (RT, accuracy)
  • Validating a model via parameter recovery, cross-validation, or qualitative predictions
  • Choosing between ACT-R 7.x (Lisp) and pyactr (Python) for implementation

Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

  1. State the research question -- What specific question is this analysis/paradigm addressing?
  2. Justify the method choice -- Why is this approach appropriate? What alternatives were considered?
  3. Declare expected outcomes -- What results would support vs. refute the hypothesis?
  4. Note assumptions and limitations -- What does this method assume? Where could it mislead?
  5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.

⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.

ACT-R Architecture Overview

ACT-R is a hybrid cognitive architecture with symbolic and subsymbolic components (Anderson, 2007; Anderson & Lebiere, 1998).

Core Modules and Buffers

| Module | Buffer | Function | Source |
|---|---|---|---|
| Declarative memory | retrieval | Stores and retrieves chunks (facts) | Anderson, 2007, Ch. 2 |
| Procedural memory | (none; fires productions) | Stores production rules (skills) | Anderson, 2007, Ch. 3 |
| Goal | goal | Tracks current task state | Anderson, 2007, Ch. 4 |
| Imaginal | imaginal | Holds intermediate problem representations | Anderson, 2007, Ch. 4 |
| Visual | visual, visual-location | Attends to and encodes visual objects | Anderson, 2007, Ch. 6 |
| Motor | manual | Executes motor responses (keypresses) | Anderson, 2007, Ch. 6 |
| Temporal | temporal | Tracks time intervals | Taatgen et al., 2007 |

Processing Cycle

  1. Buffers hold one chunk each (the "bottleneck" assumption; Anderson, 2007, Ch. 1)
  2. Productions match against buffer contents (pattern matching)
  3. Conflict resolution selects one production per cycle (~50 ms per production firing; Anderson, 2007)
  4. Selected production modifies buffers or makes requests to modules
  5. Modules process requests asynchronously

Building the Symbolic Model

Step 1: Define Chunk Types

Chunk types define the structure of declarative knowledge:

```lisp
;; ACT-R 7.x Lisp syntax
(chunk-type addition-problem arg1 arg2 answer)
(chunk-type counting-fact number next)
```

Decision rules for chunk type design:

  1. Each chunk type represents one category of knowledge (Anderson, 2007, Ch. 2)
  2. Slots should correspond to meaningful features of the domain
  3. Use inheritance when chunk types share structure (e.g., a "problem" parent type)
  4. Keep chunks small -- typically 3-6 slots per chunk type (Anderson & Lebiere, 1998)

Step 2: Write Production Rules

Productions follow an IF-THEN structure:

```lisp
(p retrieve-answer
   =goal>
      isa    addition-problem
      arg1   =num1
      arg2   =num2
      answer nil
   ?retrieval>
      state  free
==>
   +retrieval>
      isa     addition-fact
      addend1 =num1
      addend2 =num2
   =goal>
)
```

Production rule guidelines:

| Guideline | Rationale | Source |
|---|---|---|
| One request per production | Module bottleneck constraint | Anderson, 2007, Ch. 3 |
| Test buffer state before requesting | Prevents jamming the module | Bothell, 2023, ACT-R reference manual |
| Use =goal> to maintain the goal buffer | Prevents goal harvesting | Bothell, 2023 |
| Minimize productions per task step | Simpler models are preferred (parsimony) | Anderson, 2007, Ch. 1 |

Step 3: Structure the Goal Stack

```
Is the task sequential with clear phases?
 |
 +-- YES --> Use a single goal chunk with a "step" slot
 |           that tracks the current phase
 |
 +-- NO ---> Does the task require subgoaling?
              |
              +-- YES --> Use goal push/pop (stack)
              |
              +-- NO ---> Use the imaginal buffer for
                          intermediate representations
```

Subsymbolic Parameters

These parameters govern memory activation, retrieval, and production selection. See references/parameter-table.yaml for the complete table.

Core Declarative Memory Parameters

| Parameter | Symbol | Default | Typical Range | Source |
|---|---|---|---|---|
| Base-level learning decay | d | 0.5 | 0.1 -- 1.0 | Anderson & Schooler, 1991; Anderson, 2007 |
| Activation noise | s | 0.4 | 0.1 -- 0.8 | Anderson, 2007 |
| Latency factor | F | 1.0 | 0.2 -- 5.0 | Anderson, 2007 |
| Latency exponent | f | 1.0 | Fixed in most models | Anderson, 2007 |
| Retrieval threshold | tau | -infinity | Set empirically; often 0.0 to -2.0 | Anderson, 2007 |
| Maximum associative strength | S (mas) | context-dependent | 1.0 -- 5.0 | Anderson & Reder, 1999 |
| Mismatch penalty | P | application-dependent | 0.5 -- 2.0 | Anderson, 2007 |

Production Utility Parameters

| Parameter | Symbol | Default | Typical Range | Source |
|---|---|---|---|---|
| Utility noise | sigma | 0.0 (deterministic) | 0.1 -- 2.0 when enabled | Anderson, 2007 |
| Utility learning rate | alpha | 0.2 | 0.01 -- 1.0 | Anderson, 2007 |
| Initial utility | U0 | 0.0 | Set per production | Anderson, 2007 |
| Production compilation | enabled/disabled | Disabled by default | -- | Taatgen & Anderson, 2002 |

Timing Parameters

| Parameter | Value | Source |
|---|---|---|
| Production cycle time | 50 ms | Anderson, 2007 |
| Visual encoding time | 85 ms | Anderson, 2007, Ch. 6 |
| Motor initiation time | 50 ms | Anderson, 2007, Ch. 6 |
| Motor execution time | 100 ms (Fitts' law applies) | Anderson, 2007, Ch. 6 |
| Imaginal delay | 200 ms | Anderson, 2007, Ch. 4 |
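Because these stages are serial for a simple keypress task, a predicted response time is just their sum. A minimal sketch, assuming a task with one stimulus encoding, a handful of production firings, one retrieval, and one keypress (the stage decomposition for any real task depends on the model):

```python
# Predicted RT as a sum of serial stage times (values from the timing table).
VISUAL_ENCODING = 0.085  # s
CYCLE = 0.050            # s per production firing
MOTOR_INIT = 0.050       # s
MOTOR_EXEC = 0.100       # s

def predicted_rt(n_productions, retrieval_time):
    """Encode stimulus, fire n productions, retrieve once, press a key."""
    return VISUAL_ENCODING + n_productions * CYCLE + retrieval_time + MOTOR_INIT + MOTOR_EXEC

# e.g. 3 production firings plus a 200 ms retrieval:
rt = predicted_rt(3, 0.200)  # 0.085 + 0.150 + 0.200 + 0.050 + 0.100 = 0.585 s
```

This decomposition also matters for fitting (see Common Pitfalls): the perceptual and motor components are part of the predicted RT, not free parameters to absorb.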

Activation Equation

Total activation of chunk i:

A_i = B_i + sum_j(W_j * S_ji) + PM_i + noise

Where:

  • B_i = base-level activation (log of weighted recency; decay d; Anderson & Schooler, 1991)
  • W_j * S_ji = spreading activation from source j (Anderson, 2007, Ch. 5)
  • PM_i = partial matching component (Anderson, 2007)
  • noise = logistic noise with scale s (Anderson, 2007)

Retrieval time: RT = F * e^(-f * A_i) (Anderson, 2007)
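The activation and retrieval-time equations above translate directly into code. A minimal sketch in plain Python (the presentation times and source strengths are invented for illustration; base-level activation here uses the exact form, B_i = ln(sum of (t_now - t_j)^-d), rather than the optimized-learning approximation):

```python
import math
import random

def base_level(presentations, now, d=0.5):
    """B_i = ln( sum_j (now - t_j)^(-d) ): log of decayed presentation history
    (Anderson & Schooler, 1991), with decay d defaulting to 0.5."""
    return math.log(sum((now - t) ** (-d) for t in presentations))

def activation(B, W, S, PM=0.0, s=0.4, rng=None):
    """A_i = B_i + sum_j W_j * S_ji + PM_i + noise, with logistic noise of scale s."""
    spread = sum(w * sji for w, sji in zip(W, S))
    u = (rng or random).random()
    noise = s * math.log(u / (1.0 - u))  # logistic sample via inverse CDF
    return B + spread + PM + noise

def retrieval_time(A, F=1.0, f=1.0):
    """RT = F * exp(-f * A)."""
    return F * math.exp(-f * A)

# A chunk presented at t = 0, 5, and 9 s, retrieved at t = 10 s:
B = base_level([0.0, 5.0, 9.0], now=10.0)   # ~0.567
rt = retrieval_time(B)                       # ~0.567 s, ignoring noise
```

Note how recency dominates: the presentation 1 s ago contributes more than the other two combined, which is the behavioral signature of the d = 0.5 power-law decay.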

Model Fitting Workflow

Step 1: Identify Free Parameters

```
How many free parameters?
 |
 +-- <= 3 --> Standard practice; proceed
 |
 +-- 4-6 ---> Acceptable if justified by model complexity
 |
 +-- > 6 ---> Warning: overfitting risk. Consider fixing some
              to default values (Anderson, 2007)
```

Rule of thumb: The number of free parameters should be substantially less than the number of independent data points being fit (Roberts & Pashler, 2000).

Step 2: Choose Fitting Method

| Method | When to Use | Source |
|---|---|---|
| Grid search | Few parameters (1-3), bounded space | Standard practice |
| Simplex (Nelder-Mead) | Moderate parameters, smooth landscape | Anderson, 2007 |
| Differential evolution | Many parameters, multimodal landscape | Storn & Price, 1997 |
| Bayesian optimization | Expensive evaluations, informed priors | Palestro et al., 2018 |
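For a single bounded parameter, grid search is a few lines. A sketch fitting the latency factor F to hypothetical condition means (the observed RTs and condition activations below are invented for illustration; a real fit would use your model's simulated output):

```python
import math

# Hypothetical observed mean RTs (s) and known condition activations.
observed = {"easy": 0.45, "medium": 0.62, "hard": 0.88}
activations = {"easy": 0.8, "medium": 0.5, "hard": 0.1}

def sse(F):
    """Sum of squared deviations between model and data, RT = F * exp(-A)."""
    return sum((F * math.exp(-activations[c]) - observed[c]) ** 2 for c in observed)

# Grid search over the latency factor F (typical range 0.2 -- 5.0, step 0.01).
grid = [0.2 + 0.01 * i for i in range(481)]  # 0.20 ... 5.00
best_F = min(grid, key=sse)                  # ~0.99 for these data
```

The grid resolution bounds the precision of the estimate; a common refinement is a second, finer grid around the first-pass minimum.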

Step 3: Fit to Multiple Dependent Variables

ACT-R models should simultaneously account for:

  • Response times (correct trials, mean or quantiles)
  • Accuracy (proportion correct by condition)
  • Qualitative patterns (error types, learning curves)

Use weighted sum of squared deviations or log-likelihood across measures (Anderson, 2007, Ch. 4).
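A weighted multi-measure objective can be as simple as the sketch below (the condition values are invented for illustration; the weights are an assumption you must justify, since RT in seconds and accuracy in proportions are on different scales):

```python
def combined_loss(pred_rt, obs_rt, pred_acc, obs_acc, w_rt=1.0, w_acc=1.0):
    """Weighted sum of squared deviations across RT and accuracy, per condition."""
    rt_sse = sum((p - o) ** 2 for p, o in zip(pred_rt, obs_rt))
    acc_sse = sum((p - o) ** 2 for p, o in zip(pred_acc, obs_acc))
    # RT is in seconds and accuracy in proportions; the weights put the
    # two measures on comparable scales (report whatever weighting you use).
    return w_rt * rt_sse + w_acc * acc_sse

# Two conditions, model predictions vs. observed data:
loss = combined_loss([0.45, 0.60], [0.50, 0.62], [0.95, 0.80], [0.92, 0.85])
```

A log-likelihood objective plays the same role but removes the arbitrary weighting, at the cost of requiring a distributional assumption for each measure.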

Step 4: Parameter Recovery

Before trusting fitted parameter values, conduct a parameter recovery study. See the parameter-recovery-checker skill.
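The logic of a recovery study, simulate from known parameters, refit, and correlate true with recovered values, fits in a short script. A sketch for the one-parameter latency-factor model (activations, noise level, and the 30 synthetic "subjects" are invented for illustration):

```python
import math
import random

rng = random.Random(1)
activations = [0.2, 0.5, 0.9]                 # known condition activations
exps = [math.exp(-a) for a in activations]

def simulate(F, sigma=0.02):
    """Generate noisy condition RTs from RT = F * exp(-A)."""
    return [F * e + rng.gauss(0.0, sigma) for e in exps]

def refit(rts):
    """Closed-form least-squares estimate of F for this one-parameter model."""
    return sum(r * e for r, e in zip(rts, exps)) / sum(e * e for e in exps)

true_F = [0.5 + 0.05 * i for i in range(30)]  # 30 synthetic "subjects"
recovered = [refit(simulate(F)) for F in true_F]

# Pearson correlation between true and recovered parameters
n = len(true_F)
mx, my = sum(true_F) / n, sum(recovered) / n
cov = sum((x - mx) * (y - my) for x, y in zip(true_F, recovered))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in true_F)
                    * sum((y - my) ** 2 for y in recovered))
# r should exceed 0.9 for a recoverable parameterization
```

For multi-parameter models the same loop applies, but you should also inspect the correlations among recovered parameters to detect trade-offs (e.g., s against tau).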

Common Model Patterns

See references/model-patterns.md for detailed implementations of:

  1. Memory retrieval -- Paired associates, fan effect (Anderson, 2007, Ch. 5)
  2. Skill acquisition -- Production compilation, power law of practice (Taatgen & Anderson, 2002)
  3. Decision-making -- Instance-based learning, utility-based selection (Gonzalez et al., 2003)
  4. Problem solving -- Means-ends analysis, goal stacking (Anderson, 2007, Ch. 8)

Model Validation Checklist

| Validation Step | Method | Minimum Standard |
|---|---|---|
| Parameter recovery | Simulate and refit | r > 0.9 between true and recovered (Heathcote et al., 2015) |
| Cross-validation | Fit half, predict half | Prediction RMSE within 2x of fitting RMSE |
| Qualitative predictions | Novel conditions | Model predicts ordinal pattern correctly |
| Model comparison | AIC/BIC or Bayes factor | Compare against plausible alternatives (Burnham & Anderson, 2002) |
| Sensitivity analysis | Vary fixed parameters | Conclusions robust to +/-20% variation |
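When fitting minimizes SSE with an assumed Gaussian error, AIC and BIC have simple closed forms (the standard least-squares approximation discussed in Burnham & Anderson, 2002; the two example models below are invented for illustration):

```python
import math

def aic_bic(sse, n, k):
    """AIC and BIC from a least-squares fit under Gaussian errors:
    AIC = n*ln(SSE/n) + 2k,  BIC = n*ln(SSE/n) + k*ln(n),
    where n = data points and k = free parameters."""
    base = n * math.log(sse / n)
    return base + 2 * k, base + k * math.log(n)

# Model A: 2 free parameters, SSE = 0.012 over 24 data points.
# Model B: 5 free parameters, SSE = 0.010 over the same 24 points.
aic_a, bic_a = aic_bic(0.012, 24, 2)
aic_b, bic_b = aic_bic(0.010, 24, 5)
# Lower is better: B fits slightly better, but its three extra
# parameters do not pay for themselves on either criterion.
```

Both criteria penalize the extra parameters here, which is exactly the overfitting concern behind the free-parameter rule of thumb above.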

Software and Implementation

| Platform | Language | URL | Notes |
|---|---|---|---|
| ACT-R 7.x | Common Lisp | act-r.psy.cmu.edu | Reference implementation (Bothell, 2023) |
| pyactr | Python | github.com/jakdot/pyactr | Python interface, good for batch simulations (Dotlacil, 2018) |
| jACT-R | Java | jactr.org | Java implementation |

Recommendation: Use ACT-R 7.x for model development and validation. Use pyactr when integrating with Python data analysis pipelines or running large parameter sweeps (Dotlacil, 2018).

Common Pitfalls

  1. Too many free parameters: Fitting more than 5-6 free parameters without strong justification risks overfitting (Roberts & Pashler, 2000). Fix well-established parameters (d = 0.5, production cycle = 50 ms) to defaults.
  2. Ignoring parameter correlations: Parameters like s (noise) and tau (threshold) trade off. Run parameter recovery to verify identifiability.
  3. Fitting only means: ACT-R makes distributional predictions. Fitting only mean RT discards information. Use quantile-based fitting where possible (Heathcote et al., 2015).
  4. Incorrect timing alignment: ACT-R's predicted RT includes perceptual and motor times. Account for these when comparing to behavioral RT.
  5. Overly complex models: Prefer models with fewer productions and chunk types that still capture the qualitative pattern. Complexity should be motivated by the data (Anderson, 2007, Ch. 1).
  6. Neglecting model comparison: Always compare your ACT-R model against at least one alternative (simpler ACT-R variant or a different architecture) using formal model comparison (Burnham & Anderson, 2002).
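Pitfall 2 is easy to see in the retrieval-probability equation, P(retrieve) = 1 / (1 + exp((tau - A)/s)) (Anderson, 2007): only the ratio (tau - A)/s is identified by accuracy alone, so different (s, tau) pairs predict identical accuracy. A minimal numerical check (the parameter values are chosen for illustration):

```python
import math

def p_retrieve(A, tau=0.0, s=0.4):
    """Probability that a chunk with activation A beats threshold tau
    under logistic noise with scale s: P = 1 / (1 + exp((tau - A)/s))."""
    return 1.0 / (1.0 + math.exp((tau - A) / s))

# Two different (tau, s) pairs with the same (tau - A)/s ratio
# predict exactly the same accuracy for A = 0.5:
p1 = p_retrieve(0.5, tau=0.0, s=0.4)     # ~0.777
p2 = p_retrieve(0.5, tau=-0.125, s=0.5)  # ~0.777
```

Only jointly fitting latencies (which depend on A through F and f, not on tau) or varying A across conditions breaks this degeneracy, which is why parameter recovery should always be run before interpreting fitted s and tau.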

Minimum Reporting Checklist

Based on best practices from Anderson (2007) and Heathcote et al. (2015):

  • Architecture version (e.g., ACT-R 7.27)
  • All chunk types and their slots
  • Number of production rules
  • All parameter values: fixed (with default source) and free (with fitted values and confidence intervals)
  • Fitting method and objective function
  • Data fitted: number of conditions, number of data points, dependent variables
  • Goodness of fit: R-squared, RMSE, or log-likelihood per dependent variable
  • Parameter recovery results (r, bias, RMSE for each free parameter)
  • Model comparison results (AIC/BIC/Bayes factor vs. alternatives)
  • Qualitative predictions and whether they matched data

References

  • Anderson, J. R. (2007). How Can the Human Mind Occur in the Physical Universe? Oxford University Press.
  • Anderson, J. R., & Lebiere, C. (1998). The Atomic Components of Thought. Lawrence Erlbaum Associates.
  • Anderson, J. R., & Reder, L. M. (1999). The fan effect: New results and new theories. Journal of Experimental Psychology: General, 128(2), 186-197.
  • Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2(6), 396-408.
  • Bothell, D. (2023). ACT-R 7.27 Reference Manual. Carnegie Mellon University.
  • Burnham, K. P., & Anderson, D. R. (2002). Model Selection and Multimodel Inference (2nd ed.). Springer.
  • Dotlacil, J. (2018). Building an ACT-R reader for eye-tracking corpus data. Topics in Cognitive Science, 10(1), 144-160.
  • Gonzalez, C., Lerch, J. F., & Lebiere, C. (2003). Instance-based learning in dynamic decision making. Cognitive Science, 27(4), 591-635.
  • Heathcote, A., Brown, S. D., & Wagenmakers, E.-J. (2015). An introduction to good practices in cognitive modeling. In B. U. Forstmann & E.-J. Wagenmakers (Eds.), An Introduction to Model-Based Cognitive Neuroscience. Springer.
  • Palestro, J. J., Sederberg, P. B., Osth, A. F., Van Zandt, T., & Turner, B. M. (2018). Likelihood-free methods for cognitive science. Springer.
  • Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing. Psychological Review, 107(2), 358-367.
  • Storn, R., & Price, K. (1997). Differential evolution. Journal of Global Optimization, 11(4), 341-359.
  • Taatgen, N. A., & Anderson, J. R. (2002). Why do children learn to say "Broke"? A model of learning the past tense without feedback. Cognition, 86(2), 123-155.
  • Taatgen, N. A., van Rijn, H., & Anderson, J. R. (2007). An integrated theory of prospective time interval estimation. Psychological Review, 114(3), 577-598.

See references/ for detailed parameter tables and common model patterns.


Related Skills

Related by shared tags or category signals:

  • eeg preprocessing pipeline guide
  • self-paced reading designer
  • lesion-symptom mapping guide
  • verify skill