Visual Search Array Generator

Specifies display parameters, set sizes, target-distractor similarity levels, and randomization constraints for visual search experiments.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "Visual Search Array Generator" with this command: npx skills add haoxuanlithuai/awesome_cognitive_and_neuroscience_skills/haoxuanlithuai-awesome-cognitive-and-neuroscience-skills-visual-search-array-generator

Purpose

This skill encodes expert methodological knowledge for designing and generating visual search arrays. A competent programmer could easily generate random stimulus displays, but without domain training they would likely violate critical constraints: items too closely spaced (causing crowding), eccentricities beyond useful vision, inappropriate set sizes that cannot distinguish search types, target-distractor similarity levels that produce ceiling or floor effects, or trial ratios that distort search behavior. This skill provides the validated parameters needed to create psychophysically sound visual search experiments.

When to Use

Use this skill when:

  • Designing a visual search experiment (feature search, conjunction search, spatial configuration search)
  • Generating stimulus arrays with specific set sizes, spacings, and feature dimensions
  • Selecting target-distractor similarity levels to manipulate search efficiency
  • Choosing set sizes and trial structure for measuring search slopes
  • Configuring display timing, inter-trial intervals, and response windows

Do not use this skill when:

  • The task is not visual search (e.g., change detection, visual working memory, attentional capture without search)
  • You are analyzing existing visual search data rather than designing new experiments
  • The display involves naturalistic scenes rather than controlled arrays (use scene perception methods)

Research Planning Protocol

Before executing the domain-specific steps below, you MUST:

  1. State the research question -- What specific question is this analysis/paradigm addressing?
  2. Justify the method choice -- Why is this approach appropriate? What alternatives were considered?
  3. Declare expected outcomes -- What results would support vs. refute the hypothesis?
  4. Note assumptions and limitations -- What does this method assume? Where could it mislead?
  5. Present the plan to the user and WAIT for confirmation before proceeding.

For detailed methodology guidance, see the research-literacy skill.

⚠️ Verification Notice

This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.

Search Type Classification

Feature Search (Parallel / Pop-out)

Target defined by a single unique feature (Treisman & Gelade, 1980).

  • Search slope: < 10 ms/item for target-present trials (Wolfe, 2021)
  • RT x set size function: Flat or near-flat
  • Example: Red target among green distractors; vertical target among horizontal distractors
  • Theoretical basis: Pre-attentive feature maps can detect unique singletons without serial scanning (Treisman & Gelade, 1980)

Conjunction Search (Inefficient / Serial)

Target defined by a combination of features shared individually with distractors (Treisman & Gelade, 1980).

  • Search slope: 20-30 ms/item for target-present trials (Wolfe, 2021)
  • Absent:present slope ratio: Approximately 2:1 if search is self-terminating (Treisman & Gelade, 1980)
  • Example: Red vertical target among red horizontal and green vertical distractors
  • Note: Many conjunction searches are more efficient than predicted by strict serial models; guided search theory accounts for this (Wolfe, 1994)

Spatial Configuration Search

Target differs from distractors in spatial arrangement of parts rather than simple features.

  • Search slope: 30-50+ ms/item (Wolfe, 2021)
  • Example: T among Ls; 2 among 5s
  • These are among the most inefficient search tasks and should be used when studying attentional limits

Search Slope Classification Benchmarks

| Slope (ms/item) | Classification | Citation |
| --- | --- | --- |
| < 5 | Highly efficient / pop-out | Wolfe, 2021 |
| 5-10 | Efficient (feature-like) | Wolfe, 2021 |
| 10-20 | Moderately efficient (guided) | Wolfe, 1994; Wolfe, 2021 |
| 20-30 | Inefficient (conjunction-like) | Treisman & Gelade, 1980; Wolfe, 2021 |
| > 30 | Very inefficient (serial) | Wolfe, 2021 |
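
The benchmarks above assume a slope estimated by regressing mean RT on set size. A minimal sketch in plain Python (the RT values below are hypothetical, chosen to fall in the conjunction-like band):

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope (ms/item) and intercept of mean RT on set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical target-present mean RTs (ms) at four set sizes
set_sizes = [4, 8, 12, 16]
mean_rts = [520, 610, 700, 790]
slope, intercept = search_slope(set_sizes, mean_rts)
print(f"{slope:.1f} ms/item")  # 22.5 -> "inefficient (conjunction-like)" band
```

In practice, fit the slope separately for target-present and target-absent trials so the absent:present ratio can be checked.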

Display Parameters

Spatial Layout

| Parameter | Recommended Value | Citation / Rationale |
| --- | --- | --- |
| Maximum eccentricity | 15 degrees of visual angle from fixation | Beyond ~15 deg, acuity drops substantially; standard upper bound (Wolfe et al., 1998) |
| Minimum inter-item spacing | > 1 degree center-to-center | Prevents crowding effects (Bouma, 1970: crowding zone ~ 0.5 x eccentricity) |
| Item size | 0.5-2 degrees of visual angle | Standard range for search items (Wolfe, 2021) |
| Display area | Circular or rectangular region within the eccentricity limit | Avoid items near monitor edges, where distortion may occur |
| Fixation cross | Present for 500-1000 ms before array onset | Standard in visual search (Wolfe et al., 1998) |

Preventing Crowding

Crowding impairs identification when flanking items are too close to the target, especially in the periphery (Pelli & Tillman, 2008).

  • Critical spacing: Approximately 0.5 x eccentricity (Bouma, 1970)
  • At 5 degrees eccentricity, items must be > 2.5 degrees apart to avoid crowding
  • At 10 degrees eccentricity, items must be > 5 degrees apart
  • For items near fixation (< 2 degrees), minimum spacing of 1 degree is sufficient
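
Bouma's rule and the near-fixation floor above can be wrapped in a small helper (a sketch; the function name is illustrative):

```python
def min_spacing_deg(eccentricity_deg, bouma_fraction=0.5, floor_deg=1.0):
    """Minimum center-to-center spacing (deg) to avoid crowding:
    Bouma's rule (critical spacing ~ 0.5 x eccentricity), with a
    1-degree floor for items near fixation."""
    return max(bouma_fraction * eccentricity_deg, floor_deg)

print(min_spacing_deg(5))   # 2.5 deg, matching the 5-deg example above
print(min_spacing_deg(10))  # 5.0 deg
print(min_spacing_deg(1))   # 1.0 deg (floor applies near fixation)
```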

Set Sizes

| Design Goal | Recommended Set Sizes | Rationale |
| --- | --- | --- |
| Classify search type | 4, 8, 12, 16 (minimum 3 set sizes) | Multiple points are needed to estimate the slope reliably (Wolfe, 2021) |
| Test for pop-out | 8, 16, 32 (wide range) | Pop-out is confirmed if the slope is ~0 even at large set sizes (Treisman & Gelade, 1980) |
| Standard conjunction search | 4, 8, 12, 16, 20 | Finer-grained slope estimation (Wolfe, 1994) |
| Quick screening | 6, 12, 18 | Three evenly spaced set sizes for slope estimation |

Minimum set sizes: At least 3 different set sizes are required to reliably estimate a search slope. Two set sizes cannot distinguish linear from nonlinear search functions.

Maximum set size: Constrained by display density. With 1 degree minimum spacing and 15 degree eccentricity limit, the practical maximum is approximately 40-50 items for typical item sizes (Wolfe et al., 1998).

Trial Structure

| Parameter | Recommended Value | Citation |
| --- | --- | --- |
| Target-present : target-absent ratio | 1:1 (50% present) | Chun & Wolfe, 1996; standard in most search tasks |
| Low-prevalence condition | 10% target-present | Wolfe et al., 2005 (miss rate increases dramatically) |
| Trials per cell | Minimum 20-30 trials per set size x presence combination | Wolfe, 2021; more for stable RT distributions |
| Practice trials | 10-20 trials before data collection | Standard practice |
| Total trial count | Typically 400-800 for a standard search task | Depends on number of conditions and set sizes |
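
As a worked example of how these counts combine (the two-condition design is illustrative, not prescribed by the table):

```python
set_sizes = [4, 8, 12, 16]
presence = ["present", "absent"]  # 1:1 ratio
trials_per_cell = 25              # within the recommended 20-30
n_conditions = 2                  # e.g., two T-D similarity levels

total = len(set_sizes) * len(presence) * trials_per_cell * n_conditions
print(total)  # 400 experimental trials, within the typical 400-800 range
```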

Critical warning about target prevalence: When target prevalence drops below ~25%, miss rates increase dramatically -- the "prevalence effect" (Wolfe et al., 2005). This is a critical design consideration for applied search tasks (e.g., medical image screening).

Timing Parameters

| Parameter | Recommended Value | Rationale |
| --- | --- | --- |
| Fixation duration | 500-1000 ms | Allows fixation to stabilize |
| Display duration | Until response (standard) or fixed (brief search) | Self-paced search is the default (Wolfe, 2021) |
| Brief display search | 100-200 ms (then mask) | Tests pre-attentive processing (Treisman & Gelade, 1980) |
| Response deadline | 3000-5000 ms | Excludes abnormally slow RTs |
| Inter-trial interval | 500-1000 ms | Prevents carryover effects |
| Feedback duration | 500 ms (if used) | Brief error/correct feedback |

Feature Dimensions and Similarity

Color

| Parameter | Guideline | Citation |
| --- | --- | --- |
| Feature search JND | Target-distractor color difference > 30 degrees of hue angle in CIELAB or CIELUV for pop-out | Derived from Nagy & Sanchez, 1990 |
| Conjunction control | Equate target-distractor color distance across conditions | Essential for isolating the conjunction cost |
| Number of colors | Typically 2-4 distinct colors for conjunction search | Wolfe, 1994 |
| Luminance | Equate luminance across colors to avoid luminance pop-out | Use isoluminant colors or verify with a photometer |
| Color space | Specify in CIELAB or Munsell; avoid RGB for scientific reporting | RGB is device-dependent |

Orientation

| Parameter | Guideline | Citation |
| --- | --- | --- |
| Feature search JND | Target-distractor difference > 15-20 degrees for efficient search | Foster & Ward, 1991 |
| Pop-out threshold | Orientation difference > 30 degrees produces reliable pop-out | Wolfe et al., 1992 |
| Cardinal advantage | Vertical and horizontal orientations are detected faster than obliques | Appelle, 1972 |

Recommended: Use oblique orientations (e.g., 45 deg and 135 deg) to avoid cardinal effects unless cardinals are of interest.

Size

| Parameter | Guideline | Citation |
| --- | --- | --- |
| Feature search JND | Target at least 1.5-2x distractor size for pop-out | Treisman & Gelade, 1980 |
| Weber fraction | Size discrimination Weber fraction ~ 0.04-0.06 (JND/standard) | Nachmias, 2011 |
| For search | Size ratio > 1.5:1 (target:distractor) typically needed for efficient search | Wolfe, 2021 |

Target-Distractor Similarity and Distractor Heterogeneity

Duncan & Humphreys (1989) Framework

Search efficiency depends on two factors:

  1. Target-distractor (T-D) similarity: Higher similarity = less efficient search
  2. Distractor-distractor (D-D) similarity: Lower D-D similarity (heterogeneous distractors) = less efficient search

| T-D Similarity | D-D Similarity | Expected Search | Example |
| --- | --- | --- | --- |
| Low | High | Very efficient (pop-out) | Red among identical greens |
| Low | Low | Efficient | Red among varied colors (not red) |
| High | High | Inefficient | Pink among reds |
| High | Low | Very inefficient | Pink among varied warm colors |

Practical Implementation

  • Homogeneous distractors: All distractors identical; cleanest test of T-D similarity
  • Heterogeneous distractors: Distractors vary in the search-relevant feature; tests the D-D similarity effect
  • Controlling heterogeneity: Sample distractor features from a uniform distribution within a defined range (e.g., orientation distractors drawn from 0 +/- 10 degrees; Duncan & Humphreys, 1989)
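
The heterogeneity-control bullet can be sketched as a sampling helper (the function name and defaults are illustrative; `half_range_deg = 0` yields the homogeneous case):

```python
import random

def sample_distractor_orientations(n, center_deg=0.0, half_range_deg=10.0,
                                   rng=None):
    """Sample n distractor orientations uniformly from
    center +/- half_range, the Duncan & Humphreys-style
    heterogeneity manipulation."""
    rng = rng or random.Random()
    return [center_deg + rng.uniform(-half_range_deg, half_range_deg)
            for _ in range(n)]

# Homogeneous vs. heterogeneous distractor sets with matched mean orientation
homog = sample_distractor_orientations(8, half_range_deg=0.0)
hetero = sample_distractor_orientations(8, half_range_deg=10.0)
```

Keeping the distribution mean fixed across conditions manipulates D-D similarity while holding average T-D similarity constant.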

Array Generation Algorithm

Placement Algorithm (Recommended)

  1. Define the display region (circular, with radius equal to the maximum eccentricity)
  2. Generate candidate positions using one of:
  • Grid + jitter: Place items on a regular grid, then add random jitter (uniform, +/- 0.3 deg) to break regularity (Wolfe et al., 1998)
  • Random placement with rejection: Sample random positions; reject any that violate minimum spacing
  • Concentric rings: Place items on concentric rings at fixed eccentricities (controls the eccentricity distribution)
  3. Enforce minimum inter-item spacing (> 1 degree center-to-center)
  4. Enforce a minimum distance from fixation (> 1 degree; avoids masking by the fixation cross)
  5. Balance target position across eccentricity bins and quadrants over the experiment
  6. For each trial, randomly assign the target to one position (present trials) or to no position (absent trials)
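
The random-placement-with-rejection variant, with the spacing and fixation-distance constraints, can be sketched as follows (a minimal illustration; the function name and `max_tries` cap are assumptions, and quadrant/eccentricity counterbalancing is not implemented here):

```python
import math
import random

def generate_positions(n_items, max_ecc=15.0, min_spacing=1.0,
                       min_fix_dist=1.0, max_tries=10000, rng=None):
    """Sample item positions inside a circular region of radius max_ecc,
    rejecting candidates that violate minimum inter-item spacing or sit
    too close to fixation (the origin)."""
    rng = rng or random.Random()
    positions = []
    tries = 0
    while len(positions) < n_items:
        tries += 1
        if tries > max_tries:
            raise RuntimeError("display too dense for the spacing constraints")
        # sqrt on the radius gives area-uniform sampling over the disc
        r = max_ecc * math.sqrt(rng.random())
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = r * math.cos(theta), r * math.sin(theta)
        if r < min_fix_dist:
            continue  # too close to the fixation cross
        if all(math.hypot(x - px, y - py) >= min_spacing
               for px, py in positions):
            positions.append((x, y))
    return positions

positions = generate_positions(16, rng=random.Random(1))
```

Positions are in degrees of visual angle relative to fixation; convert to pixels using the monitor geometry and viewing distance at presentation time.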

Randomization Constraints

  • Target position: Counterbalance across quadrants and eccentricity bins within each set size
  • Set size order: Randomize or pseudorandomize within blocks
  • Target presence: Pseudorandomize to avoid long runs of present or absent trials (max run length: 4 consecutive same-type trials; standard practice)
  • Feature assignment: For conjunction search, ensure equal numbers of each distractor type (e.g., 50% share color with target, 50% share orientation; Treisman & Gelade, 1980)
  • Block structure: If multiple set sizes are used, either mix within blocks or block by set size (within-block mixing is standard; Wolfe, 2021)
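
The target-presence constraint (no more than 4 consecutive same-type trials) can be enforced by rejection shuffling; a sketch, with an illustrative function name:

```python
import random

def presence_sequence(n_present, n_absent, max_run=4, rng=None):
    """Shuffle present/absent trials, re-shuffling until no run of
    identical trial types exceeds max_run."""
    rng = rng or random.Random()
    seq = [True] * n_present + [False] * n_absent

    def max_run_len(s):
        longest = run = 1
        for a, b in zip(s, s[1:]):
            run = run + 1 if a == b else 1
            longest = max(longest, run)
        return longest

    while True:
        rng.shuffle(seq)
        if max_run_len(seq) <= max_run:
            return seq

seq = presence_sequence(40, 40, rng=random.Random(0))
```

The same rejection approach extends to other constraints (e.g., no immediate repeats of target position).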

Common Pitfalls

  1. Not controlling for eccentricity confounds: Larger set sizes place items at greater eccentricities on average, confounding set size with acuity. Solution: Use a fixed display area and add items by filling in gaps, not by expanding the area (Wolfe et al., 1998).

  2. Interpreting null set-size effects as "pop-out" without verification: A flat slope does not guarantee parallel processing. Verify with brief presentations (100-200 ms + mask) and check that accuracy remains high (Treisman & Gelade, 1980).

  3. Ignoring the prevalence effect: With low target prevalence (<25%), observers adopt a more liberal quitting threshold, increasing miss rates from ~5% to >25% (Wolfe et al., 2005). Design accordingly for applied contexts.

  4. Using too few set sizes: Two set sizes define only a line; you cannot assess linearity or detect nonlinear search functions. Use at least 3 set sizes, preferably 4-5 (Wolfe, 2021).

  5. Not equating luminance across color conditions: Luminance differences create an unintended pop-out cue. Always measure and equate luminance (use a photometer or validated software settings; Nagy & Sanchez, 1990).

  6. Placing items too close together: Violating minimum spacing creates crowding, where items become unidentifiable not because of search difficulty but because of peripheral vision limits (Bouma, 1970; Pelli & Tillman, 2008).

  7. Confounding distractor heterogeneity with target discriminability: Adding distractor variability reduces search efficiency independently of T-D similarity. Manipulate one while controlling the other (Duncan & Humphreys, 1989).

  8. Failing to counterbalance target position: If the target systematically appears at certain locations, observers develop spatial biases. Counterbalance across quadrants and eccentricities.

Minimum Reporting Checklist

Based on current best practices in visual search research:

  • Search type (feature, conjunction, spatial configuration) and theoretical motivation
  • Set sizes used and number of trials per set size per target-presence condition
  • Target-present to target-absent ratio
  • Display parameters: eccentricity range, item size (in degrees of visual angle), minimum spacing
  • Item features: colors (in device-independent space), orientations (in degrees), sizes (in degrees)
  • Target-distractor similarity metric and value
  • Distractor composition (homogeneous vs. heterogeneous; how features were assigned)
  • Viewing distance and display specifications (size, resolution, refresh rate)
  • Timing: fixation duration, display duration, response deadline, ITI
  • Randomization scheme: how set size, target presence, and target position were randomized
  • Search slope values (ms/item) with confidence intervals for target-present and target-absent
  • Slope ratio (absent:present) to assess self-termination
  • Error rates by condition (especially miss rates)
  • RT trimming criteria and percentage of data excluded
  • Software used for stimulus generation and presentation (with version)

References

  • Appelle, S. (1972). Perception and discrimination as a function of stimulus orientation: The "oblique effect" in man and animals. Psychological Bulletin, 78, 266-278.
  • Bouma, H. (1970). Interaction effects in parafoveal letter recognition. Nature, 226, 177-178.
  • Chun, M. M., & Wolfe, J. M. (1996). Just say no: How are visual searches terminated when there is no target present? Cognitive Psychology, 30, 39-78.
  • Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433-458.
  • Foster, D. H., & Ward, P. A. (1991). Asymmetries in oriented-line detection indicate two orthogonal filters in early vision. Proceedings of the Royal Society B, 243, 75-81.
  • Nachmias, J. (2011). Shape and size discrimination compared. Vision Research, 51, 400-407.
  • Nagy, A. L., & Sanchez, R. R. (1990). Critical color differences determined with a visual search task. Journal of the Optical Society of America A, 7, 1209-1217.
  • Pelli, D. G., & Tillman, K. A. (2008). The uncrowded window of object recognition. Nature Neuroscience, 11, 1129-1135.
  • Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.
  • Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202-238.
  • Wolfe, J. M. (2021). Guided Search 6.0: An updated model of visual search. Psychonomic Bulletin & Review, 28, 1060-1092.
  • Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419-433.
  • Wolfe, J. M., Friedman-Hill, S. R., Stewart, M. I., & O'Connell, K. M. (1992). The role of categorization in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance, 18, 34-49.
  • Wolfe, J. M., Horowitz, T. S., & Kenner, N. M. (2005). Rare items often missed in visual searches. Nature, 435, 439-440.
  • Wolfe, J. M., O'Neill, P., & Bennett, S. C. (1998). Why are there eccentricity effects in visual search? Perception & Psychophysics, 60, 140-156.

See references/array-generation-parameters.yaml for a machine-readable parameter specification.
