code-refactor-for-reproducibility

Use when refactoring research code for publication, documenting existing analysis scripts, creating reproducible computational workflows, or preparing code to share with collaborators. Transforms research code into publication-ready, reproducible workflows: adds documentation, implements error handling, creates environment specifications, and verifies computational reproducibility for scientific publications.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

Install skill "code-refactor-for-reproducibility" with this command: npx skills add aipoch-ai/code-refactor-for-reproducibility-1

Research Code Reproducibility Refactoring Tool

When to Use

  • Use this skill when refactoring research code for publication, adding documentation to existing analysis scripts, creating reproducible computational workflows, or preparing code for sharing with collaborators.
  • Use this skill for data analysis tasks that require explicit assumptions, bounded scope, and a reproducible output format.
  • Use this skill when you need a documented fallback path for missing inputs, execution errors, or partial evidence.

Key Features

  • Scope-focused workflow for transforming research code into publication-ready, reproducible form: documentation, error handling, environment specifications, and verified computational reproducibility.
  • Packaged executable path(s): scripts/main.py.
  • Structured execution path designed to keep outputs consistent and reviewable.

Dependencies

  • Python: 3.10+. Repository baseline for current packaged skills.
  • numpy: unspecified. Declared in requirements.txt.
  • pandas: unspecified. Declared in requirements.txt.
  • pytest: unspecified. Declared in requirements.txt.
  • scipy: unspecified. Declared in requirements.txt.
  • src: unspecified. Declared in requirements.txt.

Example Usage

cd "20260318/scientific-skills/Data Analytics/code-refactor-for-reproducibility"
python -m py_compile scripts/main.py
python scripts/main.py --help

Example run plan:

  1. Confirm the user input, output path, and any required config values.
  2. Edit the in-file CONFIG block or documented parameters if the script uses fixed settings.
  3. Run python scripts/main.py with the validated inputs.
  4. Review the generated output and return the final artifact with any assumptions called out.

Implementation Details

See the Workflow sections below for related details.

  • Execution model: validate the request, choose the packaged workflow, and produce a bounded deliverable.
  • Input controls: confirm the source files, scope limits, output format, and acceptance criteria before running any script.
  • Primary implementation surface: scripts/main.py.
  • Parameters to clarify first: input path, output path, scope filters, thresholds, and any domain-specific constraints.
  • Output discipline: keep results reproducible, identify assumptions explicitly, and avoid undocumented side effects.

Quick Check

Use this command to verify that the packaged script entry point can be parsed before deeper execution.

python -m py_compile scripts/main.py

Audit-Ready Commands

Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths.

python -m py_compile scripts/main.py
python scripts/main.py --help

Workflow

  1. Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
  2. Validate that the request matches the documented scope and stop early if the task would require unsupported assumptions.
  3. Use the packaged script path or the documented reasoning path with only the inputs that are actually available.
  4. Return a structured result that separates assumptions, deliverables, risks, and unresolved items.
  5. If execution fails or inputs are incomplete, switch to the fallback path and state exactly what blocked full completion.

Workflow Overview

Follow this sequence when refactoring a research codebase:

  1. Analyze — identify reproducibility issues in existing code
  2. Refactor — apply documentation, parameterization, and error handling
  3. Specify environment — pin dependencies and create environment files
  4. Validate — run tests and verify behaviour is unchanged

Step 1: Analyze Code for Reproducibility Issues

Read each source file and check for the following problems. Document findings before making any changes.

Checklist: missing docstrings · hardcoded absolute paths · missing random seeds · bare except: clauses · unpinned imports · unexplained magic numbers

Example — detecting issues manually:

import ast, pathlib

def find_hardcoded_paths(source: str) -> list[str]:
    """Return string literals that look like absolute paths."""
    tree = ast.parse(source)
    return [
        node.value for node in ast.walk(tree)
        if isinstance(node, ast.Constant)
        and isinstance(node.value, str)
        and node.value.startswith("/")
    ]

source = pathlib.Path("analysis.py").read_text()
print(find_hardcoded_paths(source))
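
The same AST walk pattern extends to other checklist items. A minimal sketch for flagging bare except: clauses, assuming the same source string as above:

import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` clauses."""
    tree = ast.parse(source)
    return [
        node.lineno for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

print(find_bare_excepts(source))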

Step 2: Refactor for Best Practices

Apply improvements in place. Always back up originals first.
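
A minimal backup sketch for single files edited in place (the .orig suffix is an arbitrary choice):

import shutil
from pathlib import Path

def backup(path: str) -> Path:
    """Copy a file to <name>.orig before editing it in place."""
    src = Path(path)
    dst = src.with_name(src.name + ".orig")
    shutil.copy2(src, dst)  # copy2 preserves timestamps alongside content
    return dst

backup("analysis.py")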

2a. Add docstrings


# Before
def load_data(path):
    import pandas as pd
    return pd.read_csv(path)

# After
def load_data(path: str) -> "pd.DataFrame":
    """Load a CSV dataset from disk.

    Parameters
    ----------
    path : str
        Path to the CSV file (relative to project root).

    Returns
    -------
    pd.DataFrame
        Raw dataset with original column names preserved.
    """
    import pandas as pd
    return pd.read_csv(path)
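
To spot-check progress on this step, a hedged sketch that lists public functions and classes still missing docstrings (pure standard library; the analysis.py filename is illustrative):

import ast
from pathlib import Path

def missing_docstrings(path: str) -> list[str]:
    """Return names of public functions/classes that lack a docstring."""
    tree = ast.parse(Path(path).read_text())
    return [
        node.name for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        and not node.name.startswith("_")
        and ast.get_docstring(node) is None
    ]

print(missing_docstrings("analysis.py"))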

2b. Parameterize hardcoded values

import argparse
from pathlib import Path

import pandas as pd

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", type=Path, default=Path("data/raw.csv"))
    parser.add_argument("--output", type=Path, default=Path("results/"))
    return parser.parse_args()

args = parse_args()
df = pd.read_csv(args.data)
args.output.mkdir(parents=True, exist_ok=True)

2c. Set random seeds

import random
import numpy as np

SEED = 42  # document this constant at module level

random.seed(SEED)
np.random.seed(SEED)

# scikit-learn
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(random_state=SEED)

# PyTorch
import torch
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

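For larger projects it can help to centralize seeding in one helper so every entry point uses the same routine. A sketch that treats torch as optional:

import os
import random

import numpy as np

def set_seed(seed: int = 42) -> None:
    """Seed every RNG the pipeline touches from a single place."""
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)  # only affects subprocesses started afterwards
    try:
        import torch
        torch.manual_seed(seed)
        torch.backends.cudnn.deterministic = True
    except ImportError:
        pass  # torch is optional in this sketch
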
2d. Add error handling and logging

import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s")
logger = logging.getLogger(__name__)

def load_data(path: Path) -> "pd.DataFrame":
    """Load dataset with validation."""
    import pandas as pd
    if not path.exists():
        raise FileNotFoundError(f"Data file not found: {path}")
    logger.info("Loading data from %s", path)
    df = pd.read_csv(path)
    if df.empty:
        raise ValueError(f"Loaded dataframe is empty: {path}")
    logger.info("Loaded %d rows, %d columns", *df.shape)
    return df
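
A matching unit test turns the new validation behaviour into a regression check. A sketch using pytest; the analysis module name is hypothetical:

from pathlib import Path

import pytest

from analysis import load_data  # hypothetical module containing the function above

def test_load_data_raises_on_missing_file(tmp_path: Path):
    """Missing inputs should fail loudly instead of propagating silently."""
    with pytest.raises(FileNotFoundError):
        load_data(tmp_path / "does_not_exist.csv")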

Step 3: Generate Environment Specifications

See references/environment-setup.md for full Dockerfile and Conda environment templates.

requirements.txt (pip)

pip install pipreqs
pipreqs src/ --output requirements.txt --force

Verify resolution:

python -m venv .venv_test && source .venv_test/bin/activate
pip install -r requirements.txt
python -c "import pandas, numpy, sklearn"
deactivate && rm -rf .venv_test

environment.yml (Conda)

name: my-research-env
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.9
  - numpy=1.24.3
  - pandas=2.0.1
  - scikit-learn=1.2.2
  - matplotlib=3.7.1
  - pip:
    - some-pip-only-package==0.5.0

Create and activate the environment:

conda env create -f environment.yml
conda activate my-research-env

Step 4: Create Documentation

README structure

Generate a README.md containing at minimum:


## Requirements
<!-- List Python version and key packages with versions -->

## Installation
conda env create -f environment.yml
conda activate my-research-env

## Data
<!-- Describe input data format, source, and where to place files -->

## Running the Analysis
python main.py --data data/raw.csv --output results/

## Expected Outputs
<!-- Describe files created and how to interpret them -->

## Reproducing Results
  • Random seed: 42 (set in config.py)
  • Hardware: results validated on CPU; GPU results may differ slightly
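
To fill in the Requirements section with the exact versions in use, a small standard-library sketch (the package list is illustrative):

from importlib.metadata import PackageNotFoundError, version

for pkg in ["numpy", "pandas", "scikit-learn", "matplotlib"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"# {pkg}: not installed")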

Step 5: Validate Reproducibility

After all changes, verify that behaviour is unchanged:


# 1. Run the full pipeline and capture output checksums
python main.py --data data/raw.csv --output results/
md5sum results/*.csv > checksums_refactored.md5
diff checksums_original.md5 checksums_refactored.md5

# 2. Run unit tests
pytest tests/ -v --tb=short

# 3. Confirm determinism across two clean runs
python main.py --output results_run1/
python main.py --output results_run2/
diff -r results_run1/ results_run2/
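
Where md5sum is unavailable (for example on macOS), a standard-library equivalent works for the checksum comparison. A sketch, hashing with SHA-256 rather than MD5:

import hashlib
from pathlib import Path

def checksum_dir(directory: str, pattern: str = "*.csv") -> dict[str, str]:
    """Return {filename: sha256 hex digest}, sorted for stable diffs."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(directory).glob(pattern))
    }

print(checksum_dir("results/"))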

Reproducibility verification checklist:

  • Output checksums match pre-refactor baseline
  • All tests pass
  • Pipeline runs twice and produces identical outputs
  • requirements.txt / environment.yml installs cleanly in a fresh environment
  • No absolute paths remain in source files
  • Random seeds are set and documented
  • All public functions have docstrings
  • README contains complete reproduction instructions

Best Practices Summary

  • Relative paths only
  • Pin dependency versions
  • Set random seeds
  • Docstrings on all public functions
  • Validate outputs against a baseline
  • Automate environment setup

References

  • references/guide.md — Comprehensive user guide
  • references/environment-setup.md — Dockerfile and full environment templates
  • references/examples/ — Working code examples
  • references/api-docs/ — Complete API documentation

Skill ID: 455 | Version: 1.0 | License: MIT

Output Requirements

Every final response should make these items explicit when they are relevant:

  • Objective or requested deliverable
  • Inputs used and assumptions introduced
  • Workflow or decision path
  • Core result, recommendation, or artifact
  • Constraints, risks, caveats, or validation needs
  • Unresolved items and next-step checks

Error Handling

  • If required inputs are missing, state exactly which fields are missing and request only the minimum additional information.
  • If the task goes outside the documented scope, stop instead of guessing or silently widening the assignment.
  • If scripts/main.py fails, report the failure point, summarize what still can be completed safely, and provide a manual fallback.
  • Do not fabricate files, citations, data, search results, or execution outcomes.

Input Validation

This skill accepts requests that match the documented purpose of code-refactor-for-reproducibility and include enough context to complete the workflow safely.

Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:

code-refactor-for-reproducibility only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.

Response Template

Use the following fixed structure for non-trivial requests:

  1. Objective
  2. Inputs Received
  3. Assumptions
  4. Workflow
  5. Deliverable
  6. Risks and Limits
  7. Next Checks

If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
