empathic-expressions

Intent-based code interpretation across all languages — SQL, Python, JS, YAML, Bash, and beyond

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "empathic-expressions" with this command: npx skills add simhacker/moollm/simhacker-moollm-empathic-expressions

Empathic Expressions

"Understand intent, generate correct code, teach gently."


What Is It?

Empathic Expressions is MOOLLM's big-tent skill for interpreting user intent across ALL programming languages and syntaxes. One pipeline. Many languages. Code-switching supported.

The LLM isn't a syntax parser — it's an intent interpreter. It understands what you MEAN, generates what you NEED, and teaches you the correct form as a gift.


The Philosophy

Traditional code processing:

User writes: syntactically correct code
Parser: accepts or rejects
Error: "Unexpected token at line 47"

Empathic expression processing:

User writes: approximate intent, fuzzy syntax, vernacular code
LLM: understands what you meant
Output: correct, idiomatic, working code
Teaching: "Here's how to write that properly"

This is what LLMs are great at. Lean into it.


The Empathic Suite

Empathic Expressions encompasses:

| Language | Example |
|---|---|
| Empathic SQL | "get users who signed up last week and haven't bought anything" |
| Empathic Python | "sort the list by date but newest first" |
| Empathic JavaScript | "when button clicked, show modal and disable form" |
| Empathic Bash | "find all big files older than a month and compress them" |
| Empathic YAML | "add a new character who's grumpy but secretly kind" |
| Empathic Natural | "make it faster" → identifies bottleneck and optimizes |

All under one roof. One pipeline. Seamless transitions.


Generous Interpretation

Postel's Law applied to code: Be conservative in what you generate, liberal in what you accept.

What It Does

| Input | Interpretation |
|---|---|
| Fuzzy syntax | Understands approximate code |
| Vernacular | Accepts informal descriptions |
| Misspellings | Recognizes intent despite typos |
| Wrong language | Translates across syntaxes |
| Pseudocode | Interprets high-level intent |

What It Generates

| Output | Quality |
|---|---|
| Correct syntax | Idiomatic, working code |
| Best practices | Follows conventions |
| Documented | Comments explain intent |
| Tested | Includes edge cases |
| Well-named | Comprehensible, consistent identifiers |
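
As a concrete sketch of that output bar, here is what the fuzzy request "sort the list by date but newest first" might become. The `date` field name and the dict shape are assumptions made for illustration:

```python
from datetime import date

# Fuzzy request: "sort the list by date but newest first"
# Hypothetical generated interpretation: idiomatic, documented, edge-case aware.
def newest_first(items):
    """Return items sorted by their 'date' field, newest first."""
    # An empty input is a valid edge case: returns an empty list.
    return sorted(items, key=lambda item: item["date"], reverse=True)

orders = [
    {"id": 1, "date": date(2024, 1, 5)},
    {"id": 2, "date": date(2024, 3, 9)},
    {"id": 3, "date": date(2024, 2, 1)},
]
print([o["id"] for o in newest_first(orders)])  # → [2, 3, 1]
```

Note the teaching step would echo this back: "newest first" maps to `reverse=True` on an ascending date sort.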

Naming Conventions

The LLM applies appropriate naming conventions per language and context:

| Convention | When | Examples |
|---|---|---|
| UPPER-KEBAB | K-lines, protocols, advertisements, commands | SPEED-OF-LIGHT, EMPATHIC-EXPRESSIONS, CREATE-SKILL |
| lower-kebab | URLs, YAML keys, file names, skill names | empathic-expressions, user-profile, session-log.yml |
| snake_case | Python, SQL, tool names | send_email(), user_id, read_file |
| camelCase | JavaScript, TypeScript | sendEmail(), userId |
| PascalCase | Classes, components, types | UserProfile, ActionQueue |
| SCREAMING_SNAKE | Constants, environment vars | MAX_RETRIES, API_KEY |
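
A small illustrative sketch of converting between two of these conventions; the helper names and regex approach are mine, not part of the skill:

```python
import re

# Hypothetical convention converters; names are illustrative only.
def to_snake(name):
    """camelCase/PascalCase -> snake_case (e.g. sendEmail -> send_email)."""
    # Insert an underscore before every uppercase letter except the first char.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def to_camel(name):
    """snake_case -> camelCase (e.g. send_email -> sendEmail)."""
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

print(to_snake("sendEmail"))  # → send_email
print(to_camel("user_id"))    # → userId
```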

Big-endian naming: General → Specific

```
# Good (big-endian): category first, specific last
user-profile-avatar
session-log-entry
room-description-short

# Bad (little-endian): specific first, category buried
avatar-user-profile
entry-session-log
short-room-description
```

Why big-endian:

  • Sorts related things together
  • Tab-completion finds related items
  • Grep patterns work naturally
  • Human scanning is faster
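
The first two benefits can be seen directly: a plain lexicographic sort clusters big-endian names by their category prefix. A minimal demonstration, using names from above plus a couple of invented siblings:

```python
# Big-endian names sort into category clusters automatically.
names = [
    "user-profile-avatar",
    "session-log-entry",
    "user-profile-bio",       # invented sibling, for illustration
    "session-log-rotation",   # invented sibling, for illustration
    "room-description-short",
]
for name in sorted(names):
    print(name)
# room-description-short
# session-log-entry
# session-log-rotation
# user-profile-avatar
# user-profile-bio
```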

The Teaching Gift

```yaml
generous-interpretation-protocol:

  step-1-understand:
    # Accept whatever the user wrote
    # Interpret with maximum charity
    # Model what they probably meant

  step-2-generate:
    # Produce correct, idiomatic code
    # Follow language best practices
    # Include appropriate comments

  step-3-teach:
    # Echo back the correct form
    # Show what they wrote vs. what it becomes
    # Gentle, not pedantic
    # Gift, not correction

  step-4-clarify:
    # If truly ambiguous, ASK
    # Don't guess when stakes are high
    # Prefer clarification over assumption
```

Critical: Never make unwarranted assumptions. When truly ambiguous, ask for clarification.


Code-Switching Support

Explicit Switching (Markdown Style)

First, let's query the data:

```sql
SELECT * FROM users WHERE active = true
```

Then process in Python:

```python
for user in results:
    send_welcome_email(user)
```

And deploy with bash:

```bash
kubectl apply -f deployment.yaml
```

Clean data islands. Clear boundaries. Syntax highlighting preserved.

Nesting code blocks (CommonMark/GFM standard):

  • Use a longer outer fence: four or more backticks (````) can wrap content that itself contains three-backtick (```) blocks
  • Or mix fence characters for one level of nesting: ~~~ outside, ``` inside
  • Both are widely supported (GitHub, GitLab, VS Code, most CommonMark parsers)
  • An inline backtick can be escaped with a backslash (\`), but fence lines cannot be escaped; use a longer or different fence instead

Inline Switching

When context makes it clear:

Get the user_id from the request, look it up in the database,
and return JSON with their profile and last 10 orders.

The LLM understands this involves:

  • HTTP request handling (language TBD)
  • SQL query (SELECT * FROM users WHERE id = ?)
  • JSON serialization
  • Another SQL query (SELECT * FROM orders WHERE user_id = ? LIMIT 10)

Context carries across switches. Variables established in one block are available in the next.
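
A sketch of what that inline request might compile to, assuming a minimal SQLite schema with `users` and `orders` tables (table and column names are assumptions):

```python
import json
import sqlite3

# Hypothetical minimal schema for the inline request above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
conn.executemany("INSERT INTO orders VALUES (?, 1, ?)",
                 [(n, f"item-{n}") for n in range(1, 13)])

def profile_with_orders(user_id):
    """Look up a user and return JSON with their profile and last 10 orders."""
    user = conn.execute("SELECT id, name FROM users WHERE id = ?",
                        (user_id,)).fetchone()
    orders = conn.execute(
        "SELECT item FROM orders WHERE user_id = ? ORDER BY id DESC LIMIT 10",
        (user_id,)).fetchall()
    return json.dumps({"id": user[0], "name": user[1],
                       "orders": [row[0] for row in orders]})

print(profile_with_orders(1))
```

The HTTP-handling layer is left out since the request never pinned it down; only the SQL and JSON steps are concrete enough to sketch.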

Polylinguistic Expressions

Sometimes the best expression mixes languages:

```javascript
users.filter(u => u.active)  // JS filter
  .map(u => `INSERT INTO archive VALUES (${u.id})`)  // SQL generation
  .forEach(sql => db.exec(sql))  // execution
```

Empathic Expressions handles these mashups gracefully.
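
For comparison, a hedged Python rendering of the same mashup using parameterized queries rather than string interpolation (the string-built `INSERT` above is injection-prone); the schema here is invented for illustration:

```python
import sqlite3

# Toy schema standing in for the JS example's users/archive tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
conn.execute("CREATE TABLE archive (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 1), (2, 0), (3, 1)])

# Filter active users, then archive them with ? placeholders (no SQL built
# from strings, so no injection risk).
active_ids = [row[0] for row in
              conn.execute("SELECT id FROM users WHERE active = 1")]
conn.executemany("INSERT INTO archive VALUES (?)",
                 [(uid,) for uid in active_ids])

print([row[0] for row in conn.execute("SELECT id FROM archive")])  # → [1, 3]
```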


The LLM as Code Processor

The LLM plays many roles:

| Role | Function |
|---|---|
| Pseudocode Interpreter | Executes high-level intent |
| Empathic Pretty Printer | Formats with understanding |
| Generous Linter | Catches errors, suggests fixes kindly |
| Intent Compiler | Translates intent → working code |
| Depseudofier | Converts vague to precise |

Example: The Full Pipeline

User writes:

```
get me all the users who havent logged in for like 30 days
and send them a "we miss you" email
but dont send to anyone whos unsubscribed
```

LLM interprets:

```sql
-- Empathic SQL interpretation
SELECT id, email, name
FROM users
WHERE last_login < NOW() - INTERVAL 30 DAY
  AND unsubscribed = FALSE
  AND email IS NOT NULL;
```

LLM generates:

```python
# Empathic Python generation
from datetime import datetime, timedelta

def send_win_back_emails():
    """Send 'we miss you' emails to inactive users."""
    cutoff = datetime.now() - timedelta(days=30)

    inactive_users = User.objects.filter(
        last_login__lt=cutoff,
        unsubscribed=False,
        email__isnull=False
    )

    for user in inactive_users:
        send_email(
            to=user.email,
            subject="We miss you!",
            template="win_back",
            context={"name": user.name}
        )
```

LLM teaches:

I interpreted your request as:
- "30 days" → `datetime.now() - timedelta(days=30)`
- "haven't logged in" → `last_login < cutoff`
- "unsubscribed" → `unsubscribed=False` filter
- "we miss you email" → win-back campaign template

Here's the idiomatic way to write this query...

Used Throughout MOOLLM

Empathic Expressions powers:

| System | Usage |
|---|---|
| Empathic Templates | `{{#if user.active}}` — expressions in templates |
| Buff Expressions | `trigger: "happiness > 80"` — conditions |
| Advertisements | `condition: "has_item('key')"` — capability checks |
| Action Queue | `parameters: { count: user_input }` — dynamic params |
| Mind Mirror | `curiosity: "very high"` — fuzzy state descriptions |
| Room Exits | `locked_unless: "player.has('golden_key')"` — gate conditions |
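
As an illustration of how a condition like `trigger: "happiness > 80"` might be checked against character state, here is a deliberately tiny evaluator. The grammar (one `key op number` clause) and the function name are hypothetical, not MOOLLM's actual mechanism:

```python
import operator

# Map comparison tokens to their operator functions.
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
       "<=": operator.le, "==": operator.eq}

def check_trigger(expr, state):
    """Evaluate a simple 'key op number' condition, e.g. 'happiness > 80'."""
    key, op, value = expr.split()
    return OPS[op](state[key], float(value))

print(check_trigger("happiness > 80", {"happiness": 91}))  # → True
print(check_trigger("happiness > 80", {"happiness": 42}))  # → False
```

A table-driven evaluator like this avoids `eval()` on untrusted expression strings, which matters once triggers come from user-authored YAML.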

The glue that makes everything expressive.


Clarification Protocol

When should the LLM ask for clarification?

| Situation | Action |
|---|---|
| Low stakes, clear intent | Interpret and proceed |
| Low stakes, ambiguous | Make a reasonable choice, note it |
| High stakes, clear intent | Proceed with confirmation |
| High stakes, ambiguous | ASK FIRST |
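
The decision table reads naturally as a two-key lookup; a minimal sketch (the labels and function name are illustrative):

```python
# Stakes x ambiguity -> action, mirroring the decision table above.
POLICY = {
    ("low", "clear"):      "interpret and proceed",
    ("low", "ambiguous"):  "make reasonable choice, note it",
    ("high", "clear"):     "proceed with confirmation",
    ("high", "ambiguous"): "ask first",
}

def next_action(stakes, intent):
    """Look up the clarification policy for a (stakes, intent) pair."""
    return POLICY[(stakes, intent)]

print(next_action("high", "ambiguous"))  # → ask first
```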

High stakes examples:

  • Deleting data
  • Financial transactions
  • Irreversible operations
  • Security-sensitive code

```yaml
clarification-triggers:
  always-ask:
    - "DELETE without a WHERE clause"
    - "DROP TABLE (anything)"
    - "Production deployments"
    - "Payment processing"
    - "User data exports"

  ask-if-ambiguous:
    - "Multiple valid interpretations"
    - "Missing critical parameters"
    - "Conflicting requirements"
```
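
The two SQL entries under always-ask can be approximated mechanically; a hypothetical detector covering just those cases (real guardrails would parse the statement properly):

```python
import re

def needs_clarification(sql):
    """Flag DELETE without WHERE, and any DROP TABLE."""
    stmt = sql.strip().rstrip(";").upper()
    # DELETE with no WHERE clause wipes the whole table: always ask.
    if re.match(r"DELETE\b", stmt) and " WHERE " not in f" {stmt} ":
        return True
    # DROP TABLE is irreversible: always ask.
    if re.match(r"DROP\s+TABLE\b", stmt):
        return True
    return False

print(needs_clarification("DELETE FROM users"))             # → True
print(needs_clarification("DELETE FROM users WHERE id=1"))  # → False
print(needs_clarification("DROP TABLE sessions"))           # → True
```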

Relationship to Other Skills

```yaml
# The Empathic Suite
empathic_suite:
  components:
    empathic_expressions:
      role: "interpret intent"
      feeds_into: [empathic_templates, postel]
    empathic_templates:
      role: "instantiate"
      feeds_into: [yaml_jazz]
    postel:
      role: "generous interpretation"
    yaml_jazz:
      role: "expressive style"

  philosophy: "SPEED-OF-LIGHT"
  principles:
    - "Work in vectors, delay tokenization"
    - "Preserve precision as long as possible"
    - "Minimize boundary crossings"
```


Protocol Symbol

EMPATHIC-EXPRESSIONS

Invoke when: Interpreting fuzzy user intent into working code.

See: PROTOCOLS.yml


Related Skills

Related by shared tags or category signals.

  • code-review (Coding)
  • sniffable-python (Coding)
  • self-repair (General)