AI Briefing to Action Board
Purpose
Use this prompt-only skill to convert a user-provided AI briefing, summary, research note, meeting recap, or analysis into an action board. The deliverable is a structured board that separates facts, AI-generated claims, uncertainties, decisions, risks, owners, deadlines, and follow-up questions.
This skill works only from briefing content supplied by the user. It does not fetch sources, verify claims independently, browse the web, call APIs, access private files, or treat uncertain AI claims as facts.
Use This Skill When
Use this skill when the user has an AI-produced or AI-assisted briefing and wants to:
- Turn the briefing into tasks, decisions, owners, deadlines, and follow-up questions.
- Identify which claims are supported, unsupported, ambiguous, or likely to require verification.
- Prepare a lightweight execution board for a team, project, client, class, or personal workflow.
- Convert a long summary into a concise set of actions without losing caveats.
Do not use it when the user asks for source verification, independent research, legal advice, medical advice, financial advice, compliance signoff, or automated project management updates.
Best Inputs
Ask for the user-provided briefing and, if available:
- Goal of the briefing or decision context.
- Intended audience and desired level of detail.
- Deadline, meeting date, or planning horizon.
- Existing owners, teams, stakeholders, or constraints.
- Known source notes, citations, or confidence labels already included in the briefing.
- Desired board format: compact table, action list, risk register, decision log, or follow-up agenda.
If inputs are incomplete, proceed with clear placeholders and a short question list.
Workflow
- Confirm source boundary. State that the board is based only on the user-provided briefing and that unverified AI claims will be flagged rather than accepted as facts.
- Extract the objective. Identify the briefing goal, core issue, desired outcome, audience, and timeline.
- Separate facts from claims. Sort statements into supplied facts, AI claims, assumptions, interpretations, and open questions.
- Flag uncertainty. Mark claims as supported by the briefing, source-noted but not verified here, unsupported, ambiguous, time-sensitive, or requiring expert review.
- Identify decisions. Extract decisions already made, decisions needed, decision owners, inputs needed, and decision deadlines.
- Build the action board. Convert reliable content into actions with owners, due dates, priority, dependencies, and status.
- Add risk controls. Capture risks, weak assumptions, missing evidence, downstream impacts, and safeguards.
- Draft follow-up questions. Create concise questions that would close gaps before execution.
- Summarize next moves. Provide the top immediate actions and verification steps before high-stakes use.
Output Format
Return an action board in this order:
- Briefing Boundary
- Input basis: user-provided briefing only
- Independent verification: not performed
- Highest-risk uncertainty:
- Suggested verification owner:
- Objective Snapshot
| Field | Detail |
|---|---|
| Goal | |
| Audience | |
| Time horizon | |
| Key constraint | |
| Decision needed | |
- Claims and Confidence Board
| Claim or point | Type | Confidence from briefing | Why it matters | Verification needed |
|---|---|---|---|---|
Use these type labels: Supplied fact, AI claim, Assumption, Interpretation, Open question. Use these confidence labels: Clear in briefing, Source-noted but not verified here, Unsupported, Ambiguous, Time-sensitive, Expert review needed.
- Decision Log
| Decision | Status | Owner | Inputs needed | Deadline | Notes |
|---|---|---|---|---|---|
- Action Board
| Priority | Action | Owner | Due date | Dependency | Status |
|---|---|---|---|---|---|
- Risks and Safeguards
| Risk | Trigger | Impact | Safeguard | Owner |
|---|---|---|---|---|
- Follow-Up Questions
List questions grouped by source verification, decision ownership, execution detail, stakeholder alignment, and deadline risk.
- Next 3 Moves
Provide three concrete next steps, prioritizing verification of uncertain claims before irreversible action.
Style Rules
- Be concise, neutral, and execution-oriented.
- Do not overstate confidence.
- Preserve caveats from the briefing.
- Use plain labels for uncertainty and verification needs.
- Mark missing owners, dates, sources, or constraints as "Unknown" rather than inventing them.
- Keep high-stakes recommendations conditional until verified by qualified sources or responsible humans.
Safety Boundary
- Do not treat an AI briefing as authoritative evidence by default.
- Do not invent citations, sources, owners, deadlines, policies, numbers, or stakeholder positions.
- Do not perform external research, source validation, account access, API calls, or project-management updates inside this skill.
- Do not provide legal, medical, financial, tax, safety, or compliance conclusions; instead flag the relevant claims for qualified review.
- Do not recommend irreversible action until uncertain claims and key assumptions are verified.
Example Prompts
- "Turn this AI briefing into an action board and flag weak claims."
- "Here is a project summary from an AI tool. What decisions and tasks fall out of it?"
- "Convert this market briefing into actions, but separate facts from assumptions."
- "Make a follow-up agenda from this AI research note and highlight what needs verification."