# Skills Registry

## 3p_updates

## Instructions
You are being asked to write a 3P update. 3P stands for "Progress, Plans, Problems." The main audience is executives, leadership, and other teammates. 3Ps are meant to be very succinct and to-the-point: something you can read in 30-60 seconds or less. They're also written for people with some, but not a lot of, context on what the team does.

3Ps can cover a team of any size, ranging all the way up to the entire company. The bigger the team, the less granular the tasks should be. For example, "mobile team" might have "shipped feature" or "fixed bugs," whereas the company might have really meaty 3Ps, like "hired 20 new people" or "closed 10 new deals." 

They represent the work of the team across a time period, almost always one week. They include three sections:
1) Progress: what the team has accomplished over the past time period. Focus mainly on things shipped, milestones achieved, tasks completed, etc.
2) Plans: what the team plans to do over the next time period. Focus on what things are top-of-mind, really high priority, etc. for the team.
3) Problems: anything that is slowing the team down. This could be things like too few people, bugs or blockers that are preventing the team from moving forward, some deal that fell through, etc.

Before writing them, make sure that you know the team name. If it's not specified, ask explicitly which team you're writing for.


## Tools Available
Whenever possible, try to pull from available sources to get the information you need:
- Slack: posts from team members with their updates - ideally look for posts in large channels with lots of reactions
- Google Drive: docs written by critical team members with lots of views
- Email: emails with lots of responses or lots of content that seems relevant
- Calendar: non-recurring meetings that have a lot of importance, like product reviews, etc.


Try to gather as much context as you can, focusing on things that cover the time period you're writing for:
- Progress: anything between a week ago and today
- Plans: anything from today to the next week
- Problems: anything between a week ago and today


If you don't have access, ask the user what they want to cover. They might also provide these details directly, in which case you're mostly just fitting their content to this format.

## Workflow

1. **Clarify scope**: Confirm the team name and time period (usually the past week for Progress/Problems, the next week for Plans)
2. **Gather information**: Use available tools or ask the user directly
3. **Draft the update**: Follow the strict formatting guidelines
4. **Review**: Ensure it's concise (30-60 seconds to read) and data-driven

## Formatting

The format is always the same and strictly enforced. Never use any formatting other than this. Pick an emoji that is fun and captures the vibe of the team and the update.

[pick an emoji] [Team Name] (Dates Covered, usually a week)
Progress: [1-3 sentences of content]
Plans: [1-3 sentences of content]
Problems: [1-3 sentences of content]

Each section should be no more than 1-3 sentences: clear, to the point. It should be data-driven, and generally include metrics where possible. The tone should be very matter-of-fact, not super prose-heavy.
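
For example, a hypothetical update for a made-up team might look like:

🚀 Mobile Team (Oct 6-10)
Progress: Shipped offline mode to all iOS users; crash-free rate rose from 98.2% to 99.4%.
Plans: Start the Android rollout and finalize the Q4 roadmap with design.
Problems: Two backend blockers are delaying push-notification work, and the team is still down one iOS engineer.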

## agent_skills_spec

# Agent Skills Spec

The spec is now located at <https://agentskills.io/specification>

## arctic_frost

# Arctic Frost

A cool and crisp winter-inspired theme that conveys clarity, precision, and professionalism.

## Color Palette

- **Ice Blue**: `#d4e4f7` - Light backgrounds and highlights
- **Steel Blue**: `#4a6fa5` - Primary accent color
- **Silver**: `#c0c0c0` - Metallic accent elements
- **Crisp White**: `#fafafa` - Clean backgrounds and text

## Typography

- **Headers**: DejaVu Sans Bold
- **Body Text**: DejaVu Sans

## Best Used For

Healthcare presentations, technology solutions, winter sports, clean tech, pharmaceutical content.

## botanical_garden

# Botanical Garden

A fresh and organic theme featuring vibrant garden-inspired colors for lively presentations.

## Color Palette

- **Fern Green**: `#4a7c59` - Rich natural green
- **Marigold**: `#f9a620` - Bright floral accent
- **Terracotta**: `#b7472a` - Earthy warm tone
- **Cream**: `#f5f3ed` - Soft neutral backgrounds

## Typography

- **Headers**: DejaVu Serif Bold
- **Body Text**: DejaVu Sans

## Best Used For

Garden centers, food presentations, farm-to-table content, botanical brands, natural products.

## company_newsletter

## Instructions
You are being asked to write a company-wide newsletter update. You are meant to summarize the company's past week or month in the form of a newsletter that the entire company will read. It should be roughly 20-25 bullet points long. It will be sent via Slack and email, so make it easy to consume in those formats.

Ideally it includes the following attributes:
- Lots of links: pulling documents from Google Drive that are very relevant, linking to prominent Slack messages in announce channels and from executives, perhaps referencing emails that went company-wide, highlighting significant things that have happened in the company.
- Short and to-the-point: each bullet should probably be no longer than ~1-2 sentences
- Use the "we" voice, as you are part of the company. Many of the bullets should say "we did this" or "we did that"

## Tools to use
If you have access to the following tools, please try to use them. If not, let the user know directly that your responses would be better if they gave you access.

- Slack: look for messages in channels with lots of people, with lots of reactions or lots of responses within the thread
- Email: look for things from executives that discuss company-wide announcements
- Calendar: if there were meetings with large attendee lists, particularly things like All-Hands meetings, big company announcements, etc. If there were documents attached to those meetings, those are great links to include.
- Documents: if there were new docs published in the last week or two that got a lot of attention, you can link them. These should be things like company-wide vision docs, plans for the upcoming quarter or half, things authored by critical executives, etc.
- External press: if you see references to articles or press we've received over the past week, that could be really cool too.

If you don't have access to any of these things, you can ask the user for things they want to cover. In this case, you'll mostly just be polishing their content and fitting it to this format.

## Sections
The company is pretty big: 1000+ people. There are a variety of different teams and initiatives going on across the company. To make sure the update works well, try breaking it into sections of similar things. You might break into clusters like {product development, go to market, finance} or {recruiting, execution, vision}, or {external news, internal news} etc. Try to make sure the different areas of the company are highlighted well.

## Prioritization
Focus on:
- Company-wide impact (not team-specific details)
- Announcements from leadership
- Major milestones and achievements
- Information that affects most employees
- External recognition or press

Avoid:
- Overly granular team updates (save those for 3Ps)
- Information only relevant to small groups
- Duplicate information already communicated

## Example Formats

:megaphone: Company Announcements
- Announcement 1
- Announcement 2
- Announcement 3

:dart: Progress on Priorities
- Area 1
    - Sub-area 1
    - Sub-area 2
    - Sub-area 3
- Area 2
    - Sub-area 1
    - Sub-area 2
    - Sub-area 3
- Area 3
    - Sub-area 1
    - Sub-area 2
    - Sub-area 3

:pillar: Leadership Updates
- Post 1
- Post 2
- Post 3

:thread: Social Updates
- Update 1
- Update 2
- Update 3

## desert_rose

# Desert Rose

A soft and sophisticated theme with dusty, muted tones perfect for elegant presentations.

## Color Palette

- **Dusty Rose**: `#d4a5a5` - Soft primary color
- **Clay**: `#b87d6d` - Earthy accent
- **Sand**: `#e8d5c4` - Warm neutral backgrounds
- **Deep Burgundy**: `#5d2e46` - Rich dark contrast

## Typography

- **Headers**: FreeSans Bold
- **Body Text**: FreeSans

## Best Used For

Fashion presentations, beauty brands, wedding planning, interior design, boutique businesses.

## docx_js

# DOCX Library Tutorial

Generate .docx files with JavaScript/TypeScript.

**Important: Read this entire document before starting.** Critical formatting rules and common pitfalls are covered throughout - skipping sections may result in corrupted files or rendering issues.

## Setup
Assumes docx is already installed globally
If not installed: `npm install -g docx`

```javascript
const { Document, Packer, Paragraph, TextRun, Table, TableRow, TableCell, ImageRun, Media, 
        Header, Footer, AlignmentType, PageOrientation, LevelFormat, ExternalHyperlink, 
        InternalHyperlink, TableOfContents, HeadingLevel, BorderStyle, WidthType, TabStopType, 
        TabStopPosition, UnderlineType, ShadingType, VerticalAlign, SymbolRun, PageNumber,
        FootnoteReferenceRun, Footnote, PageBreak } = require('docx');
const fs = require('fs'); // used below for reading images and writing the output file

// Create & Save
const doc = new Document({ sections: [{ children: [/* content */] }] });
Packer.toBuffer(doc).then(buffer => fs.writeFileSync("doc.docx", buffer)); // Node.js
Packer.toBlob(doc).then(blob => { /* download logic */ }); // Browser
```

## Text & Formatting
```javascript
// IMPORTANT: Never use \n for line breaks - always use separate Paragraph elements
// ❌ WRONG: new TextRun("Line 1\nLine 2")
// ✅ CORRECT: new Paragraph({ children: [new TextRun("Line 1")] }), new Paragraph({ children: [new TextRun("Line 2")] })

// Basic text with all formatting options
new Paragraph({
  alignment: AlignmentType.CENTER,
  spacing: { before: 200, after: 200 },
  indent: { left: 720, right: 720 },
  children: [
    new TextRun({ text: "Bold", bold: true }),
    new TextRun({ text: "Italic", italics: true }),
    new TextRun({ text: "Underlined", underline: { type: UnderlineType.DOUBLE, color: "FF0000" } }),
    new TextRun({ text: "Colored", color: "FF0000", size: 28, font: "Arial" }), // Arial default
    new TextRun({ text: "Highlighted", highlight: "yellow" }),
    new TextRun({ text: "Strikethrough", strike: true }),
    new TextRun({ text: "x2", superScript: true }),
    new TextRun({ text: "H2O", subScript: true }),
    new TextRun({ text: "SMALL CAPS", smallCaps: true }),
    new SymbolRun({ char: "2022", font: "Symbol" }), // Bullet •
    new SymbolRun({ char: "00A9", font: "Arial" })   // Copyright © - Arial for symbols
  ]
})
```

## Styles & Professional Formatting

```javascript
const doc = new Document({
  styles: {
    default: { document: { run: { font: "Arial", size: 24 } } }, // 12pt default
    paragraphStyles: [
      // Document title style - override built-in Title style
      { id: "Title", name: "Title", basedOn: "Normal",
        run: { size: 56, bold: true, color: "000000", font: "Arial" },
        paragraph: { spacing: { before: 240, after: 120 }, alignment: AlignmentType.CENTER } },
      // IMPORTANT: Override built-in heading styles by using their exact IDs
      { id: "Heading1", name: "Heading 1", basedOn: "Normal", next: "Normal", quickFormat: true,
        run: { size: 32, bold: true, color: "000000", font: "Arial" }, // 16pt
        paragraph: { spacing: { before: 240, after: 240 }, outlineLevel: 0 } }, // Required for TOC
      { id: "Heading2", name: "Heading 2", basedOn: "Normal", next: "Normal", quickFormat: true,
        run: { size: 28, bold: true, color: "000000", font: "Arial" }, // 14pt
        paragraph: { spacing: { before: 180, after: 180 }, outlineLevel: 1 } },
      // Custom styles use your own IDs
      { id: "myStyle", name: "My Style", basedOn: "Normal",
        run: { size: 28, bold: true, color: "000000" },
        paragraph: { spacing: { after: 120 }, alignment: AlignmentType.CENTER } }
    ],
    characterStyles: [{ id: "myCharStyle", name: "My Char Style",
      run: { color: "FF0000", bold: true, underline: { type: UnderlineType.SINGLE } } }]
  },
  sections: [{
    properties: { page: { margin: { top: 1440, right: 1440, bottom: 1440, left: 1440 } } },
    children: [
      new Paragraph({ heading: HeadingLevel.TITLE, children: [new TextRun("Document Title")] }), // Uses overridden Title style
      new Paragraph({ heading: HeadingLevel.HEADING_1, children: [new TextRun("Heading 1")] }), // Uses overridden Heading1 style
      new Paragraph({ style: "myStyle", children: [new TextRun("Custom paragraph style")] }),
      new Paragraph({ children: [
        new TextRun("Normal with "),
        new TextRun({ text: "custom char style", style: "myCharStyle" })
      ]})
    ]
  }]
});
```

**Professional Font Combinations:**
- **Arial (Headers) + Arial (Body)** - Most universally supported, clean and professional
- **Times New Roman (Headers) + Arial (Body)** - Classic serif headers with modern sans-serif body
- **Georgia (Headers) + Verdana (Body)** - Optimized for screen reading, elegant contrast

**Key Styling Principles:**
- **Override built-in styles**: Use exact IDs like "Heading1", "Heading2", "Heading3" to override Word's built-in heading styles
- **HeadingLevel constants**: `HeadingLevel.HEADING_1` uses "Heading1" style, `HeadingLevel.HEADING_2` uses "Heading2" style, etc.
- **Include outlineLevel**: Set `outlineLevel: 0` for H1, `outlineLevel: 1` for H2, etc. to ensure TOC works correctly
- **Use custom styles** instead of inline formatting for consistency
- **Set a default font** using `styles.default.document.run.font` - Arial is universally supported
- **Establish visual hierarchy** with different font sizes (titles > headers > body)
- **Add proper spacing** with `before` and `after` paragraph spacing
- **Use colors sparingly**: Default to black (000000) and shades of gray for titles and headings (heading 1, heading 2, etc.)
- **Set consistent margins** (1440 = 1 inch is standard)


## Lists (ALWAYS USE PROPER LISTS - NEVER USE UNICODE BULLETS)
```javascript
// Bullets - ALWAYS use the numbering config, NOT unicode symbols
// CRITICAL: Use LevelFormat.BULLET constant, NOT the string "bullet"
const doc = new Document({
  numbering: {
    config: [
      { reference: "bullet-list",
        levels: [{ level: 0, format: LevelFormat.BULLET, text: "•", alignment: AlignmentType.LEFT,
          style: { paragraph: { indent: { left: 720, hanging: 360 } } } }] },
      { reference: "first-numbered-list",
        levels: [{ level: 0, format: LevelFormat.DECIMAL, text: "%1.", alignment: AlignmentType.LEFT,
          style: { paragraph: { indent: { left: 720, hanging: 360 } } } }] },
      { reference: "second-numbered-list", // Different reference = restarts at 1
        levels: [{ level: 0, format: LevelFormat.DECIMAL, text: "%1.", alignment: AlignmentType.LEFT,
          style: { paragraph: { indent: { left: 720, hanging: 360 } } } }] }
    ]
  },
  sections: [{
    children: [
      // Bullet list items
      new Paragraph({ numbering: { reference: "bullet-list", level: 0 },
        children: [new TextRun("First bullet point")] }),
      new Paragraph({ numbering: { reference: "bullet-list", level: 0 },
        children: [new TextRun("Second bullet point")] }),
      // Numbered list items
      new Paragraph({ numbering: { reference: "first-numbered-list", level: 0 },
        children: [new TextRun("First numbered item")] }),
      new Paragraph({ numbering: { reference: "first-numbered-list", level: 0 },
        children: [new TextRun("Second numbered item")] }),
      // ⚠️ CRITICAL: Different reference = INDEPENDENT list that restarts at 1
      // Same reference = CONTINUES previous numbering
      new Paragraph({ numbering: { reference: "second-numbered-list", level: 0 },
        children: [new TextRun("Starts at 1 again (because different reference)")] })
    ]
  }]
});

// ⚠️ CRITICAL NUMBERING RULE: Each reference creates an INDEPENDENT numbered list
// - Same reference = continues numbering (1, 2, 3... then 4, 5, 6...)
// - Different reference = restarts at 1 (1, 2, 3... then 1, 2, 3...)
// Use unique reference names for each separate numbered section!

// ⚠️ CRITICAL: NEVER use unicode bullets - they create fake lists that don't work properly
// new TextRun("• Item")           // WRONG
// new SymbolRun({ char: "2022" }) // WRONG
// ✅ ALWAYS use numbering config with LevelFormat.BULLET for real Word lists
```

## Tables
```javascript
// Complete table with margins, borders, headers, and bullet points
const tableBorder = { style: BorderStyle.SINGLE, size: 1, color: "CCCCCC" };
const cellBorders = { top: tableBorder, bottom: tableBorder, left: tableBorder, right: tableBorder };

new Table({
  columnWidths: [4680, 4680], // ⚠️ CRITICAL: Set column widths at table level - values in DXA (twentieths of a point)
  margins: { top: 100, bottom: 100, left: 180, right: 180 }, // Set once for all cells
  rows: [
    new TableRow({
      tableHeader: true,
      children: [
        new TableCell({
          borders: cellBorders,
          width: { size: 4680, type: WidthType.DXA }, // ALSO set width on each cell
          // ⚠️ CRITICAL: Always use ShadingType.CLEAR to prevent black backgrounds in Word.
          shading: { fill: "D5E8F0", type: ShadingType.CLEAR }, 
          verticalAlign: VerticalAlign.CENTER,
          children: [new Paragraph({ 
            alignment: AlignmentType.CENTER,
            children: [new TextRun({ text: "Header", bold: true, size: 22 })]
          })]
        }),
        new TableCell({
          borders: cellBorders,
          width: { size: 4680, type: WidthType.DXA }, // ALSO set width on each cell
          shading: { fill: "D5E8F0", type: ShadingType.CLEAR },
          children: [new Paragraph({ 
            alignment: AlignmentType.CENTER,
            children: [new TextRun({ text: "Bullet Points", bold: true, size: 22 })]
          })]
        })
      ]
    }),
    new TableRow({
      children: [
        new TableCell({
          borders: cellBorders,
          width: { size: 4680, type: WidthType.DXA }, // ALSO set width on each cell
          children: [new Paragraph({ children: [new TextRun("Regular data")] })]
        }),
        new TableCell({
          borders: cellBorders,
          width: { size: 4680, type: WidthType.DXA }, // ALSO set width on each cell
          children: [
            new Paragraph({ 
              numbering: { reference: "bullet-list", level: 0 },
              children: [new TextRun("First bullet point")] 
            }),
            new Paragraph({ 
              numbering: { reference: "bullet-list", level: 0 },
              children: [new TextRun("Second bullet point")] 
            })
          ]
        })
      ]
    })
  ]
})
```

**IMPORTANT: Table Width & Borders**
- Use BOTH `columnWidths: [width1, width2, ...]` array AND `width: { size: X, type: WidthType.DXA }` on each cell
- Values in DXA (twentieths of a point): 1440 = 1 inch, Letter usable width = 9360 DXA (with 1" margins)
- Apply borders to individual `TableCell` elements, NOT the `Table` itself

**Precomputed Column Widths (Letter size with 1" margins = 9360 DXA total):**
- **2 columns:** `columnWidths: [4680, 4680]` (equal width)
- **3 columns:** `columnWidths: [3120, 3120, 3120]` (equal width)

## Links & Navigation
```javascript
// TOC (requires headings) - CRITICAL: Use HeadingLevel only, NOT custom styles
// ❌ WRONG: new Paragraph({ heading: HeadingLevel.HEADING_1, style: "customHeader", children: [new TextRun("Title")] })
// ✅ CORRECT: new Paragraph({ heading: HeadingLevel.HEADING_1, children: [new TextRun("Title")] })
new TableOfContents("Table of Contents", { hyperlink: true, headingStyleRange: "1-3" }),

// External link
new Paragraph({
  children: [new ExternalHyperlink({
    children: [new TextRun({ text: "Google", style: "Hyperlink" })],
    link: "https://www.google.com"
  })]
}),

// Internal link & bookmark
new Paragraph({
  children: [new InternalHyperlink({
    children: [new TextRun({ text: "Go to Section", style: "Hyperlink" })],
    anchor: "section1"
  })]
}),
new Paragraph({
  children: [new TextRun("Section Content")],
  bookmark: { id: "section1", name: "section1" }
}),
```

## Images & Media
```javascript
// Basic image with sizing & positioning
// CRITICAL: Always specify 'type' parameter - it's REQUIRED for ImageRun
new Paragraph({
  alignment: AlignmentType.CENTER,
  children: [new ImageRun({
    type: "png", // NEW REQUIREMENT: Must specify image type (png, jpg, jpeg, gif, bmp, svg)
    data: fs.readFileSync("image.png"),
    transformation: { width: 200, height: 150, rotation: 0 }, // rotation in degrees
    altText: { title: "Logo", description: "Company logo", name: "Name" } // IMPORTANT: All three fields are required
  })]
})
```

## Page Breaks
```javascript
// Manual page break
new Paragraph({ children: [new PageBreak()] }),

// Page break before paragraph
new Paragraph({
  pageBreakBefore: true,
  children: [new TextRun("This starts on a new page")]
})

// ⚠️ CRITICAL: NEVER use PageBreak standalone - it will create invalid XML that Word cannot open
// ❌ WRONG: new PageBreak() 
// ✅ CORRECT: new Paragraph({ children: [new PageBreak()] })
```

## Headers/Footers & Page Setup
```javascript
const doc = new Document({
  sections: [{
    properties: {
      page: {
        margin: { top: 1440, right: 1440, bottom: 1440, left: 1440 }, // 1440 = 1 inch
        size: { orientation: PageOrientation.LANDSCAPE },
        pageNumbers: { start: 1, formatType: "decimal" } // "upperRoman", "lowerRoman", "upperLetter", "lowerLetter"
      }
    },
    headers: {
      default: new Header({ children: [new Paragraph({ 
        alignment: AlignmentType.RIGHT,
        children: [new TextRun("Header Text")]
      })] })
    },
    footers: {
      default: new Footer({ children: [new Paragraph({ 
        alignment: AlignmentType.CENTER,
        children: [new TextRun("Page "), new TextRun({ children: [PageNumber.CURRENT] }), new TextRun(" of "), new TextRun({ children: [PageNumber.TOTAL_PAGES] })]
      })] })
    },
    children: [/* content */]
  }]
});
```

## Tabs
```javascript
new Paragraph({
  tabStops: [
    { type: TabStopType.LEFT, position: TabStopPosition.MAX / 4 },
    { type: TabStopType.CENTER, position: TabStopPosition.MAX / 2 },
    { type: TabStopType.RIGHT, position: TabStopPosition.MAX * 3 / 4 }
  ],
  children: [new TextRun("Left\tCenter\tRight")]
})
```

## Constants & Quick Reference
- **Underlines:** `SINGLE`, `DOUBLE`, `WAVY`, `DASH`
- **Borders:** `SINGLE`, `DOUBLE`, `DASHED`, `DOTTED`  
- **Numbering:** `DECIMAL` (1,2,3), `UPPER_ROMAN` (I,II,III), `LOWER_LETTER` (a,b,c)
- **Tabs:** `LEFT`, `CENTER`, `RIGHT`, `DECIMAL`
- **Symbols:** `"2022"` (•), `"00A9"` (©), `"00AE"` (®), `"2122"` (™), `"00B0"` (°), `"F070"` (✓), `"F0FC"` (✗)

## Critical Issues & Common Mistakes
- **CRITICAL: PageBreak must ALWAYS be inside a Paragraph** - standalone PageBreak creates invalid XML that Word cannot open
- **ALWAYS use ShadingType.CLEAR for table cell shading** - Never use ShadingType.SOLID (causes black background).
- Measurements in DXA (1440 = 1 inch) | Each table cell needs ≥1 Paragraph | TOC requires HeadingLevel styles only
- **ALWAYS use custom styles** with Arial font for professional appearance and proper visual hierarchy
- **ALWAYS set a default font** using `styles.default.document.run.font` - Arial recommended
- **ALWAYS use columnWidths array for tables** + individual cell widths for compatibility
- **NEVER use unicode symbols for bullets** - always use proper numbering configuration with `LevelFormat.BULLET` constant (NOT the string "bullet")
- **NEVER use \n for line breaks anywhere** - always use separate Paragraph elements for each line
- **ALWAYS use TextRun objects within Paragraph children** - never use text property directly on Paragraph
- **CRITICAL for images**: ImageRun REQUIRES `type` parameter - always specify "png", "jpg", "jpeg", "gif", "bmp", or "svg"
- **CRITICAL for bullets**: Must use `LevelFormat.BULLET` constant, not string "bullet", and include `text: "•"` for the bullet character
- **CRITICAL for numbering**: Each numbering reference creates an INDEPENDENT list. Same reference = continues numbering (1,2,3 then 4,5,6). Different reference = restarts at 1 (1,2,3 then 1,2,3). Use unique reference names for each separate numbered section!
- **CRITICAL for TOC**: When using TableOfContents, headings must use HeadingLevel ONLY - do NOT add custom styles to heading paragraphs or TOC will break
- **Tables**: Set `columnWidths` array + individual cell widths, apply borders to cells not table
- **Set table margins at TABLE level** for consistent cell padding (avoids repetition per cell)

## faq_answers

## Instructions
You are an assistant for answering questions being asked across the company. Every week, lots of questions come up company-wide, and we want the company to be well-informed and on the same page. Your job is to produce a set of frequently asked questions that employees are asking and attempt to answer them. Specifically, your job is to do two things:

- Find questions that are big sources of confusion for lots of employees at the company, generally about things that affect a large portion of the employee base
- Attempt to give a concise, summarized answer to each question in order to minimize confusion.

Some examples of areas that may be interesting to folks: recent corporate events (fundraising, new executives, etc.), upcoming launches, hiring progress, changes to vision or focus, etc.


## Tools Available
You should use the company's available tools, where communication and work happen. For most companies, that looks something like this:
- Slack: questions being asked across the company - it could be questions in response to posts with lots of responses, questions being asked with lots of reactions or thumbs up to show support, or anything else to show that a large number of employees want to ask the same things
- Email: emails with FAQs written directly in them can be a good source as well
- Documents: docs in places like Google Drive, linked on calendar events, etc. can also be a good source of FAQs, either directly added or inferred based on the contents of the doc

## Formatting
The formatting should be pretty basic:

- *Question*: [insert question - 1 sentence]
- *Answer*: [insert answer - 1-2 sentences]
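
For example (entirely hypothetical content):

- *Question*: When does the new parental leave policy take effect?
- *Answer*: January 1, per the People team's post in the announcements channel; see the policy doc linked there for details.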

## Guidance
Make sure you're being holistic in your questions. Don't focus too much on the user in question or the team they are a part of; try to capture the entire company. Read across all the available tools and produce responses that are relevant to everyone at the company.

## Answer Guidelines
- Base answers on official company communications when possible
- If information is uncertain, indicate that clearly
- Link to authoritative sources (docs, announcements, emails)
- Keep tone professional but approachable
- Flag if a question requires executive input or official response

## forest_canopy

# Forest Canopy

A natural and grounded theme featuring earth tones inspired by dense forest environments.

## Color Palette

- **Forest Green**: `#2d4a2b` - Primary dark green
- **Sage**: `#7d8471` - Muted green accent
- **Olive**: `#a4ac86` - Light accent color
- **Ivory**: `#faf9f6` - Backgrounds and text

## Typography

- **Headers**: FreeSerif Bold
- **Body Text**: FreeSans

## Best Used For

Environmental presentations, sustainability reports, outdoor brands, wellness content, organic products.

## forms

**CRITICAL: You MUST complete these steps in order. Do not skip ahead to writing code.**

If you need to fill out a PDF form, first check to see if the PDF has fillable form fields. Run this script from this file's directory:
`python scripts/check_fillable_fields.py <file.pdf>`, and depending on the result go to either the "Fillable fields" or "Non-fillable fields" section below and follow those instructions.

# Fillable fields
If the PDF has fillable form fields:
- Run this script from this file's directory: `python scripts/extract_form_field_info.py <input.pdf> <field_info.json>`. It will create a JSON file with a list of fields in this format:
```
[
  {
    "field_id": (unique ID for the field),
    "page": (page number, 1-based),
    "rect": ([left, bottom, right, top] bounding box in PDF coordinates, y=0 is the bottom of the page),
    "type": ("text", "checkbox", "radio_group", or "choice"),
  },
  // Checkboxes have "checked_value" and "unchecked_value" properties:
  {
    "field_id": (unique ID for the field),
    "page": (page number, 1-based),
    "type": "checkbox",
    "checked_value": (Set the field to this value to check the checkbox),
    "unchecked_value": (Set the field to this value to uncheck the checkbox),
  },
  // Radio groups have a "radio_options" list with the possible choices.
  {
    "field_id": (unique ID for the field),
    "page": (page number, 1-based),
    "type": "radio_group",
    "radio_options": [
      {
        "value": (set the field to this value to select this radio option),
        "rect": (bounding box for the radio button for this option)
      },
      // Other radio options
    ]
  },
  // Multiple choice fields have a "choice_options" list with the possible choices:
  {
    "field_id": (unique ID for the field),
    "page": (page number, 1-based),
    "type": "choice",
    "choice_options": [
      {
        "value": (set the field to this value to select this option),
        "text": (display text of the option)
      },
      // Other choice options
    ],
  }
]
```
- Convert the PDF to PNGs (one image for each page) with this script (run from this file's directory):
`python scripts/convert_pdf_to_images.py <file.pdf> <output_directory>`
Then analyze the images to determine the purpose of each form field (make sure to convert the bounding box PDF coordinates to image coordinates).
- Create a `field_values.json` file in this format with the values to be entered for each field:
```
[
  {
    "field_id": "last_name", // Must match the field_id from `extract_form_field_info.py`
    "description": "The user's last name",
    "page": 1, // Must match the "page" value in field_info.json
    "value": "Simpson"
  },
  {
    "field_id": "Checkbox12",
    "description": "Checkbox to be checked if the user is 18 or over",
    "page": 1,
    "value": "/On" // If this is a checkbox, use its "checked_value" value to check it. If it's a radio button group, use one of the "value" values in "radio_options".
  },
  // more fields
]
```
- Run the `fill_fillable_fields.py` script from this file's directory to create a filled-in PDF:
`python scripts/fill_fillable_fields.py <input pdf> <field_values.json> <output pdf>`
This script will verify that the field IDs and values you provide are valid; if it prints error messages, correct the appropriate fields and try again.

# Non-fillable fields
If the PDF doesn't have fillable form fields, you'll need to visually determine where the data should be added and create text annotations. Follow the steps below *exactly*. You MUST perform all of these steps to ensure that the form is accurately completed. Details for each step are below.
- Convert the PDF to PNG images and determine field bounding boxes.
- Create a JSON file with field information and validation images showing the bounding boxes.
- Validate the bounding boxes.
- Use the bounding boxes to fill in the form.

## Step 1: Visual Analysis (REQUIRED)
- Convert the PDF to PNG images. Run this script from this file's directory:
`python scripts/convert_pdf_to_images.py <file.pdf> <output_directory>`
The script will create a PNG image for each page in the PDF.
- Carefully examine each PNG image and identify all form fields and areas where the user should enter data. For each form field where the user should enter text, determine bounding boxes for both the form field label, and the area where the user should enter text. The label and entry bounding boxes MUST NOT INTERSECT; the text entry box should only include the area where data should be entered. Usually this area will be immediately to the side, above, or below its label. Entry bounding boxes must be tall and wide enough to contain their text.

These are some examples of form structures that you might see:

*Label inside box*
```
┌────────────────────────┐
│ Name:                  │
└────────────────────────┘
```
The input area should be to the right of the "Name" label and extend to the edge of the box.

*Label before line*
```
Email: _______________________
```
The input area should be above the line and include its entire width.

*Label under line*
```
_________________________
Name
```
The input area should be above the line and include the entire width of the line. This is common for signature and date fields.

*Label above line*
```
Please enter any special requests:
________________________________________________
```
The input area should extend from the bottom of the label to the line, and should include the entire width of the line.

*Checkboxes*
```
Are you a US citizen? Yes □  No □
```
For checkboxes:
- Look for small square boxes (□) - these are the actual checkboxes to target. They may be to the left or right of their labels.
- Distinguish between label text ("Yes", "No") and the clickable checkbox squares.
- The entry bounding box should cover ONLY the small square, not the text label.

## Step 2: Create fields.json and validation images (REQUIRED)
- Create a file named `fields.json` with information for the form fields and bounding boxes in this format:
```
{
  "pages": [
    {
      "page_number": 1,
      "image_width": (first page image width in pixels),
      "image_height": (first page image height in pixels),
    },
    {
      "page_number": 2,
      "image_width": (second page image width in pixels),
      "image_height": (second page image height in pixels),
    }
    // additional pages
  ],
  "form_fields": [
    // Example for a text field.
    {
      "page_number": 1,
      "description": "The user's last name should be entered here",
      // Bounding boxes are [left, top, right, bottom]. The bounding boxes for the label and text entry should not overlap.
      "field_label": "Last name",
      "label_bounding_box": [30, 125, 95, 142],
      "entry_bounding_box": [100, 125, 280, 142],
      "entry_text": {
        "text": "Johnson", // This text will be added as an annotation at the entry_bounding_box location
        "font_size": 14, // optional, defaults to 14
        "font_color": "000000", // optional, RRGGBB format, defaults to 000000 (black)
      }
    },
    // Example for a checkbox. TARGET THE SQUARE for the entry bounding box, NOT THE TEXT
    {
      "page_number": 2,
      "description": "Checkbox that should be checked if the user is over 18",
      "entry_bounding_box": [140, 525, 155, 540],  // Small box over checkbox square
      "field_label": "Yes",
      "label_bounding_box": [100, 525, 132, 540],  // Box containing "Yes" text
      // Use "X" to check a checkbox.
      "entry_text": {
        "text": "X",
      }
    }
    // additional form field entries
  ]
}
```

Create validation images by running this script from this file's directory for each page:
`python scripts/create_validation_image.py <page_number> <path_to_fields.json> <input_image_path> <output_image_path>`

The validation images will have red rectangles where text should be entered, and blue rectangles covering label text.

## Step 3: Validate Bounding Boxes (REQUIRED)
### Automated intersection check
- Verify that none of the bounding boxes intersect and that the entry bounding boxes are tall enough by checking the fields.json file with the `check_bounding_boxes.py` script (run from this file's directory):
`python scripts/check_bounding_boxes.py <JSON file>`

If there are errors, reanalyze the relevant fields, adjust the bounding boxes, and iterate until there are no remaining errors. Remember: label (blue) bounding boxes should contain text labels, entry (red) boxes should not.

### Manual image inspection
**CRITICAL: Do not proceed without visually inspecting validation images**
- Red rectangles must ONLY cover input areas
- Red rectangles MUST NOT contain any text
- Blue rectangles should contain label text
- For checkboxes:
  - Red rectangle MUST be centered on the checkbox square
  - Blue rectangle should cover the text label for the checkbox

- If any rectangles look wrong, fix fields.json, regenerate the validation images, and verify again. Repeat this process until the bounding boxes are fully accurate.


## Step 4: Add annotations to the PDF
Run this script from this file's directory to create a filled-out PDF using the information in fields.json:
`python scripts/fill_pdf_form_with_annotations.py <input_pdf_path> <path_to_fields.json> <output_pdf_path>`

## general_comms

## Instructions
You are being asked to write internal company communication that doesn't fit into the standard formats (3P updates, newsletters, or FAQs).

Before proceeding:
1. Ask the user about their target audience
2. Understand the communication's purpose
3. Clarify the desired tone (formal, casual, urgent, informational)
4. Confirm any specific formatting requirements

Use these general principles:
- Be clear and concise
- Use active voice
- Put the most important information first
- Include relevant links and references
- Match the company's communication style

## golden_hour

# Golden Hour

A rich and warm autumnal palette that creates an inviting and sophisticated atmosphere.

## Color Palette

- **Mustard Yellow**: `#f4a900` - Bold primary accent
- **Terracotta**: `#c1666b` - Warm secondary color
- **Warm Beige**: `#d4b896` - Neutral backgrounds
- **Chocolate Brown**: `#4a403a` - Dark text and anchors

## Typography

- **Headers**: FreeSans Bold
- **Body Text**: FreeSans

## Best Used For

Restaurant presentations, hospitality brands, fall campaigns, cozy lifestyle content, artisan products.

## mcp_best_practices

# MCP Server Best Practices

## Quick Reference

### Server Naming
- **Python**: `{service}_mcp` (e.g., `slack_mcp`)
- **Node/TypeScript**: `{service}-mcp-server` (e.g., `slack-mcp-server`)

### Tool Naming
- Use snake_case with service prefix
- Format: `{service}_{action}_{resource}`
- Example: `slack_send_message`, `github_create_issue`

### Response Formats
- Support both JSON and Markdown formats
- JSON for programmatic processing
- Markdown for human readability

### Pagination
- Always respect `limit` parameter
- Return `has_more`, `next_offset`, `total_count`
- Default to 20-50 items

### Transport
- **Streamable HTTP**: For remote servers, multi-client scenarios
- **stdio**: For local integrations, command-line tools
- Avoid SSE (deprecated in favor of streamable HTTP)

---

## Server Naming Conventions

Follow these standardized naming patterns:

**Python**: Use format `{service}_mcp` (lowercase with underscores)
- Examples: `slack_mcp`, `github_mcp`, `jira_mcp`

**Node/TypeScript**: Use format `{service}-mcp-server` (lowercase with hyphens)
- Examples: `slack-mcp-server`, `github-mcp-server`, `jira-mcp-server`

The name should be general, descriptive of the service being integrated, easy to infer from the task description, and without version numbers.

---

## Tool Naming and Design

### Tool Naming

1. **Use snake_case**: `search_users`, `create_project`, `get_channel_info`
2. **Include service prefix**: Anticipate that your MCP server may be used alongside other MCP servers
   - Use `slack_send_message` instead of just `send_message`
   - Use `github_create_issue` instead of just `create_issue`
3. **Be action-oriented**: Start with verbs (get, list, search, create, etc.)
4. **Be specific**: Avoid generic names that could conflict with other servers

### Tool Design

- Tool descriptions must narrowly and unambiguously describe functionality
- Descriptions must precisely match actual functionality
- Provide tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
- Keep tool operations focused and atomic

---

## Response Formats

All tools that return data should support multiple formats:

### JSON Format (`response_format="json"`)
- Machine-readable structured data
- Include all available fields and metadata
- Consistent field names and types
- Use for programmatic processing

### Markdown Format (`response_format="markdown"`, typically default)
- Human-readable formatted text
- Use headers, lists, and formatting for clarity
- Convert timestamps to human-readable format
- Show display names with IDs in parentheses
- Omit verbose metadata

---

## Pagination

For tools that list resources:

- **Always respect the `limit` parameter**
- **Implement pagination**: Use `offset` or cursor-based pagination
- **Return pagination metadata**: Include `has_more`, `next_offset`/`next_cursor`, `total_count`
- **Never load all results into memory**: Especially important for large datasets
- **Default to reasonable limits**: 20-50 items is typical

Example pagination response:
```json
{
  "total": 150,
  "count": 20,
  "offset": 0,
  "items": [...],
  "has_more": true,
  "next_offset": 20
}
```
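
A minimal TypeScript sketch that produces metadata in this shape (field names follow the example above; adapt to your data source):

```typescript
// Sketch: build an offset-based pagination envelope for a list tool.
function paginate<T>(total: number, items: T[], offset: number) {
  const hasMore = total > offset + items.length;
  return {
    total,                // total matches available
    count: items.length,  // items in this response
    offset,               // current offset
    items,
    has_more: hasMore,
    ...(hasMore ? { next_offset: offset + items.length } : {})
  };
}
```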

---

## Transport Options

### Streamable HTTP

**Best for**: Remote servers, web services, multi-client scenarios

**Characteristics**:
- Bidirectional communication over HTTP
- Supports multiple simultaneous clients
- Can be deployed as a web service
- Enables server-to-client notifications

**Use when**:
- Serving multiple clients simultaneously
- Deploying as a cloud service
- Integration with web applications

### stdio

**Best for**: Local integrations, command-line tools

**Characteristics**:
- Standard input/output stream communication
- Simple setup, no network configuration needed
- Runs as a subprocess of the client

**Use when**:
- Building tools for local development environments
- Integrating with desktop applications
- Single-user, single-session scenarios

**Note**: stdio servers should NOT log to stdout (use stderr for logging)

### Transport Selection

| Criterion | stdio | Streamable HTTP |
|-----------|-------|-----------------|
| **Deployment** | Local | Remote |
| **Clients** | Single | Multiple |
| **Complexity** | Low | Medium |
| **Real-time** | No | Yes |

---

## Security Best Practices

### Authentication and Authorization

**OAuth 2.1**:
- Use secure OAuth 2.1 with certificates from recognized authorities
- Validate access tokens before processing requests
- Only accept tokens specifically intended for your server

**API Keys**:
- Store API keys in environment variables, never in code
- Validate keys on server startup
- Provide clear error messages when authentication fails
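
A minimal sketch of startup validation (the variable name is illustrative):

```typescript
// Fail fast if the key is missing; never hardcode it in source.
const API_KEY = process.env.EXAMPLE_API_KEY;
if (!API_KEY) {
  console.error("EXAMPLE_API_KEY is not set; refusing to start.");
  process.exit(1);
}
```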

### Input Validation

- Sanitize file paths to prevent directory traversal
- Validate URLs and external identifiers
- Check parameter sizes and ranges
- Prevent command injection in system calls
- Use schema validation (Pydantic/Zod) for all inputs
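
For example, a sketch of path sanitization plus schema validation in TypeScript (the base directory and schema are illustrative assumptions):

```typescript
import path from "path";
import { z } from "zod";

// Hypothetical input schema for a file-reading tool.
const ReadFileInput = z.object({
  relativePath: z.string().min(1).max(500)
}).strict();

const BASE_DIR = path.resolve("/srv/mcp-data"); // assumed data root

function resolveSafely(relativePath: string): string {
  const resolved = path.resolve(BASE_DIR, relativePath);
  // Reject anything that escapes BASE_DIR (e.g. "../etc/passwd" or absolute paths).
  if (resolved !== BASE_DIR && !resolved.startsWith(BASE_DIR + path.sep)) {
    throw new Error("Path escapes the allowed directory");
  }
  return resolved;
}

// Example usage (hypothetical input):
const input = ReadFileInput.parse({ relativePath: "reports/q3.txt" });
const safePath = resolveSafely(input.relativePath);
```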

### Error Handling

- Don't expose internal errors to clients
- Log security-relevant errors server-side
- Provide helpful but not revealing error messages
- Clean up resources after errors

### DNS Rebinding Protection

For streamable HTTP servers running locally:
- Enable DNS rebinding protection
- Validate the `Origin` header on all incoming connections
- Bind to `127.0.0.1` rather than `0.0.0.0`
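
A sketch of these protections with Express (the allowed origins and port are illustrative):

```typescript
import express from "express";

const app = express();

// Allow only expected local origins; block cross-origin requests that could
// reach a locally running server via DNS rebinding.
const ALLOWED_ORIGINS = new Set(["http://localhost:3000", "http://127.0.0.1:3000"]);
app.use((req, res, next) => {
  const origin = req.headers.origin;
  if (origin && !ALLOWED_ORIGINS.has(origin)) {
    res.status(403).json({ error: "Forbidden origin" });
    return;
  }
  next();
});

app.listen(3000, "127.0.0.1"); // bind to loopback, not 0.0.0.0
```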

---

## Tool Annotations

Provide annotations to help clients understand tool behavior:

| Annotation | Type | Default | Description |
|-----------|------|---------|-------------|
| `readOnlyHint` | boolean | false | Tool does not modify its environment |
| `destructiveHint` | boolean | true | Tool may perform destructive updates |
| `idempotentHint` | boolean | false | Repeated calls with same args have no additional effect |
| `openWorldHint` | boolean | true | Tool interacts with external entities |

**Important**: Annotations are hints, not security guarantees. Clients should not make security-critical decisions based solely on annotations.
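
As a sketch of how these hints are attached, using the TypeScript SDK's `registerTool` (covered in detail in node_mcp_server below; the tool itself is hypothetical):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "example-mcp-server", version: "1.0.0" });

server.registerTool(
  "example_get_status",
  {
    title: "Get Example Status",
    description: "Reads current service status without modifying anything.",
    annotations: {
      readOnlyHint: true,   // does not modify its environment
      destructiveHint: false,
      idempotentHint: true, // repeated calls add nothing
      openWorldHint: true   // talks to an external service
    }
  },
  async () => ({ content: [{ type: "text", text: "status: ok" }] })
);
```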

---

## Error Handling

- Use standard JSON-RPC error codes
- Report tool errors within result objects (not protocol-level errors)
- Provide helpful, specific error messages with suggested next steps
- Don't expose internal implementation details
- Clean up resources properly on errors

Example error handling:
```typescript
try {
  const result = performOperation();
  return { content: [{ type: "text", text: result }] };
} catch (error) {
  return {
    isError: true,
    content: [{
      type: "text",
      text: `Error: ${error.message}. Try using filter='active_only' to reduce results.`
    }]
  };
}
```

---

## Testing Requirements

Comprehensive testing should cover:

- **Functional testing**: Verify correct execution with valid/invalid inputs
- **Integration testing**: Test interaction with external systems
- **Security testing**: Validate auth, input sanitization, rate limiting
- **Performance testing**: Check behavior under load, timeouts
- **Error handling**: Ensure proper error reporting and cleanup

---

## Documentation Requirements

- Provide clear documentation of all tools and capabilities
- Include working examples (at least 3 per major feature)
- Document security considerations
- Specify required permissions and access levels
- Document rate limits and performance characteristics

## midnight_galaxy

# Midnight Galaxy

A dramatic and cosmic theme with deep purples and mystical tones for impactful presentations.

## Color Palette

- **Deep Purple**: `#2b1e3e` - Rich dark base
- **Cosmic Blue**: `#4a4e8f` - Mystical mid-tone
- **Lavender**: `#a490c2` - Soft accent color
- **Silver**: `#e6e6fa` - Light highlights and text

## Typography

- **Headers**: FreeSans Bold
- **Body Text**: FreeSans

## Best Used For

Entertainment industry, gaming presentations, nightlife venues, luxury brands, creative agencies.

## modern_minimalist

# Modern Minimalist

A clean and contemporary theme with a sophisticated grayscale palette for maximum versatility.

## Color Palette

- **Charcoal**: `#36454f` - Primary dark color
- **Slate Gray**: `#708090` - Medium gray for accents
- **Light Gray**: `#d3d3d3` - Backgrounds and dividers
- **White**: `#ffffff` - Text and clean backgrounds

## Typography

- **Headers**: DejaVu Sans Bold
- **Body Text**: DejaVu Sans

## Best Used For

Tech presentations, architecture portfolios, design showcases, modern business proposals, data visualization.

## node_mcp_server

# Node/TypeScript MCP Server Implementation Guide

## Overview

This document provides Node/TypeScript-specific best practices and examples for implementing MCP servers using the MCP TypeScript SDK. It covers project structure, server setup, tool registration patterns, input validation with Zod, error handling, and complete working examples.

---

## Quick Reference

### Key Imports
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import express from "express";
import { z } from "zod";
```

### Server Initialization
```typescript
const server = new McpServer({
  name: "service-mcp-server",
  version: "1.0.0"
});
```
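
The server is then attached to a transport to start serving requests. A minimal stdio sketch (imports as in the Quick Reference above; run inside an async context or ESM top level):

```typescript
// stdio: the server runs as a subprocess of the client.
const transport = new StdioServerTransport();
await server.connect(transport);
```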

### Tool Registration Pattern
```typescript
server.registerTool(
  "tool_name",
  {
    title: "Tool Display Name",
    description: "What the tool does",
    inputSchema: { param: z.string() },
    outputSchema: { result: z.string() }
  },
  async ({ param }) => {
    const output = { result: `Processed: ${param}` };
    return {
      content: [{ type: "text", text: JSON.stringify(output) }],
      structuredContent: output // Modern pattern for structured data
    };
  }
);
```

---

## MCP TypeScript SDK

The official MCP TypeScript SDK provides:
- `McpServer` class for server initialization
- `registerTool` method for tool registration
- Zod schema integration for runtime input validation
- Type-safe tool handler implementations

**IMPORTANT - Use Modern APIs Only:**
- **DO use**: `server.registerTool()`, `server.registerResource()`, `server.registerPrompt()`
- **DO NOT use**: Old deprecated APIs such as `server.tool()`, `server.setRequestHandler(ListToolsRequestSchema, ...)`, or manual handler registration
- The `register*` methods provide better type safety, automatic schema handling, and are the recommended approach

See the MCP SDK documentation in the references for complete details.

## Server Naming Convention

Node/TypeScript MCP servers must follow this naming pattern:
- **Format**: `{service}-mcp-server` (lowercase with hyphens)
- **Examples**: `github-mcp-server`, `jira-mcp-server`, `stripe-mcp-server`

The name should be:
- General (not tied to specific features)
- Descriptive of the service/API being integrated
- Easy to infer from the task description
- Without version numbers or dates

## Project Structure

Create the following structure for Node/TypeScript MCP servers:

```
{service}-mcp-server/
├── package.json
├── tsconfig.json
├── README.md
├── src/
│   ├── index.ts          # Main entry point with McpServer initialization
│   ├── types.ts          # TypeScript type definitions and interfaces
│   ├── tools/            # Tool implementations (one file per domain)
│   ├── services/         # API clients and shared utilities
│   ├── schemas/          # Zod validation schemas
│   └── constants.ts      # Shared constants (API_URL, CHARACTER_LIMIT, etc.)
└── dist/                 # Built JavaScript files (entry point: dist/index.js)
```

## Tool Implementation

### Tool Naming

Use snake_case for tool names (e.g., "search_users", "create_project", "get_channel_info") with clear, action-oriented names.

**Avoid Naming Conflicts**: Include the service context to prevent overlaps:
- Use "slack_send_message" instead of just "send_message"
- Use "github_create_issue" instead of just "create_issue"
- Use "asana_list_tasks" instead of just "list_tasks"

### Tool Structure

Tools are registered using the `registerTool` method with the following requirements:
- Use Zod schemas for runtime input validation and type safety
- The `description` field must be explicitly provided - JSDoc comments are NOT automatically extracted
- Explicitly provide `title`, `description`, `inputSchema`, and `annotations`
- The `inputSchema` must be a Zod schema object (not a JSON schema)
- Type all parameters and return values explicitly

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "example-mcp",
  version: "1.0.0"
});

// ResponseFormat is defined here so the snippet is self-contained
enum ResponseFormat {
  MARKDOWN = "markdown",
  JSON = "json"
}

// Zod schema for input validation
const UserSearchInputSchema = z.object({
  query: z.string()
    .min(2, "Query must be at least 2 characters")
    .max(200, "Query must not exceed 200 characters")
    .describe("Search string to match against names/emails"),
  limit: z.number()
    .int()
    .min(1)
    .max(100)
    .default(20)
    .describe("Maximum results to return"),
  offset: z.number()
    .int()
    .min(0)
    .default(0)
    .describe("Number of results to skip for pagination"),
  response_format: z.nativeEnum(ResponseFormat)
    .default(ResponseFormat.MARKDOWN)
    .describe("Output format: 'markdown' for human-readable or 'json' for machine-readable")
}).strict();

// Type definition from Zod schema
type UserSearchInput = z.infer<typeof UserSearchInputSchema>;

server.registerTool(
  "example_search_users",
  {
    title: "Search Example Users",
    description: `Search for users in the Example system by name, email, or team.

This tool searches across all user profiles in the Example platform, supporting partial matches and various search filters. It does NOT create or modify users, only searches existing ones.

Args:
  - query (string): Search string to match against names/emails
  - limit (number): Maximum results to return, between 1-100 (default: 20)
  - offset (number): Number of results to skip for pagination (default: 0)
  - response_format ('markdown' | 'json'): Output format (default: 'markdown')

Returns:
  For JSON format: Structured data with schema:
  {
    "total": number,           // Total number of matches found
    "count": number,           // Number of results in this response
    "offset": number,          // Current pagination offset
    "users": [
      {
        "id": string,          // User ID (e.g., "U123456789")
        "name": string,        // Full name (e.g., "John Doe")
        "email": string,       // Email address
        "team": string,        // Team name (optional)
        "active": boolean      // Whether user is active
      }
    ],
    "has_more": boolean,       // Whether more results are available
    "next_offset": number      // Offset for next page (if has_more is true)
  }

Examples:
  - Use when: "Find all marketing team members" -> params with query="team:marketing"
  - Use when: "Search for John's account" -> params with query="john"
  - Don't use when: You need to create a user (use example_create_user instead)

Error Handling:
  - Returns "Error: Rate limit exceeded" if too many requests (429 status)
  - Returns "No users found matching '<query>'" if search returns empty`,
    inputSchema: UserSearchInputSchema,
    annotations: {
      readOnlyHint: true,
      destructiveHint: false,
      idempotentHint: true,
      openWorldHint: true
    }
  },
  async (params: UserSearchInput) => {
    try {
      // Input validation is handled by Zod schema
      // Make API request using validated parameters
      // (makeApiRequest is an assumed shared helper from src/services/)
      const data = await makeApiRequest<any>(
        "users/search",
        "GET",
        undefined,
        {
          q: params.query,
          limit: params.limit,
          offset: params.offset
        }
      );

      const users = data.users || [];
      const total = data.total || 0;

      if (!users.length) {
        return {
          content: [{
            type: "text",
            text: `No users found matching '${params.query}'`
          }]
        };
      }

      // Prepare structured output
      const output = {
        total,
        count: users.length,
        offset: params.offset,
        users: users.map((user: any) => ({
          id: user.id,
          name: user.name,
          email: user.email,
          ...(user.team ? { team: user.team } : {}),
          active: user.active ?? true
        })),
        has_more: total > params.offset + users.length,
        ...(total > params.offset + users.length ? {
          next_offset: params.offset + users.length
        } : {})
      };

      // Format text representation based on requested format
      let textContent: string;
      if (params.response_format === ResponseFormat.MARKDOWN) {
        const lines = [`# User Search Results: '${params.query}'`, "",
          `Found ${total} users (showing ${users.length})`, ""];
        for (const user of users) {
          lines.push(`## ${user.name} (${user.id})`);
          lines.push(`- **Email**: ${user.email}`);
          if (user.team) lines.push(`- **Team**: ${user.team}`);
          lines.push("");
        }
        textContent = lines.join("\n");
      } else {
        textContent = JSON.stringify(output, null, 2);
      }

      return {
        content: [{ type: "text", text: textContent }],
        structuredContent: output // Modern pattern for structured data
      };
    } catch (error) {
      return {
        content: [{
          type: "text",
          text: handleApiError(error)
        }]
      };
    }
  }
);
```

## Zod Schemas for Input Validation

Zod provides runtime type validation:

```typescript
import { z } from "zod";

// Basic schema with validation
const CreateUserSchema = z.object({
  name: z.string()
    .min(1, "Name is required")
    .max(100, "Name must not exceed 100 characters"),
  email: z.string()
    .email("Invalid email format"),
  age: z.number()
    .int("Age must be a whole number")
    .min(0, "Age cannot be negative")
    .max(150, "Age cannot be greater than 150")
}).strict();  // Use .strict() to forbid extra fields

// Enums
enum ResponseFormat {
  MARKDOWN = "markdown",
  JSON = "json"
}

const SearchSchema = z.object({
  response_format: z.nativeEnum(ResponseFormat)
    .default(ResponseFormat.MARKDOWN)
    .describe("Output format")
});

// Optional fields with defaults
const PaginationSchema = z.object({
  limit: z.number()
    .int()
    .min(1)
    .max(100)
    .default(20)
    .describe("Maximum results to return"),
  offset: z.number()
    .int()
    .min(0)
    .default(0)
    .describe("Number of results to skip")
});
```
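
When validating data outside the SDK's automatic handling, `safeParse` returns a result object instead of throwing. A minimal sketch of surfacing Zod issues as an agent-friendly message, using the `CreateUserSchema` above:

```typescript
// Validate untrusted input manually with safeParse (no exception thrown)
const parsed = CreateUserSchema.safeParse({ name: "Ada", email: "not-an-email", age: 30 });

if (!parsed.success) {
  // Each issue carries a path and the message defined on the schema
  const details = parsed.error.issues
    .map(issue => `${issue.path.join(".")}: ${issue.message}`)
    .join("; ");
  console.error(`Error: Invalid input - ${details}`);  // e.g. "email: Invalid email format"
} else {
  const user = parsed.data;  // Fully typed as z.infer<typeof CreateUserSchema>
}
```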

## Response Format Options

Support multiple output formats for flexibility:

```typescript
enum ResponseFormat {
  MARKDOWN = "markdown",
  JSON = "json"
}

const inputSchema = z.object({
  query: z.string(),
  response_format: z.nativeEnum(ResponseFormat)
    .default(ResponseFormat.MARKDOWN)
    .describe("Output format: 'markdown' for human-readable or 'json' for machine-readable")
});
```

**Markdown format**:
- Use headers, lists, and formatting for clarity
- Convert timestamps to human-readable format
- Show display names with IDs in parentheses
- Omit verbose metadata
- Group related information logically

**JSON format**:
- Return complete, structured data suitable for programmatic processing
- Include all available fields and metadata
- Use consistent field names and types
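
As a sketch of how these conventions translate into a shared formatter (the `User` shape and epoch-seconds `created` field are hypothetical):

```typescript
interface User { id: string; name: string; email: string; created: number; }

function formatUser(user: User, format: "markdown" | "json"): string {
  if (format === "json") {
    return JSON.stringify(user, null, 2);  // Complete data, all fields
  }
  // Markdown: display name with ID in parentheses, human-readable timestamp
  const created = new Date(user.created * 1000).toISOString().replace("T", " ").slice(0, 19);
  return [
    `## ${user.name} (${user.id})`,
    `- **Email**: ${user.email}`,
    `- **Created**: ${created} UTC`
  ].join("\n");
}
```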

## Pagination Implementation

For tools that list resources:

```typescript
const ListSchema = z.object({
  limit: z.number().int().min(1).max(100).default(20),
  offset: z.number().int().min(0).default(0)
});

async function listItems(params: z.infer<typeof ListSchema>) {
  const data = await apiRequest(params.limit, params.offset);

  const response = {
    total: data.total,
    count: data.items.length,
    offset: params.offset,
    items: data.items,
    has_more: data.total > params.offset + data.items.length,
    next_offset: data.total > params.offset + data.items.length
      ? params.offset + data.items.length
      : undefined
  };

  return JSON.stringify(response, null, 2);
}
```
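
On the consuming side, `next_offset` feeds directly into the following call. A usage sketch (the `handleItems` consumer is hypothetical):

```typescript
// Follow next_offset until has_more is false
let offset = 0;
let hasMore = true;
while (hasMore) {
  const page = JSON.parse(await listItems({ limit: 100, offset }));
  handleItems(page.items);  // Hand each page to a hypothetical consumer
  hasMore = page.has_more;
  offset = page.next_offset ?? offset;
}
```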

## Character Limits and Truncation

Add a CHARACTER_LIMIT constant to prevent overwhelming responses:

```typescript
// At module level in constants.ts
export const CHARACTER_LIMIT = 25000;  // Maximum response size in characters

async function searchTool(params: SearchInput): Promise<string> {
  // Fetch the full result set first (fetchResults is a hypothetical helper)
  const data = await fetchResults(params);
  const response: Record<string, unknown> = { data };
  let result = JSON.stringify(response, null, 2);

  // Check character limit and truncate if needed
  if (result.length > CHARACTER_LIMIT) {
    const truncatedData = data.slice(0, Math.max(1, Math.floor(data.length / 2)));
    response.data = truncatedData;
    response.truncated = true;
    response.truncation_message =
      `Response truncated from ${data.length} to ${truncatedData.length} items. ` +
      `Use 'offset' parameter or add filters to see more results.`;
    result = JSON.stringify(response, null, 2);
  }

  return result;
}
```

## Error Handling

Provide clear, actionable error messages:

```typescript
import axios, { AxiosError } from "axios";

function handleApiError(error: unknown): string {
  if (error instanceof AxiosError) {
    if (error.response) {
      switch (error.response.status) {
        case 404:
          return "Error: Resource not found. Please check the ID is correct.";
        case 403:
          return "Error: Permission denied. You don't have access to this resource.";
        case 429:
          return "Error: Rate limit exceeded. Please wait before making more requests.";
        default:
          return `Error: API request failed with status ${error.response.status}`;
      }
    } else if (error.code === "ECONNABORTED") {
      return "Error: Request timed out. Please try again.";
    }
  }
  return `Error: Unexpected error occurred: ${error instanceof Error ? error.message : String(error)}`;
}
```

## Shared Utilities

Extract common functionality into reusable functions:

```typescript
// Shared API request function
async function makeApiRequest<T>(
  endpoint: string,
  method: "GET" | "POST" | "PUT" | "DELETE" = "GET",
  data?: unknown,
  params?: Record<string, unknown>
): Promise<T> {
  // Errors propagate to the caller, which formats them via handleApiError
  const response = await axios({
    method,
    url: `${API_BASE_URL}/${endpoint}`,
    data,
    params,
    timeout: 30000,
    headers: {
      "Content-Type": "application/json",
      "Accept": "application/json"
    }
  });
  return response.data;
}
```
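
Each tool then becomes a thin wrapper around the shared client. A minimal usage sketch (the `projects` endpoint and `ProjectList` shape are hypothetical):

```typescript
// Hypothetical response shape; the generic parameter documents the expected payload
interface ProjectList {
  projects: { id: string; name: string }[];
  total: number;
}

async function listProjectNames(): Promise<string[]> {
  const data = await makeApiRequest<ProjectList>("projects", "GET", undefined, { limit: 20 });
  return data.projects.map(p => p.name);
}
```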

## Async/Await Best Practices

Always use async/await for network requests and I/O operations:

```typescript
// Good: Async network request
async function fetchData(resourceId: string): Promise<ResourceData> {
  const response = await axios.get(`${API_URL}/resource/${resourceId}`);
  return response.data;
}

// Bad: Promise chains
function fetchData(resourceId: string): Promise<ResourceData> {
  return axios.get(`${API_URL}/resource/${resourceId}`)
    .then(response => response.data);  // Harder to read and maintain
}
```

## TypeScript Best Practices

1. **Use Strict TypeScript**: Enable strict mode in tsconfig.json
2. **Define Interfaces**: Create clear interface definitions for all data structures
3. **Avoid `any`**: Use proper types or `unknown` instead of `any`
4. **Zod for Runtime Validation**: Use Zod schemas to validate external data
5. **Type Guards**: Create type guard functions for complex type checking (see the sketch below)
6. **Error Handling**: Always use try-catch with proper error type checking
7. **Null Safety**: Use optional chaining (`?.`) and nullish coalescing (`??`) (see the sketch below)

```typescript
// Good: Type-safe with Zod and interfaces
interface UserResponse {
  id: string;
  name: string;
  email: string;
  team?: string;
  active: boolean;
}

const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
  email: z.string().email(),
  team: z.string().optional(),
  active: z.boolean()
});

type User = z.infer<typeof UserSchema>;

async function getUser(id: string): Promise<User> {
  const data = await apiCall(`/users/${id}`);
  return UserSchema.parse(data);  // Runtime validation
}

// Bad: Using any
async function getUser(id: string): Promise<any> {
  return await apiCall(`/users/${id}`);  // No type safety
}
```
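
The example above covers Zod validation; a minimal sketch of a hand-rolled type guard plus null-safe access (items 5 and 7), reusing the `UserResponse` interface:

```typescript
// Type guard: narrows unknown to UserResponse without resorting to `any`
function isUserResponse(value: unknown): value is UserResponse {
  const v = value as Partial<UserResponse> | null;
  return typeof v?.id === "string" &&
         typeof v?.name === "string" &&
         typeof v?.email === "string";
}

function describeUser(payload: unknown): string {
  if (!isUserResponse(payload)) {
    return "Error: Unexpected response shape";
  }
  // Null safety: nullish coalescing for the optional team field
  const team = payload.team ?? "unassigned";
  return `${payload.name} <${payload.email}> (team: ${team})`;
}
```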

## Package Configuration

### package.json

```json
{
  "name": "{service}-mcp-server",
  "version": "1.0.0",
  "description": "MCP server for {Service} API integration",
  "type": "module",
  "main": "dist/index.js",
  "scripts": {
    "start": "node dist/index.js",
    "dev": "tsx watch src/index.ts",
    "build": "tsc",
    "clean": "rm -rf dist"
  },
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.6.1",
    "axios": "^1.7.9",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@types/node": "^22.10.0",
    "tsx": "^4.19.2",
    "typescript": "^5.7.2"
  }
}
```

### tsconfig.json

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "lib": ["ES2022"],
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "declaration": true,
    "declarationMap": true,
    "sourceMap": true,
    "allowSyntheticDefaultImports": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}
```

## Complete Example

```typescript
#!/usr/bin/env node
/**
 * MCP Server for Example Service.
 *
 * This server provides tools to interact with Example API, including user search,
 * project management, and data export capabilities.
 */

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";
import { z } from "zod";
import axios, { AxiosError } from "axios";

// Constants
const API_BASE_URL = "https://api.example.com/v1";
const CHARACTER_LIMIT = 25000;

// Enums
enum ResponseFormat {
  MARKDOWN = "markdown",
  JSON = "json"
}

// Zod schemas
const UserSearchInputSchema = z.object({
  query: z.string()
    .min(2, "Query must be at least 2 characters")
    .max(200, "Query must not exceed 200 characters")
    .describe("Search string to match against names/emails"),
  limit: z.number()
    .int()
    .min(1)
    .max(100)
    .default(20)
    .describe("Maximum results to return"),
  offset: z.number()
    .int()
    .min(0)
    .default(0)
    .describe("Number of results to skip for pagination"),
  response_format: z.nativeEnum(ResponseFormat)
    .default(ResponseFormat.MARKDOWN)
    .describe("Output format: 'markdown' for human-readable or 'json' for machine-readable")
}).strict();

type UserSearchInput = z.infer<typeof UserSearchInputSchema>;

// Shared utility functions
async function makeApiRequest<T>(
  endpoint: string,
  method: "GET" | "POST" | "PUT" | "DELETE" = "GET",
  data?: unknown,
  params?: Record<string, unknown>
): Promise<T> {
  // Errors propagate to the caller, which formats them via handleApiError
  const response = await axios({
    method,
    url: `${API_BASE_URL}/${endpoint}`,
    data,
    params,
    timeout: 30000,
    headers: {
      "Content-Type": "application/json",
      "Accept": "application/json",
      // Hypothetical auth scheme for the fictional Example API; this is
      // where the EXAMPLE_API_KEY checked at startup actually gets used
      "Authorization": `Bearer ${process.env.EXAMPLE_API_KEY}`
    }
  });
  return response.data;
}

function handleApiError(error: unknown): string {
  if (error instanceof AxiosError) {
    if (error.response) {
      switch (error.response.status) {
        case 404:
          return "Error: Resource not found. Please check the ID is correct.";
        case 403:
          return "Error: Permission denied. You don't have access to this resource.";
        case 429:
          return "Error: Rate limit exceeded. Please wait before making more requests.";
        default:
          return `Error: API request failed with status ${error.response.status}`;
      }
    } else if (error.code === "ECONNABORTED") {
      return "Error: Request timed out. Please try again.";
    }
  }
  return `Error: Unexpected error occurred: ${error instanceof Error ? error.message : String(error)}`;
}

// Create MCP server instance
const server = new McpServer({
  name: "example-mcp",
  version: "1.0.0"
});

// Register tools
server.registerTool(
  "example_search_users",
  {
    title: "Search Example Users",
    description: `[Full description as shown above]`,
    inputSchema: UserSearchInputSchema.shape,  // registerTool expects the raw Zod shape
    annotations: {
      readOnlyHint: true,
      destructiveHint: false,
      idempotentHint: true,
      openWorldHint: true
    }
  },
  async (params: UserSearchInput) => {
    // Implementation as shown above
  }
);

// Main function
// For stdio (local):
async function runStdio() {
  if (!process.env.EXAMPLE_API_KEY) {
    console.error("ERROR: EXAMPLE_API_KEY environment variable is required");
    process.exit(1);
  }

  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP server running via stdio");
}

// For streamable HTTP (remote):
async function runHTTP() {
  if (!process.env.EXAMPLE_API_KEY) {
    console.error("ERROR: EXAMPLE_API_KEY environment variable is required");
    process.exit(1);
  }

  const app = express();
  app.use(express.json());

  app.post('/mcp', async (req, res) => {
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: undefined,
      enableJsonResponse: true
    });
    res.on('close', () => transport.close());
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  });

  const port = parseInt(process.env.PORT || '3000');
  app.listen(port, () => {
    console.error(`MCP server running on http://localhost:${port}/mcp`);
  });
}

// Choose transport based on environment
const transport = process.env.TRANSPORT || 'stdio';
if (transport === 'http') {
  runHTTP().catch(error => {
    console.error("Server error:", error);
    process.exit(1);
  });
} else {
  runStdio().catch(error => {
    console.error("Server error:", error);
    process.exit(1);
  });
}
```

---

## Advanced MCP Features

### Resource Registration

Expose data as resources for efficient, URI-based access:

```typescript
import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

// Register a resource with a URI template. The optional `list` callback
// advertises the concrete resources that currently match the template.
server.registerResource(
  "document",
  new ResourceTemplate("file://documents/{name}", {
    list: async () => {
      const documents = await getAvailableDocuments();
      return {
        resources: documents.map(doc => ({
          uri: `file://documents/${doc.name}`,
          name: doc.name,
          mimeType: "text/plain",
          description: doc.description
        }))
      };
    }
  }),
  {
    title: "Document Resource",
    description: "Access documents by name",
    mimeType: "text/plain"
  },
  async (uri, { name }) => {
    // Template variables are extracted from the URI for us
    const content = await loadDocument(name as string);

    return {
      contents: [{
        uri: uri.href,
        mimeType: "text/plain",
        text: content
      }]
    };
  }
);
```

**When to use Resources vs Tools:**
- **Resources**: For data access with simple URI-based parameters
- **Tools**: For complex operations requiring validation and business logic
- **Resources**: When data is relatively static or template-based
- **Tools**: When operations have side effects or complex workflows

### Transport Options

The TypeScript SDK supports two main transport mechanisms:

#### Streamable HTTP (Recommended for Remote Servers)

```typescript
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import express from "express";

const app = express();
app.use(express.json());

app.post('/mcp', async (req, res) => {
  // Create new transport for each request (stateless, prevents request ID collisions)
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true
  });

  res.on('close', () => transport.close());

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```

#### stdio (For Local Integrations)

```typescript
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const transport = new StdioServerTransport();
await server.connect(transport);
```

**Transport selection:**
- **Streamable HTTP**: Web services, remote access, multiple clients
- **stdio**: Command-line tools, local development, subprocess integration

### Notification Support

Notify clients when server state changes:

```typescript
// McpServer wraps the low-level Server, exposed as `.server`, which can
// send protocol notifications directly

// Notify when the tools list changes
server.server.notification({
  method: "notifications/tools/list_changed"
});

// Notify when resources change
server.server.notification({
  method: "notifications/resources/list_changed"
});
```

Use notifications sparingly - only when server capabilities genuinely change.

---

## Code Best Practices

### Code Composability and Reusability

Your implementation MUST prioritize composability and code reuse:

1. **Extract Common Functionality**:
   - Create reusable helper functions for operations used across multiple tools
   - Build shared API clients for HTTP requests instead of duplicating code
   - Centralize error handling logic in utility functions
   - Extract business logic into dedicated functions that can be composed
   - Extract shared markdown or JSON field selection & formatting functionality

2. **Avoid Duplication**:
   - NEVER copy-paste similar code between tools
   - If you find yourself writing similar logic twice, extract it into a function
   - Common operations like pagination, filtering, field selection, and formatting should be shared (see the sketch below)
   - Authentication/authorization logic should be centralized
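
As a concrete illustration, a minimal sketch of the kind of shared field-selection and formatting helpers these rules imply (all names are hypothetical):

```typescript
// Shared field selection used by several list-style tools
function selectFields<T extends Record<string, unknown>>(
  items: T[],
  fields: (keyof T & string)[]
): Record<string, unknown>[] {
  return items.map(item =>
    Object.fromEntries(fields.map(f => [f, item[f]]))
  );
}

// Shared markdown rendering: every list tool delegates here instead of
// hand-rolling its own table formatting
function toMarkdownTable(rows: Record<string, unknown>[]): string {
  if (rows.length === 0) return "_No results_";
  const headers = Object.keys(rows[0]);
  return [
    `| ${headers.join(" | ")} |`,
    `| ${headers.map(() => "---").join(" | ")} |`,
    ...rows.map(r => `| ${headers.map(h => String(r[h] ?? "")).join(" | ")} |`)
  ].join("\n");
}
```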

## Building and Running

Always build your TypeScript code before running:

```bash
# Build the project
npm run build

# Run the server
npm start

# Development with auto-reload
npm run dev
```

Always ensure `npm run build` completes successfully before considering the implementation complete.

## Quality Checklist

Before finalizing your Node/TypeScript MCP server implementation, ensure:

### Strategic Design
- [ ] Tools enable complete workflows, not just API endpoint wrappers
- [ ] Tool names reflect natural task subdivisions
- [ ] Response formats optimize for agent context efficiency
- [ ] Human-readable identifiers used where appropriate
- [ ] Error messages guide agents toward correct usage

### Implementation Quality
- [ ] FOCUSED IMPLEMENTATION: Most important and valuable tools implemented
- [ ] All tools registered using `registerTool` with complete configuration
- [ ] All tools include `title`, `description`, `inputSchema`, and `annotations`
- [ ] Annotations correctly set (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
- [ ] All tools use Zod schemas for runtime input validation with `.strict()` enforcement
- [ ] All Zod schemas have proper constraints and descriptive error messages
- [ ] All tools have comprehensive descriptions with explicit input/output types
- [ ] Descriptions include return value examples and complete schema documentation
- [ ] Error messages are clear, actionable, and educational

### TypeScript Quality
- [ ] TypeScript interfaces are defined for all data structures
- [ ] Strict TypeScript is enabled in tsconfig.json
- [ ] No use of `any` type - use `unknown` or proper types instead
- [ ] All async functions have explicit Promise<T> return types
- [ ] Error handling uses proper type guards (e.g., `axios.isAxiosError`, `z.ZodError`)

### Advanced Features (where applicable)
- [ ] Resources registered for appropriate data endpoints
- [ ] Appropriate transport configured (stdio or streamable HTTP)
- [ ] Notifications implemented for dynamic server capabilities
- [ ] Type-safe with SDK interfaces

### Project Configuration
- [ ] Package.json includes all necessary dependencies
- [ ] Build script produces working JavaScript in dist/ directory
- [ ] Main entry point is properly configured as dist/index.js
- [ ] Server name follows format: `{service}-mcp-server`
- [ ] tsconfig.json properly configured with strict mode

### Code Quality
- [ ] Pagination is properly implemented where applicable
- [ ] Large responses check CHARACTER_LIMIT constant and truncate with clear messages
- [ ] Filtering options are provided for potentially large result sets
- [ ] All network operations handle timeouts and connection errors gracefully
- [ ] Common functionality is extracted into reusable functions
- [ ] Return types are consistent across similar operations

### Testing and Build
- [ ] `npm run build` completes successfully without errors
- [ ] dist/index.js created and executable
- [ ] Server runs: `node dist/index.js --help`
- [ ] All imports resolve correctly
- [ ] Sample tool calls work as expected
ocean_depths
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill ocean_depths from anthropic
View skill
# Ocean Depths

A professional and calming maritime theme that evokes the serenity of deep ocean waters.

## Color Palette

- **Deep Navy**: `#1a2332` - Primary background color
- **Teal**: `#2d8b8b` - Accent color for highlights and emphasis
- **Seafoam**: `#a8dadc` - Secondary accent for lighter elements
- **Cream**: `#f1faee` - Text and light backgrounds

## Typography

- **Headers**: DejaVu Sans Bold
- **Body Text**: DejaVu Sans

## Best Used For

Corporate presentations, financial reports, professional consulting decks, trust-building content.
ooxml
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill ooxml from anthropic
View skill
# Office Open XML Technical Reference for PowerPoint

**Important: Read this entire document before starting.** Critical XML schema rules and formatting requirements are covered throughout. Incorrect implementation can create invalid PPTX files that PowerPoint cannot open.

## Technical Guidelines

### Schema Compliance
- **Element ordering in `<p:txBody>`**: `<a:bodyPr>`, `<a:lstStyle>`, `<a:p>`
- **Whitespace**: Add `xml:space='preserve'` to `<a:t>` elements with leading/trailing spaces
- **Unicode**: Escape non-ASCII characters in ASCII content: a left curly quote (`“`) becomes `&#8220;`
- **Images**: Add to `ppt/media/`, reference in slide XML, set dimensions to fit slide bounds
- **Relationships**: Update `ppt/slides/_rels/slideN.xml.rels` for each slide's resources
- **Dirty attribute**: Add `dirty="0"` to `<a:rPr>` and `<a:endParaRPr>` elements to indicate clean state

## Presentation Structure

### Basic Slide Structure
```xml
<!-- ppt/slides/slide1.xml -->
<p:sld>
  <p:cSld>
    <p:spTree>
      <p:nvGrpSpPr>...</p:nvGrpSpPr>
      <p:grpSpPr>...</p:grpSpPr>
      <!-- Shapes go here -->
    </p:spTree>
  </p:cSld>
</p:sld>
```

### Text Box / Shape with Text
```xml
<p:sp>
  <p:nvSpPr>
    <p:cNvPr id="2" name="Title"/>
    <p:cNvSpPr>
      <a:spLocks noGrp="1"/>
    </p:cNvSpPr>
    <p:nvPr>
      <p:ph type="ctrTitle"/>
    </p:nvPr>
  </p:nvSpPr>
  <p:spPr>
    <a:xfrm>
      <a:off x="838200" y="365125"/>
      <a:ext cx="7772400" cy="1470025"/>
    </a:xfrm>
  </p:spPr>
  <p:txBody>
    <a:bodyPr/>
    <a:lstStyle/>
    <a:p>
      <a:r>
        <a:t>Slide Title</a:t>
      </a:r>
    </a:p>
  </p:txBody>
</p:sp>
```

### Text Formatting
```xml
<!-- Bold -->
<a:r>
  <a:rPr b="1"/>
  <a:t>Bold Text</a:t>
</a:r>

<!-- Italic -->
<a:r>
  <a:rPr i="1"/>
  <a:t>Italic Text</a:t>
</a:r>

<!-- Underline -->
<a:r>
  <a:rPr u="sng"/>
  <a:t>Underlined</a:t>
</a:r>

<!-- Highlight -->
<a:r>
  <a:rPr>
    <a:highlight>
      <a:srgbClr val="FFFF00"/>
    </a:highlight>
  </a:rPr>
  <a:t>Highlighted Text</a:t>
</a:r>

<!-- Font and Size: typeface goes on the <a:latin> child, not as an attribute of <a:rPr> -->
<a:r>
  <a:rPr sz="2400">
    <a:solidFill>
      <a:srgbClr val="FF0000"/>
    </a:solidFill>
    <a:latin typeface="Arial"/>
  </a:rPr>
  <a:t>Colored Arial 24pt</a:t>
</a:r>

<!-- Complete formatting example -->
<a:r>
  <a:rPr lang="en-US" sz="1400" b="1" dirty="0">
    <a:solidFill>
      <a:srgbClr val="FAFAFA"/>
    </a:solidFill>
  </a:rPr>
  <a:t>Formatted text</a:t>
</a:r>
```

### Lists
```xml
<!-- Bullet list -->
<a:p>
  <a:pPr lvl="0">
    <a:buChar char="•"/>
  </a:pPr>
  <a:r>
    <a:t>First bullet point</a:t>
  </a:r>
</a:p>

<!-- Numbered list -->
<a:p>
  <a:pPr lvl="0">
    <a:buAutoNum type="arabicPeriod"/>
  </a:pPr>
  <a:r>
    <a:t>First numbered item</a:t>
  </a:r>
</a:p>

<!-- Second level indent -->
<a:p>
  <a:pPr lvl="1">
    <a:buChar char="•"/>
  </a:pPr>
  <a:r>
    <a:t>Indented bullet</a:t>
  </a:r>
</a:p>
```

### Shapes
```xml
<!-- Rectangle -->
<p:sp>
  <p:nvSpPr>
    <p:cNvPr id="3" name="Rectangle"/>
    <p:cNvSpPr/>
    <p:nvPr/>
  </p:nvSpPr>
  <p:spPr>
    <a:xfrm>
      <a:off x="1000000" y="1000000"/>
      <a:ext cx="3000000" cy="2000000"/>
    </a:xfrm>
    <a:prstGeom prst="rect">
      <a:avLst/>
    </a:prstGeom>
    <a:solidFill>
      <a:srgbClr val="FF0000"/>
    </a:solidFill>
    <a:ln w="25400">
      <a:solidFill>
        <a:srgbClr val="000000"/>
      </a:solidFill>
    </a:ln>
  </p:spPr>
</p:sp>

<!-- Rounded Rectangle -->
<p:sp>
  <p:spPr>
    <a:prstGeom prst="roundRect">
      <a:avLst/>
    </a:prstGeom>
  </p:spPr>
</p:sp>

<!-- Circle/Ellipse -->
<p:sp>
  <p:spPr>
    <a:prstGeom prst="ellipse">
      <a:avLst/>
    </a:prstGeom>
  </p:spPr>
</p:sp>
```

### Images
```xml
<p:pic>
  <p:nvPicPr>
    <p:cNvPr id="4" name="Picture">
      <a:hlinkClick r:id="" action="ppaction://media"/>
    </p:cNvPr>
    <p:cNvPicPr>
      <a:picLocks noChangeAspect="1"/>
    </p:cNvPicPr>
    <p:nvPr/>
  </p:nvPicPr>
  <p:blipFill>
    <a:blip r:embed="rId2"/>
    <a:stretch>
      <a:fillRect/>
    </a:stretch>
  </p:blipFill>
  <p:spPr>
    <a:xfrm>
      <a:off x="1000000" y="1000000"/>
      <a:ext cx="3000000" cy="2000000"/>
    </a:xfrm>
    <a:prstGeom prst="rect">
      <a:avLst/>
    </a:prstGeom>
  </p:spPr>
</p:pic>
```

### Tables
```xml
<p:graphicFrame>
  <p:nvGraphicFramePr>
    <p:cNvPr id="5" name="Table"/>
    <p:cNvGraphicFramePr>
      <a:graphicFrameLocks noGrp="1"/>
    </p:cNvGraphicFramePr>
    <p:nvPr/>
  </p:nvGraphicFramePr>
  <p:xfrm>
    <a:off x="1000000" y="1000000"/>
    <a:ext cx="6000000" cy="2000000"/>
  </p:xfrm>
  <a:graphic>
    <a:graphicData uri="http://schemas.openxmlformats.org/drawingml/2006/table">
      <a:tbl>
        <a:tblGrid>
          <a:gridCol w="3000000"/>
          <a:gridCol w="3000000"/>
        </a:tblGrid>
        <a:tr h="500000">
          <a:tc>
            <a:txBody>
              <a:bodyPr/>
              <a:lstStyle/>
              <a:p>
                <a:r>
                  <a:t>Cell 1</a:t>
                </a:r>
              </a:p>
            </a:txBody>
          </a:tc>
          <a:tc>
            <a:txBody>
              <a:bodyPr/>
              <a:lstStyle/>
              <a:p>
                <a:r>
                  <a:t>Cell 2</a:t>
                </a:r>
              </a:p>
            </a:txBody>
          </a:tc>
        </a:tr>
      </a:tbl>
    </a:graphicData>
  </a:graphic>
</p:graphicFrame>
```

### Slide Layouts

```xml
<!-- Title Slide Layout -->
<p:sp>
  <p:nvSpPr>
    <p:nvPr>
      <p:ph type="ctrTitle"/>
    </p:nvPr>
  </p:nvSpPr>
  <!-- Title content -->
</p:sp>

<p:sp>
  <p:nvSpPr>
    <p:nvPr>
      <p:ph type="subTitle" idx="1"/>
    </p:nvPr>
  </p:nvSpPr>
  <!-- Subtitle content -->
</p:sp>

<!-- Content Slide Layout -->
<p:sp>
  <p:nvSpPr>
    <p:nvPr>
      <p:ph type="title"/>
    </p:nvPr>
  </p:nvSpPr>
  <!-- Slide title -->
</p:sp>

<p:sp>
  <p:nvSpPr>
    <p:nvPr>
      <p:ph type="body" idx="1"/>
    </p:nvPr>
  </p:nvSpPr>
  <!-- Content body -->
</p:sp>
```

## File Updates

When adding content, update these files:

**`ppt/_rels/presentation.xml.rels`:**
```xml
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/slide" Target="slides/slide1.xml"/>
<Relationship Id="rId2" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/slideMaster" Target="slideMasters/slideMaster1.xml"/>
```

**`ppt/slides/_rels/slide1.xml.rels`:**
```xml
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/slideLayout" Target="../slideLayouts/slideLayout1.xml"/>
<Relationship Id="rId2" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image" Target="../media/image1.png"/>
```

**`[Content_Types].xml`:**
```xml
<Default Extension="png" ContentType="image/png"/>
<Default Extension="jpg" ContentType="image/jpeg"/>
<Override PartName="/ppt/slides/slide1.xml" ContentType="application/vnd.openxmlformats-officedocument.presentationml.slide+xml"/>
```

**`ppt/presentation.xml`:**
```xml
<p:sldIdLst>
  <p:sldId id="256" r:id="rId1"/>
  <p:sldId id="257" r:id="rId2"/>
</p:sldIdLst>
```

**`docProps/app.xml`:** Update slide count and statistics
```xml
<Slides>2</Slides>
<Paragraphs>10</Paragraphs>
<Words>50</Words>
```

## Slide Operations

### Adding a New Slide
When adding a slide to the end of the presentation:

1. **Create the slide file** (`ppt/slides/slideN.xml`)
2. **Update `[Content_Types].xml`**: Add Override for the new slide
3. **Update `ppt/_rels/presentation.xml.rels`**: Add relationship for the new slide
4. **Update `ppt/presentation.xml`**: Add slide ID to `<p:sldIdLst>`
5. **Create slide relationships** (`ppt/slides/_rels/slideN.xml.rels`) if needed
6. **Update `docProps/app.xml`**: Increment slide count and update statistics (if present)

### Duplicating a Slide
1. Copy the source slide XML file with a new name
2. Update all IDs in the new slide to be unique
3. Follow the "Adding a New Slide" steps above
4. **CRITICAL**: Remove or update any notes slide references in `_rels` files
5. Remove references to unused media files

### Reordering Slides
1. **Update `ppt/presentation.xml`**: Reorder `<p:sldId>` elements in `<p:sldIdLst>`
2. The order of `<p:sldId>` elements determines slide order
3. Keep slide IDs and relationship IDs unchanged

Example:
```xml
<!-- Original order -->
<p:sldIdLst>
  <p:sldId id="256" r:id="rId2"/>
  <p:sldId id="257" r:id="rId3"/>
  <p:sldId id="258" r:id="rId4"/>
</p:sldIdLst>

<!-- After moving slide 3 to position 2 -->
<p:sldIdLst>
  <p:sldId id="256" r:id="rId2"/>
  <p:sldId id="258" r:id="rId4"/>
  <p:sldId id="257" r:id="rId3"/>
</p:sldIdLst>
```

### Deleting a Slide
1. **Remove from `ppt/presentation.xml`**: Delete the `<p:sldId>` entry
2. **Remove from `ppt/_rels/presentation.xml.rels`**: Delete the relationship
3. **Remove from `[Content_Types].xml`**: Delete the Override entry
4. **Delete files**: Remove `ppt/slides/slideN.xml` and `ppt/slides/_rels/slideN.xml.rels`
5. **Update `docProps/app.xml`**: Decrement slide count and update statistics
6. **Clean up unused media**: Remove orphaned images from `ppt/media/`

Note: Don't renumber remaining slides - keep their original IDs and filenames.


## Common Errors to Avoid

- **Encodings**: Escape unicode characters in ASCII content: a left curly quote (`“`) becomes `&#8220;`
- **Images**: Add to `ppt/media/` and update relationship files
- **Lists**: Omit bullets from list headers
- **IDs**: Use valid hexadecimal values for UUIDs
- **Themes**: Check all themes in `theme` directory for colors

## Validation Checklist for Template-Based Presentations

### Before Packing, Always:
- **Clean unused resources**: Remove unreferenced media, fonts, and notes directories
- **Fix Content_Types.xml**: Declare ALL slides, layouts, and themes present in the package
- **Fix relationship IDs**: 
   - Remove font embed references if not using embedded fonts
- **Remove broken references**: Check all `_rels` files for references to deleted resources

### Common Template Duplication Pitfalls:
- Multiple slides referencing the same notes slide after duplication
- Image/media references from template slides that no longer exist
- Font embedding references when fonts aren't included
- Missing slideLayout declarations for layouts 12-25
- The `docProps` directory is optional and may be absent after unpacking
output_patterns
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill output_patterns from anthropic
View skill
# Output Patterns

Use these patterns when skills need to produce consistent, high-quality output.

## Template Pattern

Provide templates for output format. Match the level of strictness to your needs.

**For strict requirements (like API responses or data formats):**

```markdown
## Report structure

ALWAYS use this exact template structure:

# [Analysis Title]

## Executive summary
[One-paragraph overview of key findings]

## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data

## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```

**For flexible guidance (when adaptation is useful):**

```markdown
## Report structure

Here is a sensible default format, but use your best judgment:

# [Analysis Title]

## Executive summary
[Overview]

## Key findings
[Adapt sections based on what you discover]

## Recommendations
[Tailor to the specific context]

Adjust sections as needed for the specific analysis type.
```

## Examples Pattern

For skills where output quality depends on seeing examples, provide input/output pairs:

```markdown
## Commit message format

Generate commit messages following these examples:

**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication

Add login endpoint and token validation middleware
```

**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion

Use UTC timestamps consistently across report generation
```

Follow this style: type(scope): brief description, then detailed explanation.
```

Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.
python_mcp_server
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill python_mcp_server from anthropic
View skill
# Python MCP Server Implementation Guide

## Overview

This document provides Python-specific best practices and examples for implementing MCP servers using the MCP Python SDK. It covers server setup, tool registration patterns, input validation with Pydantic, error handling, and complete working examples.

---

## Quick Reference

### Key Imports
```python
from mcp.server.fastmcp import FastMCP
from pydantic import BaseModel, Field, field_validator, ConfigDict
from typing import Optional, List, Dict, Any
from enum import Enum
import httpx
```

### Server Initialization
```python
mcp = FastMCP("service_mcp")
```

### Tool Registration Pattern
```python
@mcp.tool(name="tool_name", annotations={...})
async def tool_function(params: InputModel) -> str:
    # Implementation
    pass
```

---

## MCP Python SDK and FastMCP

The official MCP Python SDK provides FastMCP, a high-level framework for building MCP servers. It provides:
- Automatic description and inputSchema generation from function signatures and docstrings
- Pydantic model integration for input validation
- Decorator-based tool registration with `@mcp.tool`

**For complete SDK documentation, use WebFetch to load:**
`https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`

## Server Naming Convention

Python MCP servers must follow this naming pattern:
- **Format**: `{service}_mcp` (lowercase with underscores)
- **Examples**: `github_mcp`, `jira_mcp`, `stripe_mcp`

The name should be:
- General (not tied to specific features)
- Descriptive of the service/API being integrated
- Easy to infer from the task description
- Without version numbers or dates

## Tool Implementation

### Tool Naming

Use snake_case for tool names (e.g., "search_users", "create_project", "get_channel_info") with clear, action-oriented names.

**Avoid Naming Conflicts**: Include the service context to prevent overlaps:
- Use "slack_send_message" instead of just "send_message"
- Use "github_create_issue" instead of just "create_issue"
- Use "asana_list_tasks" instead of just "list_tasks"

### Tool Structure with FastMCP

Tools are defined using the `@mcp.tool` decorator with Pydantic models for input validation:

```python
from pydantic import BaseModel, Field, ConfigDict
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("example_mcp")

# Define Pydantic model for input validation
class ServiceToolInput(BaseModel):
    '''Input model for service tool operation.'''
    model_config = ConfigDict(
        str_strip_whitespace=True,  # Auto-strip whitespace from strings
        validate_assignment=True,    # Validate on assignment
        extra='forbid'              # Forbid extra fields
    )

    param1: str = Field(..., description="First parameter description (e.g., 'user123', 'project-abc')", min_length=1, max_length=100)
    param2: Optional[int] = Field(default=None, description="Optional integer parameter with constraints", ge=0, le=1000)
    tags: Optional[List[str]] = Field(default_factory=list, description="List of tags to apply", max_length=10)

@mcp.tool(
    name="service_tool_name",
    annotations={
        "title": "Human-Readable Tool Title",
        "readOnlyHint": True,     # Tool does not modify environment
        "destructiveHint": False,  # Tool does not perform destructive operations
        "idempotentHint": True,    # Repeated calls have no additional effect
        "openWorldHint": False     # Tool does not interact with external entities
    }
)
async def service_tool_name(params: ServiceToolInput) -> str:
    '''Tool description automatically becomes the 'description' field.

    This tool performs a specific operation on the service. It validates all inputs
    using the ServiceToolInput Pydantic model before processing.

    Args:
        params (ServiceToolInput): Validated input parameters containing:
            - param1 (str): First parameter description
            - param2 (Optional[int]): Optional parameter with default
            - tags (Optional[List[str]]): List of tags

    Returns:
        str: JSON-formatted response containing operation results
    '''
    # Implementation here
    pass
```

## Pydantic v2 Key Features

- Use `model_config` instead of nested `Config` class
- Use `field_validator` instead of deprecated `validator`
- Use `model_dump()` instead of deprecated `dict()`
- Validators require `@classmethod` decorator
- Type hints are required for validator methods

```python
from pydantic import BaseModel, Field, field_validator, ConfigDict

class CreateUserInput(BaseModel):
    model_config = ConfigDict(
        str_strip_whitespace=True,
        validate_assignment=True
    )

    name: str = Field(..., description="User's full name", min_length=1, max_length=100)
    email: str = Field(..., description="User's email address", pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$')
    age: int = Field(..., description="User's age", ge=0, le=150)

    @field_validator('email')
    @classmethod
    def validate_email(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("Email cannot be empty")
        return v.lower()
```

## Response Format Options

Support multiple output formats for flexibility:

```python
from enum import Enum

class ResponseFormat(str, Enum):
    '''Output format for tool responses.'''
    MARKDOWN = "markdown"
    JSON = "json"

class UserSearchInput(BaseModel):
    query: str = Field(..., description="Search query")
    response_format: ResponseFormat = Field(
        default=ResponseFormat.MARKDOWN,
        description="Output format: 'markdown' for human-readable or 'json' for machine-readable"
    )
```

**Markdown format**:
- Use headers, lists, and formatting for clarity
- Convert timestamps to human-readable format (e.g., "2024-01-15 10:30:00 UTC" instead of epoch)
- Show display names with IDs in parentheses (e.g., "@john.doe (U123456)")
- Omit verbose metadata (e.g., show only one profile image URL, not all sizes)
- Group related information logically

**JSON format**:
- Return complete, structured data suitable for programmatic processing
- Include all available fields and metadata
- Use consistent field names and types

## Pagination Implementation

For tools that list resources:

```python
class ListInput(BaseModel):
    limit: Optional[int] = Field(default=20, description="Maximum results to return", ge=1, le=100)
    offset: Optional[int] = Field(default=0, description="Number of results to skip for pagination", ge=0)

async def list_items(params: ListInput) -> str:
    # Make API request with pagination
    data = await api_request(limit=params.limit, offset=params.offset)

    # Return pagination info
    response = {
        "total": data["total"],
        "count": len(data["items"]),
        "offset": params.offset,
        "items": data["items"],
        "has_more": data["total"] > params.offset + len(data["items"]),
        "next_offset": params.offset + len(data["items"]) if data["total"] > params.offset + len(data["items"]) else None
    }
    return json.dumps(response, indent=2)
```

## Error Handling

Provide clear, actionable error messages:

```python
def _handle_api_error(e: Exception) -> str:
    '''Consistent error formatting across all tools.'''
    if isinstance(e, httpx.HTTPStatusError):
        if e.response.status_code == 404:
            return "Error: Resource not found. Please check the ID is correct."
        elif e.response.status_code == 403:
            return "Error: Permission denied. You don't have access to this resource."
        elif e.response.status_code == 429:
            return "Error: Rate limit exceeded. Please wait before making more requests."
        return f"Error: API request failed with status {e.response.status_code}"
    elif isinstance(e, httpx.TimeoutException):
        return "Error: Request timed out. Please try again."
    return f"Error: Unexpected error occurred: {type(e).__name__}"
```

## Shared Utilities

Extract common functionality into reusable functions:

```python
# Shared API request function
async def _make_api_request(endpoint: str, method: str = "GET", **kwargs) -> dict:
    '''Reusable function for all API calls.'''
    async with httpx.AsyncClient() as client:
        response = await client.request(
            method,
            f"{API_BASE_URL}/{endpoint}",
            timeout=30.0,
            **kwargs
        )
        response.raise_for_status()
        return response.json()
```

## Async/Await Best Practices

Always use async/await for network requests and I/O operations:

```python
# Good: Async network request
async def fetch_data(resource_id: str) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.get(f"{API_URL}/resource/{resource_id}")
        response.raise_for_status()
        return response.json()

# Bad: Synchronous request
def fetch_data(resource_id: str) -> dict:
    response = requests.get(f"{API_URL}/resource/{resource_id}")  # Blocks
    return response.json()
```

## Type Hints

Use type hints throughout:

```python
from typing import Optional, List, Dict, Any

async def get_user(user_id: str) -> Dict[str, Any]:
    data = await fetch_user(user_id)
    return {"id": data["id"], "name": data["name"]}
```

## Tool Docstrings

Every tool must have comprehensive docstrings with explicit type information:

```python
async def search_users(params: UserSearchInput) -> str:
    '''
    Search for users in the Example system by name, email, or team.

    This tool searches across all user profiles in the Example platform,
    supporting partial matches and various search filters. It does NOT
    create or modify users, only searches existing ones.

    Args:
        params (UserSearchInput): Validated input parameters containing:
            - query (str): Search string to match against names/emails (e.g., "john", "@example.com", "team:marketing")
            - limit (Optional[int]): Maximum results to return, between 1-100 (default: 20)
            - offset (Optional[int]): Number of results to skip for pagination (default: 0)

    Returns:
        str: JSON-formatted string containing search results with the following schema:

        Success response:
        {
            "total": int,           # Total number of matches found
            "count": int,           # Number of results in this response
            "offset": int,          # Current pagination offset
            "users": [
                {
                    "id": str,      # User ID (e.g., "U123456789")
                    "name": str,    # Full name (e.g., "John Doe")
                    "email": str,   # Email address (e.g., "john@example.com")
                    "team": str     # Team name (e.g., "Marketing") - optional
                }
            ]
        }

        Error response:
        "Error: <error message>" or "No users found matching '<query>'"

    Examples:
        - Use when: "Find all marketing team members" -> params with query="team:marketing"
        - Use when: "Search for John's account" -> params with query="john"
        - Don't use when: You need to create a user (use example_create_user instead)
        - Don't use when: You have a user ID and need full details (use example_get_user instead)

    Error Handling:
        - Input validation errors are handled by Pydantic model
        - Returns "Error: Rate limit exceeded" if too many requests (429 status)
        - Returns "Error: Invalid API authentication" if API key is invalid (401 status)
        - Returns formatted list of results or "No users found matching 'query'"
    '''
```

## Complete Example

See below for a complete Python MCP server example:

```python
#!/usr/bin/env python3
'''
MCP Server for Example Service.

This server provides tools to interact with Example API, including user search,
project management, and data export capabilities.
'''

from typing import Optional, List, Dict, Any
from enum import Enum
import httpx
from pydantic import BaseModel, Field, field_validator, ConfigDict
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("example_mcp")

# Constants
API_BASE_URL = "https://api.example.com/v1"

# Enums
class ResponseFormat(str, Enum):
    '''Output format for tool responses.'''
    MARKDOWN = "markdown"
    JSON = "json"

# Pydantic Models for Input Validation
class UserSearchInput(BaseModel):
    '''Input model for user search operations.'''
    model_config = ConfigDict(
        str_strip_whitespace=True,
        validate_assignment=True
    )

    query: str = Field(..., description="Search string to match against names/emails", min_length=2, max_length=200)
    limit: Optional[int] = Field(default=20, description="Maximum results to return", ge=1, le=100)
    offset: Optional[int] = Field(default=0, description="Number of results to skip for pagination", ge=0)
    response_format: ResponseFormat = Field(default=ResponseFormat.MARKDOWN, description="Output format")

    @field_validator('query')
    @classmethod
    def validate_query(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("Query cannot be empty or whitespace only")
        return v.strip()

# Shared utility functions
async def _make_api_request(endpoint: str, method: str = "GET", **kwargs) -> dict:
    '''Reusable function for all API calls.'''
    async with httpx.AsyncClient() as client:
        response = await client.request(
            method,
            f"{API_BASE_URL}/{endpoint}",
            timeout=30.0,
            **kwargs
        )
        response.raise_for_status()
        return response.json()

def _handle_api_error(e: Exception) -> str:
    '''Consistent error formatting across all tools.'''
    if isinstance(e, httpx.HTTPStatusError):
        if e.response.status_code == 404:
            return "Error: Resource not found. Please check the ID is correct."
        elif e.response.status_code == 403:
            return "Error: Permission denied. You don't have access to this resource."
        elif e.response.status_code == 429:
            return "Error: Rate limit exceeded. Please wait before making more requests."
        return f"Error: API request failed with status {e.response.status_code}"
    elif isinstance(e, httpx.TimeoutException):
        return "Error: Request timed out. Please try again."
    return f"Error: Unexpected error occurred: {type(e).__name__}"

# Tool definitions
@mcp.tool(
    name="example_search_users",
    annotations={
        "title": "Search Example Users",
        "readOnlyHint": True,
        "destructiveHint": False,
        "idempotentHint": True,
        "openWorldHint": True
    }
)
async def example_search_users(params: UserSearchInput) -> str:
    '''Search for users in the Example system by name, email, or team.

    [Full docstring as shown above]
    '''
    try:
        # Make API request using validated parameters
        data = await _make_api_request(
            "users/search",
            params={
                "q": params.query,
                "limit": params.limit,
                "offset": params.offset
            }
        )

        users = data.get("users", [])
        total = data.get("total", 0)

        if not users:
            return f"No users found matching '{params.query}'"

        # Format response based on requested format
        if params.response_format == ResponseFormat.MARKDOWN:
            lines = [f"# User Search Results: '{params.query}'", ""]
            lines.append(f"Found {total} users (showing {len(users)})")
            lines.append("")

            for user in users:
                lines.append(f"## {user['name']} ({user['id']})")
                lines.append(f"- **Email**: {user['email']}")
                if user.get('team'):
                    lines.append(f"- **Team**: {user['team']}")
                lines.append("")

            return "\n".join(lines)

        else:
            # Machine-readable JSON format
            import json
            response = {
                "total": total,
                "count": len(users),
                "offset": params.offset,
                "users": users
            }
            return json.dumps(response, indent=2)

    except Exception as e:
        return _handle_api_error(e)

if __name__ == "__main__":
    mcp.run()
```

---

## Advanced FastMCP Features

### Context Parameter Injection

FastMCP can automatically inject a `Context` parameter into tools for advanced capabilities like logging, progress reporting, resource reading, and user interaction:

```python
from mcp.server.fastmcp import FastMCP, Context
from pydantic import BaseModel

mcp = FastMCP("example_mcp")

class ApiKeyInput(BaseModel):
    '''Schema for elicited user input.'''
    api_key: str

@mcp.tool()
async def advanced_search(query: str, ctx: Context) -> str:
    '''Advanced tool with context access for logging and progress.'''

    # Report progress for long operations (progress, total, message)
    await ctx.report_progress(0.25, 1.0, "Starting search...")

    # Log information for debugging
    await ctx.info(f"Processing query: {query}")

    # Perform search
    results = await search_api(query)
    await ctx.report_progress(0.75, 1.0, "Formatting results...")

    # Access server configuration
    server_name = ctx.fastmcp.name

    return format_results(results)

@mcp.tool()
async def interactive_tool(resource_id: str, ctx: Context) -> str:
    '''Tool that can request additional input from users.'''

    # Request additional information when needed (elicitation)
    result = await ctx.elicit(
        message="Please provide your API key:",
        schema=ApiKeyInput
    )
    if result.action != "accept":
        return "Error: An API key is required to continue."

    # Use the provided key
    return await api_call(resource_id, result.data.api_key)
```

**Context capabilities:**
- `ctx.report_progress(progress, total, message)` - Report progress for long operations
- `ctx.info(message)` / `ctx.error()` / `ctx.debug()` - Logging
- `ctx.elicit(message, schema)` - Request structured input from users
- `ctx.fastmcp.name` - Access server configuration
- `ctx.read_resource(uri)` - Read MCP resources

### Resource Registration

Expose data as resources for efficient, template-based access:

```python
@mcp.resource("file://documents/{name}")
async def get_document(name: str) -> str:
    '''Expose documents as MCP resources.

    Resources are useful for static or semi-static data that doesn't
    require complex parameters. They use URI templates for flexible access.
    '''
    document_path = f"./docs/{name}"
    with open(document_path, "r") as f:
        return f.read()

@mcp.resource("config://settings/{key}")
async def get_setting(key: str, ctx: Context) -> str:
    '''Expose configuration as resources with context.'''
    settings = await load_settings()
    return json.dumps(settings.get(key, {}))
```

**When to use Resources vs Tools:**
- **Resources**: For data access with simple parameters (URI templates)
- **Tools**: For complex operations with validation and business logic

### Structured Output Types

FastMCP supports multiple return types beyond strings:

```python
from typing import TypedDict
from dataclasses import dataclass
from pydantic import BaseModel

# TypedDict for structured returns
class UserData(TypedDict):
    id: str
    name: str
    email: str

@mcp.tool()
async def get_user_typed(user_id: str) -> UserData:
    '''Returns structured data - FastMCP handles serialization.'''
    return {"id": user_id, "name": "John Doe", "email": "john@example.com"}

# Pydantic models for complex validation
class DetailedUser(BaseModel):
    id: str
    name: str
    email: str
    created_at: datetime
    metadata: Dict[str, Any]

@mcp.tool()
async def get_user_detailed(user_id: str) -> DetailedUser:
    '''Returns Pydantic model - automatically generates schema.'''
    user = await fetch_user(user_id)
    return DetailedUser(**user)
```

### Lifespan Management

Initialize resources that persist across requests:

```python
from contextlib import asynccontextmanager

@asynccontextmanager
async def app_lifespan():
    '''Manage resources that live for the server's lifetime.'''
    # Initialize connections, load config, etc.
    db = await connect_to_database()
    config = load_configuration()

    # Make available to all tools
    yield {"db": db, "config": config}

    # Cleanup on shutdown
    await db.close()

mcp = FastMCP("example_mcp", lifespan=app_lifespan)

@mcp.tool()
async def query_data(query: str, ctx: Context) -> str:
    '''Access lifespan resources through context.'''
    db = ctx.request_context.lifespan_state["db"]
    results = await db.query(query)
    return format_results(results)
```

### Transport Options

FastMCP supports two main transport mechanisms:

```python
# stdio transport (for local tools) - default
if __name__ == "__main__":
    mcp.run()

# Streamable HTTP transport (for remote servers); the port is configured
# on the FastMCP instance, e.g. FastMCP("example_mcp", port=8000)
if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

**Transport selection:**
- **stdio**: Command-line tools, local integrations, subprocess execution
- **Streamable HTTP**: Web services, remote access, multiple clients

---

## Code Best Practices

### Code Composability and Reusability

Your implementation MUST prioritize composability and code reuse:

1. **Extract Common Functionality**:
   - Create reusable helper functions for operations used across multiple tools
   - Build shared API clients for HTTP requests instead of duplicating code
   - Centralize error handling logic in utility functions
   - Extract business logic into dedicated functions that can be composed
   - Extract shared markdown or JSON field selection & formatting functionality

2. **Avoid Duplication**:
   - NEVER copy-paste similar code between tools
   - If you find yourself writing similar logic twice, extract it into a function
   - Common operations like pagination, filtering, field selection, and formatting should be shared
   - Authentication/authorization logic should be centralized

### Python-Specific Best Practices

1. **Use Type Hints**: Always include type annotations for function parameters and return values
2. **Pydantic Models**: Define clear Pydantic models for all input validation
3. **Avoid Manual Validation**: Let Pydantic handle input validation with constraints
4. **Proper Imports**: Group imports (standard library, third-party, local)
5. **Error Handling**: Use specific exception types (httpx.HTTPStatusError, not generic Exception)
6. **Async Context Managers**: Use `async with` for resources that need cleanup
7. **Constants**: Define module-level constants in UPPER_CASE

## Quality Checklist

Before finalizing your Python MCP server implementation, ensure:

### Strategic Design
- [ ] Tools enable complete workflows, not just API endpoint wrappers
- [ ] Tool names reflect natural task subdivisions
- [ ] Response formats optimize for agent context efficiency
- [ ] Human-readable identifiers used where appropriate
- [ ] Error messages guide agents toward correct usage

### Implementation Quality
- [ ] FOCUSED IMPLEMENTATION: Most important and valuable tools implemented
- [ ] All tools have descriptive names and documentation
- [ ] Return types are consistent across similar operations
- [ ] Error handling is implemented for all external calls
- [ ] Server name follows format: `{service}_mcp`
- [ ] All network operations use async/await
- [ ] Common functionality is extracted into reusable functions
- [ ] Error messages are clear, actionable, and educational
- [ ] Outputs are properly validated and formatted

### Tool Configuration
- [ ] All tools implement 'name' and 'annotations' in the decorator
- [ ] Annotations correctly set (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
- [ ] All tools use Pydantic BaseModel for input validation with Field() definitions
- [ ] All Pydantic Fields have explicit types and descriptions with constraints
- [ ] All tools have comprehensive docstrings with explicit input/output types
- [ ] Docstrings include complete schema structure for dict/JSON returns
- [ ] Pydantic models handle input validation (no manual validation needed)
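
For the decorator items above, a hedged sketch: it assumes a recent `mcp` Python SDK in which `@mcp.tool()` accepts a `ToolAnnotations` object, and reuses the `mcp` server object from earlier; `list_projects` and `ListProjectsInput` are hypothetical names:

```python
from mcp.types import ToolAnnotations

@mcp.tool(
    name="list_projects",
    annotations=ToolAnnotations(
        readOnlyHint=True,      # only reads data
        destructiveHint=False,  # never deletes or overwrites
        idempotentHint=True,    # repeated calls yield the same result
        openWorldHint=True,     # talks to an external service
    ),
)
async def list_projects(params: ListProjectsInput) -> str:
    """List projects as markdown, following the docstring guidance above."""
    ...
```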

### Advanced Features (where applicable)
- [ ] Context injection used for logging, progress, or elicitation
- [ ] Resources registered for appropriate data endpoints
- [ ] Lifespan management implemented for persistent connections
- [ ] Structured output types used (TypedDict, Pydantic models)
- [ ] Appropriate transport configured (stdio or streamable HTTP)

### Code Quality
- [ ] File includes proper imports including Pydantic imports
- [ ] Pagination is properly implemented where applicable
- [ ] Filtering options are provided for potentially large result sets
- [ ] All async functions are properly defined with `async def`
- [ ] HTTP client usage follows async patterns with proper context managers
- [ ] Type hints are used throughout the code
- [ ] Constants are defined at module level in UPPER_CASE

### Testing
- [ ] Server runs successfully: `python your_server.py --help`
- [ ] All imports resolve correctly
- [ ] Sample tool calls work as expected
- [ ] Error scenarios handled gracefully
readme
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill readme from anthropic
View skill
> **Note:** This repository contains Anthropic's implementation of skills for Claude. For information about the Agent Skills standard, see [agentskills.io](http://agentskills.io).

# Skills
Skills are folders of instructions, scripts, and resources that Claude loads dynamically to improve performance on specialized tasks. Skills teach Claude how to complete specific tasks in a repeatable way, whether that's creating documents with your company's brand guidelines, analyzing data using your organization's specific workflows, or automating personal tasks.

For more information, check out:
- [What are skills?](https://support.claude.com/en/articles/12512176-what-are-skills)
- [Using skills in Claude](https://support.claude.com/en/articles/12512180-using-skills-in-claude)
- [How to create custom skills](https://support.claude.com/en/articles/12512198-creating-custom-skills)
- [Equipping agents for the real world with Agent Skills](https://anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills)

# About This Repository

This repository contains skills that demonstrate what's possible with Claude's skills system. These skills range from creative applications (art, music, design) to technical tasks (testing web apps, MCP server generation) to enterprise workflows (communications, branding, etc.).

Each skill is self-contained in its own folder with a `SKILL.md` file containing the instructions and metadata that Claude uses. Browse through these skills to get inspiration for your own skills or to understand different patterns and approaches.

Many skills in this repo are open source (Apache 2.0). We've also included the document creation & editing skills that power [Claude's document capabilities](https://www.anthropic.com/news/create-files) under the hood in the [`skills/docx`](./skills/docx), [`skills/pdf`](./skills/pdf), [`skills/pptx`](./skills/pptx), and [`skills/xlsx`](./skills/xlsx) subfolders. These are source-available, not open source, but we wanted to share these with developers as a reference for more complex skills that are actively used in a production AI application.

## Disclaimer

**These skills are provided for demonstration and educational purposes only.** While some of these capabilities may be available in Claude, the implementations and behaviors you receive from Claude may differ from what is shown in these skills. These skills are meant to illustrate patterns and possibilities. Always test skills thoroughly in your own environment before relying on them for critical tasks.

# Skill Sets
- [./skills](./skills): Skill examples for Creative & Design, Development & Technical, Enterprise & Communication, and Document Skills
- [./spec](./spec): The Agent Skills specification
- [./template](./template): Skill template

# Try in Claude Code, Claude.ai, and the API

## Claude Code
You can register this repository as a Claude Code Plugin marketplace by running the following command in Claude Code:
```
/plugin marketplace add anthropics/skills
```

Then, to install a specific set of skills:
1. Select `Browse and install plugins`
2. Select `anthropic-agent-skills`
3. Select `document-skills` or `example-skills`
4. Select `Install now`

Alternatively, directly install either Plugin via:
```
/plugin install document-skills@anthropic-agent-skills
/plugin install example-skills@anthropic-agent-skills
```

After installing the plugin, you can use the skill by just mentioning it. For instance, if you install the `document-skills` plugin from the marketplace, you can ask Claude Code to do something like: "Use the PDF skill to extract the form fields from `path/to/some-file.pdf`"

## Claude.ai

These example skills are already available on paid plans in Claude.ai.

To use any skill from this repository or upload custom skills, follow the instructions in [Using skills in Claude](https://support.claude.com/en/articles/12512180-using-skills-in-claude#h_a4222fa77b).

## Claude API

You can use Anthropic's pre-built skills, and upload custom skills, via the Claude API. See the [Skills API Quickstart](https://docs.claude.com/en/api/skills-guide#creating-a-skill) for more.

# Creating a Basic Skill

Skills are simple to create - just a folder with a `SKILL.md` file containing YAML frontmatter and instructions. You can use the **template-skill** in this repository as a starting point:

```markdown
---
name: my-skill-name
description: A clear description of what this skill does and when to use it
---

# My Skill Name

[Add your instructions here that Claude will follow when this skill is active]

## Examples
- Example usage 1
- Example usage 2

## Guidelines
- Guideline 1
- Guideline 2
```

The frontmatter requires only two fields:
- `name` - A unique identifier for your skill (lowercase, hyphens for spaces)
- `description` - A complete description of what the skill does and when to use it

The markdown content below contains the instructions, examples, and guidelines that Claude will follow. For more details, see [How to create custom skills](https://support.claude.com/en/articles/12512198-creating-custom-skills).

# Partner Skills

Skills are a great way to teach Claude how to get better at using specific pieces of software. As we see awesome example skills from partners, we may highlight some of them here:

- **Notion** - [Notion Skills for Claude](https://www.notion.so/notiondevs/Notion-Skills-for-Claude-28da4445d27180c7af1df7d8615723d0)
reference
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill reference from anthropic
View skill
# PDF Processing Advanced Reference

This document contains advanced PDF processing features, detailed examples, and additional libraries not covered in the main skill instructions.

## pypdfium2 Library (Apache/BSD License)

### Overview
pypdfium2 is a Python binding for PDFium (Chromium's PDF library). It's excellent for fast PDF rendering, image generation, and serves as a PyMuPDF replacement.

### Render PDF to Images
```python
import pypdfium2 as pdfium
from PIL import Image

# Load PDF
pdf = pdfium.PdfDocument("document.pdf")

# Render page to image
page = pdf[0]  # First page
bitmap = page.render(
    scale=2.0,  # Higher resolution
    rotation=0  # No rotation
)

# Convert to PIL Image
img = bitmap.to_pil()
img.save("page_1.png", "PNG")

# Process multiple pages
for i, page in enumerate(pdf):
    bitmap = page.render(scale=1.5)
    img = bitmap.to_pil()
    img.save(f"page_{i+1}.jpg", "JPEG", quality=90)
```

### Extract Text with pypdfium2
```python
import pypdfium2 as pdfium

pdf = pdfium.PdfDocument("document.pdf")
for i, page in enumerate(pdf):
    textpage = page.get_textpage()
    text = textpage.get_text_range()
    print(f"Page {i+1} text length: {len(text)} chars")
```

## JavaScript Libraries

### pdf-lib (MIT License)

pdf-lib is a powerful JavaScript library for creating and modifying PDF documents in any JavaScript environment.

#### Load and Manipulate Existing PDF
```javascript
import { PDFDocument } from 'pdf-lib';
import fs from 'fs';

async function manipulatePDF() {
    // Load existing PDF
    const existingPdfBytes = fs.readFileSync('input.pdf');
    const pdfDoc = await PDFDocument.load(existingPdfBytes);

    // Get page count
    const pageCount = pdfDoc.getPageCount();
    console.log(`Document has ${pageCount} pages`);

    // Add new page
    const newPage = pdfDoc.addPage([600, 400]);
    newPage.drawText('Added by pdf-lib', {
        x: 100,
        y: 300,
        size: 16
    });

    // Save modified PDF
    const pdfBytes = await pdfDoc.save();
    fs.writeFileSync('modified.pdf', pdfBytes);
}
```

#### Create Complex PDFs from Scratch
```javascript
import { PDFDocument, rgb, StandardFonts } from 'pdf-lib';
import fs from 'fs';

async function createPDF() {
    const pdfDoc = await PDFDocument.create();

    // Add fonts
    const helveticaFont = await pdfDoc.embedFont(StandardFonts.Helvetica);
    const helveticaBold = await pdfDoc.embedFont(StandardFonts.HelveticaBold);

    // Add page
    const page = pdfDoc.addPage([595, 842]); // A4 size
    const { width, height } = page.getSize();

    // Add text with styling
    page.drawText('Invoice #12345', {
        x: 50,
        y: height - 50,
        size: 18,
        font: helveticaBold,
        color: rgb(0.2, 0.2, 0.8)
    });

    // Add rectangle (header background)
    page.drawRectangle({
        x: 40,
        y: height - 100,
        width: width - 80,
        height: 30,
        color: rgb(0.9, 0.9, 0.9)
    });

    // Add table-like content
    const items = [
        ['Item', 'Qty', 'Price', 'Total'],
        ['Widget', '2', '$50', '$100'],
        ['Gadget', '1', '$75', '$75']
    ];

    let yPos = height - 150;
    items.forEach(row => {
        let xPos = 50;
        row.forEach(cell => {
            page.drawText(cell, {
                x: xPos,
                y: yPos,
                size: 12,
                font: helveticaFont
            });
            xPos += 120;
        });
        yPos -= 25;
    });

    const pdfBytes = await pdfDoc.save();
    fs.writeFileSync('created.pdf', pdfBytes);
}
```

#### Advanced Merge and Split Operations
```javascript
import { PDFDocument } from 'pdf-lib';
import fs from 'fs';

async function mergePDFs() {
    // Create new document
    const mergedPdf = await PDFDocument.create();

    // Load source PDFs
    const pdf1Bytes = fs.readFileSync('doc1.pdf');
    const pdf2Bytes = fs.readFileSync('doc2.pdf');

    const pdf1 = await PDFDocument.load(pdf1Bytes);
    const pdf2 = await PDFDocument.load(pdf2Bytes);

    // Copy pages from first PDF
    const pdf1Pages = await mergedPdf.copyPages(pdf1, pdf1.getPageIndices());
    pdf1Pages.forEach(page => mergedPdf.addPage(page));

    // Copy specific pages from second PDF (pages 0, 2, 4)
    const pdf2Pages = await mergedPdf.copyPages(pdf2, [0, 2, 4]);
    pdf2Pages.forEach(page => mergedPdf.addPage(page));

    const mergedPdfBytes = await mergedPdf.save();
    fs.writeFileSync('merged.pdf', mergedPdfBytes);
}
```

### pdfjs-dist (Apache License)

PDF.js is Mozilla's JavaScript library for rendering PDFs in the browser.

#### Basic PDF Loading and Rendering
```javascript
import * as pdfjsLib from 'pdfjs-dist';

// Configure worker (important for performance)
pdfjsLib.GlobalWorkerOptions.workerSrc = './pdf.worker.js';

async function renderPDF() {
    // Load PDF
    const loadingTask = pdfjsLib.getDocument('document.pdf');
    const pdf = await loadingTask.promise;

    console.log(`Loaded PDF with ${pdf.numPages} pages`);

    // Get first page
    const page = await pdf.getPage(1);
    const viewport = page.getViewport({ scale: 1.5 });

    // Render to canvas
    const canvas = document.createElement('canvas');
    const context = canvas.getContext('2d');
    canvas.height = viewport.height;
    canvas.width = viewport.width;

    const renderContext = {
        canvasContext: context,
        viewport: viewport
    };

    await page.render(renderContext).promise;
    document.body.appendChild(canvas);
}
```

#### Extract Text with Coordinates
```javascript
import * as pdfjsLib from 'pdfjs-dist';

async function extractText() {
    const loadingTask = pdfjsLib.getDocument('document.pdf');
    const pdf = await loadingTask.promise;

    let fullText = '';

    // Extract text from all pages
    for (let i = 1; i <= pdf.numPages; i++) {
        const page = await pdf.getPage(i);
        const textContent = await page.getTextContent();

        const pageText = textContent.items
            .map(item => item.str)
            .join(' ');

        fullText += `\n--- Page ${i} ---\n${pageText}`;

        // Get text with coordinates for advanced processing
        const textWithCoords = textContent.items.map(item => ({
            text: item.str,
            x: item.transform[4],
            y: item.transform[5],
            width: item.width,
            height: item.height
        }));
    }

    console.log(fullText);
    return fullText;
}
```

#### Extract Annotations and Forms
```javascript
import * as pdfjsLib from 'pdfjs-dist';

async function extractAnnotations() {
    const loadingTask = pdfjsLib.getDocument('annotated.pdf');
    const pdf = await loadingTask.promise;

    for (let i = 1; i <= pdf.numPages; i++) {
        const page = await pdf.getPage(i);
        const annotations = await page.getAnnotations();

        annotations.forEach(annotation => {
            console.log(`Annotation type: ${annotation.subtype}`);
            console.log(`Content: ${annotation.contents}`);
            console.log(`Coordinates: ${JSON.stringify(annotation.rect)}`);
        });
    }
}
```

## Advanced Command-Line Operations

### poppler-utils Advanced Features

#### Extract Text with Bounding Box Coordinates
```bash
# Extract text with bounding box coordinates (essential for structured data)
pdftotext -bbox-layout document.pdf output.xml

# The XML output contains precise coordinates for each text element
```
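
To consume those coordinates, a short Python sketch (the element names and XHTML namespace below reflect recent poppler output; verify against your own file):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.w3.org/1999/xhtml}"  # pdftotext emits namespaced XHTML
tree = ET.parse("output.xml")

# Each <word> element carries xMin/yMin/xMax/yMax bounding-box attributes
for word in tree.iter(f"{NS}word"):
    print(f"{word.text!r} at ({word.attrib['xMin']}, {word.attrib['yMin']})")
```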

#### Advanced Image Conversion
```bash
# Convert to PNG images with specific resolution
pdftoppm -png -r 300 document.pdf output_prefix

# Convert specific page range with high resolution
pdftoppm -png -r 600 -f 1 -l 3 document.pdf high_res_pages

# Convert to JPEG with quality setting
pdftoppm -jpeg -jpegopt quality=85 -r 200 document.pdf jpeg_output
```

#### Extract Embedded Images
```bash
# Extract all embedded images with metadata
pdfimages -j -p document.pdf page_images

# List image info without extracting
pdfimages -list document.pdf

# Extract images in their original format
pdfimages -all document.pdf images/img
```

### qpdf Advanced Features

#### Complex Page Manipulation
```bash
# Split PDF into groups of 3 pages (%d in the name is replaced by the page range)
qpdf --split-pages=3 input.pdf output_group_%d.pdf

# Extract specific pages with complex ranges
qpdf input.pdf --pages input.pdf 1,3-5,8,10-end -- extracted.pdf

# Merge specific pages from multiple PDFs
qpdf --empty --pages doc1.pdf 1-3 doc2.pdf 5-7 doc3.pdf 2,4 -- combined.pdf
```

#### PDF Optimization and Repair
```bash
# Optimize PDF for web (linearize for streaming)
qpdf --linearize input.pdf optimized.pdf

# Recompress streams and pack objects into object streams
qpdf --compress-streams=y --object-streams=generate input.pdf compressed.pdf

# Attempt to repair corrupted PDF structure (a plain rewrite often fixes it)
qpdf --check input.pdf
qpdf damaged.pdf repaired.pdf

# Dump the detailed PDF structure as JSON for debugging
qpdf --json input.pdf > structure.json
```

#### Advanced Encryption
```bash
# Add password protection with specific permissions
qpdf --encrypt user_pass owner_pass 256 --print=none --modify=none -- input.pdf encrypted.pdf

# Check encryption status
qpdf --show-encryption encrypted.pdf

# Remove password protection (requires password)
qpdf --password=secret123 --decrypt encrypted.pdf decrypted.pdf
```

## Advanced Python Techniques

### pdfplumber Advanced Features

#### Extract Text with Precise Coordinates
```python
import pdfplumber

with pdfplumber.open("document.pdf") as pdf:
    page = pdf.pages[0]
    
    # Extract all text with coordinates
    chars = page.chars
    for char in chars[:10]:  # First 10 characters
        print(f"Char: '{char['text']}' at x:{char['x0']:.1f} y:{char['y0']:.1f}")
    
    # Extract text by bounding box (left, top, right, bottom)
    bbox_text = page.within_bbox((100, 100, 400, 200)).extract_text()
```

#### Advanced Table Extraction with Custom Settings
```python
import pdfplumber
import pandas as pd

with pdfplumber.open("complex_table.pdf") as pdf:
    page = pdf.pages[0]
    
    # Extract tables with custom settings for complex layouts
    table_settings = {
        "vertical_strategy": "lines",
        "horizontal_strategy": "lines",
        "snap_tolerance": 3,
        "intersection_tolerance": 15
    }
    tables = page.extract_tables(table_settings)
    
    # Visual debugging: overlay detected table lines and cells on a page image
    img = page.to_image(resolution=150)
    img.debug_tablefinder(table_settings)
    img.save("debug_layout.png")
```

### reportlab Advanced Features

#### Create Professional Reports with Tables
```python
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib import colors

# Sample data
data = [
    ['Product', 'Q1', 'Q2', 'Q3', 'Q4'],
    ['Widgets', '120', '135', '142', '158'],
    ['Gadgets', '85', '92', '98', '105']
]

# Create PDF with table
doc = SimpleDocTemplate("report.pdf")
elements = []

# Add title
styles = getSampleStyleSheet()
title = Paragraph("Quarterly Sales Report", styles['Title'])
elements.append(title)

# Add table with advanced styling
table = Table(data)
table.setStyle(TableStyle([
    ('BACKGROUND', (0, 0), (-1, 0), colors.grey),
    ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
    ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
    ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
    ('FONTSIZE', (0, 0), (-1, 0), 14),
    ('BOTTOMPADDING', (0, 0), (-1, 0), 12),
    ('BACKGROUND', (0, 1), (-1, -1), colors.beige),
    ('GRID', (0, 0), (-1, -1), 1, colors.black)
]))
elements.append(table)

doc.build(elements)
```

## Complex Workflows

### Extract Figures/Images from PDF

#### Method 1: Using pdfimages (fastest)
```bash
# Extract all images with original quality
pdfimages -all document.pdf images/img
```

#### Method 2: Using pypdfium2 + Image Processing
```python
import os
import pypdfium2 as pdfium
from PIL import Image
import numpy as np

def extract_figures(pdf_path, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    pdf = pdfium.PdfDocument(pdf_path)

    for page_num, page in enumerate(pdf):
        # Render high-resolution page
        bitmap = page.render(scale=3.0)
        img = bitmap.to_pil()

        # Convert to numpy for processing
        img_array = np.array(img.convert("RGB"))

        # Simple figure detection: bounding box of all non-white pixels
        # (real implementations would use contour detection per figure)
        mask = np.any(img_array != 255, axis=2)
        rows, cols = np.where(mask)
        if rows.size == 0:
            continue  # blank page

        # Crop the detected region and save it
        box = (int(cols.min()), int(rows.min()), int(cols.max()) + 1, int(rows.max()) + 1)
        img.crop(box).save(f"{output_dir}/page_{page_num + 1}_figure.png")
```

### Batch PDF Processing with Error Handling
```python
import os
import glob
from pypdf import PdfReader, PdfWriter
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def batch_process_pdfs(input_dir, operation='merge'):
    pdf_files = glob.glob(os.path.join(input_dir, "*.pdf"))
    
    if operation == 'merge':
        writer = PdfWriter()
        for pdf_file in pdf_files:
            try:
                reader = PdfReader(pdf_file)
                for page in reader.pages:
                    writer.add_page(page)
                logger.info(f"Processed: {pdf_file}")
            except Exception as e:
                logger.error(f"Failed to process {pdf_file}: {e}")
                continue
        
        with open("batch_merged.pdf", "wb") as output:
            writer.write(output)
    
    elif operation == 'extract_text':
        for pdf_file in pdf_files:
            try:
                reader = PdfReader(pdf_file)
                text = ""
                for page in reader.pages:
                    text += page.extract_text()
                
                output_file = pdf_file.replace('.pdf', '.txt')
                with open(output_file, 'w', encoding='utf-8') as f:
                    f.write(text)
                logger.info(f"Extracted text from: {pdf_file}")
                
            except Exception as e:
                logger.error(f"Failed to extract text from {pdf_file}: {e}")
                continue
```

### Advanced PDF Cropping
```python
from pypdf import PdfWriter, PdfReader

reader = PdfReader("input.pdf")
writer = PdfWriter()

# Crop page (left, bottom, right, top in points)
page = reader.pages[0]
page.mediabox.left = 50
page.mediabox.bottom = 50
page.mediabox.right = 550
page.mediabox.top = 750

writer.add_page(page)
with open("cropped.pdf", "wb") as output:
    writer.write(output)
```

## Performance Optimization Tips

### 1. For Large PDFs
- Use streaming approaches instead of loading entire PDF in memory
- Use `qpdf --split-pages` for splitting large files
- Process pages individually with pypdfium2

### 2. For Text Extraction
- `pdftotext -bbox-layout` is fastest for plain text extraction
- Use pdfplumber for structured data and tables
- Avoid `pypdf.extract_text()` for very large documents

### 3. For Image Extraction
- `pdfimages` is much faster than rendering pages
- Use low resolution for previews, high resolution for final output

### 4. For Form Filling
- pdf-lib maintains form structure better than most alternatives
- Pre-validate form fields before processing (a minimal sketch follows)
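
The tip above names pdf-lib; for consistency with the Python examples elsewhere in this reference, here is a minimal pypdf sketch of the same idea: list the fields first, then fill. The file names and the `name` field are hypothetical:

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("form.pdf")
writer = PdfWriter()
writer.append(reader)  # copy all pages, keeping form structure

# Pre-validate: inspect the fields that actually exist before writing values
fields = reader.get_fields() or {}
print(sorted(fields))

# Fill a known field on the first page (field name is hypothetical)
writer.update_page_form_field_values(writer.pages[0], {"name": "Jane Doe"})

with open("filled.pdf", "wb") as f:
    writer.write(f)
```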

### 5. Memory Management
```python
from pypdf import PdfReader, PdfWriter

# Process PDFs in chunks to bound memory usage
def process_large_pdf(pdf_path, chunk_size=10):
    reader = PdfReader(pdf_path)
    total_pages = len(reader.pages)
    
    for start_idx in range(0, total_pages, chunk_size):
        end_idx = min(start_idx + chunk_size, total_pages)
        writer = PdfWriter()
        
        for i in range(start_idx, end_idx):
            writer.add_page(reader.pages[i])
        
        # Process chunk
        with open(f"chunk_{start_idx//chunk_size}.pdf", "wb") as output:
            writer.write(output)
```

## Troubleshooting Common Issues

### Encrypted PDFs
```python
# Handle password-protected PDFs
from pypdf import PdfReader

try:
    reader = PdfReader("encrypted.pdf")
    if reader.is_encrypted:
        reader.decrypt("password")
except Exception as e:
    print(f"Failed to decrypt: {e}")
```

### Corrupted PDFs
```bash
# Use qpdf to repair
qpdf --check corrupted.pdf
qpdf --replace-input corrupted.pdf
```

### Text Extraction Issues
```python
# Fallback to OCR for scanned PDFs
import pytesseract
from pdf2image import convert_from_path

def extract_text_with_ocr(pdf_path):
    images = convert_from_path(pdf_path)
    text = ""
    for i, image in enumerate(images):
        text += f"\n--- Page {i + 1} ---\n" + pytesseract.image_to_string(image)
    return text
```

## License Information

- **pypdf**: BSD License
- **pdfplumber**: MIT License
- **pypdfium2**: Apache/BSD License
- **reportlab**: BSD License
- **poppler-utils**: GPL-2 License
- **qpdf**: Apache License
- **pdf-lib**: MIT License
- **pdfjs-dist**: Apache License
skill
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill skill from anthropic
View skill
---
name: template-skill
description: Replace with description of the skill and when Claude should use it.
---

# Insert instructions below
sunset_boulevard
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill sunset_boulevard from anthropic
View skill
# Sunset Boulevard

A warm and vibrant theme inspired by golden hour sunsets, perfect for energetic and creative presentations.

## Color Palette

- **Burnt Orange**: `#e76f51` - Primary accent color
- **Coral**: `#f4a261` - Secondary warm accent
- **Warm Sand**: `#e9c46a` - Highlighting and backgrounds
- **Deep Purple**: `#264653` - Dark contrast and text

## Typography

- **Headers**: DejaVu Serif Bold
- **Body Text**: DejaVu Sans

## Best Used For

Creative pitches, marketing presentations, lifestyle brands, event promotions, inspirational content.
tech_innovation
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill tech_innovation from anthropic
View skill
# Tech Innovation

A bold and modern theme with high-contrast colors perfect for cutting-edge technology presentations.

## Color Palette

- **Electric Blue**: `#0066ff` - Vibrant primary accent
- **Neon Cyan**: `#00ffff` - Bright highlight color
- **Dark Gray**: `#1e1e1e` - Deep backgrounds
- **White**: `#ffffff` - Clean text and contrast

## Typography

- **Headers**: DejaVu Sans Bold
- **Body Text**: DejaVu Sans

## Best Used For

Tech startups, software launches, innovation showcases, AI/ML presentations, digital transformation content.
third_party_notices
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill third_party_notices from anthropic
View skill
# **Third-Party Notices**

THE FOLLOWING SETS FORTH ATTRIBUTION NOTICES FOR THIRD PARTY SOFTWARE THAT MAY BE CONTAINED IN PORTIONS OF THIS PRODUCT.

---

## **BSD 2-Clause License**

The following components are licensed under BSD 2-Clause License reproduced below:

**imageio 2.37.0**, Copyright (c) 2014-2022, imageio developers

**imageio-ffmpeg 0.6.0**, Copyright (c) 2019-2025, imageio 

**License Text:**

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.  
     
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

---

## **GNU General Public License v3.0**

The following components are licensed under GNU General Public License v3.0 reproduced below:

**FFmpeg 7.0.2**, Copyright (c) 2000-2024 the FFmpeg developers

Source Code: [https://ffmpeg.org/releases/ffmpeg-7.0.2.tar.xz](https://ffmpeg.org/releases/ffmpeg-7.0.2.tar.xz)

**License Text:**

GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007

Copyright © 2007 Free Software Foundation, Inc. [https://fsf.org/](https://fsf.org/)

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based on the Program.

To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.

A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified it, and giving a relevant date.

b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".

c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.

d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or authors of the material; or

e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.

Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.

If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see [https://www.gnu.org/licenses/](https://www.gnu.org/licenses/).

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free software, and you are welcome to redistribute it under certain conditions; type 'show c' for details.

The hypothetical commands 'show w' and 'show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".

You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see [https://www.gnu.org/licenses/](https://www.gnu.org/licenses/).

The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read [https://www.gnu.org/licenses/why-not-lgpl.html](https://www.gnu.org/licenses/why-not-lgpl.html).

---

## **MIT-CMU License (HPND)**

The following components are licensed under MIT-CMU License (HPND) reproduced below:

**Pillow 11.3.0**, Copyright © 1997-2011 by Secret Labs AB, Copyright © 1995-2011 by Fredrik Lundh and contributors, Copyright © 2010 by Jeffrey A. Clark and contributors

**License Text:**

By obtaining, using, and/or copying this software and/or its associated documentation, you agree that you have read, understood, and will comply with the following terms and conditions:

Permission to use, copy, modify and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies, and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Secret Labs AB or the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.

SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

---

## **SIL Open Font License v1.1**

The following fonts are licensed under SIL Open Font License v1.1 reproduced below:

**Arsenal SC**, Copyright 2012 The Arsenal Project Authors ([andrij.design@gmail.com](mailto:andrij.design@gmail.com))

**Big Shoulders**, Copyright 2019 The Big Shoulders Project Authors ([https://github.com/xotypeco/big_shoulders](https://github.com/xotypeco/big_shoulders))

**Boldonse**, Copyright 2024 The Boldonse Project Authors ([https://github.com/googlefonts/boldonse](https://github.com/googlefonts/boldonse))

**Bricolage Grotesque**, Copyright 2022 The Bricolage Grotesque Project Authors ([https://github.com/ateliertriay/bricolage](https://github.com/ateliertriay/bricolage))

**Crimson Pro**, Copyright 2018 The Crimson Pro Project Authors ([https://github.com/Fonthausen/CrimsonPro](https://github.com/Fonthausen/CrimsonPro))

**DM Mono**, Copyright 2020 The DM Mono Project Authors ([https://www.github.com/googlefonts/dm-mono](https://www.github.com/googlefonts/dm-mono))

**Erica One**, Copyright (c) 2011 by LatinoType Limitada ([luciano@latinotype.com](mailto:luciano@latinotype.com)), with Reserved Font Name "Erica One"

**Geist Mono**, Copyright 2024 The Geist Project Authors ([https://github.com/vercel/geist-font.git](https://github.com/vercel/geist-font.git))

**Gloock**, Copyright 2022 The Gloock Project Authors ([https://github.com/duartp/gloock](https://github.com/duartp/gloock))

**IBM Plex Mono**, Copyright © 2017 IBM Corp., with Reserved Font Name "Plex"

**Instrument Sans**, Copyright 2022 The Instrument Sans Project Authors ([https://github.com/Instrument/instrument-sans](https://github.com/Instrument/instrument-sans))

**Italiana**, Copyright (c) 2011, Santiago Orozco ([hi@typemade.mx](mailto:hi@typemade.mx)), with Reserved Font Name "Italiana"

**JetBrains Mono**, Copyright 2020 The JetBrains Mono Project Authors ([https://github.com/JetBrains/JetBrainsMono](https://github.com/JetBrains/JetBrainsMono))

**Jura**, Copyright 2019 The Jura Project Authors ([https://github.com/ossobuffo/jura](https://github.com/ossobuffo/jura))

**Libre Baskerville**, Copyright 2012 The Libre Baskerville Project Authors ([https://github.com/impallari/Libre-Baskerville](https://github.com/impallari/Libre-Baskerville)), with Reserved Font Name "Libre Baskerville"

**Lora**, Copyright 2011 The Lora Project Authors ([https://github.com/cyrealtype/Lora-Cyrillic](https://github.com/cyrealtype/Lora-Cyrillic)), with Reserved Font Name "Lora"

**National Park**, Copyright 2025 The National Park Project Authors ([https://github.com/benhoepner/National-Park](https://github.com/benhoepner/National-Park))

**Nothing You Could Do**, Copyright (c) 2010, Kimberly Geswein (kimberlygeswein.com)

**Outfit**, Copyright 2021 The Outfit Project Authors ([https://github.com/Outfitio/Outfit-Fonts](https://github.com/Outfitio/Outfit-Fonts))

**Pixelify Sans**, Copyright 2021 The Pixelify Sans Project Authors ([https://github.com/eifetx/Pixelify-Sans](https://github.com/eifetx/Pixelify-Sans))

**Poiret One**, Copyright (c) 2011, Denis Masharov ([denis.masharov@gmail.com](mailto:denis.masharov@gmail.com))

**Red Hat Mono**, Copyright 2024 The Red Hat Project Authors ([https://github.com/RedHatOfficial/RedHatFont](https://github.com/RedHatOfficial/RedHatFont))

**Silkscreen**, Copyright 2001 The Silkscreen Project Authors ([https://github.com/googlefonts/silkscreen](https://github.com/googlefonts/silkscreen))

**Smooch Sans**, Copyright 2016 The Smooch Sans Project Authors ([https://github.com/googlefonts/smooch-sans](https://github.com/googlefonts/smooch-sans))

**Tektur**, Copyright 2023 The Tektur Project Authors ([https://www.github.com/hyvyys/Tektur](https://www.github.com/hyvyys/Tektur))

**Work Sans**, Copyright 2019 The Work Sans Project Authors ([https://github.com/weiweihuanghuang/Work-Sans](https://github.com/weiweihuanghuang/Work-Sans))

**Young Serif**, Copyright 2023 The Young Serif Project Authors ([https://github.com/noirblancrouge/YoungSerif](https://github.com/noirblancrouge/YoungSerif))

**License Text:**

**SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007**

PREAMBLE

The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.

The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.

DEFINITIONS

"Font Software" refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the copyright statement(s).

"Original Version" refers to the collection of Font Software components as distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting, or substituting \-- in part or in whole \-- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.

"Author" refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS

Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.

5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.

TERMINATION

This license becomes null and void if any of the above conditions are not met.

DISCLAIMER

THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
workflows
anthropic SKILL.md License: LICENSE.txt Version: Unknown
Imported skill workflows from anthropic
View skill
# Workflow Patterns

## Sequential Workflows

For complex tasks, break operations into clear, sequential steps. It is often helpful to give Claude an overview of the process towards the beginning of SKILL.md:

```markdown
Filling a PDF form involves these steps:

1. Analyze the form (run analyze_form.py)
2. Create field mapping (edit fields.json)
3. Validate mapping (run validate_fields.py)
4. Fill the form (run fill_form.py)
5. Verify output (run verify_output.py)
```

## Conditional Workflows

For tasks with branching logic, guide Claude through decision points:

```markdown
1. Determine the modification type:
   **Creating new content?** → Follow "Creation workflow" below
   **Editing existing content?** → Follow "Editing workflow" below

2. Creation workflow: [steps]
3. Editing workflow: [steps]
```
agents
langchain SKILL.md License: LICENSE Version: Unknown
Imported skill agents from langchain
View skill
# Content Writer Agent

You are a content writer for a technology company. Your job is to create engaging, informative content that educates readers about AI, software development, and emerging technologies.

## Brand Voice

- **Professional but approachable**: Write like a knowledgeable colleague, not a textbook
- **Clear and direct**: Avoid jargon unless necessary; explain technical concepts simply
- **Confident but not arrogant**: Share expertise without being condescending
- **Engaging**: Use concrete examples, analogies, and stories to illustrate points

## Writing Standards

1. **Use active voice**: "The agent processes requests" not "Requests are processed by the agent"
2. **Lead with value**: Start with what matters to the reader, not background
3. **One idea per paragraph**: Keep paragraphs focused and scannable
4. **Concrete over abstract**: Use specific examples, numbers, and case studies
5. **End with action**: Every piece should leave the reader knowing what to do next

## Content Pillars

Our content focuses on:
- AI agents and automation
- Developer tools and productivity
- Software architecture and best practices
- Emerging technologies and trends

## Formatting Guidelines

- Use headers (H2, H3) to break up long content
- Include code examples where relevant (with syntax highlighting)
- Add bullet points for lists of 3+ items
- Keep sentences under 25 words when possible
- Include a clear call-to-action at the end

## Research Requirements

Before writing on any topic:
1. Use the `researcher` subagent for in-depth topic research
2. Gather at least 3 credible sources
3. Identify the key points readers need to understand
4. Find concrete examples or case studies to illustrate concepts
default_agent_prompt
langchain SKILL.md License: LICENSE Version: Unknown
Imported skill default_agent_prompt from langchain
View skill
You are an AI assistant that helps users with various tasks including coding, research, and analysis.

# Core Behavior

Be concise and direct. Answer in fewer than 4 lines unless the user asks for detail.
After working on a file, just stop - don't explain what you did unless asked.
Avoid unnecessary introductions or conclusions.

When you run non-trivial bash commands, briefly explain what they do.

## Proactiveness
Take action when asked, but don't surprise users with unrequested actions.
If asked how to approach something, answer first before taking action.

## Following Conventions
- Check existing code for libraries and frameworks before assuming availability
- Mimic existing code style, naming conventions, and patterns
- Never add comments unless asked

## Task Management
Use write_todos for complex multi-step tasks (3+ steps). Mark tasks in_progress before starting, completed immediately after finishing.
For simple 1-2 step tasks, just do them directly without todos.
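
As a minimal sketch, a todo update for a three-step task might look like the following; the payload shape is an assumption for illustration, not a documented `write_todos` signature:

```python
# Hypothetical write_todos payload for a three-step refactor task.
# The field names ("content", "status") are assumptions for illustration.
write_todos([
    {"content": "Locate all call sites of parse_config()", "status": "completed"},
    {"content": "Migrate call sites to load_config()", "status": "in_progress"},
    {"content": "Run test suite and fix regressions", "status": "pending"},
])
```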

## File Reading Best Practices

When exploring codebases or reading multiple files, use pagination to prevent context overflow.

**Pattern for codebase exploration:**
1. First scan: `read_file(path, limit=100)` - See file structure and key sections
2. Targeted read: `read_file(path, offset=100, limit=200)` - Read specific sections if needed
3. Full read: Only use `read_file(path)` without limit when necessary for editing

**When to paginate:**
- Reading any file >500 lines
- Exploring unfamiliar codebases (always start with limit=100)
- Reading multiple files in sequence

**When full read is OK:**
- Small files (<500 lines)
- Files you need to edit immediately after reading
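
Putting the pattern together, a typical pass over a large unfamiliar file chains the three calls above in order (the path below is a placeholder):

```python
# Paginated exploration of a large file, following the pattern above.
path = "/repo/src/server/handlers.py"  # placeholder path

read_file(path, limit=100)              # first scan: structure and key sections
read_file(path, offset=100, limit=200)  # targeted read: the section of interest
read_file(path)                         # full read: only when about to edit
```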

## Working with Subagents (task tool)
When delegating to subagents:
- **Use filesystem for large I/O**: If input/output is large (>500 words), communicate via files
- **Parallelize independent work**: Spawn parallel subagents for independent tasks
- **Clear specifications**: Tell subagent exactly what format/structure you need
- **Main agent synthesizes**: Subagents gather/execute, main agent integrates results
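
A sketch of the file-mediated handoff, assuming a `task(...)` invocation whose exact parameters this prompt does not specify:

```python
# File-mediated delegation: large input goes through the filesystem, not
# through the subagent's prompt. The task(...) call shape is an assumption
# for illustration; only the tool name appears in the prompt above.
brief = "...long research brief, well over 500 words..."  # placeholder
write_file("/tmp/research_brief.md", brief)

task(
    description=(
        "Read /tmp/research_brief.md and write a summary to "
        "/tmp/research_summary.md as markdown bullets, max 300 words."
    ),
)

summary = read_file("/tmp/research_summary.md")  # main agent integrates results
```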

## Tools

### shell
Execute shell commands. Always quote paths with spaces.
The bash command will be run from your current working directory.
Examples: `pytest /foo/bar/tests` (good), `cd /foo/bar && pytest tests` (bad)

### File Tools
- read_file: Read file contents (use absolute paths)
- edit_file: Replace exact strings in files (must read first, provide unique old_string)
- write_file: Create or overwrite files
- ls: List directory contents
- glob: Find files by pattern (e.g., "**/*.py")
- grep: Search file contents

Always use absolute paths starting with /.
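
For instance, the read-before-edit rule for `edit_file` looks like this sketch; argument names beyond `old_string` are assumptions for illustration:

```python
# Read first, then replace an exact, unique string.
read_file("/repo/src/app/config.py")
edit_file(
    "/repo/src/app/config.py",
    old_string="TIMEOUT_SECONDS = 30",  # must appear exactly once in the file
    new_string="TIMEOUT_SECONDS = 60",
)
```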

### web_search
Search for documentation, error solutions, and code examples.

### http_request
Make HTTP requests to APIs (GET, POST, etc.).

## Code References
When referencing code, use format: `file_path:line_number`

## Documentation
- Do NOT create excessive markdown summary/documentation files after completing work
- Focus on the work itself, not documenting what you did
- Only create documentation when explicitly requested

## Bundled Sources

### __main__.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/__main__.py`

```python
"""Allow running the CLI as: python -m deepagents.cli."""

from deepagents_cli.main import cli_main

if __name__ == "__main__":
    cli_main()
```

### _version.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/_version.py`

```python
"""Version information for deepagents-cli."""

__version__ = "0.0.13a2"
```

### app.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/app.py`

```python
"""Textual UI application for deepagents-cli."""
# ruff: noqa: BLE001, PLR0912, PLR2004, S110, SIM108

from __future__ import annotations

import asyncio
import contextlib
import subprocess
import uuid
from pathlib import Path
from typing import TYPE_CHECKING, Any, ClassVar

from textual.app import App
from textual.binding import Binding, BindingType
from textual.containers import Container, VerticalScroll
from textual.css.query import NoMatches
from textual.events import Click, MouseUp  # noqa: TC002 - used in type annotation
from textual.widgets import Static  # noqa: TC002 - used at runtime

from deepagents_cli.clipboard import copy_selection_to_clipboard
from deepagents_cli.textual_adapter import TextualUIAdapter, execute_task_textual
from deepagents_cli.widgets.approval import ApprovalMenu
from deepagents_cli.widgets.chat_input import ChatInput
from deepagents_cli.widgets.loading import LoadingWidget
from deepagents_cli.widgets.messages import (
    AssistantMessage,
    ErrorMessage,
    SystemMessage,
    ToolCallMessage,
    UserMessage,
)
from deepagents_cli.widgets.status import StatusBar
from deepagents_cli.widgets.welcome import WelcomeBanner

if TYPE_CHECKING:
    from langgraph.pregel import Pregel
    from textual.app import ComposeResult
    from textual.worker import Worker


class TextualTokenTracker:
    """Token tracker that updates the status bar."""

    def __init__(self, update_callback: callable, hide_callback: callable | None = None) -> None:
        """Initialize with callbacks to update the display."""
        self._update_callback = update_callback
        self._hide_callback = hide_callback
        self.current_context = 0

    def add(self, total_tokens: int, _output_tokens: int = 0) -> None:
        """Update token count from a response.

        Args:
            total_tokens: Total context tokens (input + output from usage_metadata)
            _output_tokens: Unused, kept for backwards compatibility
        """
        self.current_context = total_tokens
        self._update_callback(self.current_context)

    def reset(self) -> None:
        """Reset token count."""
        self.current_context = 0
        self._update_callback(0)

    def hide(self) -> None:
        """Hide the token display (e.g., during streaming)."""
        if self._hide_callback:
            self._hide_callback()

    def show(self) -> None:
        """Show the token display with current value (e.g., after interrupt)."""
        self._update_callback(self.current_context)


class TextualSessionState:
    """Session state for the Textual app."""

    def __init__(
        self,
        *,
        auto_approve: bool = False,
        thread_id: str | None = None,
    ) -> None:
        """Initialize session state.

        Args:
            auto_approve: Whether to auto-approve tool calls
            thread_id: Optional thread ID (generates 8-char hex if not provided)
        """
        self.auto_approve = auto_approve
        self.thread_id = thread_id if thread_id else uuid.uuid4().hex[:8]

    def reset_thread(self) -> str:
        """Reset to a new thread. Returns the new thread_id."""
        self.thread_id = uuid.uuid4().hex[:8]
        return self.thread_id


class DeepAgentsApp(App):
    """Main Textual application for deepagents-cli."""

    TITLE = "DeepAgents"
    CSS_PATH = "app.tcss"
    ENABLE_COMMAND_PALETTE = False

    # Slow down scroll speed (default is 3 lines per scroll event)
    # Using 0.25 to require 4 scroll events per line - very smooth
    SCROLL_SENSITIVITY_Y = 0.25

    BINDINGS: ClassVar[list[BindingType]] = [
        Binding("escape", "interrupt", "Interrupt", show=False, priority=True),
        Binding("ctrl+c", "quit_or_interrupt", "Quit/Interrupt", show=False),
        Binding("ctrl+d", "quit_app", "Quit", show=False, priority=True),
        Binding("ctrl+t", "toggle_auto_approve", "Toggle Auto-Approve", show=False),
        Binding(
            "shift+tab", "toggle_auto_approve", "Toggle Auto-Approve", show=False, priority=True
        ),
        Binding("ctrl+o", "toggle_tool_output", "Toggle Tool Output", show=False),
        # Approval menu keys (handled at App level for reliability)
        Binding("up", "approval_up", "Up", show=False),
        Binding("k", "approval_up", "Up", show=False),
        Binding("down", "approval_down", "Down", show=False),
        Binding("j", "approval_down", "Down", show=False),
        Binding("enter", "approval_select", "Select", show=False),
        Binding("y", "approval_yes", "Yes", show=False),
        Binding("1", "approval_yes", "Yes", show=False),
        Binding("n", "approval_no", "No", show=False),
        Binding("2", "approval_no", "No", show=False),
        Binding("a", "approval_auto", "Auto", show=False),
        Binding("3", "approval_auto", "Auto", show=False),
    ]

    def __init__(
        self,
        *,
        agent: Pregel | None = None,
        assistant_id: str | None = None,
        backend: Any = None,  # noqa: ANN401  # CompositeBackend
        auto_approve: bool = False,
        cwd: str | Path | None = None,
        thread_id: str | None = None,
        initial_prompt: str | None = None,
        **kwargs: Any,
    ) -> None:
        """Initialize the DeepAgents application.

        Args:
            agent: Pre-configured LangGraph agent (optional for standalone mode)
            assistant_id: Agent identifier for memory storage
            backend: Backend for file operations
            auto_approve: Whether to start with auto-approve enabled
            cwd: Current working directory to display
            thread_id: Optional thread ID for session persistence
            initial_prompt: Optional prompt to auto-submit when session starts
            **kwargs: Additional arguments passed to parent
        """
        super().__init__(**kwargs)
        self._agent = agent
        self._assistant_id = assistant_id
        self._backend = backend
        self._auto_approve = auto_approve
        self._cwd = str(cwd) if cwd else str(Path.cwd())
        # Avoid collision with App._thread_id
        self._lc_thread_id = thread_id
        self._initial_prompt = initial_prompt
        self._status_bar: StatusBar | None = None
        self._chat_input: ChatInput | None = None
        self._quit_pending = False
        self._session_state: TextualSessionState | None = None
        self._ui_adapter: TextualUIAdapter | None = None
        self._pending_approval: asyncio.Future | None = None
        self._pending_approval_widget: Any = None
        # Agent task tracking for interruption
        self._agent_worker: Worker[None] | None = None
        self._agent_running = False
        self._loading_widget: LoadingWidget | None = None
        self._token_tracker: TextualTokenTracker | None = None

    def compose(self) -> ComposeResult:
        """Compose the application layout."""
        # Main chat area with scrollable messages
        with VerticalScroll(id="chat"):
            yield WelcomeBanner(id="welcome-banner")
            yield Container(id="messages")  # Container can have children mounted

        # Bottom app container - holds either ChatInput OR ApprovalMenu (swapped)
        # This is OUTSIDE VerticalScroll so arrow keys work in approval
        with Container(id="bottom-app-container"):
            yield ChatInput(cwd=self._cwd, id="input-area")

        # Status bar at bottom
        yield StatusBar(cwd=self._cwd, id="status-bar")

    async def on_mount(self) -> None:
        """Initialize components after mount."""
        self._status_bar = self.query_one("#status-bar", StatusBar)
        self._chat_input = self.query_one("#input-area", ChatInput)

        # Set initial auto-approve state
        if self._auto_approve:
            self._status_bar.set_auto_approve(enabled=True)

        # Create session state
        self._session_state = TextualSessionState(
            auto_approve=self._auto_approve,
            thread_id=self._lc_thread_id,
        )

        # Create token tracker that updates status bar
        self._token_tracker = TextualTokenTracker(self._update_tokens, self._hide_tokens)

        # Create UI adapter if agent is provided
        if self._agent:
            self._ui_adapter = TextualUIAdapter(
                mount_message=self._mount_message,
                update_status=self._update_status,
                request_approval=self._request_approval,
                on_auto_approve_enabled=self._on_auto_approve_enabled,
                scroll_to_bottom=self._scroll_chat_to_bottom,
            )
            self._ui_adapter.set_token_tracker(self._token_tracker)

        # Focus the input (autocomplete is now built into ChatInput)
        self._chat_input.focus_input()

        # Auto-submit initial prompt if provided
        if self._initial_prompt and self._initial_prompt.strip():
            # Use call_after_refresh to ensure UI is fully mounted before submitting
            self.call_after_refresh(
                lambda: asyncio.create_task(self._handle_user_message(self._initial_prompt))
            )

    def _update_status(self, message: str) -> None:
        """Update the status bar with a message."""
        if self._status_bar:
            self._status_bar.set_status_message(message)

    def _update_tokens(self, count: int) -> None:
        """Update the token count in status bar."""
        if self._status_bar:
            self._status_bar.set_tokens(count)

    def _hide_tokens(self) -> None:
        """Hide the token display during streaming."""
        if self._status_bar:
            self._status_bar.hide_tokens()

    def _scroll_chat_to_bottom(self) -> None:
        """Scroll the chat area to the bottom.

        Uses anchor() for smoother streaming - keeps scroll locked to bottom
        as new content is added without causing visual jumps.
        """
        try:
            chat = self.query_one("#chat", VerticalScroll)
            # anchor() locks scroll to bottom and auto-scrolls as content grows
            # Much smoother than calling scroll_end() on every chunk
            chat.anchor()
        except NoMatches:
            pass

    async def _request_approval(
        self,
        action_request: Any,  # noqa: ANN401
        assistant_id: str | None,
    ) -> asyncio.Future:
        """Request user approval inline in the messages area.

        Returns a Future that resolves to the user's decision.
        Mounts ApprovalMenu in the messages area (inline with chat).
        ChatInput stays visible - user can still see it.

        If another approval is already pending, queue this one.
        """
        loop = asyncio.get_running_loop()
        result_future: asyncio.Future = loop.create_future()

        # If there's already a pending approval, wait for it to complete first
        if self._pending_approval_widget is not None:
            while self._pending_approval_widget is not None:  # noqa: ASYNC110
                await asyncio.sleep(0.1)

        # Create menu with unique ID to avoid conflicts
        unique_id = f"approval-menu-{uuid.uuid4().hex[:8]}"
        menu = ApprovalMenu(action_request, assistant_id, id=unique_id)
        menu.set_future(result_future)

        # Store reference
        self._pending_approval_widget = menu

        # Pause the loading spinner during approval
        if self._loading_widget:
            self._loading_widget.pause("Awaiting decision")

        # Update status to show we're waiting for approval
        self._update_status("Waiting for approval...")

        # Mount approval inline in messages area (not replacing ChatInput)
        try:
            messages = self.query_one("#messages", Container)
            await messages.mount(menu)
            self._scroll_chat_to_bottom()
            # Focus approval menu
            self.call_after_refresh(menu.focus)
        except Exception as e:
            self._pending_approval_widget = None
            if not result_future.done():
                result_future.set_exception(e)

        return result_future

    def _on_auto_approve_enabled(self) -> None:
        """Callback when auto-approve mode is enabled via HITL."""
        self._auto_approve = True
        if self._status_bar:
            self._status_bar.set_auto_approve(enabled=True)
        if self._session_state:
            self._session_state.auto_approve = True

    async def on_chat_input_submitted(self, event: ChatInput.Submitted) -> None:
        """Handle submitted input from ChatInput widget."""
        value = event.value
        mode = event.mode

        # Reset quit pending state on any input
        self._quit_pending = False

        # Handle different modes
        if mode == "bash":
            # Bash command - strip the ! prefix
            await self._handle_bash_command(value.removeprefix("!"))
        elif mode == "command":
            # Slash command
            await self._handle_command(value)
        else:
            # Normal message - will be sent to agent
            await self._handle_user_message(value)

    def on_chat_input_mode_changed(self, event: ChatInput.ModeChanged) -> None:
        """Update status bar when input mode changes."""
        if self._status_bar:
            self._status_bar.set_mode(event.mode)

    async def on_approval_menu_decided(
        self,
        event: Any,  # noqa: ANN401, ARG002
    ) -> None:
        """Handle approval menu decision - remove from messages and refocus input."""
        # Remove ApprovalMenu using stored reference
        if self._pending_approval_widget:
            await self._pending_approval_widget.remove()
            self._pending_approval_widget = None

        # Resume the loading spinner after approval
        if self._loading_widget:
            self._loading_widget.resume()

        # Clear status message
        self._update_status("")

        # Refocus the chat input
        if self._chat_input:
            self.call_after_refresh(self._chat_input.focus_input)

    async def _handle_bash_command(self, command: str) -> None:
        """Handle a bash command (! prefix).

        Args:
            command: The bash command to execute
        """
        # Mount user message showing the bash command
        await self._mount_message(UserMessage(f"!{command}"))

        # Execute the bash command (shell=True is intentional for user-requested bash)
        try:
            result = await asyncio.to_thread(  # noqa: S604
                subprocess.run,
                command,
                shell=True,
                capture_output=True,
                text=True,
                cwd=self._cwd,
                timeout=60,
            )
            output = result.stdout.strip()
            if result.stderr:
                output += f"\n[stderr]\n{result.stderr.strip()}"

            if output:
                # Display output as assistant message (uses markdown for code blocks)
                msg = AssistantMessage(f"```\n{output}\n```")
                await self._mount_message(msg)
                await msg.write_initial_content()
            else:
                await self._mount_message(SystemMessage("Command completed (no output)"))

            if result.returncode != 0:
                await self._mount_message(ErrorMessage(f"Exit code: {result.returncode}"))

            # Scroll to show the output
            self._scroll_chat_to_bottom()

        except subprocess.TimeoutExpired:
            await self._mount_message(ErrorMessage("Command timed out (60s limit)"))
        except OSError as e:
            await self._mount_message(ErrorMessage(str(e)))

    async def _handle_command(self, command: str) -> None:
        """Handle a slash command.

        Args:
            command: The slash command (including /)
        """
        cmd = command.lower().strip()

        if cmd in ("/quit", "/exit", "/q"):
            self.exit()
        elif cmd == "/help":
            await self._mount_message(UserMessage(command))
            await self._mount_message(
                SystemMessage("Commands: /quit, /clear, /tokens, /threads, /help")
            )

        elif cmd == "/version":
            await self._mount_message(UserMessage(command))
            # Show CLI package version
            try:
                from deepagents_cli._version import __version__

                await self._mount_message(SystemMessage(f"deepagents version: {__version__}"))
            except Exception:
                await self._mount_message(SystemMessage("deepagents version: unknown"))
        elif cmd == "/clear":
            await self._clear_messages()
            if self._token_tracker:
                self._token_tracker.reset()
            # Clear status message (e.g., "Interrupted" from previous session)
            self._update_status("")
            # Reset thread to start fresh conversation
            if self._session_state:
                new_thread_id = self._session_state.reset_thread()
                await self._mount_message(SystemMessage(f"Started new session: {new_thread_id}"))
        elif cmd == "/threads":
            await self._mount_message(UserMessage(command))
            if self._session_state:
                await self._mount_message(
                    SystemMessage(f"Current session: {self._session_state.thread_id}")
                )
            else:
                await self._mount_message(SystemMessage("No active session"))
        elif cmd == "/tokens":
            await self._mount_message(UserMessage(command))
            if self._token_tracker and self._token_tracker.current_context > 0:
                count = self._token_tracker.current_context
                if count >= 1000:
                    formatted = f"{count / 1000:.1f}K"
                else:
                    formatted = str(count)
                await self._mount_message(SystemMessage(f"Current context: {formatted} tokens"))
            else:
                await self._mount_message(SystemMessage("No token usage yet"))
        else:
            await self._mount_message(UserMessage(command))
            await self._mount_message(SystemMessage(f"Unknown command: {cmd}"))

    async def _handle_user_message(self, message: str) -> None:
        """Handle a user message to send to the agent.

        Args:
            message: The user's message
        """
        # Mount the user message
        await self._mount_message(UserMessage(message))

        # Check if agent is available
        if self._agent and self._ui_adapter and self._session_state:
            # Show loading widget
            self._loading_widget = LoadingWidget("Thinking")
            await self._mount_message(self._loading_widget)
            self._agent_running = True

            # Disable cursor blink while agent is working
            if self._chat_input:
                self._chat_input.set_cursor_active(active=False)

            # Use run_worker to avoid blocking the main event loop
            # This allows the UI to remain responsive during agent execution
            self._agent_worker = self.run_worker(
                self._run_agent_task(message),
                exclusive=False,
            )
        else:
            await self._mount_message(
                SystemMessage("Agent not configured. Run with --agent flag or use standalone mode.")
            )

    async def _run_agent_task(self, message: str) -> None:
        """Run the agent task in a background worker.

        This runs in a worker thread so the main event loop stays responsive.
        """
        try:
            await execute_task_textual(
                user_input=message,
                agent=self._agent,
                assistant_id=self._assistant_id,
                session_state=self._session_state,
                adapter=self._ui_adapter,
                backend=self._backend,
            )
        except Exception as e:
            await self._mount_message(ErrorMessage(f"Agent error: {e}"))
        finally:
            # Clean up loading widget and agent state
            await self._cleanup_agent_task()

    async def _cleanup_agent_task(self) -> None:
        """Clean up after agent task completes or is cancelled."""
        self._agent_running = False
        self._agent_worker = None

        # Remove loading widget if present
        if self._loading_widget:
            with contextlib.suppress(Exception):
                await self._loading_widget.remove()
            self._loading_widget = None

        # Re-enable cursor blink now that agent is done
        if self._chat_input:
            self._chat_input.set_cursor_active(active=True)

        # Ensure token display is restored (in case of early cancellation)
        if self._token_tracker:
            self._token_tracker.show()

    async def _mount_message(self, widget: Static) -> None:
        """Mount a message widget to the messages area.

        Args:
            widget: The message widget to mount
        """
        try:
            messages = self.query_one("#messages", Container)
            await messages.mount(widget)
            # Scroll to bottom
            chat = self.query_one("#chat", VerticalScroll)
            chat.scroll_end(animate=False)
        except NoMatches:
            pass

    async def _clear_messages(self) -> None:
        """Clear the messages area."""
        try:
            messages = self.query_one("#messages", Container)
            await messages.remove_children()
        except NoMatches:
            # Widget not found - can happen during shutdown
            pass

    def action_quit_or_interrupt(self) -> None:
        """Handle Ctrl+C - interrupt agent, reject approval, or quit on double press.

        Priority order:
        1. If agent is running, interrupt it (preserve input)
        2. If approval menu is active, reject it
        3. If double press (quit_pending), quit
        4. Otherwise show quit hint
        """
        # If agent is running, interrupt it
        if self._agent_running and self._agent_worker:
            self._agent_worker.cancel()
            self._quit_pending = False
            return

        # If approval menu is active, reject it
        if self._pending_approval_widget:
            self._pending_approval_widget.action_select_reject()
            self._quit_pending = False
            return

        # Double Ctrl+C to quit
        if self._quit_pending:
            self.exit()
        else:
            self._quit_pending = True
            self.notify("Press Ctrl+C again to quit", timeout=3)

    def action_interrupt(self) -> None:
        """Handle escape key - interrupt agent or reject approval.

        This is the primary way to stop a running agent.
        """
        # If agent is running, interrupt it
        if self._agent_running and self._agent_worker:
            self._agent_worker.cancel()
            return

        # If approval menu is active, reject it
        if self._pending_approval_widget:
            self._pending_approval_widget.action_select_reject()

    def action_quit_app(self) -> None:
        """Handle quit action (Ctrl+D)."""
        self.exit()

    def action_toggle_auto_approve(self) -> None:
        """Toggle auto-approve mode."""
        self._auto_approve = not self._auto_approve
        if self._status_bar:
            self._status_bar.set_auto_approve(enabled=self._auto_approve)
        if self._session_state:
            self._session_state.auto_approve = self._auto_approve

    def action_toggle_tool_output(self) -> None:
        """Toggle expand/collapse of the most recent tool output."""
        # Find all tool messages with output, get the most recent one
        try:
            tool_messages = list(self.query(ToolCallMessage))
            # Find ones with output, toggle the most recent
            for tool_msg in reversed(tool_messages):
                if tool_msg.has_output:
                    tool_msg.toggle_output()
                    return
        except Exception:
            pass

    # Approval menu action handlers (delegated from App-level bindings)
    # NOTE: These only activate when approval widget is pending AND input is not focused
    def action_approval_up(self) -> None:
        """Handle up arrow in approval menu."""
        # Only handle if approval is active (input handles its own up for history/completion)
        if self._pending_approval_widget and not self._is_input_focused():
            self._pending_approval_widget.action_move_up()

    def action_approval_down(self) -> None:
        """Handle down arrow in approval menu."""
        if self._pending_approval_widget and not self._is_input_focused():
            self._pending_approval_widget.action_move_down()

    def action_approval_select(self) -> None:
        """Handle enter in approval menu."""
        # Only handle if approval is active AND input is not focused
        if self._pending_approval_widget and not self._is_input_focused():
            self._pending_approval_widget.action_select()

    def _is_input_focused(self) -> bool:
        """Check if the chat input (or its text area) has focus."""
        if not self._chat_input:
            return False
        focused = self.focused
        if focused is None:
            return False
        # Check if focused widget is the text area inside chat input
        return focused.id == "chat-input" or focused in self._chat_input.walk_children()

    def action_approval_yes(self) -> None:
        """Handle yes/1 in approval menu."""
        if self._pending_approval_widget:
            self._pending_approval_widget.action_select_approve()

    def action_approval_no(self) -> None:
        """Handle no/2 in approval menu."""
        if self._pending_approval_widget:
            self._pending_approval_widget.action_select_reject()

    def action_approval_auto(self) -> None:
        """Handle auto/3 in approval menu."""
        if self._pending_approval_widget:
            self._pending_approval_widget.action_select_auto()

    def action_approval_escape(self) -> None:
        """Handle escape in approval menu - reject."""
        if self._pending_approval_widget:
            self._pending_approval_widget.action_select_reject()

    def on_click(self, _event: Click) -> None:
        """Handle clicks anywhere in the terminal to focus on the command line."""
        if not self._chat_input:
            return

        self.call_after_refresh(self._chat_input.focus_input)

    def on_mouse_up(self, event: MouseUp) -> None:  # noqa: ARG002
        """Copy selection to clipboard on mouse release."""
        copy_selection_to_clipboard(self)


async def run_textual_app(
    *,
    agent: Pregel | None = None,
    assistant_id: str | None = None,
    backend: Any = None,  # noqa: ANN401  # CompositeBackend
    auto_approve: bool = False,
    cwd: str | Path | None = None,
    thread_id: str | None = None,
    initial_prompt: str | None = None,
) -> None:
    """Run the Textual application.

    Args:
        agent: Pre-configured LangGraph agent (optional)
        assistant_id: Agent identifier for memory storage
        backend: Backend for file operations
        auto_approve: Whether to start with auto-approve enabled
        cwd: Current working directory to display
        thread_id: Optional thread ID for session persistence
        initial_prompt: Optional prompt to auto-submit when session starts
    """
    app = DeepAgentsApp(
        agent=agent,
        assistant_id=assistant_id,
        backend=backend,
        auto_approve=auto_approve,
        cwd=cwd,
        thread_id=thread_id,
        initial_prompt=initial_prompt,
    )
    await app.run_async()


if __name__ == "__main__":
    import asyncio

    asyncio.run(run_textual_app())
```

### clipboard.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/clipboard.py`

```python
"""Clipboard utilities for deepagents-cli."""

from __future__ import annotations

import base64
import os
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from textual.app import App

_PREVIEW_MAX_LENGTH = 40


def _copy_osc52(text: str) -> None:
    """Copy text using OSC 52 escape sequence (works over SSH/tmux)."""
    encoded = base64.b64encode(text.encode("utf-8")).decode("ascii")
    osc52_seq = f"\033]52;c;{encoded}\a"
    if os.environ.get("TMUX"):
        osc52_seq = f"\033Ptmux;\033{osc52_seq}\033\\"

    with open("/dev/tty", "w") as tty:
        tty.write(osc52_seq)
        tty.flush()


def _shorten_preview(texts: list[str]) -> str:
    """Shorten text for notification preview."""
    dense_text = "⏎".join(texts).replace("\n", "⏎")
    if len(dense_text) > _PREVIEW_MAX_LENGTH:
        return f"{dense_text[: _PREVIEW_MAX_LENGTH - 1]}…"
    return dense_text


def copy_selection_to_clipboard(app: App) -> None:
    """Copy selected text from app widgets to clipboard.

    This queries all widgets for their text_selection and copies
    any selected text to the system clipboard.
    """
    selected_texts = []

    for widget in app.query("*"):
        if not hasattr(widget, "text_selection") or not widget.text_selection:
            continue

        selection = widget.text_selection

        try:
            result = widget.get_selection(selection)
        except Exception:
            continue

        if not result:
            continue

        selected_text, _ = result
        if selected_text.strip():
            selected_texts.append(selected_text)

    if not selected_texts:
        return

    combined_text = "\n".join(selected_texts)

    # Try multiple clipboard methods
    copy_methods = [_copy_osc52, app.copy_to_clipboard]

    # Try pyperclip if available
    try:
        import pyperclip

        copy_methods.insert(1, pyperclip.copy)
    except ImportError:
        pass

    for copy_fn in copy_methods:
        try:
            copy_fn(combined_text)
            # Use markup=False to prevent copied text from being parsed as Rich markup
            app.notify(
                f'"{_shorten_preview(selected_texts)}" copied',
                severity="information",
                timeout=2,
                markup=False,
            )
            return
        except Exception:
            continue

    # If all methods fail, still notify but warn
    app.notify(
        "Failed to copy - no clipboard method available",
        severity="warning",
        timeout=3,
    )
```

### config.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/config.py`

```python
"""Configuration, constants, and model creation for the CLI."""

import os
import re
import sys
import uuid
from dataclasses import dataclass
from pathlib import Path

import dotenv
from rich.console import Console

from deepagents_cli._version import __version__

dotenv.load_dotenv()

# CRITICAL: Override LANGSMITH_PROJECT to route agent traces to separate project
# LangSmith reads LANGSMITH_PROJECT at invocation time, so we override it here
# and preserve the user's original value for shell commands
_deepagents_project = os.environ.get("DEEPAGENTS_LANGSMITH_PROJECT")
_original_langsmith_project = os.environ.get("LANGSMITH_PROJECT")
if _deepagents_project:
    # Override LANGSMITH_PROJECT for agent traces
    os.environ["LANGSMITH_PROJECT"] = _deepagents_project

# Now safe to import LangChain modules
from langchain_core.language_models import BaseChatModel

# Color scheme
COLORS = {
    "primary": "#10b981",
    "dim": "#6b7280",
    "user": "#ffffff",
    "agent": "#10b981",
    "thinking": "#34d399",
    "tool": "#fbbf24",
}

# ASCII art banner

DEEP_AGENTS_ASCII = f"""
 ██████╗  ███████╗ ███████╗ ██████╗
 ██╔══██╗ ██╔════╝ ██╔════╝ ██╔══██╗
 ██║  ██║ █████╗   █████╗   ██████╔╝
 ██║  ██║ ██╔══╝   ██╔══╝   ██╔═══╝
 ██████╔╝ ███████╗ ███████╗ ██║
 ╚═════╝  ╚══════╝ ╚══════╝ ╚═╝

  █████╗   ██████╗  ███████╗ ███╗   ██╗ ████████╗ ███████╗
 ██╔══██╗ ██╔════╝  ██╔════╝ ████╗  ██║ ╚══██╔══╝ ██╔════╝
 ███████║ ██║  ███╗ █████╗   ██╔██╗ ██║    ██║    ███████╗
 ██╔══██║ ██║   ██║ ██╔══╝   ██║╚██╗██║    ██║    ╚════██║
 ██║  ██║ ╚██████╔╝ ███████╗ ██║ ╚████║    ██║    ███████║
 ╚═╝  ╚═╝  ╚═════╝  ╚══════╝ ╚═╝  ╚═══╝    ╚═╝    ╚══════╝
                                              v{__version__}
"""

# Interactive commands
COMMANDS = {
    "clear": "Clear screen and reset conversation",
    "help": "Show help information",
    "tokens": "Show token usage for current session",
    "quit": "Exit the CLI",
    "exit": "Exit the CLI",
}


# Maximum argument length for display
MAX_ARG_LENGTH = 150

# Agent configuration
config = {"recursion_limit": 1000}

# Rich console instance
console = Console(highlight=False)


def _find_project_root(start_path: Path | None = None) -> Path | None:
    """Find the project root by looking for .git directory.

    Walks up the directory tree from start_path (or cwd) looking for a .git
    directory, which indicates the project root.

    Args:
        start_path: Directory to start searching from. Defaults to current working directory.

    Returns:
        Path to the project root if found, None otherwise.
    """
    current = Path(start_path or Path.cwd()).resolve()

    # Walk up the directory tree
    for parent in [current, *list(current.parents)]:
        git_dir = parent / ".git"
        if git_dir.exists():
            return parent

    return None


def _find_project_agent_md(project_root: Path) -> list[Path]:
    """Find project-specific AGENTS.md file(s).

    Checks two locations and returns ALL that exist:
    1. project_root/.deepagents/AGENTS.md
    2. project_root/AGENTS.md

    Both files will be loaded and combined if both exist.

    Args:
        project_root: Path to the project root directory.

    Returns:
        List of paths to project AGENTS.md files (may contain 0, 1, or 2 paths).
    """
    paths = []

    # Check .deepagents/AGENTS.md (preferred)
    deepagents_md = project_root / ".deepagents" / "AGENTS.md"
    if deepagents_md.exists():
        paths.append(deepagents_md)

    # Check root AGENTS.md (fallback, but also include if both exist)
    root_md = project_root / "AGENTS.md"
    if root_md.exists():
        paths.append(root_md)

    return paths


@dataclass
class Settings:
    """Global settings and environment detection for deepagents-cli.

    This class is initialized once at startup and provides access to:
    - Available models and API keys
    - Current project information
    - Tool availability (e.g., Tavily)
    - File system paths

    Attributes:
        project_root: Current project root directory (if in a git project)

        openai_api_key: OpenAI API key if available
        anthropic_api_key: Anthropic API key if available
        tavily_api_key: Tavily API key if available
        deepagents_langchain_project: LangSmith project name for deepagents agent tracing
        user_langchain_project: Original LANGSMITH_PROJECT from environment (for user code)
    """

    # API keys
    openai_api_key: str | None
    anthropic_api_key: str | None
    google_api_key: str | None
    tavily_api_key: str | None

    # LangSmith configuration
    deepagents_langchain_project: str | None  # For deepagents agent tracing
    user_langchain_project: str | None  # Original LANGSMITH_PROJECT for user code

    # Model configuration
    model_name: str | None = None  # Currently active model name
    model_provider: str | None = None  # Provider (openai, anthropic, google)

    # Project information
    project_root: Path | None = None

    @classmethod
    def from_environment(cls, *, start_path: Path | None = None) -> "Settings":
        """Create settings by detecting the current environment.

        Args:
            start_path: Directory to start project detection from (defaults to cwd)

        Returns:
            Settings instance with detected configuration
        """
        # Detect API keys
        openai_key = os.environ.get("OPENAI_API_KEY")
        anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
        google_key = os.environ.get("GOOGLE_API_KEY")
        tavily_key = os.environ.get("TAVILY_API_KEY")

        # Detect LangSmith configuration
        # DEEPAGENTS_LANGSMITH_PROJECT: Project for deepagents agent tracing
        # user_langchain_project: User's ORIGINAL LANGSMITH_PROJECT (before override)
        # Note: LANGSMITH_PROJECT was already overridden at module import time (above)
        # so we use the saved original value, not the current os.environ value
        deepagents_langchain_project = os.environ.get("DEEPAGENTS_LANGSMITH_PROJECT")
        user_langchain_project = _original_langsmith_project  # Use saved original!

        # Detect project
        project_root = _find_project_root(start_path)

        return cls(
            openai_api_key=openai_key,
            anthropic_api_key=anthropic_key,
            google_api_key=google_key,
            tavily_api_key=tavily_key,
            deepagents_langchain_project=deepagents_langchain_project,
            user_langchain_project=user_langchain_project,
            project_root=project_root,
        )

    @property
    def has_openai(self) -> bool:
        """Check if OpenAI API key is configured."""
        return self.openai_api_key is not None

    @property
    def has_anthropic(self) -> bool:
        """Check if Anthropic API key is configured."""
        return self.anthropic_api_key is not None

    @property
    def has_google(self) -> bool:
        """Check if Google API key is configured."""
        return self.google_api_key is not None

    @property
    def has_tavily(self) -> bool:
        """Check if Tavily API key is configured."""
        return self.tavily_api_key is not None

    @property
    def has_deepagents_langchain_project(self) -> bool:
        """Check if deepagents LangChain project name is configured."""
        return self.deepagents_langchain_project is not None

    @property
    def has_project(self) -> bool:
        """Check if currently in a git project."""
        return self.project_root is not None

    @property
    def user_deepagents_dir(self) -> Path:
        """Get the base user-level .deepagents directory.

        Returns:
            Path to ~/.deepagents
        """
        return Path.home() / ".deepagents"

    def get_user_agent_md_path(self, agent_name: str) -> Path:
        """Get user-level AGENTS.md path for a specific agent.

        Returns path regardless of whether the file exists.

        Args:
            agent_name: Name of the agent

        Returns:
            Path to ~/.deepagents/{agent_name}/AGENTS.md
        """
        return Path.home() / ".deepagents" / agent_name / "AGENTS.md"

    def get_project_agent_md_path(self) -> Path | None:
        """Get project-level AGENTS.md path.

        Returns path regardless of whether the file exists.

        Returns:
            Path to {project_root}/.deepagents/AGENTS.md, or None if not in a project
        """
        if not self.project_root:
            return None
        return self.project_root / ".deepagents" / "AGENTS.md"

    @staticmethod
    def _is_valid_agent_name(agent_name: str) -> bool:
        """Validate prevent invalid filesystem paths and security issues."""
        if not agent_name or not agent_name.strip():
            return False
        # Allow only alphanumeric, hyphens, underscores, and whitespace
        return bool(re.match(r"^[a-zA-Z0-9_\-\s]+$", agent_name))

    def get_agent_dir(self, agent_name: str) -> Path:
        """Get the global agent directory path.

        Args:
            agent_name: Name of the agent

        Returns:
            Path to ~/.deepagents/{agent_name}

        Raises:
            ValueError: If the agent name contains invalid characters.
        """
        if not self._is_valid_agent_name(agent_name):
            msg = (
                f"Invalid agent name: {agent_name!r}. "
                "Agent names can only contain letters, numbers, hyphens, underscores, and spaces."
            )
            raise ValueError(msg)
        return Path.home() / ".deepagents" / agent_name

    def ensure_agent_dir(self, agent_name: str) -> Path:
        """Ensure the global agent directory exists and return its path.

        Args:
            agent_name: Name of the agent

        Returns:
            Path to ~/.deepagents/{agent_name}
        """
        # get_agent_dir() validates the name and raises ValueError if invalid.
        agent_dir = self.get_agent_dir(agent_name)
        agent_dir.mkdir(parents=True, exist_ok=True)
        return agent_dir

    def ensure_project_deepagents_dir(self) -> Path | None:
        """Ensure the project .deepagents directory exists and return its path.

        Returns:
            Path to project .deepagents directory, or None if not in a project
        """
        if not self.project_root:
            return None

        project_deepagents_dir = self.project_root / ".deepagents"
        project_deepagents_dir.mkdir(parents=True, exist_ok=True)
        return project_deepagents_dir

    def get_user_skills_dir(self, agent_name: str) -> Path:
        """Get user-level skills directory path for a specific agent.

        Args:
            agent_name: Name of the agent

        Returns:
            Path to ~/.deepagents/{agent_name}/skills/
        """
        return self.get_agent_dir(agent_name) / "skills"

    def ensure_user_skills_dir(self, agent_name: str) -> Path:
        """Ensure user-level skills directory exists and return its path.

        Args:
            agent_name: Name of the agent

        Returns:
            Path to ~/.deepagents/{agent_name}/skills/
        """
        skills_dir = self.get_user_skills_dir(agent_name)
        skills_dir.mkdir(parents=True, exist_ok=True)
        return skills_dir

    def get_project_skills_dir(self) -> Path | None:
        """Get project-level skills directory path.

        Returns:
            Path to {project_root}/.deepagents/skills/, or None if not in a project
        """
        if not self.project_root:
            return None
        return self.project_root / ".deepagents" / "skills"

    def ensure_project_skills_dir(self) -> Path | None:
        """Ensure project-level skills directory exists and return its path.

        Returns:
            Path to {project_root}/.deepagents/skills/, or None if not in a project
        """
        if not self.project_root:
            return None
        skills_dir = self.get_project_skills_dir()
        skills_dir.mkdir(parents=True, exist_ok=True)
        return skills_dir


# Global settings instance (initialized once)
settings = Settings.from_environment()


class SessionState:
    """Holds mutable session state (auto-approve mode, etc)."""

    def __init__(self, auto_approve: bool = False, no_splash: bool = False) -> None:
        self.auto_approve = auto_approve
        self.no_splash = no_splash
        self.exit_hint_until: float | None = None
        self.exit_hint_handle = None
        self.thread_id = str(uuid.uuid4())

    def toggle_auto_approve(self) -> bool:
        """Toggle auto-approve and return new state."""
        self.auto_approve = not self.auto_approve
        return self.auto_approve


def get_default_coding_instructions() -> str:
    """Get the default coding agent instructions.

    These are the immutable base instructions that cannot be modified by the agent.
    Long-term memory (AGENTS.md) is handled separately by the middleware.
    """
    default_prompt_path = Path(__file__).parent / "default_agent_prompt.md"
    return default_prompt_path.read_text()


def _detect_provider(model_name: str) -> str | None:
    """Auto-detect provider from model name.

    Args:
        model_name: Model name to detect provider from

    Returns:
        Provider name (openai, anthropic, google) or None if it cannot be detected
    """
    model_lower = model_name.lower()
    if any(x in model_lower for x in ["gpt", "o1", "o3"]):
        return "openai"
    if "claude" in model_lower:
        return "anthropic"
    if "gemini" in model_lower:
        return "google"
    return None


def create_model(model_name_override: str | None = None) -> BaseChatModel:
    """Create the appropriate model based on available API keys.

    Uses the global settings instance to determine which model to create.

    Args:
        model_name_override: Optional model name to use instead of environment variable

    Returns:
        ChatModel instance (OpenAI, Anthropic, or Google)

    Raises:
        SystemExit if no API key is configured or model provider can't be determined
    """
    # Determine provider and model
    if model_name_override:
        # Use provided model, auto-detect provider
        provider = _detect_provider(model_name_override)
        if not provider:
            console.print(
                f"[bold red]Error:[/bold red] Could not detect provider from model name: {model_name_override}"
            )
            console.print("\nSupported model name patterns:")
            console.print("  - OpenAI: gpt-*, o1-*, o3-*")
            console.print("  - Anthropic: claude-*")
            console.print("  - Google: gemini-*")
            sys.exit(1)

        # Check if API key for detected provider is available
        if provider == "openai" and not settings.has_openai:
            console.print(
                f"[bold red]Error:[/bold red] Model '{model_name_override}' requires OPENAI_API_KEY"
            )
            sys.exit(1)
        elif provider == "anthropic" and not settings.has_anthropic:
            console.print(
                f"[bold red]Error:[/bold red] Model '{model_name_override}' requires ANTHROPIC_API_KEY"
            )
            sys.exit(1)
        elif provider == "google" and not settings.has_google:
            console.print(
                f"[bold red]Error:[/bold red] Model '{model_name_override}' requires GOOGLE_API_KEY"
            )
            sys.exit(1)

        model_name = model_name_override
    # Use environment variable defaults, detect provider by API key priority
    elif settings.has_openai:
        provider = "openai"
        model_name = os.environ.get("OPENAI_MODEL", "gpt-5-mini")
    elif settings.has_anthropic:
        provider = "anthropic"
        model_name = os.environ.get("ANTHROPIC_MODEL", "claude-sonnet-4-5-20250929")
    elif settings.has_google:
        provider = "google"
        model_name = os.environ.get("GOOGLE_MODEL", "gemini-3-pro-preview")
    else:
        console.print("[bold red]Error:[/bold red] No API key configured.")
        console.print("\nPlease set one of the following environment variables:")
        console.print("  - OPENAI_API_KEY     (for OpenAI models like gpt-5-mini)")
        console.print("  - ANTHROPIC_API_KEY  (for Claude models)")
        console.print("  - GOOGLE_API_KEY     (for Google Gemini models)")
        console.print("\nExample:")
        console.print("  export OPENAI_API_KEY=your_api_key_here")
        console.print("\nOr add it to your .env file.")
        sys.exit(1)

    # Store model info in settings for display
    settings.model_name = model_name
    settings.model_provider = provider

    # Create and return the model
    if provider == "openai":
        from langchain_openai import ChatOpenAI

        return ChatOpenAI(model=model_name)
    if provider == "anthropic":
        from langchain_anthropic import ChatAnthropic

        return ChatAnthropic(
            model_name=model_name,
            max_tokens=20_000,  # type: ignore[arg-type]
        )
    if provider == "google":
        from langchain_google_genai import ChatGoogleGenerativeAI

        return ChatGoogleGenerativeAI(
            model=model_name,
            temperature=0,
            max_tokens=None,
        )
```
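
For orientation, here is a minimal usage sketch of this module's entry points. It assumes the file is importable as `deepagents_cli.config` and that at least one provider API key is exported; the model names are only illustrative.

```python
from pathlib import Path

from deepagents_cli.config import Settings, _detect_provider, create_model

# Provider detection is purely name-based, so it works without any API key.
assert _detect_provider("gpt-5-mini") == "openai"
assert _detect_provider("claude-sonnet-4-5-20250929") == "anthropic"
assert _detect_provider("some-unknown-model") is None

# Settings snapshots the environment once; from_environment() walks up from
# start_path looking for a .git directory to find the project root.
settings = Settings.from_environment(start_path=Path.cwd())
if settings.has_project:
    print(f"Project root: {settings.project_root}")

# create_model() calls sys.exit(1) when no key is configured, so guard first.
if settings.has_openai or settings.has_anthropic or settings.has_google:
    model = create_model()  # provider and model chosen by API-key priority
```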

### file_ops.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/file_ops.py`

```python
"""Helpers for tracking file operations and computing diffs for CLI display."""

from __future__ import annotations

import difflib
from dataclasses import dataclass, field
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal

from deepagents.backends.utils import perform_string_replacement

from deepagents_cli.config import settings

if TYPE_CHECKING:
    from deepagents.backends.protocol import BACKEND_TYPES

FileOpStatus = Literal["pending", "success", "error"]


@dataclass
class ApprovalPreview:
    """Data used to render HITL previews."""

    title: str
    details: list[str]
    diff: str | None = None
    diff_title: str | None = None
    error: str | None = None


def _safe_read(path: Path) -> str | None:
    """Read file content, returning None on failure."""
    try:
        return path.read_text()
    except (OSError, UnicodeDecodeError):
        return None


def _count_lines(text: str) -> int:
    """Count lines in text, treating empty strings as zero lines."""
    if not text:
        return 0
    return len(text.splitlines())


def compute_unified_diff(
    before: str,
    after: str,
    display_path: str,
    *,
    max_lines: int | None = 800,
    context_lines: int = 3,
) -> str | None:
    """Compute a unified diff between before and after content.

    Args:
        before: Original content
        after: New content
        display_path: Path for display in diff headers
        max_lines: Maximum number of diff lines (None for unlimited)
        context_lines: Number of context lines around changes (default 3)

    Returns:
        Unified diff string or None if no changes
    """
    before_lines = before.splitlines()
    after_lines = after.splitlines()
    diff_lines = list(
        difflib.unified_diff(
            before_lines,
            after_lines,
            fromfile=f"{display_path} (before)",
            tofile=f"{display_path} (after)",
            lineterm="",
            n=context_lines,
        )
    )
    if not diff_lines:
        return None
    if max_lines is not None and len(diff_lines) > max_lines:
        truncated = diff_lines[: max_lines - 1]
        truncated.append("...")
        return "\n".join(truncated)
    return "\n".join(diff_lines)


@dataclass
class FileOpMetrics:
    """Line and byte level metrics for a file operation."""

    lines_read: int = 0
    start_line: int | None = None
    end_line: int | None = None
    lines_written: int = 0
    lines_added: int = 0
    lines_removed: int = 0
    bytes_written: int = 0


@dataclass
class FileOperationRecord:
    """Track a single filesystem tool call."""

    tool_name: str
    display_path: str
    physical_path: Path | None
    tool_call_id: str | None
    args: dict[str, Any] = field(default_factory=dict)
    status: FileOpStatus = "pending"
    error: str | None = None
    metrics: FileOpMetrics = field(default_factory=FileOpMetrics)
    diff: str | None = None
    before_content: str | None = None
    after_content: str | None = None
    read_output: str | None = None
    hitl_approved: bool = False


def resolve_physical_path(path_str: str | None, assistant_id: str | None) -> Path | None:
    """Convert a virtual/relative path to a physical filesystem path."""
    if not path_str:
        return None
    try:
        if assistant_id and path_str.startswith("/memories/"):
            agent_dir = settings.get_agent_dir(assistant_id)
            suffix = path_str.removeprefix("/memories/").lstrip("/")
            return (agent_dir / suffix).resolve()
        path = Path(path_str)
        if path.is_absolute():
            return path
        return (Path.cwd() / path).resolve()
    except (OSError, ValueError):
        return None


def format_display_path(path_str: str | None) -> str:
    """Format a path for display."""
    if not path_str:
        return "(unknown)"
    try:
        path = Path(path_str)
        if path.is_absolute():
            return path.name or str(path)
        return str(path)
    except (OSError, ValueError):
        return str(path_str)


def build_approval_preview(
    tool_name: str,
    args: dict[str, Any],
    assistant_id: str | None,
) -> ApprovalPreview | None:
    """Collect summary info and diff for HITL approvals."""
    path_str = str(args.get("file_path") or args.get("path") or "")
    display_path = format_display_path(path_str)
    physical_path = resolve_physical_path(path_str, assistant_id)

    if tool_name == "write_file":
        content = str(args.get("content", ""))
        before = _safe_read(physical_path) if physical_path and physical_path.exists() else ""
        after = content
        diff = compute_unified_diff(before or "", after, display_path, max_lines=100)
        additions = 0
        if diff:
            additions = sum(
                1
                for line in diff.splitlines()
                if line.startswith("+") and not line.startswith("+++")
            )
        total_lines = _count_lines(after)
        details = [
            f"File: {path_str}",
            "Action: Create new file" + (" (overwrites existing content)" if before else ""),
            f"Lines to write: {additions or total_lines}",
        ]
        return ApprovalPreview(
            title=f"Write {display_path}",
            details=details,
            diff=diff,
            diff_title=f"Diff {display_path}",
        )

    if tool_name == "edit_file":
        if physical_path is None:
            return ApprovalPreview(
                title=f"Update {display_path}",
                details=[f"File: {path_str}", "Action: Replace text"],
                error="Unable to resolve file path.",
            )
        before = _safe_read(physical_path)
        if before is None:
            return ApprovalPreview(
                title=f"Update {display_path}",
                details=[f"File: {path_str}", "Action: Replace text"],
                error="Unable to read current file contents.",
            )
        old_string = str(args.get("old_string", ""))
        new_string = str(args.get("new_string", ""))
        replace_all = bool(args.get("replace_all", False))
        replacement = perform_string_replacement(before, old_string, new_string, replace_all)
        if isinstance(replacement, str):
            return ApprovalPreview(
                title=f"Update {display_path}",
                details=[f"File: {path_str}", "Action: Replace text"],
                error=replacement,
            )
        after, occurrences = replacement
        diff = compute_unified_diff(before, after, display_path, max_lines=None)
        additions = 0
        deletions = 0
        if diff:
            additions = sum(
                1
                for line in diff.splitlines()
                if line.startswith("+") and not line.startswith("+++")
            )
            deletions = sum(
                1
                for line in diff.splitlines()
                if line.startswith("-") and not line.startswith("---")
            )
        details = [
            f"File: {path_str}",
            f"Action: Replace text ({'all occurrences' if replace_all else 'single occurrence'})",
            f"Occurrences matched: {occurrences}",
            f"Lines changed: +{additions} / -{deletions}",
        ]
        return ApprovalPreview(
            title=f"Update {display_path}",
            details=details,
            diff=diff,
            diff_title=f"Diff {display_path}",
        )

    return None


class FileOpTracker:
    """Collect file operation metrics during a CLI interaction."""

    def __init__(self, *, assistant_id: str | None, backend: BACKEND_TYPES | None = None) -> None:
        """Initialize the tracker."""
        self.assistant_id = assistant_id
        self.backend = backend
        self.active: dict[str | None, FileOperationRecord] = {}
        self.completed: list[FileOperationRecord] = []

    def start_operation(
        self, tool_name: str, args: dict[str, Any], tool_call_id: str | None
    ) -> None:
        if tool_name not in {"read_file", "write_file", "edit_file"}:
            return
        path_str = str(args.get("file_path") or args.get("path") or "")
        display_path = format_display_path(path_str)
        record = FileOperationRecord(
            tool_name=tool_name,
            display_path=display_path,
            physical_path=resolve_physical_path(path_str, self.assistant_id),
            tool_call_id=tool_call_id,
            args=args,
        )
        if tool_name in {"write_file", "edit_file"}:
            if self.backend and path_str:
                try:
                    responses = self.backend.download_files([path_str])
                    if (
                        responses
                        and responses[0].content is not None
                        and responses[0].error is None
                    ):
                        record.before_content = responses[0].content.decode("utf-8")
                    else:
                        record.before_content = ""
                except Exception:
                    record.before_content = ""
            elif record.physical_path:
                record.before_content = _safe_read(record.physical_path) or ""
        self.active[tool_call_id] = record

    def update_args(self, tool_call_id: str, args: dict[str, Any]) -> None:
        """Update arguments for an active operation and retry capturing before_content."""
        record = self.active.get(tool_call_id)
        if not record:
            return

        record.args.update(args)

        # If we haven't captured before_content yet, try again now that we might have the path
        if record.before_content is None and record.tool_name in {"write_file", "edit_file"}:
            path_str = str(record.args.get("file_path") or record.args.get("path") or "")
            if path_str:
                record.display_path = format_display_path(path_str)
                record.physical_path = resolve_physical_path(path_str, self.assistant_id)
                if self.backend:
                    try:
                        responses = self.backend.download_files([path_str])
                        if (
                            responses
                            and responses[0].content is not None
                            and responses[0].error is None
                        ):
                            record.before_content = responses[0].content.decode("utf-8")
                        else:
                            record.before_content = ""
                    except Exception:
                        record.before_content = ""
                elif record.physical_path:
                    record.before_content = _safe_read(record.physical_path) or ""

    def complete_with_message(self, tool_message: Any) -> FileOperationRecord | None:
        tool_call_id = getattr(tool_message, "tool_call_id", None)
        record = self.active.get(tool_call_id)
        if record is None:
            return None

        content = tool_message.content
        if isinstance(content, list):
            # Some tool messages may return list segments; join them for analysis.
            content_text = "\n".join(str(item) for item in content)
        else:
            content_text = str(content) if content is not None else ""

        status = getattr(tool_message, "status", "success")
        if status != "success" or content_text.lower().startswith("error"):
            record.status = "error"
            record.error = content_text
            self._finalize(record)
            return record

        record.status = "success"

        if record.tool_name == "read_file":
            record.read_output = content_text
            lines = _count_lines(content_text)
            record.metrics.lines_read = lines
            offset = record.args.get("offset")
            limit = record.args.get("limit")
            if isinstance(offset, int):
                if offset > lines:
                    offset = 0
                record.metrics.start_line = offset + 1
                if lines:
                    record.metrics.end_line = offset + lines
            elif lines:
                record.metrics.start_line = 1
                record.metrics.end_line = lines
            if isinstance(limit, int) and lines > limit:
                record.metrics.end_line = (record.metrics.start_line or 1) + limit - 1
        else:
            # For write/edit operations, read back from backend (or local filesystem)
            self._populate_after_content(record)
            if record.after_content is None:
                record.status = "error"
                record.error = "Could not read updated file content."
                self._finalize(record)
                return record
            record.metrics.lines_written = _count_lines(record.after_content)
            before_lines = _count_lines(record.before_content or "")
            diff = compute_unified_diff(
                record.before_content or "",
                record.after_content,
                record.display_path,
                max_lines=100,
            )
            record.diff = diff
            if diff:
                additions = sum(
                    1
                    for line in diff.splitlines()
                    if line.startswith("+") and not line.startswith("+++")
                )
                deletions = sum(
                    1
                    for line in diff.splitlines()
                    if line.startswith("-") and not line.startswith("---")
                )
                record.metrics.lines_added = additions
                record.metrics.lines_removed = deletions
            elif record.tool_name == "write_file" and (record.before_content or "") == "":
                record.metrics.lines_added = record.metrics.lines_written
            record.metrics.bytes_written = len(record.after_content.encode("utf-8"))
            if record.diff is None and (record.before_content or "") != record.after_content:
                record.diff = compute_unified_diff(
                    record.before_content or "",
                    record.after_content,
                    record.display_path,
                    max_lines=100,
                )
            if record.diff is None and before_lines != record.metrics.lines_written:
                record.metrics.lines_added = max(record.metrics.lines_written - before_lines, 0)

        self._finalize(record)
        return record

    def mark_hitl_approved(self, tool_name: str, args: dict[str, Any]) -> None:
        """Mark operations matching tool_name and file_path as HIL-approved."""
        file_path = args.get("file_path") or args.get("path")
        if not file_path:
            return

        # Mark all active records that match
        for record in self.active.values():
            if record.tool_name == tool_name:
                record_path = record.args.get("file_path") or record.args.get("path")
                if record_path == file_path:
                    record.hitl_approved = True

    def _populate_after_content(self, record: FileOperationRecord) -> None:
        # Use backend if available (works for any BackendProtocol implementation)
        if self.backend:
            try:
                file_path = record.args.get("file_path") or record.args.get("path")
                if file_path:
                    responses = self.backend.download_files([file_path])
                    if (
                        responses
                        and responses[0].content is not None
                        and responses[0].error is None
                    ):
                        record.after_content = responses[0].content.decode("utf-8")
                    else:
                        record.after_content = None
                else:
                    record.after_content = None
            except Exception:
                record.after_content = None
        else:
            # Fallback: direct filesystem read when no backend provided
            if record.physical_path is None:
                record.after_content = None
                return
            record.after_content = _safe_read(record.physical_path)

    def _finalize(self, record: FileOperationRecord) -> None:
        self.completed.append(record)
        self.active.pop(record.tool_call_id, None)
```
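
A quick sketch of the diff helper above and the `+`/`-` counting convention the tracker uses. It assumes `deepagents_cli.file_ops` is importable; the file contents are illustrative.

```python
from deepagents_cli.file_ops import compute_unified_diff

before = "alpha\nbeta\ngamma\n"
after = "alpha\nBETA\ngamma\ndelta\n"

# Returns None when there are no changes; truncates with "..." past max_lines.
diff = compute_unified_diff(before, after, "notes.txt", max_lines=100)
print(diff)  # headers read "notes.txt (before)" / "notes.txt (after)"

# The metrics code counts +/- lines while skipping the "+++" / "---" headers:
additions = sum(
    1 for line in diff.splitlines()
    if line.startswith("+") and not line.startswith("+++")
)
deletions = sum(
    1 for line in diff.splitlines()
    if line.startswith("-") and not line.startswith("---")
)
print(additions, deletions)  # 2 1 for the inputs above
```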

### image_utils.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/image_utils.py`

```python
"""Utilities for handling image paste from clipboard."""

import base64
import io
import os
import subprocess
import sys
import tempfile
from dataclasses import dataclass

from PIL import Image


@dataclass
class ImageData:
    """Represents a pasted image with its base64 encoding."""

    base64_data: str
    format: str  # "png", "jpeg", etc.
    placeholder: str  # Display text like "[image 1]"

    def to_message_content(self) -> dict:
        """Convert to LangChain message content format.

        Returns:
            Dict with type and image_url for multimodal messages
        """
        return {
            "type": "image_url",
            "image_url": {"url": f"data:image/{self.format};base64,{self.base64_data}"},
        }


def get_clipboard_image() -> ImageData | None:
    """Attempt to read an image from the system clipboard.

    Supports macOS via `pngpaste` or `osascript`.

    Returns:
        ImageData if an image is found, None otherwise
    """
    if sys.platform == "darwin":
        return _get_macos_clipboard_image()
    # Linux/Windows support could be added here
    return None


def _get_macos_clipboard_image() -> ImageData | None:
    """Get clipboard image on macOS using pngpaste or osascript.

    First tries pngpaste (faster if installed), then falls back to osascript.

    Returns:
        ImageData if an image is found, None otherwise
    """
    # Try pngpaste first (fast if installed)
    try:
        result = subprocess.run(
            ["pngpaste", "-"],
            capture_output=True,
            check=False,
            timeout=2,
        )
        if result.returncode == 0 and result.stdout:
            # Successfully got PNG data
            try:
                Image.open(io.BytesIO(result.stdout))  # Validate it's a real image
                base64_data = base64.b64encode(result.stdout).decode("utf-8")
                return ImageData(
                    base64_data=base64_data,
                    format="png",  # 'pngpaste -' always outputs PNG
                    placeholder="[image]",
                )
            except Exception:
                pass  # Invalid image data
    except (FileNotFoundError, subprocess.TimeoutExpired):
        pass  # pngpaste not installed or timed out

    # Fallback to osascript with temp file (built-in but slower)
    return _get_clipboard_via_osascript()


def _get_clipboard_via_osascript() -> ImageData | None:
    """Get clipboard image via osascript using a temp file.

    osascript outputs data in a special format that can't be captured as raw binary,
    so we write to a temp file instead.

    Returns:
        ImageData if an image is found, None otherwise
    """
    # Create a temp file for the image
    fd, temp_path = tempfile.mkstemp(suffix=".png")
    os.close(fd)

    try:
        # First check if clipboard has PNG data
        check_result = subprocess.run(
            ["osascript", "-e", "clipboard info"],
            capture_output=True,
            check=False,
            timeout=2,
            text=True,
        )

        if check_result.returncode != 0:
            return None

        # Check for PNG or TIFF in clipboard info
        clipboard_info = check_result.stdout.lower()
        if "pngf" not in clipboard_info and "tiff" not in clipboard_info:
            return None

        # Try to get PNG first, fall back to TIFF
        if "pngf" in clipboard_info:
            get_script = f"""
            set pngData to the clipboard as «class PNGf»
            set theFile to open for access POSIX file "{temp_path}" with write permission
            write pngData to theFile
            close access theFile
            return "success"
            """
        else:
            get_script = f"""
            set tiffData to the clipboard as TIFF picture
            set theFile to open for access POSIX file "{temp_path}" with write permission
            write tiffData to theFile
            close access theFile
            return "success"
            """

        result = subprocess.run(
            ["osascript", "-e", get_script],
            capture_output=True,
            check=False,
            timeout=3,
            text=True,
        )

        if result.returncode != 0 or "success" not in result.stdout:
            return None

        # Check if file was created and has content
        if not os.path.exists(temp_path) or os.path.getsize(temp_path) == 0:
            return None

        # Read and validate the image
        with open(temp_path, "rb") as f:
            image_data = f.read()

        try:
            image = Image.open(io.BytesIO(image_data))
            # Convert to PNG if it's not already (e.g., if we got TIFF)
            buffer = io.BytesIO()
            image.save(buffer, format="PNG")
            buffer.seek(0)
            base64_data = base64.b64encode(buffer.getvalue()).decode("utf-8")

            return ImageData(
                base64_data=base64_data,
                format="png",
                placeholder="[image]",
            )
        except Exception:
            return None

    except (subprocess.TimeoutExpired, OSError):
        return None
    finally:
        # Clean up temp file
        try:
            os.unlink(temp_path)
        except OSError:
            pass


def encode_image_to_base64(image_bytes: bytes) -> str:
    """Encode image bytes to base64 string.

    Args:
        image_bytes: Raw image bytes

    Returns:
        Base64-encoded string
    """
    return base64.b64encode(image_bytes).decode("utf-8")


def create_multimodal_content(text: str, images: list[ImageData]) -> list[dict]:
    """Create multimodal message content with text and images.

    Args:
        text: Text content of the message
        images: List of ImageData objects

    Returns:
        List of content blocks in LangChain format
    """
    content_blocks = []

    # Add text block
    if text.strip():
        content_blocks.append({"type": "text", "text": text})

    # Add image blocks
    for image in images:
        content_blocks.append(image.to_message_content())

    return content_blocks
```
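
For reference, a small sketch of how `ImageData` feeds a multimodal message. It assumes `deepagents_cli.image_utils` is importable; the base64 payload below is a placeholder, not a real image.

```python
from deepagents_cli.image_utils import ImageData, create_multimodal_content

img = ImageData(
    base64_data="iVBORw0KGgo=",  # placeholder bytes, not a decodable PNG
    format="png",
    placeholder="[image 1]",
)

blocks = create_multimodal_content("What does this screenshot show?", [img])
# blocks[0] == {"type": "text", "text": "What does this screenshot show?"}
# blocks[1] == {"type": "image_url",
#               "image_url": {"url": "data:image/png;base64,iVBORw0KGgo="}}
```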

### input.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/input.py`

```python
"""Input handling, completers, and prompt session for the CLI."""

import asyncio
import os
import re
import time
from collections.abc import Callable
from pathlib import Path

from prompt_toolkit import PromptSession
from prompt_toolkit.completion import (
    Completer,
    Completion,
    PathCompleter,
    merge_completers,
)
from prompt_toolkit.document import Document
from prompt_toolkit.enums import EditingMode
from prompt_toolkit.formatted_text import HTML
from prompt_toolkit.key_binding import KeyBindings

from .config import COLORS, COMMANDS, SessionState, console
from .image_utils import ImageData, get_clipboard_image

# Regex patterns for context-aware completion
AT_MENTION_RE = re.compile(r"@(?P<path>(?:[^\s@]|(?<=\\)\s)*)$")
SLASH_COMMAND_RE = re.compile(r"^/(?P<command>[a-z]*)$")

EXIT_CONFIRM_WINDOW = 3.0


class ImageTracker:
    """Track pasted images in the current conversation."""

    def __init__(self) -> None:
        self.images: list[ImageData] = []
        self.next_id = 1

    def add_image(self, image_data: ImageData) -> str:
        """Add an image and return its placeholder text.

        Args:
            image_data: The image data to track

        Returns:
            Placeholder string like "[image 1]"
        """
        placeholder = f"[image {self.next_id}]"
        image_data.placeholder = placeholder
        self.images.append(image_data)
        self.next_id += 1
        return placeholder

    def get_images(self) -> list[ImageData]:
        """Get all tracked images."""
        return self.images.copy()

    def clear(self) -> None:
        """Clear all tracked images and reset counter."""
        self.images.clear()
        self.next_id = 1


class FilePathCompleter(Completer):
    """Activate filesystem completion only when cursor is after '@'."""

    def __init__(self) -> None:
        self.path_completer = PathCompleter(
            expanduser=True,
            min_input_len=0,
            only_directories=False,
        )

    def get_completions(self, document, complete_event):
        """Get file path completions when @ is detected."""
        text = document.text_before_cursor

        # Use regex to detect @path pattern at end of line
        m = AT_MENTION_RE.search(text)
        if not m:
            return  # Not in an @path context

        path_fragment = m.group("path")

        # Unescape the path for PathCompleter (it doesn't understand escape sequences)
        unescaped_fragment = path_fragment.replace("\\ ", " ")

        # Strip trailing backslash if present (user is in the process of typing an escape)
        unescaped_fragment = unescaped_fragment.removesuffix("\\")

        # Create temporary document for the unescaped path fragment
        temp_doc = Document(text=unescaped_fragment, cursor_position=len(unescaped_fragment))

        # Get completions from PathCompleter and use its start_position
        # PathCompleter returns suffix text with start_position=0 (insert at cursor)
        for comp in self.path_completer.get_completions(temp_doc, complete_event):
            # Add trailing / for directories so users can continue navigating
            completed_path = Path(unescaped_fragment + comp.text).expanduser()
            # Re-escape spaces in the completion text for the command line
            completion_text = comp.text.replace(" ", "\\ ")
            if completed_path.is_dir() and not completion_text.endswith("/"):
                completion_text += "/"

            yield Completion(
                text=completion_text,
                start_position=comp.start_position,  # Use PathCompleter's position (usually 0)
                display=comp.display,
                display_meta=comp.display_meta,
            )


class CommandCompleter(Completer):
    """Activate command completion only when line starts with '/'."""

    def get_completions(self, document, _complete_event):
        """Get command completions when / is at the start."""
        text = document.text_before_cursor

        # Use regex to detect /command pattern at start of line
        m = SLASH_COMMAND_RE.match(text)
        if not m:
            return  # Not in a /command context

        command_fragment = m.group("command")

        # Match commands that start with the fragment (case-insensitive)
        for cmd_name, cmd_desc in COMMANDS.items():
            if cmd_name.startswith(command_fragment.lower()):
                yield Completion(
                    text=cmd_name,
                    start_position=-len(command_fragment),  # Negative offset replaces the typed fragment in the original document
                    display=cmd_name,
                    display_meta=cmd_desc,
                )


def parse_file_mentions(text: str) -> tuple[str, list[Path]]:
    """Extract @file mentions and return cleaned text with resolved file paths."""
    pattern = r"@((?:[^\s@]|(?<=\\)\s)+)"  # Match @filename, allowing escaped spaces
    matches = re.findall(pattern, text)

    files = []
    for match in matches:
        # Remove escape characters
        clean_path = match.replace("\\ ", " ")
        path = Path(clean_path).expanduser()

        # Try to resolve relative to cwd
        if not path.is_absolute():
            path = Path.cwd() / path

        try:
            path = path.resolve()
            if path.exists() and path.is_file():
                files.append(path)
            else:
                console.print(f"[yellow]Warning: File not found: {match}[/yellow]")
        except Exception as e:
            console.print(f"[yellow]Warning: Invalid path {match}: {e}[/yellow]")

    return text, files


def parse_image_placeholders(text: str) -> tuple[str, int]:
    """Count image placeholders in text.

    Args:
        text: Input text potentially containing [image] or [image N] placeholders

    Returns:
        Tuple of (text, count) where count is the number of image placeholders found
    """
    # Match [image] or [image N] patterns
    pattern = r"\[image(?:\s+\d+)?\]"
    matches = re.findall(pattern, text, re.IGNORECASE)
    return text, len(matches)


def get_bottom_toolbar(
    session_state: SessionState, session_ref: dict
) -> Callable[[], list[tuple[str, str]]]:
    """Return toolbar function that shows auto-approve status and BASH MODE."""

    def toolbar() -> list[tuple[str, str]]:
        parts = []

        # Check if we're in BASH mode (input starts with !)
        try:
            session = session_ref.get("session")
            if session:
                current_text = session.default_buffer.text
                if current_text.startswith("!"):
                    parts.append(("bg:#ff1493 fg:#ffffff bold", " BASH MODE "))
                    parts.append(("", " | "))
        except (AttributeError, TypeError):
            # Silently ignore - toolbar is non-critical and called frequently
            pass

        # Base status message
        if session_state.auto_approve:
            base_msg = "auto-accept ON (CTRL+T to toggle)"
            base_class = "class:toolbar-green"
        else:
            base_msg = "manual accept (CTRL+T to toggle)"
            base_class = "class:toolbar-orange"

        parts.append((base_class, base_msg))

        # Show exit confirmation hint if active
        hint_until = session_state.exit_hint_until
        if hint_until is not None:
            now = time.monotonic()
            if now < hint_until:
                parts.append(("", " | "))
                parts.append(("class:toolbar-exit", " Ctrl+C again to exit "))
            else:
                session_state.exit_hint_until = None

        return parts

    return toolbar


def create_prompt_session(
    _assistant_id: str, session_state: SessionState, image_tracker: ImageTracker | None = None
) -> PromptSession:
    """Create a configured PromptSession with all features."""
    # Set default editor if not already set
    if "EDITOR" not in os.environ:
        os.environ["EDITOR"] = "nano"

    # Create key bindings
    kb = KeyBindings()

    @kb.add("c-c")
    def _(event) -> None:
        """Require double Ctrl+C within a short window to exit."""
        app = event.app
        now = time.monotonic()

        if session_state.exit_hint_until is not None and now < session_state.exit_hint_until:
            handle = session_state.exit_hint_handle
            if handle:
                handle.cancel()
                session_state.exit_hint_handle = None
            session_state.exit_hint_until = None
            app.invalidate()
            app.exit(exception=KeyboardInterrupt())
            return

        session_state.exit_hint_until = now + EXIT_CONFIRM_WINDOW

        handle = session_state.exit_hint_handle
        if handle:
            handle.cancel()

        loop = asyncio.get_running_loop()
        app_ref = app

        def clear_hint() -> None:
            if (
                session_state.exit_hint_until is not None
                and time.monotonic() >= session_state.exit_hint_until
            ):
                session_state.exit_hint_until = None
                session_state.exit_hint_handle = None
                app_ref.invalidate()

        session_state.exit_hint_handle = loop.call_later(EXIT_CONFIRM_WINDOW, clear_hint)

        app.invalidate()

    # Bind Ctrl+T to toggle auto-approve
    @kb.add("c-t")
    def _(event) -> None:
        """Toggle auto-approve mode."""
        session_state.toggle_auto_approve()
        # Force UI refresh to update toolbar
        event.app.invalidate()

    # Custom paste handler to detect images
    if image_tracker:
        from prompt_toolkit.keys import Keys

        def _handle_paste_with_image_check(event, pasted_text: str = "") -> None:
            """Check clipboard for image, otherwise insert pasted text."""
            # Try to get an image from clipboard
            clipboard_image = get_clipboard_image()

            if clipboard_image:
                # Found an image! Add it to tracker and insert placeholder
                placeholder = image_tracker.add_image(clipboard_image)
                # Insert placeholder (no confirmation message)
                event.current_buffer.insert_text(placeholder)
            elif pasted_text:
                # No image, insert the pasted text
                event.current_buffer.insert_text(pasted_text)
            else:
                # Fallback: try to get text from prompt_toolkit clipboard
                clipboard_data = event.app.clipboard.get_data()
                if clipboard_data and clipboard_data.text:
                    event.current_buffer.insert_text(clipboard_data.text)

        @kb.add(Keys.BracketedPaste)
        def _(event) -> None:
            """Handle bracketed paste (Cmd+V on macOS) - check for images first."""
            # Bracketed paste provides the pasted text in event.data
            pasted_text = event.data if hasattr(event, "data") else ""
            _handle_paste_with_image_check(event, pasted_text)

        @kb.add("c-v")
        def _(event) -> None:
            """Handle Ctrl+V paste - check for images first."""
            _handle_paste_with_image_check(event)

    # Bind regular Enter to submit (intuitive behavior)
    @kb.add("enter")
    def _(event) -> None:
        """Enter submits the input, unless completion menu is active."""
        buffer = event.current_buffer

        # If completion menu is showing, apply the current completion
        if buffer.complete_state:
            # Get the current completion (the highlighted one)
            current_completion = buffer.complete_state.current_completion

            # If no completion is selected (user hasn't navigated), select and apply the first one
            if not current_completion and buffer.complete_state.completions:
                # Move to the first completion
                buffer.complete_next()
                # Now apply it
                buffer.apply_completion(buffer.complete_state.current_completion)
            elif current_completion:
                # Apply the already-selected completion
                buffer.apply_completion(current_completion)
            else:
                # No completions available, close menu
                buffer.complete_state = None
        # Don't submit if buffer is empty or only whitespace
        elif buffer.text.strip():
            # Normal submit
            buffer.validate_and_handle()
        # If the buffer is empty or whitespace-only, do nothing (don't submit)

    # Alt+Enter for newlines (press ESC then Enter, or Option+Enter on Mac)
    @kb.add("escape", "enter")
    def _(event) -> None:
        """Alt+Enter inserts a newline for multi-line input."""
        event.current_buffer.insert_text("\n")

    # Ctrl+E to open in external editor
    @kb.add("c-e")
    def _(event) -> None:
        """Open the current input in an external editor (nano by default)."""
        event.current_buffer.open_in_editor()

    # Backspace handler to retrigger completions and delete image tags as units
    @kb.add("backspace")
    def _(event) -> None:
        """Handle backspace: delete image tags as single unit, retrigger completion."""
        buffer = event.current_buffer
        text_before = buffer.document.text_before_cursor

        # Check if cursor is right after an image tag like [image 1] or [image 12]
        image_tag_pattern = r"\[image \d+\]$"
        match = re.search(image_tag_pattern, text_before)

        if match and image_tracker:
            # Delete the entire tag
            tag_length = len(match.group(0))
            buffer.delete_before_cursor(count=tag_length)

            # Remove the image from tracker and reset counter
            tag_text = match.group(0)
            image_num_match = re.search(r"\d+", tag_text)
            if image_num_match:
                image_num = int(image_num_match.group(0))
                # Remove image at index (1-based to 0-based)
                if 0 < image_num <= len(image_tracker.images):
                    image_tracker.images.pop(image_num - 1)
                    # Reset counter to next available number
                    image_tracker.next_id = len(image_tracker.images) + 1
        else:
            # Normal backspace
            buffer.delete_before_cursor(count=1)

        # Check if we're in a completion context (@ or /)
        text = buffer.document.text_before_cursor
        if AT_MENTION_RE.search(text) or SLASH_COMMAND_RE.match(text):
            # Retrigger completion
            buffer.start_completion(select_first=False)

    from prompt_toolkit.styles import Style

    # Define styles for the toolbar with full-width background colors
    toolbar_style = Style.from_dict(
        {
            "bottom-toolbar": "noreverse",  # Disable default reverse video
            "toolbar-green": "bg:#10b981 #000000",  # Green for auto-accept ON
            "toolbar-orange": "bg:#f59e0b #000000",  # Orange for manual accept
            "toolbar-exit": "bg:#2563eb #ffffff",  # Blue for exit hint
        }
    )

    # Create session reference dict for toolbar to access session
    session_ref = {}

    # Create the session
    session = PromptSession(
        message=HTML(f'<style fg="{COLORS["user"]}">></style> '),
        multiline=True,  # Keep multiline support but Enter submits
        key_bindings=kb,
        completer=merge_completers([CommandCompleter(), FilePathCompleter()]),
        editing_mode=EditingMode.EMACS,
        complete_while_typing=True,  # Show completions as you type
        complete_in_thread=True,  # Async completion prevents menu freezing
        mouse_support=False,
        enable_open_in_editor=True,  # Allow Ctrl+X Ctrl+E to open external editor
        bottom_toolbar=get_bottom_toolbar(
            session_state, session_ref
        ),  # Persistent status bar at bottom
        style=toolbar_style,  # Apply toolbar styling
        reserve_space_for_menu=7,  # Reserve space for completion menu to show 5-6 results
    )

    # Store session reference for toolbar to access
    session_ref["session"] = session

    return session
```
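
A short sketch of the @-mention plumbing above. It assumes `deepagents_cli.input` is importable; whether a path lands in the returned list depends on it existing on disk.

```python
from deepagents_cli.input import AT_MENTION_RE, parse_file_mentions

# The file completer only activates when the cursor sits at the end of an
# @path fragment:
assert AT_MENTION_RE.search("please read @src/main.py") is not None
assert AT_MENTION_RE.search("no mention here") is None

# parse_file_mentions() returns the text unchanged plus resolved paths for
# mentions that exist; missing files print a warning instead.
text, files = parse_file_mentions("summarize @README.md please")
# -> text == "summarize @README.md please"
# -> files == [Path(".../README.md").resolve()] if the file exists, else []
```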

### local_context.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/local_context.py`

```python
"""Middleware for injecting local context into system prompt."""

from __future__ import annotations

import subprocess
from collections.abc import Awaitable, Callable
from pathlib import Path
from typing import NotRequired, TypedDict, cast

from langchain.agents.middleware.types import (
    AgentMiddleware,
    AgentState,
    ModelRequest,
    ModelResponse,
)
from langgraph.runtime import Runtime

# Directories to ignore in file listings and tree views
IGNORE_PATTERNS = frozenset(
    {
        ".git",
        "node_modules",
        ".venv",
        "__pycache__",
        ".pytest_cache",
        ".mypy_cache",
        ".ruff_cache",
        ".tox",
        ".coverage",
        ".eggs",
        "dist",
        "build",
    }
)


class LocalContextState(AgentState):
    """State for local context middleware."""

    local_context: NotRequired[str]
    """Formatted local context: git, cwd, files, tree."""


class LocalContextStateUpdate(TypedDict):
    """State update for local context middleware."""

    local_context: str
    """Formatted local context: git, cwd, files, tree."""


class LocalContextMiddleware(AgentMiddleware):
    """Middleware for injecting local context into system prompt.

    This middleware:
    1. Detects current git branch (if in a git repo)
    2. Checks if main/master branches exist locally
    3. Lists files in current directory (max 20)
    4. Shows directory tree structure (max 3 levels, 20 entries)
    5. Appends local context to system prompt
    """

    state_schema = LocalContextState

    def _get_git_info(self) -> dict[str, str | list[str]]:
        """Gather git state information.

        Returns:
            Dict with 'branch' (current branch) and 'main_branches' (list of main/master if they exist).
            Returns empty dict if not in git repo.
        """
        try:
            # Get current branch
            result = subprocess.run(
                ["git", "rev-parse", "--abbrev-ref", "HEAD"],
                capture_output=True,
                text=True,
                timeout=2,
                cwd=Path.cwd(),
                check=False,
            )
            if result.returncode != 0:
                return {}

            current_branch = result.stdout.strip()

            # Get local branches to check for main/master
            main_branches = []
            result = subprocess.run(
                ["git", "branch"],
                capture_output=True,
                text=True,
                timeout=2,
                cwd=Path.cwd(),
                check=False,
            )
            if result.returncode == 0:
                branches = set()
                for line in result.stdout.strip().split("\n"):
                    branch = line.strip().lstrip("*").strip()
                    if branch:
                        branches.add(branch)

                if "main" in branches:
                    main_branches.append("main")
                if "master" in branches:
                    main_branches.append("master")

            return {"branch": current_branch, "main_branches": main_branches}

        except (subprocess.TimeoutExpired, FileNotFoundError, OSError):
            return {}

    def _get_file_list(self, max_files: int = 20) -> list[str]:
        """Get list of files in current directory (non-recursive).

        Args:
            max_files: Maximum number of files to show (default 20).

        Returns:
            List of file paths (sorted), truncated to max_files.
        """
        cwd = Path.cwd()

        files = []
        try:
            for item in sorted(cwd.iterdir()):
                # Skip hidden files (except .deepagents)
                if item.name.startswith(".") and item.name != ".deepagents":
                    continue

                # Skip ignored patterns
                if item.name in IGNORE_PATTERNS:
                    continue

                # Add files and dirs
                if item.is_file():
                    files.append(item.name)
                elif item.is_dir():
                    files.append(f"{item.name}/")

                if len(files) >= max_files:
                    break

        except (OSError, PermissionError):
            return []

        return files

    def _get_directory_tree(self, max_depth: int = 3, max_entries: int = 20) -> str:
        """Get directory tree structure.

        Args:
            max_depth: Maximum depth to traverse (default 3).
            max_entries: Maximum total entries to show (default 20).

        Returns:
            Formatted tree string or empty if error.
        """
        cwd = Path.cwd()

        lines: list[str] = []
        entry_count = [0]  # Mutable for closure

        def _should_include(item: Path) -> bool:
            """Check if item should be included in tree."""
            # Skip hidden files (except .deepagents)
            if item.name.startswith(".") and item.name != ".deepagents":
                return False
            # Skip ignored patterns
            return item.name not in IGNORE_PATTERNS

        def _build_tree(path: Path, prefix: str = "", depth: int = 0) -> None:
            """Recursive tree builder."""
            if depth >= max_depth or entry_count[0] >= max_entries:
                return

            try:
                all_items = sorted(path.iterdir(), key=lambda p: (not p.is_dir(), p.name))
                # Pre-filter to get correct is_last determination
                items = [item for item in all_items if _should_include(item)]
            except (OSError, PermissionError):
                return

            for i, item in enumerate(items):
                if entry_count[0] >= max_entries:
                    lines.append(f"{prefix}... (truncated)")
                    return

                is_last = i == len(items) - 1
                connector = "└── " if is_last else "├── "

                display_name = f"{item.name}/" if item.is_dir() else item.name
                lines.append(f"{prefix}{connector}{display_name}")
                entry_count[0] += 1

                # Recurse into directories
                if item.is_dir() and depth + 1 < max_depth:
                    extension = "    " if is_last else "│   "
                    _build_tree(item, prefix + extension, depth + 1)

        try:
            lines.append(f"{cwd.name}/")
            _build_tree(cwd)
        except (OSError, PermissionError):
            return ""

        return "\n".join(lines)

    def _detect_package_manager(self) -> str | None:
        """Detect Python package manager in use.

        Checks for lock files and config files to determine the package manager.

        Uses priority order: `uv > poetry > pipenv > pip`. First match wins if multiple
        indicators are present.

        Returns:
            Package manager name (uv, poetry, pipenv, pip) or `None` if not detected.
        """
        cwd = Path.cwd()

        # Check for uv (uv.lock or pyproject.toml with [tool.uv])
        if (cwd / "uv.lock").exists():
            return "uv"

        # Check for poetry (poetry.lock or pyproject.toml with [tool.poetry])
        if (cwd / "poetry.lock").exists():
            return "poetry"

        # Check for pipenv
        if (cwd / "Pipfile.lock").exists() or (cwd / "Pipfile").exists():
            return "pipenv"

        # Check pyproject.toml for tool sections
        pyproject = cwd / "pyproject.toml"
        if pyproject.exists():
            try:
                content = pyproject.read_text()
                if "[tool.uv]" in content:
                    return "uv"
                if "[tool.poetry]" in content:
                    return "poetry"
                # Has pyproject.toml but no specific tool - likely pip/setuptools
                return "pip"
            except (OSError, PermissionError, UnicodeDecodeError):
                pass

        # Check for requirements.txt
        if (cwd / "requirements.txt").exists():
            return "pip"

        return None

    def _detect_node_package_manager(self) -> str | None:
        """Detect Node.js package manager in use.

        Uses priority order: `bun > pnpm > yarn > npm`.

        First match wins if multiple lock files are present.

        Returns:
            Package manager name (bun, pnpm, yarn, npm) or `None` if not detected.
        """
        cwd = Path.cwd()

        if (cwd / "bun.lockb").exists() or (cwd / "bun.lock").exists():
            return "bun"
        if (cwd / "pnpm-lock.yaml").exists():
            return "pnpm"
        if (cwd / "yarn.lock").exists():
            return "yarn"
        if (cwd / "package-lock.json").exists() or (cwd / "package.json").exists():
            return "npm"

        return None

    def _get_makefile_preview(self, max_lines: int = 20) -> str | None:
        """Get first N lines of `Makefile` if present.

        Args:
            max_lines: Maximum lines to show.

        Returns:
            `Makefile` preview or `None` if not found.
        """
        cwd = Path.cwd()
        makefile = cwd / "Makefile"

        if not makefile.exists():
            return None

        try:
            content = makefile.read_text()
            all_lines = content.split("\n")
            preview = "\n".join(all_lines[:max_lines])
            if len(all_lines) > max_lines:
                preview += "\n... (truncated)"
            return preview
        except (OSError, PermissionError, UnicodeDecodeError):
            return None

    def _detect_project_info(self) -> dict[str, str | bool | None]:
        """Detect project type, language, and structure.

        Returns:
            Dict with `language`, `is_monorepo`, `project_root`, `has_venv`, `has_node_modules`.
        """
        cwd = Path.cwd()
        info: dict[str, str | bool | None] = {
            "language": None,
            "is_monorepo": False,
            "project_root": None,
            "has_venv": False,
            "has_node_modules": False,
        }

        # Check for virtual environments
        info["has_venv"] = (cwd / ".venv").exists() or (cwd / "venv").exists()
        info["has_node_modules"] = (cwd / "node_modules").exists()

        # Detect primary language
        if (cwd / "pyproject.toml").exists() or (cwd / "setup.py").exists():
            info["language"] = "python"
        elif (cwd / "package.json").exists():
            info["language"] = "javascript/typescript"
        elif (cwd / "Cargo.toml").exists():
            info["language"] = "rust"
        elif (cwd / "go.mod").exists():
            info["language"] = "go"
        elif (cwd / "pom.xml").exists() or (cwd / "build.gradle").exists():
            info["language"] = "java"

        # Detect monorepo patterns
        # Check for common monorepo indicators
        monorepo_indicators = [
            (cwd / "lerna.json").exists(),
            (cwd / "pnpm-workspace.yaml").exists(),
            (cwd / "packages").is_dir(),
            (cwd / "libs").is_dir() and (cwd / "apps").is_dir(),
            (cwd / "workspaces").is_dir(),
        ]
        info["is_monorepo"] = any(monorepo_indicators)

        # Try to find project root (look for .git or pyproject.toml up the tree)
        try:
            result = subprocess.run(
                ["git", "rev-parse", "--show-toplevel"],
                capture_output=True,
                text=True,
                timeout=2,
                cwd=cwd,
                check=False,
            )
            if result.returncode == 0:
                info["project_root"] = result.stdout.strip()
        except (subprocess.TimeoutExpired, FileNotFoundError, OSError):
            pass

        return info

    def _detect_test_command(self) -> str | None:
        """Detect how to run tests based on project structure.

        Returns:
            Suggested test command or `None` if not detected.
        """
        cwd = Path.cwd()

        # Check Makefile for test target
        makefile = cwd / "Makefile"
        if makefile.exists():
            try:
                content = makefile.read_text()
                if "test:" in content or "tests:" in content:
                    return "make test"
            except (OSError, PermissionError, UnicodeDecodeError):
                pass

        # Python projects
        if (cwd / "pyproject.toml").exists():
            pyproject = cwd / "pyproject.toml"
            try:
                content = pyproject.read_text()
                if "[tool.pytest" in content or (cwd / "pytest.ini").exists():
                    return "pytest"
            except (OSError, PermissionError, UnicodeDecodeError):
                pass
            if (cwd / "tests").is_dir() or (cwd / "test").is_dir():
                return "pytest"

        # Node projects
        if (cwd / "package.json").exists():
            try:
                import json

                pkg = json.loads((cwd / "package.json").read_text())
                if "scripts" in pkg and "test" in pkg["scripts"]:
                    return "npm test"
            except (OSError, PermissionError, UnicodeDecodeError, json.JSONDecodeError):
                pass

        return None

    def before_agent(
        self,
        state: LocalContextState,
        runtime: Runtime,
    ) -> LocalContextStateUpdate | None:
        """Load local context before agent execution.

        Runs once at session start to preserve prompt caching.

        Args:
            state: Current agent state.
            runtime: Runtime context.

        Returns:
            Updated state with local_context populated, or None if already set.
        """
        # Only compute context on first interaction to preserve prompt caching
        if state.get("local_context"):
            return None

        cwd = Path.cwd()
        sections = ["## Local Context", ""]

        # Current directory
        sections.append(f"**Current Directory**: `{cwd}`")
        sections.append("")

        # Project info (language, monorepo, root, environments)
        project_info = self._detect_project_info()
        project_lines = []
        if project_info.get("language"):
            project_lines.append(f"Language: {project_info['language']}")
        if project_info.get("project_root") and str(project_info["project_root"]) != str(cwd):
            project_lines.append(f"Project root: `{project_info['project_root']}`")
        if project_info.get("is_monorepo"):
            project_lines.append("Monorepo: yes")
        env_indicators = []
        if project_info.get("has_venv"):
            env_indicators.append(".venv")
        if project_info.get("has_node_modules"):
            env_indicators.append("node_modules")
        if env_indicators:
            project_lines.append(f"Environments: {', '.join(env_indicators)}")
        if project_lines:
            sections.append("**Project**:")
            sections.extend(f"- {line}" for line in project_lines)
            sections.append("")

        # Package managers
        pkg_managers = []
        python_pkg = self._detect_package_manager()
        if python_pkg:
            pkg_managers.append(f"Python: {python_pkg}")
        node_pkg = self._detect_node_package_manager()
        if node_pkg:
            pkg_managers.append(f"Node: {node_pkg}")
        if pkg_managers:
            sections.append(f"**Package Manager**: {', '.join(pkg_managers)}")
            sections.append("")

        # Git info
        git_info = self._get_git_info()
        if git_info:
            git_text = f"**Git**: Current branch `{git_info['branch']}`"
            if git_info.get("main_branches"):
                main_branches = ", ".join(f"`{b}`" for b in git_info["main_branches"])
                git_text += f", main branch available: {main_branches}"
            sections.append(git_text)
            sections.append("")

        # Test command
        test_cmd = self._detect_test_command()
        if test_cmd:
            sections.append(f"**Run Tests**: `{test_cmd}`")
            sections.append("")

        # File list
        files = self._get_file_list()
        if files:
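            # Note: this raw count includes hidden/ignored entries that
            # _get_file_list skipped, so "more files" can overstate slightly.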
            total_items = len(list(Path.cwd().iterdir()))
            sections.append(f"**Files** ({len(files)} shown):")
            for file in files:
                sections.append(f"- {file}")
            if len(files) < total_items:
                remaining = total_items - len(files)
                sections.append(f"... ({remaining} more files)")
            sections.append("")

        # Directory tree
        tree = self._get_directory_tree()
        if tree:
            sections.append("**Tree** (3 levels):")
            sections.append("```text")
            sections.append(tree)
            sections.append("```")
            sections.append("")

        # Makefile preview
        makefile_preview = self._get_makefile_preview()
        if makefile_preview:
            sections.append("**Makefile** (first 20 lines):")
            sections.append("```makefile")
            sections.append(makefile_preview)
            sections.append("```")

        local_context = "\n".join(sections)
        return LocalContextStateUpdate(local_context=local_context)

    def _get_modified_request(self, request: ModelRequest) -> ModelRequest | None:
        """Get modified request with local context injected, or None if no context.

        Args:
            request: The original model request.

        Returns:
            Modified request with local context appended, or None if no local context.
        """
        state = cast("LocalContextState", request.state)
        local_context = state.get("local_context", "")

        if not local_context:
            return None

        # Append local context to system prompt
        system_prompt = request.system_prompt or ""
        new_prompt = system_prompt + "\n\n" + local_context

        return request.override(system_prompt=new_prompt)

    def wrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], ModelResponse],
    ) -> ModelResponse:
        """Inject local context into system prompt.

        Args:
            request: The model request being processed.
            handler: The handler function to call with the modified request.

        Returns:
            The model response from the handler.
        """
        modified_request = self._get_modified_request(request)
        return handler(modified_request if modified_request else request)

    async def awrap_model_call(
        self,
        request: ModelRequest,
        handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
    ) -> ModelResponse:
        """(async) Inject local context into system prompt.

        Args:
            request: The model request being processed.
            handler: The handler function to call with the modified request.

        Returns:
            The model response from the handler.
        """
        modified_request = self._get_modified_request(request)
        return await handler(modified_request if modified_request else request)


__all__ = ["LocalContextMiddleware"]
```
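
The two package-manager detectors above share one shape: probe for marker files in a fixed priority order and return on the first hit. A minimal standalone sketch of that pattern, assuming only the marker-file names shown above (the helper name and table are illustrative, not part of the module):

```python
from pathlib import Path

# Illustrative standalone version of the probe in _detect_node_package_manager:
# markers are checked in priority order (bun > pnpm > yarn > npm), first match wins.
NODE_MARKERS: list[tuple[str, tuple[str, ...]]] = [
    ("bun", ("bun.lockb", "bun.lock")),
    ("pnpm", ("pnpm-lock.yaml",)),
    ("yarn", ("yarn.lock",)),
    ("npm", ("package-lock.json", "package.json")),
]


def detect_node_pm(root: Path) -> str | None:
    """Return the first package manager whose marker file exists under root."""
    for name, markers in NODE_MARKERS:
        if any((root / m).exists() for m in markers):
            return name
    return None


if __name__ == "__main__":
    print(detect_node_pm(Path.cwd()))  # e.g. "npm" in a typical JS project
```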

### main.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/main.py`

```python
"""Main entry point and CLI loop for deepagents."""
# ruff: noqa: T201, E402, BLE001, PLR0912, PLR0915

# Suppress deprecation warnings from langchain_core (e.g., Pydantic V1 on Python 3.14+)
import warnings

warnings.filterwarnings("ignore", module="langchain_core._api.deprecation")

import argparse
import asyncio
import contextlib
import os
import sys
from pathlib import Path

# Suppress Pydantic v1 compatibility warnings from langchain on Python 3.14+
warnings.filterwarnings("ignore", message=".*Pydantic V1.*", category=UserWarning)

from rich.text import Text

from deepagents_cli._version import __version__

# Now safe to import agent (which imports LangChain modules)
from deepagents_cli.agent import create_cli_agent, list_agents, reset_agent

# CRITICAL: Import config FIRST to set LANGSMITH_PROJECT before LangChain loads
from deepagents_cli.config import (
    console,
    create_model,
    settings,
)
from deepagents_cli.integrations.sandbox_factory import create_sandbox
from deepagents_cli.sessions import (
    delete_thread_command,
    generate_thread_id,
    get_checkpointer,
    get_most_recent,
    get_thread_agent,
    list_threads_command,
    thread_exists,
)
from deepagents_cli.skills import execute_skills_command, setup_skills_parser
from deepagents_cli.tools import fetch_url, http_request, web_search
from deepagents_cli.ui import show_help


def check_cli_dependencies() -> None:
    """Check if CLI optional dependencies are installed."""
    missing = []

    try:
        import requests  # noqa: F401
    except ImportError:
        missing.append("requests")

    try:
        import dotenv  # noqa: F401
    except ImportError:
        missing.append("python-dotenv")

    try:
        import tavily  # noqa: F401
    except ImportError:
        missing.append("tavily-python")

    try:
        import textual  # noqa: F401
    except ImportError:
        missing.append("textual")

    if missing:
        print("\n❌ Missing required CLI dependencies!")
        print("\nThe following packages are required to use the deepagents CLI:")
        for pkg in missing:
            print(f"  - {pkg}")
        print("\nPlease install them with:")
        print("  pip install deepagents[cli]")
        print("\nOr install all dependencies:")
        print("  pip install 'deepagents[cli]'")
        sys.exit(1)


def parse_args() -> argparse.Namespace:
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(
        description="DeepAgents - AI Coding Assistant",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        add_help=False,
    )
    parser.add_argument(
        "--version",
        action="version",
        version=f"deepagents {__version__}",
    )

    subparsers = parser.add_subparsers(dest="command", help="Command to run")

    # List command
    subparsers.add_parser("list", help="List all available agents")

    # Help command
    subparsers.add_parser("help", help="Show help information")

    # Reset command
    reset_parser = subparsers.add_parser("reset", help="Reset an agent")
    reset_parser.add_argument("--agent", required=True, help="Name of agent to reset")
    reset_parser.add_argument(
        "--target", dest="source_agent", help="Copy prompt from another agent"
    )

    # Skills command - setup delegated to skills module
    setup_skills_parser(subparsers)

    # Threads command
    threads_parser = subparsers.add_parser("threads", help="Manage conversation threads")
    threads_sub = threads_parser.add_subparsers(dest="threads_command")

    # threads list
    threads_list = threads_sub.add_parser("list", help="List threads")
    threads_list.add_argument(
        "--agent", default=None, help="Filter by agent name (default: show all)"
    )
    threads_list.add_argument("--limit", type=int, default=20, help="Max threads (default: 20)")

    # threads delete
    threads_delete = threads_sub.add_parser("delete", help="Delete a thread")
    threads_delete.add_argument("thread_id", help="Thread ID to delete")

    # Default interactive mode
    parser.add_argument(
        "--agent",
        default="agent",
        help="Agent identifier for separate memory stores (default: agent).",
    )

    # Thread resume argument - matches PR #638: -r for most recent, -r <ID> for specific
    parser.add_argument(
        "-r",
        "--resume",
        dest="resume_thread",
        nargs="?",
        const="__MOST_RECENT__",
        default=None,
        help="Resume thread: -r for most recent, -r <ID> for specific thread",
    )

    # Initial prompt - auto-submit when session starts
    parser.add_argument(
        "-m",
        "--message",
        dest="initial_prompt",
        help="Initial prompt to auto-submit when session starts",
    )

    parser.add_argument(
        "--model",
        help="Model to use (e.g., claude-sonnet-4-5-20250929, gpt-5-mini). "
        "Provider is auto-detected from model name.",
    )
    parser.add_argument(
        "--auto-approve",
        action="store_true",
        help="Auto-approve tool usage without prompting (disables human-in-the-loop)",
    )
    parser.add_argument(
        "--sandbox",
        choices=["none", "modal", "daytona", "runloop"],
        default="none",
        help="Remote sandbox for code execution (default: none - local only)",
    )
    parser.add_argument(
        "--sandbox-id",
        help="Existing sandbox ID to reuse (skips creation and cleanup)",
    )
    parser.add_argument(
        "--sandbox-setup",
        help="Path to setup script to run in sandbox after creation",
    )
    return parser.parse_args()


async def run_textual_cli_async(
    assistant_id: str,
    *,
    auto_approve: bool = False,
    sandbox_type: str = "none",
    sandbox_id: str | None = None,
    model_name: str | None = None,
    thread_id: str | None = None,
    is_resumed: bool = False,
    initial_prompt: str | None = None,
) -> None:
    """Run the Textual CLI interface (async version).

    Args:
        assistant_id: Agent identifier for memory storage
        auto_approve: Whether to auto-approve tool usage
        sandbox_type: Type of sandbox ("none", "modal", "daytona", "runloop")
        sandbox_id: Optional existing sandbox ID to reuse
        model_name: Optional model name to use
        thread_id: Thread ID to use (new or resumed)
        is_resumed: Whether this is a resumed session
        initial_prompt: Optional prompt to auto-submit when session starts
    """
    from deepagents_cli.app import run_textual_app

    model = create_model(model_name)

    # Show thread info
    if is_resumed:
        console.print(f"[green]Resuming thread:[/green] {thread_id}")
    else:
        console.print(f"[dim]Thread: {thread_id}[/dim]")

    # Use async context manager for checkpointer
    async with get_checkpointer() as checkpointer:
        # Create agent with conditional tools
        tools = [http_request, fetch_url]
        if settings.has_tavily:
            tools.append(web_search)

        # Handle sandbox mode
        sandbox_backend = None
        sandbox_cm = None

        if sandbox_type != "none":
            try:
                # Create sandbox context manager but keep it open
                sandbox_cm = create_sandbox(sandbox_type, sandbox_id=sandbox_id)
                sandbox_backend = sandbox_cm.__enter__()
            except (ImportError, ValueError, RuntimeError, NotImplementedError) as e:
                console.print()
                console.print("[red]❌ Sandbox creation failed[/red]")
                console.print(Text(str(e), style="dim"))
                sys.exit(1)

        try:
            agent, composite_backend = create_cli_agent(
                model=model,
                assistant_id=assistant_id,
                tools=tools,
                sandbox=sandbox_backend,
                sandbox_type=sandbox_type if sandbox_type != "none" else None,
                auto_approve=auto_approve,
                checkpointer=checkpointer,
            )

            # Run Textual app
            await run_textual_app(
                agent=agent,
                assistant_id=assistant_id,
                backend=composite_backend,
                auto_approve=auto_approve,
                cwd=Path.cwd(),
                thread_id=thread_id,
                initial_prompt=initial_prompt,
            )
        except Exception as e:
            error_text = Text("❌ Failed to create agent: ", style="red")
            error_text.append(str(e))
            console.print(error_text)
            sys.exit(1)
        finally:
            # Clean up sandbox if we created one
            if sandbox_cm is not None:
                with contextlib.suppress(Exception):
                    sandbox_cm.__exit__(None, None, None)


def cli_main() -> None:
    """Entry point for console script."""
    # Fix for gRPC fork issue on macOS
    # https://github.com/grpc/grpc/issues/37642
    if sys.platform == "darwin":
        os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "0"

    # Note: LANGSMITH_PROJECT is already overridden in config.py (before LangChain imports)
    # This ensures agent traces → DEEPAGENTS_LANGSMITH_PROJECT
    # Shell commands → user's original LANGSMITH_PROJECT (via ShellMiddleware env)

    # Check dependencies first
    check_cli_dependencies()

    try:
        args = parse_args()

        if args.command == "help":
            show_help()
        elif args.command == "list":
            list_agents()
        elif args.command == "reset":
            reset_agent(args.agent, args.source_agent)
        elif args.command == "skills":
            execute_skills_command(args)
        elif args.command == "threads":
            if args.threads_command == "list":
                asyncio.run(
                    list_threads_command(
                        agent_name=getattr(args, "agent", None),
                        limit=getattr(args, "limit", 20),
                    )
                )
            elif args.threads_command == "delete":
                asyncio.run(delete_thread_command(args.thread_id))
            else:
                console.print("[yellow]Usage: deepagents threads <list|delete>[/yellow]")
        else:
            # Interactive mode - handle thread resume
            thread_id = None
            is_resumed = False

            if args.resume_thread == "__MOST_RECENT__":
                # -r (no ID): Get most recent thread
                # If --agent specified, filter by that agent; otherwise get most recent overall
                agent_filter = args.agent if args.agent != "agent" else None
                thread_id = asyncio.run(get_most_recent(agent_filter))
                if thread_id:
                    is_resumed = True
                    agent_name = asyncio.run(get_thread_agent(thread_id))
                    if agent_name:
                        args.agent = agent_name
                else:
                    if agent_filter:
                        msg = Text("No previous thread for '", style="yellow")
                        msg.append(args.agent)
                        msg.append("', starting new.", style="yellow")
                    else:
                        msg = Text("No previous threads, starting new.", style="yellow")
                    console.print(msg)

            elif args.resume_thread:
                # -r <ID>: Resume specific thread
                if asyncio.run(thread_exists(args.resume_thread)):
                    thread_id = args.resume_thread
                    is_resumed = True
                    if args.agent == "agent":
                        agent_name = asyncio.run(get_thread_agent(thread_id))
                        if agent_name:
                            args.agent = agent_name
                else:
                    error_msg = Text("Thread '", style="red")
                    error_msg.append(args.resume_thread)
                    error_msg.append("' not found.", style="red")
                    console.print(error_msg)
                    console.print(
                        "[dim]Use 'deepagents threads list' to see available threads.[/dim]"
                    )
                    sys.exit(1)

            # Generate new thread ID if not resuming
            if thread_id is None:
                thread_id = generate_thread_id()

            # Run Textual CLI
            asyncio.run(
                run_textual_cli_async(
                    assistant_id=args.agent,
                    auto_approve=args.auto_approve,
                    sandbox_type=args.sandbox,
                    sandbox_id=args.sandbox_id,
                    model_name=getattr(args, "model", None),
                    thread_id=thread_id,
                    is_resumed=is_resumed,
                    initial_prompt=getattr(args, "initial_prompt", None),
                )
            )
    except KeyboardInterrupt:
        # Clean exit on Ctrl+C - suppress ugly traceback
        console.print("\n\n[yellow]Interrupted[/yellow]")
        sys.exit(0)


if __name__ == "__main__":
    cli_main()
```
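
The `-r/--resume` flag above leans on an argparse idiom worth calling out: `nargs="?"` with a `const` sentinel distinguishes a bare `-r` (resume most recent) from `-r <ID>` (resume a specific thread) and from omitting the flag entirely. A minimal sketch of just that behavior (the sentinel value matches the one used in `parse_args`):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "-r",
    "--resume",
    nargs="?",                # flag may appear with or without a value
    const="__MOST_RECENT__",  # value when -r is given bare
    default=None,             # value when -r is absent
)

print(parser.parse_args([]).resume)                   # None -> new thread
print(parser.parse_args(["-r"]).resume)               # __MOST_RECENT__ -> resume latest
print(parser.parse_args(["-r", "ab12cd34"]).resume)   # ab12cd34 -> resume that thread
```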

### project_utils.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/project_utils.py`

```python
"""Utilities for project root detection and project-specific configuration."""

from pathlib import Path


def find_project_root(start_path: Path | None = None) -> Path | None:
    """Find the project root by looking for .git directory.

    Walks up the directory tree from start_path (or cwd) looking for a .git
    directory, which indicates the project root.

    Args:
        start_path: Directory to start searching from. Defaults to current working directory.

    Returns:
        Path to the project root if found, None otherwise.
    """
    current = Path(start_path or Path.cwd()).resolve()

    # Walk up the directory tree
    for parent in [current, *list(current.parents)]:
        git_dir = parent / ".git"
        if git_dir.exists():
            return parent

    return None


def find_project_agent_md(project_root: Path) -> list[Path]:
    """Find project-specific agent.md file(s).

    Checks two locations and returns ALL that exist:
    1. project_root/.deepagents/agent.md
    2. project_root/agent.md

    Both files will be loaded and combined if both exist.

    Args:
        project_root: Path to the project root directory.

    Returns:
        List of paths to project agent.md files (may contain 0, 1, or 2 paths).
    """
    paths = []

    # Check .deepagents/agent.md (preferred)
    deepagents_md = project_root / ".deepagents" / "agent.md"
    if deepagents_md.exists():
        paths.append(deepagents_md)

    # Check root agent.md (fallback, but also include if both exist)
    root_md = project_root / "agent.md"
    if root_md.exists():
        paths.append(root_md)

    return paths
```
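
A short usage sketch for these helpers, assuming the package is importable; the printed paths are illustrative:

```python
from deepagents_cli.project_utils import find_project_agent_md, find_project_root

root = find_project_root()
if root is None:
    print("Not inside a git project")
else:
    print(f"Project root: {root}")
    # .deepagents/agent.md is listed before a root-level agent.md when both exist
    for md in find_project_agent_md(root):
        print(f"Agent instructions: {md}")
```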

### sessions.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/sessions.py`

```python
"""Thread management using LangGraph's built-in checkpoint persistence."""

import uuid
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from datetime import datetime
from pathlib import Path

import aiosqlite
from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
from rich.table import Table

from deepagents_cli.config import COLORS, console

# Patch aiosqlite.Connection to add is_alive() method required by langgraph-checkpoint>=2.1.0
# See: https://github.com/langchain-ai/langgraph/issues/6583
if not hasattr(aiosqlite.Connection, "is_alive"):

    def _is_alive(self: aiosqlite.Connection) -> bool:
        """Check if the connection is still alive."""
        return self._connection is not None

    aiosqlite.Connection.is_alive = _is_alive


def _format_timestamp(iso_timestamp: str | None) -> str:
    """Format ISO timestamp for display (e.g., 'Dec 30, 6:10pm')."""
    if not iso_timestamp:
        return ""
    try:
        dt = datetime.fromisoformat(iso_timestamp).astimezone()
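        # %-I (hour without leading zero) is a glibc/BSD strftime extension
        # and is not supported by Windows' strftime.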
        return dt.strftime("%b %d, %-I:%M%p").lower()
    except (ValueError, TypeError):
        return ""


def get_db_path() -> Path:
    """Get path to global database."""
    db_dir = Path.home() / ".deepagents"
    db_dir.mkdir(parents=True, exist_ok=True)
    return db_dir / "sessions.db"


def generate_thread_id() -> str:
    """Generate a new 8-char hex thread ID."""
    return uuid.uuid4().hex[:8]


async def _table_exists(conn: aiosqlite.Connection, table: str) -> bool:
    """Check if a table exists in the database."""
    query = "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?"
    async with conn.execute(query, (table,)) as cursor:
        return await cursor.fetchone() is not None


async def list_threads(
    agent_name: str | None = None,
    limit: int = 20,
) -> list[dict]:
    """List threads from checkpoints table."""
    db_path = str(get_db_path())
    async with aiosqlite.connect(db_path, timeout=30.0) as conn:
        # Return empty if table doesn't exist yet (fresh install)
        if not await _table_exists(conn, "checkpoints"):
            return []

        if agent_name:
            query = """
                SELECT thread_id,
                       json_extract(metadata, '$.agent_name') as agent_name,
                       MAX(json_extract(metadata, '$.updated_at')) as updated_at
                FROM checkpoints
                WHERE json_extract(metadata, '$.agent_name') = ?
                GROUP BY thread_id
                ORDER BY updated_at DESC
                LIMIT ?
            """
            params: tuple = (agent_name, limit)
        else:
            query = """
                SELECT thread_id,
                       json_extract(metadata, '$.agent_name') as agent_name,
                       MAX(json_extract(metadata, '$.updated_at')) as updated_at
                FROM checkpoints
                GROUP BY thread_id
                ORDER BY updated_at DESC
                LIMIT ?
            """
            params = (limit,)

        async with conn.execute(query, params) as cursor:
            rows = await cursor.fetchall()
            return [{"thread_id": r[0], "agent_name": r[1], "updated_at": r[2]} for r in rows]


async def get_most_recent(agent_name: str | None = None) -> str | None:
    """Get most recent thread_id, optionally filtered by agent."""
    db_path = str(get_db_path())
    async with aiosqlite.connect(db_path, timeout=30.0) as conn:
        if not await _table_exists(conn, "checkpoints"):
            return None

        if agent_name:
            query = """
                SELECT thread_id FROM checkpoints
                WHERE json_extract(metadata, '$.agent_name') = ?
                ORDER BY checkpoint_id DESC
                LIMIT 1
            """
            params: tuple = (agent_name,)
        else:
            query = "SELECT thread_id FROM checkpoints ORDER BY checkpoint_id DESC LIMIT 1"
            params = ()

        async with conn.execute(query, params) as cursor:
            row = await cursor.fetchone()
            return row[0] if row else None


async def get_thread_agent(thread_id: str) -> str | None:
    """Get agent_name for a thread."""
    db_path = str(get_db_path())
    async with aiosqlite.connect(db_path, timeout=30.0) as conn:
        if not await _table_exists(conn, "checkpoints"):
            return None

        query = """
            SELECT json_extract(metadata, '$.agent_name')
            FROM checkpoints
            WHERE thread_id = ?
            LIMIT 1
        """
        async with conn.execute(query, (thread_id,)) as cursor:
            row = await cursor.fetchone()
            return row[0] if row else None


async def thread_exists(thread_id: str) -> bool:
    """Check if a thread exists in checkpoints."""
    db_path = str(get_db_path())
    async with aiosqlite.connect(db_path, timeout=30.0) as conn:
        if not await _table_exists(conn, "checkpoints"):
            return False

        query = "SELECT 1 FROM checkpoints WHERE thread_id = ? LIMIT 1"
        async with conn.execute(query, (thread_id,)) as cursor:
            row = await cursor.fetchone()
            return row is not None


async def delete_thread(thread_id: str) -> bool:
    """Delete thread checkpoints. Returns True if deleted."""
    db_path = str(get_db_path())
    async with aiosqlite.connect(db_path, timeout=30.0) as conn:
        if not await _table_exists(conn, "checkpoints"):
            return False

        cursor = await conn.execute("DELETE FROM checkpoints WHERE thread_id = ?", (thread_id,))
        deleted = cursor.rowcount > 0
        if await _table_exists(conn, "writes"):
            await conn.execute("DELETE FROM writes WHERE thread_id = ?", (thread_id,))
        await conn.commit()
        return deleted


@asynccontextmanager
async def get_checkpointer() -> AsyncIterator[AsyncSqliteSaver]:
    """Get AsyncSqliteSaver for the global database."""
    async with AsyncSqliteSaver.from_conn_string(str(get_db_path())) as checkpointer:
        yield checkpointer


async def list_threads_command(
    agent_name: str | None = None,
    limit: int = 20,
) -> None:
    """CLI handler for: deepagents threads list."""
    threads = await list_threads(agent_name, limit=limit)

    if not threads:
        if agent_name:
            console.print(f"[yellow]No threads found for agent '{agent_name}'.[/yellow]")
        else:
            console.print("[yellow]No threads found.[/yellow]")
        console.print("[dim]Start a conversation with: deepagents[/dim]")
        return

    title = f"Threads for '{agent_name}'" if agent_name else "All Threads"

    table = Table(title=title, show_header=True, header_style=f"bold {COLORS['primary']}")
    table.add_column("Thread ID", style="bold")
    table.add_column("Agent")
    table.add_column("Last Used", style="dim")

    for t in threads:
        table.add_row(
            t["thread_id"],
            t["agent_name"] or "unknown",
            _format_timestamp(t.get("updated_at")),
        )

    console.print()
    console.print(table)
    console.print()


async def delete_thread_command(thread_id: str) -> None:
    """CLI handler for: deepagents threads delete."""
    deleted = await delete_thread(thread_id)

    if deleted:
        console.print(f"[green]Thread '{thread_id}' deleted.[/green]")
    else:
        console.print(f"[red]Thread '{thread_id}' not found.[/red]")
```
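
All of the session helpers are coroutines, so callers outside an event loop wrap them in `asyncio.run` (as `cli_main` does). A minimal usage sketch; the thread ID queried at the end is illustrative:

```python
import asyncio

from deepagents_cli.sessions import get_most_recent, list_threads, thread_exists


async def main() -> None:
    # Most recently updated thread across all agents, or None on a fresh install
    print("latest:", await get_most_recent())

    for t in await list_threads(limit=5):
        print(t["thread_id"], t["agent_name"], t["updated_at"])

    print(await thread_exists("ab12cd34"))  # False unless that thread exists


asyncio.run(main())
```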

### shell.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/shell.py`

```python
"""Simplified middleware that exposes a basic shell tool to agents."""

from __future__ import annotations

import os
import subprocess
from typing import Any

from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.tools import ToolRuntime, tool
from langchain_core.messages import ToolMessage
from langchain_core.tools.base import ToolException


class ShellMiddleware(AgentMiddleware[AgentState, Any]):
    """Give basic shell access to agents via the shell.

    This shell will execute on the local machine and has NO safeguards except
    for the human in the loop safeguard provided by the CLI itself.
    """

    def __init__(
        self,
        *,
        workspace_root: str,
        timeout: float = 120.0,
        max_output_bytes: int = 100_000,
        env: dict[str, str] | None = None,
    ) -> None:
        """Initialize an instance of `ShellMiddleware`.

        Args:
            workspace_root: Working directory for shell commands.
            timeout: Maximum time in seconds to wait for command completion.
                Defaults to 120 seconds.
            max_output_bytes: Maximum number of bytes to capture from command output.
                Defaults to 100,000 bytes.
            env: Environment variables to pass to the subprocess. If None,
                uses the current process's environment. Defaults to None.
        """
        super().__init__()
        self._timeout = timeout
        self._max_output_bytes = max_output_bytes
        self._tool_name = "shell"
        self._env = env if env is not None else os.environ.copy()
        self._workspace_root = workspace_root

        # Build description with working directory information
        description = (
            f"Execute a shell command directly on the host. Commands will run in "
            f"the working directory: {workspace_root}. Each command runs in a fresh shell "
            f"environment with the current process's environment variables. Output is "
            f"truncated if it exceeds the configured byte limit, and commands are "
            f"terminated if they exceed the timeout."
        )

        @tool(self._tool_name, description=description)
        def shell_tool(
            command: str,
            runtime: ToolRuntime[None, AgentState],
        ) -> ToolMessage | str:
            """Execute a shell command.

            Args:
                command: The shell command to execute.
                runtime: The tool runtime context.
            """
            return self._run_shell_command(command, tool_call_id=runtime.tool_call_id)

        self._shell_tool = shell_tool
        self.tools = [self._shell_tool]

    def _run_shell_command(
        self,
        command: str,
        *,
        tool_call_id: str | None,
    ) -> ToolMessage | str:
        """Execute a shell command and return the result.

        Args:
            command: The shell command to execute.
            tool_call_id: The tool call ID for creating a ToolMessage.

        Returns:
            A ToolMessage with the command output or an error message.
        """
        if not command or not isinstance(command, str):
            msg = "Shell tool expects a non-empty command string."
            raise ToolException(msg)

        try:
            result = subprocess.run(
                command,
                check=False,
                shell=True,
                capture_output=True,
                text=True,
                timeout=self._timeout,
                env=self._env,
                cwd=self._workspace_root,
            )

            # Combine stdout and stderr
            output_parts = []
            if result.stdout:
                output_parts.append(result.stdout)
            if result.stderr:
                stderr_lines = result.stderr.strip().split("\n")
                for line in stderr_lines:
                    output_parts.append(f"[stderr] {line}")

            output = "\n".join(output_parts) if output_parts else "<no output>"

            # Truncate output if needed
            if len(output) > self._max_output_bytes:
                output = output[: self._max_output_bytes]
                output += f"\n\n... Output truncated at {self._max_output_bytes} bytes."

            # Add exit code info if non-zero
            if result.returncode != 0:
                output = f"{output.rstrip()}\n\nExit code: {result.returncode}"
                status = "error"
            else:
                status = "success"

        except subprocess.TimeoutExpired:
            output = f"Error: Command timed out after {self._timeout:.1f} seconds."
            status = "error"

        return ToolMessage(
            content=output,
            tool_call_id=tool_call_id,
            name=self._tool_name,
            status=status,
        )


__all__ = ["ShellMiddleware"]
```
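
The truncation and exit-code formatting are easiest to see by exercising the middleware directly; a minimal sketch that calls the private helper purely for illustration (it runs a real command on the host, POSIX shell assumed):

```python
from deepagents_cli.shell import ShellMiddleware

mw = ShellMiddleware(workspace_root=".", timeout=5.0, max_output_bytes=1_000)

# Returns a ToolMessage: stderr lines are prefixed with [stderr], and a
# non-zero exit code is appended with status="error".
msg = mw._run_shell_command("echo hello; echo oops >&2; exit 3", tool_call_id="demo")
print(msg.status)   # error
print(msg.content)  # hello / [stderr] oops / Exit code: 3
```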

### textual_adapter.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/textual_adapter.py`

```python
"""Textual UI adapter for agent execution."""
# ruff: noqa: PLR0912, PLR0915, ANN401, PLR2004, BLE001, TRY203
# This module has complex streaming logic ported from execution.py

from __future__ import annotations

import asyncio
import json
from datetime import UTC, datetime
from typing import TYPE_CHECKING, Any

from langchain.agents.middleware.human_in_the_loop import (
    ActionRequest,
    HITLRequest,
    HITLResponse,
)
from langchain_core.messages import HumanMessage, ToolMessage
from langgraph.types import Command, Interrupt
from pydantic import TypeAdapter, ValidationError

from deepagents_cli.file_ops import FileOpTracker
from deepagents_cli.image_utils import create_multimodal_content
from deepagents_cli.input import ImageTracker, parse_file_mentions
from deepagents_cli.ui import format_tool_display, format_tool_message_content
from deepagents_cli.widgets.messages import (
    AssistantMessage,
    DiffMessage,
    ErrorMessage,
    SystemMessage,
    ToolCallMessage,
)

if TYPE_CHECKING:
    from collections.abc import Callable

_HITL_REQUEST_ADAPTER = TypeAdapter(HITLRequest)


def _is_summarization_chunk(metadata: dict | None) -> bool:
    """Check if a message chunk is from summarization middleware.

    Args:
        metadata: The metadata dict from the stream chunk.

    Returns:
        Whether the chunk is from summarization and should be filtered.
    """
    if metadata is None:
        return False
    return metadata.get("lc_source") == "summarization"


class TextualUIAdapter:
    """Adapter for rendering agent output to Textual widgets.

    This adapter provides an abstraction layer between the agent execution
    and the Textual UI, allowing streaming output to be rendered as widgets.
    """

    def __init__(
        self,
        mount_message: Callable,
        update_status: Callable[[str], None],
        request_approval: Callable,  # async callable returning Future
        on_auto_approve_enabled: Callable[[], None] | None = None,
        scroll_to_bottom: Callable[[], None] | None = None,
    ) -> None:
        """Initialize the adapter.

        Args:
            mount_message: Async callable to mount a message widget
            update_status: Callable to update the status bar message
            request_approval: Callable that returns a Future for HITL approval
            on_auto_approve_enabled: Callback when auto-approve is enabled
            scroll_to_bottom: Callback to scroll chat to bottom
        """
        self._mount_message = mount_message
        self._update_status = update_status
        self._request_approval = request_approval
        self._on_auto_approve_enabled = on_auto_approve_enabled
        self._scroll_to_bottom = scroll_to_bottom

        # State tracking
        self._current_assistant_message: AssistantMessage | None = None
        self._current_tool_messages: dict[str, ToolCallMessage] = {}
        self._pending_text = ""
        self._token_tracker: Any = None

    def set_token_tracker(self, tracker: Any) -> None:
        """Set the token tracker for usage tracking."""
        self._token_tracker = tracker


async def execute_task_textual(
    user_input: str,
    agent: Any,
    assistant_id: str | None,
    session_state: Any,
    adapter: TextualUIAdapter,
    backend: Any = None,
    image_tracker: ImageTracker | None = None,
) -> None:
    """Execute a task with output directed to Textual UI.

    This is the Textual-compatible version of execute_task() that uses
    the TextualUIAdapter for all UI operations.

    Args:
        user_input: The user's input message
        agent: The LangGraph agent to execute
        assistant_id: The agent identifier
        session_state: Session state with auto_approve flag
        adapter: The TextualUIAdapter for UI operations
        backend: Optional backend for file operations
        image_tracker: Optional tracker for images
    """
    # Parse file mentions and inject content if any
    prompt_text, mentioned_files = parse_file_mentions(user_input)

    # Max file size to embed inline (256KB, matching mistral-vibe)
    # Larger files get a reference instead - use read_file tool to view them
    max_embed_bytes = 256 * 1024

    if mentioned_files:
        context_parts = [prompt_text, "\n\n## Referenced Files\n"]
        for file_path in mentioned_files:
            try:
                file_size = file_path.stat().st_size
                if file_size > max_embed_bytes:
                    # File too large - include reference instead of content
                    size_kb = file_size // 1024
                    context_parts.append(
                        f"\n### {file_path.name}\n"
                        f"Path: `{file_path}`\n"
                        f"Size: {size_kb}KB (too large to embed, use read_file tool to view)"
                    )
                else:
                    content = file_path.read_text()
                    context_parts.append(
                        f"\n### {file_path.name}\nPath: `{file_path}`\n```\n{content}\n```"
                    )
            except Exception as e:
                context_parts.append(f"\n### {file_path.name}\n[Error reading file: {e}]")
        final_input = "\n".join(context_parts)
    else:
        final_input = prompt_text

    # Include images in the message content
    images_to_send = []
    if image_tracker:
        images_to_send = image_tracker.get_images()
    if images_to_send:
        message_content = create_multimodal_content(final_input, images_to_send)
    else:
        message_content = final_input

    thread_id = session_state.thread_id
    config = {
        "configurable": {"thread_id": thread_id},
        "metadata": {
            "assistant_id": assistant_id,
            "agent_name": assistant_id,
            "updated_at": datetime.now(UTC).isoformat(),
        }
        if assistant_id
        else {},
    }

    captured_input_tokens = 0
    captured_output_tokens = 0

    # Update status to show thinking
    adapter._update_status("Agent is thinking...")

    # Hide token display during streaming (will be shown with accurate count at end)
    if adapter._token_tracker:
        adapter._token_tracker.hide()

    file_op_tracker = FileOpTracker(assistant_id=assistant_id, backend=backend)
    displayed_tool_ids: set[str] = set()
    tool_call_buffers: dict[str | int, dict] = {}

    # Track pending text and assistant messages PER NAMESPACE to avoid interleaving
    # when multiple subagents stream in parallel
    pending_text_by_namespace: dict[tuple, str] = {}
    assistant_message_by_namespace: dict[tuple, Any] = {}

    # Clear images from tracker after creating the message
    if image_tracker:
        image_tracker.clear()

    stream_input: dict | Command = {"messages": [{"role": "user", "content": message_content}]}

    try:
        while True:
            interrupt_occurred = False
            hitl_response: dict[str, HITLResponse] = {}
            suppress_resumed_output = False
            pending_interrupts: dict[str, HITLRequest] = {}

            async for chunk in agent.astream(
                stream_input,
                stream_mode=["messages", "updates"],
                subgraphs=True,
                config=config,
                durability="exit",
            ):
                if not isinstance(chunk, tuple) or len(chunk) != 3:
                    continue

                namespace, current_stream_mode, data = chunk

                # Convert namespace to hashable tuple for dict keys
                ns_key = tuple(namespace) if namespace else ()

                # Filter out subagent outputs - only show main agent (empty namespace)
                # Subagents run via Task tool and should only report back to the main agent
                is_main_agent = ns_key == ()

                # Handle UPDATES stream - for interrupts and todos
                if current_stream_mode == "updates":
                    if not isinstance(data, dict):
                        continue

                    # Check for interrupts
                    if "__interrupt__" in data:
                        interrupts: list[Interrupt] = data["__interrupt__"]
                        if interrupts:
                            for interrupt_obj in interrupts:
                                try:
                                    validated_request = _HITL_REQUEST_ADAPTER.validate_python(
                                        interrupt_obj.value
                                    )
                                    pending_interrupts[interrupt_obj.id] = validated_request
                                    interrupt_occurred = True
                                except ValidationError:
                                    raise

                    # Check for todo updates (not yet implemented in Textual UI)
                    chunk_data = next(iter(data.values())) if data else None
                    if chunk_data and isinstance(chunk_data, dict) and "todos" in chunk_data:
                        pass  # Future: render todo list widget

                # Handle MESSAGES stream - for content and tool calls
                elif current_stream_mode == "messages":
                    # Skip subagent outputs - only render main agent content in chat
                    if not is_main_agent:
                        continue

                    if not isinstance(data, tuple) or len(data) != 2:
                        continue

                    message, _metadata = data

                    # Filter out summarization LLM output & update status to reflect
                    if _is_summarization_chunk(_metadata):
                        adapter._update_status("Summarizing conversation...")
                        continue

                    if isinstance(message, HumanMessage):
                        content = message.text
                        # Flush pending text for this namespace
                        pending_text = pending_text_by_namespace.get(ns_key, "")
                        if content and pending_text:
                            await _flush_assistant_text_ns(
                                adapter, pending_text, ns_key, assistant_message_by_namespace
                            )
                            pending_text_by_namespace[ns_key] = ""
                        continue

                    if isinstance(message, ToolMessage):
                        tool_name = getattr(message, "name", "")
                        tool_status = getattr(message, "status", "success")
                        tool_content = format_tool_message_content(message.content)
                        record = file_op_tracker.complete_with_message(message)

                        adapter._update_status("Agent is thinking...")

                        # Update tool call status with output
                        tool_id = getattr(message, "tool_call_id", None)
                        if tool_id and tool_id in adapter._current_tool_messages:
                            tool_msg = adapter._current_tool_messages[tool_id]
                            output_str = str(tool_content) if tool_content else ""
                            if tool_status == "success":
                                tool_msg.set_success(output_str)
                            else:
                                tool_msg.set_error(output_str or "Error")
                            # Clean up - remove from tracking dict after status update
                            del adapter._current_tool_messages[tool_id]

                        # Show shell errors
                        if tool_name == "shell" and tool_status != "success":
                            pending_text = pending_text_by_namespace.get(ns_key, "")
                            if pending_text:
                                await _flush_assistant_text_ns(
                                    adapter, pending_text, ns_key, assistant_message_by_namespace
                                )
                                pending_text_by_namespace[ns_key] = ""
                            if tool_content:
                                await adapter._mount_message(ErrorMessage(str(tool_content)))

                        # Show file operation results - always show diffs in chat
                        if record:
                            pending_text = pending_text_by_namespace.get(ns_key, "")
                            if pending_text:
                                await _flush_assistant_text_ns(
                                    adapter, pending_text, ns_key, assistant_message_by_namespace
                                )
                                pending_text_by_namespace[ns_key] = ""
                            if record.diff:
                                await adapter._mount_message(
                                    DiffMessage(record.diff, record.display_path)
                                )
                        continue

                    # Extract token usage (before content_blocks check - usage may be on any chunk)
                    if adapter._token_tracker and hasattr(message, "usage_metadata"):
                        usage = message.usage_metadata
                        if usage:
                            # Use total_tokens which includes input + output
                            total_toks = usage.get("total_tokens", 0)
                            if total_toks:
                                captured_input_tokens = max(captured_input_tokens, total_toks)
                            else:
                                # Fallback to input + output if total not provided
                                input_toks = usage.get("input_tokens", 0)
                                output_toks = usage.get("output_tokens", 0)
                                if input_toks or output_toks:
                                    total = input_toks + output_toks
                                    captured_input_tokens = max(captured_input_tokens, total)

                    # Check if this is an AIMessageChunk with content
                    if not hasattr(message, "content_blocks"):
                        continue

                    # Process content blocks
                    for block in message.content_blocks:
                        block_type = block.get("type")

                        if block_type == "text":
                            text = block.get("text", "")
                            if text:
                                # Track accumulated text for reference
                                pending_text = pending_text_by_namespace.get(ns_key, "")
                                pending_text += text
                                pending_text_by_namespace[ns_key] = pending_text

                                # Get or create assistant message for this namespace
                                current_msg = assistant_message_by_namespace.get(ns_key)
                                if current_msg is None:
                                    current_msg = AssistantMessage()
                                    await adapter._mount_message(current_msg)
                                    assistant_message_by_namespace[ns_key] = current_msg
                                    # Anchor scroll once when message is created
                                    # anchor() keeps scroll locked to bottom as content grows
                                    if adapter._scroll_to_bottom:
                                        adapter._scroll_to_bottom()

                                # Append just the new text chunk for smoother streaming
                                # (uses MarkdownStream internally for better performance)
                                await current_msg.append_content(text)

                        elif block_type in ("tool_call_chunk", "tool_call"):
                            chunk_name = block.get("name")
                            chunk_args = block.get("args")
                            chunk_id = block.get("id")
                            chunk_index = block.get("index")

                            # Key the buffer by the streaming chunk index when
                            # available (stable across fragments of one call),
                            # else by tool-call id, else a synthetic key.
                            buffer_key: str | int
                            if chunk_index is not None:
                                buffer_key = chunk_index
                            elif chunk_id is not None:
                                buffer_key = chunk_id
                            else:
                                buffer_key = f"unknown-{len(tool_call_buffers)}"

                            buffer = tool_call_buffers.setdefault(
                                buffer_key,
                                {"name": None, "id": None, "args": None, "args_parts": []},
                            )

                            if chunk_name:
                                buffer["name"] = chunk_name
                            if chunk_id:
                                buffer["id"] = chunk_id

                            if isinstance(chunk_args, dict):
                                buffer["args"] = chunk_args
                                buffer["args_parts"] = []
                            elif isinstance(chunk_args, str):
                                if chunk_args:
                                    # Accumulate streamed JSON fragments, skipping an
                                    # exact repeat of the previous fragment.
                                    parts: list[str] = buffer.setdefault("args_parts", [])
                                    if not parts or chunk_args != parts[-1]:
                                        parts.append(chunk_args)
                                    buffer["args"] = "".join(parts)
                            elif chunk_args is not None:
                                buffer["args"] = chunk_args

                            buffer_name = buffer.get("name")
                            buffer_id = buffer.get("id")
                            if buffer_name is None:
                                continue

                            parsed_args = buffer.get("args")
                            if isinstance(parsed_args, str):
                                if not parsed_args:
                                    continue
                                try:
                                    parsed_args = json.loads(parsed_args)
                                except json.JSONDecodeError:
                                    continue
                            elif parsed_args is None:
                                continue

                            if not isinstance(parsed_args, dict):
                                parsed_args = {"value": parsed_args}

                            # Flush pending text before tool call
                            pending_text = pending_text_by_namespace.get(ns_key, "")
                            if pending_text:
                                await _flush_assistant_text_ns(
                                    adapter, pending_text, ns_key, assistant_message_by_namespace
                                )
                                pending_text_by_namespace[ns_key] = ""
                                assistant_message_by_namespace.pop(ns_key, None)

                            if buffer_id is not None and buffer_id not in displayed_tool_ids:
                                displayed_tool_ids.add(buffer_id)
                                file_op_tracker.start_operation(buffer_name, parsed_args, buffer_id)

                                # Mount tool call message
                                tool_msg = ToolCallMessage(buffer_name, parsed_args)
                                await adapter._mount_message(tool_msg)
                                adapter._current_tool_messages[buffer_id] = tool_msg

                            tool_call_buffers.pop(buffer_key, None)
                            display_str = format_tool_display(buffer_name, parsed_args)
                            adapter._update_status(f"Executing {display_str}...")

                    if getattr(message, "chunk_position", None) == "last":
                        pending_text = pending_text_by_namespace.get(ns_key, "")
                        if pending_text:
                            await _flush_assistant_text_ns(
                                adapter, pending_text, ns_key, assistant_message_by_namespace
                            )
                            pending_text_by_namespace[ns_key] = ""
                            assistant_message_by_namespace.pop(ns_key, None)

            # Flush any remaining text from all namespaces
            for ns_key, pending_text in list(pending_text_by_namespace.items()):
                if pending_text:
                    await _flush_assistant_text_ns(
                        adapter, pending_text, ns_key, assistant_message_by_namespace
                    )
            pending_text_by_namespace.clear()
            assistant_message_by_namespace.clear()

            # Handle HITL after stream completes
            if interrupt_occurred:
                any_rejected = False

                for interrupt_id, hitl_request in pending_interrupts.items():
                    if session_state.auto_approve:
                        # Auto-approve silently (user sees tool calls already)
                        decisions = [{"type": "approve"} for _ in hitl_request["action_requests"]]
                        hitl_response[interrupt_id] = {"decisions": decisions}
                    else:
                        # Request approval via adapter
                        decisions = []

                        def mark_hitl_approved(action_request: ActionRequest) -> None:
                            tool_name = action_request.get("name")
                            if tool_name not in {"write_file", "edit_file"}:
                                return
                            args = action_request.get("args", {})
                            if isinstance(args, dict):
                                file_op_tracker.mark_hitl_approved(tool_name, args)

                        for action_request in hitl_request["action_requests"]:
                            future = await adapter._request_approval(action_request, assistant_id)
                            decision = await future

                            # Check for auto-approve-all
                            if (
                                isinstance(decision, dict)
                                and decision.get("type") == "auto_approve_all"
                            ):
                                session_state.auto_approve = True
                                if adapter._on_auto_approve_enabled:
                                    adapter._on_auto_approve_enabled()
                                decisions.append({"type": "approve"})
                                mark_hitl_approved(action_request)
                                # Approve remaining actions
                                for _ in hitl_request["action_requests"][len(decisions) :]:
                                    decisions.append({"type": "approve"})
                                break

                            decisions.append(decision)
                            # Try multiple keys for tool call id
                            tool_id = (
                                action_request.get("id")
                                or action_request.get("tool_call_id")
                                or action_request.get("call_id")
                            )
                            tool_name = action_request.get("name", "")

                            # Find matching tool message - by id or by name as fallback
                            tool_msg = None
                            tool_msg_key = None  # Track key for cleanup
                            if tool_id and tool_id in adapter._current_tool_messages:
                                tool_msg = adapter._current_tool_messages[tool_id]
                                tool_msg_key = tool_id
                            elif tool_name:
                                # Fallback: find last tool message with matching name
                                for key, msg in reversed(
                                    list(adapter._current_tool_messages.items())
                                ):
                                    if msg._tool_name == tool_name:
                                        tool_msg = msg
                                        tool_msg_key = key
                                        break

                            if isinstance(decision, dict) and decision.get("type") == "approve":
                                mark_hitl_approved(action_request)
                                # Don't call set_success here - wait for actual tool output
                                # The ToolMessage handler will update with real results
                            elif isinstance(decision, dict) and decision.get("type") == "reject":
                                if tool_msg:
                                    tool_msg.set_rejected()
                                # Only remove from tracking on reject (approved tools need output update)
                                if tool_msg_key and tool_msg_key in adapter._current_tool_messages:
                                    del adapter._current_tool_messages[tool_msg_key]

                        if any(d.get("type") == "reject" for d in decisions):
                            any_rejected = True

                        hitl_response[interrupt_id] = {"decisions": decisions}

                suppress_resumed_output = any_rejected

            if interrupt_occurred and hitl_response:
                if suppress_resumed_output:
                    await adapter._mount_message(
                        SystemMessage("Command rejected. Tell the agent what you'd like instead.")
                    )
                    return

                stream_input = Command(resume=hitl_response)
            else:
                break

    except asyncio.CancelledError:
        adapter._update_status("Interrupted")

        # Mark any pending tools as rejected
        for tool_msg in list(adapter._current_tool_messages.values()):
            tool_msg.set_rejected()
        adapter._current_tool_messages.clear()

        await adapter._mount_message(SystemMessage("Interrupted by user"))

        # Append cancellation message to agent state so LLM knows what happened
        # This preserves context rather than rolling back
        try:
            cancellation_msg = HumanMessage(
                content="[SYSTEM] Task interrupted by user. Previous operation was cancelled."
            )
            await agent.aupdate_state(config, {"messages": [cancellation_msg]})
        except Exception:  # noqa: S110
            pass  # State update is best-effort
        # Report tokens even on interrupt (or restore display if none captured)
        if adapter._token_tracker:
            if captured_input_tokens or captured_output_tokens:
                adapter._token_tracker.add(captured_input_tokens, captured_output_tokens)
            else:
                adapter._token_tracker.show()  # Restore previous value
        return

    except KeyboardInterrupt:
        adapter._update_status("Interrupted")

        # Mark any pending tools as rejected
        for tool_msg in list(adapter._current_tool_messages.values()):
            tool_msg.set_rejected()
        adapter._current_tool_messages.clear()

        await adapter._mount_message(SystemMessage("Interrupted by user"))

        # Append cancellation message to agent state
        try:
            cancellation_msg = HumanMessage(
                content="[SYSTEM] Task interrupted by user. Previous operation was cancelled."
            )
            await agent.aupdate_state(config, {"messages": [cancellation_msg]})
        except Exception:  # noqa: S110
            pass  # State update is best-effort
        # Report tokens even on interrupt (or restore display if none captured)
        if adapter._token_tracker:
            if captured_input_tokens or captured_output_tokens:
                adapter._token_tracker.add(captured_input_tokens, captured_output_tokens)
            else:
                adapter._token_tracker.show()  # Restore previous value
        return

    adapter._update_status("Ready")

    # Update token tracker
    if adapter._token_tracker and (captured_input_tokens or captured_output_tokens):
        adapter._token_tracker.add(captured_input_tokens, captured_output_tokens)


async def _flush_assistant_text_ns(
    adapter: TextualUIAdapter,
    text: str,
    ns_key: tuple,
    assistant_message_by_namespace: dict[tuple, Any],
) -> None:
    """Flush accumulated assistant text for a specific namespace.

    Finalizes the streaming by stopping the MarkdownStream.
    If no message exists yet, creates one with the full content.
    """
    if not text.strip():
        return

    current_msg = assistant_message_by_namespace.get(ns_key)
    if current_msg is None:
        # No message was created during streaming - create one with full content
        current_msg = AssistantMessage(text)
        await adapter._mount_message(current_msg)
        await current_msg.write_initial_content()
        assistant_message_by_namespace[ns_key] = current_msg
    else:
        # Stop the stream to finalize the content
        await current_msg.stop_stream()
```

### tools.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/tools.py`

```python
"""Custom tools for the CLI agent."""

from typing import Any, Literal

import requests
from markdownify import markdownify
from tavily import TavilyClient

from deepagents_cli.config import settings

# Initialize Tavily client if API key is available
tavily_client = TavilyClient(api_key=settings.tavily_api_key) if settings.has_tavily else None


def http_request(
    url: str,
    method: str = "GET",
    headers: dict[str, str] | None = None,
    data: str | dict | None = None,
    params: dict[str, str] | None = None,
    timeout: int = 30,
) -> dict[str, Any]:
    """Make HTTP requests to APIs and web services.

    Args:
        url: Target URL
        method: HTTP method (GET, POST, PUT, DELETE, etc.)
        headers: HTTP headers to include
        data: Request body data (string or dict)
        params: URL query parameters
        timeout: Request timeout in seconds

    Returns:
        Dictionary with response data including status, headers, and content
    """
    try:
        kwargs = {"url": url, "method": method.upper(), "timeout": timeout}

        if headers:
            kwargs["headers"] = headers
        if params:
            kwargs["params"] = params
        if data:
            if isinstance(data, dict):
                kwargs["json"] = data
            else:
                kwargs["data"] = data

        response = requests.request(**kwargs)

        try:
            content = response.json()
        except ValueError:  # body is not valid JSON
            content = response.text

        return {
            "success": response.status_code < 400,
            "status_code": response.status_code,
            "headers": dict(response.headers),
            "content": content,
            "url": response.url,
        }

    except requests.exceptions.Timeout:
        return {
            "success": False,
            "status_code": 0,
            "headers": {},
            "content": f"Request timed out after {timeout} seconds",
            "url": url,
        }
    except requests.exceptions.RequestException as e:
        return {
            "success": False,
            "status_code": 0,
            "headers": {},
            "content": f"Request error: {e!s}",
            "url": url,
        }
    except Exception as e:
        return {
            "success": False,
            "status_code": 0,
            "headers": {},
            "content": f"Error making request: {e!s}",
            "url": url,
        }


def web_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news", "finance"] = "general",
    include_raw_content: bool = False,
):
    """Search the web using Tavily for current information and documentation.

    This tool searches the web and returns relevant results. After receiving results,
    you MUST synthesize the information into a natural, helpful response for the user.

    Args:
        query: The search query (be specific and detailed)
        max_results: Number of results to return (default: 5)
        topic: Search topic type - "general" for most queries, "news" for current events, "finance" for financial topics
        include_raw_content: Include full page content (warning: uses more tokens)

    Returns:
        Dictionary containing:
        - results: List of search results, each with:
            - title: Page title
            - url: Page URL
            - content: Relevant excerpt from the page
            - score: Relevance score (0-1)
        - query: The original search query

    IMPORTANT: After using this tool:
    1. Read through the 'content' field of each result
    2. Extract relevant information that answers the user's question
    3. Synthesize this into a clear, natural language response
    4. Cite sources by mentioning the page titles or URLs
    5. NEVER show the raw JSON to the user - always provide a formatted response
    """
    if tavily_client is None:
        return {
            "error": "Tavily API key not configured. Please set TAVILY_API_KEY environment variable.",
            "query": query,
        }

    try:
        return tavily_client.search(
            query,
            max_results=max_results,
            include_raw_content=include_raw_content,
            topic=topic,
        )
    except Exception as e:
        return {"error": f"Web search error: {e!s}", "query": query}


def fetch_url(url: str, timeout: int = 30) -> dict[str, Any]:
    """Fetch content from a URL and convert HTML to markdown format.

    This tool fetches web page content and converts it to clean markdown text,
    making it easy to read and process HTML content. After receiving the markdown,
    you MUST synthesize the information into a natural, helpful response for the user.

    Args:
        url: The URL to fetch (must be a valid HTTP/HTTPS URL)
        timeout: Request timeout in seconds (default: 30)

    Returns:
        Dictionary containing:
        - success: Whether the request succeeded
        - url: The final URL after redirects
        - markdown_content: The page content converted to markdown
        - status_code: HTTP status code
        - content_length: Length of the markdown content in characters

    IMPORTANT: After using this tool:
    1. Read through the markdown content
    2. Extract relevant information that answers the user's question
    3. Synthesize this into a clear, natural language response
    4. NEVER show the raw markdown to the user unless specifically requested
    """
    try:
        response = requests.get(
            url,
            timeout=timeout,
            headers={"User-Agent": "Mozilla/5.0 (compatible; DeepAgents/1.0)"},
        )
        response.raise_for_status()

        # Convert HTML content to markdown
        markdown_content = markdownify(response.text)

        return {
            "success": True,
            "url": str(response.url),
            "markdown_content": markdown_content,
            "status_code": response.status_code,
            "content_length": len(markdown_content),
        }
    except Exception as e:
        return {"error": f"Fetch URL error: {e!s}", "url": url}
```

### ui.py

Source: `/a0/tmp/skills_research/langchain/libs/deepagents-cli/deepagents_cli/ui.py`

```python
"""UI rendering and display utilities for the CLI."""

import json
from pathlib import Path
from typing import Any

from .config import COLORS, DEEP_AGENTS_ASCII, MAX_ARG_LENGTH, console


def truncate_value(value: str, max_length: int = MAX_ARG_LENGTH) -> str:
    """Truncate a string value if it exceeds max_length."""
    if len(value) > max_length:
        return value[:max_length] + "..."
    return value


def format_tool_display(tool_name: str, tool_args: dict) -> str:
    """Format tool calls for display with tool-specific smart formatting.

    Shows the most relevant information for each tool type rather than all arguments.

    Args:
        tool_name: Name of the tool being called
        tool_args: Dictionary of tool arguments

    Returns:
        Formatted string for display (e.g., "read_file(config.py)")

    Examples:
        read_file(path="/long/path/file.py") → "read_file(file.py)"
        web_search(query="how to code", max_results=5) → 'web_search("how to code")'
        shell(command="pip install foo") → 'shell("pip install foo")'
    """

    def abbreviate_path(path_str: str, max_length: int = 60) -> str:
        """Abbreviate a file path intelligently - show basename or relative path."""
        try:
            path = Path(path_str)

            # If it's just a filename (no directory parts), return as-is
            if len(path.parts) == 1:
                return path_str

            # Try to get relative path from current working directory
            try:
                rel_path = path.relative_to(Path.cwd())
                rel_str = str(rel_path)
                # Use relative if it's shorter and not too long
                if len(rel_str) < len(path_str) and len(rel_str) <= max_length:
                    return rel_str
            except Exception:
                # relative_to raises ValueError; fall through on any failure
                pass

            # If absolute path is reasonable length, use it
            if len(path_str) <= max_length:
                return path_str

            # Otherwise, just show basename (filename only)
            return path.name
        except Exception:
            # Fallback to original string if any error
            return truncate_value(path_str, max_length)

    # Tool-specific formatting - show the most important argument(s)
    if tool_name in ("read_file", "write_file", "edit_file"):
        # File operations: show the primary file path argument (file_path or path)
        path_value = tool_args.get("file_path")
        if path_value is None:
            path_value = tool_args.get("path")
        if path_value is not None:
            path = abbreviate_path(str(path_value))
            return f"{tool_name}({path})"

    elif tool_name == "web_search":
        # Web search: show the query string
        if "query" in tool_args:
            query = str(tool_args["query"])
            query = truncate_value(query, 100)
            return f'{tool_name}("{query}")'

    elif tool_name == "grep":
        # Grep: show the search pattern
        if "pattern" in tool_args:
            pattern = str(tool_args["pattern"])
            pattern = truncate_value(pattern, 70)
            return f'{tool_name}("{pattern}")'

    elif tool_name == "shell":
        # Shell: show the command being executed
        if "command" in tool_args:
            command = str(tool_args["command"])
            command = truncate_value(command, 120)
            return f'{tool_name}("{command}")'

    elif tool_name == "ls":
        # ls: show directory, or empty if current directory
        if tool_args.get("path"):
            path = abbreviate_path(str(tool_args["path"]))
            return f"{tool_name}({path})"
        return f"{tool_name}()"

    elif tool_name == "glob":
        # Glob: show the pattern
        if "pattern" in tool_args:
            pattern = str(tool_args["pattern"])
            pattern = truncate_value(pattern, 80)
            return f'{tool_name}("{pattern}")'

    elif tool_name == "http_request":
        # HTTP: show method and URL
        parts = []
        if "method" in tool_args:
            parts.append(str(tool_args["method"]).upper())
        if "url" in tool_args:
            url = str(tool_args["url"])
            url = truncate_value(url, 80)
            parts.append(url)
        if parts:
            return f"{tool_name}({' '.join(parts)})"

    elif tool_name == "fetch_url":
        # Fetch URL: show the URL being fetched
        if "url" in tool_args:
            url = str(tool_args["url"])
            url = truncate_value(url, 80)
            return f'{tool_name}("{url}")'

    elif tool_name == "task":
        # Task: show the task description
        if "description" in tool_args:
            desc = str(tool_args["description"])
            desc = truncate_value(desc, 100)
            return f'{tool_name}("{desc}")'

    elif tool_name == "write_todos":
        # Todos: show count of items
        if "todos" in tool_args and isinstance(tool_args["todos"], list):
            count = len(tool_args["todos"])
            return f"{tool_name}({count} items)"

    # Fallback: generic formatting for unknown tools
    # Show all arguments in key=value format
    args_str = ", ".join(f"{k}={truncate_value(str(v), 50)}" for k, v in tool_args.items())
    return f"{tool_name}({args_str})"


def format_tool_message_content(content: Any) -> str:
    """Convert ToolMessage content into a printable string."""
    if content is None:
        return ""
    if isinstance(content, list):
        parts = []
        for item in content:
            if isinstance(item, str):
                parts.append(item)
            else:
                try:
                    parts.append(json.dumps(item))
                except Exception:
                    parts.append(str(item))
        return "\n".join(parts)
    return str(content)


def show_help() -> None:
    """Show help information."""
    console.print()
    console.print(DEEP_AGENTS_ASCII, style=f"bold {COLORS['primary']}")
    console.print()

    console.print("[bold]Usage:[/bold]", style=COLORS["primary"])
    console.print("  deepagents [OPTIONS]                           Start interactive session")
    console.print("  deepagents list                                List all available agents")
    console.print("  deepagents reset --agent AGENT                 Reset agent to default prompt")
    console.print(
        "  deepagents reset --agent AGENT --target SOURCE Reset agent to copy of another agent"
    )
    console.print("  deepagents help                                Show this help message")
    console.print("  deepagents --version                           Show deepagents version")
    console.print()

    console.print("[bold]Options:[/bold]", style=COLORS["primary"])
    console.print("  --agent NAME                  Agent identifier (default: agent)")
    console.print(
        "  --model MODEL                 Model to use (e.g., claude-sonnet-4-5-20250929, gpt-4o)"
    )
    console.print("  --auto-approve                Auto-approve tool usage without prompting")
    console.print(
        "  --sandbox TYPE                Remote sandbox for execution (modal, runloop, daytona)"
    )
    console.print("  --sandbox-id ID               Reuse existing sandbox (skips creation/cleanup)")
    console.print(
        "  -r, --resume [ID]             Resume thread: -r for most recent, -r <ID> for specific"
    )
    console.print()

    console.print("[bold]Examples:[/bold]", style=COLORS["primary"])
    console.print(
        "  deepagents                              # Start with default agent", style=COLORS["dim"]
    )
    console.print(
        "  deepagents --agent mybot                # Start with agent named 'mybot'",
        style=COLORS["dim"],
    )
    console.print(
        "  deepagents --model gpt-4o               # Use specific model (auto-detects provider)",
        style=COLORS["dim"],
    )
    console.print(
        "  deepagents -r                           # Resume most recent session",
        style=COLORS["dim"],
    )
    console.print(
        "  deepagents -r abc123                    # Resume specific thread",
        style=COLORS["dim"],
    )
    console.print(
        "  deepagents --auto-approve               # Start with auto-approve enabled",
        style=COLORS["dim"],
    )
    console.print(
        "  deepagents --sandbox runloop            # Execute code in Runloop sandbox",
        style=COLORS["dim"],
    )
    console.print()

    console.print("[bold]Thread Management:[/bold]", style=COLORS["primary"])
    console.print(
        "  deepagents threads list                 # List all sessions", style=COLORS["dim"]
    )
    console.print(
        "  deepagents threads delete <ID>          # Delete a session", style=COLORS["dim"]
    )
    console.print()

    console.print("[bold]Interactive Features:[/bold]", style=COLORS["primary"])
    console.print("  Enter           Submit your message", style=COLORS["dim"])
    console.print("  Ctrl+J          Insert newline", style=COLORS["dim"])
    console.print("  Shift+Tab       Toggle auto-approve mode", style=COLORS["dim"])
    console.print("  @filename       Auto-complete files and inject content", style=COLORS["dim"])
    console.print("  /command        Slash commands (/help, /clear, /quit)", style=COLORS["dim"])
    console.print("  !command        Run bash commands directly", style=COLORS["dim"])
    console.print()
```
readme
langchain SKILL.md License: LICENSE Version: Unknown
Imported skill readme from langchain
View skill
# Content Builder Agent

<img width="1255" height="756" alt="content-cover-image" src="https://github.com/user-attachments/assets/4ebe0aba-2780-4644-8a00-ed4b96680dc9" />

A content writing agent for writing blog posts, LinkedIn posts, and tweets with cover images included.

**This example demonstrates how to define an agent through three filesystem primitives:**
- **Memory** (`AGENTS.md`) – persistent context like brand voice and style guidelines
- **Skills** (`skills/*/SKILL.md`) – workflows for specific tasks, loaded on demand
- **Subagents** (`subagents.yaml`) – specialized agents for delegated tasks like research

The `content_writer.py` script shows how to combine these into a working agent.

## Quick Start

```bash
# Set API keys
export ANTHROPIC_API_KEY="..."
export GOOGLE_API_KEY="..."      # For image generation
export TAVILY_API_KEY="..."      # For web search (optional)

# Run (uv automatically installs dependencies on first run)
cd examples/content-builder-agent
uv run python content_writer.py "Write a blog post about prompt engineering"
```

**More examples:**
```bash
uv run python content_writer.py "Create a LinkedIn post about AI agents"
uv run python content_writer.py "Write a Twitter thread about the future of coding"
```

## How It Works

The agent is configured by files on disk, not code:

```
content-builder-agent/
├── AGENTS.md                    # Brand voice & style guide
├── subagents.yaml               # Subagent definitions
├── skills/
│   ├── blog-post/
│   │   └── SKILL.md             # Blog writing workflow
│   └── social-media/
│       └── SKILL.md             # Social media workflow
└── content_writer.py            # Wires it together (includes tools)
```

| File | Purpose | When Loaded |
|------|---------|-------------|
| `AGENTS.md` | Brand voice, tone, writing standards | Always (system prompt) |
| `subagents.yaml` | Research and other delegated tasks | Always (defines `task` tool) |
| `skills/*/SKILL.md` | Content-specific workflows | On demand |

**What's in the skills?** Each skill teaches the agent a specific workflow:
- **Blog posts:** Structure (hook → context → main content → CTA), SEO best practices, research-first approach
- **Social media:** Platform-specific formats (LinkedIn character limits, Twitter thread structure), hashtag usage
- **Image generation:** Detailed prompt engineering guides with examples for different content types (technical posts, announcements, thought leadership)

## Architecture

```python
agent = create_deep_agent(
    memory=["./AGENTS.md"],                        # ← Middleware loads into system prompt
    skills=["./skills/"],                          # ← Middleware loads on demand
    tools=[generate_cover, generate_social_image], # ← Image generation tools
    subagents=load_subagents("./subagents.yaml"),  # ← See note below
    backend=FilesystemBackend(root_dir="./"),
)
```

The `memory` and `skills` parameters are handled natively by deepagents middleware. Tools are defined in the script and passed directly.

**Note on subagents:** Unlike `memory` and `skills`, subagents must be defined in code. We use a small `load_subagents()` helper to externalize config to YAML. You can also define them inline:

```python
subagents=[
    {
        "name": "researcher",
        "description": "Research topics before writing...",
        "model": "anthropic:claude-haiku-4-5-20251001",
        "system_prompt": "You are a research assistant...",
        "tools": [web_search],
    }
],
```

**Flow:**
1. Agent receives task → loads relevant skill (blog-post or social-media)
2. Delegates research to `researcher` subagent → saves to `research/`
3. Writes content following skill workflow → saves to `blogs/` or `linkedin/`
4. Generates cover image with Gemini → saves alongside content

## Output

```
blogs/
└── prompt-engineering/
    ├── post.md       # Blog content
    └── hero.png      # Generated cover image

linkedin/
└── ai-agents/
    ├── post.md       # Post content
    └── image.png     # Generated image

research/
└── prompt-engineering.md   # Research notes
```

## Customizing

**Change the voice:** Edit `AGENTS.md` to modify brand tone and style.

**Add a content type:** Create `skills/<name>/SKILL.md` with YAML frontmatter:
```yaml
---
name: newsletter
description: Use this skill when writing email newsletters
---
# Newsletter Skill
...
```

**Add a subagent:** Add to `subagents.yaml`:
```yaml
editor:
  description: Review and improve drafted content
  model: anthropic:claude-haiku-4-5-20251001
  system_prompt: |
    You are an editor. Review the content and suggest improvements...
  tools: []
```

**Add a tool:** Define it in `content_writer.py` with the `@tool` decorator and add to `tools=[]`.
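
A minimal sketch of such a tool (the `word_count` tool here is hypothetical, purely for illustration):

```python
from langchain_core.tools import tool


@tool
def word_count(text: str) -> int:
    """Count the words in a piece of drafted content."""
    return len(text.split())


# Then register it alongside the image tools:
# tools=[generate_cover, generate_social_image, word_count]
```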

## Security Note

This agent has filesystem access and can read, write, and delete files on your machine. Review generated content before publishing and avoid running in directories with sensitive data.

## Requirements

- Python 3.11+
- `ANTHROPIC_API_KEY` - For the main agent
- `GOOGLE_API_KEY` - For image generation (uses Gemini's [Imagen / "nano banana"](https://ai.google.dev/gemini-api/docs/image-generation) via `gemini-2.5-flash-image`)
- `TAVILY_API_KEY` - For web search (optional, research still works without it)

## Bundled Sources

### content_writer.py

Source: `/a0/tmp/skills_research/langchain/examples/content-builder-agent/content_writer.py`

```python
#!/usr/bin/env python3
"""
Content Builder Agent

A content writer agent configured entirely through files on disk:
- AGENTS.md defines brand voice and style guide
- skills/ provides specialized workflows (blog posts, social media)
- skills/*/scripts/ provides tools bundled with each skill
- subagents handle research and other delegated tasks

Usage:
    uv run python content_writer.py "Write a blog post about AI agents"
    uv run python content_writer.py "Create a LinkedIn post about prompt engineering"
"""

import warnings

# Silence the Pydantic V1 deprecation warning before other imports pull it in
warnings.filterwarnings("ignore", message="Core Pydantic V1 functionality")

import asyncio
import os
import sys
from pathlib import Path
from typing import Literal

import yaml

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.tools import tool
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown
from rich.panel import Panel
from rich.spinner import Spinner
from rich.text import Text

from deepagents import create_deep_agent
from deepagents.backends import FilesystemBackend

EXAMPLE_DIR = Path(__file__).parent
console = Console()


# Web search tool for the researcher subagent
@tool
def web_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news"] = "general",
) -> dict:
    """Search the web for current information.

    Args:
        query: The search query (be specific and detailed)
        max_results: Number of results to return (default: 5)
        topic: "general" for most queries, "news" for current events

    Returns:
        Search results with titles, URLs, and content excerpts.
    """
    try:
        from tavily import TavilyClient

        api_key = os.environ.get("TAVILY_API_KEY")
        if not api_key:
            return {"error": "TAVILY_API_KEY not set"}

        client = TavilyClient(api_key=api_key)
        return client.search(query, max_results=max_results, topic=topic)
    except Exception as e:
        return {"error": f"Search failed: {e}"}


@tool
def generate_cover(prompt: str, slug: str) -> str:
    """Generate a cover image for a blog post.

    Args:
        prompt: Detailed description of the image to generate.
        slug: Blog post slug. Image saves to blogs/<slug>/hero.png
    """
    try:
        from google import genai

        client = genai.Client()
        response = client.models.generate_content(
            model="gemini-2.5-flash-image",
            contents=[prompt],
        )

        for part in response.parts:
            if part.inline_data is not None:
                image = part.as_image()
                output_path = EXAMPLE_DIR / "blogs" / slug / "hero.png"
                output_path.parent.mkdir(parents=True, exist_ok=True)
                image.save(str(output_path))
                return f"Image saved to {output_path}"

        return "No image generated"
    except Exception as e:
        return f"Error: {e}"


@tool
def generate_social_image(prompt: str, platform: str, slug: str) -> str:
    """Generate an image for a social media post.

    Args:
        prompt: Detailed description of the image to generate.
        platform: Either "linkedin" or "tweets"
        slug: Post slug. Image saves to <platform>/<slug>/image.png
    """
    try:
        from google import genai

        client = genai.Client()
        response = client.models.generate_content(
            model="gemini-2.5-flash-image",
            contents=[prompt],
        )

        for part in response.parts:
            if part.inline_data is not None:
                image = part.as_image()
                output_path = EXAMPLE_DIR / platform / slug / "image.png"
                output_path.parent.mkdir(parents=True, exist_ok=True)
                image.save(str(output_path))
                return f"Image saved to {output_path}"

        return "No image generated"
    except Exception as e:
        return f"Error: {e}"


def load_subagents(config_path: Path) -> list:
    """Load subagent definitions from YAML and wire up tools.

    NOTE: This is a custom utility for this example. Unlike `memory` and `skills`,
    deepagents doesn't natively load subagents from files - they're normally
    defined inline in the create_deep_agent() call. We externalize to YAML here
    to keep configuration separate from code.
    """
    # Map tool names to actual tool objects
    available_tools = {
        "web_search": web_search,
    }

    with open(config_path) as f:
        config = yaml.safe_load(f)

    subagents = []
    for name, spec in config.items():
        subagent = {
            "name": name,
            "description": spec["description"],
            "system_prompt": spec["system_prompt"],
        }
        if "model" in spec:
            subagent["model"] = spec["model"]
        if "tools" in spec:
            subagent["tools"] = [available_tools[t] for t in spec["tools"]]
        subagents.append(subagent)

    return subagents


def create_content_writer():
    """Create a content writer agent configured by filesystem files."""
    return create_deep_agent(
        memory=["./AGENTS.md"],           # Loaded by MemoryMiddleware
        skills=["./skills/"],             # Loaded by SkillsMiddleware
        tools=[generate_cover, generate_social_image],  # Image generation
        subagents=load_subagents(EXAMPLE_DIR / "subagents.yaml"),  # Custom helper
        backend=FilesystemBackend(root_dir=EXAMPLE_DIR),
    )


class AgentDisplay:
    """Manages the display of agent progress."""

    def __init__(self):
        self.printed_count = 0
        self.current_status = ""
        self.spinner = Spinner("dots", text="Thinking...")

    def update_status(self, status: str):
        self.current_status = status
        self.spinner = Spinner("dots", text=status)

    def print_message(self, msg):
        """Print a message with nice formatting."""
        if isinstance(msg, HumanMessage):
            console.print(Panel(str(msg.content), title="You", border_style="blue"))

        elif isinstance(msg, AIMessage):
            content = msg.content
            if isinstance(content, list):
                text_parts = [p.get("text", "") for p in content if isinstance(p, dict) and p.get("type") == "text"]
                content = "\n".join(text_parts)

            if content and content.strip():
                console.print(Panel(Markdown(content), title="Agent", border_style="green"))

            if msg.tool_calls:
                for tc in msg.tool_calls:
                    name = tc.get("name", "unknown")
                    args = tc.get("args", {})

                    if name == "task":
                        desc = args.get("description", "researching...")
                        console.print(f"  [bold magenta]>> Researching:[/] {desc[:60]}...")
                        self.update_status(f"Researching: {desc[:40]}...")
                    elif name in ("generate_cover", "generate_social_image"):
                        console.print(f"  [bold cyan]>> Generating image...[/]")
                        self.update_status("Generating image...")
                    elif name == "write_file":
                        path = args.get("file_path", "file")
                        console.print(f"  [bold yellow]>> Writing:[/] {path}")
                    elif name == "web_search":
                        query = args.get("query", "")
                        console.print(f"  [bold blue]>> Searching:[/] {query[:50]}...")
                        self.update_status(f"Searching: {query[:30]}...")

        elif isinstance(msg, ToolMessage):
            name = getattr(msg, "name", "")
            if name in ("generate_cover", "generate_social_image"):
                if "saved" in msg.content.lower():
                    console.print(f"  [green]✓ Image saved[/]")
                else:
                    console.print(f"  [red]✗ Image failed: {msg.content}[/]")
            elif name == "write_file":
                console.print(f"  [green]✓ File written[/]")
            elif name == "task":
                console.print(f"  [green]✓ Research complete[/]")
            elif name == "web_search":
                if "error" not in msg.content.lower():
                    console.print(f"  [green]✓ Found results[/]")


async def main():
    """Run the content writer agent with streaming output."""
    if len(sys.argv) > 1:
        task = " ".join(sys.argv[1:])
    else:
        task = "Write a blog post about how AI agents are transforming software development"

    console.print()
    console.print("[bold blue]Content Builder Agent[/]")
    console.print(f"[dim]Task: {task}[/]")
    console.print()

    agent = create_content_writer()
    display = AgentDisplay()

    console.print()

    # Use Live display for spinner during waiting periods
    with Live(display.spinner, console=console, refresh_per_second=10, transient=True) as live:
        async for chunk in agent.astream(
            {"messages": [("user", task)]},
            config={"configurable": {"thread_id": "content-writer-demo"}},
            stream_mode="values",
        ):
            if "messages" in chunk:
                messages = chunk["messages"]
                if len(messages) > display.printed_count:
                    # Temporarily stop spinner to print
                    live.stop()
                    for msg in messages[display.printed_count:]:
                        display.print_message(msg)
                    display.printed_count = len(messages)
                    # Resume spinner
                    live.start()
                    live.update(display.spinner)

    console.print()
    console.print("[bold green]✓ Done![/]")


if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        console.print("\n[yellow]Interrupted[/]")
```
skill
langchain SKILL.md License: LICENSE Version: Unknown
Imported skill skill from langchain
View skill
---
name: blog-post
description: Use this skill when writing long-form blog posts, tutorials, or educational articles that require structure, depth, and SEO considerations
---

# Blog Post Writing Skill

This skill provides a structured workflow for creating high-quality blog posts that educate and engage readers.

## When to Use This Skill

Use this skill when asked to:
- Write a blog post or article
- Create a tutorial or how-to guide
- Develop educational long-form content
- Write thought leadership pieces

## Research First (Required)

**Before writing any blog post, you MUST delegate research:**

1. Use the `task` tool with `subagent_type: "researcher"`
2. In the description, specify BOTH the topic AND where to save:

```
task(
    subagent_type="researcher",
    description="Research [TOPIC]. Save findings to research/[slug].md"
)
```

Example:
```
task(
    subagent_type="researcher",
    description="Research the current state of AI agents in 2025. Save findings to research/ai-agents-2025.md"
)
```

3. After research completes, read the findings file before writing
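
For example, assuming the agent's built-in `read_file` tool:

```
read_file(file_path="research/ai-agents-2025.md")
```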

## Output Structure (Required)

**Every blog post MUST have both a post AND a cover image:**

```
blogs/
└── <slug>/
    ├── post.md        # The blog post content
    └── hero.png       # REQUIRED: Generated cover image
```

Example: A post about "AI Agents in 2025" → `blogs/ai-agents-2025/`

**You MUST complete both steps:**
1. Write the post to `blogs/<slug>/post.md`
2. Generate a cover image using the `generate_cover` tool, which saves it to `blogs/<slug>/hero.png`

**A blog post is NOT complete without its cover image.**

## Blog Post Structure

Every blog post should follow this structure:

### 1. Hook (Opening)
- Start with a compelling question, statistic, or statement
- Make the reader want to continue
- Keep it to 2-3 sentences

### 2. Context (The Problem)
- Explain why this topic matters
- Describe the problem or opportunity
- Connect to the reader's experience

### 3. Main Content (The Solution)
- Break into 3-5 main sections with H2 headers
- Each section covers one key point
- Include code examples, diagrams, or screenshots where helpful
- Use bullet points for lists

### 4. Practical Application
- Show how to apply the concepts
- Include step-by-step instructions if applicable
- Provide code snippets or templates

### 5. Conclusion & CTA
- Summarize key takeaways (3 bullets max)
- End with a clear call-to-action
- Link to related resources

## Cover Image Generation

After writing the post, generate a cover image using the `generate_cover` tool:

```
generate_cover(prompt="A detailed description of the image...", slug="your-blog-slug")
```

The tool saves the image to `blogs/<slug>/hero.png`.

### Writing Effective Image Prompts

Structure your prompt with these elements:

1. **Subject**: What is the main focus? Be specific and concrete.
2. **Style**: Art direction (minimalist, isometric, flat design, 3D render, watercolor, etc.)
3. **Composition**: How elements are arranged (centered, rule of thirds, symmetrical)
4. **Color palette**: Specific colors or mood (warm earth tones, cool blues and purples, high contrast)
5. **Lighting/Atmosphere**: Soft diffused light, dramatic shadows, golden hour, neon glow
6. **Technical details**: Aspect ratio considerations, negative space for text overlay

### Example Prompts

**For a technical blog post:**
```
Isometric 3D illustration of interconnected glowing cubes representing AI agents, each cube has subtle circuit patterns. Cubes connected by luminous data streams. Deep navy background (#0a192f) with electric blue (#64ffda) and soft purple (#c792ea) accents. Clean minimal style, lots of negative space at top for title. Professional tech aesthetic.
```

**For a tutorial/how-to:**
```
Clean flat illustration of hands typing on a keyboard with abstract code symbols floating upward, transforming into lightbulbs and gears. Warm gradient background from soft coral to light peach. Friendly, approachable style. Centered composition with space for text overlay.
```

**For thought leadership:**
```
Abstract visualization of a human silhouette profile merging with geometric neural network patterns. Split composition - organic watercolor texture on left transitioning to clean vector lines on right. Muted sage green and warm terracotta color scheme. Contemplative, forward-thinking mood.
```

## SEO Considerations

- Include the main keyword in the title and first paragraph
- Use the keyword naturally 3-5 times throughout
- Keep the title under 60 characters
- Write a meta description (150-160 characters)
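
For example (illustrative values):

```
Title: Prompt Engineering: 7 Patterns That Actually Work   (under 60 characters)
Meta: Learn seven practical prompt engineering patterns, with concrete
examples you can apply today to get more reliable output from large
language models.   (about 150 characters)
```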

## Quality Checklist

Before finishing:
- [ ] Post saved to `blogs/<slug>/post.md`
- [ ] Hero image generated at `blogs/<slug>/hero.png`
- [ ] Hook grabs attention in first 2 sentences
- [ ] Each section has a clear purpose
- [ ] Conclusion summarizes key points
- [ ] CTA tells reader what to do next
advanced_search
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill advanced_search from openai
View skill
# Advanced Search Techniques

## Search Filtering

### By Date Range

Use `created_date_range` to find recent content:

```
filters: {
  created_date_range: {
    start_date: "2024-01-01",
    end_date: "2025-01-01"
  }
}
```

**When to use**:
- Finding recent updates on a topic
- Focusing on current information
- Excluding outdated content

### By Creator

Use `created_by_user_ids` to find content from specific people:

```
filters: {
  created_by_user_ids: ["user-id-1", "user-id-2"]
}
```

**When to use**:
- Research from subject matter experts
- Team-specific information
- Attribution tracking

### Combined Filters

Stack filters for precision:

```
filters: {
  created_date_range: {
    start_date: "2024-10-01"
  },
  created_by_user_ids: ["expert-user-id"]
}
```

## Scoped Searches

### Teamspace Scoping

Restrict search to specific teamspace:

```
teamspace_id: "teamspace-uuid"
```

**When to use**:
- Project-specific research
- Department-focused information
- Reducing noise from irrelevant results

### Page Scoping

Search within a specific page and its subpages:

```
page_url: "https://notion.so/workspace/Page-Title-uuid"
```

**When to use**:
- Research within a project hierarchy
- Documentation updates
- Focused investigation

### Database Scoping

Search within a database's content:

```
data_source_url: "collection://data-source-uuid"
```

**When to use**:
- Task/project database research
- Structured data investigation
- Finding specific entries

## Search Strategies

### Broad to Narrow

1. Start with general search term
2. Review results for relevant teamspaces/pages
3. Re-search with scope filters
4. Fetch detailed content from top results

**Example**:
```
Search 1: query="API integration" → 50 results across workspace
Search 2: query="API integration", teamspace_id="engineering" → 12 results
Fetch: Top 3-5 most relevant pages
```

### Multi-Query Approach

Run parallel searches with related terms:

```
Query 1: "API integration"
Query 2: "API authentication"
Query 3: "API documentation"
```

Combine the results to build a comprehensive picture.

### Temporal Research

Search across time periods to track evolution:

```
Search 1: created_date_range 2023 → Historical context
Search 2: created_date_range 2024 → Recent developments
Search 3: created_date_range 2025 → Current state
```

## Result Processing

### Identifying Relevant Results

Look for:
- **High semantic match**: Result summary closely matches query intent
- **Recent updates**: Last-edited date is recent
- **Authoritative sources**: Created by known experts or in official locations
- **Comprehensive content**: Result summary suggests detailed information

### Prioritizing Fetches

Fetch pages in order of relevance:

1. **Primary sources**: Direct documentation, official pages
2. **Recent updates**: Newly edited content
3. **Related context**: Supporting information
4. **Historical reference**: Background and context

Don't fetch everything - be selective based on research needs.

### Handling Too Many Results

If search returns 20+ results:

1. **Add filters**: Narrow by date, creator, or teamspace
2. **Refine query**: Use more specific terms
3. **Use page scoping**: Search within relevant parent page
4. **Sample strategically**: Fetch diverse results (recent, popular, authoritative)
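
**Example** (illustrative scope values, following the earlier pattern):

```
Search 1: query="deployment" → 45 results across workspace
Search 2: query="deployment checklist", teamspace_id="platform-team" → 6 results
```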

### Handling Too Few Results

If search returns < 3 results:

1. **Broaden query**: Use more general terms
2. **Remove filters**: Search full workspace
3. **Try synonyms**: Alternative terminology
4. **Search in related areas**: Adjacent teamspaces or pages

## Search Quality

### Effective Search Queries

**Good queries** (specific, semantic):
- "Q4 product roadmap"
- "authentication implementation guide"
- "customer feedback themes"

**Weak queries** (too vague):
- "roadmap"
- "guide"
- "feedback"

**Over-specific queries** (too narrow):
- "Q4 2024 product roadmap for mobile app version 3.2 feature X"

### User Context

Always use available user context:
- Query should match their terminology
- Scope to their relevant teamspaces
- Consider their role/department
- Reference their recent pages

## Connected Sources

### Notion Integrations

Search extends beyond Notion pages to:
- Slack messages (if connected)
- Google Drive documents (if connected)
- GitHub issues/PRs (if connected)
- Jira tickets (if connected)

Be aware results may come from these sources.

### Source Attribution

When citing results from connected sources:
- Note the source type in documentation
- Use appropriate mention format
- Verify user has access to the source system
api_feature
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill api_feature from openai
View skill
# Example: API Feature Implementation

## User Request

> "Create an implementation plan for the User Profile API spec"

## Workflow Execution

### Step 1: Fetch Specification

```
Notion:notion-search
query: "User Profile API spec"
query_type: "internal"
```

Found: "User Profile API Specification" in Engineering Specs

```
Notion:notion-fetch
id: "user-profile-api-spec-page-id"
```

### Step 2: Parse Specification

**Spec Content Extracted:**

```markdown
# User Profile API Specification

## Overview
RESTful API for user profile management

## Requirements

### Functional
- FR-1: Get user profile by ID
- FR-2: Update user profile (name, bio, avatar)
- FR-3: Upload profile avatar
- FR-4: Get user's public profile (limited fields)
- FR-5: Search users by name

### Non-Functional
- NFR-1: Response time < 200ms (p95)
- NFR-2: Support 1000 concurrent users
- NFR-3: Avatar upload < 5MB
- NFR-4: GDPR compliant (data portability)

## API Endpoints

GET /api/v1/users/:id
PUT /api/v1/users/:id
POST /api/v1/users/:id/avatar
GET /api/v1/users/:id/public
GET /api/v1/users/search

## Data Model
- id (UUID)
- email (string, unique)
- name (string)
- bio (text, max 500 chars)
- avatar_url (string)
- created_at (timestamp)
- updated_at (timestamp)

## Security
- Authentication: JWT bearer token
- Authorization: Users can only update own profile
- Rate limiting: 100 req/min per user

## Acceptance Criteria
- AC-1: All endpoints return proper HTTP status codes
- AC-2: Validation errors return 400 with error details
- AC-3: Unauthorized access returns 401
- AC-4: Rate limit exceeded returns 429
- AC-5: Avatar images stored in S3
```

### Step 3: Create Implementation Plan

```
Notion:notion-create-pages

parent: { page_id: "engineering-plans-parent-id" }

pages: [{
  properties: {
    "title": "Implementation Plan: User Profile API"
  },
  content: "[Implementation plan]"
}]
```

**Implementation Plan Created:**

```markdown
# Implementation Plan: User Profile API

## Overview
Build RESTful API for user profile management with CRUD operations, avatar upload, and search functionality.

## Linked Specification
<mention-page url="...">User Profile API Specification</mention-page>

## Requirements Summary

### Functional Requirements
- ✅ Get user profile (authenticated)
- ✅ Update profile fields (name, bio)
- ✅ Upload avatar with image processing
- ✅ Public profile view (limited fields)
- ✅ User search by name

### Non-Functional Requirements
- **Performance**: < 200ms response time (p95)
- **Scalability**: 1000 concurrent users
- **Storage**: Avatar files < 5MB to S3
- **Compliance**: GDPR data portability

### Acceptance Criteria
- All endpoints with proper status codes
- Input validation with error details
- JWT authentication required
- Rate limiting enforced
- Avatars stored in S3

## Technical Approach

### Architecture
- **Framework**: Express.js (Node.js)
- **Database**: PostgreSQL
- **Storage**: AWS S3 for avatars
- **Cache**: Redis for profile data
- **Search**: PostgreSQL full-text search

### Key Design Decisions
1. **JWT Authentication**: Stateless auth, scales horizontally
2. **S3 for Avatars**: Offload storage, CDN integration ready
3. **Redis Caching**: Reduce DB load for frequently accessed profiles
4. **Rate Limiting**: Token bucket algorithm, per-user limits

## Implementation Phases

### Phase 1: Foundation (Days 1-2)
**Goal**: Set up core infrastructure

**Tasks**:
- [ ] <mention-page url="...">Setup database schema</mention-page>
- [ ] <mention-page url="...">Configure S3 bucket</mention-page>
- [ ] <mention-page url="...">Setup Redis cache</mention-page>
- [ ] <mention-page url="...">Create API scaffolding</mention-page>

**Deliverables**: Working skeleton with DB, storage, cache ready  
**Estimated effort**: 2 days

### Phase 2: Core Endpoints (Days 3-5)
**Goal**: Implement main CRUD operations

**Tasks**:
- [ ] <mention-page url="...">Implement GET user profile</mention-page>
- [ ] <mention-page url="...">Implement PUT update profile</mention-page>
- [ ] <mention-page url="...">Add input validation</mention-page>
- [ ] <mention-page url="...">Add JWT authentication middleware</mention-page>
- [ ] <mention-page url="...">Implement rate limiting</mention-page>

**Deliverables**: Working CRUD operations with auth  
**Estimated effort**: 3 days

### Phase 3: Avatar Upload (Days 6-7)
**Goal**: Avatar management with S3

**Tasks**:
- [ ] <mention-page url="...">Implement avatar upload endpoint</mention-page>
- [ ] <mention-page url="...">Add image validation (size, format)</mention-page>
- [ ] <mention-page url="...">Process and resize images</mention-page>
- [ ] <mention-page url="...">Upload to S3 with signed URLs</mention-page>

**Deliverables**: Avatar upload/update functionality  
**Estimated effort**: 2 days

### Phase 4: Search & Public Profile (Days 8-9)
**Goal**: Complete remaining features

**Tasks**:
- [ ] <mention-page url="...">Implement user search</mention-page>
- [ ] <mention-page url="...">Implement public profile endpoint</mention-page>
- [ ] <mention-page url="...">Add search indexing</mention-page>
- [ ] <mention-page url="...">Optimize search queries</mention-page>

**Deliverables**: Search and public profiles working  
**Estimated effort**: 2 days

### Phase 5: Testing & Optimization (Days 10-12)
**Goal**: Production-ready quality

**Tasks**:
- [ ] <mention-page url="...">Write unit tests</mention-page>
- [ ] <mention-page url="...">Write integration tests</mention-page>
- [ ] <mention-page url="...">Performance testing</mention-page>
- [ ] <mention-page url="...">Security audit</mention-page>
- [ ] <mention-page url="...">API documentation</mention-page>

**Deliverables**: Tested, documented, production-ready API  
**Estimated effort**: 3 days

## Dependencies

### External Dependencies
- AWS S3 bucket created ✅
- Redis instance available ✅
- PostgreSQL database provisioned ✅

### Internal Dependencies
- JWT authentication service (exists)
- User database table (exists)
- Logging infrastructure (exists)

### Blockers
None currently

## Risks & Mitigation

### Risk 1: Image Processing Performance
- **Probability**: Medium
- **Impact**: Medium
- **Mitigation**: Use background job queue for processing, return signed upload URL immediately

### Risk 2: S3 Upload Failures
- **Probability**: Low
- **Impact**: Medium
- **Mitigation**: Implement retry logic with exponential backoff, fallback to local storage temporarily

### Risk 3: Rate Limiting Complexity
- **Probability**: Low
- **Impact**: Low
- **Mitigation**: Use proven library (express-rate-limit with Redis store)

### Risk 4: Search Performance
- **Probability**: Medium
- **Impact**: Medium
- **Mitigation**: Add database indexes, consider Elasticsearch if needed later

## Timeline

| Milestone | Target Date | Status |
|-----------|-------------|--------|
| Phase 1 Complete | Oct 16 | ⏳ Planned |
| Phase 2 Complete | Oct 19 | ⏳ Planned |
| Phase 3 Complete | Oct 21 | ⏳ Planned |
| Phase 4 Complete | Oct 23 | ⏳ Planned |
| Phase 5 Complete | Oct 26 | ⏳ Planned |
| **Production Deploy** | **Oct 28** | ⏳ Planned |

**Total Duration**: 12 working days (~2.5 weeks)

## Success Criteria

### Technical Success
- [ ] All 5 endpoints implemented and working
- [ ] Response time < 200ms (p95) verified in load testing
- [ ] Handles 1000 concurrent users
- [ ] All acceptance criteria met
- [ ] Test coverage > 80%
- [ ] Security scan passed
- [ ] API documentation complete

### Business Success
- [ ] User profile updates functional
- [ ] Avatar uploads working reliably
- [ ] Search returns relevant results in < 500ms
- [ ] Zero critical bugs in first week

## Resources

### Documentation
- <mention-page url="...">User Profile API Specification</mention-page> (original spec)
- <mention-page url="...">Authentication Service Docs</mention-page>
- <mention-page url="...">AWS S3 Setup Guide</mention-page>

### Related Work
- <mention-page url="...">User Authentication API</mention-page> (similar pattern)
- <mention-page url="...">File Upload Service</mention-page> (avatar upload reference)

### External References
- Express.js best practices
- AWS S3 SDK documentation
- PostgreSQL full-text search guide

## Progress Tracking

### Phase Status
- Phase 1: ⏳ Not Started
- Phase 2: ⏳ Not Started
- Phase 3: ⏳ Not Started
- Phase 4: ⏳ Not Started
- Phase 5: ⏳ Not Started

**Overall Progress**: 0% complete

### Latest Update
*Implementation plan created on October 14, 2025*
```
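
The plan's rate-limiting decision (token bucket, 100 req/min per user, returning 429 per AC-4) could be prototyped with a minimal in-memory sketch like the one below. The plan itself calls for express-rate-limit with a Redis store, so this is illustrative only and all names are assumptions:

```javascript
// Minimal in-memory token bucket (illustrative sketch only; the plan
// specifies express-rate-limit with a Redis store for production).
const CAPACITY = 100;                  // burst size: 100 requests
const REFILL_PER_MS = 100 / 60_000;    // refill rate: 100 tokens per minute

const buckets = new Map();             // userId -> { tokens, lastRefill }

function allowRequest(userId, now = Date.now()) {
  const b = buckets.get(userId) ?? { tokens: CAPACITY, lastRefill: now };
  // Refill proportionally to elapsed time, capped at capacity.
  b.tokens = Math.min(CAPACITY, b.tokens + (now - b.lastRefill) * REFILL_PER_MS);
  b.lastRefill = now;
  const allowed = b.tokens >= 1;
  if (allowed) b.tokens -= 1;
  buckets.set(userId, b);
  return allowed;
}

// Express-style middleware sketch; assumes req.user was set by the JWT middleware.
function rateLimiter(req, res, next) {
  if (!allowRequest(req.user.id)) {
    return res.status(429).json({ error: 'Rate limit exceeded' }); // AC-4
  }
  next();
}
```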

### Step 4: Find Task Database

```
Notion:notion-search
query: "Tasks database"
query_type: "internal"
```

Found: "Engineering Tasks" database

### Step 5: Fetch Task Database Schema

```
Notion:notion-fetch
id: "tasks-database-id"
```

**Schema retrieved:**
- Data source: `collection://tasks-db-uuid`
- Properties: Name (title), Status (select), Priority (select), Related Tasks (relation), Story Points (number), Tags (multi_select)

### Step 6: Create Implementation Tasks

Now create tasks for Phase 1:

**Task 1: Setup database schema**

```
Notion:notion-create-pages

parent: { data_source_id: "collection://tasks-db-uuid" }

pages: [{
  properties: {
    "Name": "Setup database schema for User Profile API",
    "Status": "To Do",
    "Priority": "High",
    "Related Tasks": ["impl-plan-page-id", "spec-page-id"],
    "Story Points": 3,
    "Tags": "backend, database, api"
  },
  content: "## Context\nImplementation task for <mention-page url=\"...\">User Profile API Specification</mention-page>\n\nPart of <mention-page url=\"...\">Implementation Plan: User Profile API</mention-page> - Phase 1\n\n## Objective\nCreate database schema for user profile storage\n\n## Requirements\nBased on spec data model:\n- id (UUID, primary key)\n- email (string, unique index)\n- name (string, not null)\n- bio (text, max 500 chars)\n- avatar_url (string, nullable)\n- created_at (timestamp)\n- updated_at (timestamp)\n\n## Acceptance Criteria\n- [ ] Migration file created\n- [ ] Schema includes all required fields\n- [ ] Indexes on email (unique) and name (search)\n- [ ] Constraints validated (bio length, email format)\n- [ ] Migration tested on dev database\n- [ ] Rollback migration created\n\n## Technical Approach\n```sql\nCREATE TABLE user_profiles (\n  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),\n  email VARCHAR(255) UNIQUE NOT NULL,\n  name VARCHAR(255) NOT NULL,\n  bio TEXT CHECK (length(bio) <= 500),\n  avatar_url TEXT,\n  created_at TIMESTAMP DEFAULT NOW(),\n  updated_at TIMESTAMP DEFAULT NOW()\n);\n\nCREATE INDEX idx_user_profiles_email ON user_profiles(email);\nCREATE INDEX idx_user_profiles_name ON user_profiles USING gin(to_tsvector('english', name));\n```\n\n## Dependencies\n- Blocked By: None\n- Blocks: All Phase 2 tasks\n\n## Estimated Effort\n3 story points (half day)\n"
}]
```

*[Create similar tasks for all phases - 20 tasks total]*

### Step 7: Link Plan Back to Spec

```
Notion:notion-update-page
page_id: "user-profile-api-spec-page-id"
command: "insert_content_after"
selection_with_ellipsis: "## Acceptance Criteria..."
new_str: "

---

## Implementation

**Implementation Plan**: <mention-page url=\"...\">Implementation Plan: User Profile API</mention-page>

**Implementation Tasks**: See plan for full task breakdown (20 tasks across 5 phases)

**Status**: Planning complete, ready to start implementation
"
```

## Summary Provided to User

```markdown
## Implementation Plan Created ✅

### Plan Overview
- **Feature**: User Profile API
- **Duration**: 12 days (~2.5 weeks)
- **Phases**: 5 (Foundation → Core → Avatar → Search → Testing)
- **Tasks**: 20 tasks created
- **Target Launch**: October 28, 2025

### Implementation Phases

**Phase 1: Foundation** (2 days)
- Database schema
- S3 and Redis setup
- API scaffolding

**Phase 2: Core Endpoints** (3 days)
- GET/PUT user profile
- Authentication & validation
- Rate limiting

**Phase 3: Avatar Upload** (2 days)
- Image upload & validation
- S3 integration
- Image processing

**Phase 4: Search & Public Profile** (2 days)
- User search
- Public profile endpoint

**Phase 5: Testing & Optimization** (3 days)
- Unit & integration tests
- Performance testing
- Documentation

### Key Deliverables
- 5 REST API endpoints
- Avatar upload to S3
- User search functionality
- Comprehensive tests
- API documentation

### Links Created
✅ Plan page: <mention-page url="...">Implementation Plan: User Profile API</mention-page>
✅ Spec updated with plan link
✅ 20 tasks created in task database
✅ All tasks linked to plan and spec

### Next Steps
1. Review and approve plan
2. Assign tasks to team members
3. Begin Phase 1 (Foundation)
4. Daily standups for progress tracking

**Ready to start implementation! 🚀**
```

## Key Features Demonstrated

### Spec Parsing
- Extracted requirements (functional & non-functional)
- Identified API endpoints
- Noted data model
- Captured acceptance criteria
- Understood security requirements

### Implementation Planning
- Broke into logical phases
- Sequenced work appropriately (foundation → features → testing)
- Identified dependencies
- Estimated effort per phase
- Created realistic timeline

### Task Creation
- Generated 20 specific tasks
- Each task has context, acceptance criteria, technical approach
- Tasks link to both spec and plan
- Proper dependencies noted

### Bidirectional Linking
- Plan links to spec
- Spec updated to link to plan
- Tasks link to both
- Easy navigation between all artifacts

Perfect for: Feature implementation, API development, technical projects
brainstorming_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill brainstorming_template from openai
View skill
# Brainstorming Meeting Template

Use this template for creative ideation and brainstorming sessions.

```markdown
# [Topic] Brainstorming - [Date]

## Meeting Details
**Date**: [Date]
**Facilitator**: [Name]
**Note-taker**: [Name]
**Attendees**: [List]

## Objective

[Clear statement of what we're brainstorming]

**Success looks like**: [How we'll know brainstorming was successful]

## Background & Context

[Context from research - 2-3 paragraphs]

**Related Pages**:
- <mention-page url="...">Context Page 1</mention-page>
- <mention-page url="...">Context Page 2</mention-page>

## Constraints

- [Constraint]
- [Constraint]
- [Constraint]

## Seed Ideas

[Starting ideas from research to spark discussion]:

1. **[Idea]**: [Brief description]
2. **[Idea]**: [Brief description]

## Ground Rules

- No criticism during ideation
- Build on others' ideas
- Quantity over quality initially
- Wild ideas welcome

## Brainstorming Notes

### Ideas Generated

[To be filled during meeting]

1. [Idea with brief description]
2. [Idea with brief description]

### Themes/Patterns

[Groupings that emerge]

## Evaluation

[If time permits, evaluate top ideas]

### Top Ideas

| Idea | Feasibility | Impact | Effort | Score |
|------|-------------|---------|--------|-------|
| [Idea] | [H/M/L] | [H/M/L] | [H/M/L] | [#] |

## Next Steps

- [ ] [Action to explore idea]
- [ ] [Action to prototype]
- [ ] [Action to research]

## Follow-up

**Next meeting**: [Date to reconvene]
```
citations
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill citations from openai
View skill
# Citation Styles

## Basic Page Citation

Always cite sources using Notion page mentions:

```markdown
<mention-page url="https://notion.so/workspace/Page-Title-uuid">Page Title</mention-page>
```

The URL must be provided. The title is optional (though it improves readability); when omitted, use the self-closing form:

```markdown
<mention-page url="https://notion.so/workspace/Page-Title-uuid"/>
```

## Inline Citations

Cite immediately after referenced information:

```markdown
The Q4 revenue increased by 23% quarter-over-quarter (<mention-page url="...">Q4 Financial Report</mention-page>).
```

## Multiple Sources

When information comes from multiple sources:

```markdown
Customer satisfaction has improved across all metrics (<mention-page url="...">Q3 Survey Results</mention-page>, <mention-page url="...">Support Analysis</mention-page>).
```

## Section-Level Citations

For longer sections derived from one source:

```markdown
### Engineering Priorities

According to the <mention-page url="...">Engineering Roadmap 2025</mention-page>:

- Focus on API scalability
- Improve developer experience
- Migrate to microservices architecture
```

## Sources Section

Always include a "Sources" section at document end:

```markdown
## Sources

- <mention-page url="...">Strategic Plan 2025</mention-page>
- <mention-page url="...">Market Analysis Report</mention-page>
- <mention-page url="...">Competitor Research: Q3</mention-page>
- <mention-page url="...">Customer Interview Notes</mention-page>
```

Group by category for long lists:

```markdown
## Sources

### Primary Sources
- <mention-page url="...">Official Roadmap</mention-page>
- <mention-page url="...">Strategy Document</mention-page>

### Supporting Research
- <mention-page url="...">Market Trends</mention-page>
- <mention-page url="...">Customer Feedback</mention-page>

### Background Context
- <mention-page url="...">Historical Analysis</mention-page>
```

## Quoting Content

When quoting directly from source:

```markdown
The product team noted: "We need to prioritize mobile experience improvements" (<mention-page url="...">Product Meeting Notes</mention-page>).
```

For block quotes:

```markdown
> We need to prioritize mobile experience improvements to meet our Q4 goals. This includes performance optimization and UI refresh.
>
> — <mention-page url="...">Product Meeting Notes - Oct 2025</mention-page>
```

## Data Citations

When presenting data, cite the source:

```markdown
| Metric | Q3 | Q4 | Change |
|--------|----|----|--------|
| Revenue | $2.3M | $2.8M | +21.7% |
| Users | 12.4K | 15.1K | +21.8% |

Source: <mention-page url="...">Financial Dashboard</mention-page>
```

## Database Citations

When referencing database content:

```markdown
Based on analysis of the <mention-database url="...">Projects Database</mention-database>, 67% of projects are on track.
```

## User Citations

When attributing information to specific people:

```markdown
<mention-user url="...">Sarah Chen</mention-user> noted in <mention-page url="...">Architecture Review</mention-page> that the microservices migration is ahead of schedule.
```

## Citation Frequency

**Over-citing** (every sentence):
```markdown
The revenue increased (<mention-page url="...">Report</mention-page>). 
Costs decreased (<mention-page url="...">Report</mention-page>). 
Margin improved (<mention-page url="...">Report</mention-page>).
```

**Under-citing** (no attribution):
```markdown
The revenue increased, costs decreased, and margin improved.
```

**Right balance** (grouped citation):
```markdown
The revenue increased, costs decreased, and margin improved (<mention-page url="...">Q4 Financial Report</mention-page>).
```

## Outdated Information

Note when source information might be outdated:

```markdown
The original API design (<mention-page url="...">API Spec v1</mention-page>, last updated January 2024) has been superseded by the new architecture in <mention-page url="...">API Spec v2</mention-page>.
```

## Cross-References

Link to related research documents:

```markdown
## Related Research

This research builds on previous findings:
- <mention-page url="...">Market Analysis - Q2 2025</mention-page>
- <mention-page url="...">Competitor Landscape Review</mention-page>

For implementation details, see:
- <mention-page url="...">Technical Implementation Guide</mention-page>
```

## Citation Validation

Before finalizing research:

✓ Every key claim has a source citation
✓ All page mentions have valid URLs
✓ Sources section includes all cited pages
✓ Outdated sources are noted as such
✓ Direct quotes are clearly marked
✓ Data sources are attributed

## Citation Style Consistency

Choose one citation style and use throughout:

**Inline style** (lightweight):
```markdown
Revenue grew 23% (Financial Report). Customer count increased 18% (Metrics Dashboard).
```

**Formal style** (full mentions):
```markdown
Revenue grew 23% (<mention-page url="...">Q4 Financial Report</mention-page>). Customer count increased 18% (<mention-page url="...">Metrics Dashboard</mention-page>).
```

**Recommendation**: Use the formal style for most research documentation, as it provides clickable navigation.
cla
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill cla from openai
View skill
# Individual Contributor License Agreement (v1.0, OpenAI)

_Based on the Apache Software Foundation Individual CLA v 2.2._

By commenting **“I have read the CLA Document and I hereby sign the CLA”**
on a Pull Request, **you (“Contributor”) agree to the following terms** for any
past and future “Contributions” submitted to the **OpenAI Codex CLI project
(the “Project”)**.

---

## 1. Definitions

- **“Contribution”** – any original work of authorship submitted to the Project
  (code, documentation, designs, etc.).
- **“You” / “Your”** – the individual (or legal entity) posting the acceptance
  comment.

## 2. Copyright License

You grant **OpenAI, Inc.** and all recipients of software distributed by the
Project a perpetual, worldwide, non‑exclusive, royalty‑free, irrevocable
license to reproduce, prepare derivative works of, publicly display, publicly
perform, sublicense, and distribute Your Contributions and derivative works.

## 3. Patent License

You grant **OpenAI, Inc.** and all recipients of the Project a perpetual,
worldwide, non‑exclusive, royalty‑free, irrevocable (except as below) patent
license to make, have made, use, sell, offer to sell, import, and otherwise
transfer Your Contributions alone or in combination with the Project.

If any entity brings patent litigation alleging that the Project or a
Contribution infringes a patent, the patent licenses granted by You to that
entity under this CLA terminate.

## 4. Representations

1. You are legally entitled to grant the licenses above.
2. Each Contribution is either Your original creation or You have authority to
   submit it under this CLA.
3. Your Contributions are provided **“AS IS”** without warranties of any kind.
4. You will notify the Project if any statement above becomes inaccurate.

## 5. Miscellany

This Agreement is governed by the laws of the **State of California**, USA,
excluding its conflict‑of‑laws rules. If any provision is held unenforceable,
the remaining provisions remain in force.
comparison_format
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill comparison_format from openai
View skill
# Comparison Format

**When to use**:
- Evaluating multiple options
- Tool/vendor selection
- Approach comparison
- Decision support

## Characteristics

**Length**: 800-1200 words typically

**Structure**:
- Overview of what's being compared
- Comparison matrix table
- Detailed analysis per option (pros/cons)
- Clear recommendation with rationale
- Sources

## Template

See [comparison-template.md](comparison-template.md) for the full template.

## Best For

- Decision support with multiple options
- Tool or vendor selection
- Comparing different technical approaches
- Evaluating trade-offs between alternatives

## Example Use Cases

- "Compare the three database options discussed in our tech docs"
- "What are the pros and cons of each deployment approach?"
- "Compare the vendor proposals"
- "Evaluate the different authentication methods we've documented"
comparison_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill comparison_template from openai
View skill
# Comparison Template

Use when researching multiple options or alternatives. See [comparison-format.md](comparison-format.md) for when to use this format.

```markdown
# [Topic] Comparison

## Overview
[Brief introduction to what's being compared and why]

## Comparison Matrix

| Criteria | Option A | Option B | Option C |
|----------|----------|----------|----------|
| [Criterion 1] | [Rating/Details] | [Rating/Details] | [Rating/Details] |
| [Criterion 2] | [Rating/Details] | [Rating/Details] | [Rating/Details] |

## Detailed Analysis

### Option A
**Pros**:
- [Advantage]
- [Advantage]

**Cons**:
- [Disadvantage]
- [Disadvantage]

**Best for**: [Use case]

**Source**: <mention-page url="...">Source Page</mention-page>

[Repeat for each option]

## Recommendation

**Selected option**: [Choice]

**Rationale**: [Why this option is best given the context]

## Sources
[List all consulted pages]
```
competitor_analysis
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill competitor_analysis from openai
View skill
# Example: Competitor Analysis

## User Request

> "Research competitor pricing models and create a comparison document"

## Workflow Execution

### Step 1: Search for Competitor Information

```
Notion:notion-search
query: "competitor pricing"
query_type: "internal"
filters: {
  created_date_range: {
    start_date: "2024-01-01"
  }
}
```

**Results Found:**
- "Competitor Analysis - Stripe" (Product)
- "Market Research Q3 2024" (Strategy)
- "Pricing Strategy Discussion" (Product)
- "Competitor Feature Matrix" (Product)

### Step 2: Fetch and Analyze

```
Notion:notion-fetch for each page
```

**Extracted Information:**

**Stripe**:
- Pay-as-you-go: 2.9% + $0.30 per transaction
- No monthly fees
- Volume discounts available

**Square**:
- Pay-as-you-go: 2.6% + $0.10 per transaction
- Monthly plan: $29/month with lower rates
- Hardware costs separate

**PayPal**:
- Standard: 3.49% + $0.49 per transaction
- Advanced: $30/month + 2.59% + $0.49
- Multi-currency fees

**Adyen**:
- Interchange++ pricing model
- Custom enterprise pricing
- Setup and monthly fees apply

### Step 3: Create Comparison Document

```
Notion:notion-create-pages

pages: [{
  properties: {
    "title": "Competitor Pricing Analysis - Payment Processors"
  },
  content: "[Comparison document using template]"
}]
```

## Output Document

```markdown
# Competitor Pricing Analysis - Payment Processors

**Date**: October 14, 2025  
**Research Team**: Product Strategy

## Executive Summary

We analyzed four major payment processor competitors: Stripe, Square, PayPal, and Adyen. Pricing models range from simple pay-as-you-go (Stripe) to complex interchange-plus (Adyen). Key insight: all competitors offer volume discounts for high-transaction merchants, with breakpoints typically between $80K and $250K/month in processing volume.

## Comparison Matrix

| Feature | Stripe | Square | PayPal | Adyen |
|---------|--------|--------|--------|-------|
| **Base Rate** | 2.9% + $0.30 | 2.6% + $0.10 | 3.49% + $0.49 | Interchange++ |
| **Monthly Fee** | $0 | $0-29 | $0-30 | Custom |
| **Volume Discounts** | Yes, >$80K | Yes, >$250K | Yes, >$100K | Yes, custom |
| **Setup Fee** | $0 | $0 | $0 | $1,000-5,000 |
| **Multi-currency** | 1% extra | 3% extra | 3-4% extra | Included |
| **Chargeback Fee** | $15 | $15-25 | $20 | Custom |
| **Target Market** | Startups-Enterprise | Small-Medium | Small-Medium | Enterprise |

## Detailed Analysis

### Stripe

**Pricing Structure**:
- **Standard**: 2.9% + $0.30 per successful card charge
- **Volume discounts**: Available for businesses processing >$80,000/month
- **International cards**: +1% fee
- **Currency conversion**: 1% above market rate

**Strengths**:
- Simple, transparent pricing
- No setup fees or monthly minimums
- Excellent developer experience
- Quick onboarding

**Weaknesses**:
- Higher per-transaction fee for high volume
- Volume discounts less aggressive than Adyen

**Best for**: Startups and growth-stage companies needing quick integration

**Source**: <mention-page url="...">Competitor Analysis - Stripe</mention-page>

### Square

**Pricing Structure**:
- **Pay-as-you-go**: 2.6% + $0.10 per tap, dip, or swipe
- **Keyed-in**: 3.5% + $0.15
- **Plus plan**: $29/month for lower rates (2.5% + $0.10)
- **Premium plan**: Custom pricing

**Strengths**:
- Lowest per-transaction fee for in-person
- All-in-one hardware + software
- No long-term contracts

**Weaknesses**:
- Higher rates for online/keyed transactions
- Hardware costs ($49-$299)
- Less suitable for online-only businesses

**Best for**: Brick-and-mortar retail and restaurants

**Source**: <mention-page url="...">Market Research Q3 2024</mention-page>

### PayPal

**Pricing Structure**:
- **Standard**: 3.49% + $0.49 per transaction
- **Advanced**: $30/month + 2.59% + $0.49
- **Payments Pro**: Additional $30/month for direct credit card processing

**Strengths**:
- Huge customer base (PayPal checkout)
- Buyer protection increases trust
- International reach (200+ countries)

**Weaknesses**:
- Highest per-transaction fees
- Complex fee structure
- Account holds and reserves common

**Best for**: Businesses where PayPal brand trust matters (e-commerce, marketplaces)

**Source**: <mention-page url="...">Pricing Strategy Discussion</mention-page>

### Adyen

**Pricing Structure**:
- **Interchange++**: Actual interchange + scheme fees + fixed markup
- **Setup fee**: $1,000-5,000 (negotiable)
- **Monthly minimum**: Typically $10,000+ processing volume
- **Per-transaction**: Interchange + 0.6% + $0.12 (example)

**Strengths**:
- Most transparent cost structure at scale
- Lowest effective rate for high volume
- True multi-currency (100+ currencies)
- Direct connections to schemes

**Weaknesses**:
- Complex pricing requires analysis
- High minimums ($10K+/month)
- Longer integration time
- Not suitable for small businesses

**Best for**: Enterprise with $1M+/month processing volume

**Source**: <mention-page url="...">Competitor Feature Matrix</mention-page>

## Pricing Trends & Insights

### Volume-Based Discounting
All competitors offer discounts at scale:
- **Entry point**: $80K-$250K/month processing
- **Typical discount**: 10-30 basis points reduction
- **Negotiation leverage**: Begins at $500K/month+

### Hidden Costs to Consider

| Cost Type | Stripe | Square | PayPal | Adyen |
|-----------|--------|--------|--------|-------|
| Chargeback | $15 | $15-25 | $20 | $15-25 |
| Account verification | $0 | $0 | $0 | Varies |
| PCI compliance | $0 | $0 | $0 | $0 |
| Currency conversion | 1% | 3% | 3-4% | 0% |
| Refund fees | Returned | Returned | Not returned | Varies |

### Market Positioning

```
High Volume / Enterprise
    ↑
    |                    Adyen
    |                      
    |         Stripe             
    |    
    |  Square    PayPal
    |
    └──────────────────→
      Small / Simple        Complex / International
```

## Strategic Implications

### For Startups (<$100K/month)
**Recommended**: Stripe
- Lowest friction to start
- No upfront costs
- Great documentation
- Acceptable rates at this scale

### For Growing Companies ($100K-$1M/month)
**Recommended**: Stripe or Square
- Negotiate volume discounts
- Evaluate interchange++ if international
- Consider Square if in-person dominant

### For Enterprises (>$1M/month)
**Recommended**: Adyen or negotiated Stripe rates
- Interchange++ models save significantly
- Direct scheme connections
- Multi-region capabilities matter
- ROI justifies the integration complexity

## Recommendations

1. **Immediate**: Benchmark our current 2.8% + $0.25 against Stripe's standard
2. **Short-term**: Request volume discount quote from Stripe at our current $150K/month
3. **Long-term**: Evaluate Adyen when we cross $1M/month threshold

## Next Steps

- [ ] Request detailed pricing proposal from Stripe for volume discounts
- [ ] Create pricing calculator comparing all 4 at different volume levels
- [ ] Interview customers about payment method preferences
- [ ] Analyze our transaction mix (domestic vs international, card types)

## Sources

### Primary Research
- <mention-page url="...">Competitor Analysis - Stripe</mention-page>
- <mention-page url="...">Market Research Q3 2024</mention-page>
- <mention-page url="...">Pricing Strategy Discussion</mention-page>
- <mention-page url="...">Competitor Feature Matrix</mention-page>

### External References
- Stripe.com pricing page (Oct 2025)
- Square pricing documentation
- PayPal merchant fees
- Adyen pricing transparency report
```
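
The pricing calculator suggested in Next Steps could start as a sketch like this, using only the base rates from the matrix above (volume discounts are ignored, and Adyen's interchange++ model can't be reduced to a flat rate, so the output is directional at best):

```javascript
// Rough monthly processing cost from base published rates only.
const processors = {
  current: { pct: 0.028,  fixed: 0.25 },  // our current blended rate
  stripe:  { pct: 0.029,  fixed: 0.30 },
  square:  { pct: 0.026,  fixed: 0.10 },
  paypal:  { pct: 0.0349, fixed: 0.49 },
  // Adyen omitted: interchange++ varies by card mix and scheme fees.
};

function monthlyCost(name, monthlyVolume, avgTransaction) {
  const { pct, fixed } = processors[name];
  const txCount = monthlyVolume / avgTransaction;
  return monthlyVolume * pct + txCount * fixed;
}

// Example: our current $150K/month volume at a $75 average transaction.
for (const name of Object.keys(processors)) {
  console.log(name, `$${monthlyCost(name, 150_000, 75).toFixed(0)}/month`);
}
```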

## Key Success Factors

1. **Structured comparison**: Matrix format for quick scanning
2. **Multiple dimensions**: Price, features, target market
3. **Strategic recommendations**: Not just data, but implications
4. **Visual elements**: Table and positioning diagram
5. **Actionable next steps**: Clear recommendations
6. **Comprehensive sources**: Internal research + external validation

## Workflow Pattern Demonstrated

- **Date-filtered search** (recent information only)
- **Multiple competitor synthesis** (4 different companies)
- **Comparison template** (matrix + detailed analysis)
- **Strategic layer** (implications and recommendations)
- **Action-oriented** (next steps included)
comprehensive_report_format
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill comprehensive_report_format from openai
View skill
# Comprehensive Report Format

**When to use**: 
- Formal documentation requirements
- Strategic decision support
- Complex topics requiring extensive analysis
- Multiple stakeholders need alignment

## Characteristics

**Length**: 1500+ words

**Structure**:
- Executive summary
- Background & context
- Methodology
- Detailed findings with subsections
- Data & evidence section
- Implications (short and long-term)
- Prioritized recommendations
- Appendix

## Template

See [comprehensive-report-template.md](comprehensive-report-template.md) for the full template.

## Best For

- Deep analysis and strategic decisions
- Formal documentation requirements
- Complex topics with multiple facets
- When stakeholders need extensive context
- Board presentations or executive briefings

## Example Use Cases

- "Create a comprehensive analysis of our market position"
- "Document the full technical investigation of the database migration"
- "Prepare an in-depth report on vendor options for executive review"
- "Analyze the pros and cons of different architectural approaches"
comprehensive_report_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill comprehensive_report_template from openai
View skill
# Comprehensive Report Template

Use for in-depth research requiring extensive analysis. See [comprehensive-report-format.md](comprehensive-report-format.md) for when to use this format.

```markdown
# [Report Title]

## Executive Summary
[One paragraph summarizing the entire report]

## Background & Context
[Why this research was conducted, what questions it addresses]

## Methodology
- Sources consulted: [number] Notion pages across [teamspaces]
- Time period: [if relevant]
- Scope: [what was included/excluded]

## Key Findings

### [Major Theme 1]
**Summary**: [One sentence]

**Details**:
- [Supporting point with evidence]
- [Supporting point with evidence]
- [Supporting point with evidence]

**Sources**: [Page mentions]

### [Major Theme 2]
[Repeat structure]

## Data & Evidence

[Tables, quotes, specific data points]

## Implications

### Short-term
[Immediate implications]

### Long-term
[Strategic implications]

## Recommendations

### Priority 1: [High priority action]
- **What**: [Specific action]
- **Why**: [Rationale]
- **How**: [Implementation approach]

### Priority 2: [Medium priority action]
[Repeat structure]

## Appendix

### Additional Resources
- [Related pages]

### Open Questions
- [Unanswered questions for future research]
```
contributing
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill contributing from openai
View skill
## Contributing

### Community values

- **Be kind and inclusive.** Treat others with respect; we follow the [Contributor Covenant](https://www.contributor-covenant.org/).
- **Assume good intent.** Written communication is hard - err on the side of generosity.
- **Teach & learn.** If you spot something confusing, open an issue or PR with improvements.

### Contributor license agreement (CLA)

All contributors **must** accept the CLA. The process is lightweight:

1. Open your pull request.
2. Paste the following comment (or reply `recheck` if you've signed before):

   ```text
   I have read the CLA Document and I hereby sign the CLA
   ```

3. The CLA-Assistant bot records your signature in the repo and marks the status check as passed.

No special Git commands, email attachments, or commit footers required.

### Security & responsible AI

Have you discovered a vulnerability or have concerns about model output? Please e-mail **security@openai.com** and we will respond promptly.
conversation_to_faq
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill conversation_to_faq from openai
View skill
# Example: Conversation to FAQ

## User Request

> "Save this conversation about deployment troubleshooting to the FAQ"

**Context**: User just had a conversation explaining how to troubleshoot common deployment errors.

## Conversation Summary

The conversation covered:
1. **Question**: "Why does deployment fail with 'port already in use' error?"
2. **Answer**: Process from previous deployment still running, need to kill it
3. **Question**: "How do I find and kill the process?"
4. **Commands shared**: `lsof -ti:3000 | xargs kill -9` or use `pm2 restart app`
5. **Question**: "What about 'cannot connect to database' errors?"
6. **Answer**: Check database credentials, verify database is running, check network connectivity
7. **Best practice**: Always check logs first (`pm2 logs` or `docker logs`)

## Workflow Execution

### Step 1: Identify Content Type

This is Q&A content → **FAQ Entry** format is appropriate

### Step 2: Structure FAQ Entries

Extract distinct Q&A pairs from conversation:

**FAQ 1**: Port already in use error
**FAQ 2**: Cannot connect to database error
**FAQ 3**: General troubleshooting approach

### Step 3: Determine Destination

```
Notion:notion-search
query: "FAQ deployment"
query_type: "internal"
```

Found: "Deployment FAQ" database in Engineering Docs

```
Notion:notion-fetch
id: "deployment-faq-database-id"
```

**Schema**:
- Question (title)
- Category (select): Deployment, Configuration, Troubleshooting, etc.
- Tags (multi_select)
- Last Reviewed (date)

### Step 4: Create FAQ Entries

**FAQ Entry 1: Port Already in Use**

```
Notion:notion-create-pages

parent: { data_source_id: "collection://faq-db-uuid" }

pages: [{
  properties: {
    "Question": "Why does deployment fail with 'port already in use' error?",
    "Category": "Troubleshooting",
    "Tags": "deployment, errors, ports",
    "date:Last Reviewed:start": "2025-10-14",
    "date:Last Reviewed:is_datetime": 0
  },
  content: "## Short Answer\n\nThe deployment port (usually 3000) is still occupied by a process from a previous deployment. You need to kill the existing process before deploying again.\n\n## Detailed Explanation\n\nWhen you deploy the application, it tries to bind to a specific port (e.g., port 3000). If a previous deployment didn't shut down cleanly, that process may still be running and holding the port. This prevents the new deployment from starting.\n\n**Common causes**:\n- Previous deployment crashed without cleanup\n- Manual `node` process started and forgotten\n- PM2 or other process manager didn't restart properly\n- Multiple deployments attempted simultaneously\n\n## Solution\n\n### Option 1: Kill the process using the port\n\n```bash\n# Find and kill process on port 3000\nlsof -ti:3000 | xargs kill -9\n```\n\nReplace `3000` with your application's port number.\n\n### Option 2: If using PM2\n\n```bash\n# Restart the application\npm2 restart app\n\n# Or delete and restart\npm2 delete app\npm2 start app.js --name app\n```\n\n### Option 3: Check all node processes\n\n```bash\n# List all node processes\nps aux | grep node\n\n# Kill specific process by PID\nkill -9 <PID>\n```\n\n## Prevention\n\n1. **Use process managers**: PM2, systemd, or Docker handle cleanup automatically\n2. **Graceful shutdown**: Implement proper shutdown handlers in your app\n3. **Health checks**: Monitor if previous deployment shut down before starting new one\n\n```javascript\n// Graceful shutdown example\nprocess.on('SIGTERM', () => {\n  server.close(() => {\n    console.log('Server closed');\n    process.exit(0);\n  });\n});\n```\n\n## Verification\n\nAfter killing the process, verify the port is free:\n\n```bash\nlsof -ti:3000\n# Should return nothing if port is free\n```\n\n## Related Questions\n- <mention-page url=\"...\">How do I check what's using a port?</mention-page>\n- <mention-page url=\"...\">How do I configure the application port?</mention-page>\n- <mention-page url=\"...\">PM2 deployment best practices</mention-page>\n\n## Last Updated\nOctober 14, 2025"
}]
```

**FAQ Entry 2: Database Connection Errors**

```
Notion:notion-create-pages

parent: { data_source_id: "collection://faq-db-uuid" }

pages: [{
  properties: {
    "Question": "Why do I get 'cannot connect to database' errors during deployment?",
    "Category": "Troubleshooting",
    "Tags": "deployment, database, errors",
    "date:Last Reviewed:start": "2025-10-14",
    "date:Last Reviewed:is_datetime": 0
  },
  content: "## Short Answer\n\nDatabase connection errors usually mean either the database isn't running, credentials are incorrect, or there's a network connectivity issue. Check database status, verify credentials, and test connectivity.\n\n## Detailed Explanation\n\nThe application can't establish a connection to the database during startup. This prevents the application from initializing properly.\n\n**Common causes**:\n- Database service isn't running\n- Incorrect connection credentials\n- Network connectivity issues (firewall, security groups)\n- Database host/port misconfigured\n- Database is at connection limit\n- SSL/TLS configuration mismatch\n\n## Troubleshooting Steps\n\n### Step 1: Check database status\n\n```bash\n# For local PostgreSQL\npg_isready -h localhost -p 5432\n\n# For Docker\ndocker ps | grep postgres\n\n# For MongoDB\nmongosh --eval \"db.adminCommand('ping')\"\n```\n\n### Step 2: Verify credentials\n\nCheck your `.env` or configuration file:\n\n```bash\n# Common environment variables\nDB_HOST=localhost\nDB_PORT=5432\nDB_NAME=myapp_production\nDB_USER=myapp_user\nDB_PASSWORD=***********\n```\n\nTest connection manually:\n\n```bash\n# PostgreSQL\npsql -h $DB_HOST -p $DB_PORT -U $DB_USER -d $DB_NAME\n\n# MongoDB\nmongosh \"mongodb://$DB_USER:$DB_PASSWORD@$DB_HOST:$DB_PORT/$DB_NAME\"\n```\n\n### Step 3: Check network connectivity\n\n```bash\n# Test if port is reachable\ntelnet $DB_HOST $DB_PORT\n\n# Or using nc\nnc -zv $DB_HOST $DB_PORT\n\n# Check firewall rules (if applicable)\nsudo iptables -L\n```\n\n### Step 4: Check application logs\n\n```bash\n# PM2 logs\npm2 logs app\n\n# Docker logs\ndocker logs container-name\n\n# Application logs\ntail -f /var/log/app/error.log\n```\n\nLook for specific error messages:\n- `ECONNREFUSED`: Database not running or wrong host/port\n- `Authentication failed`: Wrong credentials\n- `Timeout`: Network/firewall issue\n- `Too many connections`: Database connection limit reached\n\n## Solutions by Error Type\n\n### Database Not Running\n\n```bash\n# Start PostgreSQL\nsudo systemctl start postgresql\n\n# Start via Docker\ndocker start postgres-container\n```\n\n### Wrong Credentials\n\n1. Reset database password\n2. Update `.env` file\n3. Restart application\n\n### Connection Limit Reached\n\n```sql\n-- Check current connections (PostgreSQL)\nSELECT count(*) FROM pg_stat_activity;\n\n-- Increase max connections\nALTER SYSTEM SET max_connections = 200;\n```\n\n### SSL/TLS Issues\n\nAdd to connection string:\n```\nssl=true&sslmode=require\n```\n\nOr disable SSL for dev:\n```\nsslmode=disable\n```\n\n## Prevention\n\n1. **Connection pooling**: Limits concurrent connections\n```javascript\npool: {\n  min: 2,\n  max: 10\n}\n```\n\n2. **Health checks**: Verify database before starting app\n```javascript\nconst healthCheck = async () => {\n  try {\n    await db.raw('SELECT 1');\n    console.log('Database connected');\n  } catch (err) {\n    console.error('Database connection failed:', err);\n    process.exit(1);\n  }\n};\n```\n\n3. **Retry logic**: Auto-reconnect on temporary failures\n```javascript\nretry: {\n  max: 3,\n  backoff: 1000\n}\n```\n\n4. 
**Environment validation**: Check all required env vars exist\n```javascript\nconst required = ['DB_HOST', 'DB_PORT', 'DB_NAME', 'DB_USER', 'DB_PASSWORD'];\nfor (const key of required) {\n  if (!process.env[key]) {\n    throw new Error(`Missing required env var: ${key}`);\n  }\n}\n```\n\n## Related Questions\n- <mention-page url=\"...\">How do I configure database connection pooling?</mention-page>\n- <mention-page url=\"...\">Database backup and restore procedures</mention-page>\n- <mention-page url=\"...\">Database performance optimization</mention-page>\n\n## Last Updated\nOctober 14, 2025"
}]
```

**FAQ Entry 3: General Troubleshooting**

```
Notion:notion-create-pages

parent: { data_source_id: "collection://faq-db-uuid" }

pages: [{
  properties: {
    "Question": "What's the first thing I should check when deployment fails?",
    "Category": "Troubleshooting",
    "Tags": "deployment, debugging, best-practices",
    "date:Last Reviewed:start": "2025-10-14",
    "date:Last Reviewed:is_datetime": 0
  },
  content: "## Short Answer\n\n**Always check the logs first.** Logs contain error messages that point you directly to the problem. Use `pm2 logs`, `docker logs`, or check your application's log files.\n\n## Detailed Explanation\n\nLogs are your first and most important debugging tool. They show:\n- Exact error messages\n- Stack traces\n- Timing information\n- Configuration issues\n- Dependency problems\n\nMost deployment issues can be diagnosed and fixed by reading the logs carefully.\n\n## How to Check Logs\n\n### PM2\n\n```bash\n# View all logs\npm2 logs\n\n# View logs for specific app\npm2 logs app-name\n\n# View only errors\npm2 logs --err\n\n# Follow logs in real-time\npm2 logs --lines 100\n```\n\n### Docker\n\n```bash\n# View logs\ndocker logs container-name\n\n# Follow logs\ndocker logs -f container-name\n\n# Last 100 lines\ndocker logs --tail 100 container-name\n\n# With timestamps\ndocker logs -t container-name\n```\n\n### Application Logs\n\n```bash\n# Tail application logs\ntail -f /var/log/app/app.log\ntail -f /var/log/app/error.log\n\n# Search logs for errors\ngrep -i error /var/log/app/*.log\n\n# View logs with context\ngrep -B 5 -A 5 \"ERROR\" app.log\n```\n\n## Systematic Troubleshooting Approach\n\n### 1. Check the logs\n- Read error messages carefully\n- Note the exact error type and message\n- Check timestamps to find when error occurred\n\n### 2. Verify configuration\n- Environment variables set correctly?\n- Configuration files present and valid?\n- Paths and file permissions correct?\n\n### 3. Check dependencies\n- All packages installed? (`node_modules` present?)\n- Correct versions installed?\n- Any native module compilation errors?\n\n### 4. Verify environment\n- Required services running (database, Redis, etc.)?\n- Ports available?\n- Network connectivity working?\n\n### 5. Test components individually\n- Can you connect to database manually?\n- Can you run application locally?\n- Do health check endpoints work?\n\n### 6. Check recent changes\n- What changed since last successful deployment?\n- New dependencies added?\n- Configuration modified?\n- Environment differences?\n\n## Common Error Patterns\n\n### \"Module not found\"\n```bash\n# Solution: Install dependencies\nnpm install\n# or\nnpm ci\n```\n\n### \"Permission denied\"\n```bash\n# Solution: Fix file permissions\nchmod +x start.sh\nsudo chown -R appuser:appuser /app\n```\n\n### \"Address already in use\"\n```bash\n# Solution: Kill process on port\nlsof -ti:3000 | xargs kill -9\n```\n\n### \"Cannot connect to...\"\n```bash\n# Solution: Verify service is running and reachable\ntelnet service-host port\n```\n\n## Debugging Tools\n\n### Log Aggregation\n- **PM2**: Built-in log management\n- **Docker**: Centralized logging with log drivers\n- **ELK Stack**: Elasticsearch, Logstash, Kibana for large scale\n- **CloudWatch**: For AWS deployments\n\n### Monitoring\n- **PM2 Monit**: `pm2 monit` for real-time metrics\n- **Docker Stats**: `docker stats` for resource usage\n- **System metrics**: `top`, `htop` for CPU/memory\n\n### Network Debugging\n```bash\n# Check open connections\nnetstat -tulpn\n\n# Check specific port\nlsof -i :3000\n\n# Test connectivity\ncurl http://localhost:3000/health\n```\n\n## When to Escalate\n\nEscalate to senior team member if:\n- Error message is cryptic and not Google-able\n- Problem persists after trying obvious solutions\n- Production is affected\n- Data integrity may be compromised\n- Security-related errors\n\n## Prevention\n\n1. **Automated health checks** before deployment\n2. 
**Staged deployments** (dev → staging → production)\n3. **Rollback plan** ready for quick recovery\n4. **Monitoring and alerts** for early problem detection\n5. **Deployment checklist** to ensure all steps followed\n\n## Related Questions\n- <mention-page url=\"...\">How do I set up log aggregation?</mention-page>\n- <mention-page url=\"...\">Deployment checklist and best practices</mention-page>\n- <mention-page url=\"...\">How to roll back a failed deployment?</mention-page>\n\n## Last Updated\nOctober 14, 2025"
}]
```

### Step 5: Update FAQ Index

```
Notion:notion-fetch
id: "faq-index-page-id"
```

```
Notion:notion-update-page
page_id: "faq-index-page-id"
command: "insert_content_after"
selection_with_ellipsis: "## Deployment & Troubleshooting..."
new_str: "
- <mention-page url=\"...\">Why does deployment fail with 'port already in use' error?</mention-page>
- <mention-page url=\"...\">Why do I get 'cannot connect to database' errors?</mention-page>
- <mention-page url=\"...\">What's the first thing I should check when deployment fails?</mention-page>
"
```

## Summary Provided to User

```markdown
## FAQ Entries Created ✅

### Conversation Captured
Transformed deployment troubleshooting conversation into 3 comprehensive FAQ entries.

### FAQ Entries Created

1. **Port Already in Use Error**
   - Problem: Deployment fails due to occupied port
   - Solutions: Kill process, use PM2, check node processes
   - Prevention: Use process managers, graceful shutdown
   - Link: <mention-page url="...">Port already in use FAQ</mention-page>

2. **Database Connection Errors**
   - Problem: Cannot connect to database during deployment
   - Troubleshooting: 4-step systematic approach
   - Solutions: By error type (credentials, network, limits)
   - Prevention: Connection pooling, health checks, retry logic
   - Link: <mention-page url="...">Database connection FAQ</mention-page>

3. **General Troubleshooting Approach**
   - Best practice: Always check logs first
   - Tools: PM2 logs, Docker logs, application logs
   - Systematic approach: 6-step debugging process
   - Common errors: Module not found, permissions, ports
   - Link: <mention-page url="...">Troubleshooting first steps FAQ</mention-page>

### Content Structure

Each FAQ includes:
✅ Short answer (quick reference)
✅ Detailed explanation (full context)
✅ Multiple solution options
✅ Code examples and commands
✅ Prevention strategies
✅ Links to related FAQs
✅ Last updated date

### Database Integration

- ✅ Added to FAQ database (not standalone pages)
- ✅ Properties set: Category (Troubleshooting), Tags, Date
- ✅ FAQ index page updated with links
- ✅ Cross-references between related questions

### Value Added

**From conversation** → **To structured knowledge**:
- Casual Q&A → Professional FAQ entries
- Implicit knowledge → Explicit documentation
- One-time help → Reusable resource
- 5-minute conversation → Permanent team knowledge

**Future benefit**:
- Team can self-serve on common issues
- New team members can onboard faster
- Reduces repeated questions
- Improves troubleshooting speed

**Knowledge captured and preserved! 📚**
```

## Key Features Demonstrated

### Content Extraction
- Identified distinct Q&A pairs from conversation
- Extracted commands, solutions, and best practices
- Preserved troubleshooting workflows

### FAQ Structure
- Short answer for quick reference
- Detailed explanation for understanding
- Multiple solutions (Options 1, 2, 3)
- Code examples with comments
- Prevention strategies
- Related questions linking

### Database Integration
- Created in FAQ database (not standalone)
- Set appropriate properties (category, tags, date)
- Updated index page with links
- Cross-referenced related questions

### Value Transformation
- Converted informal conversation to professional docs
- Added structure, examples, and prevention tips
- Made knowledge searchable and discoverable
- Created permanent team resource

Perfect for: FAQ creation, knowledge preservation, team documentation
customer_meeting
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill customer_meeting from openai
View skill
# Example: Customer Meeting Prep

**User Request**: "Prep for sales meeting with Acme Corp on Thursday"

## Workflow

### 1. Search Customer Info
```
Notion:notion-search
query: "Acme Corp"
```

Found:
- CRM notes from initial call
- Product demo feedback
- Pricing proposal
- Competitor analysis

### 2. Fetch Details
```
Notion:notion-fetch (4 pages)
```

**Key context**:
- **Company**: 500-person fintech startup, Series B
- **Pain Points**: Manual workflows, data silos, scaling issues
- **Decision Makers**: CTO (technical), VP Product (user needs), CFO (budget)
- **Timeline**: Need solution by Q1
- **Budget**: $50-75K/year range
- **Competition**: Evaluating us vs. Competitor A and Competitor B

### 3. Add Research Context
Codex research added:
- Fintech industry compliance requirements
- Common integration needs for financial products
- Typical procurement processes for Series B companies

### 4. Create Internal Pre-Read
```
Notion:notion-create-pages
pages: [{
  properties: {
    title: "Acme Corp Meeting - Pre-Read (Internal)"
  },
  content: "# Acme Corp Meeting - Pre-Read (Internal)

## Company Overview
**Company**: Acme Corp (500-person fintech, Series B)
**Decision Makers**: CTO, VP Product, CFO
Source: <mention-page url='...'>CRM Notes</mention-page>

## Their Pain Points → Our Solutions
- Manual workflows → Workflow automation
- Data silos → API integrations
- Scaling issues → Enterprise features

## Competitive Position
**Our advantages over Competitor A/B**:
- Better integration ecosystem (Stripe, Plaid)
- Stronger compliance features
- Faster implementation (4 weeks vs 12 weeks)

## Pricing Strategy
Recommend: $60K/year (mid-range of their budget)

## Potential Objections & Responses
..."
}]
```

### 5. Create Meeting Agenda
```
Notion:notion-create-pages
pages: [{
  properties: {
    title: "Acme Corp - Product Demo & Q&A"
  },
  content: "# Acme Corp - Product Demo & Q&A

## Meeting Details
**Date**: Thursday, Oct 17, 2025 @ 2pm PST
**Attendees**: CTO, VP Product, CFO (Acme) | Sales Lead, Solutions Engineer (Us)
**Duration**: 60 minutes

## Objective
Demonstrate how our platform solves Acme's workflow automation and integration challenges.

## Agenda

**1. Introductions** (5 min)

**2. Understand Current Workflow** (10 min)
- Current pain points
- Integration requirements
- Success criteria

**3. Product Demo** (25 min)
- Workflow automation capabilities
- API integrations (Stripe, Plaid)
- Security & compliance features

**4. Pricing & Implementation** (10 min)

**5. Next Steps** (10 min)
"
}]
```

### 6. Link Resources
Connected agenda to CRM page, pricing sheet, and technical integration docs.

## Outputs

**Internal Pre-Read**: Full context for sales team
**Customer Agenda**: Professional meeting structure
**Both in Notion** with links to supporting materials

## Key Success Factors
- Understood customer's specific pain points
- Researched industry context (fintech compliance)
- Mapped features to their needs
- Prepared competitive differentiators
- Structured demo around their use cases
- Pre-planned objection responses
- Clear next steps in agenda
database_best_practices
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill database_best_practices from openai
View skill
# Database Best Practices

General guidance for creating and maintaining knowledge capture databases.

## Core Principles

### 1. Keep It Simple
- Start with core properties
- Add more only when needed
- Don't over-engineer

### 2. Use Consistent Naming
- Title property for main identifier
- Status for lifecycle tracking
- Tags for flexible categorization
- Owner for accountability

### 3. Include Metadata
- Created/Updated timestamps
- Owner or maintainer
- Last reviewed dates
- Status indicators

### 4. Enable Discovery
- Use tags liberally
- Create helpful views
- Link related content
- Use clear titles

### 5. Plan for Scale
- Consider filters early
- Use relations for connections
- Think about search
- Organize with categories

## Creating a Database

### Using `Notion:notion-create-database`

Example for documentation database:

```javascript
{
  "parent": {"page_id": "wiki-page-id"},
  "title": [{"text": {"content": "Team Documentation"}}],
  "properties": {
    "Type": {
      "select": {
        "options": [
          {"name": "How-To", "color": "blue"},
          {"name": "Concept", "color": "green"},
          {"name": "Reference", "color": "gray"},
          {"name": "FAQ", "color": "yellow"}
        ]
      }
    },
    "Category": {
      "select": {
        "options": [
          {"name": "Engineering", "color": "red"},
          {"name": "Product", "color": "purple"},
          {"name": "Design", "color": "pink"}
        ]
      }
    },
    "Tags": {"multi_select": {"options": []}},
    "Owner": {"people": {}},
    "Status": {
      "select": {
        "options": [
          {"name": "Draft", "color": "gray"},
          {"name": "Final", "color": "green"},
          {"name": "Deprecated", "color": "red"}
        ]
      }
    }
  }
}
```

### Fetching Database Schema

Before creating pages, always fetch database to get schema:

```
Notion:notion-fetch
id: "database-url-or-id"
```

This returns the exact property names and types to use.
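
A follow-up create call might then look like this sketch, reusing the schema defined above (IDs and values are illustrative, in the tool-call format used throughout these examples):

```
Notion:notion-create-pages

parent: { data_source_id: "collection://team-docs-uuid" }

pages: [{
  properties: {
    "title": "How to rotate API keys",
    "Type": "How-To",
    "Category": "Engineering",
    "Status": "Draft"
  },
  content: "[Guide content]"
}]
```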

## Database Selection Guide

| Need | Use This Database |
|------|-------------------|
| General documentation | [Documentation Database](documentation-database.md) |
| Track decisions | [Decision Log](decision-log-database.md) |
| Q&A knowledge base | [FAQ Database](faq-database.md) |
| Team-specific content | [Team Wiki](team-wiki-database.md) |
| Step-by-step guides | [How-To Guide Database](how-to-guide-database.md) |
| Incident/project learnings | [Learning Database](learning-database.md) |

## Tips

1. **Start with general documentation database** - most flexible
2. **Add specialized databases** as needs emerge (FAQ, Decisions)
3. **Use relations** to connect related docs
4. **Create views** for common use cases
5. **Review properties** quarterly - remove unused ones
6. **Document the schema** in database description
7. **Train team** on property usage and conventions
database_migration
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill database_migration from openai
View skill
# Example: Database Migration Implementation

**User Request**: "Plan and implement the database migration for user preferences schema"

## Workflow

### 1. Find & Fetch Spec
```
Notion:notion-search → Found "User Preferences Schema Migration Spec"
Notion:notion-fetch → Extracted requirements
```

**Spec Summary**: Migrate from JSON blob to structured schema for better performance and data integrity.

### 2. Parse Requirements
- **Current**: JSONB preferences column
- **Target**: Separate `user_preferences` and `notification_preferences` tables
- **Must maintain**: Backward compatibility during migration
- **Performance**: Support 1M+ users with zero downtime

### 3. Create Implementation Plan
```
Notion:notion-create-pages
pages: [{
  properties: {
    title: "Implementation Plan: User Preferences Migration"
  },
  content: "[Full implementation plan with phases]"
}]
```

**Plan included**:
- **Phase 1**: Create new tables with indexes
- **Phase 2**: Backfill data from JSONB
- **Phase 3**: Dual-write mode (both old and new; sketched below)
- **Phase 4**: Switch reads to new schema
- **Phase 5**: Drop old JSONB column
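
Phase 3 is where consistency bugs tend to surface. As a rough sketch of the dual-write idea, with in-memory stand-ins for the JSONB column and the new table (all names hypothetical, not from the actual spec):

```javascript
// Minimal dual-write sketch with in-memory stand-ins. In the real migration
// these would be writes to the JSONB column and the new user_preferences table.
const oldStore = new Map(); // userId -> JSON blob (legacy)
const newStore = new Map(); // userId -> structured row (target)

function savePreferences(userId, prefs) {
  // Phase 3: write to BOTH stores so either read path stays consistent.
  oldStore.set(userId, JSON.stringify(prefs));        // legacy JSONB write
  newStore.set(userId, {                              // structured write
    userId,
    theme: prefs.theme,
    locale: prefs.locale,
  });
}

function loadPreferences(userId, readFromNew = false) {
  // Phase 4 flips readFromNew to true; Phase 5 deletes the old path entirely.
  return readFromNew
    ? newStore.get(userId)
    : JSON.parse(oldStore.get(userId));
}

savePreferences("u1", { theme: "dark", locale: "en-US" });
console.log(loadPreferences("u1"));        // old read path
console.log(loadPreferences("u1", true));  // new read path
```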

### 4. Find Task Database & Create Tasks
```
Notion:notion-search → Found "Engineering Tasks" database
Notion:notion-fetch → Got schema (Task, Status, Priority, Assignee, etc.)

Notion:notion-create-pages
parent: { data_source_id: "collection://xyz" }
pages: [
  {
    properties: {
      "Task": "Write migration SQL scripts",
      "Status": "To Do",
      "Priority": "High",
      "Sprint": "Sprint 25"
    },
    content: "## Context\nPart of User Preferences Migration...\n\n## Acceptance Criteria\n- [ ] Migration script creates tables\n- [ ] Indexes defined..."
  },
  // ... 4 more tasks
]
```

**Tasks created**:
1. Write migration SQL scripts
2. Implement backfill job
3. Add dual-write logic to API
4. Update read queries
5. Rollback plan & monitoring

### 5. Track Progress
Regular updates to implementation plan with status, blockers, and completion notes.

## Key Outputs

**Implementation Plan Page** (linked to spec)
**5 Tasks in Database** (with dependencies, acceptance criteria)
**Progress Tracking** (updated as work progresses)

## Success Factors
- Broke down complex migration into clear phases
- Created tasks with specific acceptance criteria
- Established dependencies (Phase 1 → 2 → 3 → 4 → 5)
- Zero-downtime approach with rollback plan
- Linked all work back to original spec
decision_capture
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill decision_capture from openai
View skill
# Example: Decision Record Capture

**User Request**: "Document our decision to move from REST to GraphQL API"

## Workflow

### 1. Extract Decision from Context
From conversation, identified:
- **Decision**: Migrate customer-facing API from REST to GraphQL
- **Context**: REST endpoints becoming unwieldy (50+ endpoints)
- **Alternatives**: Keep REST, try gRPC, adopt GraphQL
- **Rationale**: Better client experience, type safety, single endpoint

### 2. Find Decision Log
```
Notion:notion-search
query: "architecture decisions" or "ADR"
```

Found: "Architecture Decision Records" database

### 3. Fetch Schema
```
Notion:notion-fetch
```

Got properties: Decision (title), Date, Status, Domain, Impact, Deciders, Stakeholders

### 4. Create Decision Record
```
Notion:notion-create-pages
parent: { data_source_id: "decision-log-collection-id" }
pages: [{
  properties: {
    "Decision": "Migrate to GraphQL API",
    "date:Date:start": "2025-10-16",
    "date:Date:is_datetime": 0,
    "Status": "Accepted",
    "Domain": "Architecture",
    "Impact": "High"
  },
  content: "[Full decision record with context, rationale, alternatives...]"
}]
```

**Content sample**:

```markdown
# Migrate Customer-Facing API to GraphQL

## Context
Our REST API has grown to 50+ endpoints with inconsistent patterns. Frontend and mobile teams request new endpoints frequently, leading to:
- API bloat and maintenance burden
- Over-fetching/under-fetching data
- Slow iteration on client features
- Inconsistent error handling

## Decision
Migrate customer-facing API from REST to GraphQL while maintaining REST for internal services.

## Rationale
**Why GraphQL**:
- Clients fetch exactly what they need (no over/under-fetching)
- Single endpoint, self-documenting schema
- Type safety with code generation
- Better developer experience
- Industry standard for client-facing APIs

## Options Considered

### Option A: Keep REST (Status Quo)
**Pros**: No migration cost, team familiar
**Cons**: Doesn't solve core problems, maintenance keeps growing
**Decision**: Rejected - doesn't address root cause

### Option B: gRPC
**Pros**: Performance, strong typing
**Cons**: Browser support issues, steeper learning curve, not designed for client-facing
**Decision**: Rejected - better for internal services

### Option C: GraphQL
**Pros**: Solves over/under-fetching, great DX, strong ecosystem
**Cons**: Learning curve, caching complexity, migration effort
**Decision**: **Accepted**

## Consequences

**Positive**:
- Faster frontend/mobile development
- Reduced API maintenance burden
- Better type safety and tooling
- Single endpoint simplifies deployment

**Negative**:
- 3-4 month migration timeline
- Team needs training
- Need to solve caching strategy
- Monitoring/debugging patterns different

## Implementation Plan
1. GraphQL server setup (Apollo Server)
2. Schema design workshop
3. Gradual migration (start with new features)
4. Dual-run period (both REST and GraphQL)
5. Deprecate old REST endpoints

**Timeline**: Start Q4 2025, complete Q1 2026
**Owner**: Backend team lead

## Success Metrics
- API response times improve 30%
- Client fetch efficiency (less data transferred)
- Reduced new endpoint requests
- Developer satisfaction scores
```

### 5. Make Discoverable
Added link from Architecture Wiki and notified team in Slack.

## Key Success Factors
- Captured decision while context fresh
- Documented alternatives considered
- Included both pros and cons
- Clear implementation plan
- Saved to decision log for future reference
- Made discoverable for team
decision_log_database
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill decision_log_database from openai
View skill
# Decision Log Database (ADR - Architecture Decision Records)

**Purpose**: Track important decisions with context and rationale.

## Schema

| Property | Type | Options | Purpose |
|----------|------|---------|---------|
| **Decision** | title | - | What was decided |
| **Date** | date | - | When decision was made |
| **Status** | select | Proposed, Accepted, Superseded, Deprecated | Current decision status |
| **Domain** | select | Architecture, Product, Business, Design, Operations | Decision category |
| **Impact** | select | High, Medium, Low | Expected impact level |
| **Deciders** | people | - | Who made the decision |
| **Stakeholders** | people | - | Who's affected by decision |
| **Related Decisions** | relation | Links to other decisions | Context and dependencies |

## Usage

```
Create decision records with properties:
{
  "Decision": "Use PostgreSQL for Primary Database",
  "Date": "2025-10-15",
  "Status": "Accepted",
  "Domain": "Architecture",
  "Impact": "High",
  "Deciders": [tech_lead, architect],
  "Stakeholders": [eng_team]
}
```

## Content Template

Each decision page should include:
- **Context**: Why this decision was needed
- **Decision**: What was decided
- **Rationale**: Why this option was chosen
- **Options Considered**: Alternatives and trade-offs
- **Consequences**: Expected outcomes (positive and negative)
- **Implementation**: How decision will be executed

## Views

**Recent Decisions**: Sort by Date descending
**Active Decisions**: Filter where Status = "Accepted"
**By Domain**: Group by Domain
**High Impact**: Filter where Impact = "High"
**Pending**: Filter where Status = "Proposed"

## Best Practices

1. **Document immediately**: Record decisions when made, while context is fresh
2. **Include alternatives**: Show what was considered and why it wasn't chosen
3. **Track superseded decisions**: Update status when decisions change
4. **Link related decisions**: Use relations to show dependencies
5. **Review periodically**: Check if old decisions are still valid
decision_meeting_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill decision_meeting_template from openai
View skill
# Decision Meeting Template

Use this template when you need to make an important decision with your team.

```markdown
# [Decision Topic] - [Date]

## Meeting Details
**Date & Time**: [Date and time]
**Duration**: [Length]
**Attendees**: [List of attendees with roles]
**Location**: [Physical location or video link]
**Facilitator**: [Name]

## Pre-Read Summary

### Background
[2-3 sentences providing context from related project pages]

**Related Pages**:
- <mention-page url="...">Project Overview</mention-page>
- <mention-page url="...">Previous Discussion</mention-page>

### Current Situation
[What brings us to this decision point]

## Decision Required

**Question**: [Clear statement of decision needed]

**Timeline**: [When decision needs to be made]

**Impact**: [Who/what is affected by this decision]

## Options Analysis

### Option A: [Name]
**Description**: [What this option entails]

**Pros**:
- [Advantage]
- [Advantage]

**Cons**:
- [Disadvantage]
- [Disadvantage]

**Cost/Effort**: [Estimate]
**Risk**: [Risk assessment]

### Option B: [Name]
[Repeat structure]

### Option C: Do Nothing
**Description**: What happens if we don't decide
**Implications**: [Consequences]

## Recommendation

[If there is a recommended option, state it with rationale]

## Discussion Topics

1. [Topic to discuss]
2. [Clarification needed on]
3. [Trade-offs to consider]

## Decision Framework

**Criteria for evaluation**:
- [Criterion 1]
- [Criterion 2]
- [Criterion 3]

## Decision

[To be filled during meeting]

**Selected Option**: [Option chosen]
**Rationale**: [Why]
**Owner**: [Who will implement]
**Timeline**: [When]

## Action Items

- [ ] [Action] - @[Owner] - Due: [Date]
- [ ] [Action] - @[Owner] - Due: [Date]

## Follow-up

**Next review**: [Date]
**Success metrics**: [How we'll know this worked]
```
documentation_database
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill documentation_database from openai
View skill
# General Documentation Database

**Purpose**: Store all types of documentation in a searchable, organized database.

## Schema

| Property | Type | Options | Purpose |
|----------|------|---------|---------|
| **Title** | title | - | Document name |
| **Type** | select | How-To, Concept, Reference, FAQ, Decision, Post-Mortem | Categorize content type |
| **Category** | select | Engineering, Product, Design, Operations, General | Organize by department/topic |
| **Tags** | multi_select | - | Additional categorization (languages, tools, topics) |
| **Status** | select | Draft, In Review, Final, Deprecated | Track document lifecycle |
| **Owner** | people | - | Document maintainer |
| **Created** | created_time | - | Auto-populated creation date |
| **Last Updated** | last_edited_time | - | Auto-populated last edit |
| **Last Reviewed** | date | - | Manual review tracking |

## Usage

```
Create pages with properties:
{
  "Title": "How to Deploy to Production",
  "Type": "How-To",
  "Category": "Engineering",
  "Tags": "deployment, production, DevOps",
  "Status": "Final",
  "Owner": [current_user],
  "Last Reviewed": "2025-10-01"
}
```
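
The usage block above is shorthand. As a sketch, one way to expand it into Notion-API-style property values (the exact payload `Notion:notion-create-pages` expects may differ, so treat these shapes as assumptions):

```javascript
// Hypothetical sketch: expand the shorthand above into Notion-API-style
// property values. Shapes follow public Notion API conventions; the tool
// interface may accept a different format.
function toPropertyValues(shorthand) {
  return {
    "Title":    { title: [{ text: { content: shorthand.Title } }] },
    "Type":     { select: { name: shorthand.Type } },
    "Category": { select: { name: shorthand.Category } },
    // "Tags" is a multi_select, so split the comma-separated shorthand:
    "Tags": {
      multi_select: shorthand.Tags.split(",").map((t) => ({ name: t.trim() })),
    },
    "Status":        { select: { name: shorthand.Status } },
    "Last Reviewed": { date: { start: shorthand["Last Reviewed"] } },
  };
}

console.log(toPropertyValues({
  "Title": "How to Deploy to Production",
  "Type": "How-To",
  "Category": "Engineering",
  "Tags": "deployment, production, DevOps",
  "Status": "Final",
  "Last Reviewed": "2025-10-01",
}));
```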

## Views

**By Type**: Group by Type property
**By Category**: Group by Category property  
**Recent Updates**: Sort by Last Updated descending
**Needs Review**: Filter where Last Reviewed is more than 90 days ago
**Draft Docs**: Filter where Status = "Draft"

## Creating This Database

Use `Notion:notion-create-database`:

```javascript
{
  "parent": {"page_id": "wiki-page-id"},
  "title": [{"text": {"content": "Team Documentation"}}],
  "properties": {
    "Type": {
      "select": {
        "options": [
          {"name": "How-To", "color": "blue"},
          {"name": "Concept", "color": "green"},
          {"name": "Reference", "color": "gray"},
          {"name": "FAQ", "color": "yellow"}
        ]
      }
    },
    "Category": {
      "select": {
        "options": [
          {"name": "Engineering", "color": "red"},
          {"name": "Product", "color": "purple"},
          {"name": "Design", "color": "pink"}
        ]
      }
    },
    "Tags": {"multi_select": {"options": []}},
    "Owner": {"people": {}},
    "Status": {
      "select": {
        "options": [
          {"name": "Draft", "color": "gray"},
          {"name": "Final", "color": "green"},
          {"name": "Deprecated", "color": "red"}
        ]
      }
    }
  }
}
```

## Best Practices

1. **Start with this schema** - most flexible for general documentation
2. **Use relations** to connect related docs
3. **Create views** for common use cases
4. **Review properties** quarterly - remove unused ones
5. **Document the schema** in database description
6. **Train team** on property usage and conventions
executive_review
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill executive_review from openai
View skill
# Example: Executive Review Prep

**User Request**: "Prep for quarterly executive review on Friday"

## Workflow

### 1. Search for Context
```
Notion:notion-search
query: "Q4 objectives" + "KPIs" + "quarterly results"
```

Found:
- Q4 OKRs and progress
- Product metrics dashboard
- Engineering velocity reports
- Customer feedback summary

### 2. Fetch & Analyze
```
Notion:notion-fetch (5 pages)
```

**Key metrics**:
- **Revenue**: $2.4M ARR (96% of Q4 target)
- **Customer Growth**: 145 new customers (exceeds 120 target)
- **Churn**: 3.2% (below 5% target)
- **Product**: 3 major features shipped, 2 in beta
- **Engineering**: 94% uptime (below the 95% SLA)

### 3. Add Codex Research Context
Added context on:
- Industry benchmarks for SaaS metrics
- Typical Q4 sales patterns
- Best practices for executive presentations

### 4. Create Pre-Read (Internal)
```
Notion:notion-create-pages
title: "Q4 Review - Pre-Read (Internal)"
```

**Pre-read sections**:
- **Executive Summary**: Strong quarter, missed revenue by 4% but exceeded customer growth
- **Detailed Metrics**: All KPIs with trend lines
- **Wins**: Product launches, key customer acquisitions
- **Challenges**: Sales pipeline conversion, engineering hiring
- **Q1 Preview**: Strategic priorities

### 5. Create Presentation Agenda
```
Notion:notion-create-pages
title: "Q4 Executive Review - Agenda"
```

**Agenda** (100 min):
- Q4 Results Overview (15 min)
- Revenue & Growth Deep Dive (20 min)
- Product & Engineering Update (20 min)
- Customer Success Highlights (15 min)
- Q1 Strategic Plan (15 min)
- Discussion & Questions (15 min)

### 6. Link Supporting Docs
Connected to OKRs, metrics dashboards, and Q1 planning docs.

## Outputs

**Internal Pre-Read**: Comprehensive context with honest assessment
**Executive Agenda**: Structured 100-min presentation
**Both in Notion** with links to supporting data

## Key Success Factors
- Synthesized data from multiple sources (OKRs, metrics, feedback)
- Added industry context and benchmarks
- Created honest internal assessment (not just wins)
- Structured agenda with time allocations
- Linked to source data for drill-down during Q&A
faq_database
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill faq_database from openai
View skill
# FAQ Database

**Purpose**: Organize frequently asked questions with answers.

## Schema

| Property | Type | Options | Purpose |
|----------|------|---------|---------|
| **Question** | title | - | The question being asked |
| **Category** | select | Product, Engineering, Support, HR, General | Question topic |
| **Tags** | multi_select | - | Specific topics (auth, billing, onboarding, etc.) |
| **Answer Type** | select | Quick Answer, Detailed Guide, Link to Docs | Response format |
| **Last Reviewed** | date | - | When answer was verified |
| **Helpful Count** | number | - | Track usefulness (optional) |
| **Audience** | select | Internal, External, All | Who should see this |
| **Related Questions** | relation | Links to related FAQs | Connect similar topics |

## Usage

```
Create FAQ entries with properties:
{
  "Question": "How do I reset my password?",
  "Category": "Support",
  "Tags": "authentication, password, login",
  "Answer Type": "Quick Answer",
  "Last Reviewed": "2025-10-01",
  "Audience": "External"
}
```

## Content Template

Each FAQ page should include:
- **Short Answer**: 1-2 sentence quick response
- **Detailed Explanation**: Full answer with context
- **Steps** (if applicable): Numbered procedure
- **Screenshots** (if helpful): Visual guidance
- **Related Questions**: Links to similar FAQs
- **Additional Resources**: External docs or videos

## Views

**By Category**: Group by Category
**Recently Updated**: Sort by Last Reviewed descending
**Needs Review**: Filter where Last Reviewed is more than 180 days ago
**External FAQs**: Filter where Audience contains "External"
**Popular**: Sort by Helpful Count descending (if tracking)

## Best Practices

1. **Use clear questions**: Write questions as users would ask them
2. **Provide quick answers**: Lead with the direct answer, then elaborate
3. **Link related FAQs**: Help users discover related information
4. **Review regularly**: Keep answers current and accurate
5. **Track what's helpful**: Use feedback to improve frequently accessed FAQs
format_selection_guide
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill format_selection_guide from openai
View skill
# Format Selection Guide

Choose the right output format for your research needs.

## Decision Tree

```
Is this comparing multiple options?
  ├─ YES → Use Comparison Format
  └─ NO ↓

Is this time-sensitive or simple?
  ├─ YES → Use Quick Brief
  └─ NO ↓

Does this require formal/extensive documentation?
  ├─ YES → Use Comprehensive Report
  └─ NO → Use Research Summary (default)
```
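
The same tree, expressed as a tiny helper function (a sketch; the flag names are illustrative):

```javascript
// The decision tree above as a small helper. The flags are booleans the
// requester supplies; return values match the format table below.
function pickFormat({ comparingOptions, timeSensitiveOrSimple, needsFormalDoc }) {
  if (comparingOptions) return "Comparison";
  if (timeSensitiveOrSimple) return "Quick Brief";
  if (needsFormalDoc) return "Comprehensive Report";
  return "Research Summary"; // default
}

console.log(pickFormat({ comparingOptions: false, timeSensitiveOrSimple: true }));
// -> "Quick Brief"
```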

## Format Overview

| Format | Length | When to Use | Template |
|--------|--------|-------------|----------|
| [Research Summary](research-summary-format.md) | 500-1000 words | Most research requests (default) | [Template](research-summary-template.md) |
| [Comprehensive Report](comprehensive-report-format.md) | 1500+ words | Formal docs, strategic decisions | [Template](comprehensive-report-template.md) |
| [Quick Brief](quick-brief-format.md) | 200-400 words | Time-sensitive, simple topics | [Template](quick-brief-template.md) |
| [Comparison](comparison-format.md) | 800-1200 words | Evaluating options | [Template](comparison-template.md) |

## Formatting Guidelines

### Headings
- Use `#` for title
- Use `##` for major sections
- Use `###` for subsections
- Keep heading hierarchy consistent

### Lists
- Use `-` for bullet points
- Use `1.` for numbered lists
- Keep list items parallel in structure

### Emphasis
- Use `**bold**` for key terms and section labels
- Use `*italic*` for emphasis
- Use sparingly for maximum impact

### Citations
- Always use `<mention-page url="...">Page Title</mention-page>` for source pages
- Include citation immediately after referenced information
- Group all sources in a "Sources" section at the end

### Tables
- Use for structured data comparison
- Keep columns to 3-5 for readability
- Include header row
- Align content appropriately

### Code Blocks
Use when including:
- Technical specifications
- Configuration examples
- Command examples

```
Example code or configuration here
```

## Content Guidelines

### Executive Summaries
- Lead with the most important finding
- Include 1-2 key implications
- Make it standalone (reader gets value without reading further)
- Target 2-3 sentences for summaries, 1 paragraph for reports

### Key Findings
- Start with a clear headline
- Support with specific evidence
- Include relevant data points or quotes
- Cite source immediately
- Focus on actionable insights

### Recommendations
- Make them specific and actionable
- Explain the "why" behind each recommendation
- Prioritize clearly (Priority 1, 2, 3 or High/Medium/Low)
- Include implementation hints when relevant

### Source Citations
- Link to original pages using mentions
- Note if information is outdated (check last-edited dates)
- Credit specific sections when quoting
- Group related sources together
how_to_guide
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill how_to_guide from openai
View skill
# Example: How-To Guide from Discussion

**User Request**: "Save our discussion about deploying to production as a how-to guide"

## Workflow

### 1. Extract Content from Chat
From conversation, identified:
- Deployment prerequisites
- Step-by-step procedure
- Common issues and solutions
- Best practices and tips

### 2. Structure as How-To
Organized into:
- Overview & prerequisites
- Numbered deployment steps
- Verification steps
- Troubleshooting section
- Related resources

### 3. Find Location
```
Notion:notion-search
query: "deployment documentation"
```
Found: Engineering Wiki → Deployment section

### 4. Create Guide
```
Notion:notion-create-pages
parent: { page_id: "deployment-section-id" }
```

## Output

```markdown
# How to Deploy to Production

## Overview
Production deployment using GitHub Actions with zero-downtime rolling updates.
**Time Required**: 15-20 minutes | **Complexity**: Intermediate

## Prerequisites
- [ ] PR approved and merged to main
- [ ] All CI tests passing
- [ ] Database migrations reviewed
- [ ] Feature flags configured

## Deployment Steps

1. **Verify main branch is ready**
   ```bash
   git checkout main && git pull
   ```

2. **Tag release**
   ```bash
   git tag -a v1.2.3 -m "Release v1.2.3"
   git push origin v1.2.3
   ```

3. **Trigger deployment**
   - GitHub Actions auto-starts from tag push
   - Monitor: https://github.com/org/repo/actions

4. **Database migrations** (if needed)
   - Auto-run in GitHub Actions
   - Check logs for completion

5. **Verify deployment**
   - Wait for health checks (2-3 min)
   - Test key endpoints
   - Check error rates in Datadog

## Verification Checklist
- [ ] All pods healthy in k8s dashboard
- [ ] Error rate < 0.1% in last 10 min
- [ ] Response time p95 < 500ms
- [ ] Test login flow
- [ ] Check Slack #alerts channel

## Troubleshooting

**Health checks failing**
→ Check pod logs: `kubectl logs -f deployment/api -n production`

**Migration errors**
→ Rollback: Revert tag, migrations auto-rollback

**High error rate**
→ Emergency rollback: Previous tag auto-deploys via GitHub Actions

## Best Practices
- Deploy during low-traffic hours (2-4am PST)
- Have 2 engineers available
- Monitor for 30 min post-deploy
- Update #engineering Slack with deploy notice

## Related Docs
- <mention-page url="...">Rollback Procedure</mention-page>
- <mention-page url="...">Database Migration Guide</mention-page>
```

### 5. Make Discoverable
```
Notion:notion-update-page
page_id: "engineering-wiki-homepage"
command: "insert_content_after"
```
Added link in Engineering Wiki → How-To Guides section

## Key Success Factors
- Captured tribal knowledge from discussion
- Structured as actionable steps
- Included troubleshooting from experience
- Made discoverable by linking from wiki index
- Added metadata (time, complexity)
how_to_guide_database
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill how_to_guide_database from openai
View skill
# How-To Guide Database

**Purpose**: Procedural documentation for common tasks.

## Schema

| Property | Type | Options | Purpose |
|----------|------|---------|---------|
| **Title** | title | - | "How to [Task]" |
| **Complexity** | select | Beginner, Intermediate, Advanced | Skill level required |
| **Time Required** | number | - | Estimated minutes to complete |
| **Prerequisites** | relation | Links to other guides | Required knowledge |
| **Category** | select | Development, Deployment, Testing, Tools | Task category |
| **Last Tested** | date | - | When procedure was verified |
| **Tags** | multi_select | - | Technology/tool tags |

## Usage

```
Create how-to guides with properties:
{
  "Title": "How to Set Up Local Development Environment",
  "Complexity": "Beginner",
  "Time Required": 30,
  "Category": "Development",
  "Last Tested": "2025-10-01",
  "Tags": "setup, environment, docker"
}
```

## Best Practices

1. **Use consistent naming**: Always start with "How to..."
2. **Test procedures**: Verify steps work before publishing
3. **Include time estimates**: Help users plan their time
4. **Link prerequisites**: Make dependencies clear
5. **Update regularly**: Re-test procedures when tools/systems change
learning_database
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill learning_database from openai
View skill
# Learning/Post-Mortem Database

**Purpose**: Capture learnings from incidents, projects, or experiences.

## Schema

| Property | Type | Options | Purpose |
|----------|------|---------|---------|
| **Title** | title | - | Event or project name |
| **Date** | date | - | When it happened |
| **Type** | select | Incident, Project, Experiment, Retrospective | Learning type |
| **Severity** | select | Critical, Major, Minor | Impact level (for incidents) |
| **Team** | people | - | Who was involved |
| **Key Learnings** | number | - | Count of learnings |
| **Action Items** | relation | Links to tasks | Follow-up actions |

## Content Template

Each learning page should include:
- **What Happened**: Situation description
- **What Went Well**: Success factors
- **What Didn't Go Well**: Problems encountered
- **Root Causes**: Why things happened
- **Learnings**: Key takeaways
- **Action Items**: Improvements to implement

## Best Practices

1. **Blameless approach**: Focus on systems and processes, not individuals
2. **Document quickly**: Capture while memory is fresh
3. **Identify root causes**: Go beyond surface-level problems
4. **Create action items**: Turn learnings into improvements
5. **Follow up**: Track that action items are completed
6. **Share widely**: Make learnings accessible to entire team
market_research
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill market_research from openai
View skill
# Example: Market Research

**User Request**: "Research the current state of AI coding assistants market and create a summary document in Notion"

## Workflow

### 1. Search
```
Notion:notion-search
query: "AI coding assistants market"
```
Found 3 relevant pages across Engineering, Strategy, and Product teamspaces.

### 2. Fetch & Analyze
```
Notion:notion-fetch (3x)
```
Extracted market size, competitive landscape, technology trends from source pages.

### 3. Create Documentation
```
Notion:notion-create-pages
```

## Output (Condensed)

```markdown
# AI Coding Assistants Market Research - Oct 2025

## Executive Summary
The AI coding assistant market is experiencing 150%+ YoY growth. GitHub Copilot dominates with 60% share, but specialized tools are gaining traction in specific niches.

## Key Findings

### Market Size and Growth
$800M in 2024 → $2.5B projected by 2026. Developer adoption: 23% (2023) → 47% (2024).
Source: <mention-page url="...">Market Trends Q3 2025</mention-page>

### Competitive Landscape
- GitHub Copilot: 60% (strong IDE integration)
- Cursor: 15% (rapid growth, full IDE)
- Tabnine: 10% (enterprise, on-premise)
- Cody: 5% (codebase-aware)
- CodeWhisperer: 8% (AWS integration)
Source: <mention-page url="...">AI Tools Competitive Analysis</mention-page>

### Technology Trends
Key differentiators: context awareness, customization, multi-modal interfaces, code verification.
Source: <mention-page url="...">Developer Tools Landscape</mention-page>

## Next Steps
1. Monitor Cursor growth and feature releases
2. Evaluate Cody's codebase-aware capabilities
3. Document enterprise security/compliance requirements
4. Track pricing trends
```

## Key Takeaways
- Found relevant pages across multiple teamspaces
- Synthesized competitive, market, and technical perspectives
- Used proper citations linking to source pages
- Created actionable recommendations
milestone_summary_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill milestone_summary_template from openai
View skill
# Milestone Summary Template

Use this when completing major phases or milestones.

```markdown
## Phase [N] Complete: [Date]

### Accomplishments
- [Major item delivered]
- [Major item delivered]

### Deliverables
- <mention-page url="...">Deliverable 1</mention-page>
- [Link to PR/deployment]

### Metrics
- [Relevant metric]
- [Relevant metric]

### Learnings
- [What went well]
- [What to improve]

### Next Phase
Starting [Phase name] on [Date]
```
one_on_one_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill one_on_one_template from openai
View skill
# 1:1 Meeting Template

Use this template for manager/report one-on-one meetings.

```markdown
# 1:1: [Manager] & [Report] - [Date]

## Meeting Details
**Date**: [Date]
**Last meeting**: <mention-page url="...">Previous 1:1</mention-page>

## Agenda

### [Report]'s Topics
1. [Topic to discuss]
2. [Question or concern]

### [Manager]'s Topics
1. [Topic to cover]
2. [Feedback or update]

## Discussion Notes

### [Topic 1]
[Discussion points]

**Action items**:
- [ ] [Action] - @[Owner]

### [Topic 2]
[Discussion points]

## Career Development

**Current focus**: [Development goal]
**Progress**: [Update on progress]

## Feedback

**What's going well**:
- [Positive feedback]

**Areas for growth**:
- [Developmental feedback]

## Action Items

- [ ] [Action] - @[Report] - Due: [Date]
- [ ] [Action] - @[Manager] - Due: [Date]

## Next Meeting

**Date**: [Date]
**Topics to cover**:
- [Carry-over topic]
- [Upcoming topic]
```
progress_tracking
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill progress_tracking from openai
View skill
# Progress Tracking

## Update Frequency

### Daily Updates

For active implementation work:

**What to update**:
- Task status if changed
- Add progress note to task
- Update blockers

**When**:
- End of work day
- After completing significant work
- When encountering blockers

### Milestone Updates

For phase/milestone completion:

**What to update**:
- Mark phase complete in plan
- Add milestone summary
- Update timeline if needed
- Report to stakeholders

**When**:
- Phase completion
- Major deliverable ready
- Sprint end
- Release

### Status Change Updates

For task state transitions:

**What to update**:
- Task status property
- Add transition note
- Notify relevant people

**When**:
- Start work (To Do → In Progress)
- Ready for review (In Progress → In Review)
- Complete (In Review → Done)
- Block (Any → Blocked)

## Progress Note Format

### Daily Progress Note

```markdown
## Progress: [Date]

### Completed
- [Specific accomplishment with details]
- [Specific accomplishment with details]

### In Progress
- [Current work item]
- Current status: [Percentage or description]

### Next Steps
1. [Next planned action]
2. [Next planned action]

### Blockers
- [Blocker description and who/what needed to unblock]
- Or: None

### Decisions Made
- [Any technical/product decisions]

### Notes
[Additional context, learnings, issues encountered]
```

Example:

```markdown
## Progress: Oct 14, 2025

### Completed
- Implemented user authentication API endpoints (login, logout, refresh)
- Added JWT token generation and validation
- Wrote unit tests for auth service (95% coverage)

### In Progress
- Frontend login form integration
- Currently: Form submits but need to handle error states

### Next Steps
1. Complete error handling in login form
2. Add loading states
3. Implement "remember me" functionality

### Blockers
None

### Decisions Made
- Using HttpOnly cookies for refresh tokens (more secure than localStorage)
- Session timeout set to 24 hours based on security review

### Notes
- Found edge case with concurrent login attempts, added to backlog
- Performance of auth check is good (<10ms)
```

### Milestone Summary

```markdown
## Phase [N] Complete: [Date]

### Overview
[Brief description of what was accomplished in this phase]

### Completed Tasks
- <mention-page url="...">Task 1</mention-page> ✅
- <mention-page url="...">Task 2</mention-page> ✅
- <mention-page url="...">Task 3</mention-page> ✅

### Deliverables
- [Deliverable 1]: [Link/description]
- [Deliverable 2]: [Link/description]

### Key Accomplishments
- [Major achievement]
- [Major achievement]

### Metrics
- [Relevant metric]: [Value]
- [Relevant metric]: [Value]

### Challenges Overcome
- [Challenge and how it was solved]

### Learnings
**What went well**:
- [Success factor]

**What to improve**:
- [Area for improvement]

### Impact on Timeline
- On schedule / [X days ahead/behind]
- Reason: [If deviation, explain why]

### Next Phase
- **Starting**: [Next phase name]
- **Target start date**: [Date]
- **Focus**: [Main objectives]
```

## Updating Implementation Plan

### Progress Indicators

Update plan page regularly:

```markdown
## Status Overview

**Overall Progress**: 45% complete

### Phase Status
- ✅ Phase 1: Foundation - Complete
- 🔄 Phase 2: Core Features - In Progress (60%)
- ⏳ Phase 3: Integration - Not Started

### Task Summary
- ✅ Completed: 12 tasks
- 🔄 In Progress: 5 tasks
- 🚧 Blocked: 1 task
- ⏳ Not Started: 8 tasks

**Last Updated**: [Date]
```

### Task Checklist Updates

Mark completed tasks:

```markdown
## Implementation Phases

### Phase 1: Foundation
- [x] <mention-page url="...">Database schema</mention-page>
- [x] <mention-page url="...">API scaffolding</mention-page>
- [x] <mention-page url="...">Auth setup</mention-page>

### Phase 2: Core Features
- [x] <mention-page url="...">User management</mention-page>
- [ ] <mention-page url="...">Dashboard</mention-page>
- [ ] <mention-page url="...">Reporting</mention-page>
```

### Timeline Updates

Update milestone dates:

```markdown
## Timeline

| Milestone | Original | Current | Status |
|-----------|----------|---------|--------|
| Phase 1 | Oct 15 | Oct 14 | ✅ Complete (1 day early) |
| Phase 2 | Oct 30 | Nov 2 | 🔄 In Progress (3 days behind) |
| Phase 3 | Nov 15 | Nov 18 | ⏳ Planned (adjusted) |
| Launch | Nov 20 | Nov 22 | ⏳ Planned (adjusted) |

**Timeline Status**: Slightly behind due to [reason]
```

## Task Status Tracking

### Status Definitions

**To Do**: Not started
- Task is ready to begin
- Dependencies met
- Assigned (or available)

**In Progress**: Actively being worked
- Work has started
- Assigned to someone
- Regular updates expected

**Blocked**: Cannot proceed
- Dependency not met
- External blocker
- Waiting on decision/resource

**In Review**: Awaiting review
- Work complete from implementer perspective
- Needs code review, QA, or approval
- Reviewers identified

**Done**: Complete
- All acceptance criteria met
- Reviewed and approved
- Deployed/delivered

### Updating Task Status

When updating:

```
1. Update Status property
2. Add progress note explaining change
3. Update related tasks if needed
4. Notify relevant people via comment

Example:
properties: { "Status": "In Progress" }

Content update:
## Progress: Oct 14, 2025
Started implementation. Set up basic structure and wrote initial tests.
```
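
A small sketch tying the status change and the progress note together so the two never drift apart (tool-call wiring omitted; the shapes echo the pseudo-example above and are assumptions, not the tool's exact payload):

```javascript
// Hypothetical sketch: build the status update and the matching progress
// note in one place. Pass the result to your update and append calls.
function buildStatusUpdate(newStatus, note,
                           date = new Date().toISOString().slice(0, 10)) {
  return {
    properties: { "Status": newStatus },            // status property update
    contentAppend: `## Progress: ${date}\n${note}`, // note to append to page
  };
}

const update = buildStatusUpdate(
  "In Progress",
  "Started implementation. Set up basic structure and wrote initial tests."
);
console.log(update.properties);    // { Status: "In Progress" }
console.log(update.contentAppend); // progress note for the task page
```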

## Blocker Tracking

### Recording Blockers

When encountering a blocker:

```markdown
## Blockers

### [Date]: [Blocker Description]
**Status**: 🚧 Active
**Impact**: [What's blocked]
**Needed to unblock**: [Action/person/decision needed]
**Owner**: [Who's responsible for unblocking]
**Target resolution**: [Date or timeframe]
```

### Resolving Blockers

When unblocked:

```markdown
## Blockers

### [Date]: [Blocker Description]
**Status**: ✅ Resolved on [Date]
**Resolution**: [How it was resolved]
**Impact**: [Any timeline/scope impact]
```

### Escalating Blockers

If blocker needs escalation:

```
1. Update blocker status in task
2. Add comment tagging stakeholder
3. Update plan with blocker impact
4. Propose mitigation if possible
```

## Metrics Tracking

### Velocity Tracking

Track completion rate:

```markdown
## Velocity

### Week 1
- Tasks completed: 8
- Story points: 21
- Velocity: Strong

### Week 2
- Tasks completed: 6
- Story points: 18
- Velocity: Moderate (1 blocker)

### Week 3
- Tasks completed: 9
- Story points: 24
- Velocity: Strong (blocker resolved)
```

### Quality Metrics

Track quality indicators:

```markdown
## Quality Metrics

- Test coverage: 87%
- Code review approval rate: 95%
- Bug count: 3 (2 minor, 1 cosmetic)
- Performance: All targets met
- Security: No issues found
```

### Progress Metrics

Quantitative progress:

```markdown
## Progress Metrics

- Requirements implemented: 15/20 (75%)
- Acceptance criteria met: 42/56 (75%)
- Test cases passing: 128/135 (95%)
- Code complete: 80%
- Documentation: 60%
```

## Stakeholder Communication

### Weekly Status Report

```markdown
## Weekly Status: [Week of Date]

### Summary
[One paragraph overview of progress and status]

### This Week's Accomplishments
- [Key accomplishment]
- [Key accomplishment]
- [Key accomplishment]

### Next Week's Plan
- [Planned work]
- [Planned work]

### Status
- On track / At risk / Behind schedule
- [If at risk or behind, explain and provide mitigation plan]

### Blockers & Needs
- [Active blocker or need for help]
- Or: None

### Risks
- [New or evolving risk]
- Or: None currently identified
```

### Executive Summary

For leadership updates:

```markdown
## Implementation Status: [Feature Name]

**Overall Status**: 🟢 On Track / 🟡 At Risk / 🔴 Behind

**Progress**: [X]% complete

**Key Updates**:
- [Most important update]
- [Most important update]

**Timeline**: [Status vs original plan]

**Risks**: [Top 1-2 risks]

**Next Milestone**: [Upcoming milestone and date]
```

## Automated Progress Tracking

### Query-Based Status

Generate status from task database:

```
Query task database:
SELECT 
  "Status",
  COUNT(*) as count
FROM "collection://tasks-uuid"
WHERE "Related Tasks" CONTAINS 'plan-page-id'
GROUP BY "Status"

Generate summary:
- To Do: 8
- In Progress: 5
- Blocked: 1
- In Review: 2
- Done: 12

Overall: 43% complete (12/28 tasks)
```

### Timeline Calculation

Calculate projected completion:

```
Average velocity: 6 tasks/week
Remaining tasks: 14
Projected completion: 2.3 weeks from now

Compared to target: [On schedule / Behind / Ahead]
```
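
A minimal sketch of the same rollup and projection arithmetic, using the counts from the status summary above (treating In Review tasks as outside the remaining-work count, which is how the 14-task figure reconciles, is an assumption):

```javascript
// Status rollup and completion projection from the examples above.
const counts = { "To Do": 8, "In Progress": 5, "Blocked": 1,
                 "In Review": 2, "Done": 12 };

const total = Object.values(counts).reduce((a, b) => a + b, 0);   // 28
const pctDone = Math.round((counts["Done"] / total) * 100);       // 43

const velocity = 6; // tasks completed per week, from the velocity section
// Remaining work = not yet done and not yet in review (assumption).
const remaining =
  counts["To Do"] + counts["In Progress"] + counts["Blocked"];    // 14
const weeksLeft = (remaining / velocity).toFixed(1);              // "2.3"

console.log(`Overall: ${pctDone}% complete (${counts["Done"]}/${total} tasks)`);
console.log(`Projected completion: ${weeksLeft} weeks from now`);
```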

## Best Practices

1. **Update regularly**: Don't let updates pile up
2. **Be specific**: "Completed login" vs "Made progress"
3. **Quantify progress**: Use percentages, counts, metrics
4. **Note blockers immediately**: Don't wait to report blockers
5. **Link to work**: Reference PRs, deployments, demos
6. **Track decisions**: Document why, not just what
7. **Be honest**: Report actual status, not optimistic status
8. **Update in one place**: Keep implementation plan as source of truth
progress_update_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill progress_update_template from openai
View skill
# Progress Update Template

Use this to update progress on implementation plans and tasks.

```markdown
## Progress: [Date]

### Completed Today
- [Specific item completed]
- [Specific item completed]

### In Progress
- [Current work item and status]

### Next Steps
1. [Next action]
2. [Next action]

### Blockers
- [Blocker description] or None

### Notes
[Additional context, decisions made, issues encountered]
```
project_decision
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill project_decision from openai
View skill
# Example: Project Decision Meeting Prep

## User Request

> "Prep for tomorrow's architecture meeting where we need to decide on database migration strategy"

## Workflow Execution

### Step 1: Search for Context

```
Notion:notion-search
query: "database migration"
query_type: "internal"
teamspace_id: "engineering-teamspace-id"
filters: {
  created_date_range: {
    start_date: "2024-01-01"
  }
}
```

**Results Found:**
- "Database Migration Proposal" (Engineering)
- "Current Database Performance Issues" (Engineering)
- "Migration Options Analysis" (Architecture)
- "Previous Migration Post-mortem" (Engineering)

### Step 2: Fetch Relevant Pages

```
Notion:notion-fetch
id: "database-migration-proposal-page-id"
```

**Extracted**: Proposes migrating from MongoDB to PostgreSQL, outlines timeline and risks

```
Notion:notion-fetch
id: "database-performance-issues-page-id"
```

**Extracted**: Current pain points - slow queries, schema flexibility issues, transaction limitations

```
Notion:notion-fetch
id: "migration-options-analysis-page-id"
```

**Extracted**: Three options analyzed:
1. Stay on MongoDB with optimizations
2. Migrate to PostgreSQL
3. Hybrid approach (MongoDB + PostgreSQL)

```
Notion:notion-fetch
id: "previous-migration-postmortem-page-id"
```

**Extracted**: Lessons from 2023 Redis migration - importance of rollback plan, testing strategy

### Step 3: Synthesize Pre-Read

**Key Information Identified:**
- **Problem**: MongoDB performance degrading, schema flexibility causing data consistency issues
- **Options**: Three approaches with different trade-offs
- **Recommendation**: PostgreSQL migration favored by engineering team
- **Concerns**: Downtime requirements, data migration complexity, application changes needed
- **Historical context**: Previous migration took 6 weeks, had rollback issues

### Step 4: Create Meeting Agenda

```
Notion:notion-create-pages

parent: { page_id: "meetings-parent-id" }

pages: [{
  properties: {
    "title": "Architecture Decision: Database Migration Strategy - Oct 15, 2025"
  },
  content: "[Decision meeting agenda]"
}]
```

**Agenda Created:**

```markdown
# Architecture Decision: Database Migration Strategy

## Meeting Details
**Date & Time**: October 15, 2025 at 2:00 PM  
**Duration**: 90 minutes  
**Location**: Conference Room B / Zoom  
**Facilitator**: Sarah Chen (Engineering Lead)  
**Attendees**:
- Sarah Chen (Engineering Lead)
- David Kim (CTO)
- Alice Wang (Backend Team Lead)
- Bob Martinez (DevOps Lead)
- Carol Johnson (Product Director)

## Pre-Read Summary

### Background

Our MongoDB database is experiencing performance degradation as we scale. Query response times have increased from 50ms to 400ms average over the past 6 months. The schema-less nature that initially provided flexibility is now causing data consistency issues and making application logic more complex.

**Source**: <mention-page url="...">Current Database Performance Issues</mention-page>

### Current Situation

**Performance metrics**:
- Average query time: 400ms (was 50ms 6 months ago)
- p95 query time: 1.2s (was 200ms)
- Database size: 500GB (growing 20GB/month)
- Connection pool exhaustion during peak traffic

**Technical debt**:
- 15+ application-layer validation rules compensating for lack of schema
- Complex data migration scripts for schema changes
- Limited transaction support causing race conditions

**Source**: <mention-page url="...">Database Migration Proposal</mention-page>

### Historical Context

We successfully migrated from Redis to Memcached in 2023, which took 6 weeks. Key learnings:
- Underestimated application code changes (3 weeks instead of 1 week)
- Rollback plan was crucial when we discovered compatibility issues
- Parallel running period (dual writes) was essential for safe migration

**Source**: <mention-page url="...">Previous Migration Post-mortem</mention-page>

## Decision Required

**Question**: Which database migration strategy should we adopt?

**Timeline**: Need decision by end of week to include in Q4 planning

**Impact**: 
- Engineering team (4-8 weeks of work)
- Application architecture
- Operations & monitoring
- Future feature development velocity

## Options Analysis

### Option A: Stay on MongoDB with Optimizations

**Description**: Invest in MongoDB performance tuning, add indexes, upgrade to latest version, implement better query patterns.

**Pros**:
- ✅ No migration complexity
- ✅ Team familiar with MongoDB
- ✅ Can implement immediately
- ✅ Lower risk
- ✅ Estimated 2 weeks effort

**Cons**:
- ❌ Doesn't solve fundamental schema flexibility issues
- ❌ Still limited transaction support
- ❌ Performance improvements may be temporary
- ❌ Continues technical debt accumulation

**Cost/Effort**: 2 weeks engineering + $5K/year additional MongoDB infrastructure

**Risk**: Medium - Improvements may not be sufficient

**Source**: <mention-page url="...">Migration Options Analysis</mention-page>

### Option B: Migrate to PostgreSQL

**Description**: Full migration from MongoDB to PostgreSQL. Redesign schema with proper constraints, implement dual-write period, then cut over.

**Pros**:
- ✅ Solves schema consistency issues
- ✅ Full ACID transactions
- ✅ Better performance for relational queries
- ✅ Lower long-term complexity
- ✅ Industry standard, easier hiring

**Cons**:
- ❌ High migration effort (6-8 weeks)
- ❌ Requires schema redesign
- ❌ Application code changes extensive
- ❌ Risk of data loss during migration
- ❌ Downtime required (4-6 hours estimated)

**Cost/Effort**: 8 weeks engineering + $8K one-time migration cost, offset by $15K/year MongoDB savings (net $7K saved in year one, $15K/year thereafter)

**Risk**: High - Complex migration, application changes required

**Recommendation**: ✅ **Favored by engineering team**

**Source**: <mention-page url="...">Database Migration Proposal</mention-page>

### Option C: Hybrid Approach

**Description**: Keep MongoDB for document-heavy data (logs, analytics), migrate transactional data to PostgreSQL. Run both databases.

**Pros**:
- ✅ Phased migration (lower risk)
- ✅ Use best tool for each data type
- ✅ Can migrate incrementally
- ✅ Smaller initial scope (4 weeks)

**Cons**:
- ❌ Increased operational complexity
- ❌ Two databases to maintain
- ❌ Data consistency between databases challenging
- ❌ Higher infrastructure costs
- ❌ Complex application logic

**Cost/Effort**: 4 weeks initial + ongoing complexity + $10K/year additional infrastructure

**Risk**: Medium - Operational complexity increases

**Source**: <mention-page url="...">Migration Options Analysis</mention-page>

### Option D: Do Nothing

**Description**: Accept current performance and continue with MongoDB as-is.

**Implications**:
- Performance continues to degrade
- Technical debt increases
- Feature development slows
- Customer experience suffers
- Eventually forced into emergency migration

**Not recommended**

## Discussion Topics

### Technical Feasibility
1. Can we achieve < 4 hours downtime for Option B?
2. What's the rollback plan if PostgreSQL migration fails?
3. How do we handle data migration for 500GB?
4. Schema design - what constraints do we need?

### Business Impact
5. What's the customer impact of 4-6 hours downtime?
6. Can we schedule migration during low-traffic period?
7. How does this affect Q4 feature roadmap?
8. Cost-benefit analysis over 2-year horizon?

### Risk Management
9. What are the biggest risks with Option B?
10. How do we test thoroughly before cutover?
11. What's the rollback procedure and time?
12. Do we have necessary expertise on team?

### Timeline & Resources
13. Can we allocate 2 engineers full-time for 8 weeks?
14. Do we need external consultants?
15. What's the impact on other Q4 projects?
16. When could we realistically complete this?

## Decision Framework

**Evaluation criteria**:
1. **Performance improvement**: Will this solve our performance issues?
2. **Technical debt**: Does this reduce or increase complexity?
3. **Risk**: What's the probability and impact of failure?
4. **Cost**: Total cost of ownership (effort + infrastructure)
5. **Timeline**: Can we complete in Q4?
6. **Reversibility**: Can we roll back if needed?

**Decision makers**:
- Primary: David Kim (CTO)
- Consulted: Sarah Chen (Engineering), Bob Martinez (DevOps)
- Informed: Product, Executive team

## Decision

*[To be filled during meeting]*

**Selected Option**: ___________________

**Rationale**:

**Timeline**:

**Owner**: ___________________

**Success Criteria**:

## Action Items

*[To be filled during meeting]*

- [ ] [Action item] - @[Owner] - Due: [Date]
- [ ] [Action item] - @[Owner] - Due: [Date]

## Next Steps

**Immediate**:
- Document decision rationale
- Create detailed implementation plan
- Schedule kickoff meeting

**Follow-up meeting**: [Date] to review migration plan

## Related Resources

### Primary Sources
- <mention-page url="...">Database Migration Proposal</mention-page>
- <mention-page url="...">Current Database Performance Issues</mention-page>
- <mention-page url="...">Migration Options Analysis</mention-page>
- <mention-page url="...">Previous Migration Post-mortem</mention-page>

### Supporting Documents
- MongoDB performance metrics dashboard
- PostgreSQL evaluation results
- Cost analysis spreadsheet
```

### Step 5: Share Agenda

```
Notion:notion-create-comment
parent: { page_id: "agenda-page-id" }
rich_text: [{
  type: "text",
  text: { content: "📋 Meeting agenda ready! Please review the pre-read before tomorrow's meeting. cc: @Sarah @David @Alice @Bob @Carol" }
}]
```

## Summary Provided to User

```markdown
## Meeting Prep Complete ✅

### Meeting Details
- **Topic**: Database Migration Strategy Decision
- **Date**: October 15, 2025 at 2:00 PM
- **Duration**: 90 minutes
- **Attendees**: 5 (Engineering leads + CTO + Product)

### Agenda Created
<mention-page url="...">Architecture Decision: Database Migration Strategy</mention-page>

### Pre-Read Summary

**Problem**: MongoDB performance degrading (50ms → 400ms queries), schema flexibility causing issues

**Decision needed**: Choose between:
1. Stay on MongoDB (optimizations) - 2 weeks, low risk
2. Migrate to PostgreSQL - 8 weeks, high value, high risk ✅ Recommended
3. Hybrid approach - 4 weeks, medium complexity
4. Do nothing - Not recommended

**Recommendation**: Engineering team favors PostgreSQL migration

### Key Information Included

✅ **Background context** from 4 relevant pages
✅ **Options analysis** with pros/cons for each
✅ **Cost-benefit** breakdown
✅ **Risk assessment** for each option
✅ **Discussion topics** (16 questions organized by theme)
✅ **Decision framework** with evaluation criteria
✅ **Historical context** from previous migration
✅ **Empty sections** for decision and action items

### Discussion Topics Prepared

**Technical** (4 questions):
- Downtime achievability
- Rollback planning
- Data migration approach
- Schema design

**Business** (4 questions):
- Customer impact
- Scheduling
- Roadmap impact
- Cost analysis

**Risk** (4 questions):
- Risk identification
- Testing strategy
- Rollback procedure
- Expertise availability

**Resources** (4 questions):
- Team allocation
- External help needs
- Project trade-offs
- Timeline realism

### Success Factors

1. **Comprehensive research**: 4 relevant pages analyzed
2. **Clear options**: Pros/cons for each approach
3. **Structured discussion**: 16 questions organized by theme
4. **Decision framework**: Clear criteria for evaluation
5. **Historical context**: Lessons from previous migration
6. **Ready for outcomes**: Sections prepared for decision and actions

**Meeting is well-prepared for productive decision-making! 📊**
```

## Key Features Demonstrated

### Context Gathering
- **Targeted search**: Engineering teamspace, recent documents
- **Multiple perspectives**: Performance data, proposals, analysis, historical lessons
- **Comprehensive synthesis**: Combined technical, business, and historical context

### Options Analysis
- **Structured comparison**: Pros/cons for each option
- **Cost-benefit**: Effort and infrastructure costs included
- **Risk assessment**: Probability and impact noted
- **Recommendation**: Clear engineering preference stated

### Decision Support
- **Discussion topics**: 16 questions organized by theme
- **Decision framework**: Evaluation criteria defined
- **Decision makers**: Roles and responsibilities clear
- **Outcome capture**: Sections ready for decision and actions

### Meeting Structure
- **Pre-read**: Comprehensive background (can be read in 10 minutes)
- **Options**: Clear comparison for quick decision
- **Discussion**: Structured topics prevent rambling
- **Capture**: Templates for decision and actions

Perfect for: Architecture decisions, technical trade-offs, strategic choices
quick_brief_format
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill quick_brief_format from openai
View skill
# Quick Brief Format

**When to use**:
- Time-sensitive requests
- Simple topics
- Status updates
- Quick reference needs

## Characteristics

**Length**: 200-400 words

**Structure**:
- 3-4 sentence summary
- 3-5 bullet key points
- Short action items list
- Brief source list

## Template

See [quick-brief-template.md](quick-brief-template.md) for the full template.

## Best For

- Fast turnaround requests
- Simple, straightforward topics
- Quick status updates
- When time is more important than depth
- Initial exploration before deeper research

## Example Use Cases

- "Quick summary of what's in our API docs"
- "Fast brief on the meeting notes from yesterday"
- "What are the key points from that spec?"
- "Give me a quick overview of the project status"
quick_brief_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill quick_brief_template from openai
View skill
# Quick Brief Template

Use for fast turnaround requests or simple topics. See [quick-brief-format.md](quick-brief-format.md) for when to use this format.

```markdown
# [Topic] - Quick Brief

**Date**: [Current date]

## Summary
[3-4 sentences covering the essentials]

## Key Points
- **Point 1**: [Details]
- **Point 2**: [Details]
- **Point 3**: [Details]

## Action Items
1. [Immediate next step]
2. [Follow-up action]

## Sources
[Brief list of pages consulted]
```
quick_implementation_plan
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill quick_implementation_plan from openai
View skill
# Quick Implementation Plan Template

For simpler features or small changes.

```markdown
# Implementation: [Feature Name]

## Spec
<mention-page url="...">Specification</mention-page>

## Summary
[Quick description]

## Tasks
- [ ] <mention-page url="...">Task 1</mention-page>
- [ ] <mention-page url="...">Task 2</mention-page>
- [ ] <mention-page url="...">Task 3</mention-page>

## Timeline
Start: [Date]
Target completion: [Date]

## Status
[Update as work progresses]
```
readme
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill readme from openai
View skill
# Knowledge Capture Skill Evaluations

Evaluation scenarios for testing the Knowledge Capture skill across different models.

## Purpose

These evaluations ensure the Knowledge Capture skill:
- Correctly identifies content types (how-to guides, FAQs, decision records, wikis)
- Extracts relevant information from conversations
- Structures content appropriately for each type
- Searches and places content in the right Notion location
- Works consistently across Haiku, Sonnet, and Opus

## Evaluation Files

### conversation-to-wiki.json
Tests capturing conversation content as a how-to guide for the team wiki.

**Scenario**: Save deployment discussion to wiki  
**Key Behaviors**:
- Extracts steps, gotchas, and best practices from conversation
- Identifies content as How-To Guide
- Structures with proper sections (Overview, Prerequisites, Steps, Troubleshooting)
- Searches for team wiki location
- Preserves technical details (commands, configs)

### decision-record.json
Tests capturing architectural or technical decisions with full context.

**Scenario**: Document database migration decision  
**Key Behaviors**:
- Extracts decision context, alternatives, and rationale
- Follows decision record structure (Context, Decision, Alternatives, Consequences)
- Captures both selected and rejected options with reasoning
- Places in decision log or ADR database
- Links to related technical documentation

## Running Evaluations

1. Enable the `knowledge-capture` skill
2. Submit the query from the evaluation file
3. Provide conversation context as specified
4. Verify all expected behaviors are met
5. Check success criteria for quality
6. Test with Haiku, Sonnet, and Opus
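
A minimal manual-harness sketch, assuming the JSON layout shown under Bundled Sources below:

```python
# Load one evaluation scenario and print the checklist to verify by hand.
import json

with open("conversation-to-wiki.json") as f:
    scenario = json.load(f)

print(f"Query: {scenario['query']}")
print(f"Skills: {', '.join(scenario['skills'])}")
print(f"Context: {scenario['context']}")
for item in scenario["expected_behavior"]:
    print(f"[ ] {item}")
for item in scenario["success_criteria"]:
    print(f"[ ] (quality) {item}")
```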

## Expected Skill Behaviors

Knowledge Capture evaluations should verify:

### Content Extraction
- Accurately captures key points from conversation context
- Preserves specific technical details, not generic placeholders
- Maintains context and nuance from discussion

### Content Type Selection
- Correctly identifies appropriate content type (how-to, FAQ, decision record, wiki page)
- Uses matching structure from reference documentation
- Applies proper Notion markdown formatting

### Notion Integration
- Searches for appropriate target location (wiki, decision log, etc.)
- Creates well-structured pages with clear titles
- Uses proper parent placement
- Includes discoverable titles and metadata

### Quality Standards
- Content is actionable and future-reference ready
- Technical accuracy is preserved
- Organization aids discoverability
- Formatting enhances readability

## Creating New Evaluations

When adding Knowledge Capture evaluations:

1. **Use realistic conversation content** - Include actual technical details, decisions, or processes
2. **Test different content types** - How-to guides, FAQs, decision records, meeting notes, learnings
3. **Vary complexity** - Simple captures vs. complex technical discussions
4. **Test discovery** - Finding the right wiki section or database
5. **Include edge cases** - Unclear content types, minimal context, overlapping categories
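
As a starting point, a sketch that writes a skeleton scenario file mirroring the bundled examples (the scenario content here is purely hypothetical):

```python
import json

# Fields mirror conversation-to-wiki.json / decision-record.json below.
scenario = {
    "name": "Capture FAQ From Support Thread",  # hypothetical scenario
    "skills": ["knowledge-capture"],
    "query": "Save these support answers as a team FAQ",
    "context": "Conversation contains recurring questions with answers",
    "expected_behavior": ["Identifies content type as FAQ"],
    "success_criteria": ["Each question is a heading with the answer below"],
}

with open("faq-capture.json", "w") as f:
    json.dump(scenario, f, indent=2)
```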

## Example Success Criteria

**Good** (specific, testable):
- "Structures content using How-To format with numbered steps"
- "Preserves exact bash commands from conversation"
- "Creates page with title format 'How to [Action]'"
- "Places in Engineering Wiki → Deployment section"

**Bad** (vague, untestable):
- "Creates good documentation"
- "Uses appropriate structure"
- "Saves to the right place"

## Bundled Sources

### conversation-to-wiki.json

Source: `/a0/tmp/skills_research/openai/skills/.curated/notion-knowledge-capture/evaluations/conversation-to-wiki.json`

```json
{
  "name": "Save Conversation to Wiki",
  "skills": ["knowledge-capture"],
  "query": "Save this conversation about deploying our application to production to the team wiki",
  "context": "Preceding conversation contains discussion about deployment process, including steps, gotchas, and best practices",
  "expected_behavior": [
    "Extracts key information from conversation context (deployment steps, gotchas, best practices)",
    "Identifies content type as How-To Guide based on procedural nature",
    "Structures content using How-To structure: Overview → Prerequisites → Steps (numbered) → Verification → Troubleshooting → Related",
    "Organizes information into clear sections with proper headings",
    "Includes specific commands, configurations, or examples from conversation",
    "Adds context about why/when to use this process in Overview section",
    "Notes common issues and solutions mentioned in discussion in Troubleshooting section",
    "Uses Notion:notion-search to find team wiki location or asks user",
    "Creates page using Notion:notion-create-pages with structured content and appropriate parent",
    "Uses clear, descriptive title like 'How to Deploy to Production'",
    "Applies Notion markdown formatting (headings, code blocks, bullets)",
    "Suggests tags/categories for discoverability if wiki database"
  ],
  "success_criteria": [
    "Content is structured using How-To format from SKILL.md content types",
    "Key points from conversation are captured accurately (not generic)",
    "Information is organized with proper Notion markdown (##, ###, bullets, code blocks)",
    "Specific technical details (commands, configs) are preserved from conversation",
    "Document is written for future reference with clear step-by-step instructions",
    "Title is searchable and descriptive (e.g., 'How to Deploy to Production')",
    "Page is placed in appropriate wiki location (general wiki or specific section)",
    "Uses correct tool name (Notion:notion-create-pages)"
  ]
}
```

### decision-record.json

Source: `/a0/tmp/skills_research/openai/skills/.curated/notion-knowledge-capture/evaluations/decision-record.json`

```json
{
  "name": "Create Decision Record",
  "skills": ["knowledge-capture"],
  "query": "Document our decision to use PostgreSQL instead of MongoDB for our new service",
  "context": "User has just explained the decision with rationale, options considered, and trade-offs",
  "expected_behavior": [
    "Recognizes this as a decision record (architectural decision) from conversation context",
    "Uses Decision structure: Context → Decision → Rationale → Options Considered (with Pros/Cons) → Consequences → Implementation",
    "Extracts from context: decision made, options considered (PostgreSQL vs MongoDB), rationale, trade-offs",
    "Creates document with proper structure including Date, Status (Accepted), and Deciders",
    "Includes both positive and negative consequences (trade-offs) in Consequences section",
    "Uses Notion:notion-search to check if decision log database exists",
    "If database exists, asks whether to add there or create standalone page",
    "If creating in database, fetches schema using Notion:notion-fetch and sets properties: Decision title, Date, Status, Domain (Architecture), Deciders, Impact",
    "Uses Notion:notion-create-pages with parent: { data_source_id } for database or { page_id } for parent page",
    "Applies proper Notion markdown formatting with sections",
    "Suggests linking from architecture docs or project pages"
  ],
  "success_criteria": [
    "Document follows Decision structure from SKILL.md content types",
    "All key sections present: Context, Decision, Rationale, Options Considered (with Pros/Cons for each), Consequences, Implementation",
    "Decision is clearly stated (PostgreSQL chosen over MongoDB)",
    "Options that were considered are documented with pros/cons structure",
    "Rationale explains why PostgreSQL was chosen based on conversation context",
    "Consequences include both positive (benefits) and negative (trade-offs)",
    "If in database, properties are set correctly from schema (Decision, Date, Status: Accepted, Domain: Architecture, Impact)",
    "Document is dated and has status 'Accepted'",
    "Uses correct tool names (Notion:notion-search, Notion:notion-fetch, Notion:notion-create-pages)"
  ]
}
```
research_summary_format
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill research_summary_format from openai
View skill
# Research Summary Format

**When to use**: General research requests, most common format

## Characteristics

**Length**: 500-1000 words typically

**Structure**:
- Executive summary (2-3 sentences)
- 3-5 key findings with supporting evidence
- Detailed analysis section
- Conclusions and next steps
- Source citations

## Template

See [research-summary-template.md](research-summary-template.md) for the full template.

## Best For

- Most general-purpose research requests
- Standard documentation needs
- Balanced depth and readability
- When you need comprehensive but accessible information

## Example Use Cases

- "Research our authentication options"
- "What does our project documentation say about the API redesign?"
- "Summarize the team's discussion about mobile strategy"
- "Compile information about our deployment process"
research_summary_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill research_summary_template from openai
View skill
# Research Summary Template

Use this for most research requests. See [research-summary-format.md](research-summary-format.md) for when to use this format.

```markdown
# [Topic Name]

## Executive Summary
[2-3 sentence overview of key findings and implications]

## Key Findings

### Finding 1: [Clear headline]
[Details and supporting evidence]
- Source: <mention-page url="...">Original Page</mention-page>

### Finding 2: [Clear headline]
[Details and supporting evidence]
- Source: <mention-page url="...">Original Page</mention-page>

### Finding 3: [Clear headline]
[Details and supporting evidence]
- Source: <mention-page url="...">Original Page</mention-page>

## Detailed Analysis

### [Section 1]
[In-depth discussion of first major theme]

### [Section 2]
[In-depth discussion of second major theme]

## Conclusions

[Summary of implications and insights]

## Next Steps

1. [Actionable recommendation]
2. [Actionable recommendation]
3. [Actionable recommendation]

## Sources

- <mention-page url="...">Page Title</mention-page>
- <mention-page url="...">Page Title</mention-page>
- <mention-page url="...">Page Title</mention-page>
```
retrospective_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill retrospective_template from openai
View skill
# Retrospective Template

Use this template for sprint retrospectives and team retrospectives.

```markdown
# Sprint [#] Retrospective - [Date]

## Meeting Details
**Date**: [Date]
**Team**: [Team]
**Sprint**: [Sprint dates]
**Facilitator**: [Name]

## Sprint Summary

**Sprint Goal**: [Goal]
**Goal Met**: Yes / Partially / No

**Planned**: [#] points
**Completed**: [#] points
**Velocity**: [#] points

## Pre-Read

**Sprint Metrics**:
- Tasks completed: [#]
- Tasks carried over: [#]
- Bugs found: [#]
- Blockers encountered: [#]

## Discussion

### What Went Well (Keep)

[Team input during meeting]

### What Didn't Go Well (Stop)

[Team input during meeting]

### What To Try (Start)

[Team input during meeting]

### Shout-outs

[Team recognition]

## Action Items

- [ ] [Improvement to implement] - @[Owner] - Due: [Date]
- [ ] [Process change] - @[Owner] - Due: [Date]

## Follow-up

**Review actions in**: [Next retro date]
```
skill
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill skill from openai
View skill
---
name: gh-address-comments
description: Help address review/issue comments on the open GitHub PR for the current branch using gh CLI; verify gh auth first and prompt the user to authenticate if not logged in.
metadata:
  short-description: Address comments in a GitHub PR review
---

# PR Comment Handler

A guide for finding the open PR for the current branch and addressing its comments with the gh CLI. Run all `gh` commands with elevated network access.

Prereq: ensure `gh` is authenticated (for example, run `gh auth login` once), then verify with `gh auth status` using escalated permissions (include workflow/repo scopes) so later `gh` commands succeed. If sandboxing blocks `gh auth status`, rerun it with `sandbox_permissions=require_escalated`.

## 1) Inspect comments needing attention
- Run `scripts/fetch_comments.py`, which prints all comments and review threads on the PR; a hypothetical sketch of the script follows.
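
For reference, a minimal Python sketch of what `scripts/fetch_comments.py` might do, assuming the standard GitHub REST endpoints for conversation and review comments (the bundled script may differ):

```python
# Hypothetical sketch of scripts/fetch_comments.py: list all comments on
# the current branch's open PR via the gh CLI.
import json
import subprocess

def gh(*args: str) -> str:
    """Run a gh command and return its stdout (raises if gh fails)."""
    return subprocess.run(["gh", *args], check=True,
                          capture_output=True, text=True).stdout

# PR number for the current branch; gh errors out if no open PR exists.
pr_number = gh("pr", "view", "--json", "number", "--jq", ".number").strip()

# Review comments hang off the pulls endpoint; conversation comments off issues.
review = json.loads(gh("api", f"repos/{{owner}}/{{repo}}/pulls/{pr_number}/comments"))
convo = json.loads(gh("api", f"repos/{{owner}}/{{repo}}/issues/{pr_number}/comments"))

for i, c in enumerate(review + convo, start=1):
    print(f"[{i}] {c['user']['login']}: {c['body'][:120]}")
```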

## 2) Ask the user for clarification
- Number all the review threads and comments, and provide a short summary of what fixing each one would require
- Ask the user which numbered comments should be addressed

## 3) If the user selects comments
- Apply fixes for the selected comments

Notes:
- If `gh` hits authentication or rate-limit issues mid-run, prompt the user to re-authenticate with `gh auth login`, then retry.

## Bundled Sources

### LICENSE.txt

Source: `/a0/tmp/skills_research/openai/skills/.curated/gh-address-comments/LICENSE.txt`

```text
Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
```
spec_parsing
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill spec_parsing from openai
View skill
# Specification Parsing

## Finding the Specification

Before parsing, locate the spec page:

```
1. Search for spec:
   Notion:notion-search
   query: "[Feature Name] spec" or "[Feature Name] specification"
   
2. Handle results:
   - If found → use page URL/ID
   - If multiple → ask user which one
   - If not found → ask user for URL/ID

Example:
Notion:notion-search
query: "User Profile API spec"
query_type: "internal"
```

## Reading Specifications

After finding the spec, fetch it with `Notion:notion-fetch`:

1. Read the full content
2. Identify key sections
3. Extract structured information
4. Note ambiguities or gaps

```
Notion:notion-fetch
id: "spec-page-id-from-search"
```

## Common Spec Structures

### Requirements-Based Spec

```
# Feature Spec
## Overview
[Feature description]

## Requirements
### Functional
- REQ-1: [Requirement]
- REQ-2: [Requirement]

### Non-Functional
- PERF-1: [Performance requirement]
- SEC-1: [Security requirement]

## Acceptance Criteria
- AC-1: [Criterion]
- AC-2: [Criterion]
```

Extract:
- List of functional requirements
- List of non-functional requirements
- List of acceptance criteria

### User Story Based Spec

```
# Feature Spec
## User Stories
### As a [user type]
I want [goal]
So that [benefit]

**Acceptance Criteria**:
- [Criterion]
- [Criterion]
```

Extract:
- User personas
- Goals/capabilities needed
- Acceptance criteria per story

### Technical Design Doc

```
# Technical Design
## Problem Statement
[Problem description]

## Proposed Solution
[Solution approach]

## Architecture
[Architecture details]

## Implementation Plan
[Implementation approach]
```

Extract:
- Problem being solved
- Proposed solution approach
- Architectural decisions
- Implementation guidance

### Product Requirements Document (PRD)

```
# PRD: [Feature]
## Goals
[Business goals]

## User Needs
[User problems being solved]

## Features
[Feature list]

## Success Metrics
[How to measure success]
```

Extract:
- Business goals
- User needs
- Feature list
- Success metrics

## Extraction Strategies

### Requirement Identification

Look for these signals (a short extraction sketch follows the list):
- "Must", "Should", "Will" statements
- Numbered requirements (REQ-1, etc.)
- User stories (As a... I want...)
- Acceptance criteria sections
- Feature lists
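
As a worked example, a minimal sketch (not part of the skill itself) of scanning spec text for these signals:

```python
# Flag lines that look like requirements: modal verbs, numbered IDs
# (REQ-1, PERF-1, SEC-1, AC-1), or user-story openers.
import re

SIGNALS = [
    r"\b(?:must|should|will)\b",        # modal-verb statements
    r"\b(?:REQ|PERF|SEC|AC)-\d+\b",     # numbered requirement IDs
    r"^as an? .+ i want",               # user stories
]
PATTERN = re.compile("|".join(SIGNALS), re.IGNORECASE)

def find_requirement_lines(spec_text: str) -> list[str]:
    return [line.strip() for line in spec_text.splitlines()
            if PATTERN.search(line)]

sample = """REQ-1: Users must log in with email and password.
The dashboard will refresh every minute.
Background colors vary by theme."""
print(find_requirement_lines(sample))  # first two lines match
```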

### Categorization

Group requirements by:

**Functional**: What the system does
- User capabilities
- System behaviors
- Data operations

**Non-Functional**: How the system performs
- Performance targets
- Security requirements
- Scalability needs
- Availability requirements
- Compliance requirements

**Constraints**: Limitations
- Technical constraints
- Business constraints
- Timeline constraints

### Priority Extraction

Identify priority indicators:
- "Critical", "Must have", "P0"
- "Important", "Should have", "P1"
- "Nice to have", "Could have", "P2"
- "Future", "Won't have", "P3"

Map to implementation phases based on priority.
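
One illustrative (not prescriptive) way to encode that mapping:

```python
# Map priority keywords from the spec to P0-P3 buckets.
PRIORITY_KEYWORDS = {
    "P0": ("critical", "must have", "p0"),
    "P1": ("important", "should have", "p1"),
    "P2": ("nice to have", "could have", "p2"),
    "P3": ("future", "won't have", "p3"),
}

def classify(requirement: str) -> str:
    text = requirement.lower()
    for priority, keywords in PRIORITY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return priority
    return "P1"  # unlabeled requirements: triage as important by default

print(classify("Critical: enforce session expiry"))  # P0
```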

## Handling Ambiguity

### Unclear Requirements

When a requirement is ambiguous:

```markdown
## Clarifications Needed

### [Requirement ID/Description]
**Current text**: "[Ambiguous requirement]"
**Question**: [What needs clarification]
**Impact**: [Why this matters for implementation]
**Assumed for now**: [Working assumption if any]
```

Create clarification task or add comment to spec.

### Missing Information

When critical info is missing:

```markdown
## Missing Information

- **[Topic]**: Spec doesn't specify [what's missing]
- **Impact**: Blocks [affected tasks]
- **Action**: Need to [how to resolve]
```

### Conflicting Requirements

When requirements conflict:

```markdown
## Conflicting Requirements

**Conflict**: REQ-1 says [X] but REQ-5 says [Y]
**Impact**: [Implementation impact]
**Resolution needed**: [Decision needed]
```

## Acceptance Criteria Parsing

### Explicit Criteria

Direct acceptance criteria:

```
## Acceptance Criteria
- User can log in with email and password
- System sends confirmation email
- Session expires after 24 hours
```

Convert to checklist:
- [ ] User can log in with email and password
- [ ] System sends confirmation email
- [ ] Session expires after 24 hours
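
The conversion is mechanical; a one-function sketch:

```python
def to_checklist(criteria: str) -> str:
    """Rewrite '- item' bullets as '- [ ] item' Markdown checkboxes."""
    return "\n".join(
        line.replace("- ", "- [ ] ", 1) if line.lstrip().startswith("- ") else line
        for line in criteria.splitlines()
    )

print(to_checklist("- User can log in with email and password"))
# -> - [ ] User can log in with email and password
```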

### Implicit Criteria

Derive from requirements:

```
Requirement: "Users can upload files up to 100MB"

Implied acceptance criteria:
- [ ] Files up to 100MB upload successfully
- [ ] Files over 100MB are rejected with error message
- [ ] Progress indicator shows during upload
- [ ] Upload can be cancelled
```

### Testable Criteria

Ensure criteria are testable:

❌ **Not testable**: "System is fast"
✓ **Testable**: "Page loads in < 2 seconds"

❌ **Not testable**: "Users like the interface"
✓ **Testable**: "90% of test users complete task successfully"

## Technical Detail Extraction

### Architecture Information

Extract:
- System components
- Data models
- APIs/interfaces
- Integration points
- Technology choices

### Design Decisions

Note:
- Technology selections
- Architecture patterns
- Trade-offs made
- Rationale provided

### Implementation Guidance

Look for:
- Suggested approach
- Code examples
- Library recommendations
- Best practices mentioned

## Dependency Identification

### External Dependencies

From spec, identify:
- Third-party services required
- External APIs needed
- Infrastructure requirements
- Tool/library dependencies

### Internal Dependencies

Identify:
- Other features needed first
- Shared components required
- Team dependencies
- Data dependencies

### Timeline Dependencies

Note:
- Hard deadlines
- Milestone dependencies
- Sequencing requirements

## Scope Extraction

### In Scope

What's explicitly included:
- Features to build
- Use cases to support
- Users/personas to serve

### Out of Scope

What's explicitly excluded:
- Features deferred
- Use cases not supported
- Edge cases not handled

### Assumptions

What's assumed:
- Environment assumptions
- User assumptions
- System state assumptions

## Risk Identification

Extract risk information:

### Technical Risks
- Unproven technology
- Complex integration
- Performance concerns
- Scalability unknowns

### Business Risks
- Market timing
- Resource availability
- Dependency on others

### Mitigation Strategies

Note any mitigation approaches mentioned in spec.

## Spec Quality Assessment

Evaluate spec completeness:

✓ **Good spec**:
- Clear requirements
- Explicit acceptance criteria
- Priorities defined
- Risks identified
- Technical approach outlined

⚠️ **Incomplete spec**:
- Vague requirements
- Missing acceptance criteria
- Unclear priorities
- No risk analysis
- Technical details absent

Document gaps and create clarification tasks.

## Parsing Checklist

Before creating implementation plan:

☐ All functional requirements identified
☐ Non-functional requirements noted
☐ Acceptance criteria extracted
☐ Dependencies identified
☐ Risks noted
☐ Ambiguities documented
☐ Technical approach understood
☐ Scope is clear
☐ Priorities are defined
sprint_planning
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill sprint_planning from openai
View skill
# Example: Sprint Planning Meeting Prep

**User Request**: "Prepare for tomorrow's sprint planning meeting"

## Workflow

### 1. Search for Context
```
Notion:notion-search
query: "sprint planning" + "product backlog"
teamspace_id: "engineering-team"
```

Found:
- Last sprint retrospective
- Product backlog (prioritized)
- Current sprint progress
- Team capacity notes

### 2. Fetch Details
```
Notion:notion-fetch (4 pages)
```

**Key context**:
- **Last Sprint**: Completed 32/35 story points (91%)
- **Velocity**: Consistent 30-35 points over last 3 sprints
- **Team**: 5 engineers, 1 on vacation next sprint (80% capacity; arithmetic sketched below)
- **Top Backlog Items**: User auth improvements, API performance, mobile responsive fixes
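
The 28-point capacity cited in the agenda below follows from scaling the velocity ceiling by available headcount; a quick check of the arithmetic (numbers taken from the context above):

```python
velocity_ceiling = 35        # top of the 30-35 point trailing velocity
team_size, available = 5, 4  # one of five engineers on vacation
print(round(velocity_ceiling * available / team_size))  # 28 points
```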

### 3. Query Current Sprint Tasks
```
Notion:notion-query-data-sources
query: "SELECT * FROM tasks WHERE Sprint = 'Sprint 24' AND Status != 'Done'"
```

3 tasks carrying over (technical debt items)

### 4. Create Pre-Read (Internal)
```
Notion:notion-create-pages
title: "Sprint 25 Planning - Pre-Read (Internal)"
```

**Pre-read included**:
- Sprint 24 summary (velocity, what carried over)
- Team capacity for Sprint 25
- Top backlog candidates with story points
- Technical dependencies
- Risk items (auth changes need QA time)

### 5. Create Agenda
```
Notion:notion-create-pages  
title: "Sprint 25 Planning - Agenda"
```

**Agenda**:
- Review Sprint 24 completion (5 min)
- Discuss carryover items (5 min)
- Review capacity (28 points available)
- Select backlog items (30 min)
- Identify dependencies & risks (10 min)
- Confirm commitments (10 min)

### 6. Link Documents
Cross-linked pre-read and agenda, referenced last retro and backlog.

## Output Summary

**Internal Pre-Read**: Team context, capacity, blockers
**External Agenda**: Meeting structure, discussion topics
**Both saved to Notion** and linked to project pages

## Key Success Factors
- Gathered sprint history for velocity trends
- Calculated realistic capacity (account for PTO)
- Identified carryover items upfront
- Pre-read gave team context before meeting
- Agenda kept meeting focused and timeboxed
sprint_planning_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill sprint_planning_template from openai
View skill
# Sprint Planning Template

Use this template for agile sprint planning meetings.

```markdown
# Sprint [#] Planning - [Date]

## Meeting Details
**Date**: [Date]
**Team**: [Team name]
**Sprint Duration**: [Dates]

## Sprint Goal

[Clear statement of what this sprint aims to accomplish]

## Capacity

| Team Member | Availability | Capacity (points) |
|-------------|--------------|-------------------|
| [Name] | [%] | [#] |
| **Total** | | [#] |

## Backlog Review

### High Priority Items

[From product backlog, linked from task database]

- <mention-page url="...">Task 1</mention-page> - [Points]
- <mention-page url="...">Task 2</mention-page> - [Points]

## Sprint Backlog

### Committed Items

- [x] <mention-page url="...">Task</mention-page> - [Points] - @[Owner]
- [ ] <mention-page url="...">Task</mention-page> - [Points] - @[Owner]

**Total committed**: [Points]

### Stretch Goals

- [ ] <mention-page url="...">Task</mention-page> - [Points]

## Dependencies & Risks

**Dependencies**:
- [Dependency]

**Risks**:
- [Risk]

## Definition of Done

- [ ] Code complete and reviewed
- [ ] Tests written and passing
- [ ] Documentation updated
- [ ] Deployed to staging
- [ ] QA approved

## Next Steps

- Team begins sprint work
- Daily standups at [Time]
- Sprint review on [Date]
```
standard_implementation_plan
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill standard_implementation_plan from openai
View skill
# Standard Implementation Plan Template

Use this template for most feature implementations.

```markdown
# Implementation Plan: [Feature Name]

## Overview
[1-2 sentence feature description and business value]

## Linked Specification
<mention-page url="...">Original Specification</mention-page>

## Requirements Summary

### Functional Requirements
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]

### Non-Functional Requirements
- **Performance**: [Targets]
- **Security**: [Requirements]
- **Scalability**: [Needs]

### Acceptance Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]
- [ ] [Criterion 3]

## Technical Approach

### Architecture
[High-level architectural decisions]

### Technology Stack
- Backend: [Technologies]
- Frontend: [Technologies]
- Infrastructure: [Technologies]

### Key Design Decisions
1. **[Decision]**: [Rationale]
2. **[Decision]**: [Rationale]

## Implementation Phases

### Phase 1: Foundation (Week 1)
**Goal**: Set up core infrastructure

**Tasks**:
- [ ] <mention-page url="...">Database schema design</mention-page>
- [ ] <mention-page url="...">API scaffolding</mention-page>
- [ ] <mention-page url="...">Authentication setup</mention-page>

**Deliverables**: Working API skeleton
**Estimated effort**: 3 days

### Phase 2: Core Features (Week 2-3)
**Goal**: Implement main functionality

**Tasks**:
- [ ] <mention-page url="...">Feature A implementation</mention-page>
- [ ] <mention-page url="...">Feature B implementation</mention-page>

**Deliverables**: Core features working
**Estimated effort**: 1 week

### Phase 3: Integration & Polish (Week 4)
**Goal**: Complete integration and refinement

**Tasks**:
- [ ] <mention-page url="...">Frontend integration</mention-page>
- [ ] <mention-page url="...">Testing & QA</mention-page>

**Deliverables**: Production-ready feature
**Estimated effort**: 1 week

## Dependencies

### External Dependencies
- [Dependency 1]: [Status]
- [Dependency 2]: [Status]

### Internal Dependencies
- [Team/component dependency]

### Blockers
- [Known blocker] or None currently

## Risks & Mitigation

### Risk 1: [Description]
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation**: [Strategy]

### Risk 2: [Description]
- **Probability**: High/Medium/Low
- **Impact**: High/Medium/Low
- **Mitigation**: [Strategy]

## Timeline

| Milestone | Target Date | Status |
|-----------|-------------|--------|
| Phase 1 Complete | [Date] | ⏳ Planned |
| Phase 2 Complete | [Date] | ⏳ Planned |
| Phase 3 Complete | [Date] | ⏳ Planned |
| Launch | [Date] | ⏳ Planned |

## Success Criteria

### Technical Success
- [ ] All acceptance criteria met
- [ ] Performance targets achieved
- [ ] Security requirements satisfied
- [ ] Test coverage > 80%

### Business Success
- [ ] [Business metric 1]
- [ ] [Business metric 2]

## Resources

### Documentation
- <mention-page url="...">Design Doc</mention-page>
- <mention-page url="...">API Spec</mention-page>

### Related Work
- <mention-page url="...">Related Feature</mention-page>

## Progress Tracking

[This section updated regularly]

### Phase Status
- Phase 1: ⏳ Not Started
- Phase 2: ⏳ Not Started
- Phase 3: ⏳ Not Started

**Overall Progress**: 0% complete

### Latest Update: [Date]
[Brief status update]
```
status_update_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill status_update_template from openai
View skill
# Status Update Meeting Template

Use this template for regular project status updates and check-ins.

```markdown
# [Project Name] Status Update - [Date]

## Meeting Details
**Date**: [Date and time]
**Attendees**: [List]
**Project**: <mention-page url="...">Project Page</mention-page>

## Executive Summary

**Status**: 🟢 On Track / 🟡 At Risk / 🔴 Behind

**Progress**: [Percentage] complete
**Timeline**: [Status vs original plan]

## Progress Since Last Meeting

### Completed
- [Accomplishment with specifics]
- [Accomplishment with specifics]

### In Progress
- [Work item and status]
- [Work item and status]

## Metrics

| Metric | Current | Target | Status |
|--------|---------|--------|--------|
| [Metric] | [Value] | [Value] | [Icon] |
| [Metric] | [Value] | [Value] | [Icon] |

## Upcoming Work

**Next 2 Weeks**:
- [Planned work]
- [Planned work]

**Next Month**:
- [Milestone or major work]

## Blockers & Risks

### Active Blockers
- **[Blocker]**: [Description and impact]
  - Action: [What's being done]

### Risks
- **[Risk]**: [Description]
  - Mitigation: [Strategy]

## Discussion Topics

1. [Topic requiring input]
2. [Topic for alignment]

## Decisions Needed

- [Decision] or None

## Action Items

- [ ] [Action] - @[Owner] - Due: [Date]

## Next Meeting

**Date**: [Date]
**Focus**: [What next meeting will cover]
```
task_creation
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill task_creation from openai
View skill
# Task Creation from Specs

## Finding the Task Database

Before creating tasks, locate the task database:

```
1. Search for task database:
   Notion:notion-search
   query: "Tasks" or "Task Management" or "[Project] Tasks"
   
2. Fetch database schema:
   Notion:notion-fetch
   id: "database-id-from-search"
   
3. Identify data source:
   - Look for <data-source url="collection://..."> tags
   - Extract collection ID for parent parameter
   
4. Note schema:
   - Required properties
   - Property types and options
   - Relation properties for linking

Example:
Notion:notion-search
query: "Engineering Tasks"
query_type: "internal"

Notion:notion-fetch
id: "tasks-database-id"
```

Result: `collection://abc-123-def` for use as parent

## Task Breakdown Strategy

### Size Guidelines

**Good task size**:
- Completable in 1-2 days
- Single clear deliverable
- Independently testable
- Minimal dependencies

**Too large**:
- Takes > 3 days
- Multiple deliverables
- Many dependencies
- Break down further

**Too small**:
- Takes < 2 hours
- Too granular
- Group with related work

### Granularity by Phase

**Early phases**: Larger tasks acceptable
- "Design database schema"
- "Set up API structure"

**Middle phases**: Medium-sized tasks
- "Implement user authentication"
- "Build dashboard UI"

**Late phases**: Smaller, precise tasks
- "Fix validation bug in form"
- "Add loading state to button"

## Task Creation Pattern

For each requirement or work item:

```
1. Identify the work
2. Determine task size
3. Create task in database
4. Set properties
5. Write task description
6. Link to spec/plan
```

### Creating Task

```
Use Notion:notion-create-pages:

parent: {
  type: "data_source_id",
  data_source_id: "collection://tasks-db-uuid"
}

properties: {
  "[Title Property]": "Task: [Clear task name]",
  "Status": "To Do",
  "Priority": "[High/Medium/Low]",
  "[Project/Related]": ["spec-page-id", "plan-page-id"],
  "Assignee": "[Person]" (if known),
  "date:Due Date:start": "[Date]" (if applicable),
  "date:Due Date:is_datetime": 0
}

content: "[Task description using template]"
```

## Task Description Template

```markdown
# [Task Name]

## Context
Implementation task for <mention-page url="...">Feature Spec</mention-page>

Part of <mention-page url="...">Implementation Plan</mention-page> - Phase [N]

## Objective
[What this task accomplishes]

## Requirements
Based on spec requirements:
- [Relevant requirement 1]
- [Relevant requirement 2]

## Acceptance Criteria
- [ ] [Specific, testable criterion]
- [ ] [Specific, testable criterion]
- [ ] [Specific, testable criterion]

## Technical Approach
[Suggested implementation approach]

### Components Affected
- [Component 1]
- [Component 2]

### Key Decisions
- [Decision point 1]
- [Decision point 2]

## Dependencies

### Blocked By
- <mention-page url="...">Prerequisite Task</mention-page> or None

### Blocks
- <mention-page url="...">Dependent Task</mention-page> or None

## Resources
- [Link to design mockup]
- [Link to API spec]
- [Link to relevant code]

## Estimated Effort
[Time estimate]

## Progress
[To be updated during implementation]
```

## Task Types

### Infrastructure/Setup Tasks

```
Title: "Setup: [What's being set up]"
Examples:
- "Setup: Configure database connection pool"
- "Setup: Initialize authentication middleware"
- "Setup: Create CI/CD pipeline"

Focus: Getting environment/tooling ready
```

### Feature Implementation Tasks

```
Title: "Implement: [Feature name]"
Examples:
- "Implement: User login flow"
- "Implement: File upload functionality"
- "Implement: Dashboard widget"

Focus: Building specific functionality
```

### Integration Tasks

```
Title: "Integrate: [What's being integrated]"
Examples:
- "Integrate: Connect frontend to API"
- "Integrate: Add payment provider"
- "Integrate: Link user profile to dashboard"

Focus: Connecting components
```

### Testing Tasks

```
Title: "Test: [What's being tested]"
Examples:
- "Test: Write unit tests for auth service"
- "Test: E2E testing for checkout flow"
- "Test: Performance testing for API"

Focus: Validation and quality assurance
```

### Documentation Tasks

```
Title: "Document: [What's being documented]"
Examples:
- "Document: API endpoints"
- "Document: Setup instructions"
- "Document: Architecture decisions"

Focus: Creating documentation
```

### Bug Fix Tasks

```
Title: "Fix: [Bug description]"
Examples:
- "Fix: Login error on Safari"
- "Fix: Memory leak in image processing"
- "Fix: Race condition in payment flow"

Focus: Resolving issues
```

### Refactoring Tasks

```
Title: "Refactor: [What's being refactored]"
Examples:
- "Refactor: Extract auth logic to service"
- "Refactor: Optimize database queries"
- "Refactor: Simplify component hierarchy"

Focus: Code quality improvement
```

## Sequencing Tasks

### Critical Path

Identify must-happen-first tasks:

```
1. Database schema
2. API foundation
3. Core business logic
4. Frontend integration
5. Testing
6. Deployment
```

### Parallel Tracks

Tasks that can happen simultaneously:

```
Track A: Backend development
- API endpoints
- Business logic
- Database operations

Track B: Frontend development
- UI components
- State management
- Routing

Track C: Infrastructure
- CI/CD setup
- Monitoring
- Documentation
```

### Phase-Based Sequencing

Group by implementation phase:

```
Phase 1 (Foundation):
- Setup tasks
- Infrastructure tasks

Phase 2 (Core):
- Feature implementation tasks
- Integration tasks

Phase 3 (Polish):
- Testing tasks
- Documentation tasks
- Optimization tasks
```

## Priority Assignment

### P0/Critical
- Blocks everything else
- Core functionality
- Security requirements
- Data integrity

### P1/High
- Important features
- User-facing functionality
- Performance requirements

### P2/Medium
- Nice-to-have features
- Optimizations
- Minor improvements

### P3/Low
- Future enhancements
- Edge case handling
- Cosmetic improvements

## Estimation

### Story Points

If using story points:
- 1 point: Few hours
- 2 points: Half day
- 3 points: Full day
- 5 points: 2 days
- 8 points: 3-4 days (consider breaking down)

### Time Estimates

Direct time estimates:
- 2-4 hours: Small task
- 1 day: Medium task
- 2 days: Large task
- 3+ days: Break down further

### Estimation Factors

Consider:
- Complexity
- Unknowns
- Dependencies
- Testing requirements
- Documentation needs

## Task Relationships

### Parent Task Pattern

For large features:

```
Parent: "Feature: User Authentication"
Children:
- "Setup: Configure auth library"
- "Implement: Login flow"
- "Implement: Password reset"
- "Test: Auth functionality"
```

### Dependency Chain Pattern

For sequential work:

```
Task A: "Design database schema"
↓ (blocks)
Task B: "Implement data models"
↓ (blocks)
Task C: "Create API endpoints"
↓ (blocks)
Task D: "Integrate with frontend"
```

### Related Tasks Pattern

For parallel work:

```
Central: "Feature: Dashboard"
Related:
- "Backend API for dashboard data"
- "Frontend dashboard component"
- "Dashboard data caching"
```

## Bulk Task Creation

When creating many tasks:

```
For each work item in breakdown:
  1. Determine task properties
  2. Create task page
  3. Link to spec/plan
  4. Set relationships

Then:
  1. Update plan with task links
  2. Review sequencing
  3. Assign tasks (if known)
```

## Task Naming Conventions

**Be specific**:
✓ "Implement user login with email/password"
✗ "Add login"

**Include context**:
✓ "Dashboard: Add revenue chart widget"
✗ "Add chart"

**Use action verbs**:
- Implement, Build, Create
- Integrate, Connect, Link
- Fix, Resolve, Debug
- Test, Validate, Verify
- Document, Write, Update
- Refactor, Optimize, Improve

## Validation Checklist

Before finalizing tasks:

☐ Each task has clear objective
☐ Acceptance criteria are testable
☐ Dependencies identified
☐ Appropriate size (1-2 days)
☐ Priority assigned
☐ Linked to spec/plan
☐ Proper sequencing
☐ Resources noted
task_creation_template
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill task_creation_template from openai
View skill
# Task Creation Template

When creating tasks from spec.

```markdown
# [Task Name]

## Context
Part of implementation for <mention-page url="...">Feature Spec</mention-page>

Implementation plan: <mention-page url="...">Implementation Plan</mention-page>

## Description
[What needs to be done]

## Acceptance Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]

## Technical Details
[Technical approach or notes]

## Dependencies
- Blocked by: [Task] or None
- Blocks: [Task] or None

## Resources
- [Link to design]
- [Link to related code]

## Progress
[To be updated during implementation]
```
team_wiki_database
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill team_wiki_database from openai
View skill
# Team Wiki Database

**Purpose**: Centralized team knowledge and resources.

## Schema

| Property | Type | Options | Purpose |
|----------|------|---------|---------|
| **Title** | title | - | Page name |
| **Section** | select | Getting Started, Processes, Tools, Reference, Onboarding | Wiki organization |
| **Tags** | multi_select | - | Topic tags |
| **Owner** | people | - | Page maintainer |
| **Last Updated** | last_edited_time | - | Auto-tracked |
| **Visibility** | select | Public, Team Only, Confidential | Access level |

## Usage

Use for team-specific documentation that doesn't fit other databases.

## Best Practices

1. **Organize by sections**: Use clear top-level organization
2. **Assign owners**: Every page should have a maintainer
3. **Control visibility**: Set appropriate access levels
4. **Link extensively**: Connect related pages
5. **Keep current**: Regular reviews to remove outdated content
technical_investigation
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill technical_investigation from openai
View skill
# Example: Technical Investigation

## User Request

> "Research our current caching strategy and create a technical summary"

## Workflow Execution

### Step 1: Search for Caching Information

```
Notion:notion-search
query: "caching strategy architecture"
query_type: "internal"
teamspace_id: "engineering-teamspace-id"
```

**Results Found:**
- "System Architecture Overview" (Engineering)
- "Redis Implementation Guide" (Backend Docs)
- "Performance Optimization - Q3 2024" (Engineering)
- "API Caching Decision Record" (Architecture)

### Step 2: Fetch Relevant Pages

```
Notion:notion-fetch
id: "system-architecture-page-url"
```

Extracted: Current caching architecture uses Redis for API responses, Memcached for session storage.

```
Notion:notion-fetch
id: "redis-implementation-guide-url"
```

Extracted: Implementation details, TTL settings, invalidation strategies.

```
Notion:notion-fetch
id: "api-caching-decision-record-url"
```

Extracted: Why Redis was chosen over alternatives, trade-offs considered.

### Step 3: Synthesize Findings

**Key findings identified:**
- Two-tier caching: Redis (API responses) + Memcached (sessions)
- TTL strategy: 5 minutes for dynamic data, 1 hour for static
- Invalidation: Event-driven for critical updates
- Performance impact: 75% reduction in database load
- Known issues: Cache stampede on popular endpoints

### Step 4: Create Technical Summary

```
Notion:notion-create-pages

parent: { page_id: "engineering-docs-parent-id" }

pages: [{
  properties: {
    "title": "Technical Summary: Caching Strategy - Oct 2025"
  },
  content: "[Structured technical summary using template]"
}]
```

## Output Document

````markdown
# Technical Summary: Caching Strategy - Oct 2025

## Executive Summary

Our caching infrastructure uses a two-tier approach with Redis for API response caching and Memcached for session management. This strategy has reduced database load by 75% and improved API response times from 200ms to 50ms average.

## Architecture Overview

### Layer 1: API Response Caching (Redis)
**Technology**: Redis 7.0 cluster (3 nodes)
**Purpose**: Cache GET endpoint responses
**TTL Strategy**:
- Dynamic content: 5 minutes
- Static content: 1 hour
- User-specific: 15 minutes

**Source**: <mention-page url="...">System Architecture Overview</mention-page>

### Layer 2: Session Storage (Memcached)
**Technology**: Memcached 1.6
**Purpose**: User session data, temporary state
**TTL**: 24 hours (session lifetime)

## Implementation Details

### Cache Key Format
```
api:v1:{endpoint}:{params_hash}
session:{user_id}:{session_id}
```

### Invalidation Strategy
- **Event-driven**: Critical data changes trigger immediate invalidation
- **Time-based**: TTL expiration for non-critical data
- **Manual**: Admin tools for emergency cache clear

**Source**: <mention-page url="...">Redis Implementation Guide</mention-page>

## Decision Rationale

### Why Redis for API Caching?

**Pros**:
- Advanced data structures (sorted sets, hashes)
- Built-in TTL with automatic eviction
- Pub/sub for cache invalidation events
- Persistence options for durability

**Cons**:
- Higher memory usage than Memcached
- More complex cluster management

**Decision**: Chosen for flexibility and rich feature set needed for API caching.

**Source**: <mention-page url="...">API Caching Decision Record</mention-page>

### Why Memcached for Sessions?

**Pros**:
- Simpler, lighter weight
- Excellent for key-value storage
- Lower memory footprint

**Cons**:
- No persistence
- Limited data structures

**Decision**: Perfect fit for ephemeral session data where simplicity is valued.

## Performance Impact

| Metric | Before Caching | After Caching | Improvement |
|--------|----------------|---------------|-------------|
| Avg Response Time | 200ms | 50ms | 75% faster |
| Database Load | 100% | 25% | 75% reduction |
| Cache Hit Rate | - | 85% | - |
| Peak RPS Handled | 1,000 | 4,000 | 4x increase |

**Source**: <mention-page url="...">Performance Optimization - Q3 2024</mention-page>

## Known Issues & Limitations

### Cache Stampede
**Issue**: When popular cache entries expire, multiple requests hit database simultaneously

**Mitigation**: Implemented probabilistic early expiration and request coalescing

**Status**: Reduced by 90% but not eliminated

### Stale Data Risk
**Issue**: Cached data can be up to TTL duration stale

**Mitigation**: Event-driven invalidation for critical data paths

**Status**: Acceptable trade-off for performance gains

## Monitoring & Observability

**Metrics tracked**:
- Cache hit/miss rates per endpoint
- Memory usage and eviction rates
- Response time distributions
- Invalidation event frequency

**Tools**: DataDog dashboards, CloudWatch alarms

## Future Considerations

1. **Edge Caching**: Evaluate CDN integration for static assets
2. **Cache Warming**: Pre-populate cache for predictable traffic spikes
3. **Adaptive TTLs**: Adjust TTLs based on data change frequency
4. **Regional Caching**: Multi-region cache replication for global performance

## Related Documentation

- <mention-page url="...">System Architecture Overview</mention-page>
- <mention-page url="...">Redis Implementation Guide</mention-page>
- <mention-page url="...">API Caching Decision Record</mention-page>
- <mention-page url="...">Performance Optimization - Q3 2024</mention-page>

## Appendix: Configuration Examples

### Redis Configuration
```yaml
maxmemory: 8gb
maxmemory-policy: allkeys-lru
tcp-keepalive: 60
```

### Common Cache Operations
```python
# Set with TTL
cache.set(key, value, ttl=300)

# Get with fallback
value = cache.get(key) or fetch_from_db(key)

# Invalidate pattern
cache.delete_pattern("api:v1:users:*")
```
````

## Key Success Factors

1. **Multiple source integration**: Combined architecture docs, implementation guides, and decision records
2. **Technical depth**: Included configuration, code examples, metrics
3. **Decision context**: Explained why choices were made, not just what
4. **Practical focus**: Real performance numbers and known issues
5. **Future-looking**: Noted areas for improvement
6. **Well-cited**: Every major point links back to source material

## Workflow Pattern Demonstrated

This example shows the complete research workflow:
- **Scoped search** (teamspace filter for engineering)
- **Multi-page synthesis** (4 different sources)
- **Technical template** (architecture-focused format)
- **Proper placement** (under engineering docs)
- **Comprehensive citations** (links to all sources)
template_selection_guide
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill template_selection_guide from openai
View skill
# Meeting Template Selection Guide

Choose the right template for your meeting type.

## Template Overview

| Meeting Type | Use This Template | When to Use |
|--------------|-------------------|-------------|
| Make a decision | [Decision Meeting](decision-meeting-template.md) | Need to evaluate options and reach a decision |
| Project update | [Status Update](status-update-template.md) | Regular check-ins, progress reviews |
| Generate ideas | [Brainstorming](brainstorming-template.md) | Creative ideation, problem-solving |
| Sprint planning | [Sprint Planning](sprint-planning-template.md) | Planning agile sprint work |
| Sprint retro | [Retrospective](retrospective-template.md) | Reflecting on completed work |
| Manager/report | [1:1 Meeting](one-on-one-template.md) | Regular one-on-one check-ins |
| Weekly team sync | [Status Update](status-update-template.md) (simplified) | Routine team synchronization |

## Quick Decision Tree

```
What's the primary purpose?

├─ Make a decision
│  └─ Use: Decision Meeting Template
│
├─ Update on progress
│  └─ Use: Status Update Template
│
├─ Generate ideas
│  └─ Use: Brainstorming Template
│
├─ Plan sprint work
│  └─ Use: Sprint Planning Template
│
├─ Reflect on past work
│  └─ Use: Retrospective Template
│
└─ Manager/report check-in
   └─ Use: 1:1 Meeting Template
```

## Template Customization

All templates can be customized:
- **Simplify** for shorter meetings
- **Add sections** for specific needs
- **Combine elements** from multiple templates
- **Adapt language** for your team culture

## Best Practices

1. **Choose template first**: Select before gathering context
2. **Gather Notion content**: Search and fetch relevant pages
3. **Enrich with research**: Add Codex insights where valuable
4. **Customize as needed**: Adapt template to specific situation
5. **Share early**: Give attendees time to review
trip_planning
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill trip_planning from openai
View skill
# Example: Group Trip Research & Planning

**User Request**: "Research and plan our friends' trip to Japan in March - we're 6 people looking for 10 days"

## Workflow

### 1. Search Existing Notes
```
Notion:notion-search
query: "Japan travel"
```
Found: Japan Travel Guide (from friend), Tokyo Restaurants, Kyoto Temple Guide

### 2. Fetch & Extract Tips
```
Notion:notion-fetch (3x)
```
**Key info from previous travelers:**
- Best time: March-April (cherry blossoms)
- Must-see: Tokyo, Kyoto, Osaka
- Budget: $200-300/day (mid-range)
- Book accommodations 3 months ahead
- Get JR Pass before arrival
- Top restaurants: Sushi Dai, Ichiran Ramen, Tsunahachi Tempura

### 3. Research & Synthesize
Combined previous traveler insights with:
- Flight options and prices
- Accommodation types (hotels/ryokans/Airbnb)
- Transportation (JR Pass essential)
- 10-day itinerary structure
- Budget breakdown

### 4. Create Comprehensive Plan
```
Notion:notion-create-pages
parent: { page_id: "travel-plans-parent-id" }
pages: [{
  properties: {
    title: "Japan Trip 2026 - March 15-25 (10 Days)"
  },
  content: "[Full trip plan with itinerary, budget, tips...]"
}]
```

## Output Sample

```markdown
# Japan Trip 2026 - March 15-25 (10 Days)

## Trip Overview
**Dates**: March 15-25, 2026 (Cherry Blossom Season 🌸)
**Group**: 6 people | **Budget**: $3,000-4,000/person

## Itinerary Summary

**Days 1-3: Tokyo**
- Arrive, explore Shibuya, Harajuku, Shinjuku
- Visit Tsukiji Market, Imperial Palace, Sensoji Temple
- Experience Tokyo nightlife, teamLab Borderless

**Days 4-5: Hakone**  
- Day trip from Tokyo
- Hot springs, Mt. Fuji views, Lake Ashi cruise

**Days 6-8: Kyoto**
- Bullet train from Tokyo
- Fushimi Inari, Kinkaku-ji, Arashiyama Bamboo Grove
- Geisha district (Gion), traditional tea ceremony

**Days 9-10: Osaka + Depart**
- Osaka Castle, Dotonbori food district
- Nara day trip (deer park, Todai-ji Temple)
- Return flight from Osaka (KIX)

## Budget Breakdown (per person)
- **Flights**: $900-1,200 (round-trip)
- **Accommodations**: $700-1,000 (9 nights)
- **JR Pass**: $280 (7-day pass)
- **Food**: $500-700 ($50-70/day)
- **Activities**: $300-400
- **Local transport**: $100
**Total**: $2,780-3,680

## Key Bookings
- **Flights**: Book 3-4 months ahead
- **Hotels**: Book now (cherry blossom season)
  - Tokyo: Shinjuku area (2 nights) + Asakusa (1 night)
  - Kyoto: Gion area (3 nights)
  - Osaka: Namba area (2 nights)
- **JR Pass**: Order 2-3 weeks before travel
- **Restaurants**: Reserve 1 week ahead (Sushi Dai, high-end spots)

## Essential Tips
Source: <mention-page url="...">Japan Travel Guide</mention-page>
- Get pocket WiFi or eSIM on arrival
- Download: Google Translate, Hyperdia (train routes), Tabelog (restaurants)
- Cash-heavy country - withdraw at 7-Eleven ATMs
- Shoes off in temples, ryokans, some restaurants
- Trains extremely punctual - don't be late
- Learn basic phrases: arigatou, sumimasen, itadakimasu

## Packing List
- Comfortable walking shoes (10k+ steps/day)
- Light jacket (March 55-65°F)
- Backpack for day trips
- Cash pouch
- Portable charger

## Next Steps
- [ ] Book flights (target: <$1,100/person)
- [ ] Order JR Passes
- [ ] Book hotels (Tokyo → Kyoto → Osaka)
- [ ] Create shared expense tracker
- [ ] Schedule group planning call

## Sources
- <mention-page url="...">Japan Travel Guide</mention-page> (Sarah's 2024 trip)
- <mention-page url="...">Tokyo Restaurant Recommendations</mention-page>
- <mention-page url="...">Kyoto Temple Guide</mention-page>
```

## Key Takeaways
- Leveraged previous traveler notes from Notion
- Combined personal insights with research
- Created actionable itinerary with budget breakdown
- Included practical tips from experienced travelers
- Set clear next steps for group coordination
ui_component
openai SKILL.md License: LICENSE.txt Version: Unknown
Imported skill ui_component from openai
View skill
# Example: UI Component Implementation

**User Request**: "Implement the new search component from the design spec"

## Workflow

### 1. Find Spec
```
Notion:notion-search
query: "search component design spec"
```
Found: "Global Search Redesign Spec"

### 2. Fetch & Parse Spec
```
Notion:notion-fetch
```

**Requirements**:
- Instant search with debounced API calls
- Keyboard navigation (↑↓ arrows, Enter, Esc)
- Recent searches history
- Result categories (pages, people, files)
- Mobile responsive
- Accessibility (ARIA labels, screen reader)

### 3. Create Implementation Plan
```
Notion:notion-create-pages
```

**Plan phases**:
- Phase 1: Base component structure
- Phase 2: API integration & debouncing
- Phase 3: Keyboard navigation
- Phase 4: Recent searches
- Phase 5: Polish & accessibility

### 4. Create Tasks
```
Notion:notion-create-pages (7 tasks)
```

**Tasks**:
1. Create SearchInput component
2. Implement useDebounce hook (sketched below)
3. Add keyboard navigation
4. LocalStorage for recent searches
5. Result categorization UI
6. Accessibility audit
7. Mobile responsive styling
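
A minimal sketch of the debounce hook assumed by task 2 (hook shape and default timing are illustrative, not from the spec):

```tsx
import { useEffect, useState } from 'react'

// Returns `value` only after it has been stable for `delay` ms
function useDebounce<T>(value: T, delay = 300): T {
  const [debounced, setDebounced] = useState(value)

  useEffect(() => {
    const id = setTimeout(() => setDebounced(value), delay)
    return () => clearTimeout(id)  // reset the timer on every keystroke
  }, [value, delay])

  return debounced
}
```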

### 5. Implement & Track
As each task was completed, updated its status and added progress notes with screenshots and implementation details.

## Key Outputs

**Implementation Plan** (linked to design spec)
**7 Component Tasks** (in Engineering Tasks database)
**Progress Updates** (with code snippets and demo links)

## Success Factors
- Clear component breakdown
- Separated concerns (logic, UI, accessibility)
- Each task had acceptance criteria
- Referenced design spec throughout
- Included accessibility from start, not afterthought
- Tracked progress with visual updates
_sections
vercel SKILL.md License: See repository Version: Unknown
Imported skill _sections from vercel
View skill
# Sections

This file defines all sections, their ordering, impact levels, and descriptions.
The section ID (in parentheses) is the filename prefix used to group rules.

---

## 1. Eliminating Waterfalls (async)

**Impact:** CRITICAL  
**Description:** Waterfalls are the #1 performance killer. Each sequential await adds full network latency. Eliminating them yields the largest gains.

## 2. Bundle Size Optimization (bundle)

**Impact:** CRITICAL  
**Description:** Reducing initial bundle size improves Time to Interactive and Largest Contentful Paint.

## 3. Server-Side Performance (server)

**Impact:** HIGH  
**Description:** Optimizing server-side rendering and data fetching eliminates server-side waterfalls and reduces response times.

## 4. Client-Side Data Fetching (client)

**Impact:** MEDIUM-HIGH  
**Description:** Automatic deduplication and efficient data fetching patterns reduce redundant network requests.

## 5. Re-render Optimization (rerender)

**Impact:** MEDIUM  
**Description:** Reducing unnecessary re-renders minimizes wasted computation and improves UI responsiveness.

## 6. Rendering Performance (rendering)

**Impact:** MEDIUM  
**Description:** Optimizing the rendering process reduces the work the browser needs to do.

## 7. JavaScript Performance (js)

**Impact:** LOW-MEDIUM  
**Description:** Micro-optimizations for hot paths can add up to meaningful improvements.

## 8. Advanced Patterns (advanced)

**Impact:** LOW  
**Description:** Advanced patterns for specific cases that require careful implementation.
_template
vercel SKILL.md License: See repository Version: Unknown
Imported skill _template from vercel
View skill
---
title: Rule Title Here
impact: MEDIUM
impactDescription: Optional description of impact (e.g., "20-50% improvement")
tags: tag1, tag2
---

## Rule Title Here

**Impact: MEDIUM (optional impact description)**

Brief explanation of the rule and why it matters. This should be clear and concise, explaining the performance implications.

**Incorrect (description of what's wrong):**

```typescript
// Bad code example here
const bad = example()
```

**Correct (description of what's right):**

```typescript
// Good code example here
const good = example()
```

Reference: [Link to documentation or resource](https://example.com)
advanced_event_handler_refs
vercel SKILL.md License: See repository Version: Unknown
Imported skill advanced_event_handler_refs from vercel
View skill
---
title: Store Event Handlers in Refs
impact: LOW
impactDescription: stable subscriptions
tags: advanced, hooks, refs, event-handlers, optimization
---

## Store Event Handlers in Refs

Store callbacks in refs when used in effects that shouldn't re-subscribe on callback changes.

**Incorrect (re-subscribes on every render):**

```tsx
function useWindowEvent(event: string, handler: (e: Event) => void) {
  useEffect(() => {
    window.addEventListener(event, handler)
    return () => window.removeEventListener(event, handler)
  }, [event, handler])
}
```

**Correct (stable subscription):**

```tsx
function useWindowEvent(event: string, handler: (e: Event) => void) {
  const handlerRef = useRef(handler)
  useEffect(() => {
    handlerRef.current = handler
  }, [handler])

  useEffect(() => {
    const listener = (e: Event) => handlerRef.current(e)
    window.addEventListener(event, listener)
    return () => window.removeEventListener(event, listener)
  }, [event])
}
```

**Alternative: use `useEffectEvent` if you're on latest React:**

```tsx
import { useEffectEvent } from 'react'

function useWindowEvent(event: string, handler: (e: Event) => void) {
  const onEvent = useEffectEvent(handler)

  useEffect(() => {
    window.addEventListener(event, onEvent)
    return () => window.removeEventListener(event, onEvent)
  }, [event])
}
```

`useEffectEvent` provides a cleaner API for the same pattern: it creates a stable function reference that always calls the latest version of the handler.
agents
vercel SKILL.md License: See repository Version: Unknown
Imported skill agents from vercel
View skill
# React Best Practices

**Version 1.0.0**  
Vercel Engineering  
January 2026

> **Note:**  
> This document is mainly for agents and LLMs to follow when maintaining,  
> generating, or refactoring React and Next.js codebases at Vercel. Humans  
> may also find it useful, but guidance here is optimized for automation  
> and consistency by AI-assisted workflows.

---

## Abstract

Comprehensive performance optimization guide for React and Next.js applications, designed for AI agents and LLMs. Contains 40+ rules across 8 categories, prioritized by impact from critical (eliminating waterfalls, reducing bundle size) to incremental (advanced patterns). Each rule includes detailed explanations, real-world examples comparing incorrect vs. correct implementations, and specific impact metrics to guide automated refactoring and code generation.

---

## Table of Contents

1. [Eliminating Waterfalls](#1-eliminating-waterfalls) — **CRITICAL**
   - 1.1 [Defer Await Until Needed](#11-defer-await-until-needed)
   - 1.2 [Dependency-Based Parallelization](#12-dependency-based-parallelization)
   - 1.3 [Prevent Waterfall Chains in API Routes](#13-prevent-waterfall-chains-in-api-routes)
   - 1.4 [Promise.all() for Independent Operations](#14-promiseall-for-independent-operations)
   - 1.5 [Strategic Suspense Boundaries](#15-strategic-suspense-boundaries)
2. [Bundle Size Optimization](#2-bundle-size-optimization) — **CRITICAL**
   - 2.1 [Avoid Barrel File Imports](#21-avoid-barrel-file-imports)
   - 2.2 [Conditional Module Loading](#22-conditional-module-loading)
   - 2.3 [Defer Non-Critical Third-Party Libraries](#23-defer-non-critical-third-party-libraries)
   - 2.4 [Dynamic Imports for Heavy Components](#24-dynamic-imports-for-heavy-components)
   - 2.5 [Preload Based on User Intent](#25-preload-based-on-user-intent)
3. [Server-Side Performance](#3-server-side-performance) — **HIGH**
   - 3.1 [Cross-Request LRU Caching](#31-cross-request-lru-caching)
   - 3.2 [Minimize Serialization at RSC Boundaries](#32-minimize-serialization-at-rsc-boundaries)
   - 3.3 [Parallel Data Fetching with Component Composition](#33-parallel-data-fetching-with-component-composition)
   - 3.4 [Per-Request Deduplication with React.cache()](#34-per-request-deduplication-with-reactcache)
   - 3.5 [Use after() for Non-Blocking Operations](#35-use-after-for-non-blocking-operations)
4. [Client-Side Data Fetching](#4-client-side-data-fetching) — **MEDIUM-HIGH**
   - 4.1 [Deduplicate Global Event Listeners](#41-deduplicate-global-event-listeners)
   - 4.2 [Use Passive Event Listeners for Scrolling Performance](#42-use-passive-event-listeners-for-scrolling-performance)
   - 4.3 [Use SWR for Automatic Deduplication](#43-use-swr-for-automatic-deduplication)
   - 4.4 [Version and Minimize localStorage Data](#44-version-and-minimize-localstorage-data)
5. [Re-render Optimization](#5-re-render-optimization) — **MEDIUM**
   - 5.1 [Defer State Reads to Usage Point](#51-defer-state-reads-to-usage-point)
   - 5.2 [Extract to Memoized Components](#52-extract-to-memoized-components)
   - 5.3 [Narrow Effect Dependencies](#53-narrow-effect-dependencies)
   - 5.4 [Subscribe to Derived State](#54-subscribe-to-derived-state)
   - 5.5 [Use Functional setState Updates](#55-use-functional-setstate-updates)
   - 5.6 [Use Lazy State Initialization](#56-use-lazy-state-initialization)
   - 5.7 [Use Transitions for Non-Urgent Updates](#57-use-transitions-for-non-urgent-updates)
6. [Rendering Performance](#6-rendering-performance) — **MEDIUM**
   - 6.1 [Animate SVG Wrapper Instead of SVG Element](#61-animate-svg-wrapper-instead-of-svg-element)
   - 6.2 [CSS content-visibility for Long Lists](#62-css-content-visibility-for-long-lists)
   - 6.3 [Hoist Static JSX Elements](#63-hoist-static-jsx-elements)
   - 6.4 [Optimize SVG Precision](#64-optimize-svg-precision)
   - 6.5 [Prevent Hydration Mismatch Without Flickering](#65-prevent-hydration-mismatch-without-flickering)
   - 6.6 [Use Activity Component for Show/Hide](#66-use-activity-component-for-showhide)
   - 6.7 [Use Explicit Conditional Rendering](#67-use-explicit-conditional-rendering)
7. [JavaScript Performance](#7-javascript-performance) — **LOW-MEDIUM**
   - 7.1 [Batch DOM CSS Changes](#71-batch-dom-css-changes)
   - 7.2 [Build Index Maps for Repeated Lookups](#72-build-index-maps-for-repeated-lookups)
   - 7.3 [Cache Property Access in Loops](#73-cache-property-access-in-loops)
   - 7.4 [Cache Repeated Function Calls](#74-cache-repeated-function-calls)
   - 7.5 [Cache Storage API Calls](#75-cache-storage-api-calls)
   - 7.6 [Combine Multiple Array Iterations](#76-combine-multiple-array-iterations)
   - 7.7 [Early Length Check for Array Comparisons](#77-early-length-check-for-array-comparisons)
   - 7.8 [Early Return from Functions](#78-early-return-from-functions)
   - 7.9 [Hoist RegExp Creation](#79-hoist-regexp-creation)
   - 7.10 [Use Loop for Min/Max Instead of Sort](#710-use-loop-for-minmax-instead-of-sort)
   - 7.11 [Use Set/Map for O(1) Lookups](#711-use-setmap-for-o1-lookups)
   - 7.12 [Use toSorted() Instead of sort() for Immutability](#712-use-tosorted-instead-of-sort-for-immutability)
8. [Advanced Patterns](#8-advanced-patterns) — **LOW**
   - 8.1 [Store Event Handlers in Refs](#81-store-event-handlers-in-refs)
   - 8.2 [useLatest for Stable Callback Refs](#82-uselatest-for-stable-callback-refs)

---

## 1. Eliminating Waterfalls

**Impact: CRITICAL**

Waterfalls are the #1 performance killer. Each sequential await adds full network latency. Eliminating them yields the largest gains.

### 1.1 Defer Await Until Needed

**Impact: HIGH (avoids blocking unused code paths)**

Move `await` operations into the branches where they're actually used to avoid blocking code paths that don't need them.

**Incorrect: blocks both branches**

```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
  const userData = await fetchUserData(userId)
  
  if (skipProcessing) {
    // Returns immediately but still waited for userData
    return { skipped: true }
  }
  
  // Only this branch uses userData
  return processUserData(userData)
}
```

**Correct: only blocks when needed**

```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
  if (skipProcessing) {
    // Returns immediately without waiting
    return { skipped: true }
  }
  
  // Fetch only when needed
  const userData = await fetchUserData(userId)
  return processUserData(userData)
}
```

**Another example: early return optimization**

```typescript
// Incorrect: always fetches permissions
async function updateResource(resourceId: string, userId: string) {
  const permissions = await fetchPermissions(userId)
  const resource = await getResource(resourceId)
  
  if (!resource) {
    return { error: 'Not found' }
  }
  
  if (!permissions.canEdit) {
    return { error: 'Forbidden' }
  }
  
  return await updateResourceData(resource, permissions)
}

// Correct: fetches only when needed
async function updateResource(resourceId: string, userId: string) {
  const resource = await getResource(resourceId)
  
  if (!resource) {
    return { error: 'Not found' }
  }
  
  const permissions = await fetchPermissions(userId)
  
  if (!permissions.canEdit) {
    return { error: 'Forbidden' }
  }
  
  return await updateResourceData(resource, permissions)
}
```

This optimization is especially valuable when the skipped branch is frequently taken, or when the deferred operation is expensive.

### 1.2 Dependency-Based Parallelization

**Impact: CRITICAL (2-10× improvement)**

For operations with partial dependencies, use `better-all` to maximize parallelism. It automatically starts each task at the earliest possible moment.

**Incorrect: profile waits for config unnecessarily**

```typescript
const [user, config] = await Promise.all([
  fetchUser(),
  fetchConfig()
])
const profile = await fetchProfile(user.id)
```

**Correct: config and profile run in parallel**

```typescript
import { all } from 'better-all'

const { user, config, profile } = await all({
  async user() { return fetchUser() },
  async config() { return fetchConfig() },
  async profile() {
    return fetchProfile((await this.$.user).id)
  }
})
```

Reference: [https://github.com/shuding/better-all](https://github.com/shuding/better-all)

### 1.3 Prevent Waterfall Chains in API Routes

**Impact: CRITICAL (2-10× improvement)**

In API routes and Server Actions, start independent operations immediately, even if you don't await them yet.

**Incorrect: config waits for auth, data waits for both**

```typescript
export async function GET(request: Request) {
  const session = await auth()
  const config = await fetchConfig()
  const data = await fetchData(session.user.id)
  return Response.json({ data, config })
}
```

**Correct: auth and config start immediately**

```typescript
export async function GET(request: Request) {
  const sessionPromise = auth()
  const configPromise = fetchConfig()
  const session = await sessionPromise
  const [config, data] = await Promise.all([
    configPromise,
    fetchData(session.user.id)
  ])
  return Response.json({ data, config })
}
```

For operations with more complex dependency chains, use `better-all` to automatically maximize parallelism (see Dependency-Based Parallelization).

### 1.4 Promise.all() for Independent Operations

**Impact: CRITICAL (2-10× improvement)**

When async operations have no interdependencies, execute them concurrently using `Promise.all()`.

**Incorrect: sequential execution, 3 round trips**

```typescript
const user = await fetchUser()
const posts = await fetchPosts()
const comments = await fetchComments()
```

**Correct: parallel execution, 1 round trip**

```typescript
const [user, posts, comments] = await Promise.all([
  fetchUser(),
  fetchPosts(),
  fetchComments()
])
```

### 1.5 Strategic Suspense Boundaries

**Impact: HIGH (faster initial paint)**

Instead of awaiting data in async components before returning JSX, use Suspense boundaries to show the wrapper UI faster while data loads.

**Incorrect: wrapper blocked by data fetching**

```tsx
async function Page() {
  const data = await fetchData() // Blocks entire page
  
  return (
    <div>
      <div>Sidebar</div>
      <div>Header</div>
      <div>
        <DataDisplay data={data} />
      </div>
      <div>Footer</div>
    </div>
  )
}
```

The entire layout waits for data even though only the middle section needs it.

**Correct: wrapper shows immediately, data streams in**

```tsx
function Page() {
  return (
    <div>
      <div>Sidebar</div>
      <div>Header</div>
      <div>
        <Suspense fallback={<Skeleton />}>
          <DataDisplay />
        </Suspense>
      </div>
      <div>Footer</div>
    </div>
  )
}

async function DataDisplay() {
  const data = await fetchData() // Only blocks this component
  return <div>{data.content}</div>
}
```

Sidebar, Header, and Footer render immediately. Only DataDisplay waits for data.

**Alternative: share promise across components**

```tsx
function Page() {
  // Start fetch immediately, but don't await
  const dataPromise = fetchData()
  
  return (
    <div>
      <div>Sidebar</div>
      <div>Header</div>
      <Suspense fallback={<Skeleton />}>
        <DataDisplay dataPromise={dataPromise} />
        <DataSummary dataPromise={dataPromise} />
      </Suspense>
      <div>Footer</div>
    </div>
  )
}

function DataDisplay({ dataPromise }: { dataPromise: Promise<Data> }) {
  const data = use(dataPromise) // Unwraps the promise
  return <div>{data.content}</div>
}

function DataSummary({ dataPromise }: { dataPromise: Promise<Data> }) {
  const data = use(dataPromise) // Reuses the same promise
  return <div>{data.summary}</div>
}
```

Both components share the same promise, so only one fetch occurs. Layout renders immediately while both components wait together.

**When NOT to use this pattern:**

- Critical data needed for layout decisions (affects positioning)

- SEO-critical content above the fold

- Small, fast queries where suspense overhead isn't worth it

- When you want to avoid layout shift (loading → content jump)

**Trade-off:** Faster initial paint vs potential layout shift. Choose based on your UX priorities.

---

## 2. Bundle Size Optimization

**Impact: CRITICAL**

Reducing initial bundle size improves Time to Interactive and Largest Contentful Paint.

### 2.1 Avoid Barrel File Imports

**Impact: CRITICAL (200-800ms import cost, slow builds)**

Import directly from source files instead of barrel files to avoid loading thousands of unused modules. **Barrel files** are entry points that re-export multiple modules (e.g., `index.js` that does `export * from './module'`).

Popular icon and component libraries can have **up to 10,000 re-exports** in their entry file. For many React packages, **it takes 200-800ms just to import them**, affecting both development speed and production cold starts.

**Why tree-shaking doesn't help:** When a library is marked as external (not bundled), the bundler can't optimize it. If you bundle it to enable tree-shaking, builds become substantially slower analyzing the entire module graph.

**Incorrect: imports entire library**

```tsx
import { Check, X, Menu } from 'lucide-react'
// Loads 1,583 modules, takes ~2.8s extra in dev
// Runtime cost: 200-800ms on every cold start

import { Button, TextField } from '@mui/material'
// Loads 2,225 modules, takes ~4.2s extra in dev
```

**Correct: imports only what you need**

```tsx
import Check from 'lucide-react/dist/esm/icons/check'
import X from 'lucide-react/dist/esm/icons/x'
import Menu from 'lucide-react/dist/esm/icons/menu'
// Loads only 3 modules (~2KB vs ~1MB)

import Button from '@mui/material/Button'
import TextField from '@mui/material/TextField'
// Loads only what you use
```

**Alternative: Next.js 13.5+**

```js
// next.config.js - use optimizePackageImports
module.exports = {
  experimental: {
    optimizePackageImports: ['lucide-react', '@mui/material']
  }
}

// Then you can keep the ergonomic barrel imports:
import { Check, X, Menu } from 'lucide-react'
// Automatically transformed to direct imports at build time
```

Direct imports provide 15-70% faster dev boot, 28% faster builds, 40% faster cold starts, and significantly faster HMR.

Libraries commonly affected: `lucide-react`, `@mui/material`, `@mui/icons-material`, `@tabler/icons-react`, `react-icons`, `@headlessui/react`, `@radix-ui/react-*`, `lodash`, `ramda`, `date-fns`, `rxjs`, `react-use`.

Reference: [https://vercel.com/blog/how-we-optimized-package-imports-in-next-js](https://vercel.com/blog/how-we-optimized-package-imports-in-next-js)

### 2.2 Conditional Module Loading

**Impact: HIGH (loads large data only when needed)**

Load large data or modules only when a feature is activated.

**Example: lazy-load animation frames**

```tsx
function AnimationPlayer({ enabled, setEnabled }: { enabled: boolean; setEnabled: React.Dispatch<React.SetStateAction<boolean>> }) {
  const [frames, setFrames] = useState<Frame[] | null>(null)

  useEffect(() => {
    if (enabled && !frames && typeof window !== 'undefined') {
      import('./animation-frames.js')
        .then(mod => setFrames(mod.frames))
        .catch(() => setEnabled(false))
    }
  }, [enabled, frames, setEnabled])

  if (!frames) return <Skeleton />
  return <Canvas frames={frames} />
}
```

The `typeof window !== 'undefined'` check prevents bundling this module for SSR, optimizing server bundle size and build speed.

### 2.3 Defer Non-Critical Third-Party Libraries

**Impact: MEDIUM (loads after hydration)**

Analytics, logging, and error tracking don't block user interaction. Load them after hydration.

**Incorrect: blocks initial bundle**

```tsx
import { Analytics } from '@vercel/analytics/react'

export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        {children}
        <Analytics />
      </body>
    </html>
  )
}
```

**Correct: loads after hydration**

```tsx
import dynamic from 'next/dynamic'

const Analytics = dynamic(
  () => import('@vercel/analytics/react').then(m => m.Analytics),
  { ssr: false }
)

export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        {children}
        <Analytics />
      </body>
    </html>
  )
}
```

### 2.4 Dynamic Imports for Heavy Components

**Impact: CRITICAL (directly affects TTI and LCP)**

Use `next/dynamic` to lazy-load large components not needed on initial render.

**Incorrect: Monaco bundles with main chunk ~300KB**

```tsx
import { MonacoEditor } from './monaco-editor'

function CodePanel({ code }: { code: string }) {
  return <MonacoEditor value={code} />
}
```

**Correct: Monaco loads on demand**

```tsx
import dynamic from 'next/dynamic'

const MonacoEditor = dynamic(
  () => import('./monaco-editor').then(m => m.MonacoEditor),
  { ssr: false }
)

function CodePanel({ code }: { code: string }) {
  return <MonacoEditor value={code} />
}
```

### 2.5 Preload Based on User Intent

**Impact: MEDIUM (reduces perceived latency)**

Preload heavy bundles before they're needed to reduce perceived latency.

**Example: preload on hover/focus**

```tsx
function EditorButton({ onClick }: { onClick: () => void }) {
  const preload = () => {
    if (typeof window !== 'undefined') {
      void import('./monaco-editor')
    }
  }

  return (
    <button
      onMouseEnter={preload}
      onFocus={preload}
      onClick={onClick}
    >
      Open Editor
    </button>
  )
}
```

**Example: preload when feature flag is enabled**

```tsx
function FlagsProvider({ children, flags }: Props) {
  useEffect(() => {
    if (flags.editorEnabled && typeof window !== 'undefined') {
      void import('./monaco-editor').then(mod => mod.init())
    }
  }, [flags.editorEnabled])

  return <FlagsContext.Provider value={flags}>
    {children}
  </FlagsContext.Provider>
}
```

The `typeof window !== 'undefined'` check prevents bundling preloaded modules for SSR, optimizing server bundle size and build speed.

---

## 3. Server-Side Performance

**Impact: HIGH**

Optimizing server-side rendering and data fetching eliminates server-side waterfalls and reduces response times.

### 3.1 Cross-Request LRU Caching

**Impact: HIGH (caches across requests)**

`React.cache()` only works within one request. For data shared across sequential requests (user clicks button A then button B), use an LRU cache.

**Implementation:**

```typescript
import { LRUCache } from 'lru-cache'

const cache = new LRUCache<string, any>({
  max: 1000,
  ttl: 5 * 60 * 1000  // 5 minutes
})

export async function getUser(id: string) {
  const cached = cache.get(id)
  if (cached) return cached

  const user = await db.user.findUnique({ where: { id } })
  cache.set(id, user)
  return user
}

// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```

Use when sequential user actions hit multiple endpoints needing the same data within seconds.

**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.

**In traditional serverless:** Each invocation runs in isolation, so consider Redis for cross-process caching.
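
A minimal sketch of that Redis variant using `ioredis` (client choice, key naming, TTL, and the `db` handle are assumptions, not part of the original guidance):

```typescript
import Redis from 'ioredis'

// Assumed shared client; REDIS_URL is an illustrative env var
const redis = new Redis(process.env.REDIS_URL!)

export async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`)
  if (cached) return JSON.parse(cached)

  const user = await db.user.findUnique({ where: { id } })
  // Same 5-minute TTL as the LRU example above
  await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 300)
  return user
}
```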

Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)

### 3.2 Minimize Serialization at RSC Boundaries

**Impact: HIGH (reduces data transfer size)**

The React Server/Client boundary serializes all object properties into strings and embeds them in the HTML response and subsequent RSC requests. This serialized data directly impacts page weight and load time, so **size matters a lot**. Only pass fields that the client actually uses.

**Incorrect: serializes all 50 fields**

```tsx
async function Page() {
  const user = await fetchUser()  // 50 fields
  return <Profile user={user} />
}

'use client'
function Profile({ user }: { user: User }) {
  return <div>{user.name}</div>  // uses 1 field
}
```

**Correct: serializes only 1 field**

```tsx
async function Page() {
  const user = await fetchUser()
  return <Profile name={user.name} />
}

'use client'
function Profile({ name }: { name: string }) {
  return <div>{name}</div>
}
```

### 3.3 Parallel Data Fetching with Component Composition

**Impact: CRITICAL (eliminates server-side waterfalls)**

React Server Components execute sequentially within a tree. Restructure with composition to parallelize data fetching.

**Incorrect: Sidebar waits for Page's fetch to complete**

```tsx
export default async function Page() {
  const header = await fetchHeader()
  return (
    <div>
      <div>{header}</div>
      <Sidebar />
    </div>
  )
}

async function Sidebar() {
  const items = await fetchSidebarItems()
  return <nav>{items.map(renderItem)}</nav>
}
```

**Correct: both fetch simultaneously**

```tsx
async function Header() {
  const data = await fetchHeader()
  return <div>{data}</div>
}

async function Sidebar() {
  const items = await fetchSidebarItems()
  return <nav>{items.map(renderItem)}</nav>
}

export default function Page() {
  return (
    <div>
      <Header />
      <Sidebar />
    </div>
  )
}
```

**Alternative with children prop:**

```tsx
async function Header() {
  const data = await fetchHeader()
  return <div>{data}</div>
}

async function Sidebar() {
  const items = await fetchSidebarItems()
  return <nav>{items.map(renderItem)}</nav>
}

function Layout({ children }: { children: ReactNode }) {
  return (
    <div>
      <Header />
      {children}
    </div>
  )
}

export default function Page() {
  return (
    <Layout>
      <Sidebar />
    </Layout>
  )
}
```

### 3.4 Per-Request Deduplication with React.cache()

**Impact: MEDIUM (deduplicates within request)**

Use `React.cache()` for server-side request deduplication. Authentication and database queries benefit most.

**Usage:**

```typescript
import { cache } from 'react'

export const getCurrentUser = cache(async () => {
  const session = await auth()
  if (!session?.user?.id) return null
  return await db.user.findUnique({
    where: { id: session.user.id }
  })
})
```

Within a single request, multiple calls to `getCurrentUser()` execute the query only once.

**Avoid inline objects as arguments:**

`React.cache()` compares each argument with `Object.is` to determine cache hits, which for objects means reference equality. Inline objects create a new reference on every call, so they never hit the cache.

**Incorrect: always cache miss**

```typescript
const getUser = cache(async (params: { uid: number }) => {
  return await db.user.findUnique({ where: { id: params.uid } })
})

// Each call creates new object, never hits cache
getUser({ uid: 1 })
getUser({ uid: 1 })  // Cache miss, runs query again
```

**Correct: cache hit**

```typescript
const params = { uid: 1 }
getUser(params)  // Query runs
getUser(params)  // Cache hit (same reference)
```

If you must pass objects, hoist them so every call shares the same reference (as in the example above). Better still, key the cached function on primitives; a minimal sketch:
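
```typescript
// Primitives compare equal under Object.is, so calls dedupe reliably
const getUserById = cache(async (uid: number) => {
  return await db.user.findUnique({ where: { id: uid } })
})

getUserById(1)  // Query runs
getUserById(1)  // Cache hit (equal primitives)
```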

**Next.js-Specific Note:**

In Next.js, the `fetch` API is automatically extended with request memoization. Requests with the same URL and options are automatically deduplicated within a single request, so you don't need `React.cache()` for `fetch` calls. However, `React.cache()` is still essential for other async tasks:

- Database queries (Prisma, Drizzle, etc.)

- Heavy computations

- Authentication checks

- File system operations

- Any non-fetch async work

Use `React.cache()` to deduplicate these operations across your component tree.

Reference: [https://react.dev/reference/react/cache](https://react.dev/reference/react/cache)

### 3.5 Use after() for Non-Blocking Operations

**Impact: MEDIUM (faster response times)**

Use Next.js's `after()` to schedule work that should execute after a response is sent. This prevents logging, analytics, and other side effects from blocking the response.

**Incorrect: blocks response**

```tsx
import { logUserAction } from '@/app/utils'

export async function POST(request: Request) {
  // Perform mutation
  await updateDatabase(request)
  
  // Logging blocks the response
  const userAgent = request.headers.get('user-agent') || 'unknown'
  await logUserAction({ userAgent })
  
  return new Response(JSON.stringify({ status: 'success' }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' }
  })
}
```

**Correct: non-blocking**

```tsx
import { after } from 'next/server'
import { headers, cookies } from 'next/headers'
import { logUserAction } from '@/app/utils'

export async function POST(request: Request) {
  // Perform mutation
  await updateDatabase(request)
  
  // Log after response is sent
  after(async () => {
    const userAgent = (await headers()).get('user-agent') || 'unknown'
    const sessionCookie = (await cookies()).get('session-id')?.value || 'anonymous'
    
    logUserAction({ sessionCookie, userAgent })
  })
  
  return new Response(JSON.stringify({ status: 'success' }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' }
  })
}
```

The response is sent immediately while logging happens in the background.

**Common use cases:**

- Analytics tracking

- Audit logging

- Sending notifications

- Cache invalidation

- Cleanup tasks

**Important notes:**

- `after()` runs even if the response fails or redirects

- Works in Server Actions, Route Handlers, and Server Components

Reference: [https://nextjs.org/docs/app/api-reference/functions/after](https://nextjs.org/docs/app/api-reference/functions/after)

---

## 4. Client-Side Data Fetching

**Impact: MEDIUM-HIGH**

Automatic deduplication and efficient data fetching patterns reduce redundant network requests.

### 4.1 Deduplicate Global Event Listeners

**Impact: LOW (single listener for N components)**

Use `useSWRSubscription()` to share global event listeners across component instances.

**Incorrect: N instances = N listeners**

```tsx
function useKeyboardShortcut(key: string, callback: () => void) {
  useEffect(() => {
    const handler = (e: KeyboardEvent) => {
      if (e.metaKey && e.key === key) {
        callback()
      }
    }
    window.addEventListener('keydown', handler)
    return () => window.removeEventListener('keydown', handler)
  }, [key, callback])
}
```

When using the `useKeyboardShortcut` hook multiple times, each instance will register a new listener.

**Correct: N instances = 1 listener**

```tsx
import useSWRSubscription from 'swr/subscription'

// Module-level Map to track callbacks per key
const keyCallbacks = new Map<string, Set<() => void>>()

function useKeyboardShortcut(key: string, callback: () => void) {
  // Register this callback in the Map
  useEffect(() => {
    if (!keyCallbacks.has(key)) {
      keyCallbacks.set(key, new Set())
    }
    keyCallbacks.get(key)!.add(callback)

    return () => {
      const set = keyCallbacks.get(key)
      if (set) {
        set.delete(callback)
        if (set.size === 0) {
          keyCallbacks.delete(key)
        }
      }
    }
  }, [key, callback])

  useSWRSubscription('global-keydown', () => {
    const handler = (e: KeyboardEvent) => {
      if (e.metaKey && keyCallbacks.has(e.key)) {
        keyCallbacks.get(e.key)!.forEach(cb => cb())
      }
    }
    window.addEventListener('keydown', handler)
    return () => window.removeEventListener('keydown', handler)
  })
}

function Profile() {
  // Multiple shortcuts will share the same listener
  useKeyboardShortcut('p', () => { /* ... */ }) 
  useKeyboardShortcut('k', () => { /* ... */ })
  // ...
}
```

### 4.2 Use Passive Event Listeners for Scrolling Performance

**Impact: MEDIUM (eliminates scroll delay caused by event listeners)**

Add `{ passive: true }` to touch and wheel event listeners to enable immediate scrolling. Browsers normally wait for listeners to finish to check if `preventDefault()` is called, causing scroll delay.

**Incorrect:**

```typescript
useEffect(() => {
  const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
  const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
  
  document.addEventListener('touchstart', handleTouch)
  document.addEventListener('wheel', handleWheel)
  
  return () => {
    document.removeEventListener('touchstart', handleTouch)
    document.removeEventListener('wheel', handleWheel)
  }
}, [])
```

**Correct:**

```typescript
useEffect(() => {
  const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
  const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
  
  document.addEventListener('touchstart', handleTouch, { passive: true })
  document.addEventListener('wheel', handleWheel, { passive: true })
  
  return () => {
    document.removeEventListener('touchstart', handleTouch)
    document.removeEventListener('wheel', handleWheel)
  }
}, [])
```

**Use passive when:** tracking/analytics, logging, any listener that doesn't call `preventDefault()`.

**Don't use passive when:** implementing custom swipe gestures, custom zoom controls, or any listener that needs `preventDefault()`.

### 4.3 Use SWR for Automatic Deduplication

**Impact: MEDIUM-HIGH (automatic deduplication)**

SWR enables request deduplication, caching, and revalidation across component instances.

**Incorrect: no deduplication, each instance fetches**

```tsx
function UserList() {
  const [users, setUsers] = useState([])
  useEffect(() => {
    fetch('/api/users')
      .then(r => r.json())
      .then(setUsers)
  }, [])
}
```

**Correct: multiple instances share one request**

```tsx
import useSWR from 'swr'

function UserList() {
  const { data: users } = useSWR('/api/users', fetcher)
}
```

**For immutable data:**

```tsx
import useSWRImmutable from 'swr/immutable'

function StaticContent() {
  const { data } = useSWRImmutable('/api/config', fetcher)
}
```

**For mutations:**

```tsx
import useSWRMutation from 'swr/mutation'

function UpdateButton() {
  const { trigger } = useSWRMutation('/api/user', updateUser)
  return <button onClick={() => trigger()}>Update</button>
}
```

Reference: [https://swr.vercel.app](https://swr.vercel.app)

### 4.4 Version and Minimize localStorage Data

**Impact: MEDIUM (prevents schema conflicts, reduces storage size)**

Add version prefix to keys and store only needed fields. Prevents schema conflicts and accidental storage of sensitive data.

**Incorrect:**

```typescript
// No version, stores everything, no error handling
localStorage.setItem('userConfig', JSON.stringify(fullUserObject))
const data = localStorage.getItem('userConfig')
```

**Correct:**

```typescript
const VERSION = 'v2'

function saveConfig(config: { theme: string; language: string }) {
  try {
    localStorage.setItem(`userConfig:${VERSION}`, JSON.stringify(config))
  } catch {
    // Throws in incognito/private browsing, quota exceeded, or disabled
  }
}

function loadConfig() {
  try {
    const data = localStorage.getItem(`userConfig:${VERSION}`)
    return data ? JSON.parse(data) : null
  } catch {
    return null
  }
}

// Migration from v1 to v2
function migrate() {
  try {
    const v1 = localStorage.getItem('userConfig:v1')
    if (v1) {
      const old = JSON.parse(v1)
      saveConfig({ theme: old.darkMode ? 'dark' : 'light', language: old.lang })
      localStorage.removeItem('userConfig:v1')
    }
  } catch {}
}
```

**Store minimal fields from server responses:**

```typescript
// User object has 20+ fields, only store what UI needs
function cachePrefs(user: FullUser) {
  try {
    localStorage.setItem('prefs:v1', JSON.stringify({
      theme: user.preferences.theme,
      notifications: user.preferences.notifications
    }))
  } catch {}
}
```

**Always wrap in try-catch:** `getItem()` and `setItem()` throw in incognito/private browsing (Safari, Firefox), when quota exceeded, or when disabled.

**Benefits:** Schema evolution via versioning, reduced storage size, prevents storing tokens/PII/internal flags.

---

## 5. Re-render Optimization

**Impact: MEDIUM**

Reducing unnecessary re-renders minimizes wasted computation and improves UI responsiveness.

### 5.1 Defer State Reads to Usage Point

**Impact: MEDIUM (avoids unnecessary subscriptions)**

Don't subscribe to dynamic state (searchParams, localStorage) if you only read it inside callbacks.

**Incorrect: subscribes to all searchParams changes**

```tsx
function ShareButton({ chatId }: { chatId: string }) {
  const searchParams = useSearchParams()

  const handleShare = () => {
    const ref = searchParams.get('ref')
    shareChat(chatId, { ref })
  }

  return <button onClick={handleShare}>Share</button>
}
```

**Correct: reads on demand, no subscription**

```tsx
function ShareButton({ chatId }: { chatId: string }) {
  const handleShare = () => {
    const params = new URLSearchParams(window.location.search)
    const ref = params.get('ref')
    shareChat(chatId, { ref })
  }

  return <button onClick={handleShare}>Share</button>
}
```

### 5.2 Extract to Memoized Components

**Impact: MEDIUM (enables early returns)**

Extract expensive work into memoized components to enable early returns before computation.

**Incorrect: computes avatar even when loading**

```tsx
function Profile({ user, loading }: Props) {
  const avatar = useMemo(() => {
    const id = computeAvatarId(user)
    return <Avatar id={id} />
  }, [user])

  if (loading) return <Skeleton />
  return <div>{avatar}</div>
}
```

**Correct: skips computation when loading**

```tsx
const UserAvatar = memo(function UserAvatar({ user }: { user: User }) {
  const id = useMemo(() => computeAvatarId(user), [user])
  return <Avatar id={id} />
})

function Profile({ user, loading }: Props) {
  if (loading) return <Skeleton />
  return (
    <div>
      <UserAvatar user={user} />
    </div>
  )
}
```

**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, manual memoization with `memo()` and `useMemo()` is not necessary. The compiler automatically optimizes re-renders.

### 5.3 Narrow Effect Dependencies

**Impact: LOW (minimizes effect re-runs)**

Specify primitive dependencies instead of objects to minimize effect re-runs.

**Incorrect: re-runs on any user field change**

```tsx
useEffect(() => {
  console.log(user.id)
}, [user])
```

**Correct: re-runs only when id changes**

```tsx
useEffect(() => {
  console.log(user.id)
}, [user.id])
```

**For derived state, compute outside effect:**

```tsx
// Incorrect: runs on width=767, 766, 765...
useEffect(() => {
  if (width < 768) {
    enableMobileMode()
  }
}, [width])

// Correct: runs only on boolean transition
const isMobile = width < 768
useEffect(() => {
  if (isMobile) {
    enableMobileMode()
  }
}, [isMobile])
```

### 5.4 Subscribe to Derived State

**Impact: MEDIUM (reduces re-render frequency)**

Subscribe to derived boolean state instead of continuous values to reduce re-render frequency.

**Incorrect: re-renders on every pixel change**

```tsx
function Sidebar() {
  const width = useWindowWidth()  // updates continuously
  const isMobile = width < 768
  return <nav className={isMobile ? 'mobile' : 'desktop'} />
}
```

**Correct: re-renders only when boolean changes**

```tsx
function Sidebar() {
  const isMobile = useMediaQuery('(max-width: 767px)')
  return <nav className={isMobile ? 'mobile' : 'desktop'} />
}
```
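
`useMediaQuery` is assumed above rather than defined; a minimal sketch with `useSyncExternalStore` (hook shape is illustrative, not from the original):

```tsx
import { useCallback, useSyncExternalStore } from 'react'

// Illustrative useMediaQuery: re-renders only when the match flips
function useMediaQuery(query: string): boolean {
  const subscribe = useCallback(
    (onStoreChange: () => void) => {
      const mql = window.matchMedia(query)
      mql.addEventListener('change', onStoreChange)
      return () => mql.removeEventListener('change', onStoreChange)
    },
    [query]
  )

  return useSyncExternalStore(
    subscribe,
    () => window.matchMedia(query).matches,
    () => false  // server snapshot: no match during SSR
  )
}
```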

### 5.5 Use Functional setState Updates

**Impact: MEDIUM (prevents stale closures and unnecessary callback recreations)**

When updating state based on the current state value, use the functional update form of setState instead of directly referencing the state variable. This prevents stale closures, eliminates unnecessary dependencies, and creates stable callback references.

**Incorrect: requires state as dependency**

```tsx
function TodoList() {
  const [items, setItems] = useState(initialItems)
  
  // Callback must depend on items, recreated on every items change
  const addItems = useCallback((newItems: Item[]) => {
    setItems([...items, ...newItems])
  }, [items])  // ❌ items dependency causes recreations
  
  // Risk of stale closure if dependency is forgotten
  const removeItem = useCallback((id: string) => {
    setItems(items.filter(item => item.id !== id))
  }, [])  // ❌ Missing items dependency - will use stale items!
  
  return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />
}
```

The first callback is recreated every time `items` changes, which can cause child components to re-render unnecessarily. The second callback has a stale closure bug—it will always reference the initial `items` value.

**Correct: stable callbacks, no stale closures**

```tsx
function TodoList() {
  const [items, setItems] = useState(initialItems)
  
  // Stable callback, never recreated
  const addItems = useCallback((newItems: Item[]) => {
    setItems(curr => [...curr, ...newItems])
  }, [])  // ✅ No dependencies needed
  
  // Always uses latest state, no stale closure risk
  const removeItem = useCallback((id: string) => {
    setItems(curr => curr.filter(item => item.id !== id))
  }, [])  // ✅ Safe and stable
  
  return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />
}
```

**Benefits:**

1. **Stable callback references** - Callbacks don't need to be recreated when state changes

2. **No stale closures** - Always operates on the latest state value

3. **Fewer dependencies** - Simplifies dependency arrays and reduces memory leaks

4. **Prevents bugs** - Eliminates the most common source of React closure bugs

**When to use functional updates:**

- Any setState that depends on the current state value

- Inside useCallback/useMemo when state is needed

- Event handlers that reference state

- Async operations that update state

**When direct updates are fine:**

- Setting state to a static value: `setCount(0)`

- Setting state from props/arguments only: `setName(newName)`

- State doesn't depend on previous value

**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler can automatically optimize some cases, but functional updates are still recommended for correctness and to prevent stale closure bugs.

### 5.6 Use Lazy State Initialization

**Impact: MEDIUM (wasted computation on every render)**

Pass a function to `useState` for expensive initial values. Without the function form, the initializer runs on every render even though the value is only used once.

**Incorrect: runs on every render**

```tsx
function FilteredList({ items }: { items: Item[] }) {
  // buildSearchIndex() runs on EVERY render, even after initialization
  const [searchIndex, setSearchIndex] = useState(buildSearchIndex(items))
  const [query, setQuery] = useState('')
  
  // When query changes, buildSearchIndex runs again unnecessarily
  return <SearchResults index={searchIndex} query={query} />
}

function UserProfile() {
  // JSON.parse runs on every render
  const [settings, setSettings] = useState(
    JSON.parse(localStorage.getItem('settings') || '{}')
  )
  
  return <SettingsForm settings={settings} onChange={setSettings} />
}
```

**Correct: runs only once**

```tsx
function FilteredList({ items }: { items: Item[] }) {
  // buildSearchIndex() runs ONLY on initial render
  const [searchIndex, setSearchIndex] = useState(() => buildSearchIndex(items))
  const [query, setQuery] = useState('')
  
  return <SearchResults index={searchIndex} query={query} />
}

function UserProfile() {
  // JSON.parse runs only on initial render
  const [settings, setSettings] = useState(() => {
    const stored = localStorage.getItem('settings')
    return stored ? JSON.parse(stored) : {}
  })
  
  return <SettingsForm settings={settings} onChange={setSettings} />
}
```

Use lazy initialization when computing initial values from localStorage/sessionStorage, building data structures (indexes, maps), reading from the DOM, or performing heavy transformations.

For simple primitives (`useState(0)`), direct references (`useState(props.value)`), or cheap literals (`useState({})`), the function form is unnecessary.

### 5.7 Use Transitions for Non-Urgent Updates

**Impact: MEDIUM (maintains UI responsiveness)**

Mark frequent, non-urgent state updates as transitions to maintain UI responsiveness.

**Incorrect: blocks UI on every scroll**

```tsx
function ScrollTracker() {
  const [scrollY, setScrollY] = useState(0)
  useEffect(() => {
    const handler = () => setScrollY(window.scrollY)
    window.addEventListener('scroll', handler, { passive: true })
    return () => window.removeEventListener('scroll', handler)
  }, [])
}
```

**Correct: non-blocking updates**

```tsx
import { startTransition } from 'react'

function ScrollTracker() {
  const [scrollY, setScrollY] = useState(0)
  useEffect(() => {
    const handler = () => {
      startTransition(() => setScrollY(window.scrollY))
    }
    window.addEventListener('scroll', handler, { passive: true })
    return () => window.removeEventListener('scroll', handler)
  }, [])
}
```

---

## 6. Rendering Performance

**Impact: MEDIUM**

Optimizing the rendering process reduces the work the browser needs to do.

### 6.1 Animate SVG Wrapper Instead of SVG Element

**Impact: LOW (enables hardware acceleration)**

Many browsers don't have hardware acceleration for CSS3 animations on SVG elements. Wrap SVG in a `<div>` and animate the wrapper instead.

**Incorrect: animating SVG directly - no hardware acceleration**

```tsx
function LoadingSpinner() {
  return (
    <svg 
      className="animate-spin"
      width="24" 
      height="24" 
      viewBox="0 0 24 24"
    >
      <circle cx="12" cy="12" r="10" stroke="currentColor" />
    </svg>
  )
}
```

**Correct: animating wrapper div - hardware accelerated**

```tsx
function LoadingSpinner() {
  return (
    <div className="animate-spin">
      <svg 
        width="24" 
        height="24" 
        viewBox="0 0 24 24"
      >
        <circle cx="12" cy="12" r="10" stroke="currentColor" />
      </svg>
    </div>
  )
}
```

This applies to all CSS transforms and transitions (`transform`, `opacity`, `translate`, `scale`, `rotate`). The wrapper div allows browsers to use GPU acceleration for smoother animations.

### 6.2 CSS content-visibility for Long Lists

**Impact: HIGH (faster initial render)**

Apply `content-visibility: auto` to defer off-screen rendering.

**CSS:**

```css
.message-item {
  content-visibility: auto;
  contain-intrinsic-size: 0 80px;
}
```

**Example:**

```tsx
function MessageList({ messages }: { messages: Message[] }) {
  return (
    <div className="overflow-y-auto h-screen">
      {messages.map(msg => (
        <div key={msg.id} className="message-item">
          <Avatar user={msg.author} />
          <div>{msg.content}</div>
        </div>
      ))}
    </div>
  )
}
```

For 1000 messages, browser skips layout/paint for ~990 off-screen items (10× faster initial render).

### 6.3 Hoist Static JSX Elements

**Impact: LOW (avoids re-creation)**

Extract static JSX outside components to avoid re-creation.

**Incorrect: recreates element every render**

```tsx
function LoadingSkeleton() {
  return <div className="animate-pulse h-20 bg-gray-200" />
}

function Container() {
  return (
    <div>
      {loading && <LoadingSkeleton />}
    </div>
  )
}
```

**Correct: reuses same element**

```tsx
const loadingSkeleton = (
  <div className="animate-pulse h-20 bg-gray-200" />
)

function Container() {
  return (
    <div>
      {loading && loadingSkeleton}
    </div>
  )
}
```

This is especially helpful for large and static SVG nodes, which can be expensive to recreate on every render.

**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler automatically hoists static JSX elements and optimizes component re-renders, making manual hoisting unnecessary.

### 6.4 Optimize SVG Precision

**Impact: LOW (reduces file size)**

Reduce SVG coordinate precision to decrease file size. The optimal precision depends on the viewBox size, but lowering it is usually worthwhile.

**Incorrect: excessive precision**

```svg
<path d="M 10.293847 20.847362 L 30.938472 40.192837" />
```

**Correct: 1 decimal place**

```svg
<path d="M 10.3 20.8 L 30.9 40.2" />
```

**Automate with SVGO:**

```bash
npx svgo --precision=1 --multipass icon.svg
```

### 6.5 Prevent Hydration Mismatch Without Flickering

**Impact: MEDIUM (avoids visual flicker and hydration errors)**

When rendering content that depends on client-side storage (localStorage, cookies), avoid both SSR breakage and post-hydration flickering by injecting a synchronous script that updates the DOM before React hydrates.

**Incorrect: breaks SSR**

```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
  // localStorage is not available on server - throws error
  const theme = localStorage.getItem('theme') || 'light'
  
  return (
    <div className={theme}>
      {children}
    </div>
  )
}
```

Server-side rendering will fail because `localStorage` is undefined.

**Incorrect: visual flickering**

```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
  const [theme, setTheme] = useState('light')
  
  useEffect(() => {
    // Runs after hydration - causes visible flash
    const stored = localStorage.getItem('theme')
    if (stored) {
      setTheme(stored)
    }
  }, [])
  
  return (
    <div className={theme}>
      {children}
    </div>
  )
}
```

Component first renders with default value (`light`), then updates after hydration, causing a visible flash of incorrect content.

**Correct: no flicker, no hydration mismatch**

```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
  return (
    <>
      <div id="theme-wrapper">
        {children}
      </div>
      <script
        dangerouslySetInnerHTML={{
          __html: `
            (function() {
              try {
                var theme = localStorage.getItem('theme') || 'light';
                var el = document.getElementById('theme-wrapper');
                if (el) el.className = theme;
              } catch (e) {}
            })();
          `,
        }}
      />
    </>
  )
}
```

The inline script executes synchronously before showing the element, ensuring the DOM already has the correct value. No flickering, no hydration mismatch.

This pattern is especially useful for theme toggles, user preferences, authentication states, and any client-only data that should render immediately without flashing default values.

### 6.6 Use Activity Component for Show/Hide

**Impact: MEDIUM (preserves state/DOM)**

Use React's `<Activity>` to preserve state/DOM for expensive components that frequently toggle visibility.

**Usage:**

```tsx
import { Activity } from 'react'

function Dropdown({ isOpen }: Props) {
  return (
    <Activity mode={isOpen ? 'visible' : 'hidden'}>
      <ExpensiveMenu />
    </Activity>
  )
}
```

Avoids expensive re-renders and state loss.

### 6.7 Use Explicit Conditional Rendering

**Impact: LOW (prevents rendering 0 or NaN)**

Use explicit ternary operators (`? :`) instead of `&&` for conditional rendering when the condition can be `0`, `NaN`, or other falsy values that render.

**Incorrect: renders "0" when count is 0**

```tsx
function Badge({ count }: { count: number }) {
  return (
    <div>
      {count && <span className="badge">{count}</span>}
    </div>
  )
}

// When count = 0, renders: <div>0</div>
// When count = 5, renders: <div><span class="badge">5</span></div>
```

**Correct: renders nothing when count is 0**

```tsx
function Badge({ count }: { count: number }) {
  return (
    <div>
      {count > 0 ? <span className="badge">{count}</span> : null}
    </div>
  )
}

// When count = 0, renders: <div></div>
// When count = 5, renders: <div><span class="badge">5</span></div>
```

---

## 7. JavaScript Performance

**Impact: LOW-MEDIUM**

Micro-optimizations for hot paths can add up to meaningful improvements.

### 7.1 Batch DOM CSS Changes

**Impact: MEDIUM (reduces reflows/repaints)**

Avoid changing styles one property at a time. Group multiple CSS changes together via classes or `cssText` to minimize browser reflows.

**Incorrect: multiple reflows**

```typescript
function updateElementStyles(element: HTMLElement) {
  // Scattered writes invalidate layout repeatedly; any layout read
  // in between (offsetWidth, getBoundingClientRect) forces a reflow
  element.style.width = '100px'
  element.style.height = '200px'
  element.style.backgroundColor = 'blue'
  element.style.border = '1px solid black'
}
```

**Correct: add class - single reflow**

```css
/* CSS file */
.highlighted-box {
  width: 100px;
  height: 200px;
  background-color: blue;
  border: 1px solid black;
}
```

```typescript
// JavaScript
function updateElementStyles(element: HTMLElement) {
  element.classList.add('highlighted-box')
}
```

**Correct: change cssText - single reflow**

```typescript
function updateElementStyles(element: HTMLElement) {
  element.style.cssText = `
    width: 100px;
    height: 200px;
    background-color: blue;
    border: 1px solid black;
  `
}
```

**React example:**

```tsx
// Incorrect: changing styles one by one
function Box({ isHighlighted }: { isHighlighted: boolean }) {
  const ref = useRef<HTMLDivElement>(null)
  
  useEffect(() => {
    if (ref.current && isHighlighted) {
      ref.current.style.width = '100px'
      ref.current.style.height = '200px'
      ref.current.style.backgroundColor = 'blue'
    }
  }, [isHighlighted])
  
  return <div ref={ref}>Content</div>
}

// Correct: toggle class
function Box({ isHighlighted }: { isHighlighted: boolean }) {
  return (
    <div className={isHighlighted ? 'highlighted-box' : ''}>
      Content
    </div>
  )
}
```

Prefer CSS classes over inline styles when possible. Classes are cached by the browser and provide better separation of concerns.

### 7.2 Build Index Maps for Repeated Lookups

**Impact: LOW-MEDIUM (1M ops to 2K ops)**

Multiple `.find()` calls by the same key should use a Map.

**Incorrect (O(n) per lookup):**

```typescript
function processOrders(orders: Order[], users: User[]) {
  return orders.map(order => ({
    ...order,
    user: users.find(u => u.id === order.userId)
  }))
}
```

**Correct (O(1) per lookup):**

```typescript
function processOrders(orders: Order[], users: User[]) {
  const userById = new Map(users.map(u => [u.id, u]))

  return orders.map(order => ({
    ...order,
    user: userById.get(order.userId)
  }))
}
```

Build map once (O(n)), then all lookups are O(1).

For 1000 orders × 1000 users: 1M ops → 2K ops.

### 7.3 Cache Property Access in Loops

**Impact: LOW-MEDIUM (reduces lookups)**

Cache object property lookups in hot paths.

**Incorrect: 3 lookups × N iterations**

```typescript
for (let i = 0; i < arr.length; i++) {
  process(obj.config.settings.value)
}
```

**Correct: 1 lookup total**

```typescript
const value = obj.config.settings.value
const len = arr.length
for (let i = 0; i < len; i++) {
  process(value)
}
```

### 7.4 Cache Repeated Function Calls

**Impact: MEDIUM (avoid redundant computation)**

Use a module-level Map to cache function results when the same function is called repeatedly with the same inputs during render.

**Incorrect: redundant computation**

```tsx
function ProjectList({ projects }: { projects: Project[] }) {
  return (
    <div>
      {projects.map(project => {
        // slugify() called 100+ times for same project names
        const slug = slugify(project.name)
        
        return <ProjectCard key={project.id} slug={slug} />
      })}
    </div>
  )
}
```

**Correct: cached results**

```tsx
// Module-level cache
const slugifyCache = new Map<string, string>()

function cachedSlugify(text: string): string {
  if (slugifyCache.has(text)) {
    return slugifyCache.get(text)!
  }
  const result = slugify(text)
  slugifyCache.set(text, result)
  return result
}

function ProjectList({ projects }: { projects: Project[] }) {
  return (
    <div>
      {projects.map(project => {
        // Computed only once per unique project name
        const slug = cachedSlugify(project.name)
        
        return <ProjectCard key={project.id} slug={slug} />
      })}
    </div>
  )
}
```

**Simpler pattern for single-value functions:**

```typescript
let isLoggedInCache: boolean | null = null

function isLoggedIn(): boolean {
  if (isLoggedInCache !== null) {
    return isLoggedInCache
  }
  
  isLoggedInCache = document.cookie.includes('auth=')
  return isLoggedInCache
}

// Clear cache when auth changes
function onAuthChange() {
  isLoggedInCache = null
}
```

Use a Map (not a hook) so it works everywhere: utilities, event handlers, not just React components.

Reference: [https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast](https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast)

### 7.5 Cache Storage API Calls

**Impact: LOW-MEDIUM (reduces expensive I/O)**

`localStorage`, `sessionStorage`, and `document.cookie` are synchronous and expensive. Cache reads in memory.

**Incorrect: reads storage on every call**

```typescript
function getTheme() {
  return localStorage.getItem('theme') ?? 'light'
}
// Called 10 times = 10 storage reads
```

**Correct: Map cache**

```typescript
const storageCache = new Map<string, string | null>()

function getLocalStorage(key: string) {
  if (!storageCache.has(key)) {
    storageCache.set(key, localStorage.getItem(key))
  }
  return storageCache.get(key)
}

function setLocalStorage(key: string, value: string) {
  localStorage.setItem(key, value)
  storageCache.set(key, value)  // keep cache in sync
}
```

Use a Map (not a hook) so it works everywhere: utilities, event handlers, not just React components.

**Cookie caching:**

```typescript
let cookieCache: Record<string, string> | null = null

function getCookie(name: string) {
  if (!cookieCache) {
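    // Simplified parse: assumes values contain no '=' and no URI encoding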
    cookieCache = Object.fromEntries(
      document.cookie.split('; ').map(c => c.split('='))
    )
  }
  return cookieCache[name]
}
```

**Important: invalidate on external changes**

If storage can change externally (another tab, server-set cookies), invalidate the cache:

```typescript
window.addEventListener('storage', (e) => {
  if (e.key) storageCache.delete(e.key)
})

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    storageCache.clear()
  }
})
```

### 7.6 Combine Multiple Array Iterations

**Impact: LOW-MEDIUM (reduces iterations)**

Multiple `.filter()` or `.map()` calls iterate the array multiple times. Combine into one loop.

**Incorrect: 3 iterations**

```typescript
const admins = users.filter(u => u.isAdmin)
const testers = users.filter(u => u.isTester)
const inactive = users.filter(u => !u.isActive)
```

**Correct: 1 iteration**

```typescript
const admins: User[] = []
const testers: User[] = []
const inactive: User[] = []

for (const user of users) {
  if (user.isAdmin) admins.push(user)
  if (user.isTester) testers.push(user)
  if (!user.isActive) inactive.push(user)
}
```

### 7.7 Early Length Check for Array Comparisons

**Impact: MEDIUM-HIGH (avoids expensive operations when lengths differ)**

When comparing arrays with expensive operations (sorting, deep equality, serialization), check lengths first. If lengths differ, the arrays cannot be equal.

In real-world applications, this optimization is especially valuable when the comparison runs in hot paths (event handlers, render loops).

**Incorrect: always runs expensive comparison**

```typescript
function hasChanges(current: string[], original: string[]) {
  // Always sorts and joins, even when lengths differ
  return current.sort().join() !== original.sort().join()
}
```

Two O(n log n) sorts run even when `current.length` is 5 and `original.length` is 100. There is also the overhead of joining the arrays and comparing the resulting strings.

**Correct (O(1) length check first):**

```typescript
function hasChanges(current: string[], original: string[]) {
  // Early return if lengths differ
  if (current.length !== original.length) {
    return true
  }
  // Only sort/join when lengths match
  const currentSorted = current.toSorted()
  const originalSorted = original.toSorted()
  for (let i = 0; i < currentSorted.length; i++) {
    if (currentSorted[i] !== originalSorted[i]) {
      return true
    }
  }
  return false
}
```

This approach is more efficient because:

- It avoids the overhead of sorting and joining the arrays when lengths differ

- It avoids consuming memory for the joined strings (especially important for large arrays)

- It avoids mutating the original arrays

- It returns early when a difference is found

### 7.8 Early Return from Functions

**Impact: LOW-MEDIUM (avoids unnecessary computation)**

Return early when result is determined to skip unnecessary processing.

**Incorrect: processes all items even after finding answer**

```typescript
function validateUsers(users: User[]) {
  let hasError = false
  let errorMessage = ''
  
  for (const user of users) {
    if (!user.email) {
      hasError = true
      errorMessage = 'Email required'
    }
    if (!user.name) {
      hasError = true
      errorMessage = 'Name required'
    }
    // Continues checking all users even after error found
  }
  
  return hasError ? { valid: false, error: errorMessage } : { valid: true }
}
```

**Correct: returns immediately on first error**

```typescript
function validateUsers(users: User[]) {
  for (const user of users) {
    if (!user.email) {
      return { valid: false, error: 'Email required' }
    }
    if (!user.name) {
      return { valid: false, error: 'Name required' }
    }
  }

  return { valid: true }
}
```

### 7.9 Hoist RegExp Creation

**Impact: LOW-MEDIUM (avoids recreation)**

Don't create RegExp inside render. Hoist to module scope or memoize with `useMemo()`.

**Incorrect: new RegExp every render**

```tsx
function Highlighter({ text, query }: Props) {
  const regex = new RegExp(`(${query})`, 'gi')
  const parts = text.split(regex)
  return <>{parts.map((part, i) => ...)}</>
}
```

**Correct: memoize or hoist**

```tsx
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/

function Highlighter({ text, query }: Props) {
  const regex = useMemo(
    () => new RegExp(`(${escapeRegex(query)})`, 'gi'),
    [query]
  )
  const parts = text.split(regex)
  return <>{parts.map((part, i) => ...)}</>
}
```
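
The `escapeRegex` helper is assumed but not defined here; a minimal sketch using the standard character-class escape:

```typescript
// Escape characters with special meaning in a RegExp pattern
function escapeRegex(text: string): string {
  return text.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}
```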

**Warning: global regex has mutable state**

Global regex (`/g`) has mutable `lastIndex` state:

```typescript
const regex = /foo/g
regex.test('foo')  // true, lastIndex = 3
regex.test('foo')  // false, lastIndex = 0
```

### 7.10 Use Loop for Min/Max Instead of Sort

**Impact: LOW (O(n) instead of O(n log n))**

Finding the smallest or largest element only requires a single pass through the array. Sorting is wasteful and slower.

**Incorrect (O(n log n) - sort to find latest):**

```typescript
interface Project {
  id: string
  name: string
  updatedAt: number
}

function getLatestProject(projects: Project[]) {
  const sorted = [...projects].sort((a, b) => b.updatedAt - a.updatedAt)
  return sorted[0]
}
```

Sorts the entire array just to find the maximum value.

**Incorrect (O(n log n) - sort for oldest and newest):**

```typescript
function getOldestAndNewest(projects: Project[]) {
  const sorted = [...projects].sort((a, b) => a.updatedAt - b.updatedAt)
  return { oldest: sorted[0], newest: sorted[sorted.length - 1] }
}
```

Still sorts unnecessarily when only min/max are needed.

**Correct (O(n) - single loop):**

```typescript
function getLatestProject(projects: Project[]) {
  if (projects.length === 0) return null
  
  let latest = projects[0]
  
  for (let i = 1; i < projects.length; i++) {
    if (projects[i].updatedAt > latest.updatedAt) {
      latest = projects[i]
    }
  }
  
  return latest
}

function getOldestAndNewest(projects: Project[]) {
  if (projects.length === 0) return { oldest: null, newest: null }
  
  let oldest = projects[0]
  let newest = projects[0]
  
  for (let i = 1; i < projects.length; i++) {
    if (projects[i].updatedAt < oldest.updatedAt) oldest = projects[i]
    if (projects[i].updatedAt > newest.updatedAt) newest = projects[i]
  }
  
  return { oldest, newest }
}
```

Single pass through the array, no copying, no sorting.

**Alternative: Math.min/Math.max for small arrays**

```typescript
const numbers = [5, 2, 8, 1, 9]
const min = Math.min(...numbers)
const max = Math.max(...numbers)
```

This works for small arrays, but spreading passes every element as a separate function argument, so very large arrays can be slower and may throw a `RangeError` when they exceed the engine's argument limit. Use the loop approach for reliability.

### 7.11 Use Set/Map for O(1) Lookups

**Impact: LOW-MEDIUM (O(n) to O(1))**

Convert arrays to Set/Map for repeated membership checks.

**Incorrect (O(n) per check):**

```typescript
const allowedIds = ['a', 'b', 'c', ...]
items.filter(item => allowedIds.includes(item.id))
```

**Correct (O(1) per check):**

```typescript
const allowedIds = new Set(['a', 'b', 'c', ...])
items.filter(item => allowedIds.has(item.id))
```

### 7.12 Use toSorted() Instead of sort() for Immutability

**Impact: MEDIUM-HIGH (prevents mutation bugs in React state)**

`.sort()` mutates the array in place, which can cause bugs with React state and props. Use `.toSorted()` to create a new sorted array without mutation.

**Incorrect: mutates original array**

```tsx
function UserList({ users }: { users: User[] }) {
  // Mutates the users prop array!
  const sorted = useMemo(
    () => users.sort((a, b) => a.name.localeCompare(b.name)),
    [users]
  )
  return <div>{sorted.map(renderUser)}</div>
}
```

**Correct: creates new array**

```tsx
function UserList({ users }: { users: User[] }) {
  // Creates new sorted array, original unchanged
  const sorted = useMemo(
    () => users.toSorted((a, b) => a.name.localeCompare(b.name)),
    [users]
  )
  return <div>{sorted.map(renderUser)}</div>
}
```

**Why this matters in React:**

1. Props/state mutations break React's immutability model - React expects props and state to be treated as read-only

2. Causes stale closure bugs - Mutating arrays inside closures (callbacks, effects) can lead to unexpected behavior

**Browser support: fallback for older browsers**

`.toSorted()` is available in all modern browsers (Chrome 110+, Safari 16+, Firefox 115+, Node.js 20+). For older environments, use the spread operator:

```typescript
// Fallback for older browsers
const sorted = [...items].sort((a, b) => a.value - b.value)
```

**Other immutable array methods** (a short demonstration follows):

- `.toSorted()` - immutable sort

- `.toReversed()` - immutable reverse

- `.toSpliced()` - immutable splice

- `.with()` - immutable element replacement
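
A quick demonstration of the copy-then-modify contract (illustrative values):

```typescript
const items = [3, 1, 2]

const sorted = items.toSorted()       // [1, 2, 3]
const reversed = items.toReversed()   // [2, 1, 3]
const spliced = items.toSpliced(1, 1) // [3, 2] - removes one element at index 1
const replaced = items.with(0, 9)     // [9, 1, 2]

// The source array is untouched:
console.log(items)                    // [3, 1, 2]
```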

---

## 8. Advanced Patterns

**Impact: LOW**

Advanced patterns for specific cases that require careful implementation.

### 8.1 Store Event Handlers in Refs

**Impact: LOW (stable subscriptions)**

Store callbacks in refs when used in effects that shouldn't re-subscribe on callback changes.

**Incorrect: re-subscribes on every render**

```tsx
function useWindowEvent(event: string, handler: () => void) {
  useEffect(() => {
    window.addEventListener(event, handler)
    return () => window.removeEventListener(event, handler)
  }, [event, handler])
}
```

**Correct: stable subscription**

```tsx
import { useEffectEvent } from 'react'

function useWindowEvent(event: string, handler: () => void) {
  const onEvent = useEffectEvent(handler)

  useEffect(() => {
    window.addEventListener(event, onEvent)
    return () => window.removeEventListener(event, onEvent)
  }, [event])
}
```

**Alternative: store the handler in a ref on older React versions**

`useEffectEvent` provides the cleanest API for this pattern: it creates a stable function reference that always calls the latest version of the handler. If it isn't available in your React version, the same stable subscription comes from keeping the latest handler in a ref and reading `ref.current` inside a stable wrapper, as sketched below (see also `useLatest` in the next section).
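
A minimal sketch of the ref-based variant (illustrative, using only `useRef`/`useEffect`):

```tsx
import { useEffect, useRef } from 'react'

function useWindowEvent(event: string, handler: () => void) {
  const handlerRef = useRef(handler)

  // Keep the ref pointing at the latest handler without
  // re-running the subscription effect
  useEffect(() => {
    handlerRef.current = handler
  }, [handler])

  useEffect(() => {
    // Stable wrapper reads the current handler at call time
    const onEvent = () => handlerRef.current()
    window.addEventListener(event, onEvent)
    return () => window.removeEventListener(event, onEvent)
  }, [event])
}
```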

### 8.2 useLatest for Stable Callback Refs

**Impact: LOW (prevents effect re-runs)**

Access latest values in callbacks without adding them to dependency arrays. Prevents effect re-runs while avoiding stale closures.

**Implementation:**

```typescript
function useLatest<T>(value: T) {
  const ref = useRef(value)
  useEffect(() => {
    ref.current = value
  }, [value])
  return ref
}
```

**Incorrect: effect re-runs on every callback change**

```tsx
function SearchInput({ onSearch }: { onSearch: (q: string) => void }) {
  const [query, setQuery] = useState('')

  useEffect(() => {
    const timeout = setTimeout(() => onSearch(query), 300)
    return () => clearTimeout(timeout)
  }, [query, onSearch])
}
```

**Correct: stable effect, fresh callback**

```tsx
function SearchInput({ onSearch }: { onSearch: (q: string) => void }) {
  const [query, setQuery] = useState('')
  const onSearchRef = useLatest(onSearch)

  useEffect(() => {
    const timeout = setTimeout(() => onSearchRef.current(query), 300)
    return () => clearTimeout(timeout)
  }, [query])
}
```

---

## References

1. [https://react.dev](https://react.dev)
2. [https://nextjs.org](https://nextjs.org)
3. [https://swr.vercel.app](https://swr.vercel.app)
4. [https://github.com/shuding/better-all](https://github.com/shuding/better-all)
5. [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)
6. [https://vercel.com/blog/how-we-optimized-package-imports-in-next-js](https://vercel.com/blog/how-we-optimized-package-imports-in-next-js)
7. [https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast](https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast)
async_api_routes
vercel SKILL.md License: See repository Version: Unknown
Imported skill async_api_routes from vercel
View skill
---
title: Prevent Waterfall Chains in API Routes
impact: CRITICAL
impactDescription: 2-10× improvement
tags: api-routes, server-actions, waterfalls, parallelization
---

## Prevent Waterfall Chains in API Routes

In API routes and Server Actions, start independent operations immediately, even if you don't await them yet.

**Incorrect (config waits for auth, data waits for both):**

```typescript
export async function GET(request: Request) {
  const session = await auth()
  const config = await fetchConfig()
  const data = await fetchData(session.user.id)
  return Response.json({ data, config })
}
```

**Correct (auth and config start immediately):**

```typescript
export async function GET(request: Request) {
  const sessionPromise = auth()
  const configPromise = fetchConfig()
  const session = await sessionPromise
  const [config, data] = await Promise.all([
    configPromise,
    fetchData(session.user.id)
  ])
  return Response.json({ data, config })
}
```

For operations with more complex dependency chains, use `better-all` to automatically maximize parallelism (see Dependency-Based Parallelization).
async_defer_await
vercel SKILL.md License: See repository Version: Unknown
Imported skill async_defer_await from vercel
View skill
---
title: Defer Await Until Needed
impact: HIGH
impactDescription: avoids blocking unused code paths
tags: async, await, conditional, optimization
---

## Defer Await Until Needed

Move `await` operations into the branches where they're actually used to avoid blocking code paths that don't need them.

**Incorrect (blocks both branches):**

```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
  const userData = await fetchUserData(userId)
  
  if (skipProcessing) {
    // Returns immediately but still waited for userData
    return { skipped: true }
  }
  
  // Only this branch uses userData
  return processUserData(userData)
}
```

**Correct (only blocks when needed):**

```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
  if (skipProcessing) {
    // Returns immediately without waiting
    return { skipped: true }
  }
  
  // Fetch only when needed
  const userData = await fetchUserData(userId)
  return processUserData(userData)
}
```

**Another example (early return optimization):**

```typescript
// Incorrect: always fetches permissions
async function updateResource(resourceId: string, userId: string) {
  const permissions = await fetchPermissions(userId)
  const resource = await getResource(resourceId)
  
  if (!resource) {
    return { error: 'Not found' }
  }
  
  if (!permissions.canEdit) {
    return { error: 'Forbidden' }
  }
  
  return await updateResourceData(resource, permissions)
}

// Correct: fetches only when needed
async function updateResource(resourceId: string, userId: string) {
  const resource = await getResource(resourceId)
  
  if (!resource) {
    return { error: 'Not found' }
  }
  
  const permissions = await fetchPermissions(userId)
  
  if (!permissions.canEdit) {
    return { error: 'Forbidden' }
  }
  
  return await updateResourceData(resource, permissions)
}
```

This optimization is especially valuable when the skipped branch is frequently taken, or when the deferred operation is expensive.
async_dependencies
vercel SKILL.md License: See repository Version: Unknown
Imported skill async_dependencies from vercel
View skill
---
title: Dependency-Based Parallelization
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, dependencies, better-all
---

## Dependency-Based Parallelization

For operations with partial dependencies, use `better-all` to maximize parallelism. It automatically starts each task at the earliest possible moment.

**Incorrect (profile waits for config unnecessarily):**

```typescript
const [user, config] = await Promise.all([
  fetchUser(),
  fetchConfig()
])
const profile = await fetchProfile(user.id)
```

**Correct (config and profile run in parallel):**

```typescript
import { all } from 'better-all'

const { user, config, profile } = await all({
  async user() { return fetchUser() },
  async config() { return fetchConfig() },
  async profile() {
    return fetchProfile((await this.$.user).id)
  }
})
```

Reference: [https://github.com/shuding/better-all](https://github.com/shuding/better-all)
async_parallel
vercel SKILL.md License: See repository Version: Unknown
Imported skill async_parallel from vercel
View skill
---
title: Promise.all() for Independent Operations
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, promises, waterfalls
---

## Promise.all() for Independent Operations

When async operations have no interdependencies, execute them concurrently using `Promise.all()`.

**Incorrect (sequential execution, 3 round trips):**

```typescript
const user = await fetchUser()
const posts = await fetchPosts()
const comments = await fetchComments()
```

**Correct (parallel execution, 1 round trip):**

```typescript
const [user, posts, comments] = await Promise.all([
  fetchUser(),
  fetchPosts(),
  fetchComments()
])
```
async_suspense_boundaries
vercel SKILL.md License: See repository Version: Unknown
Imported skill async_suspense_boundaries from vercel
View skill
---
title: Strategic Suspense Boundaries
impact: HIGH
impactDescription: faster initial paint
tags: async, suspense, streaming, layout-shift
---

## Strategic Suspense Boundaries

Instead of awaiting data in async components before returning JSX, use Suspense boundaries to show the wrapper UI faster while data loads.

**Incorrect (wrapper blocked by data fetching):**

```tsx
async function Page() {
  const data = await fetchData() // Blocks entire page
  
  return (
    <div>
      <div>Sidebar</div>
      <div>Header</div>
      <div>
        <DataDisplay data={data} />
      </div>
      <div>Footer</div>
    </div>
  )
}
```

The entire layout waits for data even though only the middle section needs it.

**Correct (wrapper shows immediately, data streams in):**

```tsx
function Page() {
  return (
    <div>
      <div>Sidebar</div>
      <div>Header</div>
      <div>
        <Suspense fallback={<Skeleton />}>
          <DataDisplay />
        </Suspense>
      </div>
      <div>Footer</div>
    </div>
  )
}

async function DataDisplay() {
  const data = await fetchData() // Only blocks this component
  return <div>{data.content}</div>
}
```

Sidebar, Header, and Footer render immediately. Only DataDisplay waits for data.

**Alternative (share promise across components):**

```tsx
function Page() {
  // Start fetch immediately, but don't await
  const dataPromise = fetchData()
  
  return (
    <div>
      <div>Sidebar</div>
      <div>Header</div>
      <Suspense fallback={<Skeleton />}>
        <DataDisplay dataPromise={dataPromise} />
        <DataSummary dataPromise={dataPromise} />
      </Suspense>
      <div>Footer</div>
    </div>
  )
}

function DataDisplay({ dataPromise }: { dataPromise: Promise<Data> }) {
  const data = use(dataPromise) // Unwraps the promise
  return <div>{data.content}</div>
}

function DataSummary({ dataPromise }: { dataPromise: Promise<Data> }) {
  const data = use(dataPromise) // Reuses the same promise
  return <div>{data.summary}</div>
}
```

Both components share the same promise, so only one fetch occurs. Layout renders immediately while both components wait together.

**When NOT to use this pattern:**

- Critical data needed for layout decisions (affects positioning)
- SEO-critical content above the fold
- Small, fast queries where suspense overhead isn't worth it
- When you want to avoid layout shift (loading → content jump)

**Trade-off:** Faster initial paint vs potential layout shift. Choose based on your UX priorities.
bundle_barrel_imports
vercel SKILL.md License: See repository Version: Unknown
Imported skill bundle_barrel_imports from vercel
View skill
---
title: Avoid Barrel File Imports
impact: CRITICAL
impactDescription: 200-800ms import cost, slow builds
tags: bundle, imports, tree-shaking, barrel-files, performance
---

## Avoid Barrel File Imports

Import directly from source files instead of barrel files to avoid loading thousands of unused modules. **Barrel files** are entry points that re-export multiple modules (e.g., `index.js` that does `export * from './module'`).

Popular icon and component libraries can have **up to 10,000 re-exports** in their entry file. For many React packages, **it takes 200-800ms just to import them**, affecting both development speed and production cold starts.

**Why tree-shaking doesn't help:** When a library is marked as external (not bundled), the bundler can't optimize it. If you bundle it to enable tree-shaking, builds become substantially slower analyzing the entire module graph.

**Incorrect (imports entire library):**

```tsx
import { Check, X, Menu } from 'lucide-react'
// Loads 1,583 modules, takes ~2.8s extra in dev
// Runtime cost: 200-800ms on every cold start

import { Button, TextField } from '@mui/material'
// Loads 2,225 modules, takes ~4.2s extra in dev
```

**Correct (imports only what you need):**

```tsx
import Check from 'lucide-react/dist/esm/icons/check'
import X from 'lucide-react/dist/esm/icons/x'
import Menu from 'lucide-react/dist/esm/icons/menu'
// Loads only 3 modules (~2KB vs ~1MB)

import Button from '@mui/material/Button'
import TextField from '@mui/material/TextField'
// Loads only what you use
```

**Alternative (Next.js 13.5+):**

```js
// next.config.js - use optimizePackageImports
module.exports = {
  experimental: {
    optimizePackageImports: ['lucide-react', '@mui/material']
  }
}

// Then you can keep the ergonomic barrel imports:
import { Check, X, Menu } from 'lucide-react'
// Automatically transformed to direct imports at build time
```

Direct imports provide 15-70% faster dev boot, 28% faster builds, 40% faster cold starts, and significantly faster HMR.

Libraries commonly affected: `lucide-react`, `@mui/material`, `@mui/icons-material`, `@tabler/icons-react`, `react-icons`, `@headlessui/react`, `@radix-ui/react-*`, `lodash`, `ramda`, `date-fns`, `rxjs`, `react-use`.

Reference: [How we optimized package imports in Next.js](https://vercel.com/blog/how-we-optimized-package-imports-in-next-js)
bundle_conditional
vercel SKILL.md License: See repository Version: Unknown
Imported skill bundle_conditional from vercel
View skill
---
title: Conditional Module Loading
impact: HIGH
impactDescription: loads large data only when needed
tags: bundle, conditional-loading, lazy-loading
---

## Conditional Module Loading

Load large data or modules only when a feature is activated.

**Example (lazy-load animation frames):**

```tsx
function AnimationPlayer({ enabled, setEnabled }: { enabled: boolean; setEnabled: React.Dispatch<React.SetStateAction<boolean>> }) {
  const [frames, setFrames] = useState<Frame[] | null>(null)

  useEffect(() => {
    if (enabled && !frames && typeof window !== 'undefined') {
      import('./animation-frames.js')
        .then(mod => setFrames(mod.frames))
        .catch(() => setEnabled(false))
    }
  }, [enabled, frames, setEnabled])

  if (!frames) return <Skeleton />
  return <Canvas frames={frames} />
}
```

The `typeof window !== 'undefined'` check prevents bundling this module for SSR, optimizing server bundle size and build speed.
bundle_defer_third_party
vercel SKILL.md License: See repository Version: Unknown
Imported skill bundle_defer_third_party from vercel
View skill
---
title: Defer Non-Critical Third-Party Libraries
impact: MEDIUM
impactDescription: loads after hydration
tags: bundle, third-party, analytics, defer
---

## Defer Non-Critical Third-Party Libraries

Analytics, logging, and error tracking don't block user interaction. Load them after hydration.

**Incorrect (blocks initial bundle):**

```tsx
import { Analytics } from '@vercel/analytics/react'

export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        {children}
        <Analytics />
      </body>
    </html>
  )
}
```

**Correct (loads after hydration):**

```tsx
import dynamic from 'next/dynamic'

const Analytics = dynamic(
  () => import('@vercel/analytics/react').then(m => m.Analytics),
  { ssr: false }
)

export default function RootLayout({ children }) {
  return (
    <html>
      <body>
        {children}
        <Analytics />
      </body>
    </html>
  )
}
```
bundle_dynamic_imports
vercel SKILL.md License: See repository Version: Unknown
Imported skill bundle_dynamic_imports from vercel
View skill
---
title: Dynamic Imports for Heavy Components
impact: CRITICAL
impactDescription: directly affects TTI and LCP
tags: bundle, dynamic-import, code-splitting, next-dynamic
---

## Dynamic Imports for Heavy Components

Use `next/dynamic` to lazy-load large components not needed on initial render.

**Incorrect (Monaco bundles with main chunk ~300KB):**

```tsx
import { MonacoEditor } from './monaco-editor'

function CodePanel({ code }: { code: string }) {
  return <MonacoEditor value={code} />
}
```

**Correct (Monaco loads on demand):**

```tsx
import dynamic from 'next/dynamic'

const MonacoEditor = dynamic(
  () => import('./monaco-editor').then(m => m.MonacoEditor),
  { ssr: false }
)

function CodePanel({ code }: { code: string }) {
  return <MonacoEditor value={code} />
}
```
bundle_preload
vercel SKILL.md License: See repository Version: Unknown
Imported skill bundle_preload from vercel
View skill
---
title: Preload Based on User Intent
impact: MEDIUM
impactDescription: reduces perceived latency
tags: bundle, preload, user-intent, hover
---

## Preload Based on User Intent

Preload heavy bundles before they're needed to reduce perceived latency.

**Example (preload on hover/focus):**

```tsx
function EditorButton({ onClick }: { onClick: () => void }) {
  const preload = () => {
    if (typeof window !== 'undefined') {
      void import('./monaco-editor')
    }
  }

  return (
    <button
      onMouseEnter={preload}
      onFocus={preload}
      onClick={onClick}
    >
      Open Editor
    </button>
  )
}
```

**Example (preload when feature flag is enabled):**

```tsx
function FlagsProvider({ children, flags }: Props) {
  useEffect(() => {
    if (flags.editorEnabled && typeof window !== 'undefined') {
      void import('./monaco-editor').then(mod => mod.init())
    }
  }, [flags.editorEnabled])

  return <FlagsContext.Provider value={flags}>
    {children}
  </FlagsContext.Provider>
}
```

The `typeof window !== 'undefined'` check prevents bundling preloaded modules for SSR, optimizing server bundle size and build speed.
claude
vercel SKILL.md License: See repository Version: Unknown
Imported skill claude from vercel
View skill
# AGENTS.md

This file provides guidance to AI coding agents (Claude Code, Cursor, Copilot, etc.) when working with code in this repository.

## Repository Overview

A collection of skills for Claude.ai and Claude Code for working with Vercel deployments. Skills are packaged instructions and scripts that extend Claude's capabilities.

## Creating a New Skill

### Directory Structure

```
skills/
  {skill-name}/           # kebab-case directory name
    SKILL.md              # Required: skill definition
    scripts/              # Required: executable scripts
      {script-name}.sh    # Bash scripts (preferred)
  {skill-name}.zip        # Required: packaged for distribution
```

### Naming Conventions

- **Skill directory**: `kebab-case` (e.g., `vercel-deploy`, `log-monitor`)
- **SKILL.md**: Always uppercase, always this exact filename
- **Scripts**: `kebab-case.sh` (e.g., `deploy.sh`, `fetch-logs.sh`)
- **Zip file**: Must match directory name exactly: `{skill-name}.zip`

### SKILL.md Format

````markdown
---
name: {skill-name}
description: {One sentence describing when to use this skill. Include trigger phrases like "Deploy my app", "Check logs", etc.}
---

# {Skill Title}

{Brief description of what the skill does.}

## How It Works

{Numbered list explaining the skill's workflow}

## Usage

```bash
bash /mnt/skills/user/{skill-name}/scripts/{script}.sh [args]
```

**Arguments:**
- `arg1` - Description (defaults to X)

**Examples:**
{Show 2-3 common usage patterns}

## Output

{Show example output users will see}

## Present Results to User

{Template for how Claude should format results when presenting to users}

## Troubleshooting

{Common issues and solutions, especially network/permissions errors}
````

### Best Practices for Context Efficiency

Skills are loaded on-demand — only the skill name and description are loaded at startup. The full `SKILL.md` loads into context only when the agent decides the skill is relevant. To minimize context usage:

- **Keep SKILL.md under 500 lines** — put detailed reference material in separate files
- **Write specific descriptions** — helps the agent know exactly when to activate the skill
- **Use progressive disclosure** — reference supporting files that get read only when needed
- **Prefer scripts over inline code** — script execution doesn't consume context (only output does)
- **File references work one level deep** — link directly from SKILL.md to supporting files

### Script Requirements

- Use `#!/bin/bash` shebang
- Use `set -e` for fail-fast behavior
- Write status messages to stderr: `echo "Message" >&2`
- Write machine-readable output (JSON) to stdout
- Include a cleanup trap for temp files
- Reference the script path as `/mnt/skills/user/{skill-name}/scripts/{script}.sh`

### Creating the Zip Package

After creating or updating a skill:

```bash
cd skills
zip -r {skill-name}.zip {skill-name}/
```

### End-User Installation

Document these two installation methods for users:

**Claude Code:**
```bash
cp -r skills/{skill-name} ~/.claude/skills/
```

**claude.ai:**
Add the skill to project knowledge or paste SKILL.md contents into the conversation.

If the skill requires network access, instruct users to add required domains at `claude.ai/settings/capabilities`.
client_event_listeners
vercel SKILL.md License: See repository Version: Unknown
Imported skill client_event_listeners from vercel
View skill
---
title: Deduplicate Global Event Listeners
impact: LOW
impactDescription: single listener for N components
tags: client, swr, event-listeners, subscription
---

## Deduplicate Global Event Listeners

Use `useSWRSubscription()` to share global event listeners across component instances.

**Incorrect (N instances = N listeners):**

```tsx
function useKeyboardShortcut(key: string, callback: () => void) {
  useEffect(() => {
    const handler = (e: KeyboardEvent) => {
      if (e.metaKey && e.key === key) {
        callback()
      }
    }
    window.addEventListener('keydown', handler)
    return () => window.removeEventListener('keydown', handler)
  }, [key, callback])
}
```

When using the `useKeyboardShortcut` hook multiple times, each instance will register a new listener.

**Correct (N instances = 1 listener):**

```tsx
import useSWRSubscription from 'swr/subscription'

// Module-level Map to track callbacks per key
const keyCallbacks = new Map<string, Set<() => void>>()

function useKeyboardShortcut(key: string, callback: () => void) {
  // Register this callback in the Map
  useEffect(() => {
    if (!keyCallbacks.has(key)) {
      keyCallbacks.set(key, new Set())
    }
    keyCallbacks.get(key)!.add(callback)

    return () => {
      const set = keyCallbacks.get(key)
      if (set) {
        set.delete(callback)
        if (set.size === 0) {
          keyCallbacks.delete(key)
        }
      }
    }
  }, [key, callback])

  useSWRSubscription('global-keydown', () => {
    const handler = (e: KeyboardEvent) => {
      if (e.metaKey && keyCallbacks.has(e.key)) {
        keyCallbacks.get(e.key)!.forEach(cb => cb())
      }
    }
    window.addEventListener('keydown', handler)
    return () => window.removeEventListener('keydown', handler)
  })
}

function Profile() {
  // Multiple shortcuts will share the same listener
  useKeyboardShortcut('p', () => { /* ... */ }) 
  useKeyboardShortcut('k', () => { /* ... */ })
  // ...
}
```
client_localstorage_schema
vercel SKILL.md License: See repository Version: Unknown
Imported skill client_localstorage_schema from vercel
View skill
---
title: Version and Minimize localStorage Data
impact: MEDIUM
impactDescription: prevents schema conflicts, reduces storage size
tags: client, localStorage, storage, versioning, data-minimization
---

## Version and Minimize localStorage Data

Add version prefix to keys and store only needed fields. Prevents schema conflicts and accidental storage of sensitive data.

**Incorrect:**

```typescript
// No version, stores everything, no error handling
localStorage.setItem('userConfig', JSON.stringify(fullUserObject))
const data = localStorage.getItem('userConfig')
```

**Correct:**

```typescript
const VERSION = 'v2'

function saveConfig(config: { theme: string; language: string }) {
  try {
    localStorage.setItem(`userConfig:${VERSION}`, JSON.stringify(config))
  } catch {
    // Throws in incognito/private browsing, quota exceeded, or disabled
  }
}

function loadConfig() {
  try {
    const data = localStorage.getItem(`userConfig:${VERSION}`)
    return data ? JSON.parse(data) : null
  } catch {
    return null
  }
}

// Migration from v1 to v2
function migrate() {
  try {
    const v1 = localStorage.getItem('userConfig:v1')
    if (v1) {
      const old = JSON.parse(v1)
      saveConfig({ theme: old.darkMode ? 'dark' : 'light', language: old.lang })
      localStorage.removeItem('userConfig:v1')
    }
  } catch {}
}
```

**Store minimal fields from server responses:**

```typescript
// User object has 20+ fields, only store what UI needs
function cachePrefs(user: FullUser) {
  try {
    localStorage.setItem('prefs:v1', JSON.stringify({
      theme: user.preferences.theme,
      notifications: user.preferences.notifications
    }))
  } catch {}
}
```

**Always wrap in try-catch:** `getItem()` and `setItem()` throw in incognito/private browsing (Safari, Firefox), when quota exceeded, or when disabled.

**Benefits:** Schema evolution via versioning, reduced storage size, prevents storing tokens/PII/internal flags.
client_passive_event_listeners
vercel SKILL.md License: See repository Version: Unknown
Imported skill client_passive_event_listeners from vercel
View skill
---
title: Use Passive Event Listeners for Scrolling Performance
impact: MEDIUM
impactDescription: eliminates scroll delay caused by event listeners
tags: client, event-listeners, scrolling, performance, touch, wheel
---

## Use Passive Event Listeners for Scrolling Performance

Add `{ passive: true }` to touch and wheel event listeners to enable immediate scrolling. Browsers normally wait for listeners to finish to check if `preventDefault()` is called, causing scroll delay.

**Incorrect:**

```typescript
useEffect(() => {
  const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
  const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
  
  document.addEventListener('touchstart', handleTouch)
  document.addEventListener('wheel', handleWheel)
  
  return () => {
    document.removeEventListener('touchstart', handleTouch)
    document.removeEventListener('wheel', handleWheel)
  }
}, [])
```

**Correct:**

```typescript
useEffect(() => {
  const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
  const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
  
  document.addEventListener('touchstart', handleTouch, { passive: true })
  document.addEventListener('wheel', handleWheel, { passive: true })
  
  return () => {
    document.removeEventListener('touchstart', handleTouch)
    document.removeEventListener('wheel', handleWheel)
  }
}, [])
```

**Use passive when:** tracking/analytics, logging, any listener that doesn't call `preventDefault()`.

**Don't use passive when:** implementing custom swipe gestures, custom zoom controls, or any listener that needs `preventDefault()`.
client_swr_dedup
vercel SKILL.md License: See repository Version: Unknown
Imported skill client_swr_dedup from vercel
View skill
---
title: Use SWR for Automatic Deduplication
impact: MEDIUM-HIGH
impactDescription: automatic deduplication
tags: client, swr, deduplication, data-fetching
---

## Use SWR for Automatic Deduplication

SWR enables request deduplication, caching, and revalidation across component instances.

**Incorrect (no deduplication, each instance fetches):**

```tsx
function UserList() {
  const [users, setUsers] = useState([])
  useEffect(() => {
    fetch('/api/users')
      .then(r => r.json())
      .then(setUsers)
  }, [])
}
```

**Correct (multiple instances share one request):**

```tsx
import useSWR from 'swr'

function UserList() {
  const { data: users } = useSWR('/api/users', fetcher)
}
```

**For immutable data:**

```tsx
import useSWRImmutable from 'swr/immutable'

function StaticContent() {
  const { data } = useSWRImmutable('/api/config', fetcher)
}
```

**For mutations:**

```tsx
import useSWRMutation from 'swr/mutation'

function UpdateButton() {
  const { trigger } = useSWRMutation('/api/user', updateUser)
  return <button onClick={() => trigger()}>Update</button>
}
```

Reference: [https://swr.vercel.app](https://swr.vercel.app)
js_batch_dom_css
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_batch_dom_css from vercel
View skill
---
title: Batch DOM CSS Changes
impact: MEDIUM
impactDescription: reduces reflows/repaints
tags: javascript, dom, css, performance, reflow
---

## Batch DOM CSS Changes

Avoid interleaving style writes with layout reads. When you read a layout property (like `offsetWidth`, `getBoundingClientRect()`, or `getComputedStyle()`) between style changes, the browser is forced to trigger a synchronous reflow.

**Incorrect (interleaved reads and writes force reflows):**

```typescript
function updateElementStyles(element: HTMLElement) {
  element.style.width = '100px'
  const width = element.offsetWidth  // Forces reflow
  element.style.height = '200px'
  const height = element.offsetHeight  // Forces another reflow
}
```

**Correct (batch writes, then read once):**

```typescript
function updateElementStyles(element: HTMLElement) {
  // Batch all writes together
  element.style.width = '100px'
  element.style.height = '200px'
  element.style.backgroundColor = 'blue'
  element.style.border = '1px solid black'
  
  // Read after all writes are done (single reflow)
  const { width, height } = element.getBoundingClientRect()
}
```

**Better: use CSS classes**

```css
.highlighted-box {
  width: 100px;
  height: 200px;
  background-color: blue;
  border: 1px solid black;
}
```

```typescript
function updateElementStyles(element: HTMLElement) {
  element.classList.add('highlighted-box')

  const { width, height } = element.getBoundingClientRect()
}
```

Prefer CSS classes over inline styles when possible. CSS files are cached by the browser, and classes provide better separation of concerns and are easier to maintain.
js_cache_function_results
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_cache_function_results from vercel
View skill
---
title: Cache Repeated Function Calls
impact: MEDIUM
impactDescription: avoid redundant computation
tags: javascript, cache, memoization, performance
---

## Cache Repeated Function Calls

Use a module-level Map to cache function results when the same function is called repeatedly with the same inputs during render.

**Incorrect (redundant computation):**

```tsx
function ProjectList({ projects }: { projects: Project[] }) {
  return (
    <div>
      {projects.map(project => {
        // slugify() called 100+ times for same project names
        const slug = slugify(project.name)
        
        return <ProjectCard key={project.id} slug={slug} />
      })}
    </div>
  )
}
```

**Correct (cached results):**

```tsx
// Module-level cache
const slugifyCache = new Map<string, string>()

function cachedSlugify(text: string): string {
  if (slugifyCache.has(text)) {
    return slugifyCache.get(text)!
  }
  const result = slugify(text)
  slugifyCache.set(text, result)
  return result
}

function ProjectList({ projects }: { projects: Project[] }) {
  return (
    <div>
      {projects.map(project => {
        // Computed only once per unique project name
        const slug = cachedSlugify(project.name)
        
        return <ProjectCard key={project.id} slug={slug} />
      })}
    </div>
  )
}
```

**Simpler pattern for single-value functions:**

```typescript
let isLoggedInCache: boolean | null = null

function isLoggedIn(): boolean {
  if (isLoggedInCache !== null) {
    return isLoggedInCache
  }
  
  isLoggedInCache = document.cookie.includes('auth=')
  return isLoggedInCache
}

// Clear cache when auth changes
function onAuthChange() {
  isLoggedInCache = null
}
```

Use a Map (not a hook) so it works everywhere: utilities, event handlers, not just React components.

Reference: [How we made the Vercel Dashboard twice as fast](https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast)
js_cache_property_access
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_cache_property_access from vercel
View skill
---
title: Cache Property Access in Loops
impact: LOW-MEDIUM
impactDescription: reduces lookups
tags: javascript, loops, optimization, caching
---

## Cache Property Access in Loops

Cache object property lookups in hot paths.

**Incorrect (3 lookups × N iterations):**

```typescript
for (let i = 0; i < arr.length; i++) {
  process(obj.config.settings.value)
}
```

**Correct (1 lookup total):**

```typescript
const value = obj.config.settings.value
const len = arr.length
for (let i = 0; i < len; i++) {
  process(value)
}
```
js_cache_storage
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_cache_storage from vercel
View skill
---
title: Cache Storage API Calls
impact: LOW-MEDIUM
impactDescription: reduces expensive I/O
tags: javascript, localStorage, storage, caching, performance
---

## Cache Storage API Calls

`localStorage`, `sessionStorage`, and `document.cookie` are synchronous and expensive. Cache reads in memory.

**Incorrect (reads storage on every call):**

```typescript
function getTheme() {
  return localStorage.getItem('theme') ?? 'light'
}
// Called 10 times = 10 storage reads
```

**Correct (Map cache):**

```typescript
const storageCache = new Map<string, string | null>()

function getLocalStorage(key: string) {
  if (!storageCache.has(key)) {
    storageCache.set(key, localStorage.getItem(key))
  }
  return storageCache.get(key)
}

function setLocalStorage(key: string, value: string) {
  localStorage.setItem(key, value)
  storageCache.set(key, value)  // keep cache in sync
}
```

Use a Map (not a hook) so it works everywhere: utilities, event handlers, not just React components.

**Cookie caching:**

```typescript
let cookieCache: Record<string, string> | null = null

function getCookie(name: string) {
  if (!cookieCache) {
    cookieCache = Object.fromEntries(
      document.cookie.split('; ').map(c => c.split('='))
    )
  }
  return cookieCache[name]
}
```

**Important (invalidate on external changes):**

If storage can change externally (another tab, server-set cookies), invalidate cache:

```typescript
window.addEventListener('storage', (e) => {
  if (e.key) storageCache.delete(e.key)
})

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    storageCache.clear()
  }
})
```
js_combine_iterations
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_combine_iterations from vercel
View skill
---
title: Combine Multiple Array Iterations
impact: LOW-MEDIUM
impactDescription: reduces iterations
tags: javascript, arrays, loops, performance
---

## Combine Multiple Array Iterations

Multiple `.filter()` or `.map()` calls iterate the array multiple times. Combine into one loop.

**Incorrect (3 iterations):**

```typescript
const admins = users.filter(u => u.isAdmin)
const testers = users.filter(u => u.isTester)
const inactive = users.filter(u => !u.isActive)
```

**Correct (1 iteration):**

```typescript
const admins: User[] = []
const testers: User[] = []
const inactive: User[] = []

for (const user of users) {
  if (user.isAdmin) admins.push(user)
  if (user.isTester) testers.push(user)
  if (!user.isActive) inactive.push(user)
}
```
js_early_exit
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_early_exit from vercel
View skill
---
title: Early Return from Functions
impact: LOW-MEDIUM
impactDescription: avoids unnecessary computation
tags: javascript, functions, optimization, early-return
---

## Early Return from Functions

Return early when result is determined to skip unnecessary processing.

**Incorrect (processes all items even after finding answer):**

```typescript
function validateUsers(users: User[]) {
  let hasError = false
  let errorMessage = ''
  
  for (const user of users) {
    if (!user.email) {
      hasError = true
      errorMessage = 'Email required'
    }
    if (!user.name) {
      hasError = true
      errorMessage = 'Name required'
    }
    // Continues checking all users even after error found
  }
  
  return hasError ? { valid: false, error: errorMessage } : { valid: true }
}
```

**Correct (returns immediately on first error):**

```typescript
function validateUsers(users: User[]) {
  for (const user of users) {
    if (!user.email) {
      return { valid: false, error: 'Email required' }
    }
    if (!user.name) {
      return { valid: false, error: 'Name required' }
    }
  }

  return { valid: true }
}
```
js_hoist_regexp
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_hoist_regexp from vercel
View skill
---
title: Hoist RegExp Creation
impact: LOW-MEDIUM
impactDescription: avoids recreation
tags: javascript, regexp, optimization, memoization
---

## Hoist RegExp Creation

Don't create RegExp inside render. Hoist to module scope or memoize with `useMemo()`.

**Incorrect (new RegExp every render):**

```tsx
function Highlighter({ text, query }: Props) {
  const regex = new RegExp(`(${query})`, 'gi')
  const parts = text.split(regex)
  return <>{parts.map((part, i) => ...)}</>
}
```

**Correct (memoize or hoist):**

```tsx
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/

function Highlighter({ text, query }: Props) {
  const regex = useMemo(
    () => new RegExp(`(${escapeRegex(query)})`, 'gi'),
    [query]
  )
  const parts = text.split(regex)
  return <>{parts.map((part, i) => ...)}</>
}
```

**Warning (global regex has mutable state):**

Global regex (`/g`) has mutable `lastIndex` state:

```typescript
const regex = /foo/g
regex.test('foo')  // true, lastIndex = 3
regex.test('foo')  // false, lastIndex = 0
```
js_index_maps
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_index_maps from vercel
View skill
---
title: Build Index Maps for Repeated Lookups
impact: LOW-MEDIUM
impactDescription: 1M ops to 2K ops
tags: javascript, map, indexing, optimization, performance
---

## Build Index Maps for Repeated Lookups

Multiple `.find()` calls by the same key should use a Map.

**Incorrect (O(n) per lookup):**

```typescript
function processOrders(orders: Order[], users: User[]) {
  return orders.map(order => ({
    ...order,
    user: users.find(u => u.id === order.userId)
  }))
}
```

**Correct (O(1) per lookup):**

```typescript
function processOrders(orders: Order[], users: User[]) {
  const userById = new Map(users.map(u => [u.id, u]))

  return orders.map(order => ({
    ...order,
    user: userById.get(order.userId)
  }))
}
```

Build map once (O(n)), then all lookups are O(1).
For 1000 orders × 1000 users: 1M ops → 2K ops.
js_length_check_first
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_length_check_first from vercel
View skill
---
title: Early Length Check for Array Comparisons
impact: MEDIUM-HIGH
impactDescription: avoids expensive operations when lengths differ
tags: javascript, arrays, performance, optimization, comparison
---

## Early Length Check for Array Comparisons

When comparing arrays with expensive operations (sorting, deep equality, serialization), check lengths first. If lengths differ, the arrays cannot be equal.

In real-world applications, this optimization is especially valuable when the comparison runs in hot paths (event handlers, render loops).

**Incorrect (always runs expensive comparison):**

```typescript
function hasChanges(current: string[], original: string[]) {
  // Always sorts and joins, even when lengths differ
  return current.sort().join() !== original.sort().join()
}
```

Two O(n log n) sorts run even when `current.length` is 5 and `original.length` is 100. There is also the overhead of joining the arrays and comparing the resulting strings.

**Correct (O(1) length check first):**

```typescript
function hasChanges(current: string[], original: string[]) {
  // Early return if lengths differ
  if (current.length !== original.length) {
    return true
  }
  // Only sort when lengths match
  const currentSorted = current.toSorted()
  const originalSorted = original.toSorted()
  for (let i = 0; i < currentSorted.length; i++) {
    if (currentSorted[i] !== originalSorted[i]) {
      return true
    }
  }
  return false
}
```

This approach is more efficient because:
- It avoids the overhead of sorting and joining the arrays when lengths differ
- It avoids consuming memory for the joined strings (especially important for large arrays)
- It avoids mutating the original arrays
- It returns early when a difference is found
js_min_max_loop
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_min_max_loop from vercel
View skill
---
title: Use Loop for Min/Max Instead of Sort
impact: LOW
impactDescription: O(n) instead of O(n log n)
tags: javascript, arrays, performance, sorting, algorithms
---

## Use Loop for Min/Max Instead of Sort

Finding the smallest or largest element only requires a single pass through the array. Sorting is wasteful and slower.

**Incorrect (O(n log n) - sort to find latest):**

```typescript
interface Project {
  id: string
  name: string
  updatedAt: number
}

function getLatestProject(projects: Project[]) {
  const sorted = [...projects].sort((a, b) => b.updatedAt - a.updatedAt)
  return sorted[0]
}
```

Sorts the entire array just to find the maximum value.

**Incorrect (O(n log n) - sort for oldest and newest):**

```typescript
function getOldestAndNewest(projects: Project[]) {
  const sorted = [...projects].sort((a, b) => a.updatedAt - b.updatedAt)
  return { oldest: sorted[0], newest: sorted[sorted.length - 1] }
}
```

Still sorts unnecessarily when only min/max are needed.

**Correct (O(n) - single loop):**

```typescript
function getLatestProject(projects: Project[]) {
  if (projects.length === 0) return null
  
  let latest = projects[0]
  
  for (let i = 1; i < projects.length; i++) {
    if (projects[i].updatedAt > latest.updatedAt) {
      latest = projects[i]
    }
  }
  
  return latest
}

function getOldestAndNewest(projects: Project[]) {
  if (projects.length === 0) return { oldest: null, newest: null }
  
  let oldest = projects[0]
  let newest = projects[0]
  
  for (let i = 1; i < projects.length; i++) {
    if (projects[i].updatedAt < oldest.updatedAt) oldest = projects[i]
    if (projects[i].updatedAt > newest.updatedAt) newest = projects[i]
  }
  
  return { oldest, newest }
}
```

Single pass through the array, no copying, no sorting.

**Alternative (Math.min/Math.max for small arrays):**

```typescript
const numbers = [5, 2, 8, 1, 9]
const min = Math.min(...numbers)
const max = Math.max(...numbers)
```

This works for small arrays, but can be slower, or simply throw a `RangeError`, for very large arrays because spreading passes every element as a separate function argument. The maximum array length is approximately 124,000 in Chrome 143 and 638,000 in Safari 18; exact numbers may vary - see [the fiddle](https://jsfiddle.net/qw1jabsx/4/). Use the loop approach for reliability.
js_set_map_lookups
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_set_map_lookups from vercel
View skill
---
title: Use Set/Map for O(1) Lookups
impact: LOW-MEDIUM
impactDescription: O(n) to O(1)
tags: javascript, set, map, data-structures, performance
---

## Use Set/Map for O(1) Lookups

Convert arrays to Set/Map for repeated membership checks.

**Incorrect (O(n) per check):**

```typescript
const allowedIds = ['a', 'b', 'c', ...]
items.filter(item => allowedIds.includes(item.id))
```

**Correct (O(1) per check):**

```typescript
const allowedIds = new Set(['a', 'b', 'c', ...])
items.filter(item => allowedIds.has(item.id))
```
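
When each key also carries a value, the same conversion applies with `Map`. A sketch with illustrative data:

```typescript
const labelById = new Map([
  ['a', 'Alpha'],
  ['b', 'Beta'],
  ['c', 'Gamma'],
])

// O(1) per item, instead of scanning an array of { id, label } pairs
items.map(item => ({ ...item, label: labelById.get(item.id) }))
```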
js_tosorted_immutable
vercel SKILL.md License: See repository Version: Unknown
Imported skill js_tosorted_immutable from vercel
View skill
---
title: Use toSorted() Instead of sort() for Immutability
impact: MEDIUM-HIGH
impactDescription: prevents mutation bugs in React state
tags: javascript, arrays, immutability, react, state, mutation
---

## Use toSorted() Instead of sort() for Immutability

`.sort()` mutates the array in place, which can cause bugs with React state and props. Use `.toSorted()` to create a new sorted array without mutation.

**Incorrect (mutates original array):**

```typescript
function UserList({ users }: { users: User[] }) {
  // Mutates the users prop array!
  const sorted = useMemo(
    () => users.sort((a, b) => a.name.localeCompare(b.name)),
    [users]
  )
  return <div>{sorted.map(renderUser)}</div>
}
```

**Correct (creates new array):**

```typescript
function UserList({ users }: { users: User[] }) {
  // Creates new sorted array, original unchanged
  const sorted = useMemo(
    () => users.toSorted((a, b) => a.name.localeCompare(b.name)),
    [users]
  )
  return <div>{sorted.map(renderUser)}</div>
}
```

**Why this matters in React:**

1. Props/state mutations break React's immutability model - React expects props and state to be treated as read-only
2. Causes stale closure bugs - Mutating arrays inside closures (callbacks, effects) can lead to unexpected behavior

**Browser support (fallback for older browsers):**

`.toSorted()` is available in all modern browsers (Chrome 110+, Safari 16+, Firefox 115+, Node.js 20+). For older environments, use spread operator:

```typescript
// Fallback for older browsers
const sorted = [...items].sort((a, b) => a.value - b.value)
```

**Other immutable array methods:**

- `.toSorted()` - immutable sort
- `.toReversed()` - immutable reverse
- `.toSpliced()` - immutable splice
- `.with()` - immutable element replacement
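
A quick sketch of the last three on a plain array (values are illustrative):

```typescript
const values = [3, 1, 2]

values.toReversed()    // [2, 1, 3] - original untouched
values.toSpliced(1, 1) // [3, 2]    - one element removed at index 1
values.with(0, 9)      // [9, 1, 2] - index 0 replaced

console.log(values)    // [3, 1, 2] - still unchanged
```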
readme
vercel SKILL.md License: See repository Version: Unknown
Imported skill readme from vercel
View skill
# React Best Practices

A structured repository for creating and maintaining React Best Practices optimized for agents and LLMs.

## Structure

- `rules/` - Individual rule files (one per rule)
  - `_sections.md` - Section metadata (titles, impacts, descriptions)
  - `_template.md` - Template for creating new rules
  - `area-description.md` - Individual rule files
- `src/` - Build scripts and utilities
- `metadata.json` - Document metadata (version, organization, abstract)
- `AGENTS.md` - Compiled output (generated)
- `test-cases.json` - Test cases for LLM evaluation (generated)

## Getting Started

1. Install dependencies:
   ```bash
   pnpm install
   ```

2. Build AGENTS.md from rules:
   ```bash
   pnpm build
   ```

3. Validate rule files:
   ```bash
   pnpm validate
   ```

4. Extract test cases:
   ```bash
   pnpm extract-tests
   ```

## Creating a New Rule

1. Copy `rules/_template.md` to `rules/area-description.md`
2. Choose the appropriate area prefix:
   - `async-` for Eliminating Waterfalls (Section 1)
   - `bundle-` for Bundle Size Optimization (Section 2)
   - `server-` for Server-Side Performance (Section 3)
   - `client-` for Client-Side Data Fetching (Section 4)
   - `rerender-` for Re-render Optimization (Section 5)
   - `rendering-` for Rendering Performance (Section 6)
   - `js-` for JavaScript Performance (Section 7)
   - `advanced-` for Advanced Patterns (Section 8)
3. Fill in the frontmatter and content
4. Ensure you have clear examples with explanations
5. Run `pnpm build` to regenerate AGENTS.md and test-cases.json

## Rule File Structure

Each rule file should follow this structure:

````markdown
---
title: Rule Title Here
impact: MEDIUM
impactDescription: Optional description
tags: tag1, tag2, tag3
---

## Rule Title Here

Brief explanation of the rule and why it matters.

**Incorrect (description of what's wrong):**

```typescript
// Bad code example
```

**Correct (description of what's right):**

```typescript
// Good code example
```

Optional explanatory text after examples.

Reference: [Link](https://example.com)
````

## File Naming Convention

- Files starting with `_` are special (excluded from build)
- Rule files: `area-description.md` (e.g., `async-parallel.md`)
- Section is automatically inferred from filename prefix
- Rules are sorted alphabetically by title within each section
- IDs (e.g., 1.1, 1.2) are auto-generated during build

## Impact Levels

- `CRITICAL` - Highest priority, major performance gains
- `HIGH` - Significant performance improvements
- `MEDIUM-HIGH` - Moderate-high gains
- `MEDIUM` - Moderate performance improvements
- `LOW-MEDIUM` - Low-medium gains
- `LOW` - Incremental improvements

## Scripts

- `pnpm build` - Compile rules into AGENTS.md
- `pnpm validate` - Validate all rule files
- `pnpm extract-tests` - Extract test cases for LLM evaluation
- `pnpm dev` - Build and validate

## Contributing

When adding or modifying rules:

1. Use the correct filename prefix for your section
2. Follow the `_template.md` structure
3. Include clear bad/good examples with explanations
4. Add appropriate tags
5. Run `pnpm build` to regenerate AGENTS.md and test-cases.json
6. Rules are automatically sorted by title - no need to manage numbers!

## Acknowledgments

Originally created by [@shuding](https://x.com/shuding) at [Vercel](https://vercel.com).

## Bundled Sources

### metadata.json

Source: `/a0/tmp/skills_research/vercel/skills/react-best-practices/metadata.json`

```json
{
  "version": "1.0.0",
  "organization": "Vercel Engineering",
  "date": "January 2026",
  "abstract": "Comprehensive performance optimization guide for React and Next.js applications, designed for AI agents and LLMs. Contains 40+ rules across 8 categories, prioritized by impact from critical (eliminating waterfalls, reducing bundle size) to incremental (advanced patterns). Each rule includes detailed explanations, real-world examples comparing incorrect vs. correct implementations, and specific impact metrics to guide automated refactoring and code generation.",
  "references": [
    "https://react.dev",
    "https://nextjs.org",
    "https://swr.vercel.app",
    "https://github.com/shuding/better-all",
    "https://github.com/isaacs/node-lru-cache",
    "https://vercel.com/blog/how-we-optimized-package-imports-in-next-js",
    "https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast"
  ]
}
```
rendering_activity
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_activity from vercel
View skill
---
title: Use Activity Component for Show/Hide
impact: MEDIUM
impactDescription: preserves state/DOM
tags: rendering, activity, visibility, state-preservation
---

## Use Activity Component for Show/Hide

Use React's `<Activity>` to preserve state/DOM for expensive components that frequently toggle visibility.

**Usage:**

```tsx
import { Activity } from 'react'

function Dropdown({ isOpen }: Props) {
  return (
    <Activity mode={isOpen ? 'visible' : 'hidden'}>
      <ExpensiveMenu />
    </Activity>
  )
}
```

Avoids expensive re-renders and state loss.
rendering_animate_svg_wrapper
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_animate_svg_wrapper from vercel
View skill
---
title: Animate SVG Wrapper Instead of SVG Element
impact: LOW
impactDescription: enables hardware acceleration
tags: rendering, svg, css, animation, performance
---

## Animate SVG Wrapper Instead of SVG Element

Many browsers don't hardware-accelerate CSS animations on SVG elements. Wrap the SVG in a `<div>` and animate the wrapper instead.

**Incorrect (animating SVG directly - no hardware acceleration):**

```tsx
function LoadingSpinner() {
  return (
    <svg 
      className="animate-spin"
      width="24" 
      height="24" 
      viewBox="0 0 24 24"
    >
      <circle cx="12" cy="12" r="10" stroke="currentColor" />
    </svg>
  )
}
```

**Correct (animating wrapper div - hardware accelerated):**

```tsx
function LoadingSpinner() {
  return (
    <div className="animate-spin">
      <svg 
        width="24" 
        height="24" 
        viewBox="0 0 24 24"
      >
        <circle cx="12" cy="12" r="10" stroke="currentColor" />
      </svg>
    </div>
  )
}
```

This applies to animatable properties such as `transform` (including `translate`, `scale`, and `rotate`) and `opacity`. The wrapper div allows browsers to use GPU acceleration for smoother animations.
rendering_conditional_render
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_conditional_render from vercel
View skill
---
title: Use Explicit Conditional Rendering
impact: LOW
impactDescription: prevents rendering 0 or NaN
tags: rendering, conditional, jsx, falsy-values
---

## Use Explicit Conditional Rendering

Use explicit ternary operators (`? :`) instead of `&&` for conditional rendering when the condition can be `0`, `NaN`, or other falsy values that render.

**Incorrect (renders "0" when count is 0):**

```tsx
function Badge({ count }: { count: number }) {
  return (
    <div>
      {count && <span className="badge">{count}</span>}
    </div>
  )
}

// When count = 0, renders: <div>0</div>
// When count = 5, renders: <div><span class="badge">5</span></div>
```

**Correct (renders nothing when count is 0):**

```tsx
function Badge({ count }: { count: number }) {
  return (
    <div>
      {count > 0 ? <span className="badge">{count}</span> : null}
    </div>
  )
}

// When count = 0, renders: <div></div>
// When count = 5, renders: <div><span class="badge">5</span></div>
```
rendering_content_visibility
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_content_visibility from vercel
View skill
---
title: CSS content-visibility for Long Lists
impact: HIGH
impactDescription: faster initial render
tags: rendering, css, content-visibility, long-lists
---

## CSS content-visibility for Long Lists

Apply `content-visibility: auto` to defer off-screen rendering.

**CSS:**

```css
.message-item {
  content-visibility: auto;
  contain-intrinsic-size: 0 80px;
}
```

**Example:**

```tsx
function MessageList({ messages }: { messages: Message[] }) {
  return (
    <div className="overflow-y-auto h-screen">
      {messages.map(msg => (
        <div key={msg.id} className="message-item">
          <Avatar user={msg.author} />
          <div>{msg.content}</div>
        </div>
      ))}
    </div>
  )
}
```

For 1000 messages, browser skips layout/paint for ~990 off-screen items (10× faster initial render).
rendering_hoist_jsx
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_hoist_jsx from vercel
View skill
---
title: Hoist Static JSX Elements
impact: LOW
impactDescription: avoids re-creation
tags: rendering, jsx, static, optimization
---

## Hoist Static JSX Elements

Extract static JSX outside components to avoid re-creation.

**Incorrect (recreates element every render):**

```tsx
function LoadingSkeleton() {
  return <div className="animate-pulse h-20 bg-gray-200" />
}

function Container({ loading }: { loading: boolean }) {
  return (
    <div>
      {loading && <LoadingSkeleton />}
    </div>
  )
}
```

**Correct (reuses same element):**

```tsx
const loadingSkeleton = (
  <div className="animate-pulse h-20 bg-gray-200" />
)

function Container({ loading }: { loading: boolean }) {
  return (
    <div>
      {loading && loadingSkeleton}
    </div>
  )
}
```

This is especially helpful for large and static SVG nodes, which can be expensive to recreate on every render.

**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler automatically hoists static JSX elements and optimizes component re-renders, making manual hoisting unnecessary.
rendering_hydration_no_flicker
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_hydration_no_flicker from vercel
View skill
---
title: Prevent Hydration Mismatch Without Flickering
impact: MEDIUM
impactDescription: avoids visual flicker and hydration errors
tags: rendering, ssr, hydration, localStorage, flicker
---

## Prevent Hydration Mismatch Without Flickering

When rendering content that depends on client-side storage (localStorage, cookies), avoid both SSR breakage and post-hydration flickering by injecting a synchronous script that updates the DOM before React hydrates.

**Incorrect (breaks SSR):**

```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
  // localStorage is not available on server - throws error
  const theme = localStorage.getItem('theme') || 'light'
  
  return (
    <div className={theme}>
      {children}
    </div>
  )
}
```

Server-side rendering will fail because `localStorage` is undefined.

**Incorrect (visual flickering):**

```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
  const [theme, setTheme] = useState('light')
  
  useEffect(() => {
    // Runs after hydration - causes visible flash
    const stored = localStorage.getItem('theme')
    if (stored) {
      setTheme(stored)
    }
  }, [])
  
  return (
    <div className={theme}>
      {children}
    </div>
  )
}
```

Component first renders with default value (`light`), then updates after hydration, causing a visible flash of incorrect content.

**Correct (no flicker, no hydration mismatch):**

```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
  return (
    <>
      <div id="theme-wrapper">
        {children}
      </div>
      <script
        dangerouslySetInnerHTML={{
          __html: `
            (function() {
              try {
                var theme = localStorage.getItem('theme') || 'light';
                var el = document.getElementById('theme-wrapper');
                if (el) el.className = theme;
              } catch (e) {}
            })();
          `,
        }}
      />
    </>
  )
}
```

The inline script executes synchronously during HTML parsing, before the browser paints the element, so the DOM already has the correct class when it first becomes visible. No flickering, no hydration mismatch.

This pattern is especially useful for theme toggles, user preferences, authentication states, and any client-only data that should render immediately without flashing default values.
rendering_svg_precision
vercel SKILL.md License: See repository Version: Unknown
Imported skill rendering_svg_precision from vercel
View skill
---
title: Optimize SVG Precision
impact: LOW
impactDescription: reduces file size
tags: rendering, svg, optimization, svgo
---

## Optimize SVG Precision

Reduce SVG coordinate precision to decrease file size. The optimal precision depends on the viewBox size, but in general reducing precision should be considered.

**Incorrect (excessive precision):**

```svg
<path d="M 10.293847 20.847362 L 30.938472 40.192837" />
```

**Correct (1 decimal place):**

```svg
<path d="M 10.3 20.8 L 30.9 40.2" />
```

**Automate with SVGO:**

```bash
npx svgo --precision=1 --multipass icon.svg
```
rerender_defer_reads
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_defer_reads from vercel
View skill
---
title: Defer State Reads to Usage Point
impact: MEDIUM
impactDescription: avoids unnecessary subscriptions
tags: rerender, searchParams, localStorage, optimization
---

## Defer State Reads to Usage Point

Don't subscribe to dynamic state (searchParams, localStorage) if you only read it inside callbacks.

**Incorrect (subscribes to all searchParams changes):**

```tsx
function ShareButton({ chatId }: { chatId: string }) {
  const searchParams = useSearchParams()

  const handleShare = () => {
    const ref = searchParams.get('ref')
    shareChat(chatId, { ref })
  }

  return <button onClick={handleShare}>Share</button>
}
```

**Correct (reads on demand, no subscription):**

```tsx
function ShareButton({ chatId }: { chatId: string }) {
  const handleShare = () => {
    const params = new URLSearchParams(window.location.search)
    const ref = params.get('ref')
    shareChat(chatId, { ref })
  }

  return <button onClick={handleShare}>Share</button>
}
```
rerender_dependencies
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_dependencies from vercel
View skill
---
title: Narrow Effect Dependencies
impact: LOW
impactDescription: minimizes effect re-runs
tags: rerender, useEffect, dependencies, optimization
---

## Narrow Effect Dependencies

Specify primitive dependencies instead of objects to minimize effect re-runs.

**Incorrect (re-runs on any user field change):**

```tsx
useEffect(() => {
  console.log(user.id)
}, [user])
```

**Correct (re-runs only when id changes):**

```tsx
useEffect(() => {
  console.log(user.id)
}, [user.id])
```

**For derived state, compute outside effect:**

```tsx
// Incorrect: runs on width=767, 766, 765...
useEffect(() => {
  if (width < 768) {
    enableMobileMode()
  }
}, [width])

// Correct: runs only on boolean transition
const isMobile = width < 768
useEffect(() => {
  if (isMobile) {
    enableMobileMode()
  }
}, [isMobile])
```
rerender_derived_state
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_derived_state from vercel
View skill
---
title: Subscribe to Derived State
impact: MEDIUM
impactDescription: reduces re-render frequency
tags: rerender, derived-state, media-query, optimization
---

## Subscribe to Derived State

Subscribe to derived boolean state instead of continuous values to reduce re-render frequency.

**Incorrect (re-renders on every pixel change):**

```tsx
function Sidebar() {
  const width = useWindowWidth()  // updates continuously
  const isMobile = width < 768
  return <nav className={isMobile ? 'mobile' : 'desktop'} />
}
```

**Correct (re-renders only when boolean changes):**

```tsx
function Sidebar() {
  const isMobile = useMediaQuery('(max-width: 767px)')
  return <nav className={isMobile ? 'mobile' : 'desktop'} />
}
```
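
The `useMediaQuery` hook is assumed above rather than defined by the rule. A minimal client-oriented sketch with React 18's `useSyncExternalStore` might look like this (the `false` server snapshot is an assumption; pick whatever matches your SSR markup):

```tsx
import { useCallback, useSyncExternalStore } from 'react'

function useMediaQuery(query: string): boolean {
  // Stable subscribe function: useSyncExternalStore resubscribes
  // only when the query string changes, not on every render.
  const subscribe = useCallback(
    (onStoreChange: () => void) => {
      const mql = window.matchMedia(query)
      mql.addEventListener('change', onStoreChange)
      return () => mql.removeEventListener('change', onStoreChange)
    },
    [query]
  )
  return useSyncExternalStore(
    subscribe,
    () => window.matchMedia(query).matches, // client snapshot (boolean)
    () => false // server snapshot: assumed default for SSR
  )
}
```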
rerender_functional_setstate
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_functional_setstate from vercel
View skill
---
title: Use Functional setState Updates
impact: MEDIUM
impactDescription: prevents stale closures and unnecessary callback recreations
tags: react, hooks, useState, useCallback, callbacks, closures
---

## Use Functional setState Updates

When updating state based on the current state value, use the functional update form of setState instead of directly referencing the state variable. This prevents stale closures, eliminates unnecessary dependencies, and creates stable callback references.

**Incorrect (requires state as dependency):**

```tsx
function TodoList() {
  const [items, setItems] = useState(initialItems)
  
  // Callback must depend on items, recreated on every items change
  const addItems = useCallback((newItems: Item[]) => {
    setItems([...items, ...newItems])
  }, [items])  // ❌ items dependency causes recreations
  
  // Risk of stale closure if dependency is forgotten
  const removeItem = useCallback((id: string) => {
    setItems(items.filter(item => item.id !== id))
  }, [])  // ❌ Missing items dependency - will use stale items!
  
  return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />
}
```

The first callback is recreated every time `items` changes, which can cause child components to re-render unnecessarily. The second callback has a stale closure bug—it will always reference the initial `items` value.

**Correct (stable callbacks, no stale closures):**

```tsx
function TodoList() {
  const [items, setItems] = useState(initialItems)
  
  // Stable callback, never recreated
  const addItems = useCallback((newItems: Item[]) => {
    setItems(curr => [...curr, ...newItems])
  }, [])  // ✅ No dependencies needed
  
  // Always uses latest state, no stale closure risk
  const removeItem = useCallback((id: string) => {
    setItems(curr => curr.filter(item => item.id !== id))
  }, [])  // ✅ Safe and stable
  
  return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />
}
```

**Benefits:**

1. **Stable callback references** - Callbacks don't need to be recreated when state changes
2. **No stale closures** - Always operates on the latest state value
3. **Fewer dependencies** - Simplifies dependency arrays and reduces memory leaks
4. **Prevents bugs** - Eliminates the most common source of React closure bugs

**When to use functional updates:**

- Any setState that depends on the current state value
- Inside useCallback/useMemo when state is needed
- Event handlers that reference state
- Async operations that update state

**When direct updates are fine:**

- Setting state to a static value: `setCount(0)`
- Setting state from props/arguments only: `setName(newName)`
- State doesn't depend on previous value

**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler can automatically optimize some cases, but functional updates are still recommended for correctness and to prevent stale closure bugs.
rerender_lazy_state_init
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_lazy_state_init from vercel
View skill
---
title: Use Lazy State Initialization
impact: MEDIUM
impactDescription: wasted computation on every render
tags: react, hooks, useState, performance, initialization
---

## Use Lazy State Initialization

Pass a function to `useState` for expensive initial values. Without the function form, the initializer runs on every render even though the value is only used once.

**Incorrect (runs on every render):**

```tsx
function FilteredList({ items }: { items: Item[] }) {
  // buildSearchIndex() runs on EVERY render, even after initialization
  const [searchIndex, setSearchIndex] = useState(buildSearchIndex(items))
  const [query, setQuery] = useState('')
  
  // When query changes, buildSearchIndex runs again unnecessarily
  return <SearchResults index={searchIndex} query={query} />
}

function UserProfile() {
  // JSON.parse runs on every render
  const [settings, setSettings] = useState(
    JSON.parse(localStorage.getItem('settings') || '{}')
  )
  
  return <SettingsForm settings={settings} onChange={setSettings} />
}
```

**Correct (runs only once):**

```tsx
function FilteredList({ items }: { items: Item[] }) {
  // buildSearchIndex() runs ONLY on initial render
  const [searchIndex, setSearchIndex] = useState(() => buildSearchIndex(items))
  const [query, setQuery] = useState('')
  
  return <SearchResults index={searchIndex} query={query} />
}

function UserProfile() {
  // JSON.parse runs only on initial render
  const [settings, setSettings] = useState(() => {
    const stored = localStorage.getItem('settings')
    return stored ? JSON.parse(stored) : {}
  })
  
  return <SettingsForm settings={settings} onChange={setSettings} />
}
```

Use lazy initialization when computing initial values from localStorage/sessionStorage, building data structures (indexes, maps), reading from the DOM, or performing heavy transformations.

For simple primitives (`useState(0)`), direct references (`useState(props.value)`), or cheap literals (`useState({})`), the function form is unnecessary.
rerender_memo
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_memo from vercel
View skill
---
title: Extract to Memoized Components
impact: MEDIUM
impactDescription: enables early returns
tags: rerender, memo, useMemo, optimization
---

## Extract to Memoized Components

Extract expensive work into memoized components to enable early returns before computation.

**Incorrect (computes avatar even when loading):**

```tsx
function Profile({ user, loading }: Props) {
  const avatar = useMemo(() => {
    const id = computeAvatarId(user)
    return <Avatar id={id} />
  }, [user])

  if (loading) return <Skeleton />
  return <div>{avatar}</div>
}
```

**Correct (skips computation when loading):**

```tsx
const UserAvatar = memo(function UserAvatar({ user }: { user: User }) {
  const id = useMemo(() => computeAvatarId(user), [user])
  return <Avatar id={id} />
})

function Profile({ user, loading }: Props) {
  if (loading) return <Skeleton />
  return (
    <div>
      <UserAvatar user={user} />
    </div>
  )
}
```

**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, manual memoization with `memo()` and `useMemo()` is not necessary. The compiler automatically optimizes re-renders.
rerender_transitions
vercel SKILL.md License: See repository Version: Unknown
Imported skill rerender_transitions from vercel
View skill
---
title: Use Transitions for Non-Urgent Updates
impact: MEDIUM
impactDescription: maintains UI responsiveness
tags: rerender, transitions, startTransition, performance
---

## Use Transitions for Non-Urgent Updates

Mark frequent, non-urgent state updates as transitions to maintain UI responsiveness.

**Incorrect (blocks UI on every scroll):**

```tsx
function ScrollTracker() {
  const [scrollY, setScrollY] = useState(0)
  useEffect(() => {
    const handler = () => setScrollY(window.scrollY)
    window.addEventListener('scroll', handler, { passive: true })
    return () => window.removeEventListener('scroll', handler)
  }, [])
}
```

**Correct (non-blocking updates):**

```tsx
import { startTransition } from 'react'

function ScrollTracker() {
  const [scrollY, setScrollY] = useState(0)
  useEffect(() => {
    const handler = () => {
      startTransition(() => setScrollY(window.scrollY))
    }
    window.addEventListener('scroll', handler, { passive: true })
    return () => window.removeEventListener('scroll', handler)
  }, [])
}
```
server_after_nonblocking
vercel SKILL.md License: See repository Version: Unknown
Imported skill server_after_nonblocking from vercel
View skill
---
title: Use after() for Non-Blocking Operations
impact: MEDIUM
impactDescription: faster response times
tags: server, async, logging, analytics, side-effects
---

## Use after() for Non-Blocking Operations

Use Next.js's `after()` to schedule work that should execute after a response is sent. This prevents logging, analytics, and other side effects from blocking the response.

**Incorrect (blocks response):**

```tsx
import { logUserAction } from '@/app/utils'

export async function POST(request: Request) {
  // Perform mutation
  await updateDatabase(request)
  
  // Logging blocks the response
  const userAgent = request.headers.get('user-agent') || 'unknown'
  await logUserAction({ userAgent })
  
  return new Response(JSON.stringify({ status: 'success' }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' }
  })
}
```

**Correct (non-blocking):**

```tsx
import { after } from 'next/server'
import { headers, cookies } from 'next/headers'
import { logUserAction } from '@/app/utils'

export async function POST(request: Request) {
  // Perform mutation
  await updateDatabase(request)
  
  // Log after response is sent
  after(async () => {
    const userAgent = (await headers()).get('user-agent') || 'unknown'
    const sessionCookie = (await cookies()).get('session-id')?.value || 'anonymous'
    
    logUserAction({ sessionCookie, userAgent })
  })
  
  return new Response(JSON.stringify({ status: 'success' }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' }
  })
}
```

The response is sent immediately while logging happens in the background.

**Common use cases:**

- Analytics tracking
- Audit logging
- Sending notifications
- Cache invalidation
- Cleanup tasks

**Important notes:**

- `after()` runs even if the response fails or redirects
- Works in Server Actions, Route Handlers, and Server Components

Reference: [https://nextjs.org/docs/app/api-reference/functions/after](https://nextjs.org/docs/app/api-reference/functions/after)
server_cache_lru
vercel SKILL.md License: See repository Version: Unknown
Imported skill server_cache_lru from vercel
View skill
---
title: Cross-Request LRU Caching
impact: HIGH
impactDescription: caches across requests
tags: server, cache, lru, cross-request
---

## Cross-Request LRU Caching

`React.cache()` only works within one request. For data shared across sequential requests (user clicks button A then button B), use an LRU cache.

**Implementation:**

```typescript
import { LRUCache } from 'lru-cache'

const cache = new LRUCache<string, any>({
  max: 1000,
  ttl: 5 * 60 * 1000  // 5 minutes
})

export async function getUser(id: string) {
  const cached = cache.get(id)
  if (cached) return cached

  const user = await db.user.findUnique({ where: { id } })
  cache.set(id, user)
  return user
}

// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```

Use when sequential user actions hit multiple endpoints needing the same data within seconds.

**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.

**In traditional serverless:** Each invocation runs in isolation, so consider Redis for cross-process caching.
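
A sketch of that Redis variant, assuming the `ioredis` client and the same `db` handle as above (key names and TTL are illustrative):

```typescript
import Redis from 'ioredis'

const redis = new Redis(process.env.REDIS_URL!) // assumed to be set

export async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`)
  if (cached) return JSON.parse(cached)

  const user = await db.user.findUnique({ where: { id } })
  // 'EX', 300 expires the entry after 5 minutes, mirroring the LRU ttl
  await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 300)
  return user
}
```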

Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)
server_cache_react
vercel SKILL.md License: See repository Version: Unknown
Imported skill server_cache_react from vercel
View skill
---
title: Per-Request Deduplication with React.cache()
impact: MEDIUM
impactDescription: deduplicates within request
tags: server, cache, react-cache, deduplication
---

## Per-Request Deduplication with React.cache()

Use `React.cache()` for server-side request deduplication. Authentication and database queries benefit most.

**Usage:**

```typescript
import { cache } from 'react'

export const getCurrentUser = cache(async () => {
  const session = await auth()
  if (!session?.user?.id) return null
  return await db.user.findUnique({
    where: { id: session.user.id }
  })
})
```

Within a single request, multiple calls to `getCurrentUser()` execute the query only once.

**Avoid inline objects as arguments:**

`React.cache()` compares arguments with `Object.is`, which for objects means reference equality. Inline objects create a new reference on every call, so the cache never hits.

**Incorrect (always cache miss):**

```typescript
const getUser = cache(async (params: { uid: number }) => {
  return await db.user.findUnique({ where: { id: params.uid } })
})

// Each call creates new object, never hits cache
getUser({ uid: 1 })
getUser({ uid: 1 })  // Cache miss, runs query again
```

**Correct (cache hit):**

```typescript
const getUser = cache(async (uid: number) => {
  return await db.user.findUnique({ where: { id: uid } })
})

// Primitive args use value equality
getUser(1)
getUser(1)  // Cache hit, returns cached result
```

If you must pass objects, pass the same reference:

```typescript
const params = { uid: 1 }
getUser(params)  // Query runs
getUser(params)  // Cache hit (same reference)
```

**Next.js-Specific Note:**

In Next.js, the `fetch` API is automatically extended with request memoization. Requests with the same URL and options are automatically deduplicated within a single request, so you don't need `React.cache()` for `fetch` calls. However, `React.cache()` is still essential for other async tasks:

- Database queries (Prisma, Drizzle, etc.)
- Heavy computations
- Authentication checks
- File system operations
- Any non-fetch async work

Use `React.cache()` to deduplicate these operations across your component tree.

Reference: [React.cache documentation](https://react.dev/reference/react/cache)
server_parallel_fetching
vercel SKILL.md License: See repository Version: Unknown
Imported skill server_parallel_fetching from vercel
View skill
---
title: Parallel Data Fetching with Component Composition
impact: CRITICAL
impactDescription: eliminates server-side waterfalls
tags: server, rsc, parallel-fetching, composition
---

## Parallel Data Fetching with Component Composition

React Server Components execute sequentially within a tree. Restructure with composition to parallelize data fetching.

**Incorrect (Sidebar waits for Page's fetch to complete):**

```tsx
export default async function Page() {
  const header = await fetchHeader()
  return (
    <div>
      <div>{header}</div>
      <Sidebar />
    </div>
  )
}

async function Sidebar() {
  const items = await fetchSidebarItems()
  return <nav>{items.map(renderItem)}</nav>
}
```

**Correct (both fetch simultaneously):**

```tsx
async function Header() {
  const data = await fetchHeader()
  return <div>{data}</div>
}

async function Sidebar() {
  const items = await fetchSidebarItems()
  return <nav>{items.map(renderItem)}</nav>
}

export default function Page() {
  return (
    <div>
      <Header />
      <Sidebar />
    </div>
  )
}
```

**Alternative with children prop:**

```tsx
async function Header() {
  const data = await fetchHeader()
  return <div>{data}</div>
}

async function Sidebar() {
  const items = await fetchSidebarItems()
  return <nav>{items.map(renderItem)}</nav>
}

function Layout({ children }: { children: ReactNode }) {
  return (
    <div>
      <Header />
      {children}
    </div>
  )
}

export default function Page() {
  return (
    <Layout>
      <Sidebar />
    </Layout>
  )
}
```
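
When a single component needs both results, composition alone can't parallelize them; starting the fetches together and awaiting them jointly also avoids the waterfall. A sketch reusing the helpers above:

```tsx
export default async function Page() {
  // Both requests start immediately; await resolves them together.
  const [header, items] = await Promise.all([
    fetchHeader(),
    fetchSidebarItems(),
  ])
  return (
    <div>
      <div>{header}</div>
      <nav>{items.map(renderItem)}</nav>
    </div>
  )
}
```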
server_serialization
vercel SKILL.md License: See repository Version: Unknown
Imported skill server_serialization from vercel
View skill
---
title: Minimize Serialization at RSC Boundaries
impact: HIGH
impactDescription: reduces data transfer size
tags: server, rsc, serialization, props
---

## Minimize Serialization at RSC Boundaries

The React Server/Client boundary serializes all object properties into strings and embeds them in the HTML response and subsequent RSC requests. This serialized data directly impacts page weight and load time, so **size matters a lot**. Only pass fields that the client actually uses.

**Incorrect (serializes all 50 fields):**

```tsx
async function Page() {
  const user = await fetchUser()  // 50 fields
  return <Profile user={user} />
}

'use client'
function Profile({ user }: { user: User }) {
  return <div>{user.name}</div>  // uses 1 field
}
```

**Correct (serializes only 1 field):**

```tsx
async function Page() {
  const user = await fetchUser()
  return <Profile name={user.name} />
}

'use client'
function Profile({ name }: { name: string }) {
  return <div>{name}</div>
}
```
skill
vercel SKILL.md License: See repository Version: Unknown
Imported skill skill from vercel
View skill
---
name: web-design-guidelines
description: Review UI code for Web Interface Guidelines compliance. Use when asked to "review my UI", "check accessibility", "audit design", "review UX", or "check my site against best practices".
metadata:
  author: vercel
  version: "1.0.0"
  argument-hint: <file-or-pattern>
---

# Web Interface Guidelines

Review files for compliance with Web Interface Guidelines.

## How It Works

1. Fetch the latest guidelines from the source URL below
2. Read the specified files (or prompt user for files/pattern)
3. Check against all rules in the fetched guidelines
4. Output findings in the terse `file:line` format

## Guidelines Source

Fetch fresh guidelines before each review:

```
https://raw.githubusercontent.com/vercel-labs/web-interface-guidelines/main/command.md
```

Use WebFetch to retrieve the latest rules. The fetched content contains all the rules and output format instructions.

## Usage

When a user provides a file or pattern argument:
1. Fetch guidelines from the source URL above
2. Read the specified files
3. Apply all rules from the fetched guidelines
4. Output findings using the format specified in the guidelines

If no files specified, ask the user which files to review.