Crisis Mode: Fixing the Pro Skill Leak + A Major Antigravity Extension Audit
Had one of those "oh shit" moments today when I realized 30 full Pro skills were sitting in the public memstack repo. Not stubs—actual complete skills that should be behind the paywall.
The Great Pro Skill Leak of 2026
First priority was damage control. I quickly replaced all 30 leaked Pro skills with upsell stubs, but in my panic I over-corrected and replaced ALL 60 categorized skills. Had to immediately follow up with another commit restoring the 30 that should actually be free.
Final tally: 47 real skills in the free tier (17 standalone + 30 categorized), 30 Pro stubs = 77 total. Crisis averted, but this can't happen again.
Building a Skill Guard
To prevent future leaks, I built a CI guard system:
- FREE-SKILLS.txt manifest listing all 47 allowed full-content skills
- Verification script that flags any SKILL.md over 20 lines that isn't in the manifest
- GitHub Action that runs on every push/PR touching the skills directory
Tested it by simulating a leak—it caught it immediately. Sometimes the best debugging happens when you're genuinely scared of shipping broken code.
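The guard logic itself is simple. A minimal sketch of what such a verification script could look like, assuming a skills/ layout with one SKILL.md per skill directory (the 20-line threshold comes from the stub format; everything else here is illustrative, not the actual script):

```python
import sys
from pathlib import Path

FREE_LINE_LIMIT = 20  # upsell stubs stay short; anything longer is full content

def find_leaks(skills_dir: Path, manifest: Path) -> list[str]:
    """Return skill files that look like full content but aren't in the manifest."""
    allowed = set(manifest.read_text().split())
    leaks = []
    for skill_md in skills_dir.rglob("SKILL.md"):
        lines = len(skill_md.read_text().splitlines())
        if lines > FREE_LINE_LIMIT and skill_md.parent.name not in allowed:
            leaks.append(str(skill_md))
    return leaks

if __name__ == "__main__":
    leaks = find_leaks(Path("skills"), Path("FREE-SKILLS.txt"))
    if leaks:
        print("Possible Pro-skill leak:", *leaks, sep="\n  ")
        sys.exit(1)  # non-zero exit fails the GitHub Action
```

Exiting non-zero is what lets the GitHub Action turn a detected leak into a failed check on the push or PR.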
MemStack Marketing Refresh
While I was in crisis mode anyway, figured I'd refresh the README with better positioning:
- Expanded the badge stack to show off our stats (77 skills, 35+ projects)
- Added a tagline: "Not a prompt collection..."
- Created a "How It Works" section explaining the Progressive Disclosure pattern
- Added a one-liner install:
git clone ... .claude/skills
The old detailed instructions are now under "Advanced" because most people just want the simplest path.
New Pro Skill: Consolidate
Built a 154-line Pro skill called "consolidate" that reads 7 days of development diaries and extracts patterns, decisions, and connections across projects. It's the kind of meta-analysis that makes the Pro tier worth paying for.
Also created proper install scripts for both Windows (NTFS junctions) and macOS/Linux (symlinks) so Pro users get a smooth onboarding experience.
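The platform split matters because symlinks on Windows typically need elevated rights while NTFS junctions don't. A hedged sketch of that branching (the real install scripts are shell/batch; this just shows the decision):

```python
import os
import subprocess
import sys
from pathlib import Path

def link_skills(src: Path, dest: Path) -> None:
    """Link the skills directory into place: NTFS junction on Windows, symlink elsewhere."""
    if sys.platform == "win32":
        # mklink /J creates a junction, which works without admin rights
        subprocess.run(["cmd", "/c", "mklink", "/J", str(dest), str(src)], check=True)
    else:
        os.symlink(src, dest, target_is_directory=True)
```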
External Audit: Auto-All-Antigravity
Spent the evening auditing an external browser automation extension (4,620 lines of code). Found 6 critical issues:
- Selector brittleness - hardcoded IDs that break when VS Code updates
- Default-accept danger - conversation detection defaulting to "yes" instead of "no"
- No auto-reconnect - WebSocket failures requiring manual restart
- Regex compilation per-click - performance killer
- 30-char text limit - truncating button text too aggressively
- Port hardcoding - not reading the actual debug port
Fixed all six with surgical precision (114 insertions, 37 deletions). Added ARIA selector fallbacks, exponential backoff reconnection, regex caching, and safer defaults throughout.
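Two of those fixes are generic patterns worth noting: compile-once regex caching and a capped exponential backoff schedule for reconnects. Sketched in Python for brevity (the extension itself is JavaScript, and the names and parameters here are illustrative):

```python
import re
from functools import lru_cache

@lru_cache(maxsize=None)
def button_pattern(label: str) -> re.Pattern:
    """Compile once per label instead of on every click."""
    return re.compile(rf"\b{re.escape(label)}\b", re.IGNORECASE)

def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 8) -> list[float]:
    """Seconds to wait before each reconnect attempt: base * 2^n, capped."""
    return [min(base * 2 ** n, cap) for n in range(attempts)]
```

The cap keeps a long outage from producing multi-minute waits, and caching the compiled pattern removes the per-click compile cost entirely.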
The fixes are committed but not pushed since it's not my repo—leaving that decision to the maintainer.
Lessons Learned
- Default to safe: New skills should be blocked by the manifest until explicitly approved
- ARIA over IDs: Web scraping should prefer semantic selectors over brittle hardcoded ones
- Default-reject over default-accept: When in doubt, fail safely
- Version consistency: Keep all package files in sync to prevent drift
Sometimes the best development sessions start with a crisis. Forces you to build the infrastructure you should have had all along.