Shipping MemStack v3.2.2 + My First MCP Skills Migration

Just wrapped up a marathon session getting MemStack v3.2.2 ready for prime time. What started as "fix a few audit issues" turned into a full marketplace prep + my first real test of Anthropic's MCP (Model Context Protocol) for skill loading.

Cleaning House Before Launch

Had 4 critical audit issues to knock out across both the Pro and Free repos:

  • License confusion: Turns out having both MIT and Proprietary licenses in the Pro repo sends mixed signals. Whoops.
  • Stale skill counts: Everything was still saying "59 skills" when we're actually at 77 now. Updated all the marketing copy to say "75+" for some future-proofing.
  • Dirty git trees: The Free repo had 31 upsell stub files that got consolidated into a single pro-skills.md in a previous session but never committed. Finally cleaned that up.
  • Broken docs: Some getting-started examples were referencing deprecated skills and old file names.

Plus a bunch of smaller warnings around TTS feature placement, skill category counts, and orphaned batch files. The kind of maintenance debt that accumulates when you're shipping fast.

Marketplace Ready (Maybe?)

Anthropic's been teasing their Skills Marketplace, so I spent time analyzing what it would take to submit MemStack's 77 skills. Created a gap analysis doc and honestly? We're like 90% there already.

The main blocker is cosmetic: they want skill descriptions in third person ("This skill helps you...") while ours are second person ("You can use this to..."). Easy fix when the marketplace actually launches.

Added `version: 1.0.0` to all 77 skill frontmatter blocks for compliance. Felt good to stamp that on everything.
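Stamping 77 files by hand would be tedious, so a small script does it. This is a minimal sketch, not MemStack's actual tooling: it assumes skills are markdown files with a YAML frontmatter block delimited by `---` lines, and the `stamp_version` helper name and file layout are mine.

```python
import re
from pathlib import Path

def stamp_version(skill_file: Path, version: str = "1.0.0") -> bool:
    """Insert a version field into a skill's YAML frontmatter block.

    Returns True if the file was updated, False if it already has a
    version or has no frontmatter at all.
    """
    text = skill_file.read_text(encoding="utf-8")
    # Frontmatter is the block between the leading pair of '---' lines.
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match or "version:" in match.group(1):
        return False
    # Insert the version field right after the opening '---'.
    updated = text.replace("---\n", f"---\nversion: {version}\n", 1)
    skill_file.write_text(updated, encoding="utf-8")
    return True

# Run over the whole library (path is an assumption):
# for f in Path("skills").glob("**/*.md"):
#     stamp_version(f)
```

Returning a bool makes the script safe to re-run: already-stamped skills are skipped instead of double-stamped.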

Testing MCP in the Wild

Here's the fun part. Instead of keeping massive NTFS junctions pointing to my MemStack skills (which was getting unwieldy), I decided to test Anthropic's MCP for loading skills on demand.

Picked StreamStack as my guinea pig project. Ripped out the junction, kept just the 6 essential rules local (memstack, notify, diary, echo, work, headroom), and configured .mcp.json to load the full skill library via MCP.
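For reference, the `.mcp.json` follows the usual `mcpServers` shape Claude Desktop reads. This is a sketch with hypothetical paths and launch command; the only detail taken from this post is the `memstack-skills` server name.

```json
{
  "mcpServers": {
    "memstack-skills": {
      "command": "node",
      "args": ["C:/MemStack/mcp-server/index.js"],
      "env": {
        "SKILLS_DIR": "C:/MemStack/skills"
      }
    }
  }
}
```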

Built an auto-test script that runs 20 different skill queries. All passed. The MCP skill loader properly indexes all 77 skills and search works great. But the real test will be firing up a fresh Claude Desktop session and seeing if it can actually discover and use the MCP tools.
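The auto-test boils down to: query the skill index, fail loudly if any query comes back empty. A toy sketch of that idea, with a hypothetical `search_skills` helper and a three-entry index standing in for the real 77:

```python
def search_skills(index: dict[str, str], query: str) -> list[str]:
    """Return skill names whose name or description mentions the query."""
    q = query.lower()
    return [name for name, desc in index.items()
            if q in name.lower() or q in desc.lower()]

def run_smoke_tests(index: dict[str, str], queries: list[str]) -> list[str]:
    """Return the queries that matched nothing; empty list means all passed."""
    return [q for q in queries if not search_skills(index, q)]

# Toy index; the real MemStack loader indexes all 77 skills.
index = {
    "memstack": "Core memory management skill",
    "notify": "Send desktop notifications",
    "diary": "Append a dated diary entry",
}
queries = ["memory", "notify", "diary"]
```

Returning the list of failed queries (rather than a bare pass/fail) makes the script's output immediately actionable when something breaks.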

The Numbers Game

Went from 29.8KB of rules via junction down to 8.4KB of essential local rules. That's a 72% reduction while keeping all the functionality available on demand. Pretty clean architecture if the live test works out.

If this MCP approach works, I can roll it out to all 35 of my projects with a batch script. No more junction maintenance, cleaner repos, and Claude Desktop gets the full skill library in any project that needs it.
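The batch rollout could be as simple as dropping the same `.mcp.json` into each project directory. A sketch of that idea, with hypothetical paths and a `migrate_project` helper of my own naming; it merges into any existing config rather than clobbering it:

```python
import json
from pathlib import Path

# Server entry to roll out everywhere (command/path are assumptions).
MCP_SERVERS = {
    "memstack-skills": {
        "command": "node",
        "args": ["C:/MemStack/mcp-server/index.js"],
    }
}

def migrate_project(project_dir: Path) -> None:
    """Add the memstack-skills MCP server to a project's .mcp.json."""
    config_path = project_dir / ".mcp.json"
    # Merge rather than clobber, in case the project already has MCP servers.
    existing = {}
    if config_path.exists():
        existing = json.loads(config_path.read_text(encoding="utf-8"))
    existing.setdefault("mcpServers", {}).update(MCP_SERVERS)
    config_path.write_text(json.dumps(existing, indent=2), encoding="utf-8")

# Roll out to every project (root path is an assumption):
# for project in Path("C:/Projects").iterdir():
#     if project.is_dir():
#         migrate_project(project)
```

Because the update is a merge, re-running it is harmless, which is exactly what you want from a script pointed at 35 repos.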

What's Next

StreamStack is set up and ready for the live MCP test. Auto-tests pass, configuration looks good, just need to open a fresh Claude Desktop session and see if /mcp shows the memstack-skills server.

If it works, this could be a game-changer for how I manage skills across projects. If it doesn't... well, I've got a clean rollback plan ready.

Still have a few audit warnings to clean up (some TTS duplication, product URL updates) but nothing urgent. The big wins are shipped and both repos are in a much cleaner state.

Time to see if MCP lives up to the hype. šŸ¤ž
