Leveling Up My Claude Agent Army + First Steps with Firecrawl
Had one of those productive dev sessions today where everything just clicks. Spent the morning diving deep into the awesome-claude-code-subagents repo and holy shit - 127+ agents across 10 categories. It's like a buffet of AI helpers.
After some careful evaluation against my current stack, I cherry-picked 4 high-value global agents that'll actually move the needle:
- wordpress-master: Because let's be honest, WordPress dev/optimization/security is like 60% of my client work
- debugger: A systematic root-cause analysis agent (finally, no more printf debugging like a caveman)
- payment-integration: Gateway/PCI/subscription stuff that I always have to look up
- context-manager: This one's interesting - helps manage shared state between agents so they can actually work together
The cool part? These are all global agents now, so they're available across every project immediately.
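For reference, a global subagent is just a markdown file with YAML frontmatter dropped into ~/.claude/agents/ (project-level ones live in .claude/agents/ instead). A minimal sketch of what the debugger agent's file might look like - the frontmatter fields follow Claude Code's subagent format, but double-check against the current docs before copying:

```markdown
---
name: debugger
description: Systematic root-cause analysis. Use proactively when tests fail or unexpected errors appear.
tools: Read, Grep, Bash
---

You are a debugging specialist. When invoked:
1. Reproduce the failure and capture the exact error output.
2. Form a hypothesis about the root cause before touching any code.
3. Make the smallest change that fixes the cause, not the symptom.
4. Re-run the original failing case to confirm the fix.
```

The body below the frontmatter becomes the agent's system prompt, which is where the "systematic" part actually gets encoded.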
Firecrawl Deep Dive
Then I went down the Firecrawl rabbit hole. Initially thought about self-hosting their monorepo, but after reviewing the code, the real value is clearly in their API, not running my own instance. Sometimes the obvious choice is the right choice.
Got the Firecrawl CLI installed with all 8 skills: scrape, search, crawl, map, agent, browser, download, and base. The browser skill in particular opens up some interesting possibilities for dynamic content.
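Under the hood, the scrape skill is hitting Firecrawl's hosted API. A minimal sketch of what a raw call looks like, assuming the v1 REST endpoint and `formats` parameter from their public docs (field names may have drifted, so verify before relying on this):

```python
# Assumed from Firecrawl's public REST docs; verify the current endpoint/params.
FIRECRAWL_SCRAPE_URL = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_request(url: str, formats=("markdown",)) -> dict:
    """Build the JSON payload for a Firecrawl /v1/scrape call."""
    return {"url": url, "formats": list(formats)}

def scrape(api_key: str, url: str) -> dict:
    """Send the scrape request. Needs the `requests` package and a live API key."""
    import requests  # imported here so the payload builder stays dependency-free
    resp = requests.post(
        FIRECRAWL_SCRAPE_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_scrape_request(url),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```

The CLI wraps all of this, but knowing the raw shape matters once the custom skills start composing scrape results with other APIs.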
Authenticated everything under the HoneyBun team and I'm already seeing two killer custom skill opportunities:
- firecrawl-enrich: Scrape company websites → structured data → auto-populate GHL lead fields
- firecrawl-interlink: SEO internal link suggestions by understanding content relationships
Both have reference implementations in the examples folder, so they shouldn't be too gnarly to build.
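The enrich skill is mostly a mapping problem once the scrape comes back. A sketch of the transform step, assuming hypothetical GHL custom-field keys (`contact.industry` etc. are placeholders - the real field IDs come from the GHL API for a given location):

```python
def to_ghl_fields(extracted: dict) -> dict:
    """Map Firecrawl-extracted company data onto a GHL contact-update payload.

    The customField keys below are hypothetical placeholders; in practice
    you'd look up the real custom field IDs from your GHL location.
    """
    field_map = {
        "industry": "contact.industry",
        "tech_stack": "contact.tech_stack",
        "team_size": "contact.team_size",
    }
    custom_fields = [
        {"key": ghl_key, "value": extracted[src_key]}
        for src_key, ghl_key in field_map.items()
        if extracted.get(src_key) not in (None, "")
    ]
    return {"customFields": custom_fields}
```

Keeping this step a pure function means the scrape side and the GHL side can each be swapped out without touching the mapping logic.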
Infrastructure Clarity
Also knocked out some infrastructure questions that have been bouncing around my head:
- Cloudways doesn't do email (duh), so it's Google Workspace for mailboxes + SendGrid for transactional stuff
- Supabase vs Cloudways isn't even a competition - they complement each other. WordPress sites stay on MySQL, custom apps get Postgres
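On the transactional side, SendGrid's v3 mail/send payload is simple enough to build by hand. A minimal sketch of the JSON shape, per SendGrid's documented schema (the sender address is a placeholder - it has to be a verified domain in practice):

```python
def build_sendgrid_payload(to_email: str, subject: str, body: str,
                           from_email: str = "noreply@example.com") -> dict:
    """Build a SendGrid v3 /mail/send request body for a single recipient."""
    return {
        "personalizations": [{"to": [{"email": to_email}]}],
        "from": {"email": from_email},  # placeholder; must be a verified sender
        "subject": subject,
        "content": [{"type": "text/plain", "value": body}],
    }
```

POST that to https://api.sendgrid.com/v3/mail/send with a Bearer API key and you're done - no SMTP config on the Cloudways box at all.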
Feels good to have that mental model clarified. WordPress for content/marketing sites, Supabase for anything that needs real-time features or complex queries.
What's Next
The natural next build is definitely those custom Firecrawl skills. The GHL lead enrichment one could be a game-changer for client onboarding - imagine scraping a prospect's website and auto-populating their industry, tech stack, team size, all that good stuff.
Also thinking about using Supabase for non-WordPress data - worker state, lead enrichment cache, maybe some client dashboards. The real-time features could be clutch.
Anyway, solid session. Nothing revolutionary, just steady progress on making the development workflow more intelligent. The agent ecosystem is getting pretty wild - feels like we're still in the early innings of what's possible.