Researching developer tools means checking Reddit for honest takes, scrolling Hacker News for debates, scanning X for hot takes, watching YouTube walkthroughs, and maybe peeking at Polymarket to see where the smart money’s betting. It’s exhausting. last30days-skill launched this week and hit #1 on GitHub Trending today (March 26, 2026) with 9,540 stars, gaining 2,684 in a single day. It’s an AI agent skill for Claude Code that automates cross-platform research across 10+ platforms, delivering grounded summaries with engagement metrics and citations in 2-8 minutes.
This solves research paralysis. Instead of manually checking multiple platforms for framework comparisons or best practices, last30days-skill aggregates community intelligence automatically, showing not just what’s discussed but what communities actually upvote, share, and bet on.
What Is the last30days-skill AI Research Tool?
last30days-skill is an AI agent research skill for Claude Code and OpenClaw that synthesizes information from 10+ platforms—Reddit, X, YouTube, Hacker News, Polymarket, Bluesky, TikTok, Instagram, and general web search—to deliver grounded, cited summaries on any topic from the past 30 days. Built by Matt Van Horn, it launched around March 20-25, 2026, and immediately claimed the #1 spot on GitHub Trending.
Under the hood, it operates in three phases. First, the research phase scans 10+ sources simultaneously with smart supplemental search that auto-discovers relevant handles and subreddits. If you research “Open Claw,” it automatically finds @openclaw and @steipete and drills into their posts. Second, the synthesis phase ranks results with multi-signal quality scoring that combines text similarity, engagement velocity, authority weighting, and temporal decay. Finally, the delivery phase generates grounded summaries with real citations, engagement metrics, and copy-paste-ready prompts.
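The project doesn’t publish its scoring formula, but a weighted blend of those four signals might look roughly like the minimal sketch below. The weights, field names, and normalizations are hypothetical, chosen only to illustrate the idea:
import math
from datetime import datetime, timezone
# Hypothetical weights; the skill's real weighting is not published.
WEIGHTS = {"similarity": 0.40, "velocity": 0.25, "authority": 0.20, "recency": 0.15}
def quality_score(item: dict, query_similarity: float) -> float:
    """Blend text similarity, engagement velocity, authority, and temporal decay into one score."""
    age_days = max((datetime.now(timezone.utc) - item["posted_at"]).days, 1)
    recency = math.exp(-age_days / 30)                         # temporal decay over the 30-day window
    velocity = min(item["engagement"] / age_days / 100, 1.0)   # engagement per day, capped at 1.0
    authority = min(item["author_followers"] / 100_000, 1.0)   # crude authority proxy
    return (WEIGHTS["similarity"] * query_similarity
            + WEIGHTS["velocity"] * velocity
            + WEIGHTS["authority"] * authority
            + WEIGHTS["recency"] * recency)
# Example: a week-old thread with strong engagement from a mid-sized account
post = {"posted_at": datetime(2026, 3, 19, tzinfo=timezone.utc),
        "engagement": 1400, "author_followers": 12_000}
print(round(quality_score(post, query_similarity=0.82), 3))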
The standout feature is comparative mode. Ask “/last30 cursor vs windsurf” and you get three parallel research passes with strength/weakness ratings, community sentiment percentages (Cursor 78% positive vs Windsurf 65%), and a data-driven verdict. No other tool combines 10+ platforms with engagement metrics and grounded citations. ChatGPT lacks multi-platform aggregation. Manual research takes hours. last30days-skill delivers in 2-8 minutes.
Getting Started: Installation and Setup
Installing last30days-skill takes under 5 minutes with Claude Code’s plugin marketplace. Open Claude Code and run two commands:
/plugin marketplace add mvanhorn/last30days-skill
/plugin install last30days@last30days-skill
That’s it for basic installation. However, here’s where it gets interesting: last30days-skill operates in three modes depending on your API keys. Full mode (Reddit + X + WebSearch) requires SCRAPECREATORS_API_KEY for Reddit/TikTok/Instagram and X cookies (AUTH_TOKEN, CT0) for Twitter. Partial mode works with just one API key—Reddit-only or X-only plus web search. Web-only mode requires no API keys at all and still provides value, just without engagement metrics.
Configuration is straightforward. Create ~/.config/last30days/.env and add your keys:
# Essential (for Reddit, TikTok, Instagram)
SCRAPECREATORS_API_KEY=your_key_here
# Recommended (for X/Twitter)
AUTH_TOKEN=your_x_cookie_auth_token
CT0=your_x_cookie_ct0
# Optional (for Bluesky - v2.9.5 feature)
BSKY_HANDLE=your.handle.bsky.social
BSKY_APP_PASSWORD=your_app_password
# Optional (web search - Brave has 2,000 free queries/month)
BRAVE_API_KEY=your_brave_key
The flexibility is smart. Start with web-only mode immediately, then upgrade to full mode for engagement metrics later. Low barrier to entry, high ceiling for power users.
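To illustrate how the three modes fall out of the keys you provide, here is a rough sketch of the selection logic. It is an assumption based on the mode descriptions above, not the skill’s actual source:
import os
def detect_mode(env=None) -> str:
    """Pick a research mode from whichever credentials are present (assumed logic)."""
    env = os.environ if env is None else env
    has_reddit = bool(env.get("SCRAPECREATORS_API_KEY"))          # Reddit / TikTok / Instagram
    has_x = bool(env.get("AUTH_TOKEN")) and bool(env.get("CT0"))  # X session cookies
    if has_reddit and has_x:
        return "full"      # Reddit + X + web search, with engagement metrics
    if has_reddit or has_x:
        return "partial"   # one social source plus web search
    return "web-only"      # no keys required; summaries without engagement metrics
print(detect_mode())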
Real-World Use Cases That Save Hours
Framework comparisons are the killer use case. Run /last30 cursor vs windsurf and get three parallel research passes analyzing completion speed, context understanding, community sentiment, and a data-driven verdict: “Use Cursor for daily coding (faster completions), Windsurf for architecture planning (better context).” Consequently, this replaces 5-10 hours of manual research across platforms with 4 minutes of automated synthesis.
Emerging tech research is where last30days-skill shines. When Vibe Motion launched just two days ago, the tool found real-time community sentiment unavailable in official docs: early-adopter intelligence before mainstream coverage even happens. Prompt engineering research works the same way. A search for “Nano Banana Pro prompting techniques” surfaced JSON-structured prompts with 92% precision on color/lighting, identified the ICS Framework and the dominant 5-element formula, and produced copy-paste-ready prompts.
Market intelligence works too. Search “anthropic odds” and last30days-skill aggregates 11 Polymarket positions spanning model benchmarks, IPO timing, and valuations. For developers evaluating tools, search “how are people using Remotion with Claude Code?” to find community-developed workflows not in official docs. Real patterns from practitioners, not marketing copy.
Basic usage is simple:
# Simple research
/last30days "best web frameworks 2026"
# Comparative analysis
/last30 react vs vue 2026
# Market intelligence with custom timeframe
/last30days "openai vs anthropic" --days=7
# Quick mode (faster, less thorough)
/last30days "rust adoption" --quick
Comparative Mode: The Feature That Justifies Everything
Comparative mode is the reason you’ll tolerate the 2-8 minute execution time. Run /last30 cursor vs windsurf and get automated strength/weakness evaluation across dimensions like completion speed and context understanding, community sentiment scoring with percentages, and a data-driven verdict that removes decision paralysis.
Most tools require manual side-by-side prompting; last30days-skill automates three research passes (X-only, Y-only, comparative) and synthesizes the results. Community sentiment percentages show what developers actually prefer based on upvotes, shares, and discussions, not just feature checklists.
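Conceptually, the comparative flow looks something like the sketch below. The pass names and the placeholder research function are assumptions drawn from the description above, not the skill’s actual code:
from concurrent.futures import ThreadPoolExecutor
def research(query: str, days: int = 30) -> dict:
    """Placeholder for a single research pass over the configured platforms."""
    # In the real skill this would query Reddit, X, HN, etc.; here it just echoes the query.
    return {"query": query, "days": days, "findings": []}
def compare(a: str, b: str) -> dict:
    """Run the X-only, Y-only, and comparative passes in parallel, then synthesize."""
    queries = [a, b, f"{a} vs {b}"]
    with ThreadPoolExecutor(max_workers=3) as pool:
        passes = list(pool.map(research, queries))
    return {"passes": passes, "verdict": None}  # verdict would be filled in by the synthesis phase
result = compare("cursor", "windsurf")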
Try these comparisons:
# IDE comparison
/last30 cursor vs windsurf
# Framework comparison
/last30 next.js vs remix 2026
# Language comparison for specific use case
/last30 rust vs go for backend
# AI model comparison
/last30 gpt-4 vs claude-3.5
Decision paralysis is real when choosing between frameworks, tools, or languages. Comparative mode removes the guesswork by showing actual community sentiment with engagement metrics. This single feature justifies the execution time.
Trade-offs: When to Use and When to Skip
last30days-skill prioritizes depth over speed. Execution takes 2-8 minutes depending on topic complexity, which is slower than ChatGPT’s instant responses but vastly faster than hours of manual research. The --quick mode trades thoroughness for speed if you’re impatient, but the default is worth the wait.
API setup, though, is more complex than zero-config alternatives like ChatGPT. You’ll spend 5-10 minutes getting SCRAPECREATORS_API_KEY and optional X cookies, but the payoff is full-mode access to Reddit, X, TikTok, Instagram, and engagement metrics. The web-only mode requires zero setup and still provides value, just without the social data.
Cost considerations matter. ScrapeCreators API is pay-per-use. Brave Search offers 2,000 free queries per month. Manual research is free but costs hours of your time. Pick your poison.
Use last30days-skill when you need comprehensive research for tool or framework comparisons, when community sentiment matters, when grounded citations are required to avoid hallucinations, when comparing two options with comparative mode, or when you need recent community intelligence from the last 30 days.
Skip it when you need instant results—use ChatGPT or Claude instead. Skip it if you only need one platform—just use Reddit or Hacker News directly. Skip it if you don’t want to set up API keys—manual research is free. Skip it if you need historical data beyond 30 days.
The Verdict
last30days-skill hit #1 on GitHub Trending for a reason. It automates 10+ platforms of research with engagement metrics and grounded citations, saving developers 5-10 hours on framework comparisons or tool evaluations. Comparative mode removes decision paralysis with data-driven verdicts based on actual community sentiment.
The 2-8 minute execution time is a trade-off, but it’s worth it for comprehensive research. API setup takes 5-10 minutes, but web-only mode works immediately. The tool isn’t perfect—it won’t replace instant ChatGPT queries or historical research beyond 30 days—but for recent community intelligence, it’s the most comprehensive option available.
Key takeaways:
- Automates research across 10+ platforms (Reddit, X, YouTube, HN, Polymarket, Bluesky, more)
- Trending #1 on GitHub with 9,540 stars, gaining 2,684 today
- Comparative mode enables side-by-side analysis with community sentiment scoring
- Engagement metrics show what communities value, not just what exists
- Saves 5-10 hours of manual research per framework comparison
- Available now for Claude Code and OpenClaw (install in under 5 minutes)
Install it. Run /last30 cursor vs windsurf. See the difference between algorithm-curated noise and community-validated insights. Your research workflow won’t be the same.