Claude Code research projects
Claude Code is great for code, but it may be even better for research. Learn how to unlock the research platform you need.
Anthropic recently launched Claude Code Web, and to encourage usage they gave free credits to Claude Max and Claude Pro users. My offer was $1,000 in credits - it is $250 for Pro plan users - so on the face of it a very generous gift.
The catch: the credits can be used only on the web; they do not apply to your existing plan, and they are time-limited, ending on November 18th. This effectively means that most Claude Code users would have to switch their development from their paid plan to take advantage of the free tokens. I cannot realistically use a thousand dollars in tokens in two weeks, although human psychology tells me that I must try.
The winner, of course, will be Anthropic, whose cleverly designed gift has me and other developers willing to try a new offering - letting the company work out bugs and possibly convert us to using Claude Code Web. You can see where this is going: with the carrot dangled, I started coding.
Well, not really coding. In my normal workflow I make heavy use of AI assistance to eventually generate code; my process involves taking information from GitHub issues or design ideas through a series of artifacts, mainly Markdown files. But what if Markdown files are what you actually want?
Coding Research Projects
I picked up a fantastic idea from Simon Willison: use Claude Code for research projects. And while I have $1,000 in credits, I might as well make as much use of them as possible.
In a research project you ask Claude Code to generate research Markdown documents, and sometimes code, to answer a research question. The research question is anything you want to know about, and you pass it in as the initial prompt to Claude Code. My research projects live in a dedicated research repository on GitHub, where I have started collecting answers to things I am curious about, such as "how to master Claude Code" or "which are the best LLM providers". Note that these are the raw questions inside my head; I still have to phrase each one as a more complete prompt that guides Claude through the research.
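For example, the raw question "how to master Claude Code" might be expanded into something along these lines (a hypothetical prompt for illustration, not the exact one I used):

```
Research how to master Claude Code. Cover installation, CLAUDE.md
conventions, slash commands, sub-agents, and common workflows.
Write your findings to a README.md with one section per topic,
citing the official documentation where possible.
```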
The result of a research run is generally a README document, but in some cases code is generated to provide some of the answers or to verify some of the claims made in the research. The main way I see Claude Code Web right now is as a user interface from which I can trigger agents to pull information and package it for me, and its UX design is very well suited to these types of tasks.

Using it on EdgarTools
In recent EdgarTools releases, 4.25 and 4.26, I introduced the edgar.skills package to support AI-native workflows. This means you can use AI tools like Claude Desktop, and the AI can learn from the EdgarTools skills how to use the APIs to do research. I wasn't quite sure how good the skills package was, so I created a test: I prompted Claude Code Web to evaluate how well an AI agent could learn and use EdgarTools from the skills documentation alone.
The result was a 300-line report that highlighted a number of weaknesses in the skills implementation, the most critical of which was that set_identity() was not documented anywhere, meaning AI agents would be blocked very early in the research.
| Category | Issue | Severity | Impact on Agents |
|---|---|---|---|
| Setup | set_identity() not documented | 🔴 Critical | Complete failure |
| API Reference | Attribute names unclear | 🟡 Medium | Slows discovery |
| API Reference | Return types ambiguous | 🟡 Medium | Causes errors |
| Documentation | No troubleshooting section | 🟡 Medium | Hard to debug |
| Examples | Missing error handling | 🟢 Minor | Less robust code |
| Versioning | Version info outdated | 🟢 Minor | Confusion |
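To see why that first gap is fatal: SEC EDGAR requires every client to identify itself, and EdgarTools enforces this before any data access. Here is a minimal sketch of the call an agent has to discover, assuming the current edgartools API (the name and email are placeholders):

```python
from edgar import Company, set_identity

# SEC EDGAR requires clients to identify themselves; without this
# call, requests to EDGAR fail and an agent is blocked immediately.
set_identity("Jane Doe jane.doe@example.com")

# Only after identifying itself can the agent start researching.
company = Company("AAPL")
filings = company.get_filings(form="10-K")
print(filings)
```

An agent that never sees set_identity() in the documentation fails at this very first step, which is exactly what the report flagged.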
The research findings were very detailed, and I copied them as-is into the edgartools project and used them to create development tasks for a new release. The resulting improvements were released in edgartools 4.26.2. Post-release, I redid the research, this time pointing it at the new release. Here we can see a major improvement.
| Metric | v4.26.1 | v4.26.2 | Improvement |
|---|---|---|---|
| Agent Efficiency | 40% | 85%+ | +113% |
| Doc Reading (tokens) | ~15,000 | ~3,000 | -80% |
| API Exploration (tokens) | ~3,500 | ~500 | -86% |
| Critical Gaps | 2 major | 0 major | 100% |
| SKILL.md Size | 855 lines | 460 lines | -46% |
| Time to Productivity | 15-20 min | 2-3 min | -85% |
To be fair, a lot of this could have been done with the regular Claude Code CLI, but the grant of free tokens let me experiment with a couple of new ways to use the web interface. I will certainly keep using these techniques after the trial expires, since generating research and testing software systems are really valuable uses of this type of agent harness.
I haven't really spoken about Claude Code Web itself, how good it is and whether I will switch. There are still some bugs, the worst of which is that sometimes, when the research document to be generated is very large, the Claude Code instance doesn't generate anything at all. I am tweaking my CLAUDE.md to get it to break large tasks into smaller documents that are combined at the end, and to use parallel tools when researching. In general, though, I find it good, particularly when you combine it with the CLI on the same project.
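The tweaks look something like this (a hypothetical CLAUDE.md excerpt; the exact wording should be adapted to your own project):

```markdown
## Research guidelines

- For large research questions, split the output into several
  smaller Markdown documents, one per sub-topic, then finish with
  a short README that links to and summarizes them.
- Avoid writing any single document longer than ~500 lines.
- When gathering sources, run independent lookups in parallel
  rather than sequentially.
```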
Conclusion
This article highlights how you can use research projects in Claude Code to assemble information you are interested in. Credit, of course, goes to Simon Willison, who has always had really good ideas; in this instance, his idea helped improve the library. There will be a follow-up article on my use of Claude Code with edgartools. You can also see the research project I generated for Claude Code mastery here.
The edgartools library provides the easiest way to query SEC filings. You can install it with pip install edgartools. If you find it useful, leave a star on the GitHub repo.