
This is a software-generated (AWS) transcription, and it is not perfect.
It depends on the project, but from an n8n perspective there are basically two types: you either build a backend for something, or you build a self-contained automation. If I'm building a backend, I first sit down and brainstorm with the client and gather all the requirements they can imagine at that point. I do business analysis first. Then I plan the architecture: how many endpoints we need, what each endpoint does, and so on. When the plan is about 80% ready, I start grouping things, for example authentication as one group of workflows, and then build them one by one. A backend can be forty or sixty workflows or endpoints, and some will have sub-workflows. A good plan is important; don't just start building.

Then I fire up Claude Code with the n8n MCP turned on, and I use sub-agents: an architect sub-agent, a builder sub-agent, a codebase-explorer sub-agent, and technical researchers. I delegate: design the first workflow in detail, then build it based on the architect's design. I look at the first draft in n8n, apply my experience, and adjust things like branching and error handling. I usually touch the workflow manually after three or four agent iterations, correcting the final parts that would take longer to explain than to do myself.

For a self-contained automation, where humans aren't involved in the testing process, it can be more automated. You can let the AI run executions, check errors, and iterate. You can leave it for thirty minutes to run and correct itself, then check what's not working and add the final touches. The base process is the same: design, then build, then test, then test again and iterate.
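The "run executions, check errors, iterate" loop for self-contained automations can be sketched roughly as below. This is a minimal illustration, not the speaker's actual tooling: the `Execution` fields and the `run_once` callable are assumptions standing in for whatever the agent gets back from n8n's API when it triggers a workflow and polls for results.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Execution:
    """Minimal stand-in for an n8n execution record (field names assumed)."""
    workflow_id: str
    finished: bool
    error: Optional[str] = None

def failed_executions(executions: List[Execution]) -> List[Execution]:
    """Pick out the executions the agent should iterate on."""
    return [e for e in executions if not e.finished or e.error]

def iterate_until_green(run_once: Callable[[], List[Execution]],
                        max_rounds: int = 10) -> int:
    """Re-run the build/test cycle until no failures remain.

    `run_once` triggers the workflow and returns the latest batch of
    executions; in practice it would call the n8n API. Returns the
    number of rounds it took (or max_rounds if it never went green).
    """
    for round_no in range(1, max_rounds + 1):
        failures = failed_executions(run_once())
        if not failures:
            return round_no
        # Here the agent would read each failure's error message,
        # patch the workflow JSON, and push it back before re-running.
    return max_rounds
```

The point of separating `failed_executions` from the loop is that the "check errors" step is pure and easy to test, while the side-effecting trigger stays behind the `run_once` callable.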
I tried something similar before and failed, so this time I thought it through deeply. For the base functionality, the documentation, I had to figure out how to get information from the n8n source code and serve it to the agent. The obvious first approach would be to scrape the documentation, but it's available on GitHub, so you can just clone the repo. Still, documentation written for humans wasn't enough, because humans don't write those JSONs; they click and drag nodes. For AI agents, it's all about building JSONs. I had to figure out where those JSONs are stored, and they're stored in the source code in the open GitHub repository. I built scripts and logic so that for every node the server knows its JSON and how to build it: what the parameters are, what the limits are, and how to validate values. That was the first design principle that made it work in the first place.

After that it became community-driven. People asked to make it work on Railway, so we did. Someone wanted Docker Compose but didn't want to learn Docker, so we made it run with npx, which was easier, and that became the easiest version for most people. Now I'm working on a hosted version, because there are already hundreds of people on the waitlist who want easier access: you just paste the URL and it works. It's community-driven and also driven by my own work, because I'm solving my own problem all the time: making working with n8n workflows easier.
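The "for every node, get the JSONs" step could look something like the sketch below: walk a cloned n8n repository and index each node's JSON metadata file. The `*.node.json` naming convention reflects the repo's per-node metadata files, but treating that as the whole story is a simplification; the real extraction also has to pull parameter definitions and limits out of the TypeScript sources, which this sketch does not attempt.

```python
import json
from pathlib import Path
from typing import Dict

def collect_node_specs(repo_root: Path) -> Dict[str, dict]:
    """Index every node's JSON metadata from a cloned n8n repo.

    Assumes one `<Name>.node.json` metadata file per node (a
    simplified view of the repo layout, for illustration only).
    """
    specs: Dict[str, dict] = {}
    for path in repo_root.rglob("*.node.json"):
        data = json.loads(path.read_text(encoding="utf-8"))
        # "Slack.node.json" -> stem "Slack.node" -> key "Slack"
        specs[path.stem.removesuffix(".node")] = data
    return specs
```

An index like this is what lets the MCP server answer an agent's question about a node's parameters without scraping human-oriented docs at all.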
n8n is an automation platform similar to Make.com or Zapier, but geared more toward developers or more technical people, at least at the beginning. It's slowly becoming more accessible, and it's been gaining a lot of popularity recently. It basically lets you automate anything and connect APIs. MCP stands for Model Context Protocol. It's an open standard developed by Anthropic, the team behind the Claude Sonnet, Opus, and Haiku models. MCP allows AI models such as Claude or ChatGPT to connect to live sources of data. Large language models are trained on data up to some cutoff point, and beyond that they know nothing, but with MCP you can connect the model to your database or to tools and provide it with up-to-date data. In this setup, I used MCP to connect the model to live data about the n8n documentation.
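Under the hood, MCP messages are JSON-RPC 2.0, so "connecting a model to a tool" boils down to exchanging small JSON payloads like the one built below. The `jsonrpc`, `method`, and `params` shape follows the MCP specification; the tool name `get_node_docs` is a made-up example, not a real tool from this project.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request as a JSON-RPC 2.0 message.

    The envelope fields follow the MCP spec; the tool name passed in
    is whatever the server advertised via `tools/list`.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

A client like Claude Code sends messages of this shape to the MCP server (over stdio or HTTP), and the server replies with the live data, which is how the model gets documentation fresher than its training cutoff.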