How I Created an AI-based Newsletter Workflow Using Claude Desktop, Langfuse, and a Set of MCP Servers
Inside my journey to automate a weekly newsletter using Claude and custom tools
Introduction
I constantly found myself researching, reading, and testing the latest AI news and tools. That’s when I had the idea of sharing my findings and opinions with the public through a modest newsletter, free of hype and empty buzzwords. That’s how Luis Poveda’s AI Newsletter was born.
But there was a problem, or maybe two: I didn’t have much time to spend on it, and I like things to be very structured. I needed to create a workflow that would allow me to generate the newsletter quickly and consistently, while iterating towards a more refined result with each edition.
Early Experiments and MCP Servers
When I started in March, MCP servers weren’t as popular as they are now (at least among heavy AI users; I still think we’re early). That’s when I began using Project Goose, a tool created by Block, a company founded by Jack Dorsey (yes, the Twitter guy). I started with a popular MCP server, Tavily, but also experimented with others.
I was also using local LLMs, but since I’m GPU-poor, the experience wasn’t great, and as a Windows user I was running into WSL2 issues at the time. That’s when I decided to try Claude Desktop, the go-to MCP client nowadays, and my experience improved significantly. I still use the combination of Claude Desktop and a set of MCP servers. I even created an MCP server myself to connect Claude to an IT network observability API, but that’s a story for a future post.
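For readers who haven’t wired this up before, Claude Desktop discovers MCP servers through its claude_desktop_config.json file. The sketch below only shows the general shape: the server name, package, and API key variable are placeholders rather than a specific server, so substitute whatever servers you actually use (concrete entries appear later in this post).

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server-package"],
      "env": {
        "EXAMPLE_API_KEY": "your-api-key"
      }
    }
  }
}
```

After editing the file, restart Claude Desktop so it picks up the new servers and exposes their tools in the chat.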
Now comes the interesting part.
Prompt Engineering and Langfuse
I started getting inspiration from other publications and translated that into a basic prompt. The prompt needed improvement, and for that, I used the Anthropic Console, which has a powerful prompt editor and iteration tools. The prompt was a key part of the process and went through many changes (36 versions so far) until I got the most accurate and useful result.
Pretty soon, I noticed something: some prompt changes didn’t work as expected. That’s when I realized I needed version control. The Anthropic Console has basic versioning, but I needed a more robust prompt management platform, so I turned to Langfuse. It offers many advanced prompt optimization features, but I mainly use it as a prompt library, and I’ve kept using it ever since.
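For readers who prefer to script it, the same prompt-library idea is available through the Langfuse Python SDK. Here is a minimal sketch; the prompt name, its text, and the {{topics}} variable are made up for illustration, not my actual newsletter prompt.

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST from the environment
langfuse = Langfuse()

# Store a new version of a prompt (name and text are illustrative placeholders)
langfuse.create_prompt(
    name="weekly-newsletter",
    prompt="You are an AI news editor. Summarize this week's most relevant news about {{topics}}.",
    labels=["production"],  # marks this version as the default one to fetch
)

# Later, fetch the version labeled "production" and fill in its variables
prompt = langfuse.get_prompt("weekly-newsletter")
print(prompt.compile(topics="LLMs, agents, and MCP"))
```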
The process was simple: test the prompt, improve it in the Anthropic Console, copy and paste it into Langfuse, test again, and repeat. Eventually, I wanted to avoid the copy-pasting altogether, and that’s when I discovered the Langfuse MCP server. After installing it, I could fetch the prompt directly from within Claude Desktop. This small step saved me time and reduced the risk of human error.
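To give you an idea of what the setup looks like, here is roughly how the Langfuse prompt MCP server can be registered in the claude_desktop_config.json shown earlier. I’m going from the langfuse/mcp-server-langfuse README here, so treat the build path and environment variable names as assumptions to verify against the current docs.

```json
{
  "mcpServers": {
    "langfuse-prompts": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-server-langfuse/build/index.js"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-lf-...",
        "LANGFUSE_SECRET_KEY": "sk-lf-...",
        "LANGFUSE_BASEURL": "https://cloud.langfuse.com"
      }
    }
  }
}
```

Once registered, Claude can pull the production version of the prompt on demand instead of me pasting it into the chat.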
Adding Search and Web Context
At that time, Claude wasn’t able to search the internet, so I needed to add that capability. I integrated Tavily Search, which proved very useful: it can restrict results to news only and supports customized searches. Sometimes, though, I needed a backup search tool or broader results, so I also added Brave Search. Honestly, you can get by with just one of them.
Later, I realized I needed a tool to extract website content in a format LLMs could understand, so I added the Fetch MCP server, which helped improve both context and output quality.
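If you want to replicate this part, the three servers can live side by side in the same config. The entries below are a sketch based on each project’s README at the time of writing (the tavily-mcp and @modelcontextprotocol/server-brave-search npm packages, and the Python mcp-server-fetch run via uvx); package names and API key variables may have changed, so check the current docs.

```json
{
  "mcpServers": {
    "tavily": {
      "command": "npx",
      "args": ["-y", "tavily-mcp"],
      "env": { "TAVILY_API_KEY": "tvly-..." }
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "..." }
    },
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

With these in place, Claude can search for the week’s news with Tavily or Brave and then pull full article text through Fetch before drafting the newsletter.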
When I first started the newsletter, I think the state-of-the-art model was Claude 3.7 Sonnet. But during the process, Anthropic released the Claude 4 models, and the output quality improved noticeably. That said, some prompt adjustments were needed to adapt.
Finally, I formatted the newsletter output in Beehiiv. I even considered automating this last step with something like Browser Use, but decided it wasn’t worth the time investment: this “last mile” task isn’t time-consuming enough to justify it.
The Podcast Version
One major addition was an AI-generated podcast version of the newsletter, thanks to Google’s revolutionary tool: NotebookLM. The input was the Markdown-formatted newsletter generated by Claude Desktop.
In NotebookLM, I usually provide the following instructions:
It’s a news podcast based on Luis Poveda’s AI Newsletter
Targeted at tech and AI professionals and enthusiasts
The full newsletter creation process takes about 45 minutes, including the Beehiiv formatting and scheduling. The newsletter then goes out every Monday at 09:00 CET.
Conclusions
If you’ve read this far, you might be wondering: Do I really need all this to create a newsletter with AI?
Probably not. You can achieve good results with much less effort, especially now that MCP servers and AI models are more capable than when I started. And tomorrow? Things will be even better.
If you can simplify, do it. I believe we should aim to generate the best output possible, in a structured, simple, and beautiful way, evaluating each step and each iteration. Ask yourself: Will this improve the result meaningfully? If the answer is yes, maybe it’s worth it. If not, move on to a task that adds more value to your creation.