When To Code Or Use AI Agent Frameworks
0x41434f
In my last post, "How I Chose an AI Agent Framework for QuickDesk," I talked about all the new AI agent tools. There are a lot of them! So, when do you build an AI agent with your own code, and when do you use a framework? That’s a tough call: frameworks can be fast, while custom code gives you total control. This post explores that choice. We’ll look at how agents work, why frameworks can be handy even for good coders, and what Big Tech is offering, and I’ll share my own thoughts. I’m not here to say which framework is best; I just want to give you some ideas to help you pick what works for you and your project.
When we start building AI agents, one of the first big questions to tackle is about who, or what, is in charge. Should regular code, perhaps written in Python, be the primary controller that calls an AI model like an LLM when specific intelligent tasks are needed? Or, should the AI model itself be the main driver, dynamically deciding when to call other tools or pieces of code? This decision isn't just a minor technical detail; it fundamentally changes how you build and even think about your agent. If your own code is in charge, the process feels much like normal programming. You write scripts, and when a touch of "smartness" is required, such as understanding text or making a complex decision, your code calls an LLM to perform that specific job. Afterwards, your code takes the LLM's output and continues its execution. This approach gives you a lot of control over the workflow and is often easier to test and debug because you can see exactly what your code is doing step by step.
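To make the code-in-charge pattern concrete, here is a minimal sketch. The function names are my own illustrations, not any framework's API, and `call_llm` is a hypothetical stand-in for a real model API client; it is stubbed here so the sketch runs on its own.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call. A real version would hit a model provider's
    API; stubbed here so the example is self-contained and deterministic."""
    if "refund" in prompt.lower():
        return "billing"
    return "general"

def handle_ticket(ticket_text: str) -> str:
    # Step 1: plain code does deterministic preprocessing.
    cleaned = ticket_text.strip()

    # Step 2: the single "smart" step -- ask the LLM to classify the ticket.
    category = call_llm(f"Classify this support ticket: {cleaned}")

    # Step 3: plain code takes the LLM's output and routes deterministically.
    queues = {"billing": "billing-queue", "general": "triage-queue"}
    return queues.get(category, "triage-queue")

print(handle_ticket("I want a refund for my last invoice."))
```

Notice that the control flow is entirely ordinary Python: you can step through it in a debugger, and the LLM is just one function call among many.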
On the other hand, if the LLM is in charge, it’s more like giving the AI a goal and letting it figure out the necessary steps. The LLM might decide it needs to use a tool, perhaps for searching the web or looking up customer information, so it calls that tool. Then, it analyzes the tool's output and decides what to do next. This method can be very powerful for complex tasks where all the steps aren't known in advance. Frameworks that support this model often refer to "agents" that can plan and execute tasks. However, it can also be harder to predict what the agent will do, and debugging can be trickier because you're trying to understand the LLM's internal "reasoning." Many of the new agent frameworks are trying to find effective ways to handle this balance of control. Some are designed to help you build code-driven systems more easily, while others are focused on letting the LLM take the lead. Some even attempt to blend both approaches. Understanding this fundamental design choice is a crucial part of picking the right path for your project.
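By contrast, the LLM-in-charge style is usually a loop: the model picks the next action, code executes it, and the observation goes back to the model. A minimal sketch follows; the tool functions and `llm_decide` are hypothetical placeholders (a real planner would ask the model to emit its next action, e.g. as JSON), stubbed so the loop terminates deterministically.

```python
def search_web(query: str) -> str:
    """Placeholder tool: a real one would query a search API."""
    return f"results for {query}"

def lookup_customer(cid: str) -> str:
    """Placeholder tool: a real one would query a customer database."""
    return f"customer record {cid}"

TOOLS = {"search_web": search_web, "lookup_customer": lookup_customer}

def llm_decide(goal: str, history: list) -> dict:
    """Hypothetical planner call. A real agent would prompt the model with
    the goal and history and parse its chosen action; stubbed here."""
    if not history:
        return {"action": "lookup_customer", "arg": "c-42"}
    return {"action": "finish", "arg": history[-1]}

def run_agent(goal: str) -> str:
    history = []
    while True:
        step = llm_decide(goal, history)
        if step["action"] == "finish":
            return step["arg"]
        # Code merely executes whatever tool the model chose...
        result = TOOLS[step["action"]](step["arg"])
        # ...and feeds the observation back for the next decision.
        history.append(result)
```

The loop itself is trivial; all the unpredictability lives inside `llm_decide`, which is exactly why debugging this style means reasoning about the model rather than the code.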
Given these considerations, if you're a coder, you might then wonder why you'd bother with an AI agent framework at all. If you can write the code yourself, what's the point? That's a perfectly fair question. For very simple agents, or if you're just experimenting, building everything from scratch can indeed be a great way to learn how all the components work. But as your agent idea grows, or if you need it to be reliable enough for real work, frameworks can save you a significant amount of time and effort, even if you could technically code it all yourself. It’s much like building a web application: you could write all the basic web server logic from scratch, but most developers use tools like Flask or Ruby on Rails because they handle many common problems efficiently.
Agent frameworks aim to provide similar benefits for AI agent development by offering pre-built building blocks and a clear structure. They often give you a sensible way to organize your agent’s parts, defining how components should fit together for decision-making or memory, so you're not designing everything from scratch. Frameworks can also provide standard ways for agents to communicate with users, other agents, or different software services. For agents that need to perform several steps or manage tasks in a specific order, these frameworks can offer systems to handle that complexity. Furthermore, since most agents need to use external tools like search engines or databases, frameworks usually make it easier to integrate these "function calls." Some frameworks also include tools to help you track your agent’s actions and performance, which is vital for monitoring once it’s running. Essentially, a good framework handles much of the standard, repetitive work, freeing you to focus on the unique, intelligent aspects of your agent that make it useful for your specific problem. This can help you build faster and create something more dependable.
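As one small example of the convenience frameworks provide, consider tool registration for function calling. Many frameworks let you decorate a plain function and derive a machine-readable schema from it. The sketch below imitates that idea with illustrative names of my own, not any specific framework's API.

```python
import inspect

TOOL_REGISTRY = {}

def tool(fn):
    """Register a function with a description and parameter list that
    could be serialized into a model's function-calling schema."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "description": (fn.__doc__ or "").strip(),
        "params": list(inspect.signature(fn).parameters),
    }
    return fn

@tool
def get_order_status(order_id: str) -> str:
    """Look up the shipping status of an order."""
    return f"order {order_id}: shipped"

# A framework would send TOOL_REGISTRY's schemas to the model; when the
# model picks a tool, dispatching its choice becomes a dictionary lookup.
choice = {"name": "get_order_status", "arguments": {"order_id": "A1"}}
result = TOOL_REGISTRY[choice["name"]]["fn"](**choice["arguments"])
```

Writing this once is easy; writing it robustly for dozens of tools, with validation and error handling, is the repetitive work frameworks absorb.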
It's also important to notice that it's not just startups and open-source projects creating these AI agent frameworks. The big names in technology, the companies that supply many of the cloud services and AI models we already use, are also making significant moves in this area. This is particularly interesting because when these giants release frameworks, it can heavily influence industry trends, especially for businesses already using their cloud services. This situation feels a bit like past shifts, such as when specialized skills grew around iOS and Android for mobile app development, or AWS, Azure, and Google Cloud for cloud engineering. Companies often prefer tools that integrate well with the systems they already pay for. Therefore, it’s wise to pay attention to what these major players are offering, not necessarily because their tools are automatically superior, but because they are likely to become common in enterprise environments. These large companies have a strong potential to bundle their agent frameworks with other services, making them an easy choice for businesses already within their ecosystem.
Microsoft is making a big push with its Azure AI Foundry, positioning it as an "AI App and agent factory." This full-stack platform aims to help developers build, deploy, and manage AI-powered apps and agents, unifying and expanding on existing tools like AutoGen and Semantic Kernel. Key features include an "Agent Service," advanced multi-agent orchestration, smarter model routing, and a strong focus on observability and governance. (Links: Microsoft's Agentic AI Frameworks (older overview), AI Agents for Beginners, General Availability of Azure AI Foundry Agent Service (and broader Azure AI Foundry announcements from Microsoft Build 2025)) Google is also making significant strides in the agent space. Their offerings include the Vertex AI platform, which helps developers build and manage agents that connect to various systems. Furthering their commitment to interoperability, Google recently announced the Agent2Agent (A2A) protocol, an open standard developed with many partners to allow AI agents from different vendors to communicate and coordinate actions. This A2A protocol is designed to complement other important open standards like Anthropic's Model Context Protocol (MCP), which focuses on providing a universal way for AI systems to connect with diverse data sources. Complementing these protocols, Google has also introduced the Agent Development Kit (ADK), an open-source framework aimed at simplifying the end-to-end development of complex agents and multi-agent systems. ADK, which powers agents within Google's own products, supports various models (Gemini, Vertex AI Model Garden, and others via LiteLLM) and tools, including MCP-compatible tools. It emphasizes multi-agent design, built-in streaming, flexible orchestration, and integrated developer/evaluation experiences. While Google also offers Genkit for more general AI-powered applications, ADK is specifically optimized for developers building intricate, collaborative multi-agent systems. 
(Links: Build and manage multi-system agents with Vertex AI, Announcing the Agent2Agent Protocol (A2A), Introducing Google's Agent Development Kit (ADK), Anthropic's Model Context Protocol (MCP)) Amazon Web Services (AWS) offers Amazon Bedrock Agents for building generative AI applications that can perform tasks via API calls, and recently introduced Strands, an open-source AI agents SDK. (Links: Amazon Bedrock Agents, Introducing Strands - an open source AI agents SDK) Cloudflare, known for its network infrastructure, provides Workers AI Agents, allowing developers to deploy AI agents on its global network. (Links: Cloudflare Workers AI Agents, Build AI Agents on Cloudflare) OpenAI, the company behind models like GPT, has also released an Agents SDK to make it easier for developers to create applications where their models can reason and act on tasks. (Links: OpenAI Agents SDK (GitHub), New tools for building agents) NVIDIA, a key player in AI hardware and software, has also entered this space with NVIDIA NIM™ Agent Blueprints. These are described as a catalog of pretrained, customizable AI workflows designed to help enterprises build and deploy generative AI applications for common use cases like customer service, retrieval-augmented generation (RAG) from PDFs, and drug discovery. The blueprints include sample applications, reference code, and deployment tools, and are part of the broader NVIDIA AI Enterprise platform which includes NIM microservices and the NeMo framework. NVIDIA is emphasizing a full-stack approach and a strong partner ecosystem to help enterprises operationalize their AI applications. (Link: NVIDIA and Global Partners Launch NIM Agent Blueprints) This quick tour shows that major cloud and AI platform providers are now seriously investing in agent frameworks, meaning we'll likely see more tools and services for building agents directly on these platforms.
All these developments bring me to how I'm thinking about this for my own work and, quite frankly, for my job security and consulting focus. It’s not about trying to pick the "best" framework in isolation. For me, it's about recognizing a clear industry trend and positioning myself strategically. We've seen this pattern in the past. Think about mobile development, where specializing in iOS or Android became key paths because of their market dominance. Similarly, in cloud computing, many professionals focused on AWS, Azure, or Google Cloud as enterprises adopted these platforms. Providers who own the infrastructure often have a significant advantage in getting their related tools adopted. I see a similar dynamic unfolding with AI agents. Companies like Microsoft, Google, and AWS are actively bundling agentic capabilities and frameworks into their cloud offerings, encouraging their existing customers to build and deploy agents using their integrated tools. This often leads to a degree of "vendor lock-in."
My prediction is that many enterprises will follow this integrated path, leaning towards the agent frameworks offered by their primary cloud provider. So, my personal strategy is to become proficient in understanding and using these Big Tech agent frameworks. This isn’t because I believe they will always be technically superior to every independent framework; some smaller, specialized frameworks might be more innovative or better suited for specific niche tasks. However, I believe that a deep understanding of the offerings from Azure, Google Cloud, and AWS will be highly valuable because a large portion of enterprise AI development will likely occur there. It’s about anticipating where the industry is heading. This focus helps me in my consulting work, enabling me to guide businesses already invested in these ecosystems, and it helps ensure my skills remain relevant in this rapidly evolving field. I'll certainly keep an eye on the broader landscape of frameworks, but the Big Tech platforms are a central part of my learning agenda.
So, what’s the final word on building AI agents: should you code it all yourself, or use a framework? As we've explored, there's no single answer. The experts are clear that the trend is towards "compound AI systems," which are systems with multiple parts working together, because they generally provide better, more controlled results. The real question is how you decide to build your part of that system, and how much "agency" or control you give to the LLM. It's helpful to think of LLM agency as existing on a spectrum. On one end, your code is firmly in charge, calling the LLM like any other service (Anthropic refers to these as "Workflows"). On the other end, the LLM dynamically directs its own actions, deciding what tools to use and when (which Anthropic calls "Agents"). Many practical solutions fall somewhere in between these two extremes.
Starting simple is almost always the best advice. Try direct LLM API calls first. As experts from Anthropic and Hugging Face both suggest, many useful patterns don't require complex frameworks. Only add more layers, such as increased LLM-driven agency or a framework, when it clearly improves your outcome or when the complexity of the task truly demands it. For instance, once you get into an LLM calling external tools or running multi-step loops, you’ll naturally find yourself needing things like output parsers, memory management, and consistent prompting. It's at that point that frameworks start to offer real value by providing these essential building blocks.
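To see why something as mundane as an output parser earns its keep, here is a defensive parser of the kind frameworks bundle. Models often wrap JSON in prose or code fences, so a naive `json.loads` on the raw reply breaks; this sketch (my own, not a particular library's implementation) extracts the first JSON object instead.

```python
import json
import re

def parse_json_reply(reply: str) -> dict:
    """Extract the first JSON object embedded in a model reply, tolerating
    surrounding prose and markdown code fences."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model reply")
    return json.loads(match.group(0))

# A typical chatty model reply wrapping the JSON we actually wanted:
raw = 'Sure! Here is the result:\n```json\n{"tool": "search", "query": "refund policy"}\n```'
parsed = parse_json_reply(raw)
```

You can write this in ten minutes, but the tenth time you write it (plus retries, schema validation, and memory handling) is when a framework starts to pay for itself.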
Frameworks, including the large platforms from major tech companies, can speed up the process of building these more involved systems. They offer structure and handle many common needs. For me, learning the Big Tech offerings is a strategic choice for enterprise work, aligning with where I see these compound systems being built at scale. But, as many caution, it’s important to be mindful of over-abstraction. Always try to understand what any framework is doing under the hood, and don't hesitate to use simpler approaches or even pure code when it's more effective for your specific situation.
Ultimately, building effective AI isn't about using the fanciest or most talked-about tools, but about "clever engineering." It’s about making smart choices based on your specific needs. Consider questions like: What are you trying to build, and where does it sit on the spectrum of code-driven to LLM-driven? What skills do you and your team possess? How quickly do you need to move? And what are your requirements for performance, cost, and latency? The AI world is evolving at breakneck speed. Good judgment, which includes understanding your needs, the available tools, and the underlying principles, is your most valuable asset. Keep learning, keep experimenting, and always focus on building the right system for your needs.