Emerging agentic adoption models I saw at KubeCon Amsterdam 2026
Kubernetes · KubeCon · DevOps · AI · AI Agents

Pasquale Toscano · 9 min read

I just got back from KubeCon Europe 2026 in Amsterdam, and I need to process what I saw. I’ve been attending European KubeCons since Berlin 2017, and every edition has its dominant themes. Service mesh, platform engineering, eBPF. Each one reflecting where the community’s energy is heading. This year, AI agents were clearly one of the main topics of the conference.

Agents had a strong presence across keynotes, breakouts, workshops, and sponsor booths. There was even a dedicated co-located event called Agentics Day. Alongside the usual cloud-native conversations around security, observability, and platform engineering, agents carved out a significant space of their own.

Agents as a Main Theme

Let me be upfront. There is still a lot of hype in the air. AI is in the middle of its hype cycle, and this year the hype machine is reaching cloud-native engineers with full force. Many booths had an agent story. Several keynotes referenced autonomous systems. The word “agentic” showed up on a lot of slides.

But underneath the hype, real patterns are emerging. After attending dozens of sessions, walking the expo floor, and having countless hallway conversations, I started seeing a clear structure in how companies are approaching agent adoption. Three distinct models kept appearing, and I think this framework is useful for anyone trying to make sense of where the industry is heading.

Model 1: The Agent as UX Feature

The first model is the most straightforward. Companies that already have a successful product are adding an agent as a new interface to consume it. The agent doesn’t replace the product. It becomes a conversational front door to capabilities that already exist.

Think Datadog, incident.io, and others in this category. You already use their platform. You already trust it. Now there’s an agent that lets you interact with the same data and workflows through natural language. “Show me the latency spike on the payments service in the last hour.” “Create an incident for the API gateway and page the on-call team.” The product is the same. The interface is new.

This is, in my view, the best fit for agent adoption today. The value proposition is immediately obvious to the user: you’re not asking them to trust an agent with something new, you’re giving them a faster way to do something they already do. The product already delivers value. The agent just makes it more accessible. If you’re evaluating where agents add real value right now, this is where I’d look first.
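
To make the pattern concrete, here is a minimal sketch of "agent as UX feature": the agent is just a thin tool layer that an LLM can invoke over APIs the product already exposes. Every class, function, and field name below is invented for illustration; none of it comes from Datadog, incident.io, or any real product.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    service: str
    severity: str

class ExistingPlatformAPI:
    """Stand-in for a product's existing, trusted API (all names invented)."""

    def query_latency(self, service: str, window: str) -> dict:
        # A real implementation would query the product's metrics backend.
        return {"service": service, "window": window, "p99_ms": 840}

    def create_incident(self, service: str, severity: str) -> Incident:
        return Incident(service=service, severity=severity)

def build_tools(api: ExistingPlatformAPI) -> dict:
    # The tool registry exposed to the LLM via function-calling. Note that
    # the agent adds no new capability, only a conversational route to
    # capabilities that already exist.
    return {
        "query_latency": api.query_latency,
        "create_incident": api.create_incident,
    }

def handle_tool_call(tools: dict, name: str, **kwargs):
    # In a real system the LLM chooses the tool and its arguments from the
    # user's natural-language request; here we dispatch directly to keep
    # the sketch self-contained.
    return tools[name](**kwargs)

if __name__ == "__main__":
    tools = build_tools(ExistingPlatformAPI())
    print(handle_tool_call(tools, "query_latency",
                           service="payments", window="1h"))
```

The point of the sketch is how little is new: the hard parts (the data, the workflows, the trust) were built long before the agent arrived.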

Model 2: The Agent as Platform Use Case

The second model is more ambitious. These are companies providing platforms and infrastructure for organizations to build and run their own agents in production. The pitch: “Here’s the runtime, the orchestration layer, the guardrails, the observability. You bring the agent logic.”

SUSE, VMware, Red Hat, Diagrid, Solo.io. All of them are investing heavily in this space. They’re building the plumbing that makes it possible to deploy, manage, secure, and observe agents at enterprise scale. Kubernetes itself is being positioned as the control plane for agentic workloads.

This model is a real challenge, and I want to be honest about why. These platforms only make sense if large organizations actually find real, production-worthy use cases for their agents. The platform is only as good as what people build on it. Right now, many organizations are still in the experimentation phase, running proofs of concept, not production workloads.

That said, I believe this model has a strong future. The infrastructure needs are real. If agents are going to run in production at scale, and I think they will, someone needs to provide the runtime, the security boundaries, the observability, and the lifecycle management. The companies building this layer today are making a bet that the use cases will materialize, and I think that bet is sound.
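
What "Kubernetes as the control plane for agentic workloads" might look like in practice: an agent deployed like any other workload, with RBAC as the guardrail layer and standard resource limits. This manifest is purely illustrative; the names, image, and endpoint are all invented and don’t correspond to any vendor’s actual product.

```yaml
# Illustrative sketch only: every name, image, and endpoint is hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ops-agent
  labels:
    app: ops-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ops-agent
  template:
    metadata:
      labels:
        app: ops-agent
    spec:
      # Scoped RBAC via a dedicated service account: one of the
      # guardrails the platform layer is supposed to provide.
      serviceAccountName: ops-agent
      containers:
        - name: agent
          image: example.com/ops-agent:0.1.0   # hypothetical image
          env:
            - name: LLM_ENDPOINT
              # Invented address for an in-cluster model gateway.
              value: "http://llm-gateway.default.svc:8080"
          resources:
            limits:
              memory: "512Mi"
              cpu: "500m"
```

The appeal of this model is exactly that nothing here is exotic: deployment, identity, networking, and observability all reuse machinery platform teams already run.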

Model 3: The Agent as Product

The third model is the most ambitious and the most challenging. These are companies where the product itself is an agent. Specifically, an agent designed for platform teams, SREs, and DevOps engineers. The agent doesn’t augment an existing workflow. It is the workflow.

Companies like BlueBricks, Rootly, and DrDroid fall into this category. Their vision is an autonomous agent that handles operational tasks like incident response, troubleshooting, and remediation with minimal human intervention.

I’ll be direct: this is the model that, as of today, hasn’t moved beyond the demo stage. The visions are compelling. The demos are impressive. But the gap between a demo and a production system that a team trusts to operate their infrastructure is enormous.
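
One way to see why the gap is so wide: in a demo, the agent is essentially just the "propose an action" step. Everything wrapped around it below (blast-radius limits, approval gates, refusing to execute without both) is the trust layer production teams demand, and it's where most of the engineering lives. This is a hypothetical sketch; all names and logic are invented, not any vendor’s actual design.

```python
from enum import Enum

class Verdict(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"

def propose_remediation(alert: dict) -> dict:
    """Demo-stage logic: map an alert to a candidate action."""
    if alert["symptom"] == "crashloop":
        return {"action": "rollback", "target": alert["service"]}
    return {"action": "escalate", "target": alert["service"]}

def within_blast_radius(action: dict, protected: set) -> bool:
    # Guardrail: the agent may never touch protected services.
    return action["target"] not in protected

def run_agent(alert: dict, protected: set, approver) -> dict:
    """The production wrapper: propose, then gate before executing."""
    action = propose_remediation(alert)
    if not within_blast_radius(action, protected):
        return {"status": "blocked", "action": action}
    if approver(action) is not Verdict.APPROVED:
        return {"status": "rejected", "action": action}
    # Only at this point would a real system actually apply the change.
    return {"status": "executed", "action": action}

if __name__ == "__main__":
    alert = {"service": "api-gateway", "symptom": "crashloop"}
    print(run_agent(alert, protected={"payments"},
                    approver=lambda action: Verdict.APPROVED))
```

A demo shows the first function working on a happy path; a production system has to make the whole wrapper trustworthy, auditable, and correct under failure.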

Here’s something that struck me at the conference: many of the SRE agents presented on stage were less functional than the agents we have developed and already run in production at AstroKube. We’ve been building these agents, testing them with real customer use cases, and improving them continuously based on what we learn internally and from our customers. Seeing where the rest of the industry stands was a surprising realization. The tooling is promising, but the bar for production-readiness in operations is high, and most products haven’t crossed it yet.

This doesn’t mean Model 3 won’t work. It means the challenge is harder than it looks, and the teams that crack it will have built something genuinely transformative.

The Open Source Side

One of the most encouraging signals at KubeCon was the growing open-source ecosystem around agents in cloud-native environments. This isn’t just vendor-driven. The community and the foundations are taking it seriously.

A few projects worth watching:

  • AgentGateway is a Linux Foundation project providing an agentic proxy for AI agents and MCP servers. It handles security, observability, and governance for agent-to-LLM, agent-to-tool, and agent-to-agent communication, with native Kubernetes support through the Gateway API.
  • Kagent is a CNCF Sandbox project created by Solo.io. It’s an open-source framework for building AI agents in Kubernetes, with pre-built tools for CNCF projects like Argo, Helm, Istio, and Prometheus.
  • llm-d was contributed to the CNCF as a Sandbox project by IBM, Red Hat, and Google. It’s a Kubernetes-native framework for scalable, vendor-neutral distributed LLM inference.
  • HolmesGPT is now a CNCF Sandbox project that brings agentic troubleshooting into cloud-native tooling.
  • KAI Scheduler is a CNCF Sandbox project for high-performance AI workload scheduling on Kubernetes.

The fact that both the CNCF and the Linux Foundation are accepting agent-related projects into their governance structures is a strong signal. These foundations don’t adopt technologies on hype alone. There has to be real community demand and production potential. The ecosystem is forming, and it’s forming fast.

What I Loved About This KubeCon

Beyond the agent conversations, three things stood out.

First, the people. As always, the hallway track at KubeCon is where the real conversations happen. The cloud-native community remains one of the most open and generous in tech. People willing to share what they’ve learned, what failed, what surprised them. That hasn’t changed, and I hope it never does.

Second, the opportunity to see with my own eyes where the industry actually stands on the trending topics of the year. Reading blog posts and watching recordings is one thing. Being in the room, seeing the demos, hearing the unscripted Q&A. That gives you a much more honest picture of the maturity level.

Third, meeting open-source maintainers face to face. The people who build the tools we depend on every day are right there, approachable and eager to talk. Every KubeCon reminds me how much of this ecosystem runs on the dedication of maintainers, and how valuable it is to connect with them in person.

My Honest Take

AI is still in its hype moment. The industry is looking for its next wave of adoption, and this year cloud-native engineers are a key audience. That’s not inherently bad. Every transformative technology goes through a hype phase. But it means you need to be thoughtful about separating signal from noise.

Here’s my confidence ranking across the three models:

Model 1, Agent as UX Feature: This works today. The value is clear, the adoption path is natural, and the risk is low. If you’re a product company, adding an agent interface to your existing platform is the highest-ROI move you can make right now.

Model 2, Agent as Platform Use Case: This is a challenge with a promising future. The platforms are being built, the infrastructure is maturing, and the demand will follow as organizations move from experimentation to production. It’s a matter of when, not if.

Model 3, Agent as Product: This is the big bet. The industry hasn’t moved past the demo phase yet, and the distance from demo to production trust is significant. But the teams that solve this, that build agents platform teams genuinely rely on, will have created something that changes how we operate software.

Where We’re Headed

This is an exciting moment to be in this industry. The challenges are big, the uncertainty is real, and the potential is enormous. KubeCon Amsterdam 2026 made it clear that the cloud-native community is seriously engaged with agent technology. Not just experimenting with it, but building the foundations for it to work at scale.

At AstroKube, we have experience running AI models in production across many business cases. Now we’re experimenting with agents, both as a UX feature for our platform and as a product in its own right. We’re learning what works, what doesn’t, and where the real value lies.

If you’re navigating this space and want to compare notes, reach out. We’re all figuring this out together, and the conversations are as valuable as the technology.