
How AI helps match resources to real needs

Stop the OOMKills and cut your cloud bill. We show you how AI and KRR turn Kubernetes resource guessing into precise engineering.

Szymon Kieliński
5 min read

Moving beyond trial-and-error in Kubernetes resource management

Anyone who’s tuned Kubernetes requests and limits knows it feels more like guessing than engineering. New services launch without history, workloads spike at the worst times, and staging never really behaves like production. Too generous, and you’re burning cash. Too stingy, and you’re staring at OOMKills.

In this article, we’ll look at why resource estimation is so tricky, why traditional fixes fall short, and how pairing KRR (Kubernetes Resource Recommender) with Fabric turns raw data into practical, AI-backed recommendations. The result: more stable clusters, lower costs, and less time spent firefighting.


What is Kubernetes Resource Recommender (KRR)?

What is the Kubernetes Resource Recommender (KRR)? Simply put, it's an open-source command-line utility that provides precise, data-backed recommendations for your workloads' CPU and memory requests and limits.

KRR's method is pretty straightforward: it pulls historical usage data (usually from Prometheus), runs it through statistical models, and spits out recommendations. The output is in a YAML-ready format – resource values you can copy and paste right into your Kubernetes manifests.
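In practice, a scan is a single command. The sketch below assumes an in-cluster Prometheus and a namespace called `payments` – both placeholders – and exact flag names can vary between KRR versions, so check `krr --help` for your install:

```shell
# Scan the "payments" namespace with KRR's default "simple" strategy,
# reading usage history from Prometheus and emitting YAML-friendly
# recommendations. URL and namespace are illustrative placeholders.
krr simple \
  --prometheus-url http://prometheus.monitoring.svc:9090 \
  --namespace payments > recommendations.txt
```

From there, the suggested requests and limits can be reviewed and copied into your manifests.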

This is a big deal because KRR changes resource allocation from a frantic guessing game into a repeatable, data-driven process. It helps engineering teams finally answer that one recurring, budget-busting question: "What are the real CPU and memory needs for this workload?"


Why is resource estimation in Kubernetes so difficult (and costly)?

It’s not that engineers don’t care about optimization – it’s that the odds are stacked against us:

  • When you ship a brand-new service, you’re flying blind. With no historical data, you’re left guessing at values – and guessing wrong means either wasted resources or a crash waiting to happen
  • Workloads change over time – today’s “right” settings may be wrong next week
  • Different environments behave differently – dev, staging, and prod rarely look the same.

And when these factors don't line up, what happens? Overprovisioning silently drains your budget, one dollar at a time. Underprovisioning is way louder – it blows up right in your face with restarts and service failures.

Most of us try to solve this with better monitoring tools and constant manual tweaks. Sometimes you get lucky. More often, you don't. (On a related note: struggling to bring order to your sprints? You might want to check out our piece on Scrum without the chaos: a practical guide for developers.)


How does AI enhance Kubernetes Resource Recommender (KRR)?

While Kubernetes Resource Recommender (KRR) provides accurate recommendations, its output is still raw data. Numbers alone don’t tell you whether a specific workload can safely be downscaled, or how a change might affect performance in production.

This is where an AI assistant such as Fabric adds value. Instead of just handing you CPU and memory limits, Fabric interprets KRR’s output, explains trade-offs, and adapts recommendations for different environments.

  • Why this matters – A staging service may survive with leaner requests, but production workloads need more conservative buffers
  • Better than scripts – AI can explain the “why” behind a recommendation, not just the “what”
  • Cost vs. stability – Fabric highlights how scaling down saves money without putting service reliability at risk.

Together, KRR and Fabric move resource management from plain automation to context-aware optimization.


How to use KRR in 6 steps

Adopting KRR in your Kubernetes workflow doesn’t require a complete overhaul. Here’s a simple process:

  1. Collect metrics with Prometheus – ensure you have workload CPU and memory history
  2. Run KRR – generate resource recommendations for your deployments
  3. Export YAML – KRR outputs requests and limits ready to paste into manifests
  4. Use Fabric for context – AI interprets results and adjusts them for staging vs. production
  5. Apply changes with kubectl or CI/CD – integrate into your pipeline for repeatability
  6. Monitor results – track utilization and stability in Grafana or Azure Monitor.
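The steps above can be sketched as a minimal pipeline. The deployment name, namespace, and resource values below are placeholders – in a real rollout they come from a reviewed KRR recommendation, not from this example:

```shell
# Steps 2-3: generate recommendations for the namespace
# (flag names may differ by KRR version; see `krr --help`)
krr simple --namespace payments > recommendations.txt

# Step 5: apply a reviewed recommendation to one deployment.
# `kubectl set resources` patches requests/limits in place,
# which triggers a rolling restart of the pods.
kubectl -n payments set resources deployment/checkout-api \
  --requests=cpu=150m,memory=256Mi \
  --limits=memory=512Mi

# Step 6: confirm the rollout, then watch utilization in your
# dashboards over the following days
kubectl -n payments rollout status deployment/checkout-api
```

Wiring the same commands into CI/CD makes the process repeatable instead of a one-off tuning session.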

Real results from using KRR and Fabric

Teams using Kubernetes Resource Recommender together with AI assistants have reported:

  • Fewer OOMKills – stability improved by up to 70%
  • Balanced utilization – CPU and memory hovering around 60–80%, the sweet spot for efficiency
  • Faster tuning – manual resource adjustments went from hours to minutes
  • Lower costs – one production service reduced memory waste by 40%, saving over $1,000/month.

These aren’t isolated wins. They highlight the value of moving from assumptions to data-driven resource management.


Best practices for Kubernetes resource optimization

Getting your Kubernetes resources right isn't something you check off a list and forget about. It's a continuous process. To get the most mileage out of KRR and those AI-driven insights, keep these points in mind:

  • Get enough data – You need a solid baseline. Don't rush it. Aim for at least 1-2 weeks of Prometheus metrics. Garbage in, garbage out, right?

  • Pair with HPA – The Horizontal Pod Autoscaler works best when your resource requests and limits are accurate. These tools complement each other perfectly.

  • Always review AI recommendations – The AI is powerful, but context is king. A human still needs to confirm that the recommendation makes sense for your business needs and current events.

  • Monitor continuously – Your workloads are always changing, so your resource allocations have to change with them. Set up alerts for deviations.
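The HPA pairing above is worth spelling out: CPU-based autoscaling targets a percentage of the *request*, so an inaccurate request skews every scaling decision. A minimal sketch, with illustrative names and thresholds:

```shell
# Scale between 2 and 10 replicas, targeting 70% average CPU
# utilization – 70% of the CPU *request*, which is why accurate,
# KRR-informed requests make this threshold meaningful.
kubectl -n payments autoscale deployment/checkout-api \
  --cpu-percent=70 --min=2 --max=10
```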


AI as a DevOps co-pilot in Kubernetes

Kubernetes resource tuning has long been a mix of monitoring, intuition, and trial-and-error. With Kubernetes Resource Recommender (KRR) and AI assistants like Fabric, DevOps teams finally have tools that replace guesswork with data. The outcome? Stable clusters, lower cloud costs, and more time for engineers to focus on innovation instead of firefighting.

AI won’t replace Kubernetes engineers – it acts as a co-pilot, helping them make better, faster decisions. As workloads grow more dynamic, this shift from assumptions to automation will only become more critical.

At Kellton Europe, we help organizations modernize their Kubernetes environments with AI-driven optimization practices. Talk to our experts and let’s make your clusters leaner, faster, and more reliable.

FAQ

  • What is Kubernetes Resource Recommender (KRR)?

    KRR is a CLI tool that analyzes workload usage metrics and suggests optimized CPU and memory limits, turning resource allocation from guesswork into a data-driven process.
  • How does AI improve resource estimation in Kubernetes?

    AI tools like Fabric interpret KRR’s raw data, explaining trade-offs and recommending environment-specific settings – balancing cost and performance intelligently.
  • Why is manual resource tuning inefficient?

    Because workloads change over time. Manual adjustments lead to overprovisioning (wasted cost) or underprovisioning (instability). AI-based tools adapt automatically.
  • When should you use KRR and AI for optimization?

    When scaling clusters, debugging OOMKills, or improving cost efficiency. It’s especially valuable when managing dynamic workloads across staging and production environments.

Szymon Kieliński

DevOps Engineer

Szymon Kieliński is a DevOps Engineer who enjoys working with Kubernetes, GitOps, and Infrastructure as Code, focusing on automation and open-source solutions that simplify cloud operations.


