What is rate limiting?

Rate limiting is a technique for controlling the rate of requests to your application. Applying rate limits ensures that at least a subset of your users will be able to access your service, and it can save you from Denial-of-Service (DoS) and resource-starvation problems. Without rate limits, a burst of traffic could bring down the whole service, making it unavailable for everybody. I strongly advise you to apply rate limits in production environments, but it is also very common to rate limit QA and testing environments.

Using Traefik Proxy as a Kubernetes Ingress

Traefik Proxy supports rate limiting natively via its RateLimit middleware, so applying rate limits to Kubernetes applications is a straightforward process if you already use Traefik Proxy as an ingress controller. Traefik Proxy works with multiple platforms but, in our case, we will assume that our application is deployed to Kubernetes, and more specifically to two different namespaces: one for production and one for QA.

Our example application is deployed in two environments: production and QA. The production environment is deployed on a Kubernetes cluster ready to serve real traffic, including associated services such as databases, queues, etc. The QA environment is strictly used for feature testing and has far fewer resources than the production one.

We, therefore, decide the following limits:

Production: burst traffic of 50 rps, average of 30 rps.

Let's see how we can accomplish this with Traefik Proxy.
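As a sketch of how the production limits above could be declared with Traefik's RateLimit middleware, the manifest below defines a Middleware custom resource (the resource name and namespace are assumptions for illustration; older Traefik releases use the `traefik.containo.us/v1alpha1` API group instead):

```yaml
# Sketch: a Traefik Middleware applying the production limits above,
# i.e. an average of 30 requests per second with bursts of up to 50.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: production-ratelimit   # assumed name
  namespace: production        # assumed namespace
spec:
  rateLimit:
    average: 30
    burst: 50
```

On its own this resource does nothing; it takes effect only once it is referenced from a route, for example in the `middlewares` list of a Traefik IngressRoute, or from a standard Ingress via the `traefik.ingress.kubernetes.io/router.middlewares` annotation.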