Most teams running Nginx in front of their app are using 10% of what it can do. An ALB covers that 10% natively.
A common pattern in AWS: traffic hits an Application Load Balancer, which forwards to an EC2 instance or container running Nginx, which reverse-proxies to your actual application. The ALB handles the L7 load balancing. Nginx handles... well, what exactly?
For a lot of teams, the answer is: routing, SSL termination, and maybe a redirect or two. Things the ALB already does. Nginx is sitting in the middle adding latency, operational overhead, and another thing to patch — without doing anything the ALB can't handle on its own.
That doesn't mean Nginx is never needed. It absolutely is in some cases. But the line between "need Nginx" and "ALB is enough" is worth understanding clearly, because removing a layer from your stack is one of the most impactful simplifications you can make.
AWS ALBs have quietly accumulated features over the years that cover most of what teams historically used Nginx for. Here's what you get without running a single reverse proxy:
ALB listener rules can route requests based on the URL path, hostname, HTTP headers, query string parameters, and source IP. You can send /api/* to one target group and /static/* to another. You can route api.example.com to a different backend than app.example.com. For most microservice routing needs, this is sufficient.
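A host-header rule looks like this in CloudFormation — a sketch assuming a listener named `HttpsListener` and a target group named `ApiTargetGroup` already exist in the template:

```yaml
ApiHostRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 10          # lower number = evaluated first
    Actions:
      - Type: forward
        TargetGroupArn: !Ref ApiTargetGroup
    Conditions:
      - Field: host-header
        Values: ["api.example.com"]
```

Swap `host-header` for `path-pattern`, `http-header`, `query-string`, or `source-ip` to match on those attributes instead; a single rule can combine several conditions, all of which must match.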
ALBs integrate with AWS Certificate Manager. You get free, auto-renewing TLS certificates with zero configuration on your application servers. No certbot cron jobs, no Nginx SSL stanzas, no remembering to renew certificates. ACM handles it. Your app receives plain HTTP from the ALB on the internal network.
ALB health checks hit a configurable endpoint on your targets at a configurable interval. Unhealthy targets get pulled from rotation automatically. This replaces Nginx upstream health checking — which, for active checks, is an Nginx Plus feature; open-source Nginx only gets passive failure detection — and works out of the box with no configuration beyond specifying the health check path.
A single ALB listener rule can redirect all HTTP traffic to HTTPS. No Nginx return 301 block needed. You can also do domain redirects — sending www.example.com to example.com or vice versa — directly in the listener rules.
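A www-to-apex redirect, for example, is one listener rule — a sketch assuming the `HttpsListener` from the snippet later in this post:

```yaml
WwwRedirectRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 5
    Actions:
      - Type: redirect
        RedirectConfig:
          Host: example.com       # path and query string are preserved by default
          Protocol: HTTPS
          Port: "443"
          StatusCode: HTTP_301
    Conditions:
      - Field: host-header
        Values: ["www.example.com"]
```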
ALB can return a static response body with a configurable status code. This is useful for maintenance pages, custom 404s, or blocking specific paths. Instead of spinning up an Nginx config to serve a "we'll be right back" page, you add a listener rule that returns a fixed 503 with your maintenance HTML.
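That maintenance rule might look like this — a sketch; the rule name and the catch-all condition are illustrative. Give it the lowest priority number so it wins over your normal routing rules while it exists:

```yaml
MaintenanceRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 1               # evaluated before every other rule
    Actions:
      - Type: fixed-response
        FixedResponseConfig:
          StatusCode: "503"
          ContentType: text/html
          # Note: fixed-response bodies are capped at 1,024 characters
          MessageBody: "<html><body><h1>We'll be right back</h1></body></html>"
    Conditions:
      - Field: path-pattern
        Values: ["/*"]
```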
ALB supports cookie-based session stickiness at the target group level. If you just need "same user hits the same container," ALB's application cookie or duration-based stickiness works fine. It sets a cookie, and subsequent requests with that cookie go to the same target.
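Stickiness is a pair of target group attributes. A sketch, with the resource and attribute values as illustrative choices:

```yaml
StickyTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref Vpc
    Port: 8080
    Protocol: HTTP
    TargetGroupAttributes:
      - Key: stickiness.enabled
        Value: "true"
      - Key: stickiness.type
        Value: lb_cookie        # ALB-managed cookie; use app_cookie to follow your own
      - Key: stickiness.lb_cookie.duration_seconds
        Value: "86400"          # pin a client to a target for up to 24 hours
```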
ALB listener rules support forwarding to multiple target groups with configurable weights. This enables blue/green deployments and canary releases at the load balancer level. Send 95% of traffic to the current version and 5% to the new version, then shift the weight as you gain confidence. No Nginx upstream weighting required.
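Weighted forwarding uses `ForwardConfig` instead of a single target group ARN — a sketch assuming hypothetical `StableTargetGroup` and `CanaryTargetGroup` resources:

```yaml
CanaryRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 20
    Actions:
      - Type: forward
        ForwardConfig:
          TargetGroups:
            - TargetGroupArn: !Ref StableTargetGroup
              Weight: 95
            - TargetGroupArn: !Ref CanaryTargetGroup
              Weight: 5        # shift this upward as the new version proves itself
    Conditions:
      - Field: path-pattern
        Values: ["/*"]
```

Updating the weights is a template change, so the canary rollout itself is reviewed and version-controlled like everything else.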
This one is underappreciated. ALB can authenticate users directly via Amazon Cognito or any OIDC-compliant identity provider (Google, Okta, Auth0) before the request even reaches your application. The ALB handles the entire OAuth 2.0 flow — redirect to login, token exchange, session management — and forwards the authenticated user's claims to your app in HTTP headers. With Nginx, you'd need something like lua-resty-openidc or a sidecar like oauth2-proxy. ALB does it in a listener rule.
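An OIDC-protected path looks like this — a sketch using Google's real OIDC endpoints; the client ID/secret parameters and the `/admin/*` path are illustrative:

```yaml
AdminAuthRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 30
    Actions:
      - Type: authenticate-oidc
        Order: 1                # authenticate first...
        AuthenticateOidcConfig:
          Issuer: https://accounts.google.com
          AuthorizationEndpoint: https://accounts.google.com/o/oauth2/v2/auth
          TokenEndpoint: https://oauth2.googleapis.com/token
          UserInfoEndpoint: https://openidconnect.googleapis.com/v1/userinfo
          ClientId: !Ref OidcClientId
          ClientSecret: !Ref OidcClientSecret
      - Type: forward
        Order: 2                # ...then forward the authenticated request
        TargetGroupArn: !Ref AppTargetGroup
    Conditions:
      - Field: path-pattern
        Values: ["/admin/*"]
```

The ALB passes the user's identity to your app in `x-amzn-oidc-*` headers (including a signed JWT in `x-amzn-oidc-data`), so the application never touches the OAuth flow.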
ALB handles WebSocket connections natively — no special Nginx proxy_set_header Upgrade configuration needed. It also supports gRPC traffic end-to-end, including health checks over gRPC. If your stack uses either protocol, ALB handles it without extra proxy-layer configuration.
ALB access logs ship directly to S3 with no agents or log shippers to configure. Every request gets logged with latency, status code, target response time, and TLS cipher — all in a structured format ready for Athena queries. With Nginx, you're managing log rotation, configuring a log shipper (Fluent Bit, Filebeat, CloudWatch agent), and hoping the pipeline doesn't silently break.
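Enabling logging is a set of load balancer attributes — a sketch assuming a `LogBucket` resource; note the bucket also needs a policy granting the regional ELB log-delivery account write access:

```yaml
ALB:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Subnets:
      - !Ref PublicSubnetA
      - !Ref PublicSubnetB
    LoadBalancerAttributes:
      - Key: access_logs.s3.enabled
        Value: "true"
      - Key: access_logs.s3.bucket
        Value: !Ref LogBucket
      - Key: access_logs.s3.prefix
        Value: alb              # logs land under s3://<bucket>/alb/AWSLogs/...
```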
When you deploy a new version of your application, targets need to finish handling in-flight requests before being deregistered. ALB handles this natively with a configurable deregistration delay (default 300 seconds). Active connections are allowed to complete, and new requests go to healthy targets. With Nginx, you'd need to coordinate upstream changes and graceful reloads yourself — and getting this wrong drops requests during deploys.
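The delay is a single target group attribute — a sketch lowering it from the 300-second default, which is usually far longer than a typical request needs:

```yaml
AppTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref Vpc
    Port: 8080
    Protocol: HTTP
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: "60"   # drain in-flight requests for up to 60s before removal
```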
Here's a CloudFormation snippet that replaces what would typically be 30-40 lines of Nginx config:
```yaml
HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref ALB
    Port: 443
    Protocol: HTTPS
    SslPolicy: ELBSecurityPolicy-TLS13-1-2-2021-06
    Certificates:
      - CertificateArn: !Ref Certificate
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref AppTargetGroup

HttpRedirect:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref ALB
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect
        RedirectConfig:
          Protocol: HTTPS
          Port: "443"
          StatusCode: HTTP_301

ApiRoute:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener
    Priority: 100
    Actions:
      - Type: forward
        TargetGroupArn: !Ref ApiTargetGroup
    Conditions:
      - Field: path-pattern
        Values: ["/api/*"]
```
That's SSL termination, HTTP-to-HTTPS redirect, and path-based routing — all version-controlled, all reproducible, no Nginx process to manage. Need a maintenance page? Add a listener rule with a fixed-response action and toggle it with a CloudFormation condition.
ALB covers the common cases, but Nginx remains the right tool when your requirements go beyond basic routing and termination:
ALB doesn't cache responses. If you need to cache upstream responses at the reverse proxy layer to reduce load on your application, you need Nginx (or a CDN like CloudFront). This is a common pattern for APIs that serve the same response to many users — product catalogs, configuration endpoints, public data feeds.
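For comparison, this is the kind of caching Nginx gives you that ALB simply doesn't — a hedged sketch with illustrative paths, zone names, and an assumed `app_backend` upstream:

```nginx
# Define a cache: 10 MB of keys, up to 1 GB of cached bodies on disk
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    location /api/catalog {
        proxy_cache api_cache;
        proxy_cache_valid 200 5m;                       # cache 200s for 5 minutes
        proxy_cache_use_stale error timeout updating;   # serve stale if upstream struggles
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://app_backend;
    }
}
```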
ALB can add, remove, or modify a few specific headers, but Nginx gives you full control. If you need to rewrite X-Forwarded-* headers, add custom security headers conditionally, strip response headers from upstream, or transform headers based on request attributes, Nginx's proxy_set_header and add_header directives are far more flexible.
ALB redirects are simple: you can change the protocol, host, port, path, and query string. But you can't do regex-based URL rewriting, path segment manipulation, or conditional rewrites based on cookies or headers. If your application relies on Nginx's rewrite directive with capture groups and conditionals, ALB can't replace that.
ALB doesn't compress responses. If your application doesn't handle its own gzip/brotli compression and you're relying on Nginx's gzip on; to compress responses before they reach the client, removing Nginx means either adding compression to your app or putting CloudFront in front of the ALB.
ALB's sticky sessions are cookie-based only. If you need to route based on a query parameter (like a session ID in the URL), a custom header, a JWT claim, or anything other than an ALB-managed cookie, you need Nginx — the `sticky` directive in Nginx Plus, a `hash`-based upstream in open-source Nginx, or custom Lua logic. ALB's stickiness is simpler but less flexible — if "same user hits same container via cookie" is all you need, it works. Anything more granular requires Nginx.
ALB doesn't do rate limiting. If you need to throttle requests per IP, per user, or per endpoint, you either need Nginx's limit_req module, AWS WAF (additional cost), or application-level rate limiting.
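The Nginx version is a few lines — a sketch with an illustrative zone name and an assumed `app_backend` upstream:

```nginx
# Track clients by IP; allow a sustained 10 requests/second each
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location /api/ {
        # Absorb bursts of up to 20 extra requests, reject the rest with 503/429
        limit_req zone=per_ip burst=20 nodelay;
        proxy_pass http://app_backend;
    }
}
```

There is no ALB-native equivalent; the closest managed option is a rate-based rule in AWS WAF attached to the ALB, at additional cost.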
If you need to inspect, modify, or filter request or response bodies — injecting scripts, stripping content, transforming payloads — that's Nginx with Lua or OpenResty territory. ALB treats the body as an opaque pass-through.
This is the part that gets glossed over in "just use ALB" advice. ALBs are not free, and depending on your traffic profile, they can be meaningfully more expensive than Nginx on an EC2 instance you're already paying for.
An ALB charges in two dimensions:

- A fixed hourly charge for the load balancer itself — roughly $0.0225/hour in us-east-1, or about $16/month just for existing.
- LCU-hours (Load Balancer Capacity Units) — roughly $0.008 per LCU-hour, billed on whichever dimension is highest: new connections, active connections, processed bytes, or rule evaluations.

So a baseline ALB costs roughly $20-50/month before you add any features. For a production workload with multiple target groups, SSL, and moderate traffic, $30-40/month is typical.
Nginx open source is free. If it's running on an EC2 instance that already exists for your application, the incremental cost is effectively zero — you're just using CPU and memory you've already provisioned. Even if you run a dedicated t3.micro for Nginx, that's ~$7.60/month.
The ALB cost is worth paying when:

- You're running multiple instances or containers and need traffic distributed across them.
- Targets come and go — auto scaling, ECS deployments, spot instances — and you need automatic registration and health-based failover.
- You want ACM-managed certificates and zero TLS configuration on your servers.
- You need the availability of a managed, multi-AZ load balancer instead of a single Nginx box as a point of failure.
The cost is harder to justify when you have a single EC2 instance running one application. In that case, Nginx on the instance is essentially free and adding an ALB doubles your infrastructure cost for features you might not need.
There's a subtler but important difference between ALB and Nginx that goes beyond features and cost: how changes get made.
Nginx configuration lives on a server. In a well-run shop, that config is managed by Ansible, Chef, Puppet, or at minimum checked into version control and deployed through a pipeline. But the reality is that Nginx makes it very easy to SSH into a box, edit /etc/nginx/sites-enabled/app.conf, run nginx -t && nginx -s reload, and walk away. No commit, no review, no record of what changed.
This happens more than anyone admits. A quick fix at 2 AM during an incident. A redirect someone needed "just for today." A header tweak that never made it back to the config management repo. Over time, the running Nginx config drifts from what's in version control, and nobody notices until the server gets replaced and the new one doesn't behave the same way.
ALB configuration, by contrast, naturally lives in infrastructure-as-code. Whether you're using CloudFormation, Terraform, or the CDK, ALB listener rules are defined in templates that get committed, reviewed, and applied through a pipeline. You can make changes through the console, but it's clunky enough that most teams don't make a habit of it. The path of least resistance is the IaC path, which means changes are tracked, reviewable, and reproducible.
This isn't a technical limitation of Nginx — it's a human one. ALB's operational model nudges teams toward better practices by making the right thing the easy thing.
Nginx has a solid security track record, but it's still software you're running on your servers. When a CVE drops — and they do, a handful per year — you need to update the package, test the config, and roll it out to every instance. If you're running Nginx in containers, that means rebuilding and redeploying images. If you're running it on EC2 instances managed by Ansible, that's a playbook run across your fleet.
ALB is a managed service. AWS patches it. You don't get paged, you don't rebuild images, you don't coordinate rollouts. One less thing in your vulnerability management process.
ALB isn't without constraints. A few to be aware of:

- Listener rules are capped — 100 rules per listener by default (raisable via a quota increase), with limits on conditions and values per rule. A sprawling Nginx config may not translate one-to-one.
- The idle timeout defaults to 60 seconds. Long-polling clients or slow uploads may need it raised (it's configurable up to 4,000 seconds).
- ALBs don't have static IP addresses. If clients must allowlist IPs, you need an NLB in front or AWS Global Accelerator.
- Fixed-response bodies are limited to 1,024 characters, so maintenance pages have to stay small.
Here's a simple way to think about it:
| If you need... | ALB alone? |
|---|---|
| Path/host-based routing | Yes |
| SSL termination with auto-renewing certs | Yes |
| HTTP → HTTPS redirects | Yes |
| Domain redirects | Yes |
| Health checks & automatic failover | Yes |
| Cookie-based sticky sessions | Yes |
| Blue/green & canary deployments | Yes |
| Maintenance pages / fixed responses | Yes |
| Authentication (Cognito / OIDC) | Yes |
| WebSocket & gRPC | Yes |
| Access logging to S3 | Yes |
| Connection draining during deploys | Yes |
| Response caching | No — need Nginx or CloudFront |
| Header rewriting (complex) | No — need Nginx |
| Regex URL rewrites | No — need Nginx |
| Response compression | No — need Nginx or CloudFront |
| Sticky sessions by header/query param | No — need Nginx |
| Rate limiting | No — need Nginx or WAF |
| Request/response body transforms | No — need Nginx + Lua |
| Request buffering (slow client protection) | No — need Nginx |
If everything you need is in the "Yes" column, you can drop Nginx and let the ALB handle it. Your stack gets simpler, your deploys get simpler, and you have one less thing to patch and monitor.
If you need anything from the "No" column, keep Nginx — but consider whether you really need those features, or whether they're inherited config from a setup that predates your ALB.
If you're running containers on ECS or EKS, you almost certainly already have an ALB. It's how AWS routes traffic to your tasks or pods. If you're also running Nginx as a sidecar or init container in front of your application container, you have two proxies in series: ALB → Nginx → app.
This is the worst of both worlds. You're paying for the ALB, managing Nginx, and getting the downsides of each (ALB cost + Nginx operational overhead) while the two layers duplicate each other's work. Unless Nginx is doing something the ALB genuinely can't — caching, compression, complex rewrites — the sidecar Nginx is dead weight. Remove it and let the ALB talk directly to your application process.
An ALB is not a drop-in Nginx replacement. It's a managed load balancer that happens to cover the subset of Nginx features that most teams actually use. If your Nginx config is a handful of proxy_pass directives, some return 301 redirects, and an SSL block, the ALB can replace all of that with better availability, simpler operations, and infrastructure-as-code as the default workflow.
If your Nginx config has Lua blocks, caching rules, complex rewrites, or custom header logic, keep it. That's what Nginx is for.
The question isn't "is ALB better than Nginx." It's "am I using Nginx for things the ALB already does?" For a surprising number of teams, the answer is yes.
Published by Yaw Labs.