Setting Up AWS WAF With Nginx Ingress
This article highlights one way to set up AWS WAF while keeping Nginx Ingress, or whichever ingress controller you already run, without using the AWS ALB Ingress Controller.
There are two common ways to configure a WAF on a Kubernetes cluster:
- Using ModSecurity WAF (with supported ingress controller)
- Using AWS WAF
Why use AWS WAF on your cluster?
AWS WAF gives you a central, dynamic way to manage your web firewall configuration. It filters and monitors HTTP(S) traffic before it reaches your cluster, operating at the application layer (layer 7) of the OSI model.
Why set up AWS WAF with Nginx Ingress Controller and not AWS ALB Ingress controller?
AWS WAF integration can be set up easily with the AWS ALB Ingress Controller. However, for teams using other ingress controllers or reverse proxies across different setups (Kubernetes or not), there is a learning curve in becoming familiar with the ALB Ingress Controller's configuration.
Keeping your current tooling while adopting ‘new’ features such as AWS WAF is crucial to reducing disruption.
ModSecurity: with most ingress controllers, Nginx included, you can still easily enable the ModSecurity WAF, which ships with the OWASP Core Rule Set pre-packaged. That said, AWS WAF has its advantages too.
What changes with this AWS WAF integration? (Only one thing)
SSL certificate management: since the WAF also needs to filter HTTPS traffic, SSL termination has to happen at the load balancer or reverse proxy stage.
In this setup, SSL termination will be done at the load balancer, and the certificates will be managed by AWS Certificate Manager.
What do you need for this setup to work?
You need a running EKS cluster in your AWS account, with one or more node groups attached to it.
The following Infrastructure Setup instructions use Terraform; if you are not familiar with it, follow the AWS console instructions here and then skip to Nginx Ingress Controller Setup.
Infrastructure Setup
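If you are starting from an empty Terraform workspace, a minimal provider setup might look like this (the region and version constraint below are assumptions, not requirements):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1" # pick the region your EKS cluster runs in
}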
- Set up an application load balancer and its security group (allow ports 80 and 443 on the load balancer).
# Security group for the ALB: allow inbound HTTP/HTTPS from anywhere
resource "aws_security_group" "staging-alb-security-group" {
  name        = "http-https"
  description = "Allow http(s) inbound traffic"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Set up the application load balancer
resource "aws_alb" "staging-alb" {
  name                       = "staging-alb"
  internal                   = false
  load_balancer_type         = "application"
  security_groups            = [aws_security_group.staging-alb-security-group.id]
  subnets                    = var.subnet_grps
  enable_deletion_protection = false
  tags                       = var.tags
}
This application load balancer is where the AWS WAF will be attached and where the certificate will be installed.
- Set up the HTTPS target group. We only need one target group, the HTTPS one, so that traffic is re-encrypted after SSL termination on the AWS ALB (zero trust). Even though all traffic will be redirected to port 443, port 80 still needs to be open on all instances for health checks; running health checks over port 443 would add overhead because of the additional SSL handshake.
resource "aws_alb_target_group" "staging-alb-target-grp-https" {
name = "staging-alb-target-grp-https"
port = 443
protocol = "HTTPS"
health_check {
path = "/healthz"
protocol = "HTTP"
port = "80"
}
target_type = "instance"
vpc_id = var.vpc_id
tags = var.tags
}
- Set up the application load balancer HTTP and HTTPS listeners. The HTTPS listener needs an SSL certificate. The HTTP listener redirects traffic to the HTTPS listener, which then forwards it to the target group created in step 2.
resource "aws_alb_listener" "staging-alb-listener-http" {
load_balancer_arn = aws_alb.staging-alb.arn
port = "80"
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
status_code = "HTTP_301"
protocol = "HTTPS"
port = "443"
}
}
}
resource "aws_alb_listener" "staging-alb-listener-https" {
load_balancer_arn = aws_alb.staging-alb.arn
port = "443"
protocol = "HTTPS"
certificate_arn = aws_acm_certificate.example-org.arn
default_action {
target_group_arn = aws_alb_target_group.staging-alb-target-grp-https.id
type = "forward"
}
}
# create certificate to be used by HTTPS listener
resource "aws_acm_certificate" "example-org" {
domain_name = "*.example.org"
validation_method = "DNS"
tags = var.tags
lifecycle {
create_before_destroy = true
}
}
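Since the certificate uses DNS validation, it will sit in “Pending validation” until the validation records exist. If your zone is hosted in Route 53, Terraform can create them too; a minimal sketch, assuming a hypothetical var.zone_id for the hosted zone:

# DNS validation records for the ACM certificate (assumes var.zone_id)
resource "aws_route53_record" "example-org-validation" {
  for_each = {
    for dvo in aws_acm_certificate.example-org.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = var.zone_id
}

# Wait until the certificate is actually issued
resource "aws_acm_certificate_validation" "example-org" {
  certificate_arn         = aws_acm_certificate.example-org.arn
  validation_record_fqdns = [for record in aws_route53_record.example-org-validation : record.fqdn]
}

Pointing the HTTPS listener's certificate_arn at aws_acm_certificate_validation.example-org.certificate_arn instead makes Terraform wait for issuance before creating the listener.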
- Set up AWS WAF and associate it with the load balancer.
# WAF: a web ACL with the AWS managed Common Rule Set
resource "aws_wafv2_web_acl" "staging-eks-web-acl" {
  name  = "staging-eks-web-acl"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "AWSManagedRulesCommonRuleSet"
    priority = 1

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name                = "staging-eks-metric"
      sampled_requests_enabled   = false
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "staging-eks-metric"
    sampled_requests_enabled   = false
  }
}
# Associate the web ACL with the ALB
resource "aws_wafv2_web_acl_association" "staging-eks-web-acl-assoc" {
  resource_arn = aws_alb.staging-alb.arn
  web_acl_arn  = aws_wafv2_web_acl.staging-eks-web-acl.arn
}
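The managed Common Rule Set is a solid baseline, and the web ACL can be extended with further rules. As a sketch (the limit of 2,000 requests per 5-minute window per source IP is an arbitrary example), a rate-based rule block that could be added inside aws_wafv2_web_acl.staging-eks-web-acl:

rule {
  name     = "rate-limit-per-ip"
  priority = 2

  # Rate-based rules take an action, not an override_action
  action {
    block {}
  }

  statement {
    rate_based_statement {
      limit              = 2000 # requests per 5-minute window, per source IP
      aggregate_key_type = "IP"
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name                = "rate-limit-per-ip"
    sampled_requests_enabled   = false
  }
}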
- Finally, attach the node group's Auto Scaling group to the target group.
resource "aws_autoscaling_attachment" "staging-alb-asg-attach-https" {
autoscaling_group_name = data.aws_autoscaling_group.staging-asg.id
lb_target_group_arn = aws_alb_target_group.staging-alb-target-grp-https.arn
}
data "aws_autoscaling_group" "staging-asg" {
name = var.autoscaling_grp_name
}
Once step 5 is complete, the target group created in step 2 will be populated with the instances belonging to the staging-asg Auto Scaling group.
At this point the infrastructure setup is done; the next step is to configure your ingress controller to work with the ‘new’ configuration.
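To have the load balancer's DNS name handy for smoke tests or DNS records, a small output helps (a sketch; the output name is arbitrary):

output "alb_dns_name" {
  description = "Public DNS name of the staging ALB"
  value       = aws_alb.staging-alb.dns_name
}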
Nginx Ingress Controller Setup
The following changes are needed in your Nginx ingress controller configuration.
- Deploy the ingress-nginx controller as a DaemonSet instead of a Deployment. Reasons: (a) health checks are performed against every target instance, so port 80 must be open on each node for the probes coming from the AWS target group (see step 2 of the Infrastructure Setup); (b) traffic can be forwarded to any of the target instances, so the controller must be running on every node to receive it.
- Expose Nginx ports 80 and 443 on the nodes by setting hostNetwork to true.
- The Kubernetes Service type is no longer LoadBalancer; it should be ClusterIP.
- The DNS policy has to be ClusterFirstWithHostNet because of hostNetwork: true.
Below is the Helm values.yaml file; a sketch for applying it follows the file.
---
controller:
  # Enable to have more than one nginx ingress controller on a cluster (1)
  # electionID: "ingress-controller-aws-waf"
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  # Enable to get the benefit of both AWS and ModSecurity WAF
  # config:
  #   enable-modsecurity: "true"
  #   enable-owasp-modsecurity-crs: "true"
  service:
    type: ClusterIP
  # Enable to have more than one nginx ingress controller on a cluster (2)
  # ingressClassResource:
  #   default: false
  #   name: nginx-aws-waf
  #   controllerValue: k8s.io/nginx-aws-waf
  # ingressClass: nginx-aws-waf
  allowSnippetAnnotations: true
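You can apply these values with helm upgrade --install -f values.yaml. If you manage cluster add-ons from Terraform as well, here is a sketch using the Helm provider's helm_release (release name, namespace, and file path are assumptions):

resource "helm_release" "ingress-nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  # Reuse the values.yaml shown above
  values = [file("${path.module}/values.yaml")]
}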
Application Setup
To use the Nginx controller above, create an Ingress in your application(s) like the following.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/enable-modsecurity: "false"
    # nginx.ingress.kubernetes.io/modsecurity-snippet: |
    #   SecRuleEngine On
  labels:
    app.kubernetes.io/instance: example-ingress-aws
  name: example-ingress
  namespace: default
spec:
  rules:
    - host: site.example.org
      http:
        paths:
          - backend:
              service:
                name: example-svc
                port:
                  number: 8080
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - site.example.org
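One last piece: site.example.org has to resolve to the ALB. If the zone lives in Route 53, an alias record takes care of it (a sketch, reusing the assumed var.zone_id):

resource "aws_route53_record" "site" {
  zone_id = var.zone_id # hosted zone for example.org (assumption)
  name    = "site.example.org"
  type    = "A"

  # Alias the record straight to the ALB
  alias {
    name                   = aws_alb.staging-alb.dns_name
    zone_id                = aws_alb.staging-alb.zone_id
    evaluate_target_health = true
  }
}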
That's it!