Obtain Source IP using NGINX Ingress Controller

Additional technical documentation

Version Supported

This document is based on Cloud Container Engine version 1.21.

More details about the differences between Shared and Dedicated LoadBalancers:

https://docs.prod-cloud-ocb.orange-business.com/usermanual/elb/elb_pro_0004.html

Introduction

In this document we explain how to preserve the client source IP address when using the NGINX Ingress Controller with the different types of Elastic LoadBalancer available on Flexible Engine.

You can find more information about the default CCE Ingress in the Help Center.

Using Shared Elastic LoadBalancer

First we will deploy our NGINX Ingress Controller and automatically provision a Shared ELB with the following values.yaml file:

controller:
  replicaCount: 1
  service:
    externalTrafficPolicy: Local
    annotations:
      kubernetes.io/elb.class: union
      kubernetes.io/elb.autocreate:
        '{
           "type": "public",
           "bandwidth_name": "cce-bandwidth-1551163379627",
           "bandwidth_chargemode": "traffic",
           "bandwidth_size": 5,
           "bandwidth_sharetype": "PER",
           "eip_type": "5_bgp",
           "name": "test"
         }'
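If you want to see which other keys the chart exposes before customizing it further, you can dump its default values (an optional step, assuming Helm 3 and access to the chart repository):

helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx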

Details about the parameters:

externalTrafficPolicy: Local

This parameter is important to avoid the source NAT that Kubernetes applies by default. You can find more details in the Kubernetes documentation:

https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer

kubernetes.io/elb.class: union

This parameter specifies which type of load balancer will be provisioned:

union = Shared Elastic LoadBalancer

performance = Dedicated Elastic LoadBalancer
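Once the chart is deployed (see the next section), you can quickly confirm that both parameters were picked up by the generated Service, for example:

kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
kubectl -n ingress-nginx describe svc ingress-nginx-controller | grep -i elb.class

The first command should print Local, and the second should show the kubernetes.io/elb.class annotation with the value you set (union or performance).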

Deploy with Helm

Once this is done, you can deploy the Helm chart with your customized values.yaml file:

helm upgrade -f values.yaml --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace

Wait up to a minute for the LoadBalancer to be created and the EIP to be attached.
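If you want to follow the process, you can watch the Service until the EXTERNAL-IP column is populated (press Ctrl+C to stop watching):

kubectl -n ingress-nginx get svc ingress-nginx-controller -w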

You can retrieve the IP address or FQDN with the following command:

kubectl get svc -n ingress-nginx

In the output, you can see that the EIP of the ELB is associated with the NGINX Ingress Controller:

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.247.234.164   10.0.21.169,90.84.177.26   80:31360/TCP,443:32340/TCP   16h
ingress-nginx-controller-admission   ClusterIP      10.247.198.139   <none>                     443/TCP                     16h

Edit Loadbalancer Listeners

In the Flexible Engine Console, select the Elastic LoadBalancer menu.

Edit it as follows:

  1. Edit your previously created ELB (test)
  2. Select Listeners
  3. Edit the NGINX Listeners (k8s_TCP_443)
  4. Select Obtain Client IP Address
  5. Edit the second Listener (k8s_TCP_80) as previously and also select Obtain Client IP Address

Using Dedicated Elastic LoadBalancer (beta)

First we will deploy our NGINX Ingress Controller and automatically provision a Dedicated ELB with the following values.yaml file:

controller:
  replicaCount: 1
  service:
    externalTrafficPolicy: Local
    annotations:
      kubernetes.io/elb.class: performance
      kubernetes.io/elb.autocreate:
        '{
           "type": "public",
           "bandwidth_name": "cce-bandwidth-1655142139284",
           "bandwidth_chargemode": "traffic",
           "bandwidth_size": 5,
           "bandwidth_sharetype": "PER",
           "eip_type": "5_bgp",
           "available_zone": ["eu-west-0a", "eu-west-0b"],
           "l4_flavor_name": "L4_flavor.elb.s1.small"
         }'

Details about the parameters:

externalTrafficPolicy: Local

This parameter is important to avoid the source NAT that Kubernetes applies by default. You can find more details in the Kubernetes documentation:

https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer

kubernetes.io/elb.class: performance

This parameter specifies which type of load balancer will be provisioned:

union = Shared Elastic LoadBalancer

performance = Dedicated Elastic LoadBalancer

Deploy with Helm

Once this is done, you can deploy the Helm chart with your customized values.yaml file:

helm upgrade -f values.yaml --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace

Wait up to a minute for the LoadBalancer to be created and the EIP to be attached.

You can retrieve the IP address or FQDN with the following command:

kubectl get svc -n ingress-nginx

In the output, you can see that the EIP of the ELB is associated with the NGINX Ingress Controller:

NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.247.234.164   10.0.21.169,90.84.177.26   80:31360/TCP,443:32340/TCP   16h
ingress-nginx-controller-admission   ClusterIP      10.247.198.139   <none>                     443/TCP                     16h

Loadbalancer Listeners

Obtain Client IP Address is activated by default on a Dedicated LoadBalancer, so unlike with a Shared ELB (see above) you do not need to change anything here.

Testing

We can now deploy a simple echo service to verify that everything is working. The service will use the mendhak/http-https-echo image, a very useful HTTPS echo Docker container for web debugging.

First, copy the following manifest into an echo.yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  namespace: echo
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - containerPort: 80
        - containerPort: 443
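        # Note: recent tags of mendhak/http-https-echo listen on ports 8080/8443 by
        # default. If the echo pod does not answer on port 80, either pin an image tag
        # that serves on 80/443 or set the HTTP_PORT / HTTPS_PORT environment variables
        # and adjust containerPort / targetPort accordingly.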
---
apiVersion: v1
kind: Service
metadata:  
  name: echo-service
  namespace: echo
spec:
  selector:
    app: echo
  ports:  
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: echo
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
        - path: "/"
          pathType: Prefix
          backend:
            service:
              name: echo-service
              port:
                number: 80

And deploy it on your cluster:

kubectl apply -f echo.yaml
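You can optionally check that the echo pod is up before testing:

kubectl -n echo get pods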

Get the echo ingress public IP:

kubectl -n echo get ingress echo-ingress
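If you prefer, you can store the address in a shell variable and reuse it in the curl command below (a small convenience, assuming the ELB exposes an IP rather than a hostname):

INGRESS_IP=$(kubectl -n echo get ingress echo-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$INGRESS_IP"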

Now you can test it using the ingress public IP:

curl xxx.xxx.xxx.xxx

And you should get the HTTP parameters of your request, including the right source IP in the x-forwarded-for header:

{
  "path": "/",
  "headers": {
    "host": "xxx.xxx.xxx.xxx",
    "x-request-id": "76f9a0fb30c12b9549c2d8c48b7debd8",
    "x-real-ip": "XXX.XXX.XXX.XXX",
    "x-forwarded-for": "XXX.XXX.XXX.XXX",
    "x-forwarded-host": "xxx.xxx.xxx.xxx",
    "x-forwarded-port": "80",
    "x-forwarded-proto": "http",
    "x-original-uri": "/",
    "x-scheme": "http",
    "user-agent": "curl/7.68.0",
    "accept": "*/*"
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "xxx.xxx.xxx.xxx",
  "ip": "x.x.x.x",
  "ips": [],
  "protocol": "http",
  "query": {},
  "subdomains": [],
  "xhr": false,
  "os": {
    "hostname": "echo-deployment-b97d6c86f-hl44p"
  }
}

You can double-check your public IP and verify that it matches the x-forwarded-for field:

curl ifconfig.me
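To compare both values in one step, here is a small sketch assuming curl and jq are installed locally and that the INGRESS_IP variable was set as shown earlier:

MY_IP=$(curl -s ifconfig.me)
ECHO_IP=$(curl -s "http://$INGRESS_IP/" | jq -r '.headers["x-forwarded-for"]')
echo "Public IP: $MY_IP - x-forwarded-for: $ECHO_IP"

Both values should be identical if the source IP is preserved end to end.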