Centralizing Ingress Traffic Across VMs and Containers using OpenShift

Tyler Lisowski
Sep 24, 2022 · 5 min read


Companies at various phases of their cloud modernization journey often have a workload fleet consisting of both containers and VMs. Oftentimes, they are still interested in having a single point of network ingress across this mixed environment. OpenShift offers a robust set of options for containerized workloads with the OpenShift Router, network policies, and LoadBalancer services. These same components can also be configured to send ingress traffic to VM-based workloads. This guide illustrates these scenarios using ROKS on IBM Cloud Satellite with a private VPC running both VM and containerized workloads.

OpenShift Router Solution

To illustrate this solution, we start by running a VM colocated in the private VPC of a ROKS on Satellite cluster. The seven Satellite hosts that make up the environment are shown below:

ibmcloud sat hosts --location tyler-loc-useast
Retrieving hosts...
OK
Name ID State Status Zone Cluster Worker ID Worker IP
tyler-testtest-11 b9a5a03e7261221bf65a assigned Ready us-east-1 infrastructure sat-tylertestt-6909322966b8d8eb10fc45a0299852dead5a1591 10.240.0.21
tyler-testtest-12 41794a98a9b3db902c6d assigned Ready us-east-2 infrastructure sat-tylertestt-c29a0fff49a7d710fc943222513f490c11246c04 10.240.0.22
tyler-testtest-13 4dcc290c9dbed1ab4a3c assigned Ready us-east-3 infrastructure sat-tylertestt-a94a00c0088445306913c08aae2212b064fb3c22 10.240.0.23
tyler-testtest-2 787058c8d516a74131f7 assigned Ready us-east-3 infrastructure sat-tylertestt-d72e5734ed6199b4e0ff223c6335e349ea39226a 10.240.0.77
tyler-testtest-3 3d13d3e4aaea788ce3e7 assigned Ready us-east-2 infrastructure sat-tylertestt-84999d4c14c83044ea3b4e418e4badfd677b5f20 10.240.0.78
tyler-testtest-8 48d0ebbbdd4eb0b8a24c assigned Ready us-east-1 tyler-loc-useast-d-1 sat-tylertestt-6b232b18eb144361d0f27011b7f2eec52e43b802 10.240.0.19
tyler-testtest-9 1f199aacadbc702e36d8 assigned Ready us-east-1 tyler-loc-useast-d-1 sat-tylertestt-c50eabbf3519e3bed71206765a1a4784ac3c2d2e 10.240.0.20

The VM running colocated workload in the VPC has an IP of 10.240.64.8, as shown below:

[root@tyler-vm-1 ~]# ip addr
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 02:00:0b:06:d5:50 brd ff:ff:ff:ff:ff:ff
altname enp0s3
inet 10.240.64.8/24 metric 100 brd 10.240.64.255 scope global dynamic ens3
valid_lft 204sec preferred_lft 204sec
inet6 fe80::bff:fe06:d550/64 scope link
valid_lft forever preferred_lft forever

To simulate a workload running on the VM, we launch a simple Python HTTP server:

root@tyler-vm-1:~# python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...

With this in place, we now apply the configuration that creates an OpenShift Route for the backend VM.
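A minimal sketch of what such a tyler-vm-forward.yaml could look like is shown below; it assumes edge TLS termination at the router and the Python server's port 8000, and uses a selector-less Service backed by a manually managed Endpoints object (the manifest in a real environment may differ):

apiVersion: v1
kind: Namespace
metadata:
  name: tyler-vm
---
# Selector-less Service: traffic is routed to the manually managed Endpoints below
apiVersion: v1
kind: Service
metadata:
  name: app-1
  namespace: tyler-vm
spec:
  ports:
    - name: http
      port: 8000
      targetPort: 8000
      protocol: TCP
---
# Endpoints object with the same name as the Service, pointing at the VM's private IP
apiVersion: v1
kind: Endpoints
metadata:
  name: app-1
  namespace: tyler-vm
subsets:
  - addresses:
      - ip: 10.240.64.8
    ports:
      - name: http
        port: 8000
        protocol: TCP
---
# Route that terminates TLS at the router (edge) and forwards HTTP to the Service
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: app-1
  namespace: tyler-vm
spec:
  tls:
    termination: edge
  to:
    kind: Service
    name: app-1
  port:
    targetPort: http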

With this configuration, TLS terminates at the router, which then forwards the HTTP request to the backend VM. After applying it, we can curl the ingress route and reach the backend VM application:

$ kubectl apply -f tyler-vm-forward.yaml
namespace/tyler-vm created
service/app-1 created
endpoints/app-1 created
route.route.openshift.io/app-1 created
$ curl https://app-1-tyler-vm.tyler-loc-useast-d-1-80d128fecd199542426020c17e5e9430-0000.us-east.containers.appdomain.cloud
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href=".bashrc">.bashrc</a></li>
<li><a href=".cache/">.cache/</a></li>
<li><a href=".profile">.profile</a></li>
<li><a href=".ssh/">.ssh/</a></li>
<li><a href="snap/">snap/</a></li>
</ul>
<hr>
</body>
</html>

Additional features of the OpenShift Router beyond TLS termination can be used with this architecture as well. Redundant VM application instances can be added to the setup by extending the Endpoints section of the YAML to contain multiple IPs, as sketched below. An external controller could also be deployed to health-check the VM endpoints and dynamically add or remove entries in that Endpoints object as instances become unhealthy or recover; that advanced configuration is not covered in this guide. The OpenShift Router solution works well for exposing applications where each VM only needs a small number of TCP ports (fewer than 10) exposed on the ingress endpoint.
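As a minimal sketch, assuming a second VM instance at a hypothetical IP of 10.240.64.9, the Endpoints object would simply list both addresses:

apiVersion: v1
kind: Endpoints
metadata:
  name: app-1
  namespace: tyler-vm
subsets:
  - addresses:
      - ip: 10.240.64.8
      - ip: 10.240.64.9   # hypothetical second VM instance added for redundancy
    ports:
      - name: http
        port: 8000
        protocol: TCP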

MetalLB Load Balancer + IPVS Solution

MetalLB with IPVS is a great solution when the VM applications need to expose a large range of both TCP and UDP ports through a central ingress endpoint; WebRTC is one example of an application with these characteristics. In this setup, we provision a LoadBalancer service with MetalLB and run the speaker pod(s) on nodes that are in the same L2 network as the backing VM applications. On those same nodes, a daemonset is deployed that periodically reconciles IPVS and iptables rules to NAT traffic from the load balancer VIP to the backing VM applications. This guide assumes MetalLB is already installed and has been configured with floating IPs that it can assign to LoadBalancer services (for install details, consult OpenShift's documentation). We will now apply the configuration for the MetalLB service and the daemonset that reconciles the IPVS and iptables rules:
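A minimal sketch of the LoadBalancer Service portion of that configuration is shown below; the metallb.universe.tf/address-pool value is an assumed pool name, and the companion daemonset simply applies the ipvsadm and iptables commands shown later on each ingress node:

# The Service exists mainly so that MetalLB assigns and advertises the floating IP
# (10.240.0.4); the wide port-range forwarding itself is handled by the IPVS and
# iptables rules reconciled by the daemonset on the ingress nodes.
apiVersion: v1
kind: Service
metadata:
  name: app-1
  namespace: tyler-vm
  annotations:
    metallb.universe.tf/address-pool: vm-ingress-pool   # assumed pool name
spec:
  type: LoadBalancer
  ports:
    - name: tcp-2041
      port: 2041
      targetPort: 2041
      protocol: TCP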

$ kubectl get service -n tyler-vm app-1
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
app-1   LoadBalancer   172.21.107.166   10.240.0.4    2041:30605/TCP   6d10h

At this point, the floating IP is registered and being advertised by the MetalLB speaker pods. On each node running a MetalLB speaker pod, the IPVS daemonset lays down the following rule set:

# Create an IPVS virtual service keyed on firewall mark 1, scheduled round-robin
ipvsadm -A -f 1 -s rr
# Register the backend VM as a real server for that virtual service, using NAT (masquerade) mode
ipvsadm -a -f 1 -r 10.240.64.8 -m

# Mark traffic destined for the floating IP on the forwarded port ranges with firewall mark 1
iptables -A PREROUTING -t mangle -d 10.240.0.4/32 -p udp --dport 40000:65535 -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -d 10.240.0.4/32 -p tcp --dport 40000:65535 -j MARK --set-mark 1
iptables -A PREROUTING -t mangle -d 10.240.0.4/32 -p tcp --dport 8000 -j MARK --set-mark 1
# Enable connection tracking for IPVS so NATed return traffic is handled correctly
sysctl -w net.ipv4.vs.conntrack=1
# Bind the floating IP on the loopback device so the node accepts traffic addressed to it
ip addr add 10.240.0.4 brd 10.240.0.4 scope host dev lo

This automation sets up an IPVS NAT service that forwards ports 40000–65535 (TCP and UDP) and port 8000 (TCP) to the backend VM application at 10.240.64.8. Using this solution, we can forward a much larger port range than Kubernetes/OpenShift allows by default in its Service definitions. To properly integrate the backend VM with IPVS NAT, the VM's default route needs to point back through the "ingress node" that is processing the traffic, so that return traffic flows through the same NAT. This is done with the following command:

root@tyler-vm-1:~# ip route add default via 10.240.0.4 dev ens3

With that in place, we can now send traffic to the MetalLB floating IP and reach the backing VM application:

$ curl http://10.240.0.4:8000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href=".bashrc">.bashrc</a></li>
<li><a href=".cache/">.cache/</a></li>
<li><a href=".profile">.profile</a></li>
<li><a href=".ssh/">.ssh/</a></li>
<li><a href="snap/">snap/</a></li>
</ul>
<hr>
</body>
</html>

This guide only shows traffic being sent to port 8000; however, the additional range of 40000–65535 (TCP and UDP) is forwarded in the same fashion. This is a useful configuration when applications need to expose large port ranges across multiple protocols and the environment is compatible with MetalLB deployments.
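As a quick sanity check of the wider range, one could listen on an arbitrary UDP port on the VM and send a datagram to the floating IP; port 45000 below is an arbitrary example and assumes nc is installed on both ends:

# On the backend VM: listen on an example UDP port inside the forwarded range
root@tyler-vm-1:~# nc -u -l 45000

# From a client that can reach the floating IP: send a test datagram
$ echo "hello" | nc -u 10.240.0.4 45000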
