Industry Best Practice Architectures: Government IBM Cloud Satellite Deployments

Tyler Lisowski
15 min read · Nov 13, 2023


This article outlines best practices I have encountered in my experience with IBM Cloud Satellite deployments in the government space. The best practice architecture targets deployments into regulated on-premises datacenters. The article starts by outlining the architecture and then walks through how its individual pieces set up a robust environment for regulated on-premises applications in the government space. A live example deployment of the architecture in an on-premises datacenter is used to visualize the concepts.

Architecture

Best Practice Architecture for Regulated Government Applications in IBM Cloud Satellite

The overall architecture consists of a three-tier application (presentation, application, database) deployed in an IBM Cloud Red Hat OpenShift cluster in a Satellite location within a regulated on-premises datacenter. In this design, the presentation tier has a basic webpage that displays content fetched from an API in the application tier. The application tier connects to the database tier to determine the content to display. Let's now walk through the request flow, starting from the consumer of the three-tier application outside the regulated datacenter.

Client -> Load Balancer (OpenShift Router)

The consumer accesses either the presentation tier or the application tier components through the OpenShift router. The traffic is sent to the OpenShift router through the cluster's ingress subdomain. The ingress subdomain DNS maps to a select number of edge worker nodes in the cluster, as shown below:

ibmcloud ks ingress domain ls --cluster ckrdnalr0psd72bt16sg
OK
Domain Target(s) Default Provider Secret Status Status
tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud 10.240.64.127,10.240.64.128 yes akamai created OK

bx cs workers --cluster ckrdnalr0psd72bt16sg
OK
ID Primary IP Flavor State Status Zone Version
sat-testkernel-89ec105ae2017b89f0c4e99a6a6ce99ea4f39422 10.240.64.128 upi normal Ready ca-tor-1 4.12.39_1566_openshift*
sat-testkernel-9e169a7699426944ca96412868028f52edeb3915 10.240.64.127 upi normal Ready ca-tor-1 4.12.39_1566_openshift*
sat-tylercloud-e9c71b82531577a8870e27fc38179ee3145550e4 10.240.128.28 upi normal Ready ca-tor-1 4.12.39_1566_openshift*
sat-tylercloud-f9e597828a5a63c24e69191f9c9f3591c3084b20 10.240.128.29 upi normal Ready ca-tor-1 4.12.39_1566_openshift*
sat-tylercloud-fcfc202ff688f7d63c89adfcddd56cd3df93ea73 10.240.128.27 upi normal Ready ca-tor-1 4.12.39_1566_openshift*

* To update to 4.12.41_1567_openshift version, run 'ibmcloud ks worker replace'. Review and make any required version changes before you update: 'https://ibm.biz/upworker'.
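
To verify that mapping, the ingress subdomain can be resolved directly; a quick check (the hostname is from the example environment, and the answer should match the Target(s) IPs of the edge nodes listed above):

# Resolve the cluster ingress subdomain; it should return the edge worker node IPs
dig +short tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud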

Once the traffic reaches the edge worker nodes, policy placed in the cluster restricts ingress access as appropriate for the environment. In our solution, all external components in the presentation and application tiers are HTTPS-only apps. Therefore, we only allow port 443 to ingress into the cluster from outside the cluster. This is implemented in the Calico policy shown below:

- apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: allow-local-ingress-odf
  spec:
    applyOnForward: true
    ingress:
    - action: Allow
      source:
        selector: ibm.role == 'satellite_controller_worker'
    order: 1400
    preDNAT: true
    selector: ibm-cloud.kubernetes.io/worker-pool-name == 'odf'
    types:
    - Ingress
- apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: allow-local-ingress-edge
  spec:
    applyOnForward: true
    ingress:
    - action: Allow
      source:
        selector: ibm.role == 'satellite_controller_worker'
    - action: Allow
      destination:
        nets:
        - 0.0.0.0/0
        ports:
        - 443
      protocol: TCP
      source: {}
    order: 1400
    preDNAT: true
    selector: ibm-cloud.kubernetes.io/worker-pool-name == 'edge'
    types:
    - Ingress
- apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: deny-ingress-public-inbound
  spec:
    applyOnForward: true
    ingress:
    - action: Deny
      destination: {}
      source: {}
    order: 1500
    preDNAT: true
    selector: ibm.role == 'satellite_controller_worker'
    types:
    - Ingress

With this policy in place, traffic can only enter through the edge nodes on port 443 (and is then forwarded to the OpenShift router for processing). The example below shows how traffic is blocked on port 80 (HTTP) but allowed on port 443.

$ kubectl get route -n presentation-tier
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
presentation-tier-app presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud presentation-tier-app <all> reencrypt None

$ curl --connect-timeout 2 http://presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud
curl: (28) Failed to connect to presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud port 80 after 2005 ms: Timeout was reached

$ curl --connect-timeout 2 https://presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud

<!DOCTYPE html>
...

The policy allows all external IPs to access port 443, but note that this can be restricted to specific known external client IP ranges by changing the 0.0.0.0/0 section of the Calico policy. The additional `satellite_controller_worker` allow rule ensures ingress traffic from all intra-cluster nodes is allowed, which is required for general operation of the environment (all nodes in the same cluster should have full IP connectivity to one another). Note that this load balancing and ingress firewall functionality is achieved without any specialized hardware (just regularly available compute). Additionally, the firewall policy is distributed across cluster nodes, resulting in no single point of failure and a scalable firewall solution (just add more nodes to the edge worker pool), versus various legacy deployments where a single device firewalls the entire environment and is a single point of failure. We assume all external traffic is exposed over HTTPS, but this pattern can be adjusted for different types of traffic on different ports as well (although that is not covered in this guide).
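
If, for example, only a known client range should reach the environment, one way is to pin the source addresses in the edge allow rule instead of leaving the source selector empty. A minimal sketch (198.51.100.0/24 is a placeholder; substitute your approved external client CIDRs):

    - action: Allow
      destination:
        nets:
        - 0.0.0.0/0
        ports:
        - 443
      protocol: TCP
      source:
        nets:
        # Placeholder range: replace with your approved external client CIDRs.
        - 198.51.100.0/24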

OpenShift Router -> Presentation Tier

After the traffic reaches the OpenShift router, the router uses its route configuration to forward the traffic to the appropriate backend microservice. The configuration in an example environment is shown below:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"route.openshift.io/v1","kind":"Route","metadata":{"annotations":{},"name":"presentation-tier-app","namespace":"presentation-tier"},"spec":{"tls":{"termination":"Reencrypt"},"to":{"kind":"Service","name":"presentation-tier-app"}}}
    openshift.io/host.generated: "true"
  creationTimestamp: "2023-10-31T17:31:47Z"
  name: presentation-tier-app
  namespace: presentation-tier
  resourceVersion: "3262922"
  uid: ca3b09ac-bb85-4fda-ab53-0a52b07c16f7
spec:
  host: presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud
  tls:
    termination: reencrypt
  to:
    kind: Service
    name: presentation-tier-app
    weight: 100
  wildcardPolicy: None
status:
  ingress:
  - conditions:
    - lastTransitionTime: "2023-10-31T17:31:47Z"
      status: "True"
      type: Admitted
    host: presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud
    routerCanonicalHostname: router-default.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud
    routerName: default
    wildcardPolicy: None

kubectl get pods -n openshift-ingress
NAME READY STATUS RESTARTS AGE
router-default-6b9786dcbb-59958 1/1 Running 0 17d
router-default-6b9786dcbb-7fm8w 1/1 Running 0 17d

kubectl get pods -n presentation-tier
NAME READY STATUS RESTARTS AGE
presentation-tier-app-55ff596f97-26fnb 2/2 Running 0 7s
presentation-tier-app-55ff596f97-kgc8b 2/2 Running 0 7s
presentation-tier-app-55ff596f97-m2tfc 2/2 Running 0 11d


kubectl get service -n presentation-tier -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.alpha.openshift.io/serving-cert-secret-name":"presentation-tier-app-tls"},"name":"presentation-tier-app","namespace":"presentation-tier"},"spec":{"ports":[{"name":"presentation-tier-app","port":443,"targetPort":8443}],"selector":{"app":"presentation-tier-app"}}}
      service.alpha.openshift.io/serving-cert-secret-name: presentation-tier-app-tls
      service.alpha.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1698204371
      service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1698204371
    creationTimestamp: "2023-10-31T17:31:47Z"
    name: presentation-tier-app
    namespace: presentation-tier
    resourceVersion: "3262942"
    uid: 2db64c52-6ba7-41af-9c07-6258811de9aa
  spec:
    clusterIP: 172.21.15.93
    clusterIPs:
    - 172.21.15.93
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    ports:
    - name: presentation-tier-app
      port: 443
      protocol: TCP
      targetPort: 8443
    selector:
      app: presentation-tier-app
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""

Presentation Tier Webpage and Associated Certificate Info

A few notes on this configuration. First, traffic is encrypted with TLS from the client to the OpenShift router. The OpenShift router then terminates TLS and establishes another TLS connection to the backend presentation tier app, so traffic is encrypted with TLS along the entire network path. For the first leg, the OpenShift router is configured, as part of the core Satellite platform, with certificates from Let's Encrypt that renew automatically. For the OpenShift router -> presentation tier TLS connection, note that the presentation-tier app requests TLS certificates from the OpenShift service CA operator with the service.alpha.openshift.io/serving-cert-secret-name annotation. That annotation triggers the OpenShift service CA operator to generate service-specific certificates for the associated service, shown below, which the presentation tier app can then mount:

kubectl get secret -n presentation-tier | grep presentation-tier-app-tls
presentation-tier-app-tls kubernetes.io/tls 2 12d

kubectl get deploy -n presentation-tier presentation-tier-app -o yaml
...
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
...
        volumeMounts:
        - mountPath: /etc/tls/private
          name: presentation-tier-app-tls
...
      volumes:
      - name: presentation-tier-app-tls
        secret:
          defaultMode: 420
          secretName: presentation-tier-app-tls
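
To make the pattern easier to reuse, here is a minimal sketch of the pieces an application owner supplies, distilled from the Service shown above (names mirror the example environment):

# Minimal sketch: the serving-cert annotation is what triggers the service CA operator
# to create the presentation-tier-app-tls secret that the deployment mounts above.
apiVersion: v1
kind: Service
metadata:
  name: presentation-tier-app
  namespace: presentation-tier
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: presentation-tier-app-tls
spec:
  ports:
  - name: presentation-tier-app
    port: 443
    targetPort: 8443
  selector:
    app: presentation-tier-app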

This ensures that all traffic in transit to and from the presentation tier is encrypted. Additionally, we want to implement ingress and egress controls on the namespace, which we do with the network policy below:

- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: presentation-tier-app-isolation
    namespace: presentation-tier
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
    - Egress
    ingress:
    - from:
      # allow openshift ingress if exposing app through router pods
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-ingress
      # allow monitoring namespace to scrape pods for metrics
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-monitoring
    egress:
    # oauth server traffic
    - to:
      - ipBlock:
          cidr: 10.240.0.57/32
      - ipBlock:
          cidr: 10.240.0.59/32
      - ipBlock:
          cidr: 10.240.0.61/32
      ports:
      - protocol: TCP
        port: 30857
    # kube apiserver traffic
    - to:
      - ipBlock:
          cidr: 172.20.0.1/32
      ports:
      - protocol: TCP
        port: 2040
    - to:
      # for routing to other ingress points
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: application-tier-app
      # for dns resolution
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-dns

This policy allows the presentation tier to communicate with its dependencies:

  • OpenShift DNS for DNS resolution
  • Application tier to access the dependent APIs
  • Kube apiserver and OAuth server to authenticate and authorize users of the webpage

All other traffic is blocked. This can be seen by exec'ing into a pod and attempting to curl google.com:

bash-4.4$ curl --connect-timeout 2 -v https://google.com
* Rebuilt URL to: https://google.com/
* Trying 172.253.122.102...
* Failed to connect to google.com port 443: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to google.com port 443: Connection timed out
bash-4.4$

Additionally, only the supporting local cluster monitoring services and OpenShift ingress are allowed to send traffic into the namespace. This can be seen by going into an unapproved namespace and trying to send traffic to the service:

kubectl exec -it -n application-tier-app application-tier-app-6c7ddfc5b-n2pmq bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$ curl -v --connect-timeout 2 https://presentation-tier-app.presentation-tier.svc.cluster.local
* Rebuilt URL to: https://presentation-tier-app.presentation-tier.svc.cluster.local/
* Trying 172.21.15.93...
* TCP_NODELAY set
* Connection timed out after 2001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 2001 milliseconds

This implements controlled ingress and egress in the presentation tier namespace. Additionally, the OpenShift OAuth proxy is utilized to validate individual user access to the presentation tier. The OpenShift OAuth proxy uses IBM Cloud IAM and OpenShift RBAC to validate that a user has access to specific resources before allowing them to proceed to the application. In the example environment, the accessing user is validated to be authorized to get services in the default namespace:

      containers:
      - args:
        - --https-address=:8443
        - --provider=openshift
        - --openshift-service-account=presentation-tier-app
        - --openshift-delegate-urls={"/":{"namespace":"default","resource":"services","verb":"get"}}
        - --upstream=http://localhost:8080
        - --tls-cert=/etc/tls/private/tls.crt
        - --openshift-sar={"namespace":"default","resource":"services","resourceName":"proxy","verb":"get"}
        - --tls-key=/etc/tls/private/tls.key
        - --cookie-secret=XXXXXXX
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35bb6d5f135f1763bf305d71465718978ee1ab73625a2094e42116d52d4b7bd2
        imagePullPolicy: IfNotPresent
        name: oauth-proxy
        ports:
        - containerPort: 8443
          name: public
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: presentation-tier-app-tls
      - image: openshift/hello-openshift:latest
        imagePullPolicy: Always
        name: app
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File

When attempting to access the application through a browser, a user is first asked to log in through OpenShift to get an identity:

Login Page For Presentation Tier App

Once authenticated and the user's access is validated, the content is displayed.

Content Displayed to Authorized User

If they are not authorized, they will see an error screen.

Unauthorized Screen

This same solution can also be utilized to authenticate and authorize application-to-application traffic: the calling application passes the service account token it is granted in an Authorization header to an application running with the OpenShift OAuth proxy sidecar. An example is shown below:

curl -H "Authorization: Bearer XXXX" https://presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud/
Hello OpenShift!
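
The bearer token in that request is typically the caller's mounted service account token; a minimal sketch of how a calling workload could construct the request (using the route host from the example environment):

# Sketch: read the pod's mounted service account token and pass it as a bearer token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -H "Authorization: Bearer ${TOKEN}" \
  https://presentation-tier-app-presentation-tier.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud/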

This gives a pluggable solution, without any dependencies outside the Satellite platform, to authenticate and authorize users for applications. Often, government organizations will already have an approved local IAM system, and that can also be utilized (typically built into the backend applications that are deployed in the environment). When using one, just ensure the network policy is also updated to allow the presentation tier application to reach the necessary endpoints of the external identity and access management solution, as sketched below.
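
A hedged sketch of such an addition to the presentation tier policy, assuming a hypothetical external identity provider at 203.0.113.10 listening on port 443 (both values are placeholders):

    # Sketch only: additional egress rule for a hypothetical external IAM endpoint.
    # 203.0.113.10/32 and port 443 are placeholders for your IdP's address and port.
    - to:
      - ipBlock:
          cidr: 203.0.113.10/32
      ports:
      - protocol: TCP
        port: 443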

OpenShift Router -> Application Tier
Presentation Tier -> Application Tier

The application tier is accessed both externally (by directly calling its APIs) and through the applications running in the presentation tier. Ingress and egress controls are implemented in the policy shown below:

- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: application-tier-app-isolation
    namespace: application-tier-app
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
    - Egress
    ingress:
    - from:
      # allow presentation tier to communicate
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: presentation-tier
      # allow openshift ingress if exposing app through router pods
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-ingress
      # allow monitoring namespace to scrape pods for metrics
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-monitoring
    egress:
    - to:
      # for talking to backend cluster local database
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: database-tier-app
      # for dns resolution
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-dns

Since the APIs can also be accessed externally, ingress is allowed from the openshift-ingress namespace. Additionally, ingress is allowed from the presentation tier so it can reach the dependent APIs and properly display webpages. For egress, the application tier is allowed to talk to the backend database (deployed locally in the cluster) and OpenShift DNS. The application tier also makes use of the OpenShift service CA to sign its internal server certificate:

kubectl get service -n application-tier-app application-tier-app -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.alpha.openshift.io/serving-cert-secret-name":"application-tier-app-tls"},"labels":{"app":"application-tier-app"},"name":"application-tier-app","namespace":"application-tier-app"},"spec":{"ports":[{"port":443,"protocol":"TCP","targetPort":20000}],"selector":{"app":"application-tier-app"},"sessionAffinity":"None","type":"ClusterIP"}}
    service.alpha.openshift.io/serving-cert-secret-name: application-tier-app-tls
    service.alpha.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1698204371
    service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1698204371
  creationTimestamp: "2023-10-31T17:46:46Z"
  labels:
    app: application-tier-app
  name: application-tier-app
  namespace: application-tier-app
  resourceVersion: "3559581"
  uid: f8719770-7d06-4459-a54a-d654f08c3035
spec:
  clusterIP: 172.21.21.198
  clusterIPs:
  - 172.21.21.198
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 443
    protocol: TCP
    targetPort: 20000
  selector:
    app: application-tier-app
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

kubectl get deploy -n application-tier-app application-tier-app -o yaml
...
      containers:
      - args:
        - cat /etc/tls/private/tls.crt /etc/tls/private/tls.key > /tmp/combined.pem;
          chmod 0400 /tmp/combined.pem; haproxy -f /usr/local/etc/haproxy/haproxy.conf
        command:
        - /bin/bash
        - -c
        - --
...
        volumeMounts:
        - mountPath: /usr/local/etc/haproxy
          name: conf
        - mountPath: /etc/tls/private
          name: application-tier-app-tls
...
      volumes:
      - configMap:
          defaultMode: 420
          name: application-tier-app-conf
        name: conf
      - name: application-tier-app-tls
        secret:
          defaultMode: 420
          secretName: application-tier-app-tls

The usage of the common OpenShift service CA operator enables encryption in transit both from external clients directly to the APIs and from the presentation tier to the APIs. An example request for each is shown below:

kubectl exec -it -n presentation-tier presentation-tier-app-55ff596f97-26fnb bash
bash-4.4$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt https://application-tier-app.application-tier-app.svc.cluster.local/monitor
<html><body><h1>200 OK</h1>
Service ready.
</body></html>

$ curl https://application-tier-app-application-tier-app.tyler-cloudpak-2-d-31-80d128fecd199542426020c17e5e9430-0000.ca-tor.containers.appdomain.cloud/monitor
<html><body><h1>200 OK</h1>
Service ready.
</body></html>

Application Tier -> Database Tier

The database tier has no external dependencies and runs a local database. The network policy that implements its ingress and egress controls is shown below:

- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: database-tier-app-isolation
    namespace: database-tier-app
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
    - Egress
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: application-tier-app
      # allow monitoring namespace to scrape pods for metrics
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-monitoring
    egress:
    - to:
      # for dns resolution
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-dns

It allows ingress from the application tier so that the API suite the application tier provides can execute successfully against the database. Additionally, the database deployment utilizes ODF (OpenShift Data Foundation) storage encrypted with IBM Hyper Protect Crypto Services Keep Your Own Key (KYOK) encryption. The sample configuration is shown below:

kubectl get deploy -n database-tier-app database-tier-app -o yaml
...
        volumeMounts:
        - mountPath: /usr/local/etc/haproxy
          name: conf
        - mountPath: /etc/tls/private
          name: database-tier-app-tls
        - mountPath: /var/persistentdata
          name: database-tier-app-pvc
...
      volumes:
      - configMap:
          defaultMode: 420
          name: database-tier-app-conf
        name: conf
      - name: database-tier-app-tls
        secret:
          defaultMode: 420
          secretName: database-tier-app-tls
      - name: database-tier-app-pvc
        persistentVolumeClaim:
          claimName: database-tier-app-pvc

$ kubectl get pvc -n database-tier-app -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"database-tier-app-pvc","namespace":"database-tier-app"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"8Gi"}},"storageClassName":"sat-ocs-cephfs-gold","volumeMode":"Filesystem"}}
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
      volume.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com
    creationTimestamp: "2023-11-01T00:42:56Z"
    finalizers:
    - kubernetes.io/pvc-protection
    name: database-tier-app-pvc
    namespace: database-tier-app
    resourceVersion: "3494697"
    uid: f02f7392-73e0-42ab-a391-faf138bc8586
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 8Gi
    storageClassName: sat-ocs-cephfs-gold
    volumeMode: Filesystem
    volumeName: pvc-f02f7392-73e0-42ab-a391-faf138bc8586
  status:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 8Gi
    phase: Bound
kind: List
metadata:
  resourceVersion: ""

$ kubectl exec -it -n database-tier-app database-tier-app-85dfc84974-zpw2s bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$ ls -l /var/preserve/
total 0
bash-4.4$ ls -l /var/persistentdata/
total 1
-rw-r--r--. 1 1000640000 1000640000 3 Nov 1 00:44 hi

The storage template that deploys ODF contains the configuration that links it to the backend Hyper Protect Crypto Services (HPCS) instance holding the keys that encrypt all persistent storage ODF manages.
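
For reference, this linkage is typically established when the ODF storage configuration is created for the location. Below is a hedged sketch using the Satellite storage CLI; the template version and the -p parameter keys are illustrative placeholders only, so consult the ODF storage template documentation for the exact parameter names your version expects:

# Sketch only: create an ODF storage configuration tied to an HPCS (KYOK) instance.
# <location>, <odf-template-version>, and the -p parameter keys are placeholders.
ibmcloud sat storage config create --location <location> \
  --name odf-hpcs-config \
  --template-name odf-local \
  --template-version <odf-template-version> \
  -p "<hpcs-instance-parameter>=<hpcs-instance-id>" \
  -p "<hpcs-root-key-parameter>=<root-key-crn>"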

With this deployment model, encryption at rest is implemented for all application data as well. The platform administrator controls access to the keys through HPCS and can revoke access or keys at any time, effectively cutting off access to any local persistent data in the Satellite environment.

Threat Detection & Response with SCC Workload Protection

Now that we have outlined the core data path and the controls implemented around it for the three-tier application, let's look at SCC Workload Protection, which enables a threat detection and response solution within the environment. SCC Workload Protection gathers activity logs from all events on the hosts and within the OpenShift platform itself and provides analytics and alerting engines to detect and respond to anomalous activity. It also provides compliance policy validation against numerous standards, including HIPAA, FedRAMP, PCI, SOC 2, and more. The agents are deployed into the cluster and the associated data can be viewed in the Workload Protection page:

kubectl get pods -n ibm-observe | grep sysd
sysdig-agent-4br7f 1/1 Running 0 11d
sysdig-agent-57lxk 1/1 Running 0 11d
sysdig-agent-8vx7n 1/1 Running 0 11d
sysdig-agent-f4f4d 1/1 Running 0 11d
sysdig-agent-kspmcollector-55cd59f959-l58hl 1/1 Running 0 11d
sysdig-agent-node-analyzer-9b4w4 3/3 Running 1 (8d ago) 11d
sysdig-agent-node-analyzer-fqm76 3/3 Running 0 11d
sysdig-agent-node-analyzer-gn9p6 3/3 Running 1 (4d4h ago) 11d
sysdig-agent-node-analyzer-hfkz8 3/3 Running 0 11d
sysdig-agent-node-analyzer-lzz56 3/3 Running 0 11d
sysdig-agent-xwljg 1/1 Running 0 11d

SCC Workload Protection Activity Page

This solution gives the SOC teams associated with government entities advanced insight into the activity in their ROKS on Satellite clusters and lets them set up alerts to detect anomalous activity and validate with their organization whether the activity is benign or malicious. To read more about the advanced capabilities of this solution, refer to the SCC Workload Protection documentation page.

Logging

The IBM Log Analysis agent is also deployed into the environment for the logging portion of the observability solution. Application logs can be visualized over time in the IBM Log Analysis UI, and alerts can be set up for specific activity (or lack of activity) from applications. To read more about the advanced capabilities of this solution, refer to IBM Log Analysis's getting started page.

kubectl get pods -n ibm-observe | grep log
logdna-agent-4fht2 1/1 Running 1 (2d4h ago) 11d
logdna-agent-j27wg 1/1 Running 0 11d
logdna-agent-j7q76 1/1 Running 0 11d
logdna-agent-lflpf 1/1 Running 0 11d
logdna-agent-x4kv8 1/1 Running 0 11d

Core Platform Outbound Traffic

The environment makes use of the reduced firewall location configuration of Satellite to require only a minimal set of outbound networking rules (port 443 to a small set of IPs, plus the additional outbound network requirements of the add-on software: ODF, SCC Workload Protection, and IBM Log Analysis). In this mode, agents deployed on all hosts proxy the dependent platform traffic through Satellite Link and provide full auditability, visibility, and control of all traffic flows. This configuration is useful when integrating into existing regulated environments with complex firewall rules and network egress controls. The Satellite Link endpoints that carry this traffic can be reviewed as sketched below.
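
A sketch of how an operator might list those Link endpoints for audit purposes (the location name is a placeholder, and flag syntax may vary slightly by CLI version):

# Sketch: review the Satellite Link endpoints that proxy platform traffic.
# <location-name-or-id> is a placeholder for your Satellite location.
ibmcloud sat endpoint ls --location <location-name-or-id>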

