Upgrading the TIBCO BusinessWorks Container Edition Runtime in a Kubernetes Environment

Dave Winstone
Jan 5, 2023

In my previous article, I looked at how you would upgrade the TIBCO BusinessWorks Container Edition runtime for an application running on Docker.

In this article, I want to extend that approach to applications/microservices deployed in a Kubernetes environment, where you can achieve a zero-downtime upgrade.

I’ll continue using my simple Greeting microservice, a REST API that responds with a message, the container hostname and the BusinessWorks runtime version:

Greeting Microservice in BusinessWorks Container Edition

Building the BusinessWorks Container Edition Base Image

There are lots of instructions and examples of how to do this in TIBCO’s official GitHub repo here, which I’d recommend you take a look at.

I’m going to start with my application built using version 2.7.3 of BusinessWorks Container Edition, then upgrade the runtime to 2.8.0.

Once you’ve followed the instructions from the TIBCO GitHub to clone the BWCE repo and have downloaded the BWCE runtime from TIBCO’s download site, you are ready to build the base image. From the cloned bwce repo, run a command similar to the following:

./createDockerImage.sh ~/bwce-runtime-2.7.3.zip 127.0.0.1:32000/bwce:v2.7.3

The first argument is the BWCE runtime zip file that you downloaded from TIBCO, and the second argument is the tag we’ll be using. You’ll notice that my tag includes an IP address and port; that’s because I want to push this image to a registry, in this case the built-in microk8s registry. When the base image is built, I can push it to the registry using this command:

docker push 127.0.0.1:32000/bwce:v2.7.3

Building the Application Container Image

I’ll adjust my application Dockerfile to include the correct base image tag as follows:

FROM 127.0.0.1:32000/bwce:v2.7.3
LABEL maintainer="Dave Winstone"
ADD greeting_1.0.0.ear /
EXPOSE 8081 7777

Now to build the application and push it to the registry too:

docker build -f Dockerfile -t 127.0.0.1:32000/greeting-app:1.0 .
docker push 127.0.0.1:32000/greeting-app:1.0

Deploying to K8s (microk8s)

To deploy my application to K8s, I need to create a YAML file that configures my application, creates a Service and also defines the update strategy. The maxSurge: 1 and maxUnavailable: 0 settings, combined with the readiness probe on port 7777, are what give us the zero-downtime behaviour, because Kubernetes only removes an old pod once its replacement is reporting ready:

apiVersion: v1
kind: Service
metadata:
  name: greeting-app
  labels:
    app: greeting-app
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    nodePort: 30092
  selector:
    app: greeting-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: greeting-app
  template:
    metadata:
      name: greeting-app
      labels:
        app: greeting-app
    spec:
      containers:
      - name: greeting-app
        image: 127.0.0.1:32000/greeting-app:1.0
        imagePullPolicy: Always
        env:
        - name: BW_LOGLEVEL
          value: "ERROR"
        - name: BW_PROFILE
          value: "default"
        ports:
        - containerPort: 8081
        - name: readiness-port
          containerPort: 7777
        readinessProbe:
          httpGet:
            path: /_ping
            port: 7777
          initialDelaySeconds: 60
          periodSeconds: 15

To deploy the application from the command line, I can execute the following:

kubectl apply -f greeting1.0.ROLLING.yml

When the application is ready (kubectl get deployments), you can test it with a curl command (notice we defined the nodePort as 30092 in the YAML file):
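
For example, using the IP address of my microk8s node (yours will differ):

curl http://192.168.1.10:30092/greeting/Dave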

As we have 2 replicas, if we keep issuing the curl command we’ll see that both pods are responding:

while true; do curl http://192.168.1.10:30092/greeting/Dave; printf "\n"; sleep 1; done
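
You can also confirm the two replicas from the Kubernetes side; something like this (using the app label from the deployment above) will list them:

kubectl get pods -l app=greeting-app -o wide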

So, we’re all good: we have our microservice API up and running, with 2 instances available and processing our messages. Now we want to keep the application running while we update the base image to a new version of the container runtime.

Update the Base Image

To rebuild the base image to use v2.8.0 of BusinessWorks Container Edition, we first need to remove the old bwce-runtime-2.7.3.zip from the resources folder under the cloned bwce repo. Then (assuming I have downloaded the bwce-runtime-2.8.0.zip file from TIBCO) I can run the createDockerImage.sh script again, followed by a docker push:

./createDockerImage.sh ~/bwce-runtime-2.8.0.zip 127.0.0.1:32000/bwce:v2.8.0
docker push 127.0.0.1:32000/bwce:v2.8.0
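
For reference, removing the old runtime zip in the first step might look something like this, run from the root of the cloned repo; the exact location within the resources folder may differ in your checkout:

rm resources/bwce-runtime-2.7.3.zip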

Make a small change to our Dockerfile to use this new version (the tag is 127.0.0.1:32000/bwce:v2.8.0):

FROM 127.0.0.1:32000/bwce:v2.8.0
LABEL maintainer="Dave Winstone"
ADD greeting_1.0.0.ear /
EXPOSE 8081 7777

To rebuild the application use the following (and don’t forget to push to the registry):

docker build -f Dockerfile -t 127.0.0.1:32000/greeting-app:1.1 .
docker push 127.0.0.1:32000/greeting-app:1.1

The next job is to update the YAML file that describes the deployment in Kubernetes so that it points to the new version of the image, and then re-apply it with kubectl:

Updated YAML deployment file
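
The only change needed in the deployment spec is the image tag on the container, for example:

      containers:
      - name: greeting-app
        image: 127.0.0.1:32000/greeting-app:1.1

With that in place, re-apply the file: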
kubectl apply -f greeting1.0.ROLLING.yml

I deliberately left my curl command running so that, as Kubernetes performs the rolling update, we can see that when a new pod is instantiated it starts responding with the new version of the BWCE container runtime while the other replica is still being replaced. When the rollout completes, you can see that all instances are now running the latest version.
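
Alongside the curl loop, you can also watch the rollout progress from the Kubernetes side with something like:

kubectl rollout status deployment/greeting-app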

So there we have it: we have successfully upgraded our BWCE container runtime in a Kubernetes environment without having to upgrade the entire infrastructure, saving lots of time, effort and, of course, money.

When you start applying DevOps principles to this approach, you can see how this level of automation can have a significant, positive impact on your development and operations costs.

As you can see from the screenshot above, this approach does not force you to upgrade every microservice to the same runtime version at the same time. Different versions can co-exist, which allows you to plan upgrades alongside existing development and maintenance activities rather than having to schedule a separate activity just for upgrading the platform. Co-existence of containers running different versions of the core runtime is a better way to make the best use of developer and operations time.

Dave Winstone

I’ve been working with technology since I was 11 years old, when I got a Sinclair ZX-81 for Christmas. Since then, I’ve spent over 30 years working in IT.