Serverless Containers … the OpenShift way!
Serverless containers … sounds like a paradox, doesn't it? I had exactly the same feeling when I heard this term for the first time! The reason is a long-standing belief many of us still hold: that Serverless means no server. This is not true. All Serverless platforms run your code (read: function) in response to a trigger. To run the code (again, read: function), the Serverless platform provisions compute resources (read: a server) and holds them only until execution is complete. A simple concept with a lot of advantages:
- Enables optimal usage of resources
- Inherently scalable architecture
- Eliminates the need for maintaining / managing the underlying platform, server and network infrastructure
- Enables development teams to focus on development and delivery of core business value to the customer
All Hyperscalers (AWS, Azure, GCP, IBM) have their own Serverless offerings as Function-as-a-Service (FaaS). They all provide broadly similar functionality, and the one feature they all share is very, very cheap pricing! To give an example, AWS offers the first 1M Lambda requests each month for FREE and charges just 20 cents for the next 1M. The other Hyperscalers offer similar pricing.
As I have mentioned above, the Serverless platforms provided by the Hyperscalers are built around FaaS. These have enabled a variety of use cases, but they are far from ideal for enterprise computing needs. Some of the shortcomings are:
- Functions only
- Limited execution time (roughly 5–15 mins, varies by Hyperscaler)
- Limited to no orchestration
- Limited local development experience
In come Serverless Containers, which try to solve most of these shortcomings. Serverless Containers, as the name suggests, bring the best of both paradigms:
- Serverless: abstracts the application from the underlying infrastructure, helping enterprises innovate faster
- Containers: applications are packaged as OCI-compliant container images that can run anywhere, removing vendor lock-in
In this blog I am going to explain Red Hat OpenShift's Serverless Container offering, called OpenShift Serverless. OpenShift has adopted the Knative project for its Serverless offering. The diagram below shows the architecture of OpenShift Serverless.
The main components of the OpenShift Serverless architecture are:
Knative Serving: enables developers to create cloud-native applications using serverless architecture. It provides custom resource definitions (CRDs) that developers can use to deploy serverless containers, scale the number of pods, and more.
Knative Eventing: enables developers to use event-driven architecture with serverless applications.
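Both components are driven by CRDs. As a hedged sketch of what Knative Serving's central resource looks like, a minimal Service manifest can be as short as this (the service name and image here are placeholders, not from any real project):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello               # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/hello:latest   # placeholder image
```

From this single resource, Knative Serving manages the revisions, routes and configurations for you.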
Let’s get our hands dirty and try out OpenShift Serverless. I am using an OpenShift 4.5 cluster. The first step is to install the Red Hat OpenShift Serverless Operator on the cluster. You can follow this OpenShift documentation for the installation steps. You can check the status of the installed Operator under Operators -> Installed Operators in the OpenShift console.
Next, go ahead and install Knative Serving and Knative Eventing following this OpenShift documentation.
The Serverless Operator also adds a new item to the OpenShift console menu: Serverless, as shown in the image below.
The next step is to install the Knative CLI (kn) on your machine. The Knative CLI supports interactions with the OpenShift platform. Follow this link to download the Knative CLI for your operating system and follow the steps mentioned there. To verify the installation, run:
$ kn
Now let’s go ahead and create our first serverless container in Node.js using the Knative CLI:
$ kn func create -c
The Knative CLI will then ask for a few details about the function we are going to code:
Project path: /Users/callamitd/workspace/greet-func
Function name: greet-func
Runtime: node
Trigger: http
After providing the above information, the CLI auto-generates code for the Serverless Container project greet-func at the given path. Go ahead and use your favourite editor to add your logic. Once done, use the following commands to build and deploy your serverless application to OpenShift Serverless:
$ kn func build
$ kn func deploy -c
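For reference, the handler you edit in the generated index.js looks roughly like the following. This is a minimal sketch: the exact template varies by CLI version, and the greeting logic is my own illustration, not the generated code.

```javascript
// index.js — a minimal HTTP-triggered function handler, similar in shape
// to what `kn func create` scaffolds for the Node.js runtime.
// The `context` object carries the incoming HTTP request details
// (method, headers, query parameters, body).
function invoke(context) {
  // Read an optional ?name= query parameter; fall back to a default.
  const name = (context.query && context.query.name) || 'world';
  // Returning an object sends it back as a JSON response body.
  return { message: `Hello, ${name}!` };
}

module.exports = invoke;
```

Hitting the function's route with `?name=OpenShift` would then return `{"message":"Hello, OpenShift!"}`.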
After deployment, log in to the OpenShift console and go to the Developer -> Topology view. You should see your serverless application as shown below.
The pod will terminate after 30 seconds of inactivity. If you hit the route URL, it triggers the autoscaler to instantiate the pod and serve the request.
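The idle window is tunable. As a sketch based on Knative's standard autoscaling annotations (the values here are illustrative), you can annotate the Service's revision template:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Window the autoscaler averages request metrics over; with no
        # traffic for this window, the revision is scaled down to zero.
        autoscaling.knative.dev/window: "30s"
        # Upper bound on replicas during traffic bursts.
        autoscaling.knative.dev/max-scale: "10"
```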
This is a very simple example of a serverless application deployed on the OpenShift Serverless platform. Beyond auto-scaling on HTTP requests, you can trigger serverless containers from a variety of events such as Kafka messages, file uploads to storage, timers for recurring jobs, and 100+ event sources like Salesforce, ServiceNow, e-mail, etc.
To conclude, below are the key features of the OpenShift Serverless platform:
- Developers can choose a programming language or runtime of their choice, such as Java, Python, Go, Quarkus, Node.js, etc.
- Supports immutable revisions: roll out new features using canary deployments or A/B testing with gradual traffic rollout, following best practices
- Scales to zero when not in use and auto-scales to thousands of instances during peak load, giving a true serverless experience using containers
- Built for Hybrid Cloud: portable serverless applications running anywhere OpenShift runs, on-premises or on any public cloud.
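For example, the immutable-revision traffic rollout mentioned above is declared in the Service spec itself. A hedged sketch of a 90/10 canary split (the revision names are placeholders):

```yaml
spec:
  traffic:
    - revisionName: greet-func-00001   # stable revision (placeholder name)
      percent: 90
    - revisionName: greet-func-00002   # canary revision (placeholder name)
      percent: 10
```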
What are you waiting for? Try out OpenShift Serverless and experience the power of Serverless Containers today!!