Eliminate Hidden Costs by Using Multi-Cloud Red Hat OpenShift
Sormita Chakraborty, Consulting Application Architect
Tony Efremenko, Principal Customer Success Manager Architect
We’re going to the cloud! (an economic and technical journey)
“Going to the cloud” is part of the answer to the question “how do I accelerate a Digital Business?”. It’s true that application platforms like Kubernetes in the Cloud do a lot to speed up the deployment of complex applications, but there are so many Kubernetes offerings out there — and so many places you can run them! It is difficult to figure out which would best suit your Enterprise. Unfortunately, we’ve also found that some of these choices bring hidden costs to the operation and deployment of your solution that aren’t obvious at first. These hidden costs eat away at the longer-term value of your solution.
As in economics in general, the “Local” economy (in this case, the cloud provider) can greatly affect your cost of doing business. Your goal is to create a feasible long-term solution — something that will last past this year, and hopefully, for many years to come. How can you provide a solution that’s technically sound while cost-effective over its entire life span? Even more, how can that solution address some hidden costs that will inevitably come up if you choose to ignore them?
As customer-facing practitioners, Sormita and Tony have a lot of experience with choices that make business sense. We have a track record of seeing and fixing these kinds of problems. Of course, it’s much cheaper to avoid them up front than after the problem has been deployed.
Even better, IBM has strategic partnerships with Azure and AWS, and a long relationship with Red Hat. IBM has always been rock-solid in building technology solutions that provide lasting value, often for decades, and now the IBM partnerships with Red Hat, Azure, AWS and others make the possibilities of extended value greater than ever. We’ll explore technical points alongside the business value behind them, helping you steer clear of costs you may not anticipate.
This paper will cover the points you should consider when putting together your mix of cloud providers and services, from a point of view of cost and value. We think we have a way to smooth out and mitigate local “hidden costs”, with a good overall architecture and design. We’ll talk about:
1. Platform and platform services — when to use what
2. Architectural Decision Points — decisions that keep your costs in line with the threats that are out there
Obstacles on the path — Hidden Costs
There are easy, well-known financial reasons for cloud deployments. Among them:
· The “rental” business model ensures incremental costs rather than large up-front expenses to provision the environment (OpEx vs. CapEx).
· Cloud allows your team to focus on the business aspects of your solution rather than the technical nuts and bolts of the underlying infrastructure.
· Cloud deployments allow you to shop around for cloud partners and services in a marketplace of providers.
· A certain level of High Availability and Disaster Recovery is more “built in”. Cloud providers know these are important and make those decisions throughout their service-provisioning process.
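The OpEx-versus-CapEx point is easy to make concrete with a little arithmetic. The sketch below is illustrative only; the dollar figures and the `breakeven_months` helper are our own invention, not any provider's pricing model:

```python
# Hypothetical comparison of up-front CapEx vs. pay-as-you-go OpEx.
# All dollar amounts are made-up illustrations, not real provider pricing.

def breakeven_months(capex_upfront, capex_monthly_ops, opex_monthly):
    """Months until cumulative cloud OpEx exceeds the data-center path.

    Returns None if the cloud never becomes more expensive
    (i.e., its monthly rate is at or below on-prem running costs).
    """
    delta = opex_monthly - capex_monthly_ops
    if delta <= 0:
        return None
    # Cloud catches up once months * delta exceeds the up-front hardware spend.
    return capex_upfront / delta

# Example: $120k of servers plus $2k/month to run them,
# versus $7k/month of equivalent cloud capacity.
months = breakeven_months(120_000, 2_000, 7_000)
print(round(months, 1))  # 24.0 — the cloud path is cheaper for two years
```

The point is not the exact numbers but that the "rental" model defers spend: whether that wins over the solution's whole life span depends on how long you run it.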
But as anyone who has used cloud platforms knows, cloud solutions are not always the cheapest on the face of it. On top of that, we see some hidden costs if you don’t make good decisions. These hidden costs include:
· Added operational personnel costs from teaching your operations teams to work with the technology differences of your cloud provider
· Security services that don’t align with the data confidentiality and audit needs of your workloads, and don’t expand easily with your workloads
· Supplemental services that you use but didn’t write; they provide value, but not equally, and with lots of overlap among them
· Platform differences that throw off your developers writing the solution in the first place
· Immature Sec/Dev/Ops support for your providers. The runtime is there, but how good are the pipelines to get your code to run, and how difficult is it to monitor and adjust its health?
· High Availability / Disaster Recovery (HA/DR) isn’t free, and it’s easy to misread your Recovery Time and Recovery Point Objectives (RTO/RPO) and over- or under-build, leaving either large daily costs or a large potential penalty for an outage
· Site Reliability Engineering (SRE) is a costly skill set that is ripe for automation
· Automation points in general can be difficult to discern in the cloud provider’s topology
… and so on. Plus, if you do decide to re-host your platform with another cloud provider, you can add refactoring costs and developer churn to that list as you adjust for the underlying service and costs differences.
Finally, add in your prior investments in your own data center: if the cloud provider doesn’t work out, can you ever bring the solution home to your own infrastructure? Think of it as cost-per-unit-of-work plus, where the plus is the halo of things you must do around the application to make it flexible and manageable.
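The over/under-build risk around RTO can be seen in a toy expected-cost model. Everything below (the `expected_annual_cost` helper, the outage rates, the dollar figures) is hypothetical, a sketch of the trade-off rather than a real pricing exercise:

```python
# Toy expected-cost model for sizing HA/DR (all numbers invented).
# Over-building means a high daily run rate; under-building means
# long recovery times whose downtime cost you eat when outages happen.

def expected_annual_cost(daily_dr_cost, outages_per_year,
                         rto_hours, downtime_cost_per_hour):
    """Annual DR run rate plus the expected downtime cost.

    Assumes each outage keeps you down for roughly your RTO.
    """
    run_rate = daily_dr_cost * 365
    expected_downtime = outages_per_year * rto_hours * downtime_cost_per_hour
    return run_rate + expected_downtime

# 'Hot' standby: $800/day, recovers in 1 hour.
# 'Cold' restore: $100/day, recovers in 12 hours.
hot = expected_annual_cost(800, 0.5, 1, 50_000)    # 292,000 + 25,000
cold = expected_annual_cost(100, 0.5, 12, 50_000)  # 36,500 + 300,000
print(hot < cold)  # True — here the expensive tier is actually cheaper
```

Run the same arithmetic with your own outage history and downtime cost per hour; the "right" tier flips depending on those inputs, which is exactly why misreading RTO/RPO is a hidden cost.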
OpenShift Cloud Economics for Complex AppDev
When we talk about complex application development, there are various building blocks that should be part of Application Lifecycle Management to streamline delivery in an Agile manner:
For each of these, there are various tools available on Azure and other cloud providers, but those tools do not provide platform independence; if the provider has to be switched for some reason in the future, the switch becomes a long-term project. Choosing OpenShift as your technology stack can help avoid such an unnecessary expense, as it provides all the tools required to build and maintain an Enterprise-grade application.
OpenShift is an ideal platform for this kind of complex application. It adds in things that you always do anyway, like automated deployment from a git repository, in a seamless way. It supports development on a microservice architecture with a chaotic release schedule. In short, for scenarios where multiple teams each own a service and its deployment while collaborating to build a single application, OpenShift is an ideal container orchestration platform: it safeguards the sanctity of each service while managing the complexity of the whole.
1. Infrastructure as Code: When an Enterprise Architecture spans clouds, it is important to code the infrastructure and maintain versions of it to avoid unforeseen disasters on any of the clouds. Ansible Tower on OpenShift helps implement Infrastructure as Code for complex architectures like ‘SAP on Azure’, ‘Data Analytics platform on Azure’, etc. It also helps execute post-deployment configuration scripts, eliminating the need for infrastructure administrators to duplicate or re-create an environment by hand.
2. Source-to-Image deployment as CI/CD workflow: Source-to-Image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. This eliminates the need to write Deployment Manifests or a Dockerfile for run-of-the-mill applications. It also provides the option to set up a service hook: whenever the files in your GitHub repository change, the hook triggers an S2I deployment on the cluster.
3. Red Hat OpenShift Database Access Service: Deploying and maintaining databases on a Kubernetes cluster has always been a challenging task because of the maintenance overhead it brings with it. RHODA is a capability in managed OpenShift which helps remove these obstacles by transforming database workloads on OpenShift into Database-as-a-Service (DBaaS). The Crunchy operator is a prime example of how a Postgres deployment can be made easy to consume and maintain on OpenShift. This gives you the option to put all the tiers of an application in the cluster, reducing latency as data flows from one tier of the application to the next.
4. Business Process Automation Framework: IBM Cloud Pak for Business Automation is a set of integrated, market-leading software designed to help you solve your toughest operational challenges. It is available as an operator in OpenShift, making setup easy for the end user.
5. Cluster Threat Security Framework: The Red Hat Advanced Cluster Security operator, teamed with the OpenSCAP operator on OpenShift, helps protect cluster resources from external security threats.
6. Service Endpoints Management: There are various offerings in the market which help manage and expose services to external entities. IBM API Connect is a best-in-class offering for the job: it is a complete, intuitive, and scalable API platform that lets you create, expose, manage, and monetize APIs across clouds. OpenShift provides good platform support for IBM API Connect, and IBM’s documentation walks through the installation steps.
When the above technology stacks are based entirely on OpenShift (either as operators or as classic deployment workloads), it is easier to maintain all the pieces of application management under one software platform. You write your application to target OpenShift and OpenShift alone. Later, if you change cloud providers, you can “lift and shift” your OpenShift cluster as is, without changing any of the existing code. This eliminates separate license and tech-stack management for the various technologies used in this complex application environment.
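The S2I service hook in point 2 boils down to mapping a push event onto a build. OpenShift implements this for you; the sketch below only illustrates the decision, and the repository, branch, and BuildConfig names are made up:

```python
# Minimal sketch of the GitHub service-hook logic behind an S2I trigger.
# OpenShift handles this natively; this just shows the idea of mapping
# a push event to a BuildConfig. Event shape is simplified, names invented.

WATCHED = {("myapp-repo", "main"): "myapp-buildconfig"}  # hypothetical names

def build_to_trigger(event):
    """Return the BuildConfig to start for a push event, or None."""
    repo = event["repository"]
    branch = event["ref"].rsplit("/", 1)[-1]   # 'refs/heads/main' -> 'main'
    return WATCHED.get((repo, branch))

push = {"repository": "myapp-repo", "ref": "refs/heads/main"}
print(build_to_trigger(push))  # myapp-buildconfig
```

A push to any unwatched branch simply returns nothing, which is why teams can commit freely on feature branches without kicking off cluster deployments.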
Tips to choose wisely
From our experience, we’ve arrived at some firm points-of-view on what works well. For example:
We think that a single cloud platform can’t fulfill all the needs of an Enterprise architecture. We believe that the “local economies” of the cloud providers bring too much complexity and hidden cost. But the nice thing about an overarching approach is that you can address that.
For example, OpenShift does give you the advantage of platform independence, but it is important to understand which workloads to put in which instance of OCP to take best advantage of OpenShift as well as the underlying infrastructure.
The following tech nuggets will help you make architectural decisions that economize your Enterprise architecture across a multi-cloud environment:
· OCP on IBM Cloud (commonly called ROKS) comes with varied infrastructure options, including Bare Metal servers. Key OCP features like Virtualization work particularly well on Bare Metal servers (where you control every part of the stack down to the operating system kernel), because performance and other critical OpenShift features ultimately depend on the underlying operating system infrastructure.
· Bare metal servers can also be used for workloads which need more computational power and faster I/O operations. If your enterprise includes components like IBM Cloud Pak for Data, ROKS is the ideal option for deployment.
· OpenShift’s virtualization feature, which lets you create virtual machines within the OpenShift cluster, gives you the option to lift a VM-based on-premise architecture into an OpenShift cluster whether or not it can be containerized. Choose this for those workloads that resist containerization.
· OCP on cloud providers like Azure or AWS comes with the added advantage of shifting peripheral operations (which can tolerate latency hits) outside the cluster to serverless components like Function Apps. This way, you can take advantage of cloud design patterns like CQRS (Command and Query Responsibility Segregation), and time-consuming operations like AI workloads (e.g., Natural Language Understanding) can be offloaded to a messaging queue like Azure Service Bus. The advantage is that the application remains responsive for the front-end user while the actual operation executes in the backend. This pattern also takes advantage of Azure’s serverless Consumption billing plan, where you pay only when the function executes.
· Databases on Kubernetes can be tricky because of operational complexity within Kubernetes. To alleviate this, we recommend the RHODA operator on OpenShift. Note, however, that this feature is available only on certain OpenShift offerings, for example OpenShift Dedicated (on AWS and GCP) and ROSA on AWS. If your architecture has a database deployed on Kubernetes, RHODA is the clear option to go for.
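The queue-offload pattern in the Function Apps bullet above can be sketched in a few lines. Here Python's `queue.Queue` stands in for a managed broker like Azure Service Bus, and a word count stands in for a slow NLU call; all names and payload shapes are our own:

```python
# Sketch of the offload pattern: the request handler acknowledges
# immediately, and a backend worker drains the queue later.
# queue.Queue is a stand-in for a managed broker like Azure Service Bus.
from queue import Queue

jobs = Queue()
results = {}

def handle_request(job_id, text):
    """Command side: enqueue the slow work, return right away."""
    jobs.put((job_id, text))
    return {"job": job_id, "status": "accepted"}   # the user sees this instantly

def worker_drain():
    """Backend worker: process everything queued so far."""
    while not jobs.empty():
        job_id, text = jobs.get()
        # Pretend this word count is an expensive NLU call.
        results[job_id] = {"words": len(text.split())}

ack = handle_request("j1", "analyze this customer complaint")
worker_drain()
print(ack["status"], results["j1"]["words"])  # accepted 4
```

With a real broker and a Consumption-plan function as the worker, the compute cost accrues only while `worker_drain`'s equivalent is actually executing, which is the billing advantage the bullet describes.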
Summary of OpenShift choices
The following table shows choices for OpenShift deployment, the risks each might carry, and the mitigations for those risks.
Detail on the Cloud choices
Having decided on a multi-cloud/hybrid-cloud strategy, it is important to keep a particular cloud platform as the focal point for managing critical components of an Enterprise Architecture.
The following points will help you decide which cloud should serve as that focal-point platform:
Shifting your data from an on-premise Data Warehouse to a Cloud Data Lake can help reduce your datacenter footprint, and it will also open up possibilities for advanced analytics on your data through best-in-class analytics components offered on the cloud (examples include Azure Synapse, Azure Databricks, GCP BigQuery, AWS SageMaker, and IBM Data Fabric and Watson). This architectural design can be far more cost-effective than the on-premise Data Warehouse setup because you can choose between on-demand clusters or a pay-per-job model when data is processed. Even more, the Cloud providers bring best-in-class security components that help safeguard data both at rest and in flight.
In OpenShift it is not necessary to back up the cluster itself; instead, you back up the “active state” of your resources, any Persistent Volumes, and any backing services. For example, on ARO you can reduce downtime for your mission-critical workloads with Velero, a reliable open-source backup and disaster-recovery tool.
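The "back up state, not machines" idea can be sketched as a filter over resource kinds. The records and the kind list below are simplified stand-ins for what a tool like Velero actually enumerates:

```python
# Sketch of the backup-scope idea: capture active resource state and
# Persistent Volumes rather than the cluster machines themselves.
# These records are simplified stand-ins, not the real Velero object model.

BACKUP_KINDS = {"Deployment", "Service", "ConfigMap", "Secret",
                "PersistentVolumeClaim", "PersistentVolume"}

def backup_set(resources):
    """Keep only the resources a Velero-style backup would capture."""
    return [r for r in resources if r["kind"] in BACKUP_KINDS]

cluster = [
    {"kind": "Node", "name": "worker-1"},          # machine: not backed up
    {"kind": "Deployment", "name": "web"},
    {"kind": "PersistentVolume", "name": "db-data"},
]
print([r["name"] for r in backup_set(cluster)])  # ['web', 'db-data']
```

Because the machines themselves are disposable, restoring this captured state onto a freshly provisioned cluster is what drives recovery time down.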
Automation components in Runbook Automation help implement AIOps seamlessly, based on triggers and thresholds in Log Analytics, teamed with Azure Arc so that logs from an Enterprise spanning multi-cloud or hybrid-cloud environments can be pooled in one place. Unlike multi-cloud management in other clouds, in Azure the cost accrued is based on the retention of the logs in the Log Analytics Workspace. This setup reduces the investment in maintenance staff for your multi-cloud/hybrid-cloud workloads.
Azure Cost Management lets you track your spend across a multi-cloud environment. Budgets and alerts in Cost Management help you plan for and drive organizational accountability by letting you set spend limits and alert thresholds. This creates a single control plane for your financial spend on cloud services.
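The budget-and-alert mechanics reduce to comparing pooled spend against threshold fractions of a budget. Cost Management does this for you; the `alerts` helper and the figures below are invented for illustration:

```python
# Sketch of a single cost control plane: pool spend from several clouds
# and report which budget thresholds have been crossed.
# Spend figures, budget, and thresholds are all invented examples.

def alerts(spend_by_cloud, budget, thresholds=(0.5, 0.8, 1.0)):
    """Return the budget fractions the combined spend has reached."""
    total = sum(spend_by_cloud.values())
    return [t for t in thresholds if total >= t * budget]

spend = {"azure": 4_200, "aws": 2_600, "ibm": 1_500}   # monthly, USD
print(alerts(spend, budget=10_000))  # [0.5, 0.8] — 83% of budget used
```

The value of running this over *combined* spend is the point of the bullet: per-provider dashboards each look fine at 30% while the organization as a whole is already past its 80% threshold.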
Takeaway — and more to come
As you fulfill the promise of Digital Business, it’s good to keep that same business focus as you extend your solution to the cloud. Understanding the hidden costs of various platform and service decisions before you build can help you ensure that long term value.
We’d like to thank our Sponsors, Dean Ferrogari, Sri Deekshitulu, Sailesh Valecha, and Sridhar Iyengar.