In the development world, containerized applications can be a place of tranquility or of organized chaos; it all depends on the procedures an organization puts in place before things get out of hand. The popular platform Docker helps development teams create a consistent environment for their applications across multiple machines. It solved two problems for developers. First, it allowed a developer to write a program on a laptop with confidence that the same program would also run on another machine with different hardware. Second, it allowed microservices to be efficiently isolated from one another, yet work seamlessly together to complete business processes. However, Docker alone does not solve every problem.
Even with Docker in place, development teams still need to manage their containerized applications as workloads scale. This is the job of a container orchestration platform: Kubernetes, along with managed offerings of it such as Google Kubernetes Engine (GKE). It systematically manages the activity of each application, creating a more stable working environment with consistent performance. Within minutes, developers can spin up multiple clusters with multiple tenants, work on numerous projects, and oversee microservices at the same time.
As time goes on, many organizations end up with a large number of tenants and clusters that lack consistent policy, access control, version control, and so on, because they did not make the necessary preparations as the business scaled. Because each cluster is isolated from the others, it is almost as if each development team were running its own private company, with its own policies and access controls, inside a separate working environment under the same roof. The problem gets even more complicated when one developer needs access to multiple clusters. With this widespread inconsistency, pushing updates across Kubernetes as a whole becomes nearly impossible.
So, how exactly can you spin up tenants and clusters while maintaining a consistent environment across the system? Frankly, there are so many ways to structure Kubernetes that it is almost impossible to give one answer that works for every organization. But there are various models and management tools you can refer to that will help you create a system that works.
The best place to start is a proper evaluation of your current organization and how it functions. In the enterprise model, all users access the Kubernetes application through one API server, or control plane. The Software as a Service (SaaS) model allows multiple users to access the application through different control planes. Lastly, there is Kubernetes as a Service (KaaS), where access goes through one control plane but incoming workloads arrive from many different sources. Only after this careful examination can you effectively create a "game plan" for managing multi-tenancy.
Multi-Clusters & Namespaces
With Kubernetes multi-tenancy, you are essentially carving one system into multiple functional parts. You can do this by creating an isolated cluster or a namespace for each tenant. Both set-ups are valid and will allow your organization to complete its workload, but again, you must choose the option that best fits your organization through analysis. The more complex your organization is, the greater the need for organization and isolation of workloads.
Every cluster runs within a virtual private cloud (VPC), residing in complete isolation from other portions of the Kubernetes platform. When you spin up a cluster, you can be very productive in this environment without many additional alterations. This is especially true when a program is not yet ready for production; such a setting is ideal for development and testing. However, a dilemma arises when you have many users within the same cluster: it becomes very difficult to separate the work of multiple teams. You may even find yourself overwriting another development team's code before you realize the mistake, causing project deadlines to be missed. It is better to create multiple namespaces inside the cluster to increase organization.
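As a minimal sketch, a namespace for a single team can be declared with a short manifest (the name "team-a" is illustrative):

```yaml
# Declares an isolated namespace for one tenant team.
# The name and label are hypothetical examples.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    team: team-a
```

Applying this manifest with kubectl gives the team its own slice of the cluster, against which quotas and policies can later be scoped.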
Namespace Isolation Management
Once a cluster is established, a namespace named "default" is created automatically. A namespace is an isolated area of a cluster that provides security, order, and improved performance. Typically, this type of set-up is designed for enterprise businesses, which can sometimes have over 100 clusters, each with its own namespaces. The benefit of having multiple namespaces is consistent policies, configuration, and controls across the shared control plane.
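One common way to keep per-namespace controls consistent is a resource quota. A sketch, assuming a "team-a" namespace already exists (the limits shown are arbitrary examples):

```yaml
# Caps the total resources one tenant namespace may request,
# so no single team can starve the shared cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU requested across all pods
    requests.memory: 8Gi     # total memory requested across all pods
    pods: "20"               # maximum number of pods in the namespace
```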
Creating policies that can be distributed evenly across multiple clusters is critical. Installing a policy importer helps with this need: it communicates with your source-of-truth platform, accurately retrieves the policy data stored there, and applies that information to each of your clusters, giving the system a form of regulation. The source-of-truth platform is the centralized location where policy is created based on the organization's needs and standards. Policies can be created per cluster or per namespace, allowing system administrators to make alterations as needed.
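A policy held in the source of truth can be an ordinary Kubernetes manifest. As an illustrative example, a NetworkPolicy that keeps traffic out of a team's namespace unless it originates there might look like this (the "team-a" namespace is hypothetical):

```yaml
# Restricts ingress so that pods in team-a only accept traffic
# from other pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: team-a
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # only pods within team-a may connect
```

Because it is just a file, a manifest like this can live in the source-of-truth repository and be synced identically into every cluster.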
One of the most popular approaches to maintaining a source of truth is GitOps, a term coined by the company Weaveworks. With GitOps, a Git repository holds the declarative configuration for Kubernetes and cloud-native applications, and supporting tools deploy, monitor, and automate your workflow from it. This approach allows role-based access control (RBAC) policies for the Kubernetes API server to be kept in Git and then applied to each cluster to maintain security standards.
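As a sketch of what such an RBAC policy might look like in Git, the following Role and RoleBinding grant a hypothetical "team-a-developers" group day-to-day rights in one namespace:

```yaml
# Grants read/write access to pods and deployments in team-a.
# Namespace and group names are illustrative assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments"]
    verbs: ["get", "list", "watch", "create", "update"]
---
# Binds the developer role to a group of users.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```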
Popular GitOps tools you can take advantage of include Flux and Argo CD, both of which continuously reconcile cluster state against the configuration stored in Git.
Google Cloud Platform (GCP)
Google Cloud Platform (GCP) is another source-of-truth platform that can be used with Kubernetes. Within this platform, Identity and Access Management (IAM) provides a hierarchy of the organization root, folders, projects, and resources. You can establish identities, groups, and roles for users on the platform, which limits their permissions within the system. For example, you can restrict a user's ability to start, stop, and delete instances in the cloud, or to obtain highly sensitive data.
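IAM policy for a project is itself declarative. As a hedged sketch, with hypothetical group and user names, a set of policy bindings has roughly this shape (similar to what `gcloud projects get-iam-policy` returns):

```yaml
# Example IAM bindings for one project: a team group may only view
# Compute Engine resources, while a single admin may manage instances.
# All member addresses are illustrative.
bindings:
  - members:
      - group:team-a@example.com
    role: roles/compute.viewer
  - members:
      - user:admin@example.com
    role: roles/compute.instanceAdmin.v1
```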
A project within the GCP hierarchy sits at the same level of tenancy as a Kubernetes namespace. This parallel allows you to create an association between a project in GCP and a namespace in Kubernetes. The IAM policy can then be kept in sync with the Kubernetes RBAC policy, creating the consistency you are seeking across multiple clusters.
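One way to realize this association, as a sketch: bind the same Google group used in IAM to a built-in Kubernetes role inside the matching namespace (the group and namespace names are hypothetical):

```yaml
# Gives the team-a@example.com group the built-in "edit" role,
# but only inside the namespace paired with its GCP project.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
  - kind: Group
    name: team-a@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit               # Kubernetes default ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

With the same group named on both sides, membership changes made in IAM carry over to the cluster's access rules.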
The resource level of the GCP hierarchy is where you place the organization's microservices. Online order processing, consumer accounts, and the like can all be managed in that isolated area of the platform.
Given the complexity of modern hybrid infrastructures, it is hard to prescribe the exact process each organization should follow to manage its multi-tenancy platform effectively. What you can do, for certain, is take advantage of the available platforms and tools to regulate policies across all of your clusters.