In this article, youʼll learn about one of those platforms, Google Kubernetes Engine (GKE), from both a theory and an implementation perspective.
Before diving into where GKE fits and how to implement it from a hands-on perspective, letʼs take some time to discuss why youʼd want to use a Managed Kubernetes Service and where the differentiators are between services.
Managed Kubernetes Services give you the ability to abstract away some of the complexities of Kubernetes. One of the biggest pieces is the Control Plane. In Kubernetes, the Control Plane is where you find the “brains of the operation” like the API Server. The Worker Nodes, where the containers/Pods actually run, are much easier to manage, which is why cloud providers abstract away the Control Plane and leave the Worker Nodes to you.
💡 There are certain services like GKE Autopilot and AWS EKS with Fargate that abstract both the Control Plane and the Worker Nodes.
With that abstraction, engineers become more efficient: they can focus on value-driven business needs (the application stack) instead of the pieces of Kubernetes that are drastically important for the cluster to work properly, but arenʼt necessarily pieces they need to care about day to day.
The key thing to remember is even though itʼs all Kubernetes underneath the hood, each cloud provider will sprinkle some of their secret sauce on top. For example, with AKS, using Azure Active Directory/Entra is super easy and works out of the box. Can Azure AD also work with GKE? Absolutely. Does it work out of the box seamlessly? No, thereʼs some setup youʼll need to do. The point is that these managed services are Kubernetes, but theyʼre altered.
Kubernetes was created at Google, so itʼs fairly safe to say that Googleʼs Managed Kubernetes Service is incredibly good. It has everything from a really strong security posture, to the ability to manage Kubernetes clusters running in other clouds and on-prem, to services that help you migrate your application stacks from Virtual Machines (VMs) to containers.
The biggest setback when it comes to GKE is the overall adoption of Google Cloud. Although GCP is great, its adoption continues to trail Azure and AWS. The good news is that multi-cloud solutions are becoming more and more popular, which means GKE could be the only service you use in GCP. Thatʼs good for every cloud provider, because it lets teams adopt the best service from each cloud without committing to a single platform.
GKE is one of the best Managed Kubernetes Services due to its extensibility and its “manage your other k8s clusters here” option.
Now that you know a bit about why youʼd want to use Managed Kubernetes Services and where GKE stands out, itʼs time to learn how to create a GKE cluster. In this section, youʼll learn how to do it with the UI. In the next section, youʼll learn how to create the cluster with the CLI.
First, log into the GCP portal and search “gke”. Youʼll see a pane pop up that looks like the screenshot below.
Click the blue + CREATE button.
💡 If you click the “CREATE” button below, itʼll automatically take you to the cluster creation for Autopilot, which is probably not what you want when youʼre just getting started with GKE.
Next, choose the standard cluster creation.
Autopilot is one of the services that abstracts away both the Control Plane and the Worker Nodes. There are some nuances to using this type of service, so you should do some research before committing to it. For example, you may not be able to use third-party services like Istio for Service Mesh, depending on the flavor of “100% managed” you go with (like EKS Fargate).
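For comparison, Autopilot clusters are created with a separate `gcloud` subcommand rather than the standard one. A minimal sketch, where the cluster name and region are assumed values:

```shell
# Autopilot: GCP manages both the Control Plane and the Worker Nodes,
# so there are no node pool or machine type flags to pass.
# "my-autopilot-cluster" and "us-central1" are assumed values.
gcloud container clusters create-auto my-autopilot-cluster \
  --region us-central1
```

Notice how few options there are compared to a standard cluster; thatʼs the trade-off of the fully managed model.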
Youʼll now see a screen that has a bunch of options ranging from the default node pool to all of the security, automation, networking, and backups you may need.
The Cluster basics pane is the most important one to get started with, as this is where youʼll specify the cluster name, region, and Kubernetes API version.
Two other important panes are:
- Automation, where you set up your maintenance window, auto provisioning for nodes, and the autoscaling profile.
- Security, where you configure everything from RBAC to secrets management to client certs and everything in between.
Once youʼve gone through all of the options to create a GKE cluster and are comfortable with your selections, you can click the blue CREATE button on the bottom of the page.
Once the cluster is created, you should see that it is up and operational.
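You can also verify the cluster from your terminal. A quick sketch, assuming the cluster is named `my-gke-cluster` in `us-central1` (swap in your own values):

```shell
# Pull kubeconfig credentials for the new cluster so kubectl can reach it.
# Cluster name and region are assumed values.
gcloud container clusters get-credentials my-gke-cluster \
  --region us-central1

# Confirm the Worker Nodes are registered and in a Ready state.
kubectl get nodes
```

If `kubectl get nodes` lists your nodes as `Ready`, the cluster is operational.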
Letʼs now see how to do the same thing programmatically with the CLI/SDK.
Configuring a cluster in the UI, as with any UI, can be a slower process. Itʼs still very important to do, though, because you canʼt automate something and create a repeatable process if you donʼt know how to do it manually.
With that being said, now that you know the manual process, letʼs take a look at how to automate it.
Because IaC tools like Terraform providers and SDKs exist, you can create the cluster programmatically. One way to do it is with the gcloud CLI.
A single gcloud command can create the cluster.
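A minimal sketch of a standard cluster creation; the cluster name, region, node count, and machine type are assumed values you should replace with your own:

```shell
# Create a standard (non-Autopilot) GKE cluster.
# All values below are assumptions for illustration.
gcloud container clusters create my-gke-cluster \
  --region us-central1 \
  --num-nodes 2 \
  --machine-type e2-standard-2 \
  --release-channel regular
```

With `--region`, GKE creates a regional cluster and spreads the `--num-nodes` count across each zone in the region, so the total node count will be higher than 2.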
If you want more options for the CLI configuration, there are a plethora of flags you can use which you can find at the link below:
https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
One very important set of flag options to point out is for NVIDIA GPUs. GPUs are becoming more and more necessary in Kubernetes as the need for AI/LLM workloads increases, and GKE is one of the leaders in this space when it comes to making GPUs operational on Kubernetes.
As you can see in the screenshot below, you can use NVIDIA GPUs on GKE and do everything from specifying a particular GPU type to sharing a single GPU across multiple Pods.
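As a sketch of what that looks like on the CLI, the example below adds a GPU node pool with time-sharing enabled so multiple Pods can share one GPU. The pool name, cluster name, region, machine type, and GPU type are all assumed values:

```shell
# Add a GPU node pool to an existing cluster with GPU time-sharing.
# All names and values below are assumptions for illustration.
gcloud container node-pools create gpu-pool \
  --cluster my-gke-cluster \
  --region us-central1 \
  --machine-type n1-standard-4 \
  --accelerator "type=nvidia-tesla-t4,count=1,gpu-sharing-strategy=time-sharing,max-shared-clients-per-gpu=2"
```

Here `gpu-sharing-strategy=time-sharing` lets up to `max-shared-clients-per-gpu` Pods time-slice the same physical GPU, which can significantly cut costs for bursty inference workloads.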
Congrats! Youʼve successfully set up a GKE cluster both in the UI and on the CLI utilizing programmatic practices.