GKE | Google Kubernetes Engine
GKE and DoiT
Organizations that harness GKE successfully can scale up their applications massively without experiencing any decline in stability, speed or security. But although GKE reduces the complexity of Kubernetes, it still presents a learning curve for DevOps teams.
DoiT has broad, deep expertise in GKE. Let your team focus on creating and enhancing what delivers the most business value for your organization, and lean on our team for the experience and expertise to build, scale and evolve your cloud applications.
“Placing our bet on Kubernetes and GCP was a key strategic decision that has paid off. With the help of DoiT International and Google Cloud, SecuredTouch now has GKE clusters that are smoothly handling all crucial aspects of resource management optimization, such as horizontal auto-scaling and preemptible node-pools.”
Ran Wasserman
CTO, SecuredTouch
Understanding Kubernetes
Working with GKE involves common Kubernetes operations such as:
Creating and resizing clusters
Creating pods, replication controllers, jobs, services and load balancers
Resizing application controllers
Updating, upgrading and debugging clusters
Developers often use GKE to build and test enterprise applications, for example, while IT administrators might use it to scale workloads and improve performance.
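As a hedged illustration of the cluster-level items above, and assuming the google-cloud-container Python client with placeholder project, cluster and node pool names, listing clusters and resizing a node pool might look roughly like this:

```python
# Minimal sketch using the official google-cloud-container client.
# The project, location, cluster and node pool names are placeholders.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# List all GKE clusters in the project, across every location ("-").
response = client.list_clusters(
    request={"parent": "projects/my-project/locations/-"}
)
for cluster in response.clusters:
    print(cluster.name, cluster.current_node_count, cluster.status)

# Resize a node pool to three nodes (one way of "resizing a cluster").
node_pool = (
    "projects/my-project/locations/us-central1/"
    "clusters/my-cluster/nodePools/default-pool"
)
operation = client.set_node_pool_size(
    request={"name": node_pool, "node_count": 3}
)
print(operation.status)
```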
Essentially, GKE is a group of Google Compute Engine instances running Kubernetes. With it, you can group multiple containers into pods, which represent logical groups of related containers, and manage them through workloads such as jobs. Because access to containers is disrupted if a pod of related containers becomes unavailable, most containerized applications need redundancy so that pod access is never compromised. A replication controller in GKE keeps the desired number of pod replicas running at all times.
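As a sketch of that idea, using the official Kubernetes Python client against an existing GKE cluster (the names, labels and image below are illustrative), a replication controller that keeps three identical pods running could be declared like this:

```python
# Minimal sketch: a replication controller that keeps three identical
# nginx pods running. Names, labels and the image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig / gcloud credentials
core_v1 = client.CoreV1Api()

rc = client.V1ReplicationController(
    metadata=client.V1ObjectMeta(name="web-rc"),
    spec=client.V1ReplicationControllerSpec(
        replicas=3,                # desired number of pod duplicates
        selector={"app": "web"},   # pods this controller manages
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
core_v1.create_namespaced_replication_controller(namespace="default", body=rc)
```

On current clusters a Deployment usually fills this role, but the principle of keeping a fixed number of replicas running is the same.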
Pods, in turn, can be grouped into services, which means non-container-aware applications can reach other containers without additional code. For example, if the pods you use to process data from a client system are exposed as a service, the client system can use any of those pods at any time; it doesn't matter which pod performs the task.
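To make that concrete, here is a minimal sketch under the same assumptions and placeholder names as above, exposing the pods labelled app=web behind a single service address:

```python
# Minimal sketch: expose the pods labelled app=web behind one stable
# service address. Name, ports and the LoadBalancer type are placeholders.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # any pod with this label can serve a request
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="LoadBalancer",      # on GKE this provisions an external load balancer
    ),
)
core_v1.create_namespaced_service(namespace="default", body=svc)
```

Client systems then address web-svc rather than individual pods, and Kubernetes routes each request to whichever matching pod is available.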
Related Resources
Workload Identity for GKE: Analyzing common misconfiguration
Discover GKE Workload Identity Analyzer, a tool DoiT developed to analyze workloads running in GKE and ensure Workload Identity is configured properly.
Controlling the Config Connector version on your GKE cluster
Config Connector is a great tool for managing Google Cloud resources using Kubernetes manifests. We show you how to achieve enhanced control by replacing the add-on installation with a manual one.
Global-Scale Scientific Cloud Computing with Kubernetes and Terraform (1/2)
We demo a working example that executes real-world scientific computing (bioinformatics) workloads using several modern DevOps principles on GCP and AWS.