Multicluster Istio on GKE
Overview
In this example we’ll create a single Istio mesh across multiple regionally separated GKE clusters. Once set up, we’ll demonstrate the installation using Istio’s BookInfo application.
While you can accomplish this on your laptop, I’ll be demonstrating through Google Cloud Shell, which already includes many of the tools we’ll be using, as well as a standard environment we can all work from.
If you’re new to GCP, Google offers a $300 free credit as well as a generous Free Tier to get you started. So create your project and let’s get started.
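If you don’t already have a project, something along these lines will create one and make it active (the project ID below is just a placeholder; pick your own globally unique ID, and make sure billing is enabled before creating GKE clusters):
# Hypothetical project ID; replace with your own
gcloud projects create my-istio-mc-demo
gcloud config set project my-istio-mc-demo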
Environment Variables
First things first, let’s ensure we’ve set some key variables in our environment.
export PROJECT=$(gcloud config get-value project)
gcloud config set project $PROJECT
Get the Installs
Next let’s pull down some of the tools we’ll be using throughout the process. Obviously we’ll need Istio, but we’ll also pull down Helm to help with the install. Since we’re going to be working with multiple clusters, we’ll use a tool called kubectx to make switching between contexts easier.
# Add a common bin to your path
export WORK_DIR=$HOME/mcIstio
mkdir -p $WORK_DIR/bin
PATH=$WORK_DIR/bin/:$PATH
# Install Helm
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh &> /dev/null
cp /usr/local/bin/helm $WORK_DIR/bin/
rm ./get_helm.sh
# Download Istio
export ISTIO_VERSION=1.0.2
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=$ISTIO_VERSION sh -
cp istio-$ISTIO_VERSION/bin/istioctl $WORK_DIR/bin/.
mv istio-$ISTIO_VERSION $WORK_DIR/istio
# Install kubectx
curl -LO https://raw.githubusercontent.com/ahmetb/kubectx/master/kubectx
chmod +x kubectx
mv kubectx $WORK_DIR/bin/.
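As a quick, optional sanity check, you can confirm the tools we just staged are the ones on our PATH before moving on:
# Verify the binaries resolve from $WORK_DIR/bin
which helm istioctl kubectx
helm version --client
istioctl version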
Provision the GKE Clusters
Since we’ll be creating multiple clusters, we’ll simplify the setup by wrapping the command in a function and then calling it with the name and location of each cluster we want to provision.
The provision_cluster function will do a few things. Of course it will provision the GKE cluster in your project, but it will also set up the initial RBAC bindings, pull down the context credentials, and rename the context for easier use.
The cluster create command includes a couple of key settings. We’re specifying num-nodes of 4 to ensure the control cluster has the initial capacity it needs. While the remote clusters won’t need this much, it’s simpler to apply the same setting to all clusters.
We’ve also added enable-ip-alias. This is an important flag for this demo. It essentially tells GKE to give each Pod its own routable IP, which will matter as we begin to make calls from one cluster to another with our service mesh.
While we’re talking about routing, it’s worth noting a bit about the network structure here. You can customize the networking, but we’ll be using the default network in this example. GCP networks are global, but each region has its own subnets and IP ranges within the default network. And since we’re using IP aliasing, we can be sure the Pods from one cluster won’t share an IP range with Pods from another cluster.
OK, enough talk, let’s get this spun up.
provision_cluster() {
CLUSTER_NAME=$1
CLUSTER_ZONE=$2
gcloud container clusters create ${CLUSTER_NAME} \
--cluster-version=latest \
--zone=${CLUSTER_ZONE} \
--num-nodes "4" \
--enable-ip-alias \
--scopes cloud-platform,logging-write,monitoring-write,pubsub,trace \
--enable-cloud-logging \
--enable-cloud-monitoring
gcloud container clusters get-credentials ${CLUSTER_NAME} \
--zone ${CLUSTER_ZONE}
kubectx ${CLUSTER_NAME}=gke_${PROJECT}_${CLUSTER_ZONE}_${CLUSTER_NAME}
kubectx ${CLUSTER_NAME}
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
}
provision_cluster central us-central1-a
provision_cluster west us-west1-b
provision_cluster east us-east1-b
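To confirm the earlier point about non-overlapping Pod ranges, you can list the per-cluster CIDRs once provisioning completes; the three clusterIpv4Cidr values should all be distinct:
gcloud container clusters list \
  --format='table(name,clusterIpv4Cidr)'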
Create the firewall
I noted in the previous section that the Pods will have their own routable IP addresses, which allows direct access from within the network. However, we still need to configure a firewall to allow traffic between the clusters.
While it would be easy to just allow all traffic, we’ll be a bit more specific by identifying the source ranges of our cluster and Pod IPs as well as providing specific target tags. To do this, we’ll pull those values and join them into their own variables before passing them into the firewall create command.
# Simple Join Function
function join_by { local IFS="$1"; shift; echo "$*"; }
# Grab Cluster IP ranges
ALL_CLUSTER_CIDRS=$(gcloud container clusters list \
--format='value(clusterIpv4Cidr)' | sort | uniq)
ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
# Grab all tags
ALL_CLUSTER_NETTAGS=$(gcloud compute instances list \
--format='value(tags.items.[0])' | sort | uniq)
ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
# Create the firewall
gcloud compute firewall-rules create istio-multicluster-pod-fw \
--allow=tcp,udp,icmp,esp,ah,sctp \
--direction=INGRESS \
--priority=900 \
--quiet \
--source-ranges="${ALL_CLUSTER_CIDRS}" \
--target-tags="${ALL_CLUSTER_NETTAGS}"
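If you want to double-check what was created, describing the rule will echo back the source ranges and target tags we just passed in:
gcloud compute firewall-rules describe istio-multicluster-pod-fw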
Install Istio on Control Cluster
Awesome, we’ve got the clusters up and running and ready to talk to each other. It’s time to focus on Istio.
For the multicluster Istio example there are two types of installs we’ll be performing: a standard Istio install for the control cluster and a lightweight install for the remaining clusters. The install we’re doing now is the standard Istio install, and it will act as our control plane in the final configuration.
# Ensure you’re on central
kubectx central
# Install Helm on control plane
kubectl apply -f $WORK_DIR/istio/install/kubernetes/helm/helm-service-account.yaml
helm init --wait --service-account tiller
kubectl create ns istio-system
helm install $WORK_DIR/istio/install/kubernetes/helm/istio \
--name istio \
--namespace istio-system
kubectl label namespace default istio-injection=enabled
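Give the control plane a minute or two and make sure its components (pilot, policy, telemetry, citadel, the ingress gateway, and so on) all reach Running before moving on:
kubectl get pods -n istio-system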
Install Istio on Remote Clusters
Now let’s focus on the other, remote clusters. While there seems to be a lot of code here, we’re really only doing three things.
Control Config Details
We’re generating a config listing all the IP addresses for the control cluster components. This will be provided to the remote clusters so they know how to communicate with the central control plane.
Install Istio Remote
Next we’ll install Istio on the remote clusters. This is a scaled-back version containing only Citadel for security and the sidecar injector. These components use the values for the control plane provided in the earlier step.
CA Data
To enable secure communications between the clusters we need to share the certificate details with the control cluster. To do this we’ll first pull the remote cluster cert details, then switch to the control cluster and store that value as a secret.
Again, since we’re doing the same thing for multiple clusters, I’ve wrapped this into a function so we don’t have to repeat it over and over.
# Switch to the control cluster
kubectx central
# Get the control plane values
export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy -o jsonpath='{.items[0].status.podIP}')
export STATSD_POD_IP=$(kubectl -n istio-system get pod -l istio=statsd-prom-bridge -o jsonpath='{.items[0].status.podIP}')
export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')
export ZIPKIN_POD_IP=$(kubectl -n istio-system get pod -l app=jaeger -o jsonpath='{range .items[*]}{.status.podIP}{end}')# Generate template pointing to control
helm template ${WORK_DIR}/istio/install/kubernetes/helm/istio-remote \
--namespace istio-system \
--name istio-remote \
--set global.remotePilotAddress=${PILOT_POD_IP} \
--set global.remotePolicyAddress=${POLICY_POD_IP} \
--set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
--set global.proxy.envoyStatsd.enabled=true \
--set global.proxy.envoyStatsd.host=${STATSD_POD_IP} \
--set global.remoteZipkinAddress=${ZIPKIN_POD_IP} > ${WORK_DIR}/istio-remote.yaml
# Configure Remote Clusters
create_remote_cluster() {
CLUSTER_NAME=$1
kubectx ${CLUSTER_NAME}
# Install Istio Remote
kubectl create ns istio-system
kubectl apply -f ${WORK_DIR}/istio-remote.yaml
kubectl label namespace default istio-injection=enabled
# Get Cert details from Remote
export KUBECFG_FILE="${WORK_DIR}/${CLUSTER_NAME}_secrets-file"
export SERVER=$(kubectl config view --minify=true -o "jsonpath={.clusters[].cluster.server}")
export NAMESPACE=istio-system
export SERVICE_ACCOUNT=istio-multi
export SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} \
-n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
export CA_DATA=$(kubectl get secret ${SECRET_NAME} \
-n ${NAMESPACE} -o "jsonpath={.data['ca\.crt']}")
export TOKEN=$(kubectl get secret ${SECRET_NAME} \
-n ${NAMESPACE} -o "jsonpath={.data['token']}" | base64 --decode)# create a Secrets file with the value we just pulled
cat <<EOF > ${KUBECFG_FILE}
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: ${SERVER}
  name: ${CLUSTER_NAME}
contexts:
- context:
    cluster: ${CLUSTER_NAME}
    user: ${CLUSTER_NAME}
  name: ${CLUSTER_NAME}
current-context: ${CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: ${CLUSTER_NAME}
  user:
    token: ${TOKEN}
EOF
# Store Remote Cert data on Control cluster
kubectx central
kubectl create secret generic ${CLUSTER_NAME} \
--from-file ${KUBECFG_FILE} -n ${NAMESPACE}
kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true \
-n ${NAMESPACE} --overwrite=true
}
create_remote_cluster west
create_remote_cluster east
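With both remotes registered, a quick way to confirm the control plane can see them is to look for the secrets we just labeled on central; you should see one entry per remote cluster:
kubectx central
kubectl get secrets -n istio-system -l istio/multiCluster=true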
Deploy Apps
Fantastic, you’ve got all the hard work done, now it’s time to play!
cd $WORK_DIR
git clone https://github.com/cgrant/mcIstio-bookinfo bookinfo
cd bookinfo
Single Cluster Deploy
Let’s just deploy this all to the central cluster first to ensure the system is set up correctly.
Deploy the BookInfo app
kubectx central
kubectl apply -f ./istio-manifests
kubectl apply -R -f ./kubernetes
Wait for all services to come up, then hit ctrl+c to escape back to the prompt.
watch istioctl proxy-status # ctrl+c to exit
Review the app through the browser. Open the resulting URL in your browser window
# get the ingress endpoint
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Open the web app
echo http://$INGRESS_HOST/productpage
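If you’d like a quick smoke test from the shell first, curl the product page and pull out the HTML title (assuming the load balancer IP has finished provisioning):
curl -s http://$INGRESS_HOST/productpage | grep -o "<title>.*</title>"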
Deploy on multiple clusters
Now that we’ve got it all running in central, let’s delete some pieces and move them to the other clusters.
# Review whats currently running
kubectl get deployments
# Delete details and reviews-v3
kubectl delete deployments details-v1
kubectl delete deployments reviews-v3
Verify details-v1 and reviews-v3 are no longer active.
watch istioctl proxy-status # ctrl+c to exit
echo http://$INGRESS_HOST/productpage
Deploy on East and West
kubectx east
kubectl apply -f ./kubernetes/services
kubectl apply -f ./kubernetes/deployments/details-v1.yaml
kubectx west
kubectl apply -f ./kubernetes/services
kubectl apply -f ./kubernetes/deployments/reviews-v3.yaml
Switch back to central and watch everything come up
kubectx central
watch istioctl proxy-status
echo http://$INGRESS_HOST/productpage
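A quick way to confirm where each workload actually landed is to loop over the contexts and list the Pods; details-v1 should now be running on east and reviews-v3 on west:
for ctx in central west east; do
  echo "--- ${ctx} ---"
  kubectx ${ctx}
  kubectl get pods -o wide
done
kubectx central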
Voila! You are a multicluster Master!!!!
Clean Up
Be sure to tear down your clusters to avoid additional costs.
delete_cluster() {
CLUSTER_NAME=$1
CLUSTER_ZONE=$2
# Delete the clusters
gcloud container clusters delete ${CLUSTER_NAME} \
--zone=${CLUSTER_ZONE} \
-q --async
# Delete the kubectx context
kubectx -d ${CLUSTER_NAME}
}
delete_cluster central us-central1-a
delete_cluster west us-west1-b
delete_cluster east us-east1-b
gcloud compute firewall-rules delete istio-multicluster-pod-fw
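Cluster deletion runs asynchronously (we passed --async), so it may take a few minutes to finish; you can confirm everything is gone with:
gcloud container clusters list
gcloud compute firewall-rules list --filter="name=istio-multicluster-pod-fw"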