Monday, December 21, 2020

Using Tanzu Service Manager to expose Tanzu Postgres services to Cloud Foundry

A few products under the Tanzu brand have recently become generally available: Tanzu Postgres for Kubernetes and Tanzu Service Manager. Together they are a very useful combination, making the Postgres database available to applications running in a Cloud Foundry (or Tanzu Application Service - TAS) environment.

So what are these products and how do they operate? Firstly, Tanzu Postgres: this is simply a deployment of the OSS Postgres database with the support of VMware. Under the Pivotal brand there has been a commercial offering of Postgres on virtual machines for many years, and in recent times the engineering effort has gone into containerising this wonderful database so that it can run on the Kubernetes orchestration engine.

The distribution is split into a couple of components, the first being an operator which manages the lifecycle of a Postgres instance. This makes the creation of an instance as simple as applying a YAML configuration file to Kubernetes.
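
As a rough illustration, creating an instance looks something like the sketch below. The apiVersion, kind and field names vary between operator releases, so treat these as assumptions rather than a definitive reference.

apiVersion: sql.tanzu.vmware.com/v1
kind: Postgres
metadata:
  name: postgres-sample
spec:
  memory: 800Mi              # memory request/limit for the instance pod
  cpu: "0.8"                 # CPU request/limit for the instance pod
  storageSize: 800M          # size of the persistent volume claim
  storageClassName: standard # any storage class available in the cluster

$ kubectl apply -f postgres-sample.yaml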

Tanzu Service Manager (TSMGR) is a product which runs on Kubernetes and provides an OSBAPI (Open Service Broker API) interface that is integrated with the Cloud Controller of Cloud Foundry. It manages Helm chart based services that run on Kubernetes and makes them available to applications and application developers in Cloud Foundry, so that to the developer the service looks like a fully native integration.

So if we put these two together we have a fully supported version of Postgres running on Kubernetes that can be made available to applications running in the Tanzu Application Service. Once the Postgres Helm chart has been offered through TSMGR, the developer workflow looks like any other Cloud Foundry service, as sketched below.
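
A minimal sketch of the developer side of that workflow, assuming the broker exposes a service called postgres with a plan called small and that the application is named my-app (all of those names are illustrative):

$ cf marketplace                           # the TSMGR-offered service appears in the marketplace
$ cf create-service postgres small my-db   # provision a Postgres instance on Kubernetes
$ cf bind-service my-app my-db             # inject the connection credentials into the app
$ cf restage my-app                        # restage so the app picks up the new VCAP_SERVICES entry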

For an example installation of Tanzu Postgres with the Service Manager, have a look at my GitHub repo.




Wednesday, September 9, 2020

Deploying Tanzu Application Service to Kubernetes (TKG)

Introduction

Tanzu Application Service (TAS) is a powerful Platform as a Service abstraction which can be deployed on many different cloud infrastructure providers. It allows developers to write their application in any of the commonly used programming languages and "push" their code to a runtime which takes care of all of the provisioning required to allow the program to run, and does the plumbing to make the application accessible.
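
In practice that experience is just the cf CLI. A typical push of, say, a Java application looks something like the following; the API endpoint, app name and flags here are purely illustrative.

$ cf login -a api.sys.example.com
$ cf push my-app -p ./build/libs/my-app.jar -m 1G -i 2   # deploy the jar with 1 GB memory and 2 instances
$ cf logs my-app --recent                                # check the application logs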

TAS has been available for many years now, based on the open source Cloud Foundry Application Runtime. At the time of writing VMware has been moving TAS from running on virtual machines to running as containers on a Kubernetes environment. Currently in beta, the first GA release should be available in the fall of 2020. (Or as I would say, autumn.)

This blog post runs through the steps involved to get it up and running, essentially summarising the information in the documentation. (Note - this is a link to beta documentation and the URL may well change soon.)

Installation

Start by downloading TAS, which comes as a single .tar file. When extracted it creates a directory called tanzu-application-service with a number of subdirectories. This blog post uses the 0.4.0 beta version of TAS; the installation process may change significantly over the coming releases as the product goes GA.
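
For reference, extracting it is just a tar command; the archive name below is a placeholder for whatever the downloaded file is actually called.

$ tar xvf <downloaded-tas-archive>.tar
$ ls tanzu-application-service
bin  config-optional  ...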

Kubernetes

To get started we need to create a Kubernetes cluster to deploy TAS4k8s onto; in this example I am using TKG running in an AWS environment. (See the posting http://ablether.blogspot.com/2020/08/first-experiences-with-tanzu-kubernetes.html about setting up TKG on AWS.)
TAS will create quite a few containers, and the current default setup is a minimal install with only a single instance of most components. According to the documentation you need at least 5 worker nodes with a minimum of 2 CPU/7.5 GB RAM, which equates to t3.large worker nodes on AWS. I found that I could get away with 5 t3.medium nodes, but in order to deploy a few of your own containers as well I recommend 7 workers at t3.large size.

% tkg create cluster clust-tas --plan dev -w 7 --worker-size t3.large
Logs of the command execution can also be found at: /var/folders/nn/p0h624x937l6dt3bdd00lkmm0000gq/T/tkg-20200827T154322852129303.log
Validating configuration...
Creating workload cluster 'clust-tas'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...

Workload cluster 'clust-tas' created

Once our cluster is up and running we need to get the kubectl context for it, and then we can deploy a couple of additional capabilities in preparation for TAS.



% tkg get credentials clust-tas
Credentials of workload cluster 'clust-tas' have been saved
You can now access the cluster by running 'kubectl config use-context clust-tas-admin@clust-tas'
% kubectl config get-contexts
CURRENT   NAME                                      CLUSTER            AUTHINFO                 NAMESPACE
          aws-mgmt-cluster-admin@aws-mgmt-cluster   aws-mgmt-cluster   aws-mgmt-cluster-admin
          clust-tas-admin@clust-tas                 clust-tas          clust-tas-admin
          minikube                                  minikube           minikube
*         test-cluster-admin@test-cluster           test-cluster       test-cluster-admin
% kubectl config use-context clust-tas-admin@clust-tas
Switched to context "clust-tas-admin@clust-tas".


There are two things we need to do to the newly created cluster. Firstly, we need to install the metrics server, which goes into the kube-system namespace.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
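
Once the metrics-server deployment is up, a quick way to confirm it is working is kubectl top (it can take a minute or two before the first metrics are reported).

$ kubectl top nodes
$ kubectl top pods -n kube-system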

Then for AWS we want to set a default storage class. Obviously the choice here may depend on what you feel you need for your environment; for the purposes of this test environment we will be using standard gp2 storage from AWS.

Inside the download directory there is a config-optional directory that contains a YAML file called aws-default-storage-class.yml, which we apply to the Kubernetes cluster. This introduces another tool, part of the suite of tools known as k14s or Carvel. They can be downloaded from k14s.io (or get-kapp.io, or for TAS specifically VMware packages a version). kapp - the Kubernetes Application Management Tool - is a CLI designed to manage resources in bulk, and it is used to package up and deploy/manage TAS4k8s.
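
For reference, the contents of that file are essentially a StorageClass annotated as the cluster default, along the lines of the sketch below. I have not reproduced the shipped file verbatim, so the exact fields may differ slightly from your download.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark this class as the cluster default
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2      # general purpose SSD EBS volumes
  fsType: ext4   # filesystem created on each volume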


$ kapp deploy -a default-storage-class -f <download dir>/tanzu-application-service/config-optional/aws-default-storage-class.yml
Target cluster 'https://clust-tas-apiserver-352766885.eu-west-2.elb.amazonaws.com:6443' (nodes: ip-10-0-0-56.eu-west-2.compute.internal, 7+)

Changes

Namespace  Name  Kind          Conds.  Age  Op      Wait to    Rs  Ri
(cluster)  gp2   StorageClass  -       -    create  reconcile  -   -

Op:      1 create, 0 delete, 0 update, 0 noop
Wait to: 1 reconcile, 0 delete, 0 noop

Continue? [yN]: y

9:43:18AM: ---- applying 1 changes [0/1 done] ----
9:43:18AM: create storageclass/gp2 (storage.k8s.io/v1) cluster
9:43:18AM: ---- waiting on 1 changes [0/1 done] ----
9:43:18AM: ok: reconcile storageclass/gp2 (storage.k8s.io/v1) cluster
9:43:18AM: ---- applying complete [1/1 done] ----
9:43:18AM: ---- waiting complete [1/1 done] ----

Succeeded

Tanzu Application Service

The first step towards configuring TAS is to decide on what domain name to use. I have a domain, donaldforbes.com, which I use for dev/testing, and so decided to use a system domain of sys.tas.donaldforbes.com for my deployment. This domain points to a load balancer which I will run in AWS to handle all the ingress (a wildcard DNS entry is used).

Next we need to create a set of configuration values which are specific to this install. Create a configuration-values directory and then run the tanzu-application-service/bin/generate-values.sh script to populate configuration parameters for your environment. The script generates a file with a set of config values - by default it generates self-signed certificates and the various passwords for the TAS components.

$ ./bin/generate-values.sh -d sys.tas.donaldforbes.com > configuration-values/deployment-values.yml

Using AWS we will also use a load balancer to manage ingress to the platform. This requires the presence of another file in the configuration-values directory, called load-balancer-values.yml.

$ cat load-balancer-values.yml
#@data/values
---
enable_load_balancer: True

We also need to be able to access two different registries (they could be the same). The first is a system registry, which the TAS installer uses to pull the images for the various components/containers of TAS itself. This is configured in a file called system-registry-values.yml.

$ cat system-registry-values.yml
#@data/values
---
system_registry:
  hostname: "registry.pivotal.io"
  username: "dforbes@pivotal.io"
  password: "<RedactedPassword>"

To gain access, simply sign up at https://network.pivotal.io.

The second registry is one that TAS will use to store the containers it creates for every application that is deployed. Most registries can be used for this. Simply create a file called app-registry-values.yml and populate it with values to enable the login to the registry, ensuring that the user has read/write access. In my example below I decided to use the Docker registry on Docker Hub.

$ cat app-registry-values.yml
#@library/ref "@github.com/cloudfoundry/cf-for-k8s"
#@data/values
---
app_registry:
  hostname: https://index.docker.io/v1/
  repository_prefix: "donaldf"
  username: "donaldf"
  password: "<Redacted Password>"


Once done we are ready to install TAS. Simply run the install-tas.sh script in the bin directory of tanzu-application-service, passing it a parameter which points to your custom configuration directory.

$ ./bin/install-tas.sh ./configuration-values

This script will do all the Kubernetes configuration, pull the images from the system registry and create the necessary deployments. Typically this process will take several minutes to complete.

Once done there is a load balancer created for the ingress, and you must point your DNS at this load balancer to access TAS. (I actually used Route 53 to handle the wildcard DNS entry that needs to point to the load balancer, and used a CNAME from my domain registration.)
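
As a rough sketch of verifying the install once DNS resolves: the cf-system namespace name comes from the underlying cf-for-k8s project and the admin password is one of the values written into deployment-values.yml by generate-values.sh, so treat both as assumptions for your particular version.

$ kubectl get pods -n cf-system                               # TAS system components should be Running
$ cf api api.sys.tas.donaldforbes.com --skip-ssl-validation   # self-signed certificates by default
$ cf auth admin "<cf_admin_password from deployment-values.yml>"
$ cf create-org test-org
$ cf create-space -o test-org test-space
$ cf target -o test-org -s test-space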



Thursday, August 27, 2020

First Experiences with Tanzu Kubernetes Grid on AWS

Introduction

Kubernetes as a container orchestration system has been around for a while now, and many vendors have jumped onto it to provide supported builds which are simple to install, update and manage compared to the pure play open source Kubernetes distribution. VMware currently provides a couple of fully supported distributions under the Tanzu brand banner.

  • TKG - Tanzu Kubernetes Grid (Formerly known as the Heptio distribution)
  • TKGI - Tanzu Kubernetes Grid Integrated (Formerly known as Pivotal Container Service - PKS)
This blog post is really to provide a simple guide to deploying TKG into an Amazon account; subsequent blog posts will dive into various features/components/use cases for such an environment.

Installation

Obviously the details of the installation may change with time; this summary is for TKG 1.1, and the docs provide full step-by-step installation instructions.

In summary the steps to follow are:-

  1. Download and install the tkg and clusterawsadm command lines from the VMware Tanzu downloads.
  2. Set up environment variables to run clusterawsadm, which will bootstrap the AWS account with the necessary groups, profiles, roles and users. (This only needs to be done once per AWS account.)
    $ clusterawsadm alpha bootstrap create-stack
  3. The configuration for the first management cluster can either be done via a UI - a good choice the first time you do this, as it guides you through the setup. The UI is served locally by running tkg init --ui and is then accessed in a browser at localhost:8080/#/ui.
    $ tkg init --ui
    Or you can set up the ~/.tkg/config.yaml file with the required AWS environment variables (a template for this file can be created by running $ tkg get management-cluster; a sketch of the relevant variables is shown after this list). Once set up, run the command to create a management cluster.
    $ tkg init --infrastructure aws --name aws-mgmt-cluster --plan dev
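
For reference, the AWS-related entries in ~/.tkg/config.yaml look roughly like the sketch below. The exact variable names vary between TKG releases, so treat them as illustrative and check the generated template for your version.

AWS_REGION: eu-west-2
AWS_NODE_AZ: eu-west-2a
AWS_ACCESS_KEY_ID: <your access key id>
AWS_SECRET_ACCESS_KEY: <your secret access key>
AWS_SSH_KEY_NAME: <name of an EC2 key pair in that region>
CONTROL_PLANE_MACHINE_TYPE: t3.large
NODE_MACHINE_TYPE: t3.large
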
At this point what is happening behind the scenes is that the tkg command line runs a small Kubernetes cluster locally using Docker. This bootstrap cluster includes the Cluster API custom resources; it understands the AWS API, can create VMs using the AWS infrastructure, and ensures that the TKG Kubernetes executables are deployed to them. The number and size of the VMs are controllable via the plan used and other flags specifying the size and number of workers/masters.

Even for a small plan this process will take several minutes to complete (around 15 minutes from my laptop).

% tkg init --infrastructure aws --name aws-mgmt-cluster --plan dev
Logs of the command execution can also be found at: /var/folders/nn/p0h624x937l6dt3bdd00lkmm0000gq/T/tkg-20200827T093026003038156.log

Validating the pre-requisites...

Setting up management cluster...
Validating configuration...
Using infrastructure provider aws:v0.5.4
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /Users/dforbes/.kube-tkg/tmp/config_lieTADrw
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.6" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.6" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.6" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.4" TargetNamespace="capa-system"
Start creating management cluster...
Saving management cluster kuebconfig into /Users/dforbes/.kube/config
Unable to persist management cluster aws-mgmt-cluster info to tkg config
Installing providers on management cluster...
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.6" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.6" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.6" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-aws" Version="v0.5.4" TargetNamespace="capa-system"
Waiting for the management cluster to get ready for move...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster aws-mgmt-cluster as 'aws-mgmt-cluster-admin@aws-mgmt-cluster'.

Management cluster created!


You can now create your first workload cluster by running the following:

  tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]

%

At the end of this process there will be several VMs created (a bastion server, a master and a worker for the basic dev plan), a load balancer to allow ingress to the services, and security groups etc. to restrict direct access to the cluster.

The tkg init command will by default create a cluster context in your .kube/config file called <cluster name>-admin@<cluster name>. If other machines/operators are to manage the cluster then the --kubeconfig option to tkg will allow the access context to be stored in a separate file. (Yup, I deleted the context from my config file and then found there was no easy way to re-generate it!)
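
If you do store the context in a separate file, kubectl can be pointed at it explicitly; the file name below is just an example.

$ kubectl --kubeconfig ./aws-mgmt-cluster.kubeconfig get nodes
$ export KUBECONFIG="$HOME/.kube/config:./aws-mgmt-cluster.kubeconfig"   # or merge it into your default config view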

Testing

First off, try a few basic commands.

% tkg get management-clusters
 MANAGEMENT-CLUSTER-NAME  CONTEXT-NAME
 aws-mgmt-cluster *       aws-mgmt-cluster-admin@aws-mgmt-cluster

This shows the management cluster we just created.

% tkg get clusters
 NAME  NAMESPACE  STATUS  CONTROLPLANE  WORKERS  KUBERNETES

No clusters have been created yet, so there is not a lot of useful information here. Let's create our first cluster.

% tkg create cluster test-cluster --plan dev
Logs of the command execution can also be found at: /var/folders/nn/p0h624x937l6dt3bdd00lkmm0000gq/T/tkg-20200827T110852116171506.log
Validating configuration...
Creating workload cluster 'test-cluster'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...

Workload cluster 'test-cluster' created

%

This creates a very small cluster of 3 VMs: a bastion host connected to a public network, and two VMs on a private network - 1 master (control plane) node and 1 worker. Typically it will take up to 10 minutes to create the VMs and configure the Kubernetes cluster.



The next job is to configure kubectl to connect to the cluster; the tkg command line allows us to get an admin context for it.

% tkg get credentials test-cluster
Credentials of workload cluster 'test-cluster' have been saved
You can now access the cluster by running 'kubectl config use-context test-cluster-admin@test-cluster' 
% kubectl config get-contexts
CURRENT   NAME                                      CLUSTER            AUTHINFO                 NAMESPACE
*         aws-mgmt-cluster-admin@aws-mgmt-cluster   aws-mgmt-cluster   aws-mgmt-cluster-admin
          minikube                                  minikube           minikube
          test-cluster-admin@test-cluster           test-cluster       test-cluster-admin
% kubectl config use-context test-cluster-admin@test-cluster
Switched to context "test-cluster-admin@test-cluster".
% kubectl get ns
NAME              STATUS   AGE
default           Active   10m
kube-node-lease   Active   10m
kube-public       Active   10m
kube-system       Active   10m

That's a Kubernetes worker cluster up and ready for action. An obvious next step might be to scale the cluster; again, tkg has a very simple API for this.

% tkg get clusters
 NAME          NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES
 test-cluster  default    running  1/1           1/1      v1.18.3+vmware.1
% tkg scale cluster test-cluster -w 2
Successfully updated worker node machine deployment replica count for cluster test-cluster
workload cluster test-cluster is being scaled
% tkg get clusters
 NAME          NAMESPACE  STATUS    CONTROLPLANE  WORKERS  KUBERNETES
 test-cluster  default    updating  1/1           1/2      v1.18.3+vmware.1
.
. . . <about 5 minutes>
.
% tkg get clusters
 NAME          NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES
 test-cluster  default    running  1/1           2/2      v1.18.3+vmware.1

And finally once we have finished with the cluster we want to delete it.

% tkg delete cluster test-cluster
Deleting workload cluster 'test-cluster'. Are you sure?: y
workload cluster test-cluster is being deleted

Note - there is also a tkg upgrade cluster command, which makes it nice and simple to upgrade a given cluster whenever you want.
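
As a one-line sketch (depending on the TKG version you may also need to pass a target Kubernetes version flag):

$ tkg upgrade cluster test-cluster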

Conclusion

There are a few things to download and start up, and it is certainly a little more complex than using one of the public cloud providers' Kubernetes offerings. But if you pause for a second and think about the complexity of the operations it is achieving in a short period of time, it is a very impressive and simple approach. It also gives you access to cluster creation and management with the same experience/binaries on multiple cloud providers or on-premise, not to mention full control over the cluster lifecycle.

All in all I was suitably impressed at the ease of use; starting from nothing, it would take no more than a morning or afternoon to have any size of k8s cluster created and ready for use.