Couchbase 6.5 K8s Installation – Sweet & Simple


The steps below describe installing Couchbase 6.5 with the Autonomous Operator 2.0 in an on-premise, private-cloud Kubernetes environment:
  • Installation of Helm
    • Helm is required to streamline installing and managing the applications deployed on the K8s platform
    • I will touch on this only loosely and assume the K8s admin has already made it available and configured on the platform
    • Helm 3 is needed for this version of the Couchbase installation
    • For older Couchbase versions you need Tiller deployed in your cluster instance
  • A Helm chart is a deployable, packaged module: a collection of files that define Kubernetes resources
  • Add/download the Couchbase Helm chart from the repo
  • Verify the repo
  • Fetch the charts into a remote Linux directory
  • Customize the deployment YAML inside the directory
  • Perform the Helm installation: the Operator first, then the Cluster
  • The Helm installation is done in two stages:
    • Couchbase Autonomous Operator (this manages the cluster on the K8s platform)
    • Couchbase Cluster (this is the actual cluster)

What is Helm: Helm is an installer and package manager for Kubernetes applications. Resources (Services / Roles / Deployments) can be managed and updated as a single unit, and all dependencies are handled inside it automatically.

Helm needs to be installed and present in the environment (typically a Linux shell where a pre-configured kubeconfig file exists to connect to the right context and namespace) from which you invoke all the commands below.

For the Couchbase 2.0 Operator I will be using Helm 3, build 3.3.0+; Couchbase is compatible with Helm 3.1+ builds. Helm builds for your OS can be downloaded here: https://github.com/helm/helm/releases . Some previous releases of the operator are compatible only with lower versions of Helm.

[master] wget https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz
--2020-08-28 18:05:46-- https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)… 152.195.19.97, 2606:2800:11f:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.195.19.97|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 12741413 (12M) [application/x-tar]
Saving to: ‘helm-v3.3.0-linux-amd64.tar.gz’
100%[========================>] 12,741,413 --.-K/s in 0.1s
2020-08-28 18:05:47 (106 MB/s) - ‘helm-v3.3.0-linux-amd64.tar.gz’ saved [12741413/12741413]

I have extracted this .tar.gz file into my home directory and will invoke helm3 as ~/helm3/helm going forward.
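For reference, the unpacking looks roughly like this (the tarball extracts into a linux-amd64/ subdirectory; ~/helm3 is just my chosen location):

mkdir -p ~/helm3
tar -xzf helm-v3.3.0-linux-amd64.tar.gz
mv linux-amd64/helm ~/helm3/helm
# sanity check the binary
~/helm3/helm version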

Let's switch to the namespace and the right K8s environment where you want to install Couchbase, and set the current context:

kubectl config set-context $(kubectl config current-context) --namespace=<namespace>

The kubectl config file used to connect to the right environment is already set up in ~/.kube/config.
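To double-check you are pointed at the right cluster and namespace before installing anything:

kubectl config current-context
# print the namespace bound to the current context
kubectl config view --minify -o jsonpath='{..namespace}'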

Check what is available in the local repo:

~ [master]
helm repo list

Helm maintains the list of chart repositories, so let's add the new Couchbase repository:

~ [master]
helm repo add couchbase https://couchbase-partners.github.io/helm-charts/
"couchbase" has been added to your repositories

Update the repo index to pull the latest available charts from the chart repositories to your local cache:

~ [master]
helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
…Successfully got an update from the "couchbase" chart repository
…Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Check the repo list now:

~ [master]
helm repo list
NAME URL
couchbase https://couchbase-partners.github.io/helm-charts/

Ignore it if you also see stable and local as other repositories.

Helm maintains local and cached repository indexes in your Linux file system:

~/.helm/repository [master]
ls -ltr
total 12
drwxr-xr-x 2 dpaul domain users 4096 Apr 19 16:14 cache
drwxr-xr-x 2 dpaul domain users 4096 Jul 10 14:52 local
-rw-r--r-- 1 dpaul domain users 697 Jul 10 15:30 repositories.yaml
~/.helm/repository/cache [master]
ls
confluentinc-index.yaml couchbase-index.yaml incubator-index.yaml local-index.yaml stable-index.yaml
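Note that the ~/.helm path above is the Helm 2 layout (likely left over from an earlier Helm on this box); Helm 3 keeps its repository config and cache elsewhere. If in doubt, ask Helm itself where it stores things:

~/helm3/helm env
# look for HELM_REPOSITORY_CACHE and HELM_REPOSITORY_CONFIG in the output
# (on Linux these default to ~/.cache/helm and ~/.config/helm respectively)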

Search the repo to confirm the Couchbase charts are available, and check their versions:

~ [master]
helm search repo couchbase
NAME CHART VERSION APP VERSION DESCRIPTION
couchbase/couchbase-cluster 0.1.2 1.2 Couchbase Server is a NoSQL document database with a dist…
couchbase/couchbase-operator 2.0.1 2.0.1 A Helm chart to deploy the Couchbase Autonomous Operator …

This command is helpful for finding the right chart version to install, in case you have multiple charts with different release editions.
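If you need to see every chart version the repo offers (not just the latest), Helm 3 can list them all:

~/helm3/helm search repo couchbase --versions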

Before I proceed, I am making sure no previous Couchbase cluster exists in my namespace:

~ [master]
kubectl get cbc
No resources found in bi-dev namespace.

If one exists, it can be cleaned up by issuing: kubectl delete cbc <clustername>

Also make sure no previous Couchbase operator release or deployment exists, using helm ls and kubectl:

~ [master]
~/helm3/helm ls | grep operator

~ [master]
~/helm3/helm ls  | grep cluster

~ [master]
kubectl get deployment

The above should return no results. The deployment check shows whether any previous instance of a Couchbase deployment exists in this namespace. If one does, use 'kubectl delete deployment' to purge the previous cluster before proceeding.

If the above returns results, you may already have a Couchbase deployment/instance in the same namespace, or these could be dangling charts with odd names left over from a previously deleted instance. You might want to clean those up as below to start from a clean slate:

~/couchbase65 [master]
~/helm3/helm delete <chartname>
release "<release name>" deleted

While you can run the installer directly from the repo by issuing the command below, the ideal practice is to customize the installer for your environment's needs, e.g. if you want to name it differently, change the resource (CPU/memory) allocation, or customize your Couchbase deployment services.

Note: don't be surprised if the command below throws an error at you, as you might NOT have permission to execute certain things in the cluster because of RBAC privileges.

~/helm3/helm install <my-release> couchbase/couchbase-operator

In my case, I will fetch the installers from the repo into my remote directory and edit them based on environment needs:

~ [master]
~/helm3/helm fetch couchbase/couchbase-operator
~ [master]
~/helm3/helm fetch couchbase/couchbase-cluster

This downloads the following:

-rw-r--r-- 1 dpaul domain users 15712 Jul 10 16:10 couchbase-operator-2.0.1.tgz
-rw-r--r-- 1 dpaul domain users 4517 Jul 10 16:11 couchbase-cluster-0.1.2.tgz

Now let's extract the .tgz files. I created a directory called couchbase65 and moved the archives there first.

~ [master]
mkdir couchbase65 ; mv couchbase*.tgz couchbase65 ; cd couchbase65
~/couchbase65 [master]
tar -xvzf couchbase-cluster-0.1.2.tgz ;
~/couchbase65 [master]
tar -xvzf couchbase-operator-2.0.1.tgz

Now you have this:

~/couchbase65 [master]
ls -ltr
total 32
-rw-r--r-- 1 dpaul domain users 15712 Jul 10 16:10 couchbase-operator-2.0.1.tgz
-rw-r--r-- 1 dpaul domain users 4517 Jul 10 16:11 couchbase-cluster-0.1.2.tgz
drwxr-xr-x 3 dpaul domain users 4096 Jul 10 16:25 couchbase-cluster
drwxr-xr-x 5 dpaul domain users 4096 Jul 10 16:25 couchbase-operator

Now the action begins! 🙂

A few things to know at this point: you can install Couchbase from the extracted location with customized parameters, or you can pull the Couchbase charts from the repo directly, which will install both the operator and the cluster.

Note: for security reasons around managing Couchbase resources in the Kubernetes cluster, our cluster administrator created the custom CRDs and admission controller, so as a Couchbase cluster owner I don't need to bother with the internals of permission management.

COUCHBASE OPERATOR 2.0 DEPLOYMENT STEPS

Couchbase Operator enables you to run Couchbase deployments natively on Open Source Kubernetes or Enterprise Red Hat OpenShift Container Platform.

The goal of the Couchbase Operator is to fully manage one or more Couchbase deployments by removing operational complexities of running Couchbase by automating the management of common Couchbase tasks such as the configuration, creation, upgrade and scaling of Couchbase clusters.

The Couchbase Operator extends the Kubernetes API by creating a Custom Resource Definition (CRD) and registering a Couchbase-specific controller (the Operator) to manage Couchbase clusters.

In the couchbase-operator directory you will see the files below:

-rw-r--r-- 1 dpaul domain users 888 Aug 20 11:48 Chart.yaml
-rw-r--r-- 1 dpaul domain users 12195 Aug 20 11:48 values.yaml
drwxr-xr-x 2 dpaul domain users 4096 Aug 20 11:48 templates
-rw-r--r-- 1 dpaul domain users 92 Aug 20 11:48 OWNERS
drwxr-xr-x 2 dpaul domain users 4096 Aug 20 11:48 examples
drwxr-xr-x 2 dpaul domain users 4096 Aug 20 11:48 crds

The important file to look at is values.yaml. It holds the built-in values that the Helm templates consume; everything computed inside the charts requires certain values to be passed in. If you want to override some values, create your own values file under a different name; I created one called "tc_operator_values.yaml". Details of the parameters below can be found in the Couchbase Operator 2.0 documentation: https://docs.couchbase.com/operator/2.0/helm-setup-guide.html
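Creating the override file is just a copy-and-edit (tc_operator_values.yaml is simply my chosen name):

cp couchbase-operator/values.yaml couchbase-operator/tc_operator_values.yaml

Here is my customized operator values file: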

# Default values for couchbase-operator chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Select what to install
install:
  # install the couchbase operator
  couchbaseOperator: true
  # install the admission controller
  admissionController: false
  # install couchbase cluster
  couchbaseCluster: false 
  # install sync gateway
  syncGateway: false

# couchbaseOperator is the controller for couchbase cluster
couchbaseOperator:
  # name of the couchbase operator
  name: "dev-ilcb-bi-cb"
  # image config
  image:
    repository: couchbase/operator
    tag: 2.0.1
  imagePullPolicy: IfNotPresent
  imagePullSecrets: []
  commandArgs:
    # pod creation timeout
    pod-create-timeout: 10m
  # resources of couchbase-operator
  resources: {}
  nodeSelector: {}
  tolerations: []
tls:
  # enable to auto create certs
  generate: true 
  # Expiry time of CA in days for generated certs
  #expiration: 365

Before installing, let's verify the Couchbase operator package with the helm3 lint utility. This detects any errors or anomalies inside your base operator package. In 2.0 I found the issue below and reached out to Couchbase; they said to ignore it, and it will be fixed in a later release:

~/helm3/helm lint couchbase-operator
==> Linting couchbase-operator
[ERROR] templates/: parse error at (couchbase-operator/templates/_helpers.tpl:129): function "lookup" not defined
Error: 1 chart(s) linted, 1 chart(s) failed

To install the operator, you need to be inside the couchbase65 directory and use the command below:

~/helm3/helm install cb-operator couchbase-operator/ --values couchbase-operator/tc_operator_values.yaml --namespace bi-cb --skip-crds --debug --dry-run

Details of each parameter:

cb-operator is my custom release name; couchbase-operator/ is the directory name. I am installing into the bi-cb namespace with the --skip-crds parameter (because my K8s admin already installed the CRDs for me), and I use --debug and --dry-run before execution as a best practice to check whether the chart has any errors and whether all computed values look okay.

The above verifies that the installation charts are syntactically good; then proceed to the actual installation by removing --debug and --dry-run to build the operator:

~[master] ~/helm3/helm install cb-operator couchbase-operator/ --values couchbase-operator/tc_operator_values.yaml --namespace bi-cb --skip-crds


NAME: cb-operator
LAST DEPLOYED: Fri Aug 28 18:21:52 2020
NAMESPACE: bi-cb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
== Couchbase-operator deployed.
# Check the couchbase-operator logs
kubectl logs -f deployment/cb-operator-dev-ilcb-bi-cb --namespace bi-cb

== Manage this chart
# Upgrade Couchbase
helm upgrade cb-operator -f stable/couchbase

# Show this status again
helm status cb-operator

This is the log of the deployment.

Now verify that the new deployment and the operator Helm release were created:

[master]
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cb-operator-dev-ilcb-bi-cb 1/1 1 1 48s
~/helm3/helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cb-operator bi-cb 1 2020-08-28 18:21:52.203954412 -0400 EDT deployed couchbase-operator-2.0.1 2.0.1

Let's see the pods' status:

kubectl get pods
NAME READY STATUS RESTARTS AGE
cb-operator-dev-ilcb-bi-cb-98dccf88-ftn5f 1/1 Running 0 6m41s

Let's see the logs of the operator:

[master]
kubectl logs cb-operator-dev-ilcb-bi-cb-98dccf88-ftn5f
{"level":"info","ts":1598653316.9318385,"logger":"main","msg":"couchbase-operator","version":"2.0.1 (build 130)","revision":"release"}
{"level":"info","ts":1598653316.93203,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1598653316.9731295,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1598653316.9784648,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1598653317.00218,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"couchbase-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1598653317.1025467,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"couchbase-controller"}
{"level":"info","ts":1598653317.202735,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"couchbase-controller","worker count":4}

This is very useful for troubleshooting purposes. As the operator drives everything in the Couchbase cluster, any cluster-related issue can be triaged further from here.

COUCHBASE CLUSTER 6.5 DEPLOYMENT STEPS

There are two types of deployment: MDS (Multi-Dimensional Scaling) and non-MDS.

In a non-MDS deployment all Couchbase service nodes get the same resource allocations (CPU, RAM, storage). In real-world use cases we usually want MDS for production loads, because MDS is customized at a granular level to shuffle allocations across resources based on the anticipated workload: I can customize CPU, RAM, and storage distribution based on my needs.

In my specification I will go MDS-based; the non-MDS Helm chart is much simpler.

Now you have two options: either create your own customized values.yaml inside the couchbase-cluster directory (extracted from the .tgz file, copying the actual values.yaml), or create the same inside the couchbase-operator directory, making sure that in the chart values you set couchbaseCluster: true and the rest to false. I created a file named tc_cluster_values_mds.yaml.
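Again, the override file is just a copy of the packaged defaults (the file name is my own):

cp couchbase-operator/values.yaml couchbase-operator/tc_cluster_values_mds.yaml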

I have provided the Couchbase cluster values inline below. A point to note: I removed some parameters from the Couchbase-provided YAML. For example, I want the buckets to be managed through the Admin Console UI, so I set buckets.managed to false.

For detailed specifications of the individual Couchbase resource parameters, refer here: https://docs.couchbase.com/operator/2.0/reference-couchbasecluster.html

# Couchbase Multidimension Scaling Cluster Chart
# Maintained by : Debashis Paul
# Default values for couchbase-operator chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Select what to install
install:
  # install the couchbase operator
  couchbaseOperator: false
  # install the admission controller
  admissionController: false
  # install couchbase cluster
  couchbaseCluster: true
  # install sync gateway
  syncGateway: false

  # Default values for couchbase-cluster
cluster:
  # name of the cluster. defaults to name of chart release
  name: "couchbase-cluster-dev"
  # image is the base couchbase image and version of the couchbase cluster
  image: "couchbase/server:6.5.1"
  antiAffinity: true
  security:
    # username of the cluster admin.
    username: Administrator
    # password of the cluster admin.
    # auto-generated when empty
    password: "admin123"
    # adminSecret is name of secret to use instead of using
    # the default secret with username and password specified above
    adminSecret: 
    rbac:
     managed: false
    #ldap: {}
  # networking options
  networking:
    # Option to expose admin console
    exposeAdminConsole: true
    # Option to expose admin console
    adminConsoleServices:
      - data
    # Specific services to use when exposing ui
    exposedFeatures:
      - client
     # - xdcr
    # Defines how the admin console service is exposed.
    # Allowed values are NodePort and LoadBalancer.
    # If this field is LoadBalancer then you must also define a spec.dns.domain.
    adminConsoleServiceType: NodePort
    # Defines how the per Couchbase node ports are exposed.
    # Allowed values are NodePort and LoadBalancer.
    # If this field is LoadBalancer then you must also define a spec.dns.domain.
    exposedFeatureServiceType: NodePort
    # The dynamic DNS configuration to use when exposing services
  #  dns:
    # Custom map of annotations to be added to console and per-pod (exposed feature) services
  #  serviceAnnotations: {}
    # The Couchbase cluster tls configuration (auto-generated)
  #  tls:
  # The retention period that log volumes are kept for after their associated pods have been deleted.
  logRetentionTime: 604800s
  # The maximum number of log volumes that can be kept after their associated pods have been deleted.
  logRetentionCount: 20
  # xdcr defines remote clusters and replications to them.
  #xdcr:
    # managed defines whether the Operator should manage XDCR remote clusters
  #  managed: false
    # remoteClusters contains references to any remote clusters to replicate to
  #  remoteClusters:
  # backup defines values for automated backup.
  backup:
    # managed determines whether Automated Backup is enabled
    managed: true
    # image used by the Operator to perform backup or restore
    image: couchbase/operator-backup:6.5.1
    # optional service account to use when performing backups
    # service account will be created if it does not exist
    serviceAccountName: cbbackupuser
  # defines integration with third party monitoring sofware
  # monitoring:
  #   prometheus:
  #     # defines whether Prometheus metric collection is enabled
  #     enabled: true
  #     # image used by the Operator to perform metric collection
  #     # (injected as a "sidecar" in each Couchbase Server Pod)
  #     image: couchbase/exporter:1.0.1
  #     # Optional Kubernetes secret that clients use to access Prometheus metrics
  #     authorizationSecret:
  # Cluster wide settings for nodes and services
  cluster:
    # The amount of memory that should be allocated to the data service
    dataServiceMemoryQuota: 4096Mi
    # The amount of memory that should be allocated to the index service
    indexServiceMemoryQuota: 4096Mi
    # The amount of memory that should be allocated to the search service
    searchServiceMemoryQuota: 256Mi
    # The amount of memory that should be allocated to the eventing service
    eventingServiceMemoryQuota: 256Mi
    # The amount of memory that should be allocated to the analytics service
    analyticsServiceMemoryQuota: 1Gi
    # The index storage mode to use for secondary indexing
    indexStorageSetting: plasma

    autoCompaction:
      # amount of fragmentation allowed in persistent database [2-100]
      databaseFragmentationThreshold:
        percent: 30
        size: 1Gi
      # amount of fragmentation allowed in persistent view files [2-100]
      viewFragmentationThreshold:
        percent: 30
        size: 1Gi
      # whether auto-compaction should be performed in parallel
      parallelCompaction: false
      # how frequently tombstones may be purged
      tombstonePurgeInterval: 72h
      # optional window when an auto-compaction may start (uncomment below)
      timeWindow: {}
      # start: 02:00
      # end: 06:00
      # abortCompactionOutsideWindow: true

  # configuration of logging functionality
  # for use in conjuction with logs persistent volume mount
  logging:
    # retention period that log volumes are kept after pods have been deleted
    logRetentionTime: 604800s
    # the maximum number of log volumes that can be kept after pods have been deleted
    logRetentionCount: 20
  # kubernetes security context applied to pods
  securityContext:
    # fsGroup of persistent volume mount
    fsGroup: 1000
    runAsUser: 1000
    runAsNonRoot: true
  # cluster buckets
  buckets:
    #Managed defines whether buckets are managed by us or the clients.
    managed: false
  servers:
    dataservices:
      size: 3
      services:
        - data
      pod:
        resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: "1"
            memory: 2Gi
      volumeMounts:
        data: couchbasedata
        default: couchbasedefault
    indexservices:
      size: 2
      services:
        - index
        - query
      pod:
        resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: "1"
            memory: 2Gi
      volumeMounts:
        index: couchbaseindex
        default: couchbasedefault
    default:
      size: 1
      services:
        - search
        - eventing
        - analytics
      pod:
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 1Gi
      volumeMounts:
        default: couchbasedefault
  volumeClaimTemplates:
    - metadata:
        name: couchbasedata
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: pure-block
        resources:
          requests:
            storage: 70Gi
    - metadata:
        name: couchbaseindex
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: pure-block
        resources:
          requests:
            storage: 50Gi
    - metadata:
        name: couchbasedefault
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: pure-block
        resources:
          requests:
            storage: 5Gi               


# RBAC users to create
# (requires couchbase server 6.5.0 and higher)
users: {}

#Uncomment to create an example user named 'developer'

# developer:
#   # password to use for user authentication
#   # (alternatively use authSecret)
#   password: password
#   # optional secret to use containing user password
#   authSecret:
#   # domain of user authentication
#   authDomain: local
#   # roles attributed to group
#   roles:
#     - name: bucket_admin
#       bucket: group360

# TLS Certs that will be used to encrypt traffic between operator and couchbase
tls:
  # enable to auto create certs
  generate: true
  # Expiry time of CA in days for generated certs
  #expiration: 365

Now let's execute the cluster Helm chart build:

~/couchbase65 [master]
~/helm3/helm install cb-cluster couchbase-operator --values couchbase-operator/tc_cluster_values_mds.yaml --namespace bi-dev --skip-crds 

If you end up with a problem, use --debug --dry-run to test the computed values; I would do that first, before the real execution. Also, --skip-crds is very important.
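The dry run is the same install command with the two extra flags, e.g.:

~/helm3/helm install cb-cluster couchbase-operator --values couchbase-operator/tc_cluster_values_mds.yaml --namespace bi-cb --skip-crds --debug --dry-run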

After the real execution you will see the following:

~/helm3/helm install cb-cluster couchbase-operator --values couchbase-operator/tc_cluster_values_mds.yaml --namespace bi-cb --skip-crds
NAME: cb-cluster
LAST DEPLOYED: Mon Aug 31 18:40:42 2020
NAMESPACE: bi-cb
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
== Connect to Admin console
kubectl port-forward --namespace bi-cb couchbase-cluster-dev-0000 18091:18091
# open https://localhost:18091
username: Administrator
password: admin123
== Manage this chart
# Upgrade Couchbase
helm upgrade cb-cluster -f stable/couchbase
# Show this status again
helm status cb-cluster

Let's watch the operator logs while the cluster is being created:

~/couchbase65 [master]
kubectl logs -f cb-operator-dev-ilcb-788bc4cb9b-l854f
{"level":"info","ts":1598915123.440226,"logger":"main","msg":"couchbase-operator","version":"2.0.1 (build 130)","revision":"release"}
{"level":"info","ts":1598915123.440417,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1598915123.4805334,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1598915123.4857814,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1598915123.511823,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"couchbase-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1598915123.6121693,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"couchbase-controller"}
{"level":"info","ts":1598915123.7123137,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"couchbase-controller","worker count":4}
{"level":"info","ts":1598915182.4643793,"logger":"cluster","msg":"Watching new cluster","cluster":"bi-cb/couchbase-cluster-dev"}
{"level":"info","ts":1598915182.4645112,"logger":"cluster","msg":"Janitor starting","cluster":"bi-cb/couchbase-cluster-dev"}
{"level":"info","ts":1598915182.4669094,"logger":"cluster","msg":"Couchbase client starting","cluster":"bi-cb/couchbase-cluster-dev"}
{"level":"info","ts":1598915182.5306997,"logger":"cluster","msg":"UI service created","cluster":"bi-cb/couchbase-cluster-dev","name":"couchbase-cluster-dev-ui"}
{"level":"info","ts":1598915182.5568848,"logger":"cluster","msg":"Cluster does not exist so the operator is attempting to create it","cluster":"bi-cb/couchbase-cluster-dev"}
{"level":"info","ts":1598915182.6547754,"logger":"cluster","msg":"Creating pod","cluster":"bi-cb/couchbase-cluster-dev","name":"couchbase-cluster-dev-0000","image":"couchbase/server:6.5.1"}

You will start seeing the Couchbase pods being created one after another:

~/couchbase65 [master]
kubectl get pods
NAME READY STATUS RESTARTS AGE
cb-operator-dev-ilcb-788bc4cb9b-l854f 1/1 Running 0 113s
couchbase-cluster-dev-0000 0/1 Running 0 52s
couchbase-cluster-dev-0001 0/1 Running 0 22s
tiller-deploy-5fc9fcb64b-dtkx5 1/1 Running 0 24d

Once all 6 pods are in Running state, the cluster installation is successful.
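If you would rather watch the rollout live than re-run the command, kubectl can stream pod state changes:

kubectl get pods -w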

This is how the cluster, deployment, pods, and Helm releases finally look:

~/couchbase65 [master]
kubectl get cbc
NAME VERSION SIZE STATUS UUID AGE
couchbase-cluster-dev 6.5.1 6 Running 4ea6222a5b4ee6a50613ab0fd589a9f0 3m52s
~/couchbase65 [master]
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cb-operator-dev-ilcb 1/1 1 1 4m40s
tiller-deploy 1/1 1 1 383d

~/helm3/helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cb-cluster bi-cb 1 2020-08-31 19:06:04.399745186 -0400 EDT deployed couchbase-operator-2.0.1 2.0.1
cb-operator bi-cb 1 2020-08-31 19:05:20.266942245 -0400 EDT deployed couchbase-operator-2.0.1 2.0.1
~/couchbase65 [master]
kubectl get pods
NAME READY STATUS RESTARTS AGE
cb-operator-dev-ilcb-788bc4cb9b-l854f 1/1 Running 0 4m53s
couchbase-cluster-dev-0000 1/1 Running 0 3m52s
couchbase-cluster-dev-0001 1/1 Running 0 3m22s
couchbase-cluster-dev-0002 1/1 Running 0 2m56s
couchbase-cluster-dev-0003 1/1 Running 0 2m30s
couchbase-cluster-dev-0004 1/1 Running 0 2m11s
couchbase-cluster-dev-0005 1/1 Running 0 107s
tiller-deploy-5fc9fcb64b-dtkx5 1/1 Running 0 24d

Now let's get the NodePort details to access the database (both for the Admin console and for client API connections):

~/couchbase65 [master]
kubectl get svc

To access the Couchbase Admin URL, you need the NodePort service name and port. In my example, 'couchbase-cluster-dev-ui' is the service, and the external NodePort corresponding to 8091, which is 31255, is what we need to access it from outside the K8s environment. This port is assigned automatically by K8s. Now let's find which IP address we should use.
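If you prefer to script this instead of eyeballing the service listing, the assigned NodePort can be pulled straight out of the service object (service name as per my example; yours may differ):

kubectl get svc couchbase-cluster-dev-ui -o jsonpath='{.spec.ports[0].nodePort}'
# if the service exposes multiple ports, pick the entry mapping to 8091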

Let's describe the pods from the cluster pod list above; you can pick any one of the node IP addresses and use port 31255.

~ [master]
kubectl describe pod | grep Node
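An alternative that shows node placement at a glance (the INTERNAL-IP column of the nodes listing gives the IPs usable with the NodePort):

kubectl get pods -o wide
kubectl get nodes -o wide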

Now let's access the Admin Console UI.

Author: Debashis Paul

Retired Oracle BI enthusiast. Musing on enterprise cloud and data architecture and design with open-source full-stack frameworks on Kubernetes, and working on Big Data/BI & Analytics. All views on this blog are my own and do not necessarily reflect those of my employer. Thanks for visiting my journal. Have a good day!!!
