Accessing the Kubernetes Cluster

On this page, you can find an explanation of several ways to access a Kubernetes Cluster created in the Cloud Console.

Table of contents

  1. Prerequisites
  2. Get access from Ubuntu VM to the Kubernetes Cluster using CLI User
  3. Get access from CentOS VM to the Kubernetes Cluster using CLI User
  4. Connect to Master Node of the Cluster via SSH

Prerequisites

In this article, we assume that the following resources have already been created in the Project named TestPr, which belongs to the Organization named Test1:
  • SSH Key, specified during creation of the Virtual Machines and the Kubernetes Cluster so that we can later connect to them via SSH; it was created with the following parameters:
    • Name: testV;
    • Public key: placed on the Linux VM during its creation;
    • Private key: copied to the clipboard and saved on the local system in a text file (for example: ~/.ssh/id_rsa);
  • Firewall (Name: for-ssh), specified during creation of the Virtual Machines so that we can connect to them via SSH; it contains a rule allowing incoming traffic on port 22, created with the following parameters:
    • Description: for connection via ssh;
    • Direction: ingress;
    • Port range min: 22;
    • Port range max: 22;
    • Protocol: tcp;
    • Remote IP prefix: 0.0.0.0/0
  • Ubuntu Virtual Machine (IP: 185.226.42.229), from which we will access the Kubernetes Cluster API; it was created with the following parameters and with the additional Firewall named for-ssh (allowing incoming traffic on port 22), so that we can connect to it remotely from our local machine via SSH:
    • Name: testVm2;
    • Flavor: VC-2;
    • Image: ubuntu-server-18.04-LTS-20201111;
    • Key pair: testV;
    • Networks: public;
    • Firewalls: default, for-ssh;
    • Volume size: 10.  
  • CentOS Virtual Machine (IP: 185.226.41.3), from which we will access the Kubernetes Cluster API; it was created with the following parameters and with the additional Firewall named for-ssh (allowing incoming traffic on port 22), so that we can connect to it remotely from our local machine via SSH:
    • Name: testVm4;
    • Flavor: VC-2;
    • Image: centos-7.9-2009;
    • Key pair: testV;
    • Networks: public;
    • Firewalls: default, for-ssh;
    • Volume size: 10.  
  • Kubernetes Cluster (Master Node IP: 185.226.43.68), created with the following parameters:
    • Name: testCl;
    • Cluster Template: v1.17.14;
    • Master Node Flavor: VC-2;
    • Node Flavor: VC-2;
    • Keypair: testV;
    • Docker image size (GB): 50 GB;
    • Master Count: 1;
    • Node Count: 1.
  • API User (used below as the CLI User), created with the following parameters; its RC file has already been downloaded:
    • Name: testCLIuser;
    • Password: P@sword.
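
If you want to double-check that these resources exist once the OpenStack CLI is set up as described below, the following commands can be used; a minimal sketch, listing only the names assumed in this article:

  # Optional sanity check, to be run after the OpenStack CLI is configured (see the sections below).
  openstack keypair list            # should list testV
  openstack security group list     # should list default and for-ssh
  openstack server list             # should list testVm2 and testVm4
  openstack coe cluster list        # should list testCl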

For more information on creating and configuring these resources, see the following articles:

Get access from Ubuntu VM to the Kubernetes Cluster using CLI User

    To access the created Kubernetes Cluster from the Ubuntu Virtual Machine using the CLI, follow these steps:

    • Log in to the Ubuntu Virtual Machine from which you want to access the Kubernetes Cluster API;
      we use SSH for this - for more information, see the article Connect to Linux VM via SSH:
      ssh -i ~/.ssh/id_rsa ubuntu@185.226.42.229
    • Update Ubuntu package sources by running the following command:
      sudo apt update
    • Install the Python 3 package manager pip (needed to install the OpenStack clients) by running the next command:
      sudo apt install python3-pip
    • Install the OpenStack CLI tools by running the following two commands one after the other:
      sudo pip3 install python-openstackclient

      sudo pip3 install python-magnumclient
    • Place the RC file of the created CLI User on your Virtual Machine:
      vi openrc

      Paste the file contents, check that the correct OS_USERNAME and OS_PROJECT_ID are set, then press Esc, type :wq and press Enter to save the changes:

      #!/usr/bin/env bash
      # To use an OpenStack cloud you need to authenticate against the Identity
      # service named keystone, which returns a **Token** and **Service Catalog**.
      # The catalog contains the endpoints for all services the user/tenant has
      # access to - such as Compute, Image Service, Identity, Object Storage, Block
      # Storage, and Networking (code-named nova, glance, keystone, swift,
      # cinder, and neutron).
      #
      # *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
      # OpenStack API is version 3. For example, your cloud provider may implement
      # Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
      # only for the Identity API served through keystone.
      export OS_AUTH_URL=https://upper-austria.ventuscloud.eu:443/v3
      # With the addition of Keystone we have standardized on the term **project**
      # as the entity that owns the resources.
      export OS_PROJECT_ID=681b1b8861fb45e899953da558f22f37
      export OS_PROJECT_NAME="Test1:TestPr"
      export OS_USER_DOMAIN_NAME="ventus"
      if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
      export OS_PROJECT_DOMAIN_ID="e1780e7170674d5684076a726f683cfd"
      if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
      # unset v2.0 items in case set
      unset OS_TENANT_ID
      unset OS_TENANT_NAME
      # In addition to the owning entity (tenant), OpenStack stores the entity
      # performing the action as the **user**.
      export OS_USERNAME="Test1:testCLIuser"
      # With Keystone you pass the keystone password.
      echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
      read -sr OS_PASSWORD_INPUT
      export OS_PASSWORD=$OS_PASSWORD_INPUT
      # If your configuration has multiple regions, we set that information here.
      # OS_REGION_NAME is optional and only valid in certain environments.
      export OS_REGION_NAME="Upper-Austria"
      # Don't leave a blank variable, unset it if it was empty
      if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
      export OS_INTERFACE=public
      export OS_IDENTITY_API_VERSION=3
    • Source the openrc file (note the leading dot):
      . openrc
    • Enter the password of the created CLI User and press Enter - this password is used to authenticate you against the cloud API; a quick way to confirm that the credentials work is sketched after these steps.
    • Run the following command to list all Clusters created in the corresponding Project that your User has access to:
      openstack coe cluster list

      In our case, the output looks like the following:

      [screenshot: output of the openstack coe cluster list command]
    • Run the following command to get the kubeconfig file for the Cluster named testCl, which you will be accessing:
      mkdir ~/testCl

      openstack coe cluster config --dir ~/testCl testCl
    • Export the path to the created config file as the KUBECONFIG environment variable (an optional sketch for making this setting persistent follows these steps):
      export KUBECONFIG="$HOME/testCl/config"
    • Install kubectl by running the next command:
      sudo snap install kubectl --classic
    • Run the following commands to check that you have access to the selected Cluster and that all pods are running:
      kubectl get nodes

      kubectl get pods --all-namespaces
      If everything is fine, the output should be close to the following:
      [screenshot: output of kubectl get nodes and kubectl get pods --all-namespaces]
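
    As mentioned above, after sourcing openrc you can optionally confirm that the CLI User is authenticated before requesting the kubeconfig. A minimal sketch (testCl is the Cluster name assumed in this article):

      # Request a Keystone token; success means the credentials and project settings in openrc are valid.
      openstack token issue
      # Show the Cluster details; its status should be CREATE_COMPLETE before the kubeconfig is fetched.
      openstack coe cluster show testCl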
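
    The KUBECONFIG variable set above lives only in the current shell session. To keep kubectl pointing at this Cluster after logging out and back in, one option is to append the export to your shell profile; a minimal sketch, assuming the config was saved to ~/testCl as in the steps above:

      # Persist the kubectl configuration path for future login shells.
      echo 'export KUBECONFIG="$HOME/testCl/config"' >> ~/.bashrc
      # Apply it to the current shell as well.
      source ~/.bashrc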

Get access from CentOS VM to the Kubernetes Cluster using CLI User

    To access the created Kubernetes Cluster from the CentOS Virtual Machine using the CLI, follow these steps:

    • Log in to the CentOS Virtual Machine from which you want to access the Kubernetes Cluster API;
      we use SSH for this - for more information, see the article Connect to Linux VM via SSH:
      ssh -i ~/.ssh/id_rsa centos@185.226.41.3
    • Update CentOS package sources by running the following command (CentOS 7 ships with yum rather than dnf):
      sudo yum update -y
    • Add the CentOS 7 RDO repository by using the following command:
      sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
    • Install the OpenStack CLI tools by running the following two commands one after the other:
      sudo yum install python-openstackclient

      sudo yum install python-magnumclient
    • Place the RC file of the created CLI User on your Virtual Machine:
      vi openrc

      Paste the file contents, check that the correct OS_USERNAME and OS_PROJECT_ID are set, then press Esc, type :wq and press Enter to save the changes:

      #!/usr/bin/env bash
      # To use an OpenStack cloud you need to authenticate against the Identity
      # service named keystone, which returns a **Token** and **Service Catalog**.
      # The catalog contains the endpoints for all services the user/tenant has
      # access to - such as Compute, Image Service, Identity, Object Storage, Block
      # Storage, and Networking (code-named nova, glance, keystone, swift,
      # cinder, and neutron).
      #
      # *NOTE*: Using the 3 *Identity API* does not necessarily mean any other
      # OpenStack API is version 3. For example, your cloud provider may implement
      # Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
      # only for the Identity API served through keystone.
      export OS_AUTH_URL=https://upper-austria.ventuscloud.eu:443/v3
      # With the addition of Keystone we have standardized on the term **project**
      # as the entity that owns the resources.
      export OS_PROJECT_ID=681b1b8861fb45e899953da558f22f37
      export OS_PROJECT_NAME="Test1:TestPr"
      export OS_USER_DOMAIN_NAME="ventus"
      if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
      export OS_PROJECT_DOMAIN_ID="e1780e7170674d5684076a726f683cfd"
      if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
      # unset v2.0 items in case set
      unset OS_TENANT_ID
      unset OS_TENANT_NAME
      # In addition to the owning entity (tenant), OpenStack stores the entity
      # performing the action as the **user**.
      export OS_USERNAME="Test1:testCLIuser"
      # With Keystone you pass the keystone password.
      echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
      read -sr OS_PASSWORD_INPUT
      export OS_PASSWORD=$OS_PASSWORD_INPUT
      # If your configuration has multiple regions, we set that information here.
      # OS_REGION_NAME is optional and only valid in certain environments.
      export OS_REGION_NAME="Upper-Austria"
      # Don't leave a blank variable, unset it if it was empty
      if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
      export OS_INTERFACE=public
      export OS_IDENTITY_API_VERSION=3
    • Source the openrc file (note the leading dot):
      . openrc
    • Enter the password of the created CLI User and press Enter - this password is used to authenticate you against the cloud API.
    • Run the following command to list all Clusters created in the corresponding Project that your User has access to:
      openstack coe cluster list

      In our case, the output looks like the following:

      [screenshot: output of the openstack coe cluster list command]
    • Run the following command to get the kubeconfig file for the Cluster named testCl, which you will be accessing:
      mkdir ~/testCl

      openstack coe cluster config --dir ~/testCl testCl
    • Export the path to the created config file as the KUBECONFIG environment variable:
      export KUBECONFIG="$HOME/testCl/config"
    • Install the latest release of kubectl, make the binary executable and move it into your PATH by running the following commands (an optional verification sketch follows these steps):
      curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

      chmod +x ./kubectl

      sudo mv ./kubectl /usr/local/bin/kubectl
    • Run the following commands to check that you have access to the selected Cluster and that all pods are running:
      kubectl get nodes

      kubectl get pods --all-namespaces
      If everything is fine, the output should be close to the following:
      [screenshot: output of kubectl get nodes and kubectl get pods --all-namespaces]
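
    Because kubectl is installed here from a downloaded binary rather than from a package repository, you can optionally verify the download, as mentioned in the install step above. A minimal sketch, assuming the binary has already been moved to /usr/local/bin as in the steps above and that the release publishes a .sha256 file next to the binary (recent releases do):

      # Capture the current stable version so the checksum refers to the same release as the binary.
      STABLE=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
      # Fetch the published SHA-256 checksum for the kubectl binary.
      curl -LO https://storage.googleapis.com/kubernetes-release/release/${STABLE}/bin/linux/amd64/kubectl.sha256
      # Verify the installed binary; expected output: "/usr/local/bin/kubectl: OK".
      echo "$(cat kubectl.sha256)  /usr/local/bin/kubectl" | sha256sum --check
      # Confirm the client runs and reports its version.
      kubectl version --client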

Connect to Master Node of the Cluster via SSH

    Since we created an SSH Keypair (see the Prerequisites of this article) whose public key is deployed on our Cluster nodes and whose private key is stored on our local system (for example, ~/.ssh/id_rsa), we can connect to this Kubernetes Cluster remotely from our local machine - via SSH to the Master Node of the selected Cluster, whose IP is 185.226.43.68. For this, just use the following command:

    ssh -i ~/.ssh/id_rsa username@10.11.22.33

    NOTE:

    Username for Cluster nodes is core.

    Replace username and 10.11.22.33 in the command with your data and specify the appropriate path to your private key. In our example, the command will look like this:

    ssh -i ~/.ssh/id_rsa core@185.226.43.68
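
    If you connect to the Master Node regularly, you can optionally add a host entry to the SSH configuration on your local system so that the key and username do not have to be passed every time. A minimal sketch; the alias testcl-master is a hypothetical name chosen for this example - add the following to ~/.ssh/config (create the file if it does not exist):

    Host testcl-master
        HostName 185.226.43.68
        User core
        IdentityFile ~/.ssh/id_rsa

    After that, the Master Node can be reached simply with:

    ssh testcl-master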

    After successfully connecting, you can check that you have access to the selected Cluster and that all pods are running by executing the following commands:

    sudo su -

    kubectl get nodes

    kubectl get pods --all-namespaces

    If everything is fine, the output should be close to the following:

    [screenshot: output of kubectl get nodes and kubectl get pods --all-namespaces on the Master Node]