March 2020

What is Kubernetes?

Introduction

In 2017, Kubernetes (k8s) came to dominate the container orchestration space. Docker Swarm and Mesos had offered container orchestration tooling for years, but both eventually added support for Kubernetes in their own platforms. The major cloud providers, such as AWS, Microsoft Azure and Oracle Cloud, have announced Kubernetes integrations in their respective cloud offerings. Every developer and DevOps engineer can therefore benefit from learning the Kubernetes nuts and bolts, and that is exactly what we are going to do in this article.

What is Kubernetes?

Kubernetes (K8s) is one of the most exciting developments in the DevOps world today. Its rapid rise in popularity comes from its aim of running containers reliably at scale.
Kubernetes is a system for running and coordinating containerized applications across a cluster of machines. It is designed to manage the complete lifecycle of containerized applications and services, using techniques that provide predictability, scalability and high availability.

The benefits of Kubernetes are discussed here. As a user of Kubernetes:

  • You can define how your applications should run and how they should interact with other applications.
  • You can scale your services up and down, perform rolling updates, and switch traffic between different versions of your applications easily.
  • You get composable interfaces and primitives that let you define and manage your applications with a high degree of flexibility and reliability.

The Kubernetes architecture: Kubernetes pools a set of virtual or physical machines into a cluster over a shared network. This cluster is where all Kubernetes components, workloads and pods run.

Every machine in a Kubernetes cluster has a role within the Kubernetes system. One of the servers acts as the master: it exposes the various APIs, performs health checks on the other servers, schedules pending workloads and orchestrates the other components.

The remaining machines in the cluster are called nodes. Nodes are responsible for accepting and running workloads in containers, which means each of them must have a container runtime installed.


Components of Kubernetes

The master and node components are discussed below.

Master Components

  1. etcd is a highly available key-value store that reliably holds all cluster data; Kubernetes uses it as its backing store.
  2. The API Server is the component that, as its name suggests, exposes the Kubernetes API. It is the main management point of the cluster and the bridge through which the other components exchange data and instructions.
  3. The Controller Manager watches the cluster state and performs routine reconciliation tasks.
  4. The Scheduler assigns pending workloads (pods) to nodes.

Node Components

Docker: the container runtime used to run the containers (other container runtimes can also be used).

Kubelet: the primary point of contact between each node and the cluster; it starts and monitors the containers scheduled to its node.

Proxy (kube-proxy): maintains network rules on each node and forwards connections to the right containers.
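In practice, you describe the desired state of a workload declaratively and the components above reconcile it: the API server stores the object in etcd, the scheduler assigns the pods to nodes, and each node's kubelet starts the containers. A minimal sketch of such a manifest (the name, labels and image are illustrative, not from this article):

```yaml
# deployment.yaml - a hypothetical minimal example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired number of identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.17   # illustrative container image
        ports:
        - containerPort: 80
```

Applying it with kubectl apply -f deployment.yaml asks the cluster to keep three replicas running; scaling up or down is just a change to the replicas field.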

Advantages of container management with Kubernetes

  • Self-healing: Kubernetes does regular health checks, kills containers that fail them, and replaces defective containers with new ones.
  • Easy scaling: you can define the number of containers required in your K8s cluster and scale up or down without breaking any functionality.
  • Reusable on any infrastructure: the K8s objects you build are reusable on any infrastructure with a K8s cluster, and when you change the infrastructure you don’t have to change a single line of code.
  • Zero downtime: when you change or upgrade your cluster, K8s can roll the changes out gradually through rolling updates, without any downtime.
  • Free and open source: the best thing about K8s is that you get all these great features at no cost.
How to Configure and Install Ansible on CentOS 7.x

This article is written to help you understand Ansible and install it. Ansible is open source automation software for configuring, managing and deploying software applications on nodes without any downtime and without installing any client agent, using only SSH. Nowadays most IT automation tools run as an agent on the remote host, but Ansible needs only an SSH connection and Python (2.4 or later) installed on the remote nodes to perform its actions.

Environment Setup Details

Ansible Server:
Operating System:    Centos 6.7
IP Address:    192.168.87.140
Host-name:    ansible.hanuman.com
User:    ansibileadmin
Remote Nodes: 
Node 1: 192.168.87.156
Node 2: 192.168.87.157

Installing Controlling Machine – Ansible

There is no official Ansible repository for RPM-based distributions, but we can install Ansible on RHEL/CentOS 6.x and 7.x by enabling the EPEL repository, which is maintained by the Fedora project.

# rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
Retrieving http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.nHoRHj: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
package epel-release-6-8.noarch is installed

After configuring the EPEL repository, you can now install Ansible using yum with the command below.

# sudo yum install ansible -y
Loaded plugins: fastestmirror, security
Setting up Install Process
Determining fastest mirrors
epel/metalink                                            | 4.3 kB     00:00
 * base: centosmirror.go4hosting.in
 * epel: epel.mirror.net.in
 * extras: centosmirror.go4hosting.in
 * updates: centosmirror.go4hosting.in
base                                                     | 3.7 kB     00:00
epel                                                     | 4.3 kB     00:00
epel/primary_db                                          | 5.8 MB     00:00
extras                                                   | 3.4 kB     00:00
updates                                                  | 3.4 kB     00:00
updates/primary_db                                       | 4.0 MB     00:00
Resolving Dependencies
.
.
.
.
Running Transaction
  Installing : sshpass-1.05-1.el6.x86_64                                                1/11
  Installing : python-crypto2.6-2.6.1-2.el6.x86_64                                      2/11
  Installing : python-pyasn1-0.0.12a-1.el6.noarch                                       3/11
  Installing : python-keyczar-0.71c-1.el6.noarch                                        4/11
  Installing : python-simplejson-2.0.9-3.1.el6.x86_64                                   5/11
  Installing : python-httplib2-0.7.7-1.el6.noarch                                       6/11
  Installing : libyaml-0.1.3-4.el6_6.x86_64                                             7/11
  Installing : PyYAML-3.10-3.1.el6.x86_64                                               8/11
  Installing : python-babel-0.9.4-5.1.el6.noarch                                        9/11
  Installing : python-jinja2-2.2.1-2.el6_5.x86_64                                      10/11
  Installing : ansible-1.9.4-1.el6.noarch                                          11/11
  Installed:
  ansible.noarch 0:1.9.4-1.el6
Dependency Installed:
  PyYAML.x86_64 0:3.10-3.1.el6                   libyaml.x86_64 0:0.1.3-4.el6_6
  python-babel.noarch 0:0.9.4-5.1.el6            python-crypto2.6.x86_64 0:2.6.1-2.el6
  python-httplib2.noarch 0:0.7.7-1.el6           python-jinja2.x86_64 0:2.2.1-2.el6_5
  python-keyczar.noarch 0:0.71c-1.el6            python-pyasn1.noarch 0:0.0.12a-1.el6
  python-simplejson.x86_64 0:2.0.9-3.1.el6       sshpass.x86_64 0:1.05-1.el6
Complete!

After the installation completes, we can verify the installed Ansible version by running the command below.

# ansible --version
ansible 1.9.4
  configured module search path = None

Preparing SSH Keys to Remote Hosts

To perform any deployment or management from the local host on remote hosts, we first need to create SSH keys and copy them to the remote hosts. Every remote host will have a user account for Ansible (in your case it may be a different user).

First, let me create an SSH key using the below command and copy the key to remote hosts.

# ssh-keygen -t rsa -b 4096 -C "ansible.hanuman.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ansible_key.
Your public key has been saved in ansible_key.pub.
The key fingerprint is:
28:ae:0c:8d:91:0a:fa:ac:2f:e2:8c:e5:fd:28:4b:c6 ansible.hanuman.com
The key's randomart image is:
+--[ RSA 4096]----+
|                 |
|                 |
|                 |
| .     .         |
|+   . . S        |
|+= . .           |
|= E .            |
|=X.o .           |
|=*Ooo..          |
+-----------------+

After creating the SSH key successfully, copy it to both remote servers. We need a user account for Ansible to connect as; for this demo I am using the root user to perform the Ansible tasks.

# ssh-copy-id root@192.168.87.156
root@192.168.87.156's password:
Now try logging into the machine, with "ssh 'root@192.168.87.156'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
# ssh-copy-id root@192.168.87.157
root@192.168.87.157's password:
Now try logging into the machine, with "ssh 'root@192.168.87.157'", and check in:
.ssh/authorized_keys
To make sure we haven't added extra keys that you weren't expecting.

Copy the SSH Key to the Second Remote Host

After copying the SSH keys to the remote hosts, perform an SSH login to each remote host to check whether key authentication is working.

# ssh root@192.168.87.156
[ansible@localhost ~]# logout
Connection to 192.168.87.156 closed.
# ssh root@192.168.87.157
[ansible@localhost ~]#

Creating Inventory File for Remote Hosts

The inventory file contains the details of the remote hosts that we need to connect to from the local machine. The default inventory file is /etc/ansible/hosts.

Now we will add the two nodes to this file. Open and edit the file using your favorite editor; here I use vim.

# sudo vim /etc/ansible/hosts

Add the following two hosts IP address.

.
.
.
.
[webservers]
192.168.87.156
192.168.87.157

Note: [webservers] in the brackets indicates a group name; it is used to classify nodes into groups so that you can target them together for specific tasks.
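Inventories can define several groups and combine them into parent groups. A minimal sketch written to a temporary file, using the two node IPs from this article (the [dbservers] group and its host 192.168.87.158 are hypothetical, added only for illustration):

```shell
# Write a sample inventory with two groups and a parent group
# (the [dbservers] host 192.168.87.158 is hypothetical, for illustration only)
cat > /tmp/sample_hosts <<'EOF'
[webservers]
192.168.87.156
192.168.87.157

[dbservers]
192.168.87.158

[production:children]
webservers
dbservers
EOF

# Count the host entries in the sample inventory
grep -c '^192\.168\.87\.' /tmp/sample_hosts   # prints 3
```

A command like `ansible -m ping production` would then target all three hosts at once.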

Testing whether Ansible is configured correctly

Now it is time to check our servers by simply pinging them from the local host. To perform the action we use the ‘ansible‘ command with the option ‘-m‘ (module) and a host pattern (a group name, or ‘all‘ for every server).

# ansible -m ping webservers
192.168.87.157 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.87.156 | success >> {
    "changed": false,
    "ping": "pong"
}

OR

# ansible -m ping all
192.168.87.157 | success >> {
    "changed": false,
    "ping": "pong"
}
192.168.87.156 | success >> {
    "changed": false,
    "ping": "pong"
}

In the above example, we’ve used the ping module with the ansible command to ping all remote hosts at once. In the same way, various other modules can be used with Ansible; you can find the available modules on the official Ansible site.

Now we use another module called ‘command‘, which is used to execute shell commands (like df, free, uptime, etc.) on all selected remote hosts in one go. As a demo, you can execute the commands below.

Check the partitions on all web servers.

# ansible -m command -a "df -h" webservers

192.168.87.156 | success | rc=0 >>
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/VolGroup-lv_root
 18G  2.0G   15G  12% /
 tmpfs                 491M     0  491M   0% /dev/shm
 /dev/sda1             477M   42M  411M  10% /boot


192.168.87.157 | success | rc=0 >>
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/VolGroup-lv_root
 18G  2.0G   15G  12% /
 tmpfs                 491M     0  491M   0% /dev/shm
 /dev/sda1             477M   42M  411M  10% /boot

Check memory usage on all web servers.

# ansible -m command -a "free -mt" webservers

192.168.87.156 | success | rc=0 >>
             total       used       free     shared    buffers     cached
Mem:           981        528        453          0         39        322
-/+ buffers/cache:        166        815
Swap:         2047          0       2047
Total:        3029        528       2501

192.168.87.157 | success | rc=0 >>
             total       used       free     shared    buffers     cached
Mem:           981        526        455          0         39        322
-/+ buffers/cache:        164        817
Swap:         2047          0       2047
Total:        3029        526       2503

Checking Uptime for all web servers.

# ansible -m command -a "uptime" webservers
192.168.87.157 | success | rc=0 >>
 21:32:47 up 38 min,  3 users,  load average: 0.03, 0.01, 0.00
192.168.87.156 | success | rc=0 >>
 21:32:47 up 38 min,  3 users,  load average: 0.00, 0.01, 0.03

Check for hostname and Architecture.

# ansible -m command -a "arch" webservers
 192.168.87.156 | success | rc=0 >>
 x86_64
192.168.87.157 | success | rc=0 >>
 x86_64
# ansible -m shell -a "hostname" webservers
192.168.87.157 | success | rc=0 >>
 localhost.localdomain
192.168.87.156 | success | rc=0 >>
 localhost.localdomain

Checking the service status of all web servers

# ansible -m shell -a "service httpd status" webservers
 192.168.87.157 | FAILED | rc=3 >>
 httpd is stopped
192.168.87.156 | FAILED | rc=3 >>
 httpd is stopped

Redirecting the output to a file.

# ansible -m shell -a "service httpd status" webservers > service_status.txt
# cat service_status.txt
 192.168.87.156 | FAILED | rc=3 >>
 httpd is stopped
192.168.87.157 | FAILED | rc=3 >>
 httpd is stopped

To shut down all the web servers.

# ansible -m shell -a "init 0" webservers
 192.168.87.157 | success | rc=0 >>
192.168.87.156 | success | rc=0 >>

In this way, we can run many shell commands through Ansible, just as we did in the steps above.
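Beyond ad-hoc commands, related tasks are usually captured together in a playbook. A minimal sketch targeting the [webservers] group from our inventory (the playbook filename, package and service names are illustrative, not part of the setup above):

```yaml
# site.yml - a hypothetical minimal playbook, run with: ansible-playbook site.yml
- hosts: webservers
  remote_user: root
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present
    - name: Ensure Apache is running
      service:
        name: httpd
        state: started
```

Running it with ansible-playbook applies both tasks to every host in the group, in order, and reports which hosts changed.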

Ansible is a powerful IT automation tool that every Linux admin should know for deploying applications and managing servers in one go. Compared with other automation tools such as Puppet and Chef, Ansible is quite interesting and very easy to set up for a production environment.

GCP: How to increase the disk space of Linux Machine

In this tutorial, we will learn how to expand or increase the Linux root volume without stopping or shutting down the instance, without stopping any services on the virtual machine, and without a restart.

In this, we have 2 steps to be followed.

  1. We have to change the volume size of the disk in the Google Cloud Console, from the Disks menu under Compute Engine.
  2. We have to grow the partition and filesystem on the Linux machine.

Google Console Changes

  • Log in to the Google Console.
  • Click on the Compute Engine menu and then go to Disks.
  • Find the disk whose space we want to increase.
  • Click on Edit, increase the size of the volume as needed and click on Save.


  • It will take around 5 – 8 minutes for the volume size to change on the Google console; after some time we can see the increased size there.

Linux Instance Side.

  • Log in to the Instance using the Credentials and corresponding key pair.
  • Check the existing disk space using the command df -h
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             100G   90G  9.9G  90% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
  • Change the login to the root user by using sudo -i.
  • Run lsblk to show the block devices attached to the instance.
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda    202:0    0  150G  0 disk
└─sda1 202:1    0  100G  0 part /

Here we can see that the block device sda now has 150 GB attached, with a single partition of 100 GB for the root filesystem.

  • Run the growpart command to grow the root partition (/)
# growpart /dev/sda 1

  • Finally, grow the filesystem on the partition with the resize2fs command
# resize2fs /dev/sda1
resize2fs 1.42.3
   Filesystem at /dev/sda1 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 5
    Performing an on-line resize of /dev/sda1 to 18350080 (4k) blocks.
    The filesystem on /dev/sda1 is now 18350080 blocks long.
  • Now if we check the disk space using df, we can observe that it has increased to the size we set in the Google console.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             150G   90G   60G  60% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
AWS: Expanding the Linux root Volume without stopping/shutdown (online).

In this tutorial, we will learn how to expand or increase the Linux root volume without stopping or shutting down the instance, without stopping any services on the instance, and without a restart.

In this, we have 2 steps to be followed.

  1. We have to change the volume size on the AWS console.
  2. We have to increase the partition size on the Linux Instance.

AWS Console Changes

  • Log in to the AWS Console.
  • Click on EC2
  • Find the Instance on which we want to increase the volume.
  • Find the Root partition of the Instance in the description panel.


  • Click on the Root Device, which will show all the device information on the popup.


  • Click on the EBS ID. This will take you to the respective volume under EBS Volumes.
  • Right-Click on the respective volume where we want to increase the disk space.
  • Click on Modify Volume.
  • Once the Modify Volume pop-up appears, increase the size of the volume as needed and click on Modify.


  • It will take around 5 – 8 minutes for the volume size to change on the AWS console; after some time we can see the increased size there.

Linux Instance Side.

  • Log in to the Instance using the Credentials and corresponding key pair.
  • Check the existing disk space using the command df -h
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.9G  7.1G  0.8G  90% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
  • Change the login to the root user by using sudo -i.
  • Run lsblk to show the block devices attached to the instance.
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   10G  0 disk
└─xvda1 202:1    0  7.9G  0 part /

Here we can see that the block device xvda has 10 GB attached, with a single partition of 7.9 GB for the root filesystem.

  • Run the growpart command to grow the root partition (/)
# growpart /dev/xvda 1

  • Finally, grow the filesystem on the partition with the resize2fs command
# resize2fs /dev/xvda1
resize2fs 1.42.3
   Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 5
    Performing an on-line resize of /dev/xvda1 to 18350080 (4k) blocks.
    The filesystem on /dev/xvda1 is now 18350080 blocks long.
  • Now if we check the disk space using df, we can observe that it has increased to the size we set on the AWS console.
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1             10G  7.1G  2.9G  71% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
