K3s Installation Guide for Your Kubernetes Sandbox

The best way to learn is by doing. That is the central idea of this blog, which means we also need a way to install and use Kubernetes. This post teaches you precisely that: how to install Kubernetes.

What we will do

I have seen many posts that dive straight into installing Kubernetes without taking the time to explain what exactly they are going to do. That may sound trivial, but I would have benefited hugely from knowing what I was about to do before doing it. So, here it is.

We are not exactly going to install upstream Kubernetes. Plain Kubernetes is hard to install and configure, and it is very resource intensive; maybe I will write a post about installing it someday. Instead, we will use a distribution of Kubernetes called K3s, a lightweight, easy-to-install version of Kubernetes created by a company called Rancher.

Rancher is an awesome company that makes many useful open source and enterprise tools for infrastructure management, mainly around containerization.

K3s, unlike K8s (traditional Kubernetes), is not delivered as a set of separate components. The parts we talked about in the introduction to Kubernetes post are still there, but they are packaged together into a single binary, so it feels like its own thing.

Requirements

Before we start, you will need to make sure you meet these requirements.

First, you need at least one Linux server to install K3s on. I recommend that this be something other than your primary workstation, and make sure it is a disposable environment. While learning Kubernetes you will make a lot of mistakes, so you will need the ability to easily throw away and recreate your environment. Trust me, I know.

Second, and finally, you will need to be able to comfortably navigate the command line of your workstation and your Linux server. Kubernetes has very little utility without the command line. You should also have basic Docker knowledge. I have a post about the basics of Docker here; it includes a downloadable cheat-sheet and a hands-on guide to deploying your first containerized app.

Let's get started

To install K3s, there is 1 simple command. Here it is:

curl -sfL https://get.k3s.io | sh -
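If you want to check that the install worked, the installer sets K3s up as a service. Assuming a systemd-based distribution (which most people will be running), you can check it like this:

sudo systemctl status k3s
# look for "active (running)" in the output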

That is how simple it is: one command and you have a Kubernetes cluster. Before we start using the cluster itself, we should add other nodes to it. If you do not have other nodes to add, that is OK; you can skip this part. For those who do, here is how.

First, you need to obtain the node token. This is like a password for your cluster; you do not want just anybody to be able to join it, right? Here is the command to get it.

sudo cat /var/lib/rancher/k3s/server/node-token

This should print out your node token. Keep that somewhere safe. You will need it.

Now, on each node you want to join to the cluster, run the following command after replacing a few things.

curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_NODE_IP>:6443 K3S_TOKEN=<TOKEN> sh -

You need to replace <MASTER_NODE_IP> with the IP of the master node. Make sure the master node is reachable from the worker node. If not, this will not work. You should also replace <TOKEN> with the token you got above.
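Once the agent install finishes, you can do a quick sanity check. The commands below assume a systemd-based distribution, and the sample output lines (shown as comments) are only placeholders; your node names and versions will differ:

# on the worker node: the agent runs as a service called k3s-agent
sudo systemctl status k3s-agent

# on the master node: the new node should appear (more on this command below)
sudo k3s kubectl get nodes
# NAME       STATUS   ROLES                  AGE   VERSION
# master     Ready    control-plane,master   10m   v1.xx.x+k3s1
# worker-1   Ready    <none>                 2m    v1.xx.x+k3s1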

Using the cluster

Now that we have the whole cluster set up, we need to use it. Unfortunately, that is not as easy as you might think. We are going to use a tool called kubectl to interact with our cluster. kubectl connects to the API server running on the master node, which normally requires a fair amount of configuration. Fortunately, K3s does that for us. You can log into your master node and run the following command.

sudo k3s kubectl get nodes

This is using kubectl, but inside k3s. There are several problems with this, though.

  • That is a lot to type. I will not type that much every time I have to run such a basic command.
  • You have to log into the master node every time you want to interact with the cluster. That is inconvenient.
  • This is not best practice. There are several things you can be doing better, both from a security and an efficiency standpoint.

We can solve all these problems very easily. First, you will have to install kubectl on your workstation. The install instructions vary by OS, but you can go to this page and click on the link that corresponds to your OS.
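As an example, on a Linux workstation (amd64), the install commands from the official Kubernetes documentation look like this; use the page above for other operating systems and architectures:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# confirm the binary works
kubectl version --client

Once you have kubectl installed, here is what you need to do next.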

The kubectl configuration on the master node is stored at /etc/rancher/k3s/k3s.yaml. The contents of the file should look something like this.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTURNNU9ERTVOekF3SGhjTk1qTXhNak14TURBeE9UTXdXaGNOTXpNeE1qSTRNREF4T1RNdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTURNNU9ERTVOekF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRc3k0UXAwSUxBSm4xNmVvUkV0RWZRWlc5cWptMisrR3A3OElBWGdKM1YKeG54ZEVVYVh3dHl5MGd5UGR6eTBKcnVta0ZCVW9DY010dHgvVnVQRlpJU0pvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVhFRytDa0xmeEN5UWdrUDlxeVUwCkFkWFl4K1F3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnRmkxT1puYkZNbVh0azl5d01yUDcrMXd6Q2E0UDJ4L3cKS1pLcWQ3dm1BOW9DSUE5akx3N0VEbVZXL2FwOWdlb05XZXIwSGx6U1VNdDRyZlR2ZTRBTGVRYXcKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJTnVzYks0LzhIMVV3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOekF6T1RneE9UY3dNQjRYRFRJek1USXpNVEF3TVRrek1Gb1hEVEkwTVRJegpNREF3TVRrek1Gb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJPemxqdXZBL0M4Ly9rVEUKMVFlWFR0R3IyRnowTmwzc0x2RHJjd0JHNGRTay85SmVLN1h4ajZJWlJQVUJxemtjWUtGb0laS2Q4NWgvazdCMApRWUJ1ZE0ralNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUTdGVDVVaDVMRGg1QTVKNXI4NGFTWFArRUZNVEFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQTNHSnNIYmNsMlhIbDhoUDRwYnVtTE5BY25ycEFVSmJUdFVOc082QjFnazBDSUJPVW1ZbGIyOE5CaGw5RQpwbnJ1MXlHK1NFbkpHa2tVT3MvN25uRlRZWFhHCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTURNNU9ERTVOekF3SGhjTk1qTXhNak14TURBeE9UTXdXaGNOTXpNeE1qSTRNREF4T1RNdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTURNNU9ERTVOekF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUQ2xaNG9BOW43RmxnU3F0UjBNQ0UrSlFIQmw5dC9LQWUyYVYvWDAxeGsKU0hhNFBrVHFZV3RwWUtHVkVSVHB2Nkh3RElHbzNFVVE4UHFUTDNNZjd4QW5vMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVU94VStWSWVTdzRlUU9TZWEvT0drCmx6L2hCVEV3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUlERTVacXo0WnFEUExIWjczWmhwUE9uNnVkK05ucWYKVTAra1V6WVdBM2dJQWlFQTJmUEtPVE0zNmdiem9WL1kvMDJ0czk3dHZpOU1GSklWZjFiTElRelBhSU09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUdNMVM3WjhrVUh5YTgwTFNhaGRMaWhka2xBUHVlcUMzdXRINWNEN1NmSkNvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFN09XTzY4RDhMei8rUk1UVkI1ZE8wYXZZWFBRMlhld3U4T3R6QUViaDFLVC8wbDRydGZHUApvaGxFOVFHck9SeGdvV2doa3Azem1IK1RzSFJCZ0c1MHp3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=

This is the file we need on our workstation. Here is what you need to do.

First, go to your home folder on your workstation and create a folder called .kube. Inside that folder, create a file called config, with no extension (no dot anywhere in the file name), and paste the contents of /etc/rancher/k3s/k3s.yaml from the master node into it. On its own this will still not work, because you need to change one value in the config file.

Close to the top, where it says server: https://127.0.0.1:6443, change the IP (127.0.0.1) to the IP of your master node. Once you have done that, kubectl should work as intended on your workstation.
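If your workstation can reach the master node over SSH, you can do the copy and the edit from the terminal instead. Here is a minimal sketch; <USER> and <MASTER_NODE_IP> are placeholders, it uses GNU sed syntax, and it assumes your user can run sudo on the master without a password prompt (if not, just copy the file by hand as described above):

mkdir -p ~/.kube
# pull the config from the master (sudo is needed because the file is root-owned)
ssh <USER>@<MASTER_NODE_IP> 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config
# point kubectl at the master instead of localhost
sed -i 's/127.0.0.1/<MASTER_NODE_IP>/' ~/.kube/config
# keep the credentials private (kubectl warns about group/world-readable configs)
chmod 600 ~/.kube/config
# if this lists your nodes, the workstation is configured correctly
kubectl get nodes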

What now?

Now that you have your kubernetes cluster, I am sure you are wondering, "What now?". Here is the answer to that:

Now it is time to learn Kubernetes: how powerful and magical it is, and how it has changed the world.

To do that, you could pick up a Kubernetes course to teach yourself, but I do not recommend it. Trust me, I have tried. You will be sick of Kubernetes in less than an hour. It is incredible how these courses manage to make such a fun topic sound so boring.

On the other hand, you could learn the way I did: through hands-on projects. There are a lot on YouTube, and I will also be publishing posts that guide you through complex projects, each one teaching you a Kubernetes concept in a fun and rewarding way.

One Last Thing

Learning Kubernetes is difficult. You will mess up a lot, and there will be times when you just want to destroy the cluster and create a new one. I have had to do that about 50 times now. Each time was frustrating, because I had to delete the VM, create another one, configure it, install K3s, and then configure kubectl. It is a lot of work that I do not want you to have to do. In that spirit, here is a bash script that will help you create a brand new Kubernetes cluster.

Click here to download

This script works whether you are running it on the master node or an agent node. On the master node, it will uninstall the previous installation of K3s, create a new cluster, and print both the K3s config file and the command to add other agents to the cluster.

IMPORTANT: The output of the script cannot be used right away. You have to replace every occurrence of <IP_ADDRESS> with the IP address of the master node. You can find it with the ip addr command.
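For context, here is a rough sketch of what the master-node path of such a script can look like. This is my illustration of the idea, not the exact script you download above:

#!/bin/bash
# sketch: wipe any existing K3s server, reinstall, and print what you need next

# the K3s installer creates this uninstall script on the server
if [ -x /usr/local/bin/k3s-uninstall.sh ]; then
    /usr/local/bin/k3s-uninstall.sh
fi

# install a fresh K3s server
curl -sfL https://get.k3s.io | sh -

# print the kubeconfig to copy to your workstation
echo "----- kubeconfig (change 127.0.0.1 to <IP_ADDRESS>) -----"
sudo cat /etc/rancher/k3s/k3s.yaml

# print the join command to run on each agent node
TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
echo "----- run this on each agent -----"
echo "curl -sfL https://get.k3s.io | K3S_URL=https://<IP_ADDRESS>:6443 K3S_TOKEN=$TOKEN sh -"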

To use the script, download it onto your Linux server. There are many ways to do this, but the easiest is to right-click the download link and choose "Copy Link Address" (the exact wording depends on your browser). Then, in the Linux terminal, type wget followed by a space, paste the link, and run the command to download the file. If wget is not installed, you will have to install it first; the process differs by distribution, and there are plenty of guides on that.

Now that you have the file, you have to make sure you can execute it. Just run the following command.

chmod +x reinstall.sh

Once all that is done, just run ./reinstall.sh.
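Putting the download, permission change, and run together, the whole sequence looks like this (<SCRIPT_URL> is a placeholder for the download link above):

wget <SCRIPT_URL> -O reinstall.sh
chmod +x reinstall.sh
./reinstall.sh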

Conclusion

This was an explanation of how to install a Kubernetes sandbox environment. I know this is not exactly the Kubernetes most companies run in production, but it is better than nothing. K8s has high resource requirements and is complicated to install and operate, and I have found that K3s behaves very similarly to K8s, so most resources written for K8s work perfectly well with K3s.

Now that you have your own Kubernetes cluster, I would suggest you try these hands-on exercises. They show you how to use ReplicaSets and Deployments in the real world.
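If you want a quick taste right away, you can create a throwaway Deployment straight from your workstation. hello-nginx is just an arbitrary name and nginx is a convenient public image:

kubectl create deployment hello-nginx --image=nginx
kubectl get deployments
kubectl get pods
# clean up when you are done
kubectl delete deployment hello-nginx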

If you think I missed something in this post, want me to cover another topic, or think I should do something differently next time, please leave a comment. Thank you for reading.
