
Setting up a Single Node Kubernetes Cluster on CoreOS Bare Metal

You might already know that there is official documentation to follow to set up a Kubernetes cluster on CoreOS bare metal. But when doing that, especially for a single node cluster, I found some gaps in that documentation [1]. Another reason for this blog post is to get everything into one place. So this post describes how to overcome the issues of setting up a single node cluster.

Installing CoreOS on bare metal


You can refer to the documentation [2] to install CoreOS.

The first thing is about users. The documentation [2] tells you how to create a user without a password; to log in as that user you will need SSH keys. To create a user with a username and password instead, you can use a cloud-config.yaml file. Here is a sample.

 #cloud-config
 users:
   - name: user
     passwd: $6$SALT$3MUMz4cNIRjQ/Knnc3gXjJLV1vdwFs2nLvh//nGtEh/.li04NodZJSfnc4jeCVHd7kKHGnq5MsenN.tO6Z.Cj/
     groups:
       - sudo
       - docker

The value for passwd is a password hash. One of the methods below can be used to hash a password [3].

 # On Debian/Ubuntu (via the package "whois")
 mkpasswd --method=SHA-512 --rounds=4096
 # OpenSSL (note: this will only make md5crypt. While better than plaintext, it should not be considered fully secure)
 openssl passwd -1
 # Python 3 (change password and salt values)
 python3 -c "import crypt; print(crypt.crypt('password', '\$6\$SALT\$'))"
 # Perl (change password and salt values)
 perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'


If you are installing this inside a private network (an office or university network), you may need to set the IP address, DNS and so on. DNS in particular needs attention: name resolution goes through resolv.conf, and that file keeps getting replaced, so you may need to set it up as below.

Create a file at /etc/systemd/network/static.network with the content below. Replace enp2s0 with your network interface name and the x.x.x.x values with your network's values.

 [Match]  
 Name=enp2s0  
 [Network]  
 Address=x.x.x.x  
 Gateway=x.x.x.x  
 DNS=x.x.x.x  

Then restart the network with the command below.

 sudo systemctl restart systemd-networkd  

Now your CoreOS installation is ready for installing Kubernetes.


Installing Kubernetes on CoreOS


The official documentation [1] describes how to install a full cluster, but what I will explain is how to create a single node cluster. You can follow the same documentation: when you generate the certificates, create only what is needed for the master node, and then go on and deploy the master node. You will not need the Calico related steps if you don't specifically want to use Calico with Kubernetes. A rough sketch of the certificate generation is shown below.
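This is only a sketch following the pattern in [1], not the exact steps: the file names, the subject alternative names in openssl.cnf, the service IP 10.3.0.1 and the x.x.x.x master IP are placeholders you need to adapt to your own setup.

 # openssl.cnf (sketch) - replace x.x.x.x with the master node's IP
 [req]
 req_extensions = v3_req
 distinguished_name = req_distinguished_name
 [req_distinguished_name]
 [v3_req]
 basicConstraints = CA:FALSE
 keyUsage = nonRepudiation, digitalSignature, keyEncipherment
 subjectAltName = @alt_names
 [alt_names]
 DNS.1 = kubernetes
 DNS.2 = kubernetes.default
 IP.1 = 10.3.0.1
 IP.2 = x.x.x.x

 # Generate a CA, then sign an API server certificate with it
 openssl genrsa -out ca-key.pem 2048
 openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
 openssl genrsa -out apiserver-key.pem 2048
 openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
 openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
   -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf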

On CoreOS, Kubernetes is installed as a service named kubelet. So what you define is the service definition plus the supporting manifest files for that service. Four Kubernetes components are configured as manifests inside /etc/kubernetes/manifests/:

  1. API server
  2. Proxy
  3. Controller Manager
  4. Scheduler
All four of these components will start as pods / containers inside the cluster. A trimmed example of such a manifest is shown below.
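To give an idea of what these manifests look like, here is a trimmed sketch of a static pod manifest for the scheduler. The image tag and flags are illustrative; use the exact manifests from the documentation [1] for your Kubernetes version.

 # /etc/kubernetes/manifests/kube-scheduler.yaml (trimmed sketch, values illustrative)
 apiVersion: v1
 kind: Pod
 metadata:
   name: kube-scheduler
   namespace: kube-system
 spec:
   hostNetwork: true
   containers:
   - name: kube-scheduler
     image: quay.io/coreos/hyperkube:v1.5.4_coreos.0
     command:
     - /hyperkube
     - scheduler
     - --master=http://127.0.0.1:8080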

Apart from these four configurations, you have the kubelet service itself configured as well.

But with only these configurations, if you try to create a pod it will not get created; it will fail to schedule, because there is no node available in the cluster to schedule on. Usually masters don't schedule pods, which is why the documentation sets scheduling on the master to false. To turn scheduling on, edit the service definition file /etc/systemd/system/kubelet.service and change --register-schedulable=false to --register-schedulable=true.
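After changing the flag, reload systemd and restart the kubelet so the change takes effect. A minimal sketch, assuming kubectl is already configured against this master:

 sudo systemctl daemon-reload
 sudo systemctl restart kubelet
 # verify the node now shows up and is schedulable
 kubectl get nodes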

Now you will be able to schedule pods on this node.

Configuring Docker to use a registry


The next step is configuring Docker to use a registry. If you have already used Docker on another OS, you will know that adding an insecure registry is done via DOCKER_OPTS. One way to configure DOCKER_OPTS on CoreOS is to add it to the /run/flannel_docker_opts.env file, but that gets overridden when the server is restarted. For both insecure and properly secured registries, use the method explained in [4].
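As a sketch of what a persistent configuration can look like, a systemd drop-in for the Docker service along the lines below survives restarts. The registry address is a placeholder, and the drop-in path is an assumption here; follow [4] for the recommended approach.

 # /etc/systemd/system/docker.service.d/50-insecure-registry.conf (sketch)
 [Service]
 Environment='DOCKER_OPTS=--insecure-registry="x.x.x.x:5000"'

Then reload systemd and restart Docker:

 sudo systemctl daemon-reload
 sudo systemctl restart docker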



References
