Setting up a Single-Node Kubernetes Cluster on CoreOS Bare Metal

You might already know that there is official documentation to follow for setting up a Kubernetes cluster on CoreOS bare metal. But when doing that, especially for a single-node cluster, I found some gaps in that documentation [1]. Another reason for this blog post is to get everything into one place. So this post will describe how to overcome the issues of setting up a single-node cluster.

Installing CoreOS on bare metal.


You can refer to the documentation [2] to install CoreOS.

The first thing is users. The documentation [2] tells you how to create a user without a password; to log in as that user you will need SSH keys. To create a user with a username and password instead, you can use a cloud-config.yaml file. Here is a sample.

 #cloud-config
 users:
   - name: user
     passwd: $6$SALT$3MUMz4cNIRjQ/Knnc3gXjJLV1vdwFs2nLvh//nGtEh/.li04NodZJSfnc4jeCVHd7kKHGnq5MsenN.tO6Z.Cj/
     groups:
       - sudo
       - docker
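
A minimal sketch of applying this file during installation, assuming the target disk is /dev/sda and the file is saved as cloud-config.yaml (both are assumptions; adjust to your machine):

 # Install CoreOS from the stable channel and bake in the cloud-config
 sudo coreos-install -d /dev/sda -C stable -c cloud-config.yaml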

Here the value for passwd is a password hash. One of the methods below can be used to hash a password [3].

 # On Debian/Ubuntu (via the package "whois")  
 mkpasswd --method=SHA-512 --rounds=4096  
 # OpenSSL (note: this only generates md5crypt. While better than plaintext, it should not be considered fully secure)  
 openssl passwd -1  
 # Python 3 (change the password and salt values)  
 python3 -c "import crypt; print(crypt.crypt('password', '\$6\$SALT\$'))"  
 # Perl (change the password and salt values)  
 perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'  


If you are installing this inside a private network (an office or university network), you may need to set the IP address, DNS, and so on. DNS needs special care: name resolution goes through resolv.conf, and that file keeps getting replaced, so you may need to set it up as below.

Create a file at /etc/systemd/network/static.network with the content below, replacing the values with your network's values.

 [Match]  
 Name=enp2s0  
 [Network]  
 Address=x.x.x.x  
 Gateway=x.x.x.x  
 DNS=x.x.x.x  
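
For example, with hypothetical values for a 192.168.1.0/24 network (note that Address normally includes the prefix length):

 [Match]  
 Name=enp2s0  
 [Network]  
 Address=192.168.1.50/24  
 Gateway=192.168.1.1  
 DNS=192.168.1.1  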

Then restart the network with the command below.

 sudo systemctl restart systemd-networkd  
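
To check that the static configuration took effect, you can inspect the link and the generated resolv.conf:

 networkctl status enp2s0  
 cat /etc/resolv.conf  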

Now your CoreOS installation is ready for installing Kubernetes.


Installing Kubernetes on CoreOS


The official documentation [1] describes how to install a cluster, but what I will explain is how to create a single-node cluster. You can follow the same documentation: when you create certs, create only what's needed for the master node, and then go on and deploy the master node. You will not need the Calico-related steps unless you specifically want to use Calico with Kubernetes.
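For a single-node cluster that means just the CA and the API server (master) certificate. A minimal sketch with openssl, following the approach in [1] (an openssl.cnf holding the SANs for your master IP is assumed to exist):

 # Create a cluster CA  
 openssl genrsa -out ca-key.pem 2048  
 openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"  
 # Create and sign the API server certificate (SANs come from openssl.cnf)  
 openssl genrsa -out apiserver-key.pem 2048  
 openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf  
 openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf  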

On CoreOS, Kubernetes is installed as a systemd service named kubelet. So what you define is the service definition plus the supporting manifest files for that service. Four Kubernetes components are configured as manifests inside /etc/kubernetes/manifests/:

  1. API server
  2. Proxy
  3. Controller Manager
  4. Scheduler
All four of these components start as pods/containers inside the cluster.
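
As an illustration, the scheduler manifest from [1] looks roughly like this (abridged; the hyperkube image tag is a placeholder for whatever version you deploy):

 apiVersion: v1  
 kind: Pod  
 metadata:  
   name: kube-scheduler  
   namespace: kube-system  
 spec:  
   hostNetwork: true  
   containers:  
   - name: kube-scheduler  
     image: quay.io/coreos/hyperkube:<version>  
     command:  
     - /hyperkube  
     - scheduler  
     - --master=http://127.0.0.1:8080  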

Apart from these four manifests, you have also configured the kubelet service itself.
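
For reference, a trimmed-down sketch of that kubelet unit (the exact flags depend on the Kubernetes version used in [1]):

 [Service]  
 ExecStart=/usr/lib/coreos/kubelet-wrapper \  
   --api-servers=http://127.0.0.1:8080 \  
   --register-schedulable=false \  
   --allow-privileged=true \  
   --config=/etc/kubernetes/manifests  
 Restart=always  
 RestartSec=10  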

But with only these configurations, if you try to create a pod, it will not get created; it will fail to schedule, because the cluster has no node available to schedule onto. Usually masters don't schedule pods, which is why this documentation sets master scheduling to false. To turn scheduling on, edit the service definition file /etc/systemd/system/kubelet.service and change --register-schedulable=false to --register-schedulable=true.
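
You can make the change and restart the kubelet in one go, reloading systemd so it picks up the edited unit:

 sudo sed -i 's/--register-schedulable=false/--register-schedulable=true/' /etc/systemd/system/kubelet.service  
 sudo systemctl daemon-reload  
 sudo systemctl restart kubelet  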

Now you will be able to schedule pods on this node.
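
To confirm, check that the node shows up as schedulable and try a test pod (the nginx image here is just an example):

 kubectl get nodes  
 kubectl run test-nginx --image=nginx  
 kubectl get pods -o wide  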

Configuring Docker to use a registry.


The next step is configuring Docker to use a registry. If you have already used Docker on another OS, you know that adding an insecure registry is done using DOCKER_OPTS. One way to configure DOCKER_OPTS on CoreOS is to add it to the /run/flannel_docker_opts.env file, but that gets overridden when the server restarts. For both insecure and properly secured registries, use the method explained in [4].
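
For the insecure-registry case, one persistent approach on CoreOS (along the lines of [4]) is a systemd drop-in for the Docker service; the registry CIDR below is an example value to replace with your own registry address:

 # /etc/systemd/system/docker.service.d/50-insecure-registry.conf  
 [Service]  
 Environment='DOCKER_OPTS=--insecure-registry="10.0.0.0/24"'  

Then reload systemd and restart Docker for the drop-in to take effect:

 sudo systemctl daemon-reload  
 sudo systemctl restart docker  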



References
