Posts

Showing posts from 2019

Installing Grafana with ingress

If you are deploying Prometheus and Grafana in Kubernetes, the easiest way is to use Helm charts. When deploying the chart, however, you will need to provide custom parameters, mainly for storage configuration and for the service configuration that exposes the Grafana UI to the outside. In this blog I will share the artifacts I used on AWS EKS. If you want to adapt them to another platform or a custom installation, just change the ingress annotations and you will be good to go.

For the basic installation I started with the EKS workshop [1]. There, everything is provided on the command line; instead, I put the values into YAML files and passed them with the -f option of the helm command. My sample helm command is as follows:

helm install --name prometheus stable/prometheus -f prometheus.yaml --namespace prometheus

My prometheus.yaml is shown below. Here I haven't changed anything from the EKS workshop other than moving everything into the YAML file. Bu…
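As an illustration only (the exact file from the post is truncated in this excerpt), a grafana.yaml for the stable/grafana chart exposing the UI through the AWS ALB ingress controller could look roughly like the sketch below. The hostname, storage class, and annotation values are placeholders, and the key names should be checked against the chart's values.yaml for your chart version.

  # grafana.yaml -- hedged sketch of Helm values for stable/grafana
  persistence:
    enabled: true                      # keep dashboards across pod restarts
    storageClassName: gp2              # assumption: default EBS storage class on EKS
    size: 10Gi
  service:
    type: NodePort                     # ALB ingress routes to node ports
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
    hosts:
      - grafana.example.com            # placeholder hostname

  # installed the same way as Prometheus, e.g.:
  # helm install --name grafana stable/grafana -f grafana.yaml --namespace prometheus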

Installing gluster in AWS EKS

This article is a continuation of [1]. Its purpose is to document the steps, the issues, and the solutions to those issues that we had to face when installing Gluster in EKS (Elastic Kubernetes Service).

For Gluster we need a disk attached to each K8s node. In EKS the easiest way to implement this is to add it to the node configuration, so that every time a node comes up, it comes up with a disk attached to the defined path. You can use this path in the topology.json as mentioned in [1]. The next step is to install Gluster using the gk-deploy script.

The challenge comes after that. To use Gluster in pods, you need to define a storage class, and the heketi URL mentioned in the storage class definition should be accessible from the master nodes. But the given heketi URL is a ClusterIP-type k8s service, and in EKS deployments the masters are managed by AWS and do not have access to cluster IPs. So how can we solve this? I actually tried to contact AWS support on this and I didn't get…
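For reference, a GlusterFS storage class that points at heketi looks roughly like the sketch below. The resturl value is a placeholder (an in-cluster ClusterIP address of this kind is exactly what the EKS-managed masters cannot reach, which is the problem described above), and the secret names are assumptions.

  # glusterfs-sc.yaml -- hedged sketch of a GlusterFS StorageClass
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: glusterfs-storage
  provisioner: kubernetes.io/glusterfs
  parameters:
    resturl: "http://10.100.23.42:8080"   # placeholder: heketi ClusterIP service, unreachable from EKS masters
    restauthenabled: "true"
    restuser: "admin"
    secretNamespace: "default"            # assumption: namespace holding the heketi admin secret
    secretName: "heketi-secret"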

Consuming File System artifacts from Kubernetes Pods

When you are deploying an application that writes artifacts to the file system dynamically within Kubernetes (k8s), for example a Tomcat server exposed to the outside so that WAR files can be deployed to it, you need to make sure the file system state is always preserved. Otherwise, if the pod goes down, you might lose data.

One solution is to mount an external disk. You can indeed do that, but how robust is that solution? Say something happens to the external disk: how can you recover the data? Use several disks and rsync to keep them in sync? That sounds robust, but say you want to increase the reliability further: what happens if the rsync process gets killed? How much will it cost to bring its reliability close to 100%?

There is a robust, simple solution: using Gluster to store the data [1] [2]. We install a Gluster pod on each node. An additional disk attached to each node is used as the data storage for Gluster. This disk is formatted in a special forma…
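To make the consumption side concrete, here is a minimal sketch (names, image, and size are placeholders, and the storage class is assumed to be the GlusterFS one from the earlier post) of a PersistentVolumeClaim against the Gluster storage class and a Tomcat container mounting it at its webapps directory, so that deployed WAR files survive pod restarts.

  # tomcat-gluster.yaml -- hedged sketch; names, image and size are placeholders
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: tomcat-webapps
  spec:
    accessModes:
      - ReadWriteMany                   # GlusterFS supports shared read-write access
    storageClassName: glusterfs-storage # assumption: the StorageClass defined for Gluster
    resources:
      requests:
        storage: 5Gi
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: tomcat
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: tomcat
    template:
      metadata:
        labels:
          app: tomcat
      spec:
        containers:
          - name: tomcat
            image: tomcat:9
            volumeMounts:
              - name: webapps
                mountPath: /usr/local/tomcat/webapps   # WAR files land on the Gluster volume
        volumes:
          - name: webapps
            persistentVolumeClaim:
              claimName: tomcat-webapps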