
Installing Grafana with ingress

If you are deploying Prometheus and Grafana on Kubernetes, the easiest way is to use the Helm charts. When you install the charts, though, you will need to provide custom parameters, mainly for storage configuration and for the service configuration that exposes the Grafana UI to the outside world.
In this blog I will share the artifacts I used on AWS EKS. If you want to use them on another platform or in a custom installation, just change the ingress annotations and you will be good to go.

For the basic installation I started from the eks workshop[1]. There, everything is provided on the command line. Instead, I put the values into yaml files and passed each file to helm with the -f option. My helm command looks like this:

helm install --name prometheus stable/prometheus -f prometheus.yaml --namespace prometheus
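
I create the namespace up front and verify the pods after the install, the same way the eks workshop does. A quick sketch:

kubectl create namespace prometheus
# after the install completes:
kubectl get pods -n prometheus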

My prometheus.yaml is shown below. I haven't changed anything from the eks workshop other than moving everything into a yaml file. If you intend to change something, you can do so by adding more content to this yaml, following the structure of the Prometheus chart's values.yaml[2].

alertmanager:
  persistentVolume:
    storageClass: "gp2"
server:
  persistentVolume:
    storageClass: "gp2"
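
To sanity-check Prometheus before moving on to Grafana, you can port-forward the server service and open http://localhost:9090 in a browser. This is just a sketch: the service name prometheus-server comes from the release name used above, and 80 is the chart's default service port.

kubectl port-forward -n prometheus svc/prometheus-server 9090:80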

The tricky part is adding ingress to the Grafana installation. I followed the same approach, customizing values according to the values.yaml of the Grafana Helm chart[3].
A few important notes: ingress.enabled must be set to true, and ingress.path must be set to "/*" so the Grafana UI is served correctly. The other annotations set here are there to support the ALB ingress controller and to enable SSL on the ALB.

You can customize anything else by following the format of the chart's values.yaml.

persistence:
  storageClassName: "gp2"
adminPassword: "EKS!sAWSome"
ingress:
  enabled: true
  path: /*
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:iam::xxxxxxxxx:server-certificate/aws
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    external-dns.alpha.kubernetes.io/hostname: grafana.xxxx.xxx.xxx
service:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.prometheus.svc.cluster.local
        access: proxy
        isDefault: true
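
With those values saved to a file (say grafana.yaml; the file name is up to you), the install command mirrors the Prometheus one. Once the release is up, the ALB hostname shows up on the ingress resource:

helm install --name grafana stable/grafana -f grafana.yaml --namespace grafana
kubectl get ingress -n grafana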

Hope this helps!

References

[1] EKS Workshop
[2] stable/prometheus Helm chart values.yaml
[3] stable/grafana Helm chart values.yaml

