
Installing Grafana with ingress

If you are deploying Prometheus and Grafana on Kubernetes, the easiest way is to use their Helm charts. When you deploy the charts, though, you will need to provide custom parameters, mainly for storage configuration and for the service configuration that exposes the Grafana UI to the outside.
In this blog I will share the artifacts I used on AWS EKS. If you want to use them on another platform or in a custom installation, just change the ingress annotations and you will be good to go.

For the basic installation I started from the EKS workshop[1], where everything is provided on the command line. Instead, I put the values into yaml files and passed each file to the helm command with the -f option. My helm command looks like this:

helm install --name prometheus stable/prometheus -f prometheus.yaml --namespace prometheus
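Note that --name is Helm 2 syntax. On Helm 3 the flag was removed and the release name is the first positional argument, so the equivalent command (assuming Helm 3.2+ for the --create-namespace flag) would be:

helm install prometheus stable/prometheus -f prometheus.yaml --namespace prometheus --create-namespace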

My prometheus.yaml looks like below. I haven’t changed anything from the EKS workshop other than moving everything into a yaml file. If you intend to change something, you can add more content to this yaml following the structure of the Prometheus chart’s values.yaml[2].

alertmanager:
  persistentVolume:
    storageClass: "gp2"
server:
  persistentVolume:
    storageClass: "gp2"
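
Before moving on to Grafana, you can verify the Prometheus installation. Assuming the release name and namespace from the command above, the chart exposes the server through a service named prometheus-server on port 80, so a quick check would be:

kubectl get pods -n prometheus
kubectl port-forward -n prometheus svc/prometheus-server 9090:80

The Prometheus UI should then be reachable at http://localhost:9090.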

The tricky part is adding ingress to the Grafana installation. I followed the same approach and customized the values according to the values.yaml of the Grafana Helm chart[3].
The important points here are that ingress.enabled must be set to true, and ingress.path must be set to “/*” so that the ALB routes every path to the Grafana UI. The other annotations are there to support the ALB ingress controller and to enable SSL on the ALB.

You can customize anything else by following the format of the chart’s values.yaml.

persistence:
  storageClassName: "gp2"
adminPassword: "EKS!sAWSome"
ingress:
  enabled: true
  path: /*
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:iam::xxxxxxxxx:server-certificate/aws
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    external-dns.alpha.kubernetes.io/hostname: grafana.xxxx.xxx.xxx
service:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.prometheus.svc.cluster.local
        access: proxy
        isDefault: true
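
With this saved as grafana.yaml, the installation follows the same pattern as Prometheus (the release name and namespace below are my choices; adjust them to your setup):

helm install --name grafana stable/grafana -f grafana.yaml --namespace grafana

Once the ALB ingress controller has provisioned the load balancer, kubectl get ingress --namespace grafana should show its DNS name, and the external-dns hostname from the annotations should start resolving to it.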

Hope this helps!

References

This article is a continuance of [1] . Purpose of this article is to document the steps, issues and solutions to those issues we have to face when installing gluster in EKS (Elastic Kubernetes Service). For gluster we need a disk to be attached with the K8s node. In EKS easiest way of implementing this is, adding it to the node configuration. So every time a node comes up, it comes up with a disk attached to the defined path. You can use this path in the topology.josn as mentioned in [1] . Next step is to install gluster using the gk-deploy script. The challenge comes here after. To use gluster in pods, you need to define a storage class. The heketi url mentioned in the storage class definition, should be accessible from master node. But the given heketi url is a cluster IP type k8s service. But in EKS deployments masters are managed by AWS and master don't have access to cluster IPs. So how we can solve this? Actually I tried to contact AWS support on this and I didn't got