Thursday, September 23, 2021

Scraping MongoDB and Our Microservice Using ServiceMonitor and the MongoDB Exporter, with Prometheus and Grafana, in Kubernetes on Docker Containers, Using Helm Charts.

In this example we will do the following things sequentially:

1- We will configure Prometheus using the Kubernetes operator.
2- We will install MongoDB using a Helm chart, and check that we are able to access MongoDB using the port-forward method.
3- We will install the MongoDB exporter using Helm and a ServiceMonitor. We will scrape MongoDB using the MongoDB exporter in Kubernetes with the Prometheus operator and ServiceMonitor using Helm (MongoDB + MongoDB exporter + ServiceMonitor).
4- Finally, we will have our own Spring Boot microservice that uses Micrometer to expose its metrics to Prometheus. We will create a Docker image of this Spring Boot application and upload it to Docker Hub. In this step we will not edit the prometheus.yaml file to point Prometheus directly at our microservice URL for scraping; instead we will use the ServiceMonitor concept in Kubernetes to register our Spring Boot application and scrape its metrics. We will also create a Helm chart for simple execution of this step.

We are using a Windows machine, so you will need the following basic requirements installed:
1- Docker Desktop with Kubernetes
Start Docker Desktop and enable Kubernetes in its settings.

2- Make sure you have Helm installed on Windows and set on your PATH.

3- Make sure you have kubectl installed on Windows, to fire commands as a client against the API server of your Kubernetes environment.
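
Before we start, you can quickly verify that both tools are available on your PATH:

c:\>helm version
c:\>kubectl version --client

Each command should print its client version if the installation and PATH are correct.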

Now let's start with the above steps.

First, check that our Kubernetes environment is clean, i.e. nothing except the default service is deployed:

c:\>kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   79d

1- We will configure Prometheus using the Kubernetes operator

To install the Prometheus operator we can use the Helm repo below.

Add the repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

C:\prometheus-operator-helm-example>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
“prometheus-community” already exists with the same configuration, skipping

C:\prometheus-operator-helm-example>helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "incubator" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈

C:\prometheus-operator-helm-example>

Install the chart:
$ helm install [RELEASE_NAME] prometheus-community/kube-prometheus-stack
i.e.
helm install prometheus prometheus-community/kube-prometheus-stack

C:\prometheus-operator-helm-example>helm install prometheus prometheus-community/kube-prometheus-stack
W0923 13:20:29.962894   10980 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
... (the same PodSecurityPolicy deprecation warning repeats many times; trimmed for readability)
NAME: prometheus
LAST DEPLOYED: Thu Sep 23 13:20:26 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace default get pods -l "release=prometheus"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

C:\prometheus-operator-helm-example>

Install the chart pinned to a fixed version, i.e.:

helm install prometheus prometheus-community/kube-prometheus-stack --version "9.4.1"
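
If you are not sure which chart versions are available, you can list them first:

helm search repo prometheus-community/kube-prometheus-stack --versions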

To uninstall the Prometheus operator using Helm, use the command below:

$ helm uninstall [RELEASE_NAME]

C:\prometheus-operator-helm-example>helm uninstall  prometheus
W0923 13:19:29.237702    7152 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
... (repeated warnings trimmed)
release "prometheus" uninstalled

C:\prometheus-operator-helm-example>

Link to chart
https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

Now let's check that all the pieces related to Prometheus are in place.

1- Pod

C:\prometheus-operator-helm-example>kubectl get pod
NAME                                                     READY   STATUS              RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running             0          5m27s
prometheus-grafana-7b9cdbbfdb-tf5n7                      2/2     Running             0          5m50s
prometheus-kube-prometheus-operator-769b9bb6f5-jw7ws     1/1     Running             0          5m50s
prometheus-kube-state-metrics-76f66976cb-jqgq7           1/1     Running             0          5m50s
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running             0          5m25s
prometheus-prometheus-node-exporter-2l5nq                0/1     RunContainerError   6          5m50s

C:\prometheus-operator-helm-example>kubectl get pod
NAME                                                     READY   STATUS             RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running            0          5m31s
prometheus-grafana-7b9cdbbfdb-tf5n7                      2/2     Running            0          5m54s
prometheus-kube-prometheus-operator-769b9bb6f5-jw7ws     1/1     Running            0          5m54s
prometheus-kube-state-metrics-76f66976cb-jqgq7           1/1     Running            0          5m54s
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running            0          5m29s
prometheus-prometheus-node-exporter-2l5nq                0/1     CrashLoopBackOff   6          5m54s

You can see that our node-exporter pod is not running, so let's inspect it using the command below.

C:\prometheus-operator-helm-example>kubectl describe pod prometheus-prometheus-node-exporter-2l5nq

In its events we find the cause:

Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m1s                   default-scheduler  Successfully assigned default/prometheus-prometheus-node-exporter-2l5nq to docker-desktop
  Normal   Pulled     5m41s (x5 over 6m55s)  kubelet            Container image "quay.io/prometheus/node-exporter:v1.2.2" already present on machine
  Normal   Created    5m39s (x5 over 6m52s)  kubelet            Created container node-exporter
  Warning  Failed     5m38s (x5 over 6m51s)  kubelet            Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount
  Warning  BackOff    114s (x29 over 6m38s)  kubelet            Back-off restarting failed container

To resolve this, create a file called MyValues.yaml and add the values below to it (note that hostRootFsMount must be indented under the prometheus-node-exporter key):

prometheus-node-exporter:
  hostRootFsMount: false

Now rerun the install, passing this values file as an argument.

First undeploy Prometheus using the command below:

helm uninstall prometheus

and then rerun the install command with the values file:

helm install prometheus prometheus-community/kube-prometheus-stack --values=MyValues.yaml
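
Note: instead of uninstalling and reinstalling, you can usually apply changed values to an existing release in one step with helm upgrade:

helm upgrade prometheus prometheus-community/kube-prometheus-stack --values=MyValues.yaml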

C:\prometheus-operator-helm-example>helm install prometheus prometheus-community/kube-prometheus-stack --values=MyValues.yaml
W0923 13:33:49.759411    5368 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
... (repeated warnings trimmed)
NAME: prometheus
LAST DEPLOYED: Thu Sep 23 13:33:46 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace default get pods -l "release=prometheus"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator

Now you will see that all the pods are running properly:

C:\prometheus-operator-helm-example>kubectl get pod
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          2m53s
prometheus-grafana-7b9cdbbfdb-lgttb                      2/2     Running   0          3m14s
prometheus-kube-prometheus-operator-769b9bb6f5-9fsh2     1/1     Running   0          3m14s
prometheus-kube-state-metrics-76f66976cb-vpg5m           1/1     Running   0          3m14s
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          2m50s
prometheus-prometheus-node-exporter-fp8ts                1/1     Running   0          3m14s

C:\prometheus-operator-helm-example>

2- Service

C:\prometheus-operator-helm-example>kubectl get svc
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   4m4s
kubernetes                                ClusterIP   10.96.0.1        <none>        443/TCP                      79d
prometheus-grafana                        ClusterIP   10.111.153.183   <none>        80/TCP                       4m26s
prometheus-kube-prometheus-alertmanager   ClusterIP   10.101.85.15     <none>        9093/TCP                     4m26s
prometheus-kube-prometheus-operator       ClusterIP   10.106.217.73    <none>        443/TCP                      4m26s
prometheus-kube-prometheus-prometheus     ClusterIP   10.106.75.248    <none>        9090/TCP                     4m26s
prometheus-kube-state-metrics             ClusterIP   10.105.66.138    <none>        8080/TCP                     4m26s
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     4m2s
prometheus-prometheus-node-exporter       ClusterIP   10.96.132.240    <none>        9100/TCP                     4m26s

C:\prometheus-operator-helm-example>

Now let's check that we can see the Prometheus and Grafana UIs using port-forwarding, to cross-verify that everything is working fine.

Prometheus :-

C:\prometheus-operator-helm-example>kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Grafana:-

C:\prometheus-operator-helm-example>kubectl port-forward svc/prometheus-grafana 80:80
Forwarding from 127.0.0.1:80 -> 3000
Forwarding from [::1]:80 -> 3000

Default credentials:
userid: admin
password: prom-operator
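
If the defaults were changed, the credentials can also be read back from the secret the chart creates (the secret name prometheus-grafana here follows from our release name):

kubectl get secret prometheus-grafana -o jsonpath="{.data.admin-password}"

The printed value is base64-encoded, so decode it to get the actual password.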

Now let's look at a few things in Grafana. By default it already monitors many things in the Kubernetes environment; since we have node-exporter, it is already scraping node data. Go to the Manage option, select a node dashboard, and the data will be shown to you on the screen.

3- ServiceMonitor

C:\prometheus-operator-helm-example>kubectl get servicemonitor
NAME                                                 AGE
prometheus-kube-prometheus-alertmanager              53m
prometheus-kube-prometheus-apiserver                 53m
prometheus-kube-prometheus-coredns                   53m
prometheus-kube-prometheus-grafana                   53m
prometheus-kube-prometheus-kube-controller-manager   53m
prometheus-kube-prometheus-kube-etcd                 53m
prometheus-kube-prometheus-kube-proxy                53m
prometheus-kube-prometheus-kube-scheduler            53m
prometheus-kube-prometheus-kube-state-metrics        53m
prometheus-kube-prometheus-kubelet                   53m
prometheus-kube-prometheus-node-exporter             53m
prometheus-kube-prometheus-operator                  53m
prometheus-kube-prometheus-prometheus                53m

C:\prometheus-operator-helm-example>

Now let's open one of the ServiceMonitors and check its yaml:

C:\prometheus-operator-helm-example>kubectl get servicemonitor prometheus-kube-prometheus-prometheus -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-09-23T08:04:11Z"
  generation: 1
  labels:
    app: kube-prometheus-stack-prometheus
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: kube-prometheus-stack
    app.kubernetes.io/version: 18.0.12
    chart: kube-prometheus-stack-18.0.12
    heritage: Helm
    release: prometheus
  managedFields:
  - apiVersion: monitoring.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}

The only thing we need to take note of in this yaml is the label called release with the value prometheus, i.e. release: prometheus.

This indicates that Prometheus will monitor/scrape every application that carries this label in its kind: ServiceMonitor yaml. So for every component we want Prometheus to scrape automatically, we set release: prometheus in that component's kind: ServiceMonitor yaml. For example, when we deploy MongoDB we will also have a MongoDB kind: ServiceMonitor yaml; by setting release: prometheus in it, Prometheus will automatically scrape MongoDB once it is deployed, without us adding the MongoDB URL to Prometheus's prometheus.yaml file and restarting it.
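
To make this concrete, below is a minimal sketch of what such a ServiceMonitor yaml looks like (the names my-app and web are illustrative only, not part of this example):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: prometheus   # the label the Prometheus operator matches on
spec:
  selector:
    matchLabels:
      app: my-app         # must match the labels of the Service exposing /metrics
  endpoints:
  - port: web             # named port on that Service
    interval: 30s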

Now the question is: how does Prometheus know that release: prometheus is the key it has to look for in a ServiceMonitor? As we know, a Service has the concept of a "selector", which matches all pods whose labels equal the selector, and the Service connects to those pods when it is called. In the same way, for auto-scraping, the Prometheus operator installs CRDs, and the Prometheus resource carries a selector entry that ensures every ServiceMonitor labelled release: prometheus is scraped automatically.

C:\prometheus-operator-helm-example>kubectl get crd
NAME                                        CREATED AT
alertmanagerconfigs.monitoring.coreos.com   2021-09-23T07:44:03Z
alertmanagers.monitoring.coreos.com         2021-09-23T07:44:04Z
podmonitors.monitoring.coreos.com           2021-09-23T07:44:04Z
probes.monitoring.coreos.com                2021-09-23T07:44:04Z
prometheuses.monitoring.coreos.com          2021-09-23T07:44:04Z
prometheusrules.monitoring.coreos.com       2021-09-23T07:44:05Z
servicemonitors.monitoring.coreos.com       2021-09-23T07:44:05Z
thanosrulers.monitoring.coreos.com          2021-09-23T07:44:05Z

C:\prometheus-operator-helm-example>kubectl get crd  prometheuses.monitoring.coreos.com -o yaml > prometheuses.monitoring.coreos.com.yaml

C:\prometheus-operator-helm-example>

Check for the serviceMonitorSelector tag and its matchLabels value, release: prometheus.
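
Note that the concrete selector values live on the Prometheus custom resource (an instance of this CRD), so you can also inspect it directly (the resource name below follows from our release name):

kubectl get prometheus prometheus-kube-prometheus-prometheus -o yaml

In its spec you should find something like:

serviceMonitorSelector:
  matchLabels:
    release: prometheus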

So far we have installed Prometheus and Grafana in our Kubernetes environment using the operator. Now let's move to the second step.

2- We will install MongoDB using a Helm chart, and check that we are able to access MongoDB using the port-forward method.

There are two ways to install MongoDB:

A- We can use our own service and deployment yaml files.
B- We can use the Helm chart to install MongoDB.

A- Using our own service and deployment yaml files. Use the yaml below and apply it with the kubectl command that follows.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017  

Now execute the command below to create the service and deployment components in the Kubernetes environment:

kubectl apply -f MyMongodb.yaml
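
To quickly verify that MongoDB is reachable, you can port-forward the service we just created (mongodb-service comes from the yaml above) and connect with any MongoDB client on localhost:

kubectl port-forward svc/mongodb-service 27017:27017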

B- Use the Helm chart to install MongoDB.

You can use the URL below for MongoDB installation using Helm:

https://artifacthub.io/packages/helm/bitnami/mongodb

We will use Helm as the easier approach. Use the commands below to install MongoDB:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mongodb bitnami/mongodb

C:\prometheus-operator-helm-example>helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" already exists with the same configuration, skipping

C:\prometheus-operator-helm-example>helm install mongodb bitnami/mongodb
NAME: mongodb
LAST DEPLOYED: Thu Sep 23 15:27:15 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

MongoDB&reg; can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb.default.svc.cluster.local

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

To connect to your database, create a MongoDB&reg; client container:

    kubectl run --namespace default mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.9-debian-10-r0 --command -- bash

Then, run the following command:
    mongo admin --host "mongodb" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/mongodb 27017:27017 &
    mongo --host 127.0.0.1 --authenticationDatabase admin -p $MONGODB_ROOT_PASSWORD

C:\prometheus-operator-helm-example>
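
Note that the commands in the chart NOTES above are written for a bash shell; the export and $(...) syntax will not work in the Windows cmd prompt. One option is to print the secret value directly and decode it yourself (a sketch, assuming our release name mongodb):

kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}"

The printed value is base64-encoded; decode it, for example in PowerShell:

[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("<paste-value-here>"))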

Check that the MongoDB pod and svc are created and running:

C:\prometheus-operator-helm-example>kubectl get pod
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          114m
mongodb-5676fffb8b-9tz97                                 1/1     Running   0          88s
prometheus-grafana-7b9cdbbfdb-lgttb                      2/2     Running   0          114m
prometheus-kube-prometheus-operator-769b9bb6f5-9fsh2     1/1     Running   0          114m
prometheus-kube-state-metrics-76f66976cb-vpg5m           1/1     Running   0          114m
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          114m
prometheus-prometheus-node-exporter-fp8ts                1/1     Running   0          114m

C:\prometheus-operator-helm-example>


C:\prometheus-operator-helm-example>kubectl get svc
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   114m
kubernetes                                ClusterIP   10.96.0.1        <none>        443/TCP                      79d
mongodb                                   ClusterIP   10.107.146.126   <none>        27017/TCP                    116s
prometheus-grafana                        ClusterIP   10.111.153.183   <none>        80/TCP                       115m
prometheus-kube-prometheus-alertmanager   ClusterIP   10.101.85.15     <none>        9093/TCP                     115m
prometheus-kube-prometheus-operator       ClusterIP   10.106.217.73    <none>        443/TCP                      115m
prometheus-kube-prometheus-prometheus     ClusterIP   10.106.75.248    <none>        9090/TCP                     115m
prometheus-kube-state-metrics             ClusterIP   10.105.66.138    <none>        8080/TCP                     115m
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     114m
prometheus-prometheus-node-exporter       ClusterIP   10.96.132.240    <none>        9100/TCP                     115m

C:\prometheus-operator-helm-example>
C:\prometheus-operator-helm-example>kubectl describe svc mongodb
Name:              mongodb
Namespace:         default
Labels:            app.kubernetes.io/component=mongodb
                   app.kubernetes.io/instance=mongodb
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=mongodb
                   helm.sh/chart=mongodb-10.26.3
Annotations:       meta.helm.sh/release-name: mongodb
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.107.146.126
IPs:               10.107.146.126
Port:              mongodb  27017/TCP
TargetPort:        mongodb/TCP
Endpoints:         10.1.0.35:27017
Session Affinity:  None
Events:            <none>

C:\prometheus-operator-helm-example>

3- We will install the MongoDB exporter using Helm and a ServiceMonitor. We will scrape MongoDB using the MongoDB exporter in Kubernetes with the Prometheus operator and ServiceMonitor using Helm (MongoDB + MongoDB Exporter + ServiceMonitor)

Now let's move to our third step, installing the MongoDB exporter. There are many exporters available for exporting application metrics; browse the URLs below for the list. As we want to scrape MongoDB, we will use the MongoDB exporter.

https://prometheus.io/docs/instrumenting/exporters/

https://github.com/dcu/mongodb_exporter

Now if we want to use this exporter we need to do the following things:
1- Deploy the mongodb-exporter using a deployment yaml file; it will expose a /metrics URL from which Prometheus can scrape data.
2- Create a service for this mongodb-exporter using a service yaml file, so that Prometheus can connect to the exporter.
3- Create a ServiceMonitor, so that the exporter is registered with Prometheus via the service-discovery pattern and scraped automatically.

Here again we can take the help of a Helm chart.
Go to the URL below:
https://artifacthub.io/packages/helm/prometheus-community/prometheus-mongodb-exporter

Or you can go to the GitHub URLs given below:
https://github.com/helm/charts/tree/master/stable/prometheus-mongodb-exporter
https://github.com/prometheus-community/helm-charts
https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-mongodb-exporter

As given in the above URLs, we first add the repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Now, as we learned above, to have our MongoDB exporter scraped automatically we need to register it by setting the release: prometheus label on its ServiceMonitor. To do that, let's first create a MyMongoDBServiceMonitoerValues.yaml file using the command below:

helm show values prometheus-community/prometheus-mongodb-exporter

C:\prometheus-operator-helm-example>helm show values prometheus-community/prometheus-mongodb-exporter > MyMongoDBServiceMonitoerValues.yaml

and remove everything else, keeping only the modified values below (note the indentation and the plain double quotes):

mongodb:
  uri: "mongodb://mongodb:27017"

serviceMonitor:
  additionalLabels:
    release: prometheus
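
Before installing, you can optionally render the chart locally and confirm that the generated ServiceMonitor picks up the extra label (helm template only prints the manifests, it does not install anything):

helm template prometheus-mongodb-exporter prometheus-community/prometheus-mongodb-exporter --values=MyMongoDBServiceMonitoerValues.yaml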

Then install prometheus-mongodb-exporter using the command below:

helm install prometheus-mongodb-exporter prometheus-community/prometheus-mongodb-exporter --values=MyMongoDBServiceMonitoerValues.yaml

or equivalently:

helm install prometheus-mongodb-exporter prometheus-community/prometheus-mongodb-exporter -f MyMongoDBServiceMonitoerValues.yaml

C:\prometheus-operator-helm-example>helm install prometheus-mongodb-exporter prometheus-community/prometheus-mongodb-exporter --values=MyMongoDBServiceMonitoerValues.yaml
NAME: prometheus-mongodb-exporter
LAST DEPLOYED: Thu Sep 23 15:55:41 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Verify the application is working by running these commands:

  kubectl port-forward service/prometheus-mongodb-exporter 9216
  curl http://127.0.0.1:9216/metrics

C:\prometheus-operator-helm-example>

Let's check the releases we now have in Helm:

C:\prometheus-operator-helm-example>helm ls
NAME                            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                                   APP VERSION
mongodb                         default         1               2021-09-23 15:27:15.1113123 +0530 IST   deployed        mongodb-10.26.3                         4.4.9
prometheus                      default         1               2021-09-23 13:33:46.5448056 +0530 IST   deployed        kube-prometheus-stack-18.0.12           0.50.0
prometheus-mongodb-exporter     default         1               2021-09-23 15:55:41.3399995 +0530 IST   deployed        prometheus-mongodb-exporter-2.8.1       v0.10.0

Now check that the pod, svc and ServiceMonitor were created properly:

C:\prometheus-operator-helm-example>kubectl get pod
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          143m
mongodb-5676fffb8b-9tz97                                 1/1     Running   0          30m
prometheus-grafana-7b9cdbbfdb-lgttb                      2/2     Running   0          143m
prometheus-kube-prometheus-operator-769b9bb6f5-9fsh2     1/1     Running   0          143m
prometheus-kube-state-metrics-76f66976cb-vpg5m           1/1     Running   0          143m
prometheus-mongodb-exporter-5c9877d7c8-swqr2             1/1     Running   0          2m16s
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          143m
prometheus-prometheus-node-exporter-fp8ts                1/1     Running   0          143m
C:\prometheus-operator-helm-example>kubectl get svc
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   144m
kubernetes                                ClusterIP   10.96.0.1        <none>        443/TCP                      79d
mongodb                                   ClusterIP   10.107.146.126   <none>        27017/TCP                    31m
prometheus-grafana                        ClusterIP   10.111.153.183   <none>        80/TCP                       145m
prometheus-kube-prometheus-alertmanager   ClusterIP   10.101.85.15     <none>        9093/TCP                     145m
prometheus-kube-prometheus-operator       ClusterIP   10.106.217.73    <none>        443/TCP                      145m
prometheus-kube-prometheus-prometheus     ClusterIP   10.106.75.248    <none>        9090/TCP                     145m
prometheus-kube-state-metrics             ClusterIP   10.105.66.138    <none>        8080/TCP                     145m
prometheus-mongodb-exporter               ClusterIP   10.108.134.120   <none>        9216/TCP                     3m29s
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     144m
prometheus-prometheus-node-exporter       ClusterIP   10.96.132.240    <none>        9100/TCP                     145m
C:\prometheus-operator-helm-example>kubectl get servicemonitor
NAME                                                 AGE
prometheus-kube-prometheus-alertmanager              145m
prometheus-kube-prometheus-apiserver                 145m
prometheus-kube-prometheus-coredns                   145m
prometheus-kube-prometheus-grafana                   145m
prometheus-kube-prometheus-kube-controller-manager   145m
prometheus-kube-prometheus-kube-etcd                 145m
prometheus-kube-prometheus-kube-proxy                145m
prometheus-kube-prometheus-kube-scheduler            145m
prometheus-kube-prometheus-kube-state-metrics        145m
prometheus-kube-prometheus-kubelet                   145m
prometheus-kube-prometheus-node-exporter             145m
prometheus-kube-prometheus-operator                  145m
prometheus-kube-prometheus-prometheus                145m
prometheus-mongodb-exporter                          4m3s

Now let's check that the newly created ServiceMonitor has the release label with the value prometheus:

C:\prometheus-operator-helm-example>kubectl get servicemonitor prometheus-mongodb-exporter -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-mongodb-exporter
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-09-23T10:25:43Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: prometheus-mongodb-exporter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-mongodb-exporter
    helm.sh/chart: prometheus-mongodb-exporter-2.8.1
    release: prometheus
  managedFields:
  - apiVersion: monitoring.coreos.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:helm.sh/chart: {}
          f:release: {}
      f:spec:
        .: {}
        f:endpoints: {}
        f:namespaceSelector:
          .: {}
          f:matchNames: {}
        f:selector:
          .: {}
          f:matchLabels:
            .: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/name: {}
    manager: Go-http-client
    operation: Update
    time: "2021-09-23T10:25:43Z"
  name: prometheus-mongodb-exporter
  namespace: default
  resourceVersion: "29646"
  uid: 8f7ea0ac-a645-4db7-96bc-24804e80d11e
spec:
  endpoints:
  - interval: 30s
    port: metrics
    scrapeTimeout: 10s
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app.kubernetes.io/instance: prometheus-mongodb-exporter
      app.kubernetes.io/name: prometheus-mongodb-exporter

Finally, open the Prometheus UI and you will see that our MongoDB exporter data is being scraped.
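
In the Prometheus UI (http://localhost:9090 via the earlier port-forward), open Status -> Targets and confirm that the prometheus-mongodb-exporter endpoint is in the UP state. You can also try a simple query such as the one below (exact metric names may vary with the exporter version):

mongodb_up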

You can also check the metrics URL using port-forward:

C:\prometheus-operator-helm-example>kubectl port-forward svc/prometheus-mongodb-exporter 9216:9216
Forwarding from 127.0.0.1:9216 -> 9216
Forwarding from [::1]:9216 -> 9216

Check in Grafana that we are able to see the MongoDB graphs.

In the next blog we will carry out our fourth step:

4- Finally, we will have our own Spring Boot microservice that uses Micrometer to expose its metrics to Prometheus. We will create a Docker image of this Spring Boot application and upload it to Docker Hub. In this step we will not edit the prometheus.yaml file to point Prometheus directly at our microservice URL for scraping; instead we will use the ServiceMonitor concept in Kubernetes to register our Spring Boot application and scrape its metrics. We will also create a Helm chart for simple execution of this step.


GitHub: https://github.com/shdhumale/prometheus-operator-helm-example.git
