Tuesday, June 01, 2021

EFK Fluent Bit stack using a Spring Boot microservice, Kubernetes and Docker with Helm

 Helm is really handy for developers. As you know, when working with Kubernetes we have to create a large number of YAML files for concepts like Deployment, Role, StatefulSet, DaemonSet, Pod, Volume, ConfigMap, Secret, Ingress, Node, Container, RoleBinding, Operator, Service and ServiceAccount (Dr-Sd-PVC-SIN-CROSS) to configure the K8s environment. Furthermore, these things need to be applied in sequence, e.g. Secrets need to be created first so that we can use them in a ConfigMap, and so on. It becomes really difficult to track all of this in sequence as the application grows. To overcome this, Helm comes in handy: it acts as a package manager for Kubernetes. Let me try to explain. Just as we have apt, npm and Homebrew as package managers, in the same way we can collect all the YAML files, package them into one chart and upload it to a Helm repository (private or public). Any developer can then easily consume this package and execute it with a helm command, which makes sure that everything that is mandatory is present and applied in the right order for the K8s setup.
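To make this concrete, the day-to-day workflow with a published chart looks roughly like the sketch below: add a chart repository, refresh the local index, search it, install a release and remove it again. The repository and chart names here are purely illustrative, not part of this blog's setup.

helm repo add myrepo https://example.com/charts
helm repo update
helm search repo myrepo
helm install myrelease myrepo/mychart
helm uninstall myrelease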

In this blog we will try to set up all the components required for EFK (Elasticsearch, Fluent Bit and Kibana) log management.

So let's begin. As stated earlier, since I am using the Windows OS and Minikube was giving me 100% disk usage, I moved to running K8s with Docker Desktop, using the configuration below.

Let's check that we have a clean base K8s environment, i.e. no namespaces except the defaults, and no Elasticsearch, Kibana or Fluent Bit deployed.
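For example (assuming kubectl is already pointing at the Docker Desktop cluster), the commands below should list only the default namespaces and show no Elasticsearch, Kibana or Fluent Bit workloads:

kubectl get namespaces
kubectl get deployments,daemonsets,services --all-namespaces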

Make sure you have Helm installed, which you can verify using the command below.
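If Helm is installed correctly, the following prints the client version (this blog assumes Helm 3, which is what the install syntax used later requires):

helm version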

All the ready-made Helm charts for EFK require more memory than a local machine can spare, so they are not well suited for running on Minikube or Docker Desktop.

So let's create our own Helm chart to install our EFK stack.
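If you want to start from scratch, the chart skeleton can be generated with the standard helm create command (a sketch of the setup; the finished chart is in the GitHub repository linked at the end of this blog):

helm create siddhu-efk-helm-chart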

Please refer to my earlier blogs to understand the basics of Helm:

http://siddharathadhumale.blogspot.com/2021/04/simple-helm-example-for-kubernetes-with.html

http://siddharathadhumale.blogspot.com/2021/04/understanding-of-helm-and-its-use-case.html


We have two folders (charts and templates) and two files (Chart.yaml and values.yaml):

Chart.yaml :- this file contains information such as the chart version, name and its dependencies; in short, it contains the metadata for the chart.
values.yaml :- this file contains the configuration values that can be used inside the application-specific template YAML files. These are default values and we can override them later.
charts/ :- this folder contains the dependencies that our chart has, i.e. if we create a chart to deploy the Elastic stack but it in turn depends on some XYZ chart used to install additional packages such as transaction or persistence support.
templates/ :- this folder contains all the template files used to install/update the application in the K8s cluster using YAML. They pick up the values configured in values.yaml, as sketched right after this list.
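Here is a minimal sketch (not the complete files) of how a template can reference values from values.yaml, using the value names this chart actually defines:

values.yaml (excerpt):

elasticdeployment:
  name: elasticsearch
elasticimage:
  repository: elasticsearch
  tag: 7.3.2
namespace:
  name: kube-logging

templates/siddhu-elastic-stack.yaml (excerpt):

apiVersion: apps/v1
kind: Deployment
metadata:
  # both values come from values.yaml and can be overridden at install time
  name: {{ .Values.elasticdeployment.name }}
  namespace: {{ .Values.namespace.name }}
spec:
  template:
    spec:
      containers:
      - name: {{ .Values.elasticdeployment.name }}
        image: "{{ .Values.elasticimage.repository }}:{{ .Values.elasticimage.tag }}"

When helm template or helm install renders the chart, the placeholders are replaced with these defaults, or with anything passed via --set or a custom values file.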

Remove all the default files from the templates folder.

Create the following files as shown below:-

Note: you can download all of these files from my GitHub repository, linked at the end of this blog.

1- siddhu-elastic-stack :- this file contains the YAML for the Deployment and Service for Elasticsearch.
2- siddhu-fluent-bit-configmap :- this file contains the YAML for the Fluent Bit ConfigMap, i.e. its configuration as key-value pairs.
3- siddhu-fluent-bit-ds :- this file contains the YAML for the Fluent Bit DaemonSet, i.e. to start one Fluent Bit container on each node as soon as the node is up.
4- siddhu-fluent-bit-role :- this file contains the YAML for the ClusterRole for Fluent Bit.
5- siddhu-fluent-bit-role-binding :- this file contains the YAML for binding the ClusterRole and ServiceAccount for Fluent Bit.
6- siddhu-fluent-bit-service-account :- this file contains the YAML for creating the ServiceAccount for Fluent Bit.
7- siddhu-kibana :- this file contains the YAML for the Deployment and Service for Kibana.
8- siddhu-kube-logging :- this file contains the YAML for creating the namespace kube-logging.
9- siddhu-springboot :- this file contains the YAML for the Deployment and Service for our own Spring Boot microservice.
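The resulting chart folder then looks roughly like this (the tests folder holds the default test-connection hook that Helm scaffolds):

siddhu-efk-helm-chart/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── siddhu-elastic-stack.yaml
    ├── siddhu-fluent-bit-configmap.yaml
    ├── siddhu-fluent-bit-ds.yaml
    ├── siddhu-fluent-bit-role.yaml
    ├── siddhu-fluent-bit-role-binding.yaml
    ├── siddhu-fluent-bit-service-account.yaml
    ├── siddhu-kibana.yaml
    ├── siddhu-kube-logging.yaml
    ├── siddhu-springboot.yaml
    └── tests/
        └── test-connection.yaml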

We need to follow a particular sequence of execution:

Step 1: Create a Namespace
kubectl create -f siddhu-kube-logging.yaml
Step 2: Setup Elasticsearch
kubectl create -f siddhu-elastic-stack.yaml
Step 3: Setup Kibana
kubectl create -f siddhu-kibana.yaml
Step 4: Setup Fluent Bit
kubectl create -f siddhu-fluent-bit-service-account.yaml
kubectl create -f siddhu-fluent-bit-role.yaml
kubectl create -f siddhu-fluent-bit-role-binding.yaml
kubectl create -f siddhu-fluent-bit-configmap.yaml
kubectl create -f siddhu-fluent-bit-ds.yaml
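Once these manifests are packaged into the Helm chart, this whole manual sequence collapses into the single command shown later in this blog (Helm applies the rendered resources in a sensible kind order, e.g. Namespace and ServiceAccount before ConfigMaps, RBAC and workloads):

helm install siddhu-efk-helm-chart siddhu-efk-helm-chart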

First, check that we don't have anything already present in our environment.

Now let's run the command below and check that all the YAML is rendered properly (helm template only renders the manifests; it does not create anything in the cluster).

C:\vscode-helm-workspace>helm template siddhu-efk-helm-chart >> helmcreatedyaml.yaml

We will find that all the components that are needed are rendered properly:


---
# Source: siddhu-efk-helm-chart/templates/siddhu-kube-logging.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    app: fluent-bit
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Replace_Dots    On
        Retry_Limit     False

  parsers.conf: |
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache2
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache_error
        Format regex
        Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

    [PARSER]
        Name   nginx
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   json
        Format json
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On

    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit
  labels:
    app: fluent-bit
rules:
- apiGroups: [""]
  resources:
  - pods
  - namespaces
  verbs: ["get", "list", "watch"]
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluent-bit
roleRef:
  kind: ClusterRole
  name: fluent-bit
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: kube-logging
---
# Source: siddhu-efk-helm-chart/templates/siddhu-elastic-stack.yaml
apiVersion: v1
kind: Service
metadata:
  name:  elasticsearch
  namespace: kube-logging
  labels:
    service:  elasticsearch
spec:
  type: NodePort
  selector:
    component:  elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
---
# Source: siddhu-efk-helm-chart/templates/siddhu-kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    service: kibana
spec:
  type: NodePort
  selector:
    run: kibana
  ports:
  - port: 5601
    targetPort: 5601
---
# Source: siddhu-efk-helm-chart/templates/siddhu-springboot.yaml
apiVersion: v1
kind: Service
metadata:
  name: siddhuspringboot
  namespace: kube-logging
  labels:
    service: siddhuspringboot
spec:
  type: NodePort
  selector:
    component: siddhuspringboot
  ports:
  - port: 9898
    targetPort: 9898
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      name: fluent-bit
  template:
    metadata:
      labels:
        name: fluent-bit
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.3.11
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"
---
# Source: siddhu-efk-helm-chart/templates/siddhu-elastic-stack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      component:  elasticsearch
  template:
    metadata:
      labels:
        component:  elasticsearch
    spec:
      containers:
      - name:  elasticsearch
        image: elasticsearch:7.3.2
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
---
# Source: siddhu-efk-helm-chart/templates/siddhu-kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
      - name: kibana
        image: "kibana:7.3.2"
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
---
# Source: siddhu-efk-helm-chart/templates/siddhu-springboot.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: siddhuspringboot
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      component: siddhuspringboot
  template:
    metadata:
      labels:
        component: siddhuspringboot
    spec:
      containers:
      - name: siddhuspringboot
        image: shdhumale/efk-springboot-docker-kubernetes:latest
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9898
          name: http
          protocol: TCP
---
# Source: siddhu-efk-helm-chart/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "RELEASE-NAME-siddhu-efk-helm-chart-test-connection"
  labels:
    helm.sh/chart: siddhu-efk-helm-chart-0.1.0
    app.kubernetes.io/name: siddhu-efk-helm-chart
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['RELEASE-NAME-siddhu-efk-helm-chart:9898']
  restartPolicy: Never

Also try running the helm lint command provided by Helm, which you can use to identify possible issues beforehand.

C:\vscode-helm-workspace>helm lint siddhu-efk-helm-chart
==> Linting siddhu-efk-helm-chart
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed

As shown above we did not get any error.

Now let's use the helm --dry-run flag. This allows a developer to test the configuration before running the final install command.

Use the following --dry-run command to verify our siddhu-efk-helm-chart Helm chart:

helm install siddhu-efk-helm-chart --debug --dry-run siddhu-efk-helm-chart

C:\vscode-helm-workspace>helm install siddhu-efk-helm-chart --debug --dry-run siddhu-efk-helm-chart
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: C:\vscode-helm-workspace\siddhu-efk-helm-chart

NAME: siddhu-efk-helm-chart
LAST DEPLOYED: Tue Jun  1 19:56:47 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
elasticdeployment:
  name: elasticsearch
elasticimage:
  pullPolicy: IfNotPresent
  repository: elasticsearch
  tag: 7.3.2
elasticservice:
  containerPort: 9200
  port: 9200
  targetPort: 9200
  type: NodePort
fluentbitimage:
  pullPolicy: IfNotPresent
  repository: fluent/fluent-bit
  tag: 1.3.11
image:
  pullPolicy: IfNotPresent
  repository: shdhumale/efk-springboot-docker-kubernetes
  tag: latest
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - backend:
        serviceName: chart-example.local
        servicePort: 80
      path: /
  tls: []
kibanadeployment:
  name: kibana
kibanaimage:
  pullPolicy: IfNotPresent
  repository: kibana
  tag: 7.3.2
kibanaservice:
  containerPort: 5601
  port: 5601
  targetPort: 5601
  type: NodePort
kibanaurl:
  urlvalue: http://elasticsearch:9200
namespace:
  name: kube-logging
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  containerPort: 9898
  port: 9898
  targetPort: 9898
  type: NodePort
serviceaccount:
  name: fluent-bit
springbootdeployment:
  name: siddhuspringboot
tolerations: []

HOOKS:
---
# Source: siddhu-efk-helm-chart/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "siddhu-efk-helm-chart-test-connection"
  labels:
    helm.sh/chart: siddhu-efk-helm-chart-0.1.0
    app.kubernetes.io/name: siddhu-efk-helm-chart
    app.kubernetes.io/instance: siddhu-efk-helm-chart
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['siddhu-efk-helm-chart:9898']
  restartPolicy: Never
MANIFEST:
---
# Source: siddhu-efk-helm-chart/templates/siddhu-kube-logging.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    app: fluent-bit
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf

  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        Parser            docker
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off

  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Replace_Dots    On
        Retry_Limit     False

  parsers.conf: |
    [PARSER]
        Name   apache
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache2
        Format regex
        Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   apache_error
        Format regex
        Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$

    [PARSER]
        Name   nginx
        Format regex
        Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name   json
        Format json
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z

    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   On

    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit
  labels:
    app: fluent-bit
rules:
- apiGroups: [""]
  resources:
  - pods
  - namespaces
  verbs: ["get", "list", "watch"]
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluent-bit
roleRef:
  kind: ClusterRole
  name: fluent-bit
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: kube-logging
---
# Source: siddhu-efk-helm-chart/templates/siddhu-elastic-stack.yaml
apiVersion: v1
kind: Service
metadata:
  name:  elasticsearch
  namespace: kube-logging
  labels:
    service:  elasticsearch
spec:
  type: NodePort
  selector:
    component:  elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
---
# Source: siddhu-efk-helm-chart/templates/siddhu-kibana.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    service: kibana
spec:
  type: NodePort
  selector:
    run: kibana
  ports:
  - port: 5601
    targetPort: 5601
---
# Source: siddhu-efk-helm-chart/templates/siddhu-springboot.yaml
apiVersion: v1
kind: Service
metadata:
  name: siddhuspringboot
  namespace: kube-logging
  labels:
    service: siddhuspringboot
spec:
  type: NodePort
  selector:
    component: siddhuspringboot
  ports:
  - port: 9898
    targetPort: 9898
---
# Source: siddhu-efk-helm-chart/templates/siddhu-fluent-bit-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      name: fluent-bit
  template:
    metadata:
      labels:
        name: fluent-bit
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.3.11
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - operator: "Exists"
        effect: "NoExecute"
      - operator: "Exists"
        effect: "NoSchedule"
---
# Source: siddhu-efk-helm-chart/templates/siddhu-elastic-stack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      component:  elasticsearch
  template:
    metadata:
      labels:
        component:  elasticsearch
    spec:
      containers:
      - name:  elasticsearch
        image: elasticsearch:7.3.2
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
---
# Source: siddhu-efk-helm-chart/templates/siddhu-kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
      - name: kibana
        image: "kibana:7.3.2"
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
---
# Source: siddhu-efk-helm-chart/templates/siddhu-springboot.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: siddhuspringboot
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      component: siddhuspringboot
  template:
    metadata:
      labels:
        component: siddhuspringboot
    spec:
      containers:
      - name: siddhuspringboot
        image: shdhumale/efk-springboot-docker-kubernetes:latest
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9898
          name: http
          protocol: TCP

NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services siddhu-efk-helm-chart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

C:\vscode-helm-workspace>

Now we are finally ready to install our Helm chart using the command below; let's check that everything works fine.

helm install siddhu-efk-helm-chart siddhu-efk-helm-chart

C:\vscode-helm-workspace>helm install siddhu-efk-helm-chart siddhu-efk-helm-chart
NAME: siddhu-efk-helm-chart
LAST DEPLOYED: Tue Jun  1 19:59:11 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services siddhu-efk-helm-chart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
  

Now finally check that everything was created properly, as shown below.

C:\vscode-helm-workspace\siddhu-efk-helm-chart>kubectl get all -n kube-logging
NAME                                    READY   STATUS    RESTARTS   AGE
pod/elasticsearch-d4995f6d9-l45l7       1/1     Running   0          43s
pod/fluent-bit-8crx4                    1/1     Running   0          43s
pod/kibana-64ccc79f59-nt5rf             1/1     Running   0          43s
pod/siddhuspringboot-798d499975-f6dxh   1/1     Running   0          43s

NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/elasticsearch      NodePort   10.97.66.215    <none>        9200:31880/TCP   46s
service/kibana             NodePort   10.107.110.60   <none>        5601:32548/TCP   45s
service/siddhuspringboot   NodePort   10.109.110.54   <none>        9898:32022/TCP   45s

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluent-bit   1         1         1       1            1           <none>          45s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elasticsearch      1/1     1            1           44s
deployment.apps/kibana             1/1     1            1           44s
deployment.apps/siddhuspringboot   1/1     1            1           44s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/elasticsearch-d4995f6d9       1         1         1       44s
replicaset.apps/kibana-64ccc79f59             1         1         1       44s
replicaset.apps/siddhuspringboot-798d499975   1         1         1       44s

C:\vscode-helm-workspace\siddhu-efk-helm-chart>

Now, finally, we will open our Kibana and Elasticsearch URLs in the browser.

kubectl get pod -n kube-logging -o wide
kubectl port-forward service/elasticsearch 9200:9200 -n kube-logging
Check that this URL is working: http://localhost:9200
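As an extra check, you can hit the standard Elasticsearch cluster health endpoint through the same port-forward; a healthy single-node setup should report a green or yellow status:

curl http://localhost:9200/_cluster/health?pretty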

And also check that our Kibana is working using the command and URL below.

kubectl port-forward service/kibana 5601:5601 -n kube-logging
Check that this URL is working: http://localhost:5601

Now let's access our Spring Boot application using the command and URL below.

kubectl port-forward service/siddhuspringboot 9898:9898 -n kube-logging
Check that this URL is working: http://localhost:9898/siddhu

Now let's configure our Kibana to show the logs of our microservice. As shown in the image below, we have an index starting with logstash- (this is because Logstash_Format is set to On in the Fluent Bit output configuration).

Once you configure this index pattern in Kibana, you will be able to see our microservice logs in Kibana.
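If the logstash index does not appear, you can confirm from the Elasticsearch side that Fluent Bit is actually writing logstash-* indices (this uses the standard _cat API and assumes the Elasticsearch port-forward from above is still running):

curl "http://localhost:9200/_cat/indices?v"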

Note:- You can download the whole code from the GitHub URL given below.
https://github.com/shdhumale/siddhu-efk-helm-chart

