Setting Up FluentBit in a Kubernetes Environment with Elasticsearch and Kibana for Production
Introduction
In modern Kubernetes environments, log management is crucial for monitoring application health, troubleshooting issues, and ensuring security compliance. One of the most efficient logging stacks used for Kubernetes is the EFK stack (Elasticsearch, FluentBit, Kibana).
This guide provides a step-by-step approach to setting up FluentBit for log collection in a Kubernetes cluster, while deploying Elasticsearch and Kibana for centralized log storage and visualization in a production-ready environment.
Why Use FluentBit, Elasticsearch, and Kibana in Kubernetes?
1. FluentBit (Log Collector and Forwarder)
Lightweight and faster than Fluentd, with a much smaller memory and CPU footprint.
Collects logs from Kubernetes pods, services, and nodes.
Parses, filters, and forwards logs efficiently to Elasticsearch.
Built-in Kubernetes integration for auto-labeling and enrichment.
2. Elasticsearch (Log Storage and Search Engine)
A distributed and scalable search engine designed for efficient log storage.
Stores, indexes, and enables real-time querying of log data.
Runs as a StatefulSet in Kubernetes, with persistent volumes for durable storage.
3. Kibana (Visualization and Monitoring)
Provides an interactive web-based interface for searching and analyzing logs.
Helps teams create dashboards and monitor system health.
Simplifies log-based alerting and troubleshooting.
Prerequisites
Before setting up FluentBit, Elasticsearch, and Kibana, ensure that:
✅ You have a Kubernetes cluster (AWS EKS, GKE, AKS, or a self-hosted cluster).
✅ kubectl is installed and configured to interact with the cluster.
✅ Helm is installed (for easier deployment of Elasticsearch and Kibana).
✅ You have sufficient CPU, memory, and storage resources.
Step 1: Deploy Elasticsearch in Kubernetes (Production Setup)
Elasticsearch requires persistent storage and proper resource allocation in production. We will use Helm to deploy it.
1. Add the Helm Repository
helm repo add elastic https://helm.elastic.co
helm repo update
2. Deploy Elasticsearch with Helm
helm install elasticsearch elastic/elasticsearch \
--set replicas=3 \
--set minimumMasterNodes=2 \
--set persistence.enabled=true \
--set resources.requests.cpu=1 \
--set resources.requests.memory=2Gi
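For production it is usually cleaner to keep these settings in a values file instead of a long chain of --set flags. A minimal sketch of such a file follows; the storage class, JVM heap, and sizes are assumptions you should adapt to your cluster (keys such as esJavaOpts and volumeClaimTemplate come from the elastic/elasticsearch chart):

```yaml
# values-prod.yaml -- illustrative production values for the elastic/elasticsearch chart
replicas: 3
minimumMasterNodes: 2
esJavaOpts: "-Xms1g -Xmx1g"   # assumption: heap sized to roughly half the memory request
resources:
  requests:
    cpu: "1"
    memory: 2Gi
  limits:
    cpu: "2"
    memory: 4Gi
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2        # assumption: AWS EBS; substitute your cluster's storage class
  resources:
    requests:
      storage: 50Gi
```

You would then deploy with helm install elasticsearch elastic/elasticsearch -f values-prod.yaml, which makes the configuration reviewable and version-controllable.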
3. Verify Elasticsearch Deployment
kubectl get pods -l app=elasticsearch-master
4. Expose Elasticsearch (Optional for External Access)
kubectl port-forward svc/elasticsearch-master 9200:9200
Now, you can access Elasticsearch at http://localhost:9200. For example, curl http://localhost:9200/_cluster/health?pretty should return the cluster status (aim for green in production).
Step 2: Deploy Kibana in Kubernetes
Kibana will connect to Elasticsearch to visualize logs.
1. Deploy Kibana Using Helm
helm install kibana elastic/kibana --set service.type=ClusterIP
2. Verify Kibana Deployment
kubectl get pods -l app=kibana
3. Expose Kibana UI (Port Forwarding for Testing)
kubectl port-forward svc/kibana-kibana 5601:5601
Now, you can access Kibana at http://localhost:5601.
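Port forwarding is only suitable for testing. For production access, an Ingress in front of the Kibana service is the usual approach. A minimal sketch, assuming an NGINX ingress controller is installed and using a hypothetical hostname:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
spec:
  ingressClassName: nginx        # assumption: ingress-nginx is deployed in the cluster
  rules:
    - host: kibana.example.com   # hypothetical hostname; replace with your own
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kibana   # service name created by the elastic/kibana chart
                port:
                  number: 5601
```

In a real deployment you would also terminate TLS here and put authentication in front of Kibana rather than exposing it directly.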
Step 3: Deploy FluentBit as a DaemonSet in Kubernetes
FluentBit will run as a DaemonSet, ensuring that logs from all nodes in the cluster are collected and forwarded to Elasticsearch.
1. Create a FluentBit Configuration File
Create a ConfigMap for FluentBit configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentbit-config
  namespace: kube-system
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush        5
        Log_Level    info
        Parsers_File parsers.conf

    [INPUT]
        Name   tail
        Path   /var/log/containers/*.log
        Tag    kube.*
        Parser docker
        DB     /var/log/flb_kube.db

    [FILTER]
        Name      kubernetes
        Match     kube.*
        Kube_URL  https://kubernetes.default.svc:443
        Merge_Log On

    [OUTPUT]
        Name            es
        Match           *
        Host            elasticsearch-master
        Port            9200
        Logstash_Format On
        Logstash_Prefix kubernetes-logs
        Type            _doc
  parsers.conf: |
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
Note that the ConfigMap must ship its own parsers.conf: mounting it over /fluent-bit/etc replaces the image's default files, so the docker parser referenced by the tail input has to be defined here. Logstash_Format On creates daily indices named kubernetes-logs-YYYY.MM.DD, which is what the Kibana index pattern in Step 4 will match.
2. Apply the ConfigMap to the Cluster
kubectl apply -f fluentbit-config.yaml
3. Deploy FluentBit DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentbit
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentbit
  template:
    metadata:
      labels:
        name: fluentbit
    spec:
      containers:
        - name: fluentbit
          image: fluent/fluent-bit:latest   # pin a specific version in production
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config-volume
              mountPath: /fluent-bit/etc
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          # on Docker-based nodes, /var/log/containers/*.log are symlinks into this path
          hostPath:
            path: /var/lib/docker/containers
        - name: config-volume
          configMap:
            name: fluentbit-config
4. Apply FluentBit DaemonSet to the Cluster
kubectl apply -f fluentbit-daemonset.yaml
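The kubernetes filter in the FluentBit configuration queries the API server for pod metadata, so on RBAC-enabled clusters the DaemonSet's service account needs read access to pods and namespaces. A minimal sketch follows (the names fluentbit and fluentbit-read are assumptions); after applying it, add serviceAccountName: fluentbit to the DaemonSet's pod spec:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentbit
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentbit-read
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentbit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentbit-read
subjects:
  - kind: ServiceAccount
    name: fluentbit
    namespace: kube-system
```

Without this, the filter can still forward logs but metadata enrichment (pod labels, namespace) may fail with authorization errors in the FluentBit logs.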
5. Verify FluentBit Logs
kubectl logs -l name=fluentbit -n kube-system
If FluentBit is running correctly, it should show log data being forwarded to Elasticsearch.
Step 4: Visualizing Logs in Kibana
1. Access Kibana
If Kibana is not exposed externally, you can use port forwarding:
kubectl port-forward svc/kibana-kibana 5601:5601
Then, open http://localhost:5601 in your browser.
2. Configure Kibana to Read Logs from Elasticsearch
In Kibana, navigate to Management → Stack Management → Index Patterns.
Click Create Index Pattern and enter kubernetes-logs-*.
Select the @timestamp field and save.
3. Explore Kubernetes Logs
Go to Discover in Kibana.
Filter logs using pod names, namespaces, or error messages (for example, kubernetes.namespace_name : "default", using the fields added by the kubernetes filter).
Conclusion
By deploying FluentBit as a DaemonSet, Elasticsearch for storage, and Kibana for visualization, we achieve a scalable, centralized logging system for Kubernetes clusters.