Logs Monitoring with Loki and Promtail

Learn how to set up a complete log monitoring stack using Loki and Promtail.


Introduction

Log monitoring is crucial for understanding application performance and debugging issues in production.

  • Loki: Developed by Grafana Labs, Loki provides an efficient, low-cost log aggregation system that integrates seamlessly with other monitoring tools such as Prometheus and Grafana.

  • Promtail: A log collector and shipper specifically built to feed logs into Grafana Loki. It reads log files (or system journals), attaches labels, and pushes them to a Loki server.

This guide will walk you through setting up Loki and Promtail.

**Use Docker and Docker Compose for Loki and Promtail**

The directory structure should look like this:


.
├── docker-compose.yml
├── loki
│   ├── docker-compose.yml
│   └── loki-config.yaml
└── promtail
    ├── docker-compose.yml
    └── promtail-config.yaml

Let's define all the services in the main directory's docker-compose.yml file as follows:


version: '3'

networks:
  backend:
    driver: bridge

services:
  
  loki:
    extends:
      file: ./loki/docker-compose.yml
      service: loki
    networks:
      - backend
    volumes:
      - loki_storage:/loki

  promtail:
    extends:
      file: ./promtail/docker-compose.yml
      service: promtail
    networks:
      - backend

volumes:
  loki_storage:

**Note: all of the containers here join the same network, called `backend`, so they can communicate with each other over it.** If you want to know more about Docker networks, check my post about them.

Let's create the files inside the /loki directory.

1. docker-compose.yml


services:
  loki:
    image: grafana/loki:3.0.0
    container_name: loki
    restart: unless-stopped
    environment:
      - TZ=Europe/Berlin # Change this to your timezone
    ports:  # Map Loki to be accessible on your host machine
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/loki-config.yaml:ro
      - loki:/loki/data
    command: -config.file=/etc/loki/loki-config.yaml
    networks:
      - backend

networks:
  backend:
    name: backend  

volumes:
  loki:

2. loki-config.yaml


auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /loki/data
  storage:
    filesystem:
      chunks_directory: /loki/data/chunks
      rules_directory: /loki/data/rules
  replication_factor: 1
  ring:
    kvstore:
      store: memberlist  # recommended for a single node; consider 'etcd' for multi-node setups

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

querier:
  max_concurrent: 500  # Adjust based on CPU and memory

query_scheduler:
  max_outstanding_requests_per_tenant: 1000  # Adjust based on load

frontend:
  max_outstanding_per_tenant: 2000  # Adjust based on load

limits_config:
  retention_period: 168h  # retention is set here (7 days)
  max_global_streams_per_user: 5000
  ingestion_rate_mb: 50
  per_stream_rate_limit: 50MB

compactor:
  retention_enabled: true  # enables retention; the period itself belongs in limits_config
  delete_request_store: filesystem
  retention_delete_delay: 2h
  retention_delete_worker_count: 2

Now let's define the files inside the /promtail directory. (Promtail collects logs from the system and forwards them to Loki.)

1. docker-compose.yml


services:
  promtail:
    image: grafana/promtail:2.9.2
    container_name: promtail
    restart: unless-stopped
    environment:
      - TZ=Europe/Berlin  # Change this to your timezone
    volumes:
      - ./promtail-config.yaml:/etc/promtail/promtail-config.yaml:ro
      - /var/log:/var/log:ro  # In case you need to scrape system logs
      - /home/DB_backup/backup_logfile.log:/home/log/backup_logfile.log:ro     # path of the log file you want to scrape
    command: -config.file=/etc/promtail/promtail-config.yaml
    networks:
      - backend
    ports:
      - "9080:9080"  # Optional: Expose Promtail UI for debugging

networks:
  backend:
    name: backend

2. promtail-config.yaml


server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: database_backup_log
    static_configs:
      - targets:
          - database_backup_log
        labels:
          job: database_backup_log
          __path__: /home/log/backup_logfile.log
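Promtail's `__path__` label also accepts glob patterns, so a single job can follow many files at once. Here is a sketch of an additional scrape job you could append under `scrape_configs` (the `app_logs` name and path are hypothetical):

```yaml
scrape_configs:
  - job_name: app_logs
    static_configs:
      - targets:
          - localhost   # conventional placeholder; not used for file scraping
        labels:
          job: app_logs
          __path__: /var/log/*.log   # glob: follow every .log file in /var/log
```

Remember that any host path referenced by `__path__` must also be mounted into the Promtail container in docker-compose.yml.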

Final step: run the containers. Go to the main directory and run:


docker-compose up -d

Verify that the Loki and Promtail containers are running using:


docker ps

**Verification**

1- Verify Loki's status


curl -s "http://localhost:3100/ready"

Expected output:


ready

2- Manually push a test log entry to Loki (this checks Loki alone, without Promtail)


curl -X POST "http://localhost:3100/loki/api/v1/push" -H "Content-Type: application/json" -d '
{
  "streams": [
    {
      "stream": {
        "job": "manual_test"
      },
      "values": [
        ["'"$(date +%s%N)"'", "This is a test log from manual_test"]
      ]
    }
  ]
}'
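The same push can be scripted. Below is a minimal Python sketch (the helper name `build_push_payload` is mine, not part of Loki) that builds the JSON body the push endpoint expects — each log line is a `[timestamp_ns, line]` pair, with the timestamp as a string of nanoseconds since the epoch:

```python
import json
import time

def build_push_payload(job, lines):
    """Build the JSON body for Loki's /loki/api/v1/push endpoint."""
    ts = str(time.time_ns())  # nanosecond epoch timestamp, as a string
    return {
        "streams": [
            {
                "stream": {"job": job},          # stream labels
                "values": [[ts, line] for line in lines],
            }
        ]
    }

payload = build_push_payload("manual_test", ["This is a test log from manual_test"])
body = json.dumps(payload)
print(body)
```

You can then POST `body` to `http://localhost:3100/loki/api/v1/push` with a `Content-Type: application/json` header, e.g. via `urllib.request` or `requests`.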

Then check that the entry is visible in Loki:


curl -G -s "http://localhost:3100/loki/api/v1/query" --data-urlencode 'query={job="manual_test"}'

3- Manually append a test line to the scraped log file


echo "Test log entry $(date)" >> /your_log_file_path/backup_logfile.log

4- Check Loki: fetch the logs of the last 6 hours


curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={job="database_backup_log"}' \
  --data-urlencode 'limit=10' \
  --data-urlencode "start=$(date -u -d '6 hours ago' +%s%N)"

Optionally, you can list the names of the jobs Loki has received in the last 6 hours:


curl -G -s "http://localhost:3100/loki/api/v1/label/job/values" \
  --data-urlencode "start=$(date -u -d '6 hours ago' +%s%N)" \
  --data-urlencode "end=$(date -u +%s%N)"
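If you query Loki from a script rather than curl, the response nests log lines inside per-stream objects. A small helper (the name `extract_log_lines` is mine) that flattens a `query`/`query_range` response into `(labels, line)` pairs might look like this — shown here against a hand-built sample shaped like Loki's actual response:

```python
def extract_log_lines(response):
    """Flatten a Loki query/query_range JSON response into (labels, line) pairs."""
    out = []
    for stream in response.get("data", {}).get("result", []):
        labels = stream.get("stream", {})          # the stream's label set
        for _ts, line in stream.get("values", []):  # [timestamp_ns, line] pairs
            out.append((labels, line))
    return out

# Sample shaped like a real /loki/api/v1/query_range response:
sample = {
    "status": "success",
    "data": {
        "resultType": "streams",
        "result": [
            {
                "stream": {"job": "database_backup_log"},
                "values": [["1700000000000000000", "Backup finished OK"]],
            }
        ],
    },
}

for labels, line in extract_log_lines(sample):
    print(labels["job"], "->", line)
```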

Optional: Logging in Flask Applications

To enable logging in Flask, create a logger.py file:


import logging
from logging.handlers import RotatingFileHandler

class Logger:
    def __init__(self, name, log_file, level=logging.INFO):
        self.logger = logging.getLogger(name)
        self.logger.setLevel(level)

        # File handler
        file_handler = RotatingFileHandler(log_file, maxBytes=1000000, backupCount=5)
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        file_handler.setFormatter(formatter)

        # Add handler
        if not self.logger.hasHandlers():
            self.logger.addHandler(file_handler)

    def get_logger(self):
        return self.logger

In your Flask application (`__init__.py`):


from project.logger import Logger

app_logger = Logger("flask_app", "/path/to/flask/logs/app.log").get_logger()

app_logger.info("Logger initialized with a class!")
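To see what Promtail will actually ship to Loki, here is a self-contained sketch of the same handler and formatter setup writing a single line (the logger name, message, and temporary file path are illustrative; in the real app the file would be the mounted log path):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Write to a temporary file so the example is self-contained;
# in practice this would be e.g. /path/to/flask/logs/app.log
log_file = os.path.join(tempfile.mkdtemp(), "app.log")

logger = logging.getLogger("flask_app_demo")
logger.setLevel(logging.INFO)
handler = RotatingFileHandler(log_file, maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)

logger.info("Backup job finished")
handler.flush()

with open(log_file) as f:
    line = f.read().strip()
print(line)  # timestamp - flask_app_demo - INFO - Backup job finished
```

Each line Promtail reads from this file becomes one log entry in Loki, labeled by your `scrape_configs` job.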

Conclusion

With Loki and Promtail in place, you now have a powerful log monitoring stack that integrates with Prometheus and Grafana. Whether running locally or in Docker, you can collect, store, and visualize logs efficiently. The next step is to secure Loki access from other PCs with NGINX authentication; check my post about Loki Access with NGINX Authentication for more info.
