1. Log Rotation Script
Logs can quickly grow large and consume disk space. This script rotates and compresses old log files.
#!/bin/bash
# Rotate and compress logs
LOG_FILE="/var/log/app.log"
BACKUP_DIR="/var/log/backup"
TIMESTAMP=$(date +"%Y%m%d")
mkdir -p "$BACKUP_DIR"
mv "$LOG_FILE" "$BACKUP_DIR/log_$TIMESTAMP.log"
gzip "$BACKUP_DIR/log_$TIMESTAMP.log"
touch "$LOG_FILE"
echo "Log rotation completed."
- Moves old logs to a backup directory and compresses them.
- Creates a new empty log file.
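A rotation script like this is normally scheduled rather than run by hand. A minimal sketch of a crontab entry that runs it nightly at midnight, assuming the script were saved at a hypothetical path such as /usr/local/bin/rotate_logs.sh:
# Hypothetical crontab entry (edit with `crontab -e`): rotate logs every night at 00:00
# The script path below is an assumption, not part of the original script
0 0 * * * /usr/local/bin/rotate_logs.sh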
2. Automated Log Cleanup Script
To avoid excessive disk usage, this script deletes logs older than 30 days.
#!/bin/bash
# Delete logs older than 30 days
LOG_DIR="/var/log/backup"
find "$LOG_DIR" -type f -name "*.log.gz" -mtime +30 -exec rm -f {} \;
echo "Old logs cleaned up."
- Uses find to locate and delete old logs based on their age.
- Prevents unnecessary disk consumption.
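Because rm -f is irreversible, it can be worth previewing what the expression matches before enabling deletion. A minimal dry-run sketch that reuses the same find criteria with -print instead of -exec rm:
#!/bin/bash
# Dry run: list the files that would be deleted, without removing anything
LOG_DIR="/var/log/backup"
find "$LOG_DIR" -type f -name "*.log.gz" -mtime +30 -print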
3. Real-Time Log Monitoring Script
This script continuously monitors a log file for specific keywords (e.g., “ERROR”).
#!/bin/bash
# Monitor logs for errors
LOG_FILE="/var/log/app.log"
tail -F "$LOG_FILE" | grep --line-buffered "ERROR" | while read -r line; do
echo "Alert: $line"
done
- Uses tail -F to follow the log in real time.
- Triggers alerts when an error is detected.
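Printing alerts to stdout only helps when someone is watching the terminal. A minimal sketch of a variant that forwards each match to syslog with the standard logger utility (the log-monitor tag is an arbitrary choice, not something the original script defines):
#!/bin/bash
# Variant: send ERROR lines to syslog instead of printing them
LOG_FILE="/var/log/app.log"
tail -F "$LOG_FILE" | grep --line-buffered "ERROR" | while read -r line; do
logger -t log-monitor "Alert: $line"
done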
4. Log Archiving Script
Archives logs weekly into a separate storage directory.
#!/bin/bash
# Archive logs weekly
LOG_DIR="/var/log"
ARCHIVE_DIR="/var/log/archive"
mkdir -p "$ARCHIVE_DIR"
tar -czf "$ARCHIVE_DIR/logs_$(date +"%Y%m%d").tar.gz" "$LOG_DIR"/*.log
echo "Logs archived."
- Compresses all logs into a single archive file.
- Useful for long-term log retention.
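The script itself does not schedule anything; the weekly cadence comes from cron. A minimal sketch, assuming the script were saved at a hypothetical path such as /usr/local/bin/archive_logs.sh:
# Hypothetical crontab entry: archive logs every Sunday at 02:00
0 2 * * 0 /usr/local/bin/archive_logs.sh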
5. Log Parsing and Summary Script
Summarizes log files to identify common errors and warnings.
#!/bin/bash
# Parse logs and summarize errors
LOG_FILE="/var/log/app.log"
ERROR_COUNT=$(grep -c "ERROR" "$LOG_FILE")
WARNING_COUNT=$(grep -c "WARNING" "$LOG_FILE")
echo "Errors: $ERROR_COUNT, Warnings: $WARNING_COUNT"
- Counts the occurrences of specific log patterns.
- Helps in quick troubleshooting.
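Raw counts tell you how many errors occurred, but not which ones. A small extension sketch that lists the most frequent ERROR lines using standard sort and uniq:
#!/bin/bash
# Extension sketch: show the five most frequent ERROR lines
LOG_FILE="/var/log/app.log"
grep "ERROR" "$LOG_FILE" | sort | uniq -c | sort -rn | head -5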
6. Automated Log Transfer Script
Transfers logs to a centralized log server for analysis.
#!/bin/bash
# Transfer logs to remote server
LOG_FILE="/var/log/app.log"
REMOTE_SERVER="logserver.example.com"
scp "$LOG_FILE" "user@$REMOTE_SERVER:/var/log/central/"
echo "Log file transferred."
- Uses scp to send logs to a remote location.
- Ensures centralized log storage for analysis.
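The script prints a success message even if the copy fails. A minimal sketch of a variant that uses rsync over SSH and checks the exit status (the server name and destination path are taken from the script above; key-based authentication is assumed):
#!/bin/bash
# Variant: transfer with rsync and report failures
LOG_FILE="/var/log/app.log"
REMOTE_SERVER="logserver.example.com"
if rsync -az -e ssh "$LOG_FILE" "user@$REMOTE_SERVER:/var/log/central/"; then
echo "Log file transferred."
else
echo "Log transfer failed." >&2
exit 1
fi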
7. Log Integrity Check Script
Verifies log integrity using checksum comparison.
#!/bin/bash
# Check log integrity
LOG_FILE="/var/log/app.log"
CHECKSUM_FILE="/var/log/app.log.md5"
md5sum -c "$CHECKSUM_FILE"
- Uses MD5 checksums to verify if logs have been tampered with.
- Useful for security-sensitive environments.
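md5sum -c only works if a baseline checksum was recorded earlier. A minimal sketch of generating that file (sha256sum works the same way if a stronger hash is preferred):
#!/bin/bash
# Record the baseline checksum while the log is in a known-good state
LOG_FILE="/var/log/app.log"
md5sum "$LOG_FILE" > "$LOG_FILE.md5"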
8. Failed Login Attempt Logger
Logs and alerts for multiple failed login attempts from the authentication logs.
#!/bin/bash
# Monitor failed SSH logins
LOG_FILE="/var/log/auth.log"
THRESHOLD=5
grep "Failed password" $LOG_FILE | awk '{print $(NF-3)}' | sort | uniq -c | while read count ip; do
if [ $count -gt $THRESHOLD ]; then
echo "Alert: $ip has $count failed SSH attempts!"
fi
done
- Detects brute-force attacks by monitoring failed login attempts.
- Can be modified to trigger alerts or block IPs.
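One possible way to act on these alerts, as the last bullet suggests, is to drop traffic from the offending addresses. A hedged sketch using iptables (requires root, and automated blocking rules should be reviewed carefully before being put into production):
#!/bin/bash
# Sketch: block IPs that exceed the failed-login threshold (assumes iptables is available)
LOG_FILE="/var/log/auth.log"
THRESHOLD=5
grep "Failed password" "$LOG_FILE" | awk '{print $(NF-3)}' | sort | uniq -c | while read -r count ip; do
if [ "$count" -gt "$THRESHOLD" ]; then
echo "Blocking $ip after $count failed SSH attempts"
iptables -A INPUT -s "$ip" -j DROP
fi
done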
9. Log File Size Monitoring Script
Checks whether a log file exceeds a size threshold and warns if it does.
#!/bin/bash
# Monitor log file size
LOG_FILE="/var/log/app.log"
MAX_SIZE=5000000 # 5MB
CURRENT_SIZE=$(stat -c%s "$LOG_FILE")
if [ "$CURRENT_SIZE" -gt "$MAX_SIZE" ]; then
echo "Warning: $LOG_FILE exceeds $MAX_SIZE bytes!"
fi
- Checks if the log file exceeds a predefined size.
- Can be expanded to rotate or archive the log automatically.
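To act on the warning automatically, the size check can be combined with the rotation logic from script 1. A minimal sketch, reusing the backup directory assumed there:
#!/bin/bash
# Sketch: rotate the log automatically when it exceeds the size limit
LOG_FILE="/var/log/app.log"
BACKUP_DIR="/var/log/backup"
MAX_SIZE=5000000 # 5 MB
CURRENT_SIZE=$(stat -c%s "$LOG_FILE")
if [ "$CURRENT_SIZE" -gt "$MAX_SIZE" ]; then
mkdir -p "$BACKUP_DIR"
TIMESTAMP=$(date +"%Y%m%d%H%M%S")
mv "$LOG_FILE" "$BACKUP_DIR/app_$TIMESTAMP.log"
gzip "$BACKUP_DIR/app_$TIMESTAMP.log"
touch "$LOG_FILE"
fi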
10. Log Synchronization with Cloud Storage
Automatically uploads logs to an S3 bucket for long-term retention.
#!/bin/bash
# Sync logs to AWS S3
LOG_DIR="/var/log"
BUCKET_NAME="my-log-bucket"
aws s3 sync "$LOG_DIR" "s3://$BUCKET_NAME" --exclude "*" --include "*.log"
echo "Logs synchronized to S3."
- Uses AWS CLI to sync logs to an S3 bucket.
- Supports backups and compliance with log retention policies.
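The echo above runs even if the upload fails. A minimal sketch that only reports success when the sync actually succeeded, using the same AWS CLI command and bucket name:
#!/bin/bash
# Variant: check the result of the S3 sync before reporting success
LOG_DIR="/var/log"
BUCKET_NAME="my-log-bucket"
if aws s3 sync "$LOG_DIR" "s3://$BUCKET_NAME" --exclude "*" --include "*.log"; then
echo "Logs synchronized to S3."
else
echo "S3 sync failed." >&2
exit 1
fi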
Conclusion
Effective log management is crucial for maintaining system health, troubleshooting issues, and ensuring security. The scripts shared in this blog help automate essential log management tasks, such as rotation, monitoring, archiving, and transferring logs to a centralized location. By implementing these solutions, DevOps engineers can prevent log overflow, optimize storage, and enhance security without manual intervention.
For enterprise-level log management, integrating these scripts with tools like ELK Stack (Elasticsearch, Logstash, Kibana), Prometheus, Grafana, or cloud-based solutions (AWS CloudWatch, Splunk, or Datadog) can provide real-time visualization and alerting capabilities.
By automating log handling, teams can focus on proactive monitoring, improve incident response times, and maintain system reliability—a key aspect of a well-architected DevOps strategy.
Give me your heart 💖
If you found this blog helpful for your interviews or for learning log management automation, please hit the heart button (10 times if you can!) and drop a comment! Your support motivates me to create more content on DevOps and related topics. ❤️
If you'd like to connect or discuss more on this topic, feel free to reach out on LinkedIn.
Linkedin: linkedin.com/in/musta-shaik