Appendix C: Sample Scripts for Daily Tasks
Introduction to Automation Through Shell Scripting
In the realm of Linux administration and daily computing tasks, the power of automation cannot be overstated. Shell scripting transforms repetitive, time-consuming activities into streamlined, automated processes that execute with precision and consistency. This appendix serves as your practical toolkit, containing carefully crafted scripts that address common daily tasks encountered by Linux users, system administrators, and developers alike.
The beauty of shell scripting lies not merely in its ability to execute commands sequentially, but in its capacity to make intelligent decisions, handle errors gracefully, and adapt to varying conditions. Each script presented in this collection has been designed with real-world scenarios in mind, incorporating best practices for error handling, logging, and user interaction.
As we delve into these sample scripts, you'll discover how seemingly complex tasks can be broken down into logical, manageable components. Whether you're managing system backups, monitoring server resources, or organizing file structures, these scripts will serve as both functional tools and educational examples that you can modify and expand upon to suit your specific needs.
System Maintenance Scripts
Automated System Backup Script
The cornerstone of any robust system administration strategy is reliable, automated backups. The following script demonstrates a comprehensive approach to system backup that includes compression, rotation, and notification features.
#!/bin/bash
# System Backup Script
# Description: Creates compressed backups of specified directories with rotation
# Author: System Administrator
# Version: 2.1
# Configuration Variables
BACKUP_SOURCE="/home /etc /var/www"
BACKUP_DEST="/backup/daily"
BACKUP_NAME="system_backup_$(date +%Y%m%d_%H%M%S)"
RETENTION_DAYS=7
LOG_FILE="/var/log/backup.log"
EMAIL_NOTIFY="admin@example.com"
# Function to log messages with timestamps
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}
# Function to send email notifications
send_notification() {
local subject="$1"
local message="$2"
echo "$message" | mail -s "$subject" "$EMAIL_NOTIFY"
}
# Check if backup destination exists
if [ ! -d "$BACKUP_DEST" ]; then
log_message "Creating backup directory: $BACKUP_DEST"
mkdir -p "$BACKUP_DEST" || {
log_message "ERROR: Failed to create backup directory"
exit 1
}
fi
# Start backup process
log_message "Starting system backup: $BACKUP_NAME"
start_time=$(date +%s)
# Create compressed archive
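# Note: BACKUP_SOURCE is intentionally left unquoted below so each space-separated path is passed to tar as a separate argument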
tar -czf "$BACKUP_DEST/$BACKUP_NAME.tar.gz" $BACKUP_SOURCE 2>/dev/null
# Check backup success
if [ $? -eq 0 ]; then
end_time=$(date +%s)
duration=$((end_time - start_time))
backup_size=$(du -h "$BACKUP_DEST/$BACKUP_NAME.tar.gz" | cut -f1)
log_message "Backup completed successfully"
log_message "Backup size: $backup_size"
log_message "Duration: ${duration} seconds"
# Cleanup old backups
log_message "Cleaning up backups older than $RETENTION_DAYS days"
find "$BACKUP_DEST" -name "system_backup_*.tar.gz" -mtime +$RETENTION_DAYS -delete
send_notification "Backup Success" "System backup completed successfully. Size: $backup_size, Duration: ${duration}s"
else
log_message "ERROR: Backup failed"
send_notification "Backup Failed" "System backup failed. Check log file: $LOG_FILE"
exit 1
fi
Command Explanation:
- tar -czf: Creates a compressed gzip archive
  - -c: Create archive
  - -z: Compress with gzip
  - -f: Specify filename
- tee -a: Writes output to stdout and appends it to the log file
- find ... -mtime +$RETENTION_DAYS -delete: Removes files older than specified days
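Before relying on the retention rule, it can be worth previewing exactly which archives it would remove. A minimal dry run, using the script's default paths, simply swaps -delete for -print:
# Preview the archives the retention rule would delete (no files are removed)
find /backup/daily -name "system_backup_*.tar.gz" -mtime +7 -print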
Notes:
- Modify BACKUP_SOURCE to include directories relevant to your system
- Ensure the script has execute permissions: chmod +x backup_script.sh
- Consider running via cron for automation: 0 2 * * * /path/to/backup_script.sh
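It is also prudent to verify an archive occasionally, before you actually need it. A short sketch, assuming the default backup destination:
# Test that the most recent archive is an intact, readable gzip/tar file
latest=$(ls -t /backup/daily/system_backup_*.tar.gz 2>/dev/null | head -1)
if [ -n "$latest" ]; then
gzip -t "$latest" && tar -tzf "$latest" > /dev/null && echo "Archive OK: $latest"
fi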
System Health Monitor
Proactive system monitoring prevents small issues from becoming critical problems. This script provides comprehensive system health checks with intelligent alerting.
#!/bin/bash
# System Health Monitor
# Description: Monitors system resources and sends alerts when thresholds are exceeded
# Usage: Run via cron every 5-10 minutes
# Configuration
CPU_THRESHOLD=80
MEMORY_THRESHOLD=85
DISK_THRESHOLD=90
LOAD_THRESHOLD=2.0
LOG_FILE="/var/log/system_health.log"
ALERT_EMAIL="sysadmin@example.com"
# Colors for output
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
# Function to log with timestamp
log_entry() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}
# Function to send alert
send_alert() {
local alert_type="$1"
local message="$2"
local hostname=$(hostname)
echo "ALERT: $alert_type on $hostname" | mail -s "System Alert: $alert_type" "$ALERT_EMAIL"
log_entry "ALERT: $alert_type - $message"
}
# Check CPU usage
check_cpu() {
local cpu_usage=$(top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}')
local cpu_int=${cpu_usage%.*}
if [ "$cpu_int" -gt "$CPU_THRESHOLD" ]; then
echo -e "${RED}CPU Usage: ${cpu_usage}% (Threshold: ${CPU_THRESHOLD}%)${NC}"
send_alert "High CPU Usage" "CPU usage is ${cpu_usage}%"
return 1
else
echo -e "${GREEN}CPU Usage: ${cpu_usage}%${NC}"
return 0
fi
}
# Check memory usage
check_memory() {
local memory_info=$(free | grep Mem)
local total_mem=$(echo $memory_info | awk '{print $2}')
local used_mem=$(echo $memory_info | awk '{print $3}')
local memory_percent=$((used_mem * 100 / total_mem))
if [ "$memory_percent" -gt "$MEMORY_THRESHOLD" ]; then
echo -e "${RED}Memory Usage: ${memory_percent}% (Threshold: ${MEMORY_THRESHOLD}%)${NC}"
send_alert "High Memory Usage" "Memory usage is ${memory_percent}%"
return 1
else
echo -e "${GREEN}Memory Usage: ${memory_percent}%${NC}"
return 0
fi
}
# Check disk usage
check_disk() {
local alert_sent=0
while IFS= read -r line; do
local usage=$(echo "$line" | awk '{print $5}' | sed 's/%//')
local partition=$(echo "$line" | awk '{print $6}')
if [ "$usage" -gt "$DISK_THRESHOLD" ]; then
echo -e "${RED}Disk Usage: ${partition} ${usage}% (Threshold: ${DISK_THRESHOLD}%)${NC}"
send_alert "High Disk Usage" "Partition ${partition} usage is ${usage}%"
alert_sent=1
else
echo -e "${GREEN}Disk Usage: ${partition} ${usage}%${NC}"
fi
done < <(df -hP | grep -E '^/dev/' | grep -v tmpfs)
return $alert_sent
}
# Check system load
check_load() {
local load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
local load_comparison=$(echo "$load_avg > $LOAD_THRESHOLD" | bc -l)
if [ "$load_comparison" -eq 1 ]; then
echo -e "${RED}Load Average: ${load_avg} (Threshold: ${LOAD_THRESHOLD})${NC}"
send_alert "High Load Average" "Load average is ${load_avg}"
return 1
else
echo -e "${GREEN}Load Average: ${load_avg}${NC}"
return 0
fi
}
# Main execution
echo "=== System Health Check - $(date) ==="
log_entry "Starting system health check"
check_cpu
cpu_status=$?
check_memory
memory_status=$?
check_disk
disk_status=$?
check_load
load_status=$?
# Summary
if [ $cpu_status -eq 0 ] && [ $memory_status -eq 0 ] && [ $disk_status -eq 0 ] && [ $load_status -eq 0 ]; then
echo -e "\n${GREEN}All systems normal${NC}"
log_entry "All systems normal"
else
echo -e "\n${YELLOW}Some systems require attention${NC}"
log_entry "Alert conditions detected"
fi
echo "=== End Health Check ==="
Command Explanation:
- top -bn1: Runs top in batch mode for one iteration
- free: Displays memory usage information
- df -hP: Shows disk space usage in human-readable format; -P keeps each filesystem on one line so the fields parse reliably
- uptime: Shows system load averages
- bc -l: Calculator for floating-point arithmetic
Notes:
- Adjust threshold values based on your system requirements
- Add to crontab for regular monitoring: */10 * * * * /path/to/health_monitor.sh
- Ensure the mail command is configured for email notifications
- The load check requires the bc utility; a bc-free alternative is sketched below
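If bc is not installed (it is absent from some minimal systems), the same floating-point comparison can be done with awk alone. A minimal sketch of an equivalent load check:
# Compare the 1-minute load average against a threshold without bc
LOAD_THRESHOLD=2.0
load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')
if awk -v load="$load_avg" -v limit="$LOAD_THRESHOLD" 'BEGIN {exit !(load > limit)}'; then
echo "Load average ${load_avg} exceeds threshold ${LOAD_THRESHOLD}"
fi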
File Management Scripts
Intelligent File Organizer
File organization becomes effortless with automated sorting based on file types, dates, and sizes. This script demonstrates advanced file handling techniques.
#!/bin/bash
# Intelligent File Organizer
# Description: Organizes files by type, date, and size with duplicate detection
# Usage: ./organize_files.sh [source_directory] [destination_directory]
# Configuration
DEFAULT_SOURCE="$HOME/Downloads"
DEFAULT_DEST="$HOME/Organized"
LOG_FILE="$HOME/file_organizer.log"
# File type associations
declare -A FILE_TYPES=(
["images"]="jpg jpeg png gif bmp tiff svg webp"
["documents"]="pdf doc docx txt rtf odt"
["videos"]="mp4 avi mkv mov wmv flv webm"
["audio"]="mp3 wav flac aac ogg m4a"
["archives"]="zip rar tar gz 7z bz2 xz"
["code"]="py js html css php java cpp c sh"
["spreadsheets"]="xls xlsx csv ods"
)
# Function to log messages
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}
# Function to get file extension
get_extension() {
local filename="$1"
echo "${filename##*.}" | tr '[:upper:]' '[:lower:]'
}
# Function to determine file category
get_file_category() {
local extension="$1"
for category in "${!FILE_TYPES[@]}"; do
if [[ " ${FILE_TYPES[$category]} " =~ " $extension " ]]; then
echo "$category"
return 0
fi
done
echo "miscellaneous"
}
# Function to generate MD5 hash for duplicate detection
get_file_hash() {
local filepath="$1"
md5sum "$filepath" 2>/dev/null | cut -d' ' -f1
}
# Function to create organized directory structure
create_directory_structure() {
local base_dir="$1"
local year="$2"
local month="$3"
local category="$4"
local target_dir="$base_dir/$category/$year/$month"
mkdir -p "$target_dir"
echo "$target_dir"
}
# Function to handle file size categorization
get_size_category() {
local file_size="$1"
if [ "$file_size" -lt 1048576 ]; then # Less than 1MB
echo "small"
elif [ "$file_size" -lt 104857600 ]; then # Less than 100MB
echo "medium"
else
echo "large"
fi
}
# Function to organize a single file
organize_file() {
local source_file="$1"
local dest_base="$2"
local filename=$(basename "$source_file")
local extension=$(get_extension "$filename")
local category=$(get_file_category "$extension")
# Get file date information
local file_date=$(stat -c %Y "$source_file")
local year=$(date -d "@$file_date" +%Y)
local month=$(date -d "@$file_date" +%m-%B)
# Get file size
local file_size=$(stat -c %s "$source_file")
local size_category=$(get_size_category "$file_size")
# Create target directory
local target_dir=$(create_directory_structure "$dest_base" "$year" "$month" "$category")
local target_file="$target_dir/$filename"
# Check for duplicates
if [ -f "$target_file" ]; then
local source_hash=$(get_file_hash "$source_file")
local target_hash=$(get_file_hash "$target_file")
if [ "$source_hash" = "$target_hash" ]; then
log_message "Duplicate found: $filename (skipping)"
return 0
else
# Rename to avoid conflict
local counter=1
local name_without_ext="${filename%.*}"
local new_target="$target_dir/${name_without_ext}_${counter}.${extension}"
while [ -f "$new_target" ]; do
((counter++))
new_target="$target_dir/${name_without_ext}_${counter}.${extension}"
done
target_file="$new_target"
fi
fi
# Move the file
if mv "$source_file" "$target_file"; then
log_message "Moved: $filename -> $category/$year/$month/ (${size_category})"
return 0
else
log_message "ERROR: Failed to move $filename"
return 1
fi
}
# Main function
main() {
local source_dir="${1:-$DEFAULT_SOURCE}"
local dest_dir="${2:-$DEFAULT_DEST}"
# Validate directories
if [ ! -d "$source_dir" ]; then
echo "Error: Source directory '$source_dir' does not exist"
exit 1
fi
# Create destination directory if it doesn't exist
mkdir -p "$dest_dir"
log_message "Starting file organization"
log_message "Source: $source_dir"
log_message "Destination: $dest_dir"
local file_count=0
local success_count=0
# Process all files in source directory
while IFS= read -r -d '' file; do
if [ -f "$file" ]; then
((file_count++))
if organize_file "$file" "$dest_dir"; then
((success_count++))
fi
fi
done < <(find "$source_dir" -maxdepth 1 -type f -print0)
log_message "Organization complete: $success_count/$file_count files processed"
# Generate summary report
echo "=== File Organization Summary ==="
echo "Total files processed: $file_count"
echo "Successfully organized: $success_count"
echo "Failed: $((file_count - success_count))"
echo "Log file: $LOG_FILE"
}
# Script execution
if [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
echo "Usage: $0 [source_directory] [destination_directory]"
echo "Organizes files by type, date, and detects duplicates"
echo "Default source: $DEFAULT_SOURCE"
echo "Default destination: $DEFAULT_DEST"
exit 0
fi
main "$@"
Command Explanation:
- stat -c %Y: Gets file modification time in seconds since epoch
- stat -c %s: Gets file size in bytes
- find ... -print0: Uses null delimiter for safe filename handling
- md5sum: Generates MD5 hash for duplicate detection
- declare -A: Creates associative array in bash
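To see how the category/year/month path is built for a single file, the same stat and date calls can be run by hand; example.pdf is just a placeholder name, and a .pdf maps to the documents category:
# Derive the year and month components the organizer uses for one file
mtime=$(stat -c %Y example.pdf)
echo "Destination subpath: documents/$(date -d "@$mtime" +%Y)/$(date -d "@$mtime" +%m-%B)/"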
Notes:
- Run with ./organize_files.sh /path/to/messy/folder /path/to/organized/folder
- The script preserves original file timestamps
- Duplicate detection prevents data loss
- Directory structure: category/year/month/filename
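The associative-array lookup in get_file_category can also be exercised on its own, which is handy when adding new categories or extensions. A small standalone sketch:
#!/bin/bash
# Standalone test of the extension-to-category lookup used by the organizer
declare -A FILE_TYPES=(
["images"]="jpg jpeg png gif"
["documents"]="pdf doc docx txt"
)
ext="pdf"
category_found="miscellaneous"
for category in "${!FILE_TYPES[@]}"; do
if [[ " ${FILE_TYPES[$category]} " =~ " $ext " ]]; then
category_found="$category"
break
fi
done
echo "$ext -> $category_found"   # prints: pdf -> documents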
Log File Analyzer and Cleaner
System logs accumulate rapidly and require regular maintenance. This script provides intelligent log analysis with automated cleanup capabilities.
#!/bin/bash
# Log File Analyzer and Cleaner
# Description: Analyzes log files for patterns and performs intelligent cleanup
# Usage: ./log_analyzer.sh [log_directory]
# Configuration
DEFAULT_LOG_DIR="/var/log"
ANALYSIS_REPORT="/tmp/log_analysis_$(date +%Y%m%d).txt"
RETENTION_DAYS=30
COMPRESS_DAYS=7
MAX_SIZE_MB=100
# Error patterns to search for
ERROR_PATTERNS=(
"ERROR"
"CRITICAL"
"FATAL"
"Failed"
"Exception"
"Timeout"
"Connection refused"
"Permission denied"
)
# Function to convert bytes to human readable format
bytes_to_human() {
local bytes=$1
local units=("B" "KB" "MB" "GB" "TB")
local unit=0
while [ $bytes -gt 1024 ] && [ $unit -lt 4 ]; do
bytes=$((bytes / 1024))
((unit++))
done
echo "${bytes}${units[$unit]}"
}
# Function to analyze log file for patterns
analyze_log_file() {
local log_file="$1"
echo "=== Analysis for $log_file ===" >> "$ANALYSIS_REPORT"
echo "File size: $(bytes_to_human $(stat -c %s "$log_file"))" >> "$ANALYSIS_REPORT"
echo "Last modified: $(stat -c %y "$log_file")" >> "$ANALYSIS_REPORT"
echo "Line count: $(wc -l < "$log_file")" >> "$ANALYSIS_REPORT"
echo "" >> "$ANALYSIS_REPORT"
# Search for error patterns
for pattern in "${ERROR_PATTERNS[@]}"; do
local count=$(grep -ci "$pattern" "$log_file" 2>/dev/null)
count=${count:-0}   # grep exits non-zero when nothing matches or the file is unreadable; treat an empty result as zero
if [ "$count" -gt 0 ]; then
echo "$pattern occurrences: $count" >> "$ANALYSIS_REPORT"
# Show recent occurrences
echo "Recent $pattern entries:" >> "$ANALYSIS_REPORT"
grep -i "$pattern" "$log_file" | tail -5 >> "$ANALYSIS_REPORT"
echo "" >> "$ANALYSIS_REPORT"
fi
done
# Top IP addresses (for access logs)
if [[ "$log_file" =~ access\.log ]]; then
echo "Top 10 IP addresses:" >> "$ANALYSIS_REPORT"
awk '{print $1}' "$log_file" | sort | uniq -c | sort -nr | head -10 >> "$ANALYSIS_REPORT"
echo "" >> "$ANALYSIS_REPORT"
fi
echo "----------------------------------------" >> "$ANALYSIS_REPORT"
}
# Function to compress old log files
compress_old_logs() {
local log_dir="$1"
local compressed_count=0
echo "Compressing log files older than $COMPRESS_DAYS days..."
while IFS= read -r -d '' log_file; do
if [[ "$log_file" != *.gz ]] && [[ "$log_file" != *.bz2 ]]; then
echo "Compressing: $(basename "$log_file")"
gzip "$log_file"
((compressed_count++))
fi
done < <(find "$log_dir" -name "*.log" -mtime +$COMPRESS_DAYS -print0)
echo "Compressed $compressed_count log files"
}
# Function to remove very old log files
cleanup_old_logs() {
local log_dir="$1"
local removed_count=0
echo "Removing log files older than $RETENTION_DAYS days..."
# Remove old compressed logs
while IFS= read -r -d '' old_log; do
echo "Removing: $(basename "$old_log")"
rm "$old_log"
((removed_count++))
done < <(find "$log_dir" -name "*.log.gz" -mtime +$RETENTION_DAYS -print0)
# Remove old uncompressed logs
while IFS= read -r -d '' old_log; do
echo "Removing: $(basename "$old_log")"
rm "$old_log"
((removed_count++))
done < <(find "$log_dir" -name "*.log" -mtime +$RETENTION_DAYS -print0)
echo "Removed $removed_count old log files"
}
# Function to handle large log files
handle_large_logs() {
local log_dir="$1"
local large_files_found=0
echo "Checking for oversized log files..."
while IFS= read -r -d '' large_log; do
local file_size_mb=$(($(stat -c %s "$large_log") / 1024 / 1024))
echo "Large file detected: $(basename "$large_log") (${file_size_mb}MB)"
# Truncate to last 1000 lines (note: mv replaces the file's inode; see the copy-truncate caveat after this script for logs that are actively written)
local temp_file="/tmp/$(basename "$large_log").tmp"
tail -1000 "$large_log" > "$temp_file"
mv "$temp_file" "$large_log"
echo "Truncated $(basename "$large_log") to last 1000 lines"
((large_files_found++))
done < <(find "$log_dir" -name "*.log" -size +${MAX_SIZE_MB}M -print0)
if [ $large_files_found -eq 0 ]; then
echo "No oversized log files found"
fi
}
# Main function
main() {
local log_dir="${1:-$DEFAULT_LOG_DIR}"
if [ ! -d "$log_dir" ]; then
echo "Error: Log directory '$log_dir' does not exist"
exit 1
fi
if [ ! -r "$log_dir" ]; then
echo "Error: No read permission for '$log_dir'"
exit 1
fi
echo "Starting log analysis and cleanup for: $log_dir"
echo "Analysis report will be saved to: $ANALYSIS_REPORT"
# Initialize analysis report
echo "Log Analysis Report - $(date)" > "$ANALYSIS_REPORT"
echo "=======================================" >> "$ANALYSIS_REPORT"
echo "" >> "$ANALYSIS_REPORT"
# Analyze current log files
echo "Analyzing log files..."
local analyzed_count=0
while IFS= read -r -d '' log_file; do
if [[ "$log_file" == *.log ]] && [ -r "$log_file" ]; then
analyze_log_file "$log_file"
((analyzed_count++))
fi
done < <(find "$log_dir" -maxdepth 2 -type f -print0)
echo "Analyzed $analyzed_count log files"
# Perform maintenance tasks
compress_old_logs "$log_dir"
handle_large_logs "$log_dir"
cleanup_old_logs "$log_dir"
# Generate summary
echo "" >> "$ANALYSIS_REPORT"
echo "=== SUMMARY ===" >> "$ANALYSIS_REPORT"
echo "Analysis completed: $(date)" >> "$ANALYSIS_REPORT"
echo "Total log files analyzed: $analyzed_count" >> "$ANALYSIS_REPORT"
echo "Log analysis and cleanup completed!"
echo "Review the analysis report: $ANALYSIS_REPORT"
}
# Check for help flag
if [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
echo "Usage: $0 [log_directory]"
echo "Analyzes log files for errors and performs cleanup"
echo "Default directory: $DEFAULT_LOG_DIR"
echo ""
echo "Actions performed:"
echo " - Analyze logs for error patterns"
echo " - Compress logs older than $COMPRESS_DAYS days"
echo " - Remove logs older than $RETENTION_DAYS days"
echo " - Truncate logs larger than ${MAX_SIZE_MB}MB"
exit 0
fi
# Execute main function
main "$@"
Command Explanation:
- grep -ci: Case-insensitive count of matching lines
- awk '{print $1}': Extracts first field (typically IP addresses)
- uniq -c: Counts unique occurrences
- sort -nr: Sorts numerically in reverse order
- tail -1000: Shows last 1000 lines of file
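The access-log pipeline shown above is also useful on its own when you just want a quick view of traffic sources; the path below is only an example:
# Top 10 client IP addresses from a web server access log
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | head -10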
Notes:
- Requires appropriate permissions to read/modify log files
- Run as root or with sudo for system log directories
- Consider scheduling via cron for regular maintenance
- Customize ERROR_PATTERNS array for specific applications
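One caveat worth noting about handle_large_logs: replacing a log with mv gives the file a new inode, so a daemon that still holds the old file open will keep writing to the deleted copy. For logs that are actively written, a copy-then-truncate approach (the same idea behind logrotate's copytruncate option) is safer. A minimal sketch, with a hypothetical log path:
# Keep the last 1000 lines, then truncate the original in place so open file handles remain valid
tail -1000 /var/log/example_app.log > /tmp/example_app.tail
cat /tmp/example_app.tail > /var/log/example_app.log
rm /tmp/example_app.tail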
Conclusion
The scripts presented in this appendix represent more than mere automation tools; they embody the philosophy of efficient system administration through intelligent scripting. Each script demonstrates key principles that extend far beyond their specific functions: error handling, logging, user notification, and graceful degradation when faced with unexpected conditions.
As you implement and modify these scripts for your own environment, remember that the true power lies not in using them exactly as written, but in understanding their structure and adapting their techniques to solve your unique challenges. The patterns demonstrated here—configuration management through variables, comprehensive logging, modular function design, and robust error handling—form the foundation of professional-grade automation.
These scripts serve as stepping stones toward more sophisticated automation frameworks. As your needs grow, you might find yourself integrating these concepts with configuration management tools, monitoring systems, or cloud orchestration platforms. The fundamental principles remain constant: clarity of purpose, reliability of execution, and maintainability of code.
Whether you're a system administrator seeking to streamline daily operations, a developer looking to automate deployment processes, or a power user wanting to organize digital life more efficiently, these scripts provide both immediate utility and educational value. Take them, modify them, improve them, and most importantly, let them inspire you to see the endless possibilities that await in the world of Linux automation.
The journey from manual task execution to elegant automation is one of the most rewarding aspects of Linux mastery. With these tools in your arsenal, you're well-equipped to transform repetitive drudgery into efficient, reliable, and maintainable automated processes that will serve you well throughout your Linux journey.