Chapter 11: Control Structures in Bash
Introduction to Control Structures
In the dimly lit server room, the soft hum of cooling fans creates a rhythmic backdrop as Sarah, a seasoned system administrator, stares at her terminal screen. The cursor blinks expectantly, waiting for her next command. She's been tasked with automating a complex deployment process that requires different actions based on various conditions—file existence, system resources, and user permissions. This is where the true power of Bash scripting reveals itself through control structures.
Control structures in Bash are the architectural framework that transforms simple command sequences into intelligent, decision-making programs. They provide the logic that allows scripts to branch, loop, and make decisions based on conditions, much like the neural pathways in a programmer's mind when solving complex problems. Without these fundamental constructs, Bash scripts would be nothing more than linear sequences of commands, unable to adapt to changing circumstances or handle the dynamic nature of real-world system administration tasks.
The beauty of Bash control structures lies in their simplicity and power. They mirror the logical thinking processes we use daily: "If this condition is true, do this; otherwise, do that." This intuitive approach makes Bash scripting accessible to both beginners and experienced programmers, while providing the flexibility needed for sophisticated automation tasks.
Conditional Statements
The if Statement
The if statement stands as the cornerstone of conditional logic in Bash, much like a traffic controller directing the flow of execution based on specific conditions. Its syntax follows a clean, readable pattern that reflects natural language logic:
if [ condition ]; then
    # commands to execute if condition is true
fi
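The single brackets are actually the classic test command (which is why the spaces around them are mandatory); Bash also provides the [[ ]] keyword, which is more forgiving with unquoted variables and supports glob-style patterns, and it appears in several later examples in this chapter. A minimal sketch contrasting the two forms, using a hypothetical variable:
# Sketch: [ ] is the classic test command; [[ ]] is a Bash keyword
name="backup file.txt"
if [ -n "$name" ]; then        # quotes are required with [ ] so the space survives
    echo "non-empty (test command)"
fi
if [[ $name == *.txt ]]; then  # [[ ]] tolerates the unquoted variable and allows glob patterns
    echo "looks like a text file ([[ ]] keyword)"
fi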
Let's examine a practical example that demonstrates the elegance of conditional logic:
#!/bin/bash
# Check if a file exists and display appropriate message
filename="important_config.txt"
if [ -f "$filename" ]; then
echo "Configuration file found: $filename"
echo "File size: $(stat -c%s "$filename") bytes"
echo "Last modified: $(stat -c%y "$filename")"
else
echo "Warning: Configuration file $filename not found!"
echo "Creating default configuration..."
touch "$filename"
echo "# Default configuration" > "$filename"
fi
Command Explanation:
- [ -f "$filename" ]: Tests if the file exists and is a regular file
- stat -c%s: Displays file size in bytes
- stat -c%y: Shows the last modification time
- touch "$filename": Creates an empty file if it doesn't exist
The elif Statement
When dealing with multiple conditions, the elif (else if) statement provides a clean way to chain conditional logic without creating deeply nested structures:
#!/bin/bash
# System resource monitoring script
# Note: the format of top's "Cpu(s)" line varies between procps versions; adjust the field extraction if your output differs
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
memory_usage=$(free | grep Mem | awk '{printf "%.2f", $3/$2 * 100.0}')
echo "System Resource Monitor"
echo "======================"
echo "CPU Usage: ${cpu_usage}%"
echo "Memory Usage: ${memory_usage}%"
if (( $(echo "$cpu_usage > 80" | bc -l) )); then
echo "CRITICAL: CPU usage is critically high!"
echo "Initiating emergency procedures..."
# Kill non-essential processes
killall -9 unnecessary_process 2>/dev/null
elif (( $(echo "$cpu_usage > 60" | bc -l) )); then
echo "WARNING: CPU usage is high"
echo "Consider closing unnecessary applications"
elif (( $(echo "$cpu_usage > 40" | bc -l) )); then
echo "NOTICE: CPU usage is moderate"
else
echo "INFO: CPU usage is normal"
fi
Command Explanation:
- top -bn1: Runs top in batch mode for one iteration
- bc -l: Invokes the bc calculator with its standard math library loaded; used here to evaluate the floating-point comparisons
- free: Displays memory usage information
- killall -9: Forcefully terminates processes by name
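The bc pipeline is only needed because the CPU figure contains a decimal point. When the values being compared are whole numbers, Bash's built-in arithmetic evaluation handles the comparison directly, with no external process. A minimal sketch, using root filesystem usage (an integer percentage) as the hypothetical metric:
# Sketch: integer thresholds compared with built-in arithmetic (no bc required)
disk_usage=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')  # percent used on /, as an integer
if (( disk_usage > 90 )); then
    echo "CRITICAL: root filesystem almost full (${disk_usage}%)"
elif (( disk_usage > 75 )); then
    echo "WARNING: root filesystem filling up (${disk_usage}%)"
else
    echo "INFO: root filesystem usage is normal (${disk_usage}%)"
fi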
Case Statements
The case statement provides an elegant alternative to multiple if-elif chains when dealing with pattern matching and multiple discrete values:
#!/bin/bash
# Service management script
service_name="$1"
action="$2"
case "$action" in
start)
echo "Starting service: $service_name"
systemctl start "$service_name"
if [ $? -eq 0 ]; then
echo "✓ Service $service_name started successfully"
else
echo "✗ Failed to start service $service_name"
fi
;;
stop)
echo "Stopping service: $service_name"
systemctl stop "$service_name"
echo "✓ Service $service_name stopped"
;;
restart)
echo "Restarting service: $service_name"
systemctl restart "$service_name"
echo "✓ Service $service_name restarted"
;;
status)
echo "Checking status of service: $service_name"
systemctl status "$service_name"
;;
*)
echo "Usage: $0 <service_name> {start|stop|restart|status}"
echo "Example: $0 nginx start"
exit 1
;;
esac
Command Explanation:
- systemctl: Controls systemd services
- $?: Contains the exit status of the last command
- *): Default case that matches any pattern not previously matched
- ;;: Terminates each case branch
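Each case branch label is a glob pattern rather than a fixed string, and alternatives can be combined with the | character. A small sketch of the pattern syntax, using a hypothetical confirmation prompt:
# Sketch: case patterns support glob character classes and | alternation
read -r -p "Proceed with deployment? " answer
case "$answer" in
    [Yy]|[Yy][Ee][Ss])
        echo "Proceeding..."
        ;;
    [Nn]|[Nn][Oo])
        echo "Aborted by user"
        exit 1
        ;;
    *)
        echo "Please answer yes or no"
        ;;
esac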
Loops
For Loops
The for loop in Bash provides multiple syntactic forms, each suited to different scenarios. The traditional form iterates over lists of items:
#!/bin/bash
# Backup script for multiple directories
backup_dirs=("/home/user/documents" "/home/user/pictures" "/var/www/html" "/etc/nginx")
backup_destination="/backup/$(date +%Y%m%d)"
echo "Starting backup process..."
echo "Backup destination: $backup_destination"
# Create backup directory
mkdir -p "$backup_destination"
for dir in "${backup_dirs[@]}"; do
    if [ -d "$dir" ]; then
        echo "Backing up: $dir"
        dir_name=$(basename "$dir")
        tar -czf "$backup_destination/${dir_name}_backup.tar.gz" -C "$(dirname "$dir")" "$(basename "$dir")"
        if [ $? -eq 0 ]; then
            echo "✓ Successfully backed up $dir"
        else
            echo "✗ Failed to backup $dir"
        fi
    else
        echo "⚠ Directory not found: $dir"
    fi
done
echo "Backup process completed!"
The C-style for loop provides precise control over iteration:
#!/bin/bash
# Generate sequential configuration files
echo "Generating server configuration files..."
for ((i=1; i<=5; i++)); do
    config_file="server_${i}.conf"
    port=$((8080 + i))
    cat > "$config_file" << EOF
# Server Configuration $i
server {
    listen $port;
    server_name server${i}.example.com;
    root /var/www/server${i};
    location / {
        try_files \$uri \$uri/ =404;
    }
}
EOF
    echo "Created: $config_file (Port: $port)"
done
Command Explanation:
- "${backup_dirs[@]}": Expands array elements as separate words
- basename: Extracts the filename from a path
- dirname: Extracts the directory path
- tar -czf: Creates a compressed gzip archive
- << EOF: Here document for multi-line input
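For simple numeric ranges, brace expansion offers a more compact alternative to the C-style counter; a brief sketch (the file names are purely illustrative):
# Sketch: brace expansion as an alternative to the C-style counter loop
for i in {1..5}; do
    echo "Would generate server_${i}.conf on port $((8080 + i))"
done
# Ranges can also take a step or zero padding, e.g. {0..100..10} or {01..12}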
While Loops
The while loop continues execution as long as a condition remains true, making it perfect for monitoring tasks and processing streams of data:
#!/bin/bash
# Log monitoring script
log_file="/var/log/application.log"
error_count=0
max_errors=10
echo "Starting log monitor for: $log_file"
echo "Maximum allowed errors: $max_errors"
tail -f "$log_file" | while read -r line; do
    timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    if [[ "$line" =~ ERROR ]]; then
        error_count=$((error_count + 1))
        echo "[$timestamp] ERROR DETECTED (#$error_count): $line"
        if [ $error_count -ge $max_errors ]; then
            echo "[$timestamp] CRITICAL: Maximum error threshold reached!"
            echo "[$timestamp] Sending alert notification..."
            # Send notification (example using mail command)
            echo "Critical error threshold reached in $log_file" | \
                mail -s "Application Alert" admin@example.com
            break
        fi
    elif [[ "$line" =~ WARNING ]]; then
        echo "[$timestamp] WARNING: $line"
    fi
done
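One subtlety in the script above: because the while loop sits on the right-hand side of a pipeline, Bash runs it in a subshell, so variables it modifies (such as error_count) vanish when the loop ends. If the counters are needed afterwards, process substitution keeps the loop in the current shell. A minimal sketch reusing the same log file path, reading a fixed number of lines so the loop terminates:
# Sketch: read from tail via process substitution so the loop runs in the current shell
log_file="/var/log/application.log"
error_count=0
while read -r line; do
    if [[ "$line" =~ ERROR ]]; then
        error_count=$((error_count + 1))
    fi
done < <(tail -n 100 "$log_file")   # process substitution instead of a pipeline
echo "Errors seen in the last 100 lines: $error_count"   # the counter survives the loop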
Until Loops
The until loop runs its body until the condition becomes true (that is, for as long as the condition remains false), which reads naturally for "keep trying until this succeeds" situations:
#!/bin/bash
# Wait for service to become available
service_url="http://localhost:8080/health"
max_attempts=30
attempt=1
echo "Waiting for service to become available..."
echo "Service URL: $service_url"
until curl -s "$service_url" > /dev/null 2>&1; do
    echo "Attempt $attempt/$max_attempts: Service not ready..."
    if [ $attempt -ge $max_attempts ]; then
        echo "ERROR: Service failed to start within expected time"
        echo "Please check service logs for details"
        exit 1
    fi
    sleep 5
    attempt=$((attempt + 1))
done
echo "✓ Service is now available!"
echo "Service responded successfully after $attempt attempts"
Command Explanation:
- tail -f: Follows file changes in real-time
- [[ "$line" =~ ERROR ]]: Pattern matching using regular expressions
- curl -s: Silent mode, suppresses progress output
- > /dev/null 2>&1: Redirects both stdout and stderr to null device
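It is worth noting that until and while are mirror images: until CONDITION behaves exactly like while ! CONDITION, so the choice is purely about which reads more naturally. A trivial illustration:
# Sketch: until vs. the equivalent negated while
counter=1
until [ $counter -gt 3 ]; do
    echo "until iteration $counter"
    counter=$((counter + 1))
done
counter=1
while ! [ $counter -gt 3 ]; do
    echo "while ! iteration $counter"
    counter=$((counter + 1))
done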
Loop Control
Break and Continue
Loop control statements provide fine-grained control over loop execution, allowing scripts to respond dynamically to changing conditions:
#!/bin/bash
# Process files with error handling and recovery
# Define the helper function first, so it exists before the loop calls it
process_file() {
    local file="$1"
    # Simulate processing time
    sleep 1
    # Simulate random failures (20% chance)
    if [ $((RANDOM % 5)) -eq 0 ]; then
        return 1 # Failure
    fi
    return 0 # Success
}
files_to_process=(*.txt *.log *.conf)
processed_count=0
error_count=0
max_errors=3
for file in "${files_to_process[@]}"; do
    # Skip if file doesn't exist (glob didn't match)
    if [[ "$file" == "*.txt" || "$file" == "*.log" || "$file" == "*.conf" ]]; then
        continue
    fi
    echo "Processing: $file"
    # Simulate processing with potential errors
    if ! process_file "$file"; then
        error_count=$((error_count + 1))
        echo "✗ Error processing $file (Error #$error_count)"
        if [ $error_count -ge $max_errors ]; then
            echo "CRITICAL: Too many errors encountered!"
            echo "Aborting processing to prevent system damage"
            break
        fi
        continue # Skip to next file
    fi
    processed_count=$((processed_count + 1))
    echo "✓ Successfully processed $file"
done
echo "Processing Summary:"
echo "Files processed: $processed_count"
echo "Errors encountered: $error_count"
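The literal-pattern comparison used above to skip unmatched globs works because Bash leaves a pattern unexpanded when nothing matches it. An arguably cleaner approach is the nullglob shell option, which makes unmatched patterns expand to nothing; a short sketch:
# Sketch: nullglob removes unmatched globs instead of leaving them as literal strings
shopt -s nullglob
files_to_process=(*.txt *.log *.conf)
shopt -u nullglob
if [ ${#files_to_process[@]} -eq 0 ]; then
    echo "No matching files found"
else
    for file in "${files_to_process[@]}"; do
        echo "Would process: $file"
    done
fi
With nullglob enabled, the explicit literal-string check inside the loop becomes unnecessary.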
Nested Control Structures
Real-world scripts often require complex logic that combines multiple control structures. Here's an advanced example that demonstrates nested loops and conditions:
#!/bin/bash
# Advanced server deployment script
# Function to simulate service deployment (defined before the loop that calls it)
deploy_service() {
    local server="$1"
    local service="$2"
    # Simulate deployment time
    sleep $((RANDOM % 3 + 1))
    # Simulate success/failure (80% success rate)
    if [ $((RANDOM % 10)) -lt 8 ]; then
        return 0 # Success
    else
        return 1 # Failure
    fi
}
servers=("web-01" "web-02" "db-01" "cache-01")
services=("nginx" "mysql" "redis" "monitoring")
deployment_stages=("pre-check" "deploy" "verify" "cleanup")
total_operations=0
successful_operations=0
echo "=== Advanced Deployment Script ==="
echo "Servers: ${servers[*]}"
echo "Services: ${services[*]}"
echo "Stages: ${deployment_stages[*]}"
echo "======================================"
for server in "${servers[@]}"; do
    echo ""
    echo "🖥️ Processing Server: $server"
    echo "================================="
    # Check server connectivity
    if ! ping -c 1 "$server" > /dev/null 2>&1; then
        echo "❌ Server $server is unreachable. Skipping..."
        continue
    fi
    for stage in "${deployment_stages[@]}"; do
        echo ""
        echo "📋 Stage: $stage on $server"
        case "$stage" in
            "pre-check")
                echo "   Performing pre-deployment checks..."
                # Check disk space, memory, etc.
                ;;
            "deploy")
                echo "   Deploying services..."
                for service in "${services[@]}"; do
                    total_operations=$((total_operations + 1))
                    echo "   📦 Deploying $service..."
                    # Simulate deployment logic
                    if deploy_service "$server" "$service"; then
                        successful_operations=$((successful_operations + 1))
                        echo "   ✅ $service deployed successfully"
                    else
                        echo "   ❌ Failed to deploy $service"
                        # Critical services require immediate attention
                        if [[ "$service" == "nginx" || "$service" == "mysql" ]]; then
                            echo "   🚨 Critical service failure! Aborting server deployment."
                            break 2 # Break out of both the service and stage loops
                        fi
                    fi
                done
                ;;
            "verify")
                echo "   Verifying deployment..."
                sleep 2
                echo "   ✅ Verification completed"
                ;;
            "cleanup")
                echo "   Cleaning up temporary files..."
                sleep 1
                echo "   ✅ Cleanup completed"
                ;;
        esac
    done
    echo "✅ Server $server processing completed"
done
echo ""
echo "=== Deployment Summary ==="
echo "Total operations: $total_operations"
echo "Successful operations: $successful_operations"
# Guard against division by zero when no server was reachable
if [ $total_operations -gt 0 ]; then
    echo "Success rate: $(( (successful_operations * 100) / total_operations ))%"
else
    echo "Success rate: n/a (no operations attempted)"
fi
Advanced Command Explanations:
- break 2: Breaks out of two nested loops
- ping -c 1: Sends one ping packet to test connectivity
- $((RANDOM % 10)): Generates a random integer between 0 and 9
- ${servers[*]}: Expands the array to a single word, with elements joined by spaces (the first character of IFS)
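The numeric argument accepted by break (and by continue) counts enclosing loops outward from the innermost one. A minimal illustration of the difference it makes:
# Sketch: break N exits N levels of enclosing loops
for outer in 1 2; do
    for inner in a b c; do
        if [ "$inner" = "b" ]; then
            break 2 # exits both the inner and the outer loop
        fi
        echo "outer=$outer inner=$inner"
    done
done
echo "done" # output: "outer=1 inner=a" followed by "done"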
Best Practices and Notes
Performance Considerations
When working with control structures, consider these performance optimization techniques:
# Efficient file processing
while IFS= read -r line; do
    # Process line
done < "$filename"
# Instead of:
# for line in $(cat "$filename"); do  # Reads the whole file at once and splits on whitespace, not line by line
Error Handling Integration
Always integrate proper error handling within control structures:
set -euo pipefail # Exit on error, undefined variables, pipe failures
if ! command_that_might_fail; then
echo "Error: Command failed" >&2
exit 1
fi
Code Readability
Maintain code readability by using consistent indentation and meaningful variable names:
# Good practice
for server_name in "${production_servers[@]}"; do
    if check_server_health "$server_name"; then
        deploy_application "$server_name"
    fi
done
# Avoid
for s in "${ps[@]}"; do
    if chk "$s"; then
        dep "$s"
    fi
done
Conclusion
Control structures in Bash transform simple command sequences into intelligent, adaptive programs capable of making decisions and handling complex logic flows. From the fundamental if statements that guide program execution to sophisticated nested loops that process multiple data sets, these constructs provide the foundation for robust system automation.
The journey through conditional statements, loops, and their combinations reveals the true power of shell scripting. Like a skilled conductor orchestrating a symphony, a well-crafted Bash script uses control structures to coordinate system operations, respond to changing conditions, and ensure reliable execution of complex tasks.
As Sarah completes her deployment automation script, the server room's gentle hum seems to sync with the rhythmic execution of her carefully crafted loops and conditions. Each control structure serves its purpose, creating a harmonious automation system that can adapt to the dynamic nature of modern infrastructure management. The mastery of these fundamental constructs opens the door to advanced scripting techniques and sophisticated system administration solutions.
Remember that effective use of control structures requires not just understanding their syntax, but also developing the logical thinking skills to design efficient, maintainable, and robust automation solutions. Practice with real-world scenarios, experiment with different approaches, and always prioritize code clarity and error handling in your implementations.