
Linux Command Line Essentials

April 1, 2026 · Wasil Zafar · 60 min read

Master the Linux terminal from zero to sysadmin — file system navigation, permissions and ownership, piping and redirection, process management, shell scripting, cron scheduling, networking commands, and security hardening for production servers.

Table of Contents

  1. History of Unix & Linux CLI
  2. File System Navigation
  3. File Operations
  4. Permissions & Ownership
  5. Pipes, Redirection & Text Processing
  6. Process Management
  7. Shell Scripting
  8. Cron & Task Scheduling
  9. Networking Commands
  10. Security Hardening
  11. Case Studies
  12. Exercises
  13. Linux Command Reference Generator
  14. Conclusion & Resources

History of Unix & Linux CLI

The command line is the oldest and most powerful interface to a computer. Every graphical tool you use on a server — monitoring dashboards, deployment pipelines, configuration management — is ultimately a wrapper around command-line operations. Mastering the CLI gives you direct, unmediated control over every aspect of a system.

The story begins at Bell Labs in 1969, when Ken Thompson and Dennis Ritchie created Unix. Thompson wrote the first Unix shell (the Thompson shell, sh) as a simple command interpreter. In 1979, Stephen Bourne at Bell Labs wrote the Bourne shell (/bin/sh), which introduced control structures like if, for, and while — turning the command line into a programming environment.

Think of the shell as a translator sitting between you and the operating system. You speak in commands (English-like sentences), and the shell translates them into system calls that the kernel executes. Different shells are like different translators: they all talk to the same kernel, but they have different vocabularies and syntactic preferences.
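You can check which translator you are currently speaking to. The commands below are standard on Linux systems:

```shell
# Which shell is interpreting your commands right now?
echo "$SHELL"          # your login shell, e.g. /bin/bash
ps -p $$ -o comm=      # the process actually running this session

# Which shells are installed and allowed as login shells?
cat /etc/shells

# Change your login shell (takes effect at next login)
# chsh -s /bin/zsh
```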

| Year | Shell | Creator | Key Innovation |
|------|-------|---------|----------------|
| 1971 | Thompson shell | Ken Thompson | First Unix shell; pipes added in 1973 |
| 1978 | C shell (csh) | Bill Joy | C-like syntax, history, aliases, job control |
| 1979 | Bourne shell (sh) | Stephen Bourne | Variables, control structures, here-documents |
| 1983 | Korn shell (ksh) | David Korn | Combined best of sh and csh; associative arrays |
| 1989 | Bash | Brian Fox (GNU) | Free replacement for sh; tab completion, command history |
| 1990 | Zsh | Paul Falstad | Advanced globbing, spelling correction, themes (Oh My Zsh) |
| 2005 | Fish | Axel Liljencrantz | Auto-suggestions, web-based config, sane defaults |

In 1991, Linus Torvalds released the Linux kernel, and it was paired with GNU tools (including Bash) to create a complete free operating system. Today, Linux powers 96.3% of the world's top 1 million web servers (W3Techs, 2024), 100% of the top 500 supercomputers, and the vast majority of cloud infrastructure (AWS, Google Cloud, Azure all default to Linux instances).

Key Insight: Bash is the default shell on most Linux distributions and was the macOS default until Catalina (2019), which switched to Zsh. When you see "shell scripting" without further qualification, it almost always means Bash. Learning Bash is not optional for anyone who works with servers, containers, CI/CD pipelines, or cloud infrastructure.

File System Navigation

The Linux file system is a single tree rooted at /. Unlike Windows, there are no drive letters (C:, D:). Everything — hard drives, USB sticks, network shares, even hardware devices — appears as a file or directory somewhere under /.

Think of the Linux file system like a building. The root (/) is the lobby. /home is the residential floor where each user has an apartment. /etc is the management office where all configuration files live. /var is the mailroom where logs and variable data accumulate. /tmp is the break room — anyone can leave things there, but they get cleaned up regularly.
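The "everything is a file" idea is easy to verify from the terminal. These commands are standard on Linux (findmnt comes from util-linux):

```shell
# Devices, kernel state, and disks all live in the same tree
ls -l /dev/null        # a character device, presented as a file
cat /proc/version      # kernel version, readable like a text file
df -h /                # the disk behind the root of the tree
findmnt /              # which device is mounted at / (util-linux)
```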

# Print current directory
pwd
# /home/wasil

# List files (basic)
ls

# List files with details (permissions, owner, size, date)
ls -la

# List files sorted by modification time (newest first)
ls -lt

# List files with human-readable sizes
ls -lh

# Change directory
cd /var/log
cd ~              # Go to home directory
cd -              # Go to previous directory
cd ..             # Go up one level
cd ../..          # Go up two levels

# Show directory tree structure
tree -L 2 /etc
# Shows 2 levels deep under /etc

# Find files by name
find /home -name "*.log" -type f

# Find files modified in the last 24 hours
find /var/log -mtime -1 -type f

# Find files larger than 100MB
find / -size +100M -type f 2>/dev/null

# Find and execute a command on results
find /tmp -name "*.tmp" -mtime +7 -exec rm {} \;

# Locate files using a pre-built index (much faster than find)
locate nginx.conf
# Update the locate database
sudo updatedb

Key Directories

| Directory | Purpose | Examples |
|-----------|---------|----------|
| / | Root of the entire file system | Everything starts here |
| /home | User home directories | /home/wasil, /home/deploy |
| /etc | System-wide configuration files | /etc/nginx/nginx.conf, /etc/ssh/sshd_config |
| /var | Variable data (logs, caches, mail) | /var/log/syslog, /var/www/html |
| /tmp | Temporary files (cleared on reboot) | Build artifacts, session files |
| /usr | User programs and libraries | /usr/bin/python3, /usr/lib |
| /opt | Optional/third-party software | /opt/google/chrome |
| /proc | Virtual filesystem for process/kernel info | /proc/cpuinfo, /proc/meminfo |

File Operations

# Create files
touch newfile.txt                     # Create empty file (or update timestamp)
echo "Hello World" > hello.txt        # Create file with content

# Copy files and directories
cp source.txt destination.txt         # Copy a file
cp -r src/ backup/                    # Copy directory recursively
cp -p file.txt backup/               # Preserve permissions and timestamps

# Move / rename
mv old-name.txt new-name.txt         # Rename
mv file.txt /tmp/                    # Move to another directory
mv *.log /var/archive/               # Move all .log files

# Delete
rm file.txt                          # Delete a file
rm -r directory/                     # Delete directory recursively
rm -rf /tmp/build-*                  # Force delete (no confirmation)

# Create directories
mkdir projects                       # Create single directory
mkdir -p projects/2026/04/src        # Create nested directories

# Create symbolic and hard links
ln -s /etc/nginx/nginx.conf ~/nginx.conf   # Symbolic link (shortcut)
ln original.txt hardlink.txt               # Hard link (same inode)

# Archive and compress
tar -czf archive.tar.gz directory/   # Create gzipped tar archive
tar -xzf archive.tar.gz             # Extract gzipped tar archive
tar -xzf archive.tar.gz -C /opt/    # Extract to specific directory

zip -r backup.zip directory/         # Create zip archive
unzip backup.zip                     # Extract zip archive

# View file contents
cat file.txt                         # Print entire file
head -20 file.txt                    # First 20 lines
tail -50 file.txt                    # Last 50 lines
tail -f /var/log/syslog              # Follow log in real-time
less /var/log/syslog                 # Scrollable pager (q to quit)
wc -l file.txt                      # Count lines

Key Insight: rm -rf is the most dangerous command in Linux. There is no recycle bin, no undo, no confirmation (with the -f flag). A typo like rm -rf / home/wasil (note the space after /) can destroy the entire system. Always double-check your path before pressing Enter. Consider setting alias rm='rm -i' in your .bashrc for interactive confirmation on destructive operations.
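One defensive pattern worth adopting in scripts: Bash's ${VAR:?} expansion aborts with an error if the variable is unset or empty, so a typo or missing assignment can never expand into rm -rf /. BUILD_DIR below is just an illustrative name:

```shell
#!/bin/bash
# ${VAR:?} fails loudly instead of expanding to an empty string
BUILD_DIR="/tmp/build-artifacts"
rm -rf "${BUILD_DIR:?}/"    # refuses to run if BUILD_DIR is unset or empty

# Without the guard, an unset BUILD_DIR silently becomes "rm -rf /":
# rm -rf "$BUILD_DIR/"      # DANGEROUS: never write this form
```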

Permissions & Ownership

Every file and directory in Linux has three sets of permissions (owner, group, others) and three types of access (read, write, execute). Understanding this system is fundamental to Linux security.

# View permissions
ls -la
# drwxr-xr-x 2 wasil developers 4096 Apr  1 10:00 scripts/
# -rw-r--r-- 1 wasil developers  512 Apr  1 09:30 config.yml
# -rwxr-x--- 1 wasil developers 2048 Apr  1 08:00 deploy.sh

# Permission breakdown:  d rwx r-x r-x
#                        |  |   |   |
#                        |  |   |   +-- Others: read + execute
#                        |  |   +------ Group: read + execute
#                        |  +---------- Owner: read + write + execute
#                        +------------- Type: d=directory, -=file, l=symlink

Numeric (Octal) Notation

| Permission | Symbol | Value | Common Combination |
|------------|--------|-------|--------------------|
| Read | r | 4 | 7 (rwx) = 4+2+1 |
| Write | w | 2 | 6 (rw-) = 4+2 |
| Execute | x | 1 | 5 (r-x) = 4+1 |
| None | - | 0 | 0 (---) = 0 |

# chmod — change permissions
chmod 755 deploy.sh          # rwxr-xr-x (owner: all, group+others: read+exec)
chmod 644 config.yml         # rw-r--r-- (owner: read+write, group+others: read)
chmod 600 private-key.pem    # rw------- (owner only)
chmod 700 ~/.ssh             # rwx------ (SSH directory best practice)

# Symbolic notation
chmod u+x script.sh          # Add execute for owner
chmod g-w file.txt            # Remove write for group
chmod o=r file.txt            # Set others to read only
chmod a+r file.txt            # Add read for all (a = all = u+g+o)

# chown — change ownership
chown wasil:developers file.txt           # Change owner and group
chown -R www-data:www-data /var/www/      # Recursive ownership change
chown wasil file.txt                      # Change owner only

# umask — default permissions for new files
umask                        # Show current umask
# 0022 — new files get 666 & ~022 = 644, new directories get 777 & ~022 = 755

umask 0077                   # Restrictive: new files 600, dirs 700
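The arithmetic can be checked directly (stat -c is GNU coreutils; BSD stat uses -f instead):

```shell
# New directories: 777 & ~022 = 755; new files: 666 & ~022 = 644
demo=$(mktemp -d)
( umask 022; mkdir "$demo/d"; touch "$demo/f" )
stat -c '%a' "$demo/d"    # → 755
stat -c '%a' "$demo/f"    # → 644
rm -rf "$demo"
```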

Special Permissions

# Setuid (4xxx) — runs as the file's owner, not the executor
chmod 4755 /usr/bin/passwd   # passwd runs as root to modify /etc/shadow

# Setgid (2xxx) — new files inherit the directory's group
chmod 2775 /shared/projects  # Files created here belong to the projects group

# Sticky bit (1xxx) — only file owner can delete in shared directories
chmod 1777 /tmp              # Anyone can write to /tmp, but can't delete others' files
# This is why /tmp has permissions drwxrwxrwt (note the 't')

Warning: Never set chmod 777 on anything in production. It gives everyone full read, write, and execute access. If you find yourself reaching for 777, stop and figure out which specific user or group needs access, and grant only that. chmod 777 in production is a red flag in any security audit.

Pipes, Redirection & Text Processing

The Unix philosophy is "do one thing and do it well." Individual commands are simple, but when you connect them with pipes, you can build powerful data processing pipelines. Think of it like an assembly line in a factory: each station (command) performs one operation on the product (data) and passes it to the next station.

Redirection Operators

# Standard output redirection
echo "Hello" > file.txt              # Write (overwrite) to file
echo "World" >> file.txt             # Append to file

# Standard error redirection
find / -name "*.conf" 2> errors.log  # Redirect errors to file
find / -name "*.conf" 2>/dev/null    # Discard errors

# Redirect both stdout and stderr
command > output.log 2>&1            # Both to same file
command &> output.log                # Bash shorthand (not POSIX sh)

# Input redirection
sort < unsorted.txt                  # Read input from file
mysql database < dump.sql            # Feed SQL file to MySQL

# Here-document (multi-line input)
cat <<EOF > config.yml
server:
  host: 0.0.0.0
  port: 8080
  debug: false
EOF

Pipes

# Pipe: connect stdout of one command to stdin of the next
# Find the 10 largest files in /var/log
du -ah /var/log | sort -rh | head -10

# Count unique IP addresses in an access log
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Find all running Node.js processes
ps aux | grep node | grep -v grep

# Monitor log file and filter for errors
tail -f /var/log/app.log | grep --line-buffered "ERROR"

Essential Text Processing Tools

# grep — search for patterns
grep "ERROR" /var/log/syslog                   # Find lines containing ERROR
grep -i "warning" /var/log/syslog              # Case-insensitive
grep -r "TODO" src/                            # Recursive search in directory
grep -c "404" /var/log/nginx/access.log        # Count matching lines
grep -n "def main" *.py                        # Show line numbers
grep -v "DEBUG" app.log                        # Invert match (exclude DEBUG)
grep -E "error|warning|critical" app.log       # Extended regex (OR)

# sed — stream editor
sed 's/old/new/g' file.txt                     # Replace all occurrences
sed -i 's/http:/https:/g' config.yml           # In-place edit
sed -n '10,20p' file.txt                       # Print lines 10-20
sed '/^#/d' config.yml                         # Delete comment lines

# awk — column-based processing
awk '{print $1, $4}' /var/log/nginx/access.log # Print columns 1 and 4
awk -F: '{print $1}' /etc/passwd               # Use : as delimiter
awk '$9 == 500 {print $1, $7}' access.log      # Filter by column value
awk '{sum += $10} END {print sum}' access.log  # Sum a column

# cut, sort, uniq
cut -d: -f1,3 /etc/passwd                      # Extract fields 1 and 3
sort -t: -k3 -n /etc/passwd                    # Sort by field 3 numerically
uniq -c                                        # Count consecutive duplicates
sort | uniq -c | sort -rn                      # Count all duplicates, sorted

Key Insight: The combination sort | uniq -c | sort -rn is one of the most useful patterns in Linux. It takes a list of items, counts how many times each appears, and sorts by frequency. Use it to find the most common IP addresses in logs, the most frequent errors, the most active users, or any "top N" analysis on text data.
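A quick way to convince yourself, using printf to fake a log column (the IPs are made up):

```shell
# Rank items by frequency: 10.0.0.1 appears 3 times, so it sorts first
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.1 10.0.0.3 10.0.0.1 10.0.0.2 \
  | sort | uniq -c | sort -rn
# → 3 10.0.0.1
#   2 10.0.0.2
#   1 10.0.0.3   (uniq -c left-pads the counts)
```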

Process Management

# View running processes
ps aux                                # All processes, detailed
ps aux --sort=-%mem | head -20        # Top 20 by memory usage
ps -ef | grep nginx                   # Find nginx processes

# Real-time process monitoring
top                                   # Classic process monitor (q to quit)
htop                                  # Better UI, colors, mouse support

# Process information
pgrep nginx                           # Get PIDs of nginx processes
pidof sshd                            # Get PID of sshd

# Sending signals
kill 1234                             # Send SIGTERM (graceful shutdown)
kill -9 1234                          # Send SIGKILL (force kill)
kill -HUP 1234                        # Send SIGHUP (reload config)
killall nginx                         # Kill all nginx processes
pkill -f "python app.py"             # Kill by command pattern

# Background and foreground jobs
./long-task.sh &                      # Run in background
jobs                                  # List background jobs
fg %1                                 # Bring job 1 to foreground
bg %1                                 # Resume job 1 in background
nohup ./server.sh &                   # Survive terminal close
disown %1                             # Detach job from terminal

# systemd — modern service management
systemctl status nginx                # Check service status
systemctl start nginx                 # Start a service
systemctl stop nginx                  # Stop a service
systemctl restart nginx               # Restart a service
systemctl reload nginx                # Reload config without restart
systemctl enable nginx                # Start on boot
systemctl disable nginx               # Don't start on boot

# View service logs with journalctl
journalctl -u nginx                   # All logs for nginx
journalctl -u nginx --since "1 hour ago"  # Last hour
journalctl -u nginx -f                # Follow live logs
journalctl -p err --since today       # Only errors from today

Understanding Signals

| Signal | Number | Default Action | Use Case |
|--------|--------|----------------|----------|
| SIGHUP | 1 | Terminate | Reload configuration (convention for daemons) |
| SIGINT | 2 | Terminate | Ctrl+C: interrupt from keyboard |
| SIGTERM | 15 | Terminate | Graceful shutdown (default for kill) |
| SIGKILL | 9 | Terminate (cannot be caught) | Force kill: last resort |
| SIGTSTP | 20 | Stop (pause) | Ctrl+Z: suspend a process (SIGSTOP, 19, also pauses but cannot be caught) |
| SIGCONT | 18 | Continue | Resume a stopped process |
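The difference between catchable and uncatchable signals can be demonstrated in one line each, using throwaway bash -c subshells:

```shell
# SIGTERM can be trapped: the handler runs and the process exits cleanly
bash -c 'trap "echo cleaning up; exit 0" TERM; kill -TERM $$; sleep 30'
# → cleaning up   (printed immediately; the sleep never completes)

# SIGKILL cannot be trapped: the process dies with status 128 + 9 = 137
bash -c 'kill -KILL $$'; echo "exit status: $?"
# → exit status: 137
```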

Shell Scripting

A shell script is a text file containing a sequence of commands. It turns repetitive manual tasks into automated, reproducible operations. Think of it as writing a recipe: instead of cooking from memory every time, you write down each step so anyone can follow it exactly.

#!/bin/bash
# deploy.sh — Automated deployment script
# Usage: ./deploy.sh [environment]

set -euo pipefail  # Exit on error, undefined var, pipe failure

# ─── Configuration ─────────────────────────────────────────
readonly APP_NAME="myapp"
readonly DEPLOY_USER="deploy"
readonly LOG_FILE="/var/log/deploy/${APP_NAME}-$(date +%Y%m%d-%H%M%S).log"

# ─── Functions ─────────────────────────────────────────────
log() {
    local level="$1"
    shift
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*" | tee -a "$LOG_FILE"
}

die() {
    log "ERROR" "$@"
    exit 1
}

check_prerequisites() {
    log "INFO" "Checking prerequisites..."
    command -v git >/dev/null 2>&1 || die "git is not installed"
    command -v node >/dev/null 2>&1 || die "node is not installed"
    command -v pm2 >/dev/null 2>&1 || die "pm2 is not installed"
    log "INFO" "All prerequisites met."
}

# ─── Variables ─────────────────────────────────────────────
ENVIRONMENT="${1:-production}"

if [[ "$ENVIRONMENT" != "production" && "$ENVIRONMENT" != "staging" ]]; then
    die "Invalid environment: $ENVIRONMENT. Use 'production' or 'staging'."
fi

log "INFO" "Deploying $APP_NAME to $ENVIRONMENT"

# ─── Conditionals ──────────────────────────────────────────
if [[ "$ENVIRONMENT" == "production" ]]; then
    DEPLOY_DIR="/var/www/${APP_NAME}"
    BRANCH="main"
else
    DEPLOY_DIR="/var/www/${APP_NAME}-staging"
    BRANCH="develop"
fi

# ─── Loops ─────────────────────────────────────────────────
SERVICES=("api" "worker" "scheduler")
for service in "${SERVICES[@]}"; do
    log "INFO" "Restarting $service..."
    pm2 restart "$service" --update-env || log "WARN" "Failed to restart $service"
done

# ─── Error handling ────────────────────────────────────────
cd "$DEPLOY_DIR" || die "Cannot access $DEPLOY_DIR"
git fetch origin "$BRANCH" || die "Git fetch failed"
git checkout "$BRANCH" || die "Git checkout failed"
git pull origin "$BRANCH" || die "Git pull failed"

npm ci --production || die "npm install failed"

log "INFO" "Deployment to $ENVIRONMENT completed successfully."

Best Practices

  • Always start with set -euo pipefail: -e exits on any error, -u treats undefined variables as errors, -o pipefail catches errors in piped commands.
  • Use functions for reusable logic. Functions make scripts readable and testable.
  • Quote your variables: "$variable" prevents word splitting and glob expansion. Unquoted variables are the #1 source of shell scripting bugs.
  • Use [[ ]] instead of [ ]: Double brackets support regex, logical operators, and do not require quoting variables.
  • Use readonly for constants and local for function variables.
  • Log everything. Include timestamps and severity levels.
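The quoting and [[ ]] rules above are easy to see go wrong. A minimal demonstration (the filename is arbitrary):

```shell
#!/bin/bash
# Unquoted variables are split on whitespace before the command runs
cd "$(mktemp -d)"
file="my report.txt"
touch "$file"

ls $file   2>/dev/null || echo "unquoted: ls searched for 'my' and 'report.txt'"
ls "$file" >/dev/null  && echo "quoted: found one file"

# [[ ]] does not word-split, so even the unquoted form works inside it
[[ -f $file ]] && echo "double brackets handle the space"
```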
# Common patterns

# Read a file line by line
while IFS= read -r line; do
    echo "Processing: $line"
done < /etc/hosts

# Loop through files safely (handles spaces in filenames)
find /var/log -name "*.log" -print0 | while IFS= read -r -d '' file; do
    echo "Processing: $file"
done

# Parse command-line arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
        -e|--environment) ENVIRONMENT="$2"; shift 2 ;;
        -v|--verbose)     VERBOSE=true; shift ;;
        -h|--help)        show_help; exit 0 ;;
        *)                die "Unknown option: $1" ;;
    esac
done

# Trap for cleanup on exit
cleanup() {
    log "INFO" "Cleaning up temporary files..."
    rm -rf "$TMP_DIR"
}
trap cleanup EXIT

Cron & Task Scheduling

Cron is the standard task scheduler on Unix/Linux systems. It runs commands at specified times or intervals, defined in a crontab (cron table) file.

Crontab Syntax

# ┌───────────── minute (0-59)
# │ ┌───────────── hour (0-23)
# │ │ ┌───────────── day of month (1-31)
# │ │ │ ┌───────────── month (1-12)
# │ │ │ │ ┌───────────── day of week (0-7, 0 and 7 = Sunday)
# │ │ │ │ │
# * * * * * command to execute

# Edit your crontab
crontab -e

# List your cron jobs
crontab -l

# Common patterns
0 * * * *     /scripts/hourly-check.sh        # Every hour at :00
*/15 * * * *  /scripts/health-check.sh         # Every 15 minutes
0 2 * * *     /scripts/nightly-backup.sh       # Daily at 2:00 AM
0 0 * * 0     /scripts/weekly-report.sh        # Weekly on Sunday midnight
0 6 1 * *     /scripts/monthly-cleanup.sh      # First of month at 6:00 AM
30 4 * * 1-5  /scripts/weekday-sync.sh         # Weekdays at 4:30 AM

Cron Best Practices

# Always redirect output to a log file
0 2 * * * /scripts/backup.sh >> /var/log/backup.log 2>&1

# Use full paths (cron has a minimal PATH)
0 3 * * * /usr/bin/python3 /opt/scripts/report.py

# Use lock files to prevent overlapping runs
*/5 * * * * flock -xn /tmp/job.lock /scripts/job.sh

# Set environment variables at the top of crontab
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=admin@example.com

Systemd Timers (Modern Alternative)

# /etc/systemd/system/backup.service
[Unit]
Description=Nightly Backup

[Service]
Type=oneshot
ExecStart=/scripts/backup.sh
User=backup

# /etc/systemd/system/backup.timer
[Unit]
Description=Run backup every night at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

# Enable and start the timer
sudo systemctl enable backup.timer
sudo systemctl start backup.timer

# View all timers
systemctl list-timers --all

Key Insight: Systemd timers have significant advantages over cron: they log to journalctl (no need to manage separate log files), support dependency ordering, can catch up on missed runs (Persistent=true), and provide better monitoring with systemctl list-timers. For new deployments on systemd-based distributions, prefer timers over cron.

Networking Commands

# ─── Interface and IP information ──────────────────────────
ip addr show                          # Show all interfaces and IPs
ip route show                         # Show routing table
ip link show                          # Show link-layer info
hostname -I                           # Show all IP addresses

# ─── Connection testing ───────────────────────────────────
ping -c 4 google.com                  # Send 4 ICMP packets
traceroute google.com                 # Trace the route to a host
mtr google.com                        # Continuous traceroute with stats

# ─── DNS lookups ──────────────────────────────────────────
dig google.com                        # Detailed DNS lookup
dig google.com +short                 # Just the IP
dig -x 8.8.8.8                       # Reverse DNS lookup
nslookup google.com                   # Simpler DNS lookup
host google.com                       # Another DNS lookup tool

# ─── Port and connection inspection ───────────────────────
ss -tlnp                              # Show listening TCP ports with PIDs
ss -tunap                             # Show all connections with PIDs
lsof -i :80                           # What's using port 80?
lsof -i :443                          # What's using port 443?

# ─── HTTP requests ────────────────────────────────────────
curl -v https://api.example.com       # Verbose request with headers
curl -s https://api.example.com/users | python3 -m json.tool  # Pretty-print JSON
curl -X POST -H "Content-Type: application/json" \
     -d '{"name":"Wasil"}' \
     https://api.example.com/users     # POST with JSON body
curl -o file.zip https://example.com/file.zip  # Download file

wget https://example.com/file.tar.gz  # Download file (simpler than curl)
wget -r -l 2 https://example.com      # Recursive download, 2 levels deep

# ─── Bandwidth and transfer ──────────────────────────────
scp file.txt user@server:/path/       # Secure copy to remote
scp user@server:/path/file.txt .      # Secure copy from remote
rsync -avz src/ user@server:/dest/    # Efficient sync (only changes)
rsync -avz --delete src/ dest/        # Sync and delete removed files

Firewall Basics

# UFW (Uncomplicated Firewall) — Ubuntu/Debian
sudo ufw status                       # Check firewall status
sudo ufw enable                       # Enable firewall
sudo ufw allow 22/tcp                 # Allow SSH
sudo ufw allow 80/tcp                 # Allow HTTP
sudo ufw allow 443/tcp                # Allow HTTPS
sudo ufw deny 3306/tcp               # Block MySQL from outside
sudo ufw allow from 10.0.0.0/24 to any port 5432  # Allow PostgreSQL from LAN only

# firewalld — CentOS/RHEL/Fedora
sudo firewall-cmd --list-all                              # Show rules
sudo firewall-cmd --permanent --add-service=http          # Allow HTTP
sudo firewall-cmd --permanent --add-service=https         # Allow HTTPS
sudo firewall-cmd --permanent --add-port=8080/tcp         # Allow custom port
sudo firewall-cmd --reload                                # Apply changes

Security Hardening

SSH Configuration

# Generate an SSH key pair (Ed25519 — modern and secure)
ssh-keygen -t ed25519 -C "wasil@workstation"

# Copy your public key to a remote server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server

# Harden SSH server configuration (/etc/ssh/sshd_config)
# Disable root login
PermitRootLogin no

# Disable password authentication (use keys only)
PasswordAuthentication no

# Change default port (reduces automated scanning)
Port 2222

# Allow only specific users
AllowUsers deploy wasil

# SSH protocol 2 only (the default on modern OpenSSH; the Protocol
# directive is deprecated and ignored since OpenSSH 7.6)
Protocol 2

# Set idle timeout: probe every 300 seconds, disconnect after 2 unanswered
# probes (on OpenSSH 8.2+, a ClientAliveCountMax of 0 disables the check)
ClientAliveInterval 300
ClientAliveCountMax 2

# Apply changes
sudo systemctl restart sshd

Fail2ban: Automated Intrusion Prevention

# Install fail2ban
sudo apt install fail2ban              # Debian/Ubuntu
sudo yum install fail2ban              # CentOS/RHEL

# Create a local configuration
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Edit /etc/fail2ban/jail.local
[sshd]
enabled = true
port = 2222
maxretry = 3
bantime = 3600
findtime = 600

[nginx-http-auth]
enabled = true
port = http,https
maxretry = 5

# Start and enable
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Check banned IPs
sudo fail2ban-client status sshd

User Management

# Create a user with home directory
sudo useradd -m -s /bin/bash deploy

# Set password
sudo passwd deploy

# Add user to a group
sudo usermod -aG sudo deploy          # Add to sudo group (Ubuntu)
sudo usermod -aG wheel deploy         # Add to wheel group (CentOS)

# Create a system user (for services, no login shell)
sudo useradd -r -s /usr/sbin/nologin appuser

# Lock a user account
sudo usermod -L baduser

# View login history
last -10                              # Last 10 logins
lastb -10                             # Last 10 failed logins
who                                   # Currently logged-in users

Security Audit Checklist

| Check | Command | Expected Result |
|-------|---------|-----------------|
| Root login disabled | grep PermitRootLogin /etc/ssh/sshd_config | PermitRootLogin no |
| Password auth disabled | grep PasswordAuthentication /etc/ssh/sshd_config | PasswordAuthentication no |
| Firewall active | sudo ufw status | Status: active |
| No world-writable files | find / -perm -002 -type f 2>/dev/null | Empty or only /tmp files |
| No unowned files | find / -nouser -o -nogroup 2>/dev/null | Empty list |
| Automatic updates enabled | systemctl is-enabled unattended-upgrades | enabled |

Key Insight: The CIS (Center for Internet Security) Benchmarks provide comprehensive, step-by-step hardening guides for every major Linux distribution. They are free to download and are the industry standard used by auditors. If you are responsible for production servers, download the CIS Benchmark for your distribution and work through it systematically.

Case Studies

Case Study 1: Server Migration with rsync

A mid-size SaaS company needed to migrate 2 TB of data from an aging dedicated server to AWS EC2 instances with zero downtime. The migration team used rsync over SSH to synchronize data incrementally over three days, then did a final sync during a 15-minute maintenance window.

# Initial sync (ran for ~18 hours, transferred the bulk of data)
rsync -avz --progress --partial \
  -e "ssh -i /path/to/key -p 2222" \
  /var/www/ deploy@new-server:/var/www/

# Daily incremental syncs (only changed files, took ~30 minutes each)
rsync -avz --delete \
  -e "ssh -i /path/to/key -p 2222" \
  /var/www/ deploy@new-server:/var/www/

# Final sync during maintenance window (took 3 minutes)
rsync -avz --delete \
  -e "ssh -i /path/to/key -p 2222" \
  /var/www/ deploy@new-server:/var/www/

# Switch DNS, verify, done.

The key insight was that rsync's delta-transfer algorithm only sends the parts of files that changed, making incremental syncs extremely fast. The total downtime was under 5 minutes.
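Before trusting a migration like this, rsync's dry-run and itemize flags let you preview exactly what a pass would transfer. The host and paths below follow the case study's placeholders:

```shell
# Preview a sync without changing anything (-n / --dry-run)
rsync -avzn --delete /var/www/ deploy@new-server:/var/www/

# -i / --itemize-changes shows *why* each file would transfer
# (new file, size change, mtime change, permissions, ...)
rsync -avzni /var/www/ deploy@new-server:/var/www/

# For a final verification pass, compare full checksums (-c) instead of
# the default size+mtime heuristic; slower, but catches silent differences
rsync -avzc --dry-run /var/www/ deploy@new-server:/var/www/
```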

Case Study 2: Log Analysis Pipeline

A DevOps team at a fintech startup needed to analyze 50 GB of nginx access logs to identify the source of intermittent 502 errors. They built a shell pipeline that processed the logs in under 2 minutes on a standard server:

#!/bin/bash
# analyze-502s.sh — Find patterns in 502 errors

echo "=== 502 Errors by Hour ==="
grep ' 502 ' /var/log/nginx/access.log* \
  | awk '{print $4}' \
  | cut -d: -f1-2 \
  | sort | uniq -c | sort -rn | head -20

echo ""
echo "=== Top Upstream Servers Returning 502 ==="
grep ' 502 ' /var/log/nginx/access.log* \
  | grep -oP 'upstream: "\K[^"]+' \
  | sort | uniq -c | sort -rn | head -10

echo ""
echo "=== Top URLs Getting 502 ==="
grep ' 502 ' /var/log/nginx/access.log* \
  | awk '{print $7}' \
  | sort | uniq -c | sort -rn | head -20

echo ""
echo "=== 502 Rate Over Time (5-minute buckets) ==="
grep ' 502 ' /var/log/nginx/access.log \
  | awk '{print substr($4,2,17)}' \
  | awk -F: '{printf "%s:%s:%02d\n", $1, $2, int($3/5)*5}' \
  | sort | uniq -c | sort -k2

The analysis revealed that 502 errors spiked every hour at the 15-minute mark, correlating with a cron job that consumed all available database connections. The fix was a 2-line change to the cron job's connection pool configuration.

Case Study 3: Automated Backup Script

A small team managing a PostgreSQL database and application files on a DigitalOcean droplet needed reliable automated backups with rotation:

#!/bin/bash
# backup.sh — Daily backup with 30-day rotation
set -euo pipefail

BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d-%H%M%S)
RETENTION_DAYS=30

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }

# Database backup
log "Starting PostgreSQL backup..."
pg_dump -U app_user -h localhost app_database \
  | gzip > "${BACKUP_DIR}/db-${DATE}.sql.gz"
log "Database backup complete: $(du -h ${BACKUP_DIR}/db-${DATE}.sql.gz | cut -f1)"

# Application files backup
log "Starting file backup..."
tar -czf "${BACKUP_DIR}/files-${DATE}.tar.gz" \
  --exclude='node_modules' \
  --exclude='.git' \
  /var/www/app/
log "File backup complete: $(du -h ${BACKUP_DIR}/files-${DATE}.tar.gz | cut -f1)"

# Upload to S3
log "Uploading to S3..."
aws s3 cp "${BACKUP_DIR}/db-${DATE}.sql.gz" s3://company-backups/daily/
aws s3 cp "${BACKUP_DIR}/files-${DATE}.tar.gz" s3://company-backups/daily/

# Cleanup old local backups
log "Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "*.gz" -mtime +${RETENTION_DAYS} -delete

log "Backup completed successfully."

Exercises

Exercise 1 (Beginner)

File System Exploration

On a fresh Linux system (use a VM or Docker container), complete these tasks using only the command line: (1) Create a directory structure /home/student/project/{src,docs,tests,config} in one command. (2) Create files in each directory. (3) Set permissions so that src/ is readable by all but writable only by the owner, config/ is accessible only by the owner, and docs/ is readable by everyone. (4) Find all files you created that are larger than 0 bytes. (5) Create a tar.gz archive of the entire project directory.

Tools: mkdir, chmod, find, tar

Exercise 2 (Intermediate)

Log Analysis Pipeline

Download a sample nginx access log (or generate one with a tool like flog). Write a shell pipeline that: (1) Extracts all unique IP addresses. (2) Counts requests per IP, sorted by frequency. (3) Finds the top 10 most-requested URLs. (4) Calculates the percentage of 4xx and 5xx errors. (5) Identifies the busiest hour of the day. Combine all of these into a single script that generates a summary report.

Tools: awk, grep, sort, uniq, pipes

Exercise 3 (Advanced)

Production Server Hardening

Set up a cloud VM (DigitalOcean, AWS, or Linode) and perform a complete security hardening: (1) Create a non-root deploy user with SSH key authentication. (2) Disable root login and password authentication. (3) Configure UFW to allow only SSH (custom port), HTTP, and HTTPS. (4) Install and configure fail2ban with custom rules. (5) Set up automated security updates. (6) Write a monitoring script that runs via cron every 5 minutes and sends an alert (email or webhook) if disk usage exceeds 80% or if an unauthorized login attempt is detected. (7) Run the CIS Benchmark Level 1 checks and document your compliance score.

Tools: SSH hardening, fail2ban, UFW, cron, CIS Benchmarks

Linux Command Reference Generator

Use this tool to document your server configuration, common tasks, scripts, and cron jobs. Download as Word, Excel, PDF, or PowerPoint for team documentation or runbooks.


Conclusion & Resources

The Linux command line is an interface more than half a century old that has never been more relevant. From navigating the file system to hardening production servers, the skills in this guide form the foundation of every sysadmin, DevOps engineer, and backend developer's toolkit.

The most important takeaways:

  • Master the fundamentals first. ls, cd, grep, find, chmod, and ps cover 80% of daily work.
  • Learn to pipe. The ability to chain simple commands into powerful data processing pipelines is the superpower of the command line.
  • Script everything you do twice. If you run a sequence of commands more than once, put it in a script with set -euo pipefail and proper error handling.
  • Security is not optional. Disable root login, use SSH keys, configure a firewall, and install fail2ban on every server you manage.
  • Use systemd. For services, use systemctl. For scheduled tasks, consider systemd timers. For logs, use journalctl. Systemd is the modern Linux init system and mastering it is essential.

Recommended Resources

  • The Linux Command Line by William Shotts — free online at linuxcommand.org
  • TLDR pages — simplified man pages at tldr.sh
  • ExplainShell — paste a command to see what each part does at explainshell.com
  • OverTheWire: Bandit (overthewire.org) — wargame for learning Linux commands through challenges
  • CIS Benchmarks (cisecurity.org) — server hardening guides