Data Backup on a Budget

I have a lot of data that I’d hate to lose. Anyone who has been through a hard drive failure appreciates the value of having another copy of the lost data. Cloud storage would be a logical place to back up data, but with all my personal audio and video files I might need as much as 10 TB of storage, which would be too expensive for me. Instead, my strategy has been to buy enough USB external hard drives to keep two copies of all my data. That has always been a manual process, and I am working to automate it.

There are a couple of options when syncing two file directories. One is to simply copy all new files from the source to the target. The other is to additionally remove from the target any files that have been removed from the source. I’ll automate the first procedure. The second is dangerous to automate: if you accidentally remove all files from the source, it will also remove all files from the target, so it will only be used manually and very carefully. A minimal sketch of both forms appears below.
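
Both forms with rsync, using /source and /target as stand-ins for real paths:

# Option 1: copy new and changed files; nothing is ever deleted from the target
rsync -av /source/ /target/

# Option 2: mirror the source exactly; files missing from the source are
# DELETED from the target, so only run this manually and carefully
rsync -av --delete /source/ /target/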

Personal Documents

stub…

Audio, Video and Pictures

stub…

Duplicate Files

stub…

Server Backups

stub…

Rsync (ChatGPT)

I have three external USB drives: 4 TB, 5 TB, and 12 TB.

You can use the following rsync command to synchronize the /documents directory from fileserver1 to fileserver2, ensuring that extra files on fileserver2 are removed. Note that rsync cannot copy between two remote hosts in one command, so run this on fileserver1 with a local source path:

rsync -av --delete /documents/ fileserver2:/documents/

Explanation of Options:

  • -a → Archive mode (preserves permissions, timestamps, symbolic links, etc.).
  • -v → Verbose output (optional; lists each transferred file).
  • --delete → Removes extra files on fileserver2 that are not present on fileserver1.
  • The trailing / after the source path tells rsync to copy the directory’s contents rather than the directory itself; see the example below.
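
The trailing slash is easy to get wrong, so here is a quick illustration (paths are placeholders):

# With the trailing slash, the contents of /documents land directly in the target:
rsync -av /documents/ fileserver2:/documents/

# Without it, rsync copies the directory itself, creating /documents/documents on the target:
rsync -av /documents fileserver2:/documents/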

Additional Options (If Needed):

  • Dry run before actual execution (to preview changes): rsync -av --delete --dry-run /documents/ fileserver2:/documents/
  • Use SSH explicitly for the transfer (modern rsync already uses SSH by default for remote paths): rsync -av --delete -e ssh /documents/ fileserver2:/documents/
  • Limit bandwidth to avoid overloading the network (the value is in KiB/s, so 5000 is roughly 5 MB/s): rsync -av --delete --bwlimit=5000 /documents/ fileserver2:/documents/

You can automate the rsync synchronization using a cron job.

Step 1: Edit the Crontab

On fileserver1, open the crontab editor:

crontab -e

Step 2: Add the Rsync Command

Add the following line to schedule synchronization every day at 2 AM. Since the job runs on fileserver1, the source path is local:

0 2 * * * rsync -av --delete /documents/ fileserver2:/documents/ >> /var/log/rsync.log 2>&1

This will:

  • Run rsync at 2:00 AM daily.
  • Log output to /var/log/rsync.log for debugging (the crontab’s user must be able to write to that file; a path in your home directory also works).
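
For the job to run unattended, fileserver1 needs passwordless SSH access to fileserver2. A minimal sketch, assuming user-level SSH between the servers (run once on fileserver1):

# Generate a key pair if one doesn't already exist
ssh-keygen -t ed25519

# Copy the public key to fileserver2 so rsync can log in without a password
ssh-copy-id fileserver2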

Step 3: Save and Exit

Save the file and exit the editor.

Step 4: Verify the Cron Job

After a day, check if the job ran:

cat /var/log/rsync.log

If you want email notifications for failures, cron can mail the job’s output, as sketched below.
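
A minimal sketch, assuming the machine has a working mail setup (e.g., postfix) and using a placeholder address. Because routine output is redirected to the log, cron only sends mail when rsync fails and the echo fires:

MAILTO="admin@example.com"
0 2 * * * rsync -av --delete /documents/ fileserver2:/documents/ >> /var/log/rsync.log 2>&1 || echo "rsync backup failed with exit code $?"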

Robocopy

stub…

Back up Pi-hole on a Schedule

To back up Pi-hole on a schedule, you can use its built-in configuration export tools combined with a cron job. This allows you to save the configuration and restore it if needed. Here’s how to set it up:


1. Pi-hole Configuration Backup

Pi-hole’s Teleporter feature, exposed through the pihole -a command, exports all settings as a single archive.

Backup Command:

To back up Pi-hole settings, run:

pihole -a -t

This creates a tarball of Pi-hole settings in the current directory, containing:

  • Whitelist/Blacklist (including regex filters)
  • Adlists and group assignments
  • Local DNS records
  • Static DHCP leases
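
To see exactly what a given archive contains, list it with tar (the filename pattern below is Pi-hole’s default; yours will include your hostname and a date):

tar -tzf pi-hole-*-teleporter_*.tar.gz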

Restoring:

To restore from a backup, use the Pi-hole web interface: go to Settings > Teleporter, choose the archive, and import it.


2. Automate Backup with a Cron Job

Step 1: Create a Backup Directory

Choose where to store the backup files (e.g., /home/pi/pihole-backups):

mkdir -p /home/pi/pihole-backups

Step 2: Write a Backup Script

Create a script to handle the backup process.

Example script: /home/pi/backup_pihole.sh

#!/bin/bash

# Define backup directory
BACKUP_DIR="/home/pi/pihole-backups"

# Create a timestamp
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")

# Backup file path
BACKUP_FILE="$BACKUP_DIR/pihole-backup-$TIMESTAMP.tar.gz"

# Run Pi-hole's Teleporter export; it writes its tarball to the
# current directory, so run it from the backup directory
cd "$BACKUP_DIR" || exit 1
pihole -a -t > /dev/null 2>&1

# Rename the new tarball (Teleporter names it pi-hole-<host>-teleporter_<date>.tar.gz)
# to the timestamped backup file
mv "$BACKUP_DIR"/pi-hole-*-teleporter_*.tar.gz "$BACKUP_FILE"

# Delete old backups (optional, e.g., keep 7 days)
find "$BACKUP_DIR" -type f -mtime +7 -exec rm {} \;

Make the script executable:

chmod +x /home/pi/backup_pihole.sh

Step 3: Add a Cron Job

Edit the crontab to schedule the script:

crontab -e

Add a line to schedule the script (e.g., daily at 2:00 AM):

0 2 * * * /home/pi/backup_pihole.sh
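
To confirm the entry was saved, list the crontab:

crontab -l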


3. (Optional) Sync Backups to Another Location

For additional safety, copy backups to an external location (e.g., NAS, cloud storage, or another server).

Example: Use rsync to Copy Backups

Add the following line to the end of the backup script (like the fileserver cron job, this needs passwordless SSH access to the remote machine; see the key setup in the Rsync section):

rsync -av --delete /home/pi/pihole-backups/ user@remote-server:/backup-location/


4. Verify Backup and Restore

  1. Run the script manually to test: /home/pi/backup_pihole.sh
  2. Confirm the backup file exists in /home/pi/pihole-backups.
  3. Test a restore through the web interface (Settings > Teleporter).

With this setup, Pi-hole backups will occur automatically, and you’ll have a reliable way to restore your configuration when needed.

Proxmox High Availability Setup (ChatGPT)

I think there’s a pretty good video here on setting up high availability (HA) on my Proxmox cluster.

This is my query to ChatGPT:

How to set up ha on my three-node proxmox cluster (pve1, pve2, and pve3) for automatic failover of my pihole dhcp server on pve1.

ChatGPT response:

Setting up high availability (HA) for your Pi-hole DHCP server on a three-node Proxmox cluster (pve1, pve2, pve3) ensures automatic failover if one node goes down. Here’s how you can achieve this:

1. Prerequisites

  1. Shared Storage:
    • Ensure all Proxmox nodes have access to shared storage (e.g., NFS, Ceph, or iSCSI) for the Pi-hole VM or container.
    • Alternatively, use Proxmox replication for local storage to synchronize disks between nodes.
  2. Cluster and Quorum:
    • Your Proxmox cluster (pve1, pve2, pve3) must be fully configured with quorum enabled to manage HA resources. Check with: pvecm status
  3. Pi-hole Setup:
    • Install Pi-hole as a VM or LXC container on pve1.
    • Configure Pi-hole’s DHCP and DNS settings.
  4. HA Configuration Package:
    • The HA stack (pve-ha-manager) ships with Proxmox VE and is installed by default; if it is missing: apt install pve-ha-manager

2. Configure Shared Storage

If you don’t have shared storage:

  1. Set Up NFS Storage:
    • Install NFS on a NAS or a separate server.
    • Add the shared storage in Proxmox:
      • Go to Datacenter > Storage > Add > NFS.
      • Provide the NFS server details and share path.
  2. Use Local Storage Replication:
    • Enable storage replication between nodes for local storage (not real-time but effective for HA).

3. Move Pi-hole VM to Shared Storage

  1. Migrate the Pi-hole VM’s disk to shared storage: qm move_disk <vmid> <disk> <storage_name> Replace <vmid> with your Pi-hole VM ID, <disk> with the disk to move (e.g., scsi0), and <storage_name> with the shared storage ID.
  2. Confirm the disk location: qm config <vmid>
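
A concrete sketch with hypothetical values (VM ID 100, disk scsi0, and a shared storage named nfs-shared):

qm move_disk 100 scsi0 nfs-shared
qm config 100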

4. Enable HA for Pi-hole VM

  1. Add the VM to HA Resources:
    • In the Proxmox web interface:
      • Go to Datacenter > HA > Add > Virtual Machine.
      • Select the Pi-hole VM and configure Failover priority.
    • Or use the CLI: ha-manager add vm:<vmid>
  2. Check HA Status: ha-manager status
  3. Set VM Start Priority:
    • Go to VM > Options and set the start/shutdown order.
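
The same step from the CLI, with a hypothetical VM ID of 100 (the extra flags are standard ha-manager options controlling restart and relocation policy):

ha-manager add vm:100 --state started --max_restart 2 --max_relocate 1
ha-manager status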

5. Test HA Setup

  1. Shut down pve1: shutdown -h now
  2. Verify that the Pi-hole VM automatically starts on another node (e.g., pve2 or pve3).

6. Additional Considerations

DNS Redundancy:

  • Add a secondary Pi-hole instance for DNS failover.
    • Configure it on another node (e.g., pve2).
    • Sync configurations between the two using tools like rsync or Gravity Sync.

Monitor DHCP Leases:

  • Ensure DHCP leases persist after failover.
    • Store the dnsmasq lease file on shared storage or sync it between nodes using a script.
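
A minimal sketch of the script approach, assuming Pi-hole’s default lease file path and a hypothetical standby host named pihole2 (run from the primary Pi-hole’s root crontab):

*/5 * * * * rsync -a /etc/pihole/dhcp.leases pihole2:/etc/pihole/dhcp.leases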

Backup Configuration:

  • Regularly back up the Pi-hole configuration and VM:
    • Use Proxmox backup jobs to save VM snapshots.
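
The same backup works ad hoc from the CLI with vzdump, shown here with a hypothetical VM ID and storage name:

vzdump 100 --storage backup-nfs --mode snapshot --compress zstd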

Summary of Key Commands

  • Add HA Resource: ha-manager add vm:<vmid>
  • Check HA Status: ha-manager status
  • Migrate Disk to Shared Storage: qm move_disk <vmid> <disk> <storage_name>
