V1.1 Refactor

This commit is contained in:
2025-11-19 17:47:29 +01:00
parent 1bff525466
commit 0bde669723
4 changed files with 994 additions and 462 deletions

README.MD

@@ -1,118 +1,258 @@
Backify README
==============

What is Backify?
----------------
Backify is a shell script that helps you automate the backup process of all kinds of data from Linux systems. It differs from other backup scripts because it gives you the flexibility to choose what you want to save, ranging from system logs to containers. The script was tailored to meet personal needs as there was no complete solution for the specific use case.

Prerequisites
-------------

* The script must be executed as root.
* A configuration file (by default `backup.cfg`) must exist and be readable.
* The system must be a Red Hat-based (RHEL, CentOS, Rocky, Alma…) or Debian/Ubuntu-based distribution.
* Required tools:
  * tar
  * rsync and ssh if you push backups to a remote host
  * docker if you use Docker backup features
  * mysqldump (for MySQL/MariaDB) and/or pg_dump / pg_dumpall (for PostgreSQL) if you back up databases
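As a quick sanity check before the first run, you can verify that these tools are on the PATH. This snippet is not part of Backify itself; adjust the list to the features you actually enable:

```shell
# Check that the tools Backify may need are available.
# Adjust the list to the features you enable in backup.cfg.
missing=""
for cmd in tar rsync ssh docker mysqldump; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        missing="$missing $cmd"
    fi
done
if [ -n "$missing" ]; then
    echo "Missing tools:$missing" >&2
else
    echo "All required tools found."
fi
```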

Configuration
-------------

By default Backify looks for `backup.cfg` in the same directory as the script, but you can override this with the `-c` / `--config` command-line option.

The script has an integrity check in place to ensure that no external commands can be embedded into it by malware (the config is "cleaned" before sourcing). The following sections provide an overview of the available configuration options.

Main options
------------
**Name** | **Value** | **Specifics**
--------------------|--------------------|-------------
`enabled` | true/false | Disable or enable the main function
`backup_path` | path | Where to save the backup, **must NOT end with a slash**
`www_backup` | true/false | Backup of the webroot directory
`www_dir` | path | Path to the webroot
`vhost_backup` | true/false | Backup of the vhost configuration
`vhost_dir` | path | Path to the vhost files
`log_backup` | true/false | Backup log files
`log_to_backup` | array | Array of logs to back up (see list below)
`rsync_push` | true/false | Push the backup archive to a remote server
`push_clean` | true/false | Delete the local backup archive after a successful push
`target_host` | host | Backup push target host (single-target mode)
`target_user` | user | Backup push target username (single-target mode)
`target_key` | path | SSH key for the remote backup user
`target_dir` | path | Remote directory to push backups to
`targets` | array | **Optional**: list of full rsync destinations (`user@host:/path`). If non-empty, overrides `target_host` / `target_user` / `target_dir`.
`docker_enabled` | true/false | Enable Docker backups
`docker_images` | true/false | Backup Docker images
`docker_volumes` | true/false | Backup Docker volumes (via helper container)
`docker_data` | true/false | Backup container metadata (inspect output)
`tar_opts` | string | Optional `TAR_OPTS` passed to the Docker volume backup helper (e.g. `-J` for xz)
`db_backup` | true/false | Enable database backup
`database_type` | mysql/postgresql | Database type
`db_host` | host | Database host
`db_port` | int | Port for DB access
`db_username` | string | Username for DB access
`db_password` | string | Password for DB access
`db_name` | string | Name of database to dump when `db_all=false`
`db_all` | true/false | Dump all databases instead of a specific one
`custom_backup` | true/false | Enable backup of custom files
`custom_dirs` | array | Array of files/directories to back up
`retention_days` | int | **Optional**: delete local archives older than this many days (0 = disabled)
`retention_keep_min`| int | **Optional**: always keep at least this many newest archives (0 = disabled)
`pre_backup_hook` | path | **Optional**: executable script run **before** the backup (receives `TMPDIR` as `$1`)
`post_backup_hook` | path | **Optional**: executable script run **after success** (receives archive path as `$1`)
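
Putting the main options together, a minimal `backup.cfg` for a nightly webroot-plus-database backup could look like this (paths and credentials below are placeholders, not defaults):

```bash
enabled=true
backup_path='/opt/backify'          # no trailing slash
www_backup=true
www_dir='/var/www/html'
log_backup=true
log_to_backup=("nginx" "auth" "syslog")
db_backup=true
database_type=mysql
db_host='localhost'
db_port=3306
db_username='backup'
db_password='CHANGE_ME'
db_all=true
retention_days=14
retention_keep_min=3
```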

Logs to backup array
--------------------
**Option** | **Specifics**
---------------|-------------
`apache` | Apache access and error logs
`nginx` | Nginx access and error logs
`fail2ban` | Fail2ban log
`pckg_mngr` | Package manager logs (`yum`/`dnf` on RHEL, `apt` on Debian/Ubuntu)
`auth` | Authentication logs
`dmesg` | Kernel ring buffer log
`dpkg` | Package changes log (Debian/Ubuntu)
`letsencrypt` | Let's Encrypt logs
`php` | Logs from all installed PHP versions
`syslog` | General system event data
`purge` | Truncate/empty selected logs after backing them up

Command-line options
--------------------
Backify supports the following CLI options:

- `-c`, `--config` *PATH*: path to the configuration file (default: `./backup.cfg`).
- `-n`, `--dry-run`: show what would be done, but do not copy/compress/push/delete anything.
- `-h`, `--help`: show short usage help and exit.
- `-v`, `--version`: show the Backify version and exit.

Examples
--------
Use the default `backup.cfg` (in the same directory as the script):
```bash
./backify.sh
```
Use a custom config file:
```bash
./backify.sh --config /etc/backify/web01.cfg
```

Safe test run: see what would happen, but do not touch any data:
```bash
./backify.sh --config /etc/backify/web01.cfg --dry-run
```

Script Execution
----------------

To execute the script with the default configuration file in the same directory:

```bash
./backify.sh
```

The script will:

* Parse CLI options (config path, dry-run, etc.).
* Initialize by checking for the existence of the configuration file, loading its parameters, and verifying that it is being executed as root.
* Detect whether the system is Red Hat-based or Debian/Ubuntu-based.
* Create a new timestamped directory inside `backup_path`, where the backup data will be stored.
* Run the configured backup steps:
  * Webroot
  * Vhosts
  * Logs
  * Docker images/volumes/data
  * Databases
  * Custom files/directories
* Create a compressed tar archive (`backify-YYYYMMDD_HHMM.tar.gz`) from the backup directory.
* Optionally push the archive to one or more remote rsync targets.
* Optionally apply a retention policy to local archives.
* Optionally run pre/post backup hooks.

If you use `--dry-run`, steps that modify data (copying files, truncating logs, creating archives, pushing, deleting, hooks) are simulated and only logged, not executed.

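Hooks are plain executables. As an illustration (the path and log file below are hypothetical, not part of Backify), a post-backup hook that records the finished archive might look like:

```shell
#!/bin/bash
# Hypothetical post-backup hook; Backify passes the archive path as $1.
archive="$1"
if [ -f "$archive" ]; then
    size=$(du -h "$archive" | cut -f1)
    printf '%s backup finished: %s (%s)\n' "$(date '+%F %T')" "$archive" "$size" \
        >> /var/log/backify-hooks.log 2>/dev/null || true
fi
```

Remember to make the hook executable (`chmod +x`), since Backify only runs hooks that pass an `-x` test.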
Automation
----------

Cron
----

You can use cron to run Backify every day at 12:00.

1. Open the crontab editor:

   ```bash
   crontab -e
   ```

2. Add a line like this (adjust the path as needed):

   ```bash
   0 12 * * * /path/to/backify.sh --config /etc/backify/web01.cfg
   ```

3. Save and exit.

systemd (optional)
------------------

If you prefer systemd, you can wrap Backify in a simple `backify.service` and `backify.timer` unit pair. (Units are not shipped in this repo yet, but Backify is fully compatible with a systemd timer.)
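
A minimal unit pair could look like the following (unit names, paths, and schedule are illustrative, since no units ship with the repo):

```ini
# /etc/systemd/system/backify.service
[Unit]
Description=Backify backup run

[Service]
Type=oneshot
ExecStart=/path/to/backify.sh --config /etc/backify/web01.cfg

# /etc/systemd/system/backify.timer
[Unit]
Description=Run Backify daily at 12:00

[Timer]
OnCalendar=*-*-* 12:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

After placing both files, enable the schedule with `systemctl daemon-reload && systemctl enable --now backify.timer`.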

Restore (high-level overview)
-----------------------------

Backify creates standard `tar.gz` archives, so restoration is straightforward but manual by design:

1. Copy the desired archive back to the server (or access it on the backup storage).

2. Extract it:

   ```bash
   tar -xzf backify-YYYYMMDD_HHMM.tar.gz -C /tmp/restore
   ```

   The content layout roughly mirrors:

   * `wwwdata/`: your webroot
   * `vhosts/`: webserver vhost configs
   * `syslogs/`, `apachelogs/`, `nginxlogs/`: logs
   * `containers/`: Docker images, volumes, and metadata (if enabled)
   * `db/`: database dumps (`.sql`)
   * `custom/`: custom files/directories you configured

3. Restore what you need:

   * Webroot / vhosts: copy files back into place and reload/restart services.
   * Databases:
     * MySQL/MariaDB:

       ```bash
       mysql -u USER -p DB_NAME < db/yourdb.sql
       ```

     * PostgreSQL:

       ```bash
       psql -U USER -h HOST DB_NAME < db/yourdb.sql
       ```

Make sure you understand what you are overwriting; ideally test restores on a non-production server first.

MySQL / PostgreSQL user
-----------------------

If you want to dump all databases, a dedicated read-only user is recommended.

For MySQL/MariaDB, you can create one with:

```sql
GRANT LOCK TABLES, SELECT ON DATABASE_NAME.* TO 'BACKUP_USER'@'%' IDENTIFIED BY 'PASSWORD';
```

Note that MySQL 8.0 and later no longer accept `IDENTIFIED BY` inside `GRANT`; there, create the user first with `CREATE USER` and then grant the privileges.

For PostgreSQL, use a user with sufficient CONNECT and SELECT permissions on the databases you want to dump.
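
For PostgreSQL, a minimal read-only role could be created like this (role and database names are placeholders):

```sql
CREATE ROLE backify_backup LOGIN PASSWORD 'PASSWORD';
GRANT CONNECT ON DATABASE your_db TO backify_backup;
GRANT USAGE ON SCHEMA public TO backify_backup;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backify_backup;
```

For `db_all=true` (which uses `pg_dumpall`), the role needs equivalent access on every database.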

Buy me a beer
-------------

One pale ale won't hurt, will it?

0x4046979a1E1152ddbfa4a910b1a98F73625a77ae (ETH / BNB / Polygon chains)

backify.sh (new file)

@@ -0,0 +1,727 @@
#!/bin/bash
set -Eeo pipefail
umask 077
VERSION="1.1.0"
CONFIG="backup.cfg" # default config path; can be overridden with -c/--config
DRY_RUN=false
tmpdir=""
cleanup() {
if [ -n "${tmpdir-}" ] && [ -d "$tmpdir" ]; then
rm -rf "$tmpdir"
fi
}
trap cleanup EXIT
function usage {
cat >&2 <<EOF
Usage: $0 [options]
Options:
-c, --config PATH Path to configuration file (default: backup.cfg)
-n, --dry-run Show what would be done, but do not copy/compress/push/delete
-h, --help Show this help and exit
-v, --version Show Backify version and exit
EOF
}
function show_version {
echo "Backify version $VERSION"
}
function parse_args {
while [ $# -gt 0 ]; do
case "$1" in
-c|--config)
if [ -n "${2-}" ]; then
CONFIG="$2"
shift 2
else
echo "Error: -c|--config requires a path argument." >&2
usage
exit 1
fi
;;
-n|--dry-run)
DRY_RUN=true
shift
;;
-h|--help)
usage
exit 0
;;
-v|--version)
show_version
exit 0
;;
--)
shift
break
;;
-*)
echo "Unknown option: $1" >&2
usage
exit 1
;;
*)
shift
;;
esac
done
}
function log_enabled {
local needle="$1"
local item
for item in "${log_to_backup[@]:-}"; do
if [ "$item" = "$needle" ]; then
return 0
fi
done
return 1
}
function require_cmd {
local cmd="$1"
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "Error: required command '$cmd' not found in PATH." >&2
exit 1
fi
}
function preflight {
require_cmd tar
if [ "${rsync_push:-false}" = true ]; then
require_cmd rsync
require_cmd ssh
fi
if [ "${docker_enabled:-false}" = true ]; then
require_cmd docker
fi
if [ "${db_backup:-false}" = true ]; then
case "${database_type:-}" in
mysql)
require_cmd mysqldump
;;
postgresql)
require_cmd pg_dump
require_cmd pg_dumpall
;;
*)
echo "Error: database_type must be 'mysql' or 'postgresql' when db_backup is true." >&2
exit 1
;;
esac
fi
}
function init {
echo "Backify is starting, looking for configuration file..." >&2
config="$CONFIG"
secured_config='sbackup.cfg'
if [ ! -f "$config" ]; then
echo "Error: Config file not found: $config" >&2
echo "Please create a config file or specify the location of an existing file (use -c/--config)." >&2
exit 1
fi
if grep -E -q -v '^#|^[^ ]*=[^;]*' "$config"; then
echo "Config file is unclean, cleaning it..." >&2
grep -E '^#|^[^ ]*=[^;&]*' "$config" >"$secured_config"
config="$secured_config"
fi
source "$config"
echo "Configuration file loaded" >&2
if [ "$EUID" -ne 0 ]; then
echo "Please run as root" >&2
exit 1
fi
mkdir -p "$backup_path"
if [ ! -w "$backup_path" ]; then
echo "Error: backup_path '$backup_path' is not writable." >&2
exit 1
fi
: "${retention_days:=0}"
: "${retention_keep_min:=0}"
: "${pre_backup_hook:=}"
: "${post_backup_hook:=}"
if ! declare -p log_to_backup >/dev/null 2>&1; then
log_to_backup=()
fi
if ! declare -p custom_dirs >/dev/null 2>&1; then
custom_dirs=()
fi
if ! declare -p targets >/dev/null 2>&1; then
targets=()
fi
}
function detect_system {
echo "Detecting OS type..." >&2
if [ -r /etc/os-release ]; then
. /etc/os-release
case "$ID" in
rhel|centos|rocky|almalinux)
echo "Discovered Red Hat-based OS..." >&2
SYSTEM='rhel'
;;
debian|ubuntu)
echo "Discovered Debian-based OS..." >&2
SYSTEM='debian'
;;
*)
echo "Error: Unsupported OS: $ID" >&2
exit 1
;;
esac
elif [ -f /etc/redhat-release ]; then
echo "Discovered Red Hat-based OS via legacy detection..." >&2
SYSTEM='rhel'
elif [ -f /etc/lsb-release ]; then
echo "Discovered Debian-based OS via legacy detection..." >&2
SYSTEM='debian'
else
echo "Error: Unable to detect OS type." >&2
exit 1
fi
}
function makedir {
timestamp=$(date +%Y%m%d_%H%M)
tmpdir="$backup_path/backify-$timestamp"
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would create temporary directory $tmpdir" >&2
else
mkdir -p "$tmpdir"
fi
}
function wwwbackup {
if [ "$www_backup" = true ]; then
echo "Backing up wwwroot..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy $www_dir to $tmpdir/wwwdata" >&2
return
fi
mkdir -p "$tmpdir/wwwdata"
cp -r "$www_dir/" "$tmpdir/wwwdata/"
echo "Finished" >&2
fi
}
function vhostbackup {
if [ "$vhost_backup" = true ]; then
echo "Backing up vhosts..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy $vhost_dir to $tmpdir/vhosts" >&2
return
fi
mkdir -p "$tmpdir/vhosts"
cp -avr "$vhost_dir/" "$tmpdir/vhosts/"
echo "Finished" >&2
fi
}
function logbackup {
if [ "$log_backup" = true ]; then
echo "Backing up system logs..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would collect selected logs into $tmpdir/syslogs" >&2
else
mkdir -p "$tmpdir/syslogs"
fi
case "$SYSTEM" in
"rhel")
if log_enabled "fail2ban"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/fail2ban.log" >&2
else
cp /var/log/fail2ban.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "apache"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/httpd to $tmpdir/apachelogs" >&2
else
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/httpd "$tmpdir/apachelogs" 2>/dev/null || true
fi
fi
if log_enabled "nginx"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/nginx to $tmpdir/nginxlogs" >&2
else
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs" 2>/dev/null || true
fi
fi
if log_enabled "pckg_mngr"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy yum/dnf logs to $tmpdir/syslogs/yum" >&2
else
mkdir -p "$tmpdir/syslogs/yum"
cp -r /var/log/yum/* "$tmpdir/syslogs/yum/" 2>/dev/null || true
cp -r /var/log/dnf* "$tmpdir/syslogs/yum/" 2>/dev/null || true
fi
fi
if log_enabled "letsencrypt"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/letsencrypt to $tmpdir/syslogs/letsencrypt" >&2
else
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/" 2>/dev/null || true
fi
fi
if log_enabled "php"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/php*.log to $tmpdir/syslogs" >&2
else
cp -r /var/log/php*.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "syslog"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/secure to $tmpdir/syslogs" >&2
else
cp -r /var/log/secure "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "purge"; then
echo "Purging logs..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would truncate and clear configured logs on RHEL system" >&2
else
truncate -s 0 /var/log/messages 2>/dev/null || true
truncate -s 0 /var/log/syslog 2>/dev/null || true
if log_enabled "apache"; then
truncate -s 0 /var/log/httpd/* 2>/dev/null || true
rm /var/log/httpd/*.gz 2>/dev/null || true
fi
if log_enabled "nginx"; then
truncate -s 0 /var/log/nginx/* 2>/dev/null || true
rm /var/log/nginx/*.gz 2>/dev/null || true
fi
if log_enabled "fail2ban"; then
truncate -s 0 /var/log/fail2ban.log 2>/dev/null || true
fi
if log_enabled "pckg_mngr"; then
truncate -s 0 /var/log/yum/* 2>/dev/null || true
truncate -s 0 /var/log/dnf* 2>/dev/null || true
fi
if log_enabled "letsencrypt"; then
truncate -s 0 /var/log/letsencrypt/* 2>/dev/null || true
fi
if log_enabled "php"; then
truncate -s 0 /var/log/php*.log 2>/dev/null || true
fi
if log_enabled "syslog"; then
truncate -s 0 /var/log/secure 2>/dev/null || true
fi
fi
fi
;;
"debian")
if log_enabled "fail2ban"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/fail2ban.log" >&2
else
cp /var/log/fail2ban.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "apache"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/apache2 to $tmpdir/apachelogs" >&2
else
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/apache2 "$tmpdir/apachelogs" 2>/dev/null || true
fi
fi
if log_enabled "nginx"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/nginx to $tmpdir/nginxlogs" >&2
else
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs" 2>/dev/null || true
fi
fi
if log_enabled "pckg_mngr"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy apt logs to $tmpdir/syslogs/apt" >&2
else
mkdir -p "$tmpdir/syslogs/apt"
cp -r /var/log/apt/* "$tmpdir/syslogs/apt/" 2>/dev/null || true
fi
fi
if log_enabled "auth"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/auth.log" >&2
else
cp -r /var/log/auth.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "dmesg"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/dmesg" >&2
else
cp -r /var/log/dmesg "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "dpkg"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/dpkg.log" >&2
else
cp -r /var/log/dpkg.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "letsencrypt"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/letsencrypt to $tmpdir/syslogs/letsencrypt" >&2
else
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/" 2>/dev/null || true
fi
fi
if log_enabled "php"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/php*.log" >&2
else
cp -r /var/log/php*.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "syslog"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/syslog" >&2
else
cp -r /var/log/syslog "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "purge"; then
echo "Purging logs..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would truncate and clear configured logs on Debian system" >&2
else
truncate -s 0 /var/log/syslog 2>/dev/null || true
truncate -s 0 /var/log/messages 2>/dev/null || true
if log_enabled "apache"; then
truncate -s 0 /var/log/apache2/* 2>/dev/null || true
rm /var/log/apache2/*.gz 2>/dev/null || true
fi
if log_enabled "nginx"; then
truncate -s 0 /var/log/nginx/* 2>/dev/null || true
rm /var/log/nginx/*.gz 2>/dev/null || true
fi
if log_enabled "fail2ban"; then
truncate -s 0 /var/log/fail2ban.log 2>/dev/null || true
fi
if log_enabled "pckg_mngr"; then
truncate -s 0 /var/log/apt/* 2>/dev/null || true
fi
if log_enabled "auth"; then
truncate -s 0 /var/log/auth.log 2>/dev/null || true
fi
if log_enabled "dmesg"; then
truncate -s 0 /var/log/dmesg 2>/dev/null || true
fi
if log_enabled "dpkg"; then
truncate -s 0 /var/log/dpkg.log 2>/dev/null || true
fi
if log_enabled "letsencrypt"; then
truncate -s 0 /var/log/letsencrypt/* 2>/dev/null || true
fi
if log_enabled "php"; then
truncate -s 0 /var/log/php*.log 2>/dev/null || true
fi
if log_enabled "syslog"; then
truncate -s 0 /var/log/syslog 2>/dev/null || true
fi
fi
fi
;;
esac
fi
}
function push {
if [ "$rsync_push" = true ]; then
local archive="$backup_path/backify-$timestamp.tar.gz"
if [ "$DRY_RUN" = true ]; then
if [ "${#targets[@]}" -gt 0 ]; then
echo "[DRY-RUN] Would rsync $archive to multiple remote targets:" >&2
local t
for t in "${targets[@]}"; do
echo " - $t" >&2
done
else
echo "[DRY-RUN] Would rsync $archive to $target_user@$target_host:$target_dir" >&2
fi
return
fi
local rsync_ssh="ssh"
if [ -n "${target_key:-}" ]; then
rsync_ssh="ssh -i $target_key"
fi
if [ "${#targets[@]}" -gt 0 ]; then
local remote
for remote in "${targets[@]}"; do
echo "Pushing the backup package to $remote..." >&2
rsync -avz -e "$rsync_ssh" "$archive" "$remote"
done
else
echo "Pushing the backup package to $target_host..." >&2
rsync -avz -e "$rsync_ssh" "$archive" "$target_user@$target_host:$target_dir"
fi
if [ "$push_clean" = true ]; then
echo "Removing archive..." >&2
rm -f "$archive"
fi
fi
}
function dockerbackup {
if [ "$docker_enabled" = true ]; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would back up Docker images/volumes/data according to configuration." >&2
return
fi
if [ "$docker_images" = true ]; then
echo "Backing up Docker images..." >&2
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d/); do
echo -n "$container_name - " >&2
container_image=$(docker inspect --format='{{.Config.Image}}' "$container_name")
mkdir -p "$tmpdir/containers/$container_name"
save_dir="$tmpdir/containers/$container_name/$container_name-image.tar"
docker save -o "$save_dir" "$container_image"
echo "Finished" >&2
done
fi
if [ "$docker_volumes" = true ]; then
echo "Backing up Docker volumes..." >&2
#Thanks piscue :)
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d/); do
mkdir -p "$tmpdir/containers/$container_name"
echo -n "$container_name - " >&2
docker run --rm --userns=host \
--volumes-from "$container_name" \
-v "$tmpdir/containers/$container_name:/backup" \
-e TAR_OPTS="$tar_opts" \
piscue/docker-backup \
backup "$container_name-volume.tar.xz"
echo "Finished" >&2
done
fi
if [ "$docker_data" = true ]; then
echo "Backing up container information..." >&2
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d/); do
echo -n "$container_name - " >&2
container_data=$(docker inspect "$container_name")
mkdir -p "$tmpdir/containers/$container_name"
echo "$container_data" >"$tmpdir/containers/$container_name/$container_name-data.txt"
echo "Finished" >&2
done
fi
fi
}
function backup_db {
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would dump database(s) of type '$database_type' into $tmpdir/db" >&2
return
fi
mkdir -p "$tmpdir/db"
if [ "$db_all" = true ]; then
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P"$db_port" --all-databases >"$tmpdir/db/db_all.sql"
elif [ "$database_type" = "postgresql" ]; then
PGPASSWORD="$db_password" pg_dumpall -U "$db_username" -h "$db_host" -f "$tmpdir/db/db_all.sql"
fi
else
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P"$db_port" "$db_name" >"$tmpdir/db/$db_name.sql"
elif [ "$database_type" = "postgresql" ]; then
PGPASSWORD="$db_password" pg_dump -U "$db_username" -h "$db_host" "$db_name" -f "$tmpdir/db/$db_name.sql"
fi
fi
}
function custombackup {
if [ "$custom_backup" = true ]; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy custom directories into $tmpdir/custom:" >&2
local i
for i in "${custom_dirs[@]}"; do
echo " - $i" >&2
done
return
fi
mkdir -p "$tmpdir/custom"
local i
for i in "${custom_dirs[@]}"; do
cp -r "$i" "$tmpdir/custom/" 2>/dev/null || true
done
fi
}
function apply_retention {
if [ "${retention_days:-0}" -le 0 ] && [ "${retention_keep_min:-0}" -le 0 ]; then
return
fi
local dir="$backup_path"
if ! compgen -G "$dir/backify-*.tar.gz" >/dev/null; then
return
fi
echo "Applying retention policy in $dir..." >&2
local archives=()
local file
# Pathname expansion returns matches lexicographically sorted, so the
# oldest timestamp comes first. (ls with a quoted glob would match nothing.)
for file in "$dir"/backify-*.tar.gz; do
archives+=("$file")
done
local total=${#archives[@]}
if [ "$total" -eq 0 ]; then
return
fi
local keep_min=${retention_keep_min:-0}
if [ "$keep_min" -lt 0 ]; then keep_min=0; fi
local cutoff_date=""
if [ "${retention_days:-0}" -gt 0 ]; then
cutoff_date=$(date -d "-${retention_days} days" +%Y%m%d 2>/dev/null || true)
fi
local i=0
for file in "${archives[@]}"; do
i=$((i + 1))
if [ "$keep_min" -gt 0 ] && [ $((total - i)) -lt "$keep_min" ]; then
continue
fi
if [ -z "$cutoff_date" ] && [ "$keep_min" -gt 0 ]; then
echo "Removing old backup (by count): $file" >&2
rm -f "$file"
continue
elif [ -z "$cutoff_date" ]; then
continue
fi
local base
base=$(basename "$file")
local date_part=${base#backify-}
date_part=${date_part%%_*}
if [ "$date_part" -lt "$cutoff_date" ]; then
echo "Removing old backup (older than ${retention_days} days): $file" >&2
rm -f "$file"
fi
done
}
function runbackup {
init
detect_system
preflight
if [ "$enabled" = true ]; then
if [ "$DRY_RUN" = true ]; then
echo "Running Backify in DRY-RUN mode. No files will be copied, compressed, pushed or deleted." >&2
fi
makedir
if [ "$DRY_RUN" = false ] && [ -n "${pre_backup_hook:-}" ] && [ -x "$pre_backup_hook" ]; then
echo "Running pre-backup hook: $pre_backup_hook" >&2
"$pre_backup_hook" "$tmpdir"
fi
wwwbackup
vhostbackup
logbackup
dockerbackup
if [ "$db_backup" = true ]; then
backup_db
fi
custombackup
if [ "$DRY_RUN" = false ]; then
echo "Creating backup archive..." >&2
tar -czvf "$backup_path/backify-$timestamp.tar.gz" -C "$backup_path" "backify-$timestamp" >> /var/log/backify-compress.log 2>&1
push
apply_retention
if [ -n "${post_backup_hook:-}" ] && [ -x "$post_backup_hook" ]; then
echo "Running post-backup hook: $post_backup_hook" >&2
local post_backup_archive
post_backup_archive="$backup_path/backify-$timestamp.tar.gz"
"$post_backup_hook" "$post_backup_archive"
fi
else
echo "[DRY-RUN] Skipping archive creation, remote push, retention and post-backup hooks." >&2
fi
echo "Voila, enjoy the rest of the day" >&2
else
echo "Backup is disabled in the configuration" >&2
fi
}
parse_args "$@"
runbackup

backup.cfg

@@ -2,32 +2,55 @@
# --------------------------------------------------------
# Please double check Your settings
# --------------------------------------------------------
enabled=false # enable the script
backup_path='/opt/backify' # where to save backups; must NOT end with trailing slash
www_backup=false # backup wwwroot
www_dir='xyz' # location of wwwroot to backup
vhost_backup=false # backup vhost configurations
vhost_dir='/etc/httpd/sites-enabled' # location of active vhost files
log_backup=false # backup logs
log_to_backup=("apache" "nginx" "fail2ban" "pckg_mngr" "auth" "dmesg" "dpkg" "letsencrypt" "php" "syslog" "purge")
# logs to backup, options: apache, nginx, fail2ban, pckg_mngr, auth, dmesg, dpkg, letsencrypt, php, syslog, purge (truncate all)
rsync_push=false # enable push to remote server
push_clean=false # delete local archive after successful push
target_host="127.0.0.1" # rsync target host (single-target mode)
target_user="backup" # rsync target user (single-target mode)
target_key='/home/xyz/.ssh/rsync' # rsync SSH key
target_dir='/opt/backups/srvyxyz/' # rsync target host path
# Optional: multiple rsync targets. If set, these are used instead of target_host/target_user/target_dir.
# Each entry is a full rsync destination: user@host:/path
# Example:
# targets=("backup@host1:/backups/server1/" "backup@host2:/backups/server2/")
targets=()
docker_enabled=false # enable Docker backup
docker_images=false # backup Docker images
docker_volumes=false # backup Docker volumes
docker_data=false # backup container information
tar_opts='' # optional TAR_OPTS passed to docker volume backup container (e.g. "-J" for xz compression)
db_backup=false # backup databases
database_type=mysql # mysql or postgresql
db_host='localhost' # database host
db_port=3306 # port for DB access
db_username='user' # database user
db_password='user' # database password
db_all=false # dump all databases if true
db_name='user' # name of the database if db_all=false
custom_backup=false # backup custom files or directories
custom_dirs=("/opt/example" "/var/log/script.log") # array of custom files/directories to backup
# Optional: retention policy for local archives. 0 disables the check.
retention_days=0 # delete archives older than this many days (0 = disabled)
retention_keep_min=0 # always keep at least this many newest archives (0 = disabled)
# Optional: hooks (executed only in non-dry-run mode)
pre_backup_hook='' # executable run before backup; receives TMPDIR as $1
post_backup_hook='' # executable run after success; receives archive path as $1
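The retention options above are not enforced by the `main.sh` removed in this commit. A minimal sketch of how a retention pass could look, assuming GNU `find`/`ls` and the `backify-*.tar.gz` naming used by the script; the `apply_retention` name is illustrative, not part of Backify:

```shell
#!/bin/bash
# Hypothetical sketch: enforce retention_days / retention_keep_min on
# local archives. Not part of the script shown in this commit.
apply_retention() {
    local dir=$1 days=$2 keep_min=$3
    [ "$days" -gt 0 ] || return 0  # 0 disables the check
    # List archives newest-first, skip the keep_min newest, and delete
    # the remainder only if older than $days days.
    ls -1t "$dir"/backify-*.tar.gz 2>/dev/null | tail -n +$((keep_min + 1)) |
    while IFS= read -r f; do
        if [ -n "$(find "$f" -mtime +"$days" 2>/dev/null)" ]; then
            rm -f "$f"
        fi
    done
}
```

A caller would run `apply_retention "$backup_path" "$retention_days" "$retention_keep_min"` after a successful archive step.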

main.sh
#!/bin/bash
function init {
echo "Backify is starting, looking for configuration file..." >&2
config='backup.cfg'
secured_config='sbackup.cfg'
if [ ! -f "$config" ]; then
echo "Error: Config file not found: $config" >&2
echo "Please create a config file or specify the location of an existing file." >&2
exit 1
fi
if grep -E -q -v '^#|^[^ ]*=[^;&]*' "$config"; then
echo "Config file is unclean, cleaning it..." >&2
grep -E '^#|^[^ ]*=[^;&]*' "$config" >"$secured_config"
config="$secured_config"
fi
source "$config"
echo "Configuration file loaded" >&2
if [ "$EUID" -ne 0 ]; then
echo "Error: Backify must be run as root." >&2
exit 1
fi
}
function system {
if [ -f /etc/redhat-release ]; then
echo "Discovered Red Hat-based OS..."
system='rhel'
elif [ -f /etc/debian_version ]; then
echo "Discovered Debian-based OS..."
system='debian'
else
echo "Error: Unable to detect OS type." >&2
exit 1
fi
}
function makedir {
timestamp=$(date +%Y%m%d_%H%M)
mkdir -p "$backup_path/backify-$timestamp"
tmpdir="$backup_path/backify-$timestamp"
}
function wwwbackup {
if [ "$www_backup" = true ]; then
echo "Backing up wwwroot..." >&2
mkdir -p "$tmpdir/wwwdata"
cp -r "$www_dir/" "$tmpdir/wwwdata/"
echo "Finished" >&2
fi
}
function vhostbackup {
if [ "$vhost_backup" = true ]; then
echo "Backing up vhosts..." >&2
mkdir -p "$tmpdir/vhosts"
cp -avr "$vhost_dir/" "$tmpdir/vhosts/"
echo "Finished" >&2
fi
}
function logbackup {
if [ "$log_backup" = true ]; then
echo "Backing up system logs..." >&2
mkdir -p "$tmpdir/syslogs"
# Helper: is the given log type listed in the log_to_backup array?
wants() { [[ " ${log_to_backup[*]} " =~ " $1 " ]]; }
case $system in
"rhel")
if wants fail2ban; then
cp /var/log/fail2ban.log "$tmpdir/syslogs/"
fi
if wants apache; then
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/httpd "$tmpdir/apachelogs"
fi
if wants nginx; then
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs"
fi
if wants pckg_mngr; then
mkdir -p "$tmpdir/syslogs/yum"
cp -r /var/log/yum/* "$tmpdir/syslogs/yum/" 2>/dev/null
cp -r /var/log/dnf* "$tmpdir/syslogs/yum/" 2>/dev/null
fi
if wants letsencrypt; then
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/"
fi
if wants php; then
cp -r /var/log/php*.log "$tmpdir/syslogs/" 2>/dev/null
fi
if wants syslog; then
cp /var/log/secure "$tmpdir/syslogs/"
fi
if wants purge; then
echo "Purging logs..." >&2
truncate -s 0 /var/log/messages
if wants apache; then
truncate -s 0 /var/log/httpd/*
rm -f /var/log/httpd/*.gz
fi
if wants nginx; then
truncate -s 0 /var/log/nginx/*
rm -f /var/log/nginx/*.gz
fi
if wants fail2ban; then
truncate -s 0 /var/log/fail2ban.log
fi
if wants pckg_mngr; then
truncate -s 0 /var/log/yum/* /var/log/dnf*
fi
if wants letsencrypt; then
truncate -s 0 /var/log/letsencrypt/*
fi
if wants php; then
truncate -s 0 /var/log/php*.log
fi
if wants syslog; then
truncate -s 0 /var/log/secure
fi
fi
;;
"debian")
if wants fail2ban; then
cp /var/log/fail2ban.log "$tmpdir/syslogs/"
fi
if wants apache; then
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/apache2 "$tmpdir/apachelogs"
fi
if wants nginx; then
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs"
fi
if wants pckg_mngr; then
mkdir -p "$tmpdir/syslogs/apt"
cp -r /var/log/apt/* "$tmpdir/syslogs/apt/"
fi
if wants auth; then
cp /var/log/auth.log "$tmpdir/syslogs/"
fi
if wants dmesg; then
cp /var/log/dmesg "$tmpdir/syslogs/" 2>/dev/null
fi
if wants dpkg; then
cp /var/log/dpkg.log "$tmpdir/syslogs/"
fi
if wants letsencrypt; then
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/"
fi
if wants php; then
cp -r /var/log/php*.log "$tmpdir/syslogs/" 2>/dev/null
fi
if wants syslog; then
cp /var/log/syslog "$tmpdir/syslogs/"
fi
if wants purge; then
echo "Purging logs..." >&2
truncate -s 0 /var/log/syslog
if wants apache; then
truncate -s 0 /var/log/apache2/*
rm -f /var/log/apache2/*.gz
fi
if wants nginx; then
truncate -s 0 /var/log/nginx/*
rm -f /var/log/nginx/*.gz
fi
if wants fail2ban; then
truncate -s 0 /var/log/fail2ban.log
fi
if wants pckg_mngr; then
truncate -s 0 /var/log/apt/*
fi
if wants auth; then
truncate -s 0 /var/log/auth.log
fi
if wants dmesg; then
truncate -s 0 /var/log/dmesg
fi
if wants dpkg; then
truncate -s 0 /var/log/dpkg.log
fi
if wants letsencrypt; then
truncate -s 0 /var/log/letsencrypt/*
fi
if wants php; then
truncate -s 0 /var/log/php*.log
fi
fi
;;
esac
fi
}
function push {
if [ "$rsync_push" = true ]; then
echo "Pushing the backup package to $target_host..." >&2
if rsync -avz -e "ssh -i $target_key" "$backup_path/backify-$timestamp.tar.gz" "$target_user@$target_host:$target_dir"; then
if [ "$push_clean" = true ]; then
echo "Removing local archive..." >&2
rm "$backup_path/backify-$timestamp.tar.gz"
fi
else
echo "Error: rsync push failed; keeping local archive." >&2
fi
fi
}
function dockerbackup {
if [ "$docker_enabled" = true ]; then
if [ "$docker_images" = true ]; then
echo "Backing up Docker images..." >&2
for container_name in $(docker ps --format '{{.Names}}'); do
echo -n "$container_name - "
container_image=$(docker inspect --format='{{.Config.Image}}' "$container_name")
mkdir -p "$tmpdir/containers/$container_name"
docker save -o "$tmpdir/containers/$container_name/$container_name-image.tar" "$container_image"
echo "Finished" >&2
done
fi
if [ "$docker_volumes" = true ]; then
echo "Backing up Docker volumes..." >&2
#Thanks piscue :)
for container_name in $(docker ps --format '{{.Names}}'); do
mkdir -p "$tmpdir/containers/$container_name"
echo -n "$container_name - "
docker run --rm --userns=host \
--volumes-from "$container_name" \
-v "$tmpdir/containers/$container_name":/backup \
-e TAR_OPTS="$tar_opts" \
piscue/docker-backup \
backup "$container_name-volume.tar.xz"
echo "Finished" >&2
done
fi
if [ "$docker_data" = true ]; then
echo "Backing up container information..." >&2
for container_name in $(docker ps --format '{{.Names}}'); do
echo -n "$container_name - "
mkdir -p "$tmpdir/containers/$container_name"
docker inspect "$container_name" >"$tmpdir/containers/$container_name/$container_name-data.txt"
echo "Finished" >&2
done
fi
fi
}
function backup_db {
mkdir -p "$tmpdir/db"
if [ "$database_type" = "postgresql" ]; then
export PGPASSWORD="$db_password"
fi
if [ "$db_all" = true ]; then
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P "$db_port" --all-databases >"$tmpdir/db/db_all.sql"
elif [ "$database_type" = "postgresql" ]; then
pg_dumpall -U "$db_username" -h "$db_host" -p "$db_port" -f "$tmpdir/db/db_all.sql"
fi
else
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P "$db_port" "$db_name" >"$tmpdir/db/$db_name.sql"
elif [ "$database_type" = "postgresql" ]; then
pg_dump -U "$db_username" -h "$db_host" -p "$db_port" -f "$tmpdir/db/$db_name.sql" "$db_name"
fi
fi
}
function custombackup {
if [ "$custom_backup" = true ]; then
mkdir -p "$tmpdir/custom"
for i in "${custom_dirs[@]}"; do
cp -r "$i" "$tmpdir/custom/"
done
fi
}
function runbackup {
# init, config check
init
# run system detection
system
if [ "$enabled" = true ]; then
# step 1 : create directory
makedir
# step 2 : www backup
wwwbackup
# step 3 : vhost backup
vhostbackup
# step 4: log backup
logbackup
# step 5: docker backup
dockerbackup
# step 6: db backup
if [ "$db_backup" = true ]; then
backup_db
fi
# step 7 : custom backup
custombackup
# archive data
echo "Creating backup archive..." >&2
tar -czvf "$backup_path/backify-$timestamp.tar.gz" -C "$backup_path" "backify-$timestamp" >>/var/log/backify-compress.log
# push data to server
push
# remove temp files
rm -r "$tmpdir"
echo "Voila, enjoy the rest of the day" >&2
else
echo "Backup is disabled in the configuration" >&2
fi
}
runbackup
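The config's optional `targets` array (multiple rsync destinations) is likewise not handled by the `push` function above. A hedged sketch of how a multi-target loop could look, assuming each entry is a full `user@host:/path` destination; `push_all` and the `echo` stand-in for the real rsync call are illustrative only:

```shell
#!/bin/bash
# Illustrative sketch: push one archive to every entry in the targets
# array, stopping on the first failure. Not part of the script shown.
target_key='/home/xyz/.ssh/rsync'
push_all() {
    local archive=$1
    shift
    local dest
    for dest in "$@"; do
        echo "Pushing $archive to $dest..." >&2
        # The real call would be:
        # rsync -avz -e "ssh -i $target_key" "$archive" "$dest" || return 1
        echo "pushed:$dest"  # stand-in so the sketch runs without rsync/ssh
    done
}

targets=("backup@host1:/backups/server1/" "backup@host2:/backups/server2/")
push_all "/opt/backify/backify-20250101_0101.tar.gz" "${targets[@]}"
```

When `targets` is empty, a caller could fall back to the single `target_user@target_host:$target_dir` destination, matching the behavior documented in the config.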