Compare commits


25 Commits

Author SHA1 Message Date
88f16a8efa License 2025-11-19 17:49:06 +01:00
0bde669723 V1.1 Refactor 2025-11-19 17:47:29 +01:00
David Petric 1bff525466 Minimal adjustments 2023-03-21 18:11:38 +01:00
David Petric 35255156bd readme and slight adjustments 2023-02-27 09:54:23 +01:00
f53b1d381d DB backup implement host and port parameter / backup path fix 2023-02-22 20:05:34 +01:00
472e490ea1 Implement custom directory backup 2023-02-22 19:42:00 +01:00
fa3ca3cf63 Docker volume backup path bug fix 2023-02-22 18:20:25 +01:00
ed5f806720 Markdown troubles 2023-02-22 18:11:01 +01:00
6dcc7eb8e0 Documentation, alternatives.log kickout 2023-02-22 18:09:48 +01:00
65b5b20b18 Obvious fail of array matching fix 2023-02-22 18:01:41 +01:00
685b149045 Remove duplicate echo 2023-02-22 17:47:44 +01:00
e6adba12b2 Add forgoten purging option to array 2023-02-22 17:42:07 +01:00
aef0f7dc32 Upgrades on log backup logic 2023-02-22 17:35:59 +01:00
225a9d4db4 Widen the support for system logs on Debian systems 2023-02-22 17:30:12 +01:00
David Petric bc0480689d Array for logs implementation 2023-02-13 15:06:07 +01:00
629e3c1631 Formatting 2023-02-12 14:46:59 +01:00
89b861c7f7 damn markdown 2023-02-12 14:43:52 +01:00
2f66ba386f Docs 2023-02-12 14:42:52 +01:00
5203f65147 Doc spacing 2023-02-12 14:30:54 +01:00
a428d3bc9e Docs 2023-02-12 14:28:20 +01:00
8e2e28602f Rsync implementation, formatting 2023-02-12 13:26:12 +01:00
6c40c9898d Syntax fixes 2023-02-11 19:16:15 +01:00
0478ce746a Log backup refactoring 2023-02-11 16:21:46 +01:00
5183b9bc7d Dump all databases, refactoring, docs 2023-02-11 11:58:21 +01:00
59a370ba74 readme update 2023-02-11 11:50:38 +01:00
5 changed files with 1046 additions and 348 deletions

LICENSE (new file, 18 lines)

@@ -0,0 +1,18 @@
MIT License
Copyright (c) 2025 almostm4
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the "Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the
following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
USE OR OTHER DEALINGS IN THE SOFTWARE.

README.MD (289 changed lines)

@@ -1,49 +1,258 @@
Backify 🗃️
===========
Backify README
==============
A powerful and automated bash script for backing up all kinds of Linux data, archiving it and pushing it to a remote host.
What is Backify? 👾
-------------------
What is Backify?
----------------
Backify is a shell script that helps you automate the backup process of all kinds of data from Linux systems. It differs from other backup scripts because it gives you the flexibility to choose what you want to save, ranging from system logs to containers. The script was tailored to meet personal needs as there was no complete solution for the specific use case.
Configuration 🧙‍♂️
-------------------
Prerequisites
-------------
All configuration options can be found in the `backup.cfg` file. The script has an integrity check in place to ensure that no external commands can be embedded into it by malware. The following table provides an overview of the available configuration options:
* The script must be executed as root.
* A configuration file (by default "backup.cfg") must exist and be readable.
* The system must be a Red Hat-based (RHEL, CentOS, Rocky, Alma…) or Debian/Ubuntu-based distribution.
* Required tools:
* tar
* rsync and ssh if you push backups to a remote host
* docker if you use Docker backup features
* mysqldump (for MySQL/MariaDB) and/or `pg_dump`/`pg_dumpall` (for PostgreSQL) if you back up databases
| Name | Value | Specifics |
| --- | --- | --- |
| Enabled | true/false | Disable the main function |
| www_backup | true/false | Backup of the webroot directory |
| www_dir | ------> | Path to the webroot |
| vhost_backup | true/false | Backup of the vhost configuration |
| vhost_dir | ------> | Path to the vhost files |
| log_backup | true/false | Backup log files |
| log_backup_web | true/false | Backup web app logs |
| apache | true/false | Enable Apache logs |
| nginx | true/false | Enable nginx logs |
| fail2ban_log | true/false | Enable fail2ban logs |
| log_purge | true/false | Truncate logs after backup |
| rsync_push | true/false | Push the backup file to a remote server |
| push_clean | true/false | Delete the backup file after push |
| target_host | ------> | Backup push target host |
| target_user | ------> | Backup push target username |
| target_key | ------> | Backup target ssh key |
| docker_enable | true/false | Enable Docker backups |
| docker_images | true/false | Backup Docker images |
| docker_volumes | true/false | Backup Docker volumes |
| docker_data | true/false | Backup container information |
Configuration
-------------
To-Do List
All configuration options can be found in the "backup.cfg" file.
By default Backify looks for "backup.cfg" in the same directory as the script, but you can override this with the `-c`/`--config` command-line option.
The script has an integrity check in place to ensure that no external commands can be embedded into it by malware (the config is "cleaned" before sourcing). The following sections provide an overview of the available configuration options.
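The cleaning step is essentially a grep whitelist. As a rough sketch of the same logic (the file paths here are illustrative, not the ones Backify uses):

```shell
# Keep only comment lines and simple key=value assignments;
# anything else (e.g. a stray command) is dropped before sourcing.
cat > /tmp/demo.cfg <<'EOF'
# a comment
enabled=true
echo pwned
EOF
grep -E '^#|^[^ ]*=[^;&]*' /tmp/demo.cfg > /tmp/demo-clean.cfg
cat /tmp/demo-clean.cfg
```

Note that this is a coarse filter: chained commands (`;`, `&`) and plain command lines are dropped, but a value containing `$(...)` would still pass, so the config file itself should stay root-owned.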
Main options
------------
**Name** | **Value** | **Specifics**
--------------------|--------------------|-------------
`enabled` | true/false | Disable or enable the main function
`backup_path` | path | Where to save the backup, **must NOT end with a slash**
`www_backup` | true/false | Backup of the webroot directory
`www_dir` | path | Path to the webroot
`vhost_backup` | true/false | Backup of the vhost configuration
`vhost_dir` | path | Path to the vhost files
`log_backup` | true/false | Backup log files
`log_to_backup` | array | Array of logs to back up (see list below)
`rsync_push` | true/false | Push the backup archive to a remote server
`push_clean` | true/false | Delete the local backup archive after a successful push
`target_host` | host | Backup push target host (single-target mode)
`target_user` | user | Backup push target username (single-target mode)
`target_key` | path | SSH key for the remote backup user
`target_dir` | path | Remote directory to push backups to
`targets` | array | **Optional**: list of full rsync destinations (`user@host:/path`). If non-empty, overrides `target_host` / `target_user` / `target_dir`.
`docker_enabled` | true/false | Enable Docker backups
`docker_images` | true/false | Backup Docker images
`docker_volumes` | true/false | Backup Docker volumes (via helper container)
`docker_data` | true/false | Backup container metadata (inspect output)
`tar_opts` | string | Optional `TAR_OPTS` passed to the Docker volume backup helper (e.g. `-J` for xz)
`db_backup` | true/false | Enable database backup
`database_type` | mysql/postgresql | Database type
`db_host` | host | Database host
`db_port` | int | Port for DB access
`db_username` | string | Username for DB access
`db_password` | string | Password for DB access
`db_name` | string | Name of database to dump when `db_all=false`
`db_all` | true/false | Dump all databases instead of a specific one
`custom_backup` | true/false | Enable backup of custom files
`custom_dirs` | array | Array of files/directories to back up
`retention_days` | int | **Optional**: delete local archives older than this many days (0 = disabled)
`retention_keep_min`| int | **Optional**: always keep at least this many newest archives (0 = disabled)
`pre_backup_hook` | path | **Optional**: executable script run **before** the backup (receives `TMPDIR` as `$1`)
`post_backup_hook` | path | **Optional**: executable script run **after success** (receives archive path as `$1`)
Logs to backup array
--------------------
**Option** | **Specifics**
---------------|-------------
`apache` | Apache access and error logs
`nginx` | Nginx access and error logs
`fail2ban` | Fail2ban log
`pckg_mngr` | Package manager logs (`yum`/`dnf` on RHEL, `apt` on Debian/Ubuntu)
`auth` | Authentication logs
`dmesg` | Kernel ring buffer log
`dpkg` | Package changes log (Debian/Ubuntu)
`letsencrypt` | Let's Encrypt logs
`php` | Logs from all installed PHP versions
`syslog` | General system event data
`purge` | Truncate/empty selected logs after backing them up
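For instance, a minimal `backup.cfg` fragment combining the two tables above, which archives nginx and auth logs and truncates them afterwards (values are illustrative):

```shell
enabled=true
backup_path='/opt/backify'               # no trailing slash
log_backup=true
log_to_backup=("nginx" "auth" "purge")   # see the array options above
rsync_push=false
```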
Command-line options
--------------------
Backify supports the following CLI options:
- `-c`, `--config` *PATH*
Path to configuration file (default: `./backup.cfg`).
- `-n`, `--dry-run`
Show what would be done, but do not copy/compress/push/delete anything.
- `-h`, `--help`
Show short usage help and exit.
- `-v`, `--version`
Show Backify version and exit.
Examples
--------
Use the default `backup.cfg` (in the same directory as the script):
```bash
./backify.sh
```
Use a custom config file:
```bash
./backify.sh --config /etc/backify/web01.cfg
```
Safe test run: see what would happen, but do not touch any data:
```bash
./backify.sh --config /etc/backify/web01.cfg --dry-run
```
Script Execution
----------------
To execute the script with the default configuration file in the same directory, run `./backify.sh`.
The script will:
* Parse CLI options (config path, dry-run, etc.).
* Initialize by checking for the existence of the configuration file, loading its parameters, and verifying that it is being executed as root.
* Detect whether the system is Red Hat-based or Debian/Ubuntu-based.
* Create a new timestamped directory inside `backup_path`, where the backup data will be stored.
* Run the configured backup steps:
* Webroot
* Vhosts
* Logs
* Docker images/volumes/data
* Databases
* Custom files/directories
* Create a compressed tar archive (`backify-YYYYMMDD_HHMM.tar.gz`) from the backup directory.
* Optionally push the archive to one or more remote rsync targets.
* Optionally apply a retention policy to local archives.
* Optionally run pre/post backup hooks.
If you use `--dry-run`, steps that modify data (copying files, truncating logs, creating archives, pushing, deleting, hooks) are simulated and only logged, not executed.
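As an example, a small post-backup hook that records a checksum next to the finished archive could look like this (the file name `post-hook.sh` is hypothetical; the only contract from Backify is that the archive path arrives as `$1`):

```shell
#!/bin/bash
# post-hook.sh (hypothetical): called by Backify after a successful run.
# Backify passes the finished archive path as $1 (see post_backup_hook).
write_checksum() {
    # Record a sha256 next to the archive so pushed copies can be verified.
    sha256sum "$1" > "$1.sha256"
}
if [ -n "${1-}" ]; then
    write_checksum "$1"
fi
```

Make the hook executable (`chmod +x post-hook.sh`) and point `post_backup_hook` at it; Backify only runs hooks that exist and are executable, and skips them entirely in dry-run mode.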
Automation
----------
- [ ] Rsync implementation via shell
- [ ] Rsync implementation via Docker
- [ ] Cron scheduler
- [ ] RHEL/Ubuntu parser
- [ ] Automatic adjustments per system
- [ ] MySQL backups
- [ ] PostgreSQL backups
- [ ] Cover more system logs
Cron
----
You can use cron to run Backify every day at 12:00.
1. Open the crontab editor:
```bash
crontab -e
```
2. Add a line like this (adjust the path as needed):
```bash
0 12 * * * /path/to/backify.sh --config /etc/backify/web01.cfg
```
3. Save and exit.
systemd (optional)
------------------
If you prefer systemd, you can wrap Backify in a simple `backify.service` and `backify.timer` unit pair. (Units are not shipped in this repo yet, but Backify is fully compatible with a systemd timer.)
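As a sketch (these units are hypothetical and not shipped with Backify; adjust paths to your setup), the pair could look like:

```ini
# /etc/systemd/system/backify.service
[Unit]
Description=Backify backup run

[Service]
Type=oneshot
ExecStart=/path/to/backify.sh --config /etc/backify/web01.cfg

# /etc/systemd/system/backify.timer
[Unit]
Description=Daily Backify run

[Timer]
OnCalendar=*-*-* 12:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable the timer with `systemctl enable --now backify.timer`.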
Restore (high-level overview)
-----------------------------
Backify creates standard `tar.gz` archives, so restoration is straightforward but manual by design:
1. Copy the desired archive back to the server (or access it on the backup storage).
2. Extract it:
```bash
mkdir -p /tmp/restore
tar -xzf backify-YYYYMMDD_HHMM.tar.gz -C /tmp/restore
```
The content layout roughly mirrors:
* `wwwdata/` - your webroot
* `vhosts/` - webserver vhost configs
* `syslogs/`, `apachelogs/`, `nginxlogs/` - logs
* `containers/` - Docker images, volumes, and metadata (if enabled)
* `db/` - database dumps (`.sql`)
* `custom/` - custom files/directories you configured
3. Restore what you need:
* Webroot / vhosts: copy files back into place and reload/restart services.
* Databases:
* MySQL/MariaDB:
```bash
mysql -u USER -p DB_NAME < db/yourdb.sql
```
* PostgreSQL:
```bash
psql -U USER -h HOST DB_NAME < db/yourdb.sql
```
Make sure you understand what you are overwriting; ideally test restores on a non-production server first.
MySQL / PostgreSQL user
-----------------------
If you want to dump all databases, a dedicated read-only user is recommended.
For MySQL/MariaDB, you can create one with:
```sql
GRANT LOCK TABLES, SELECT ON DATABASE_NAME.* TO 'BACKUP_USER'@'%' IDENTIFIED BY 'PASSWORD';
```
(On MySQL 8+, `IDENTIFIED BY` is no longer accepted inside `GRANT`; create the user with `CREATE USER` first, then grant the privileges.)
For PostgreSQL, you can use a user with sufficient CONNECT and SELECT permissions on the databases you want to dump.
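As a sketch for PostgreSQL (names are placeholders; on PostgreSQL 14+ the built-in `pg_read_all_data` role covers the SELECT part):

```sql
CREATE ROLE backup_user LOGIN PASSWORD 'PASSWORD';
GRANT CONNECT ON DATABASE database_name TO backup_user;
GRANT USAGE ON SCHEMA public TO backup_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup_user;
```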
Buy me a beer
-------------
One pale ale won't hurt, will it?
0x4046979a1E1152ddbfa4a910b1a98F73625a77ae ETH / BNB / Polygon chains

backify.sh (new file, 727 lines)

@@ -0,0 +1,727 @@
#!/bin/bash
set -Eeo pipefail
umask 077
VERSION="1.1.0"
CONFIG="backup.cfg" # default config path; can be overridden with -c/--config
DRY_RUN=false
tmpdir=""
cleanup() {
if [ -n "${tmpdir-}" ] && [ -d "$tmpdir" ]; then
rm -rf "$tmpdir"
fi
}
trap cleanup EXIT
function usage {
cat >&2 <<EOF
Usage: $0 [options]
Options:
-c, --config PATH Path to configuration file (default: backup.cfg)
-n, --dry-run Show what would be done, but do not copy/compress/push/delete
-h, --help Show this help and exit
-v, --version Show Backify version and exit
EOF
}
function show_version {
echo "Backify version $VERSION"
}
function parse_args {
while [ $# -gt 0 ]; do
case "$1" in
-c|--config)
if [ -n "${2-}" ]; then
CONFIG="$2"
shift 2
else
echo "Error: -c|--config requires a path argument." >&2
usage
exit 1
fi
;;
-n|--dry-run)
DRY_RUN=true
shift
;;
-h|--help)
usage
exit 0
;;
-v|--version)
show_version
exit 0
;;
--)
shift
break
;;
-*)
echo "Unknown option: $1" >&2
usage
exit 1
;;
*)
shift
;;
esac
done
}
function log_enabled {
local needle="$1"
local item
for item in "${log_to_backup[@]:-}"; do
if [ "$item" = "$needle" ]; then
return 0
fi
done
return 1
}
function require_cmd {
local cmd="$1"
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "Error: required command '$cmd' not found in PATH." >&2
exit 1
fi
}
function preflight {
require_cmd tar
if [ "${rsync_push:-false}" = true ]; then
require_cmd rsync
require_cmd ssh
fi
if [ "${docker_enabled:-false}" = true ]; then
require_cmd docker
fi
if [ "${db_backup:-false}" = true ]; then
case "${database_type:-}" in
mysql)
require_cmd mysqldump
;;
postgresql)
require_cmd pg_dump
require_cmd pg_dumpall
;;
*)
echo "Error: database_type must be 'mysql' or 'postgresql' when db_backup is true." >&2
exit 1
;;
esac
fi
}
function init {
echo "Backify is starting, looking for configuration file..." >&2
config="$CONFIG"
secured_config='sbackup.cfg'
if [ ! -f "$config" ]; then
echo "Error: Config file not found: $config" >&2
echo "Please create a config file or specify the location of an existing file (use -c/--config)." >&2
exit 1
fi
if grep -E -q -v '^#|^[^ ]*=[^;]*' "$config"; then
echo "Config file is unclean, cleaning it..." >&2
grep -E '^#|^[^ ]*=[^;&]*' "$config" >"$secured_config"
config="$secured_config"
fi
source "$config"
echo "Configuration file loaded" >&2
if [ "$EUID" -ne 0 ]; then
echo "Please run as root" >&2
exit 1
fi
: "${backup_path:?backup_path must be set in the config}"
mkdir -p "$backup_path"
if [ ! -w "$backup_path" ]; then
echo "Error: backup_path '$backup_path' is not writable." >&2
exit 1
fi
: "${retention_days:=0}"
: "${retention_keep_min:=0}"
: "${pre_backup_hook:=}"
: "${post_backup_hook:=}"
if ! declare -p log_to_backup >/dev/null 2>&1; then
log_to_backup=()
fi
if ! declare -p custom_dirs >/dev/null 2>&1; then
custom_dirs=()
fi
if ! declare -p targets >/dev/null 2>&1; then
targets=()
fi
}
function detect_system {
echo "Detecting OS type..." >&2
if [ -r /etc/os-release ]; then
. /etc/os-release
case "$ID" in
rhel|centos|rocky|almalinux)
echo "Discovered Red Hat-based OS..." >&2
SYSTEM='rhel'
;;
debian|ubuntu)
echo "Discovered Debian-based OS..." >&2
SYSTEM='debian'
;;
*)
echo "Error: Unsupported OS: $ID" >&2
exit 1
;;
esac
elif [ -f /etc/redhat-release ]; then
echo "Discovered Red Hat-based OS via legacy detection..." >&2
SYSTEM='rhel'
elif [ -f /etc/lsb-release ]; then
echo "Discovered Debian-based OS via legacy detection..." >&2
SYSTEM='debian'
else
echo "Error: Unable to detect OS type." >&2
exit 1
fi
}
function makedir {
timestamp=$(date +%Y%m%d_%H%M)
tmpdir="$backup_path/backify-$timestamp"
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would create temporary directory $tmpdir" >&2
else
mkdir -p "$tmpdir"
fi
}
function wwwbackup {
if [ "$www_backup" = true ]; then
echo "Backing up wwwroot..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy $www_dir to $tmpdir/wwwdata" >&2
return
fi
mkdir -p "$tmpdir/wwwdata"
cp -r "$www_dir/" "$tmpdir/wwwdata/"
echo "Finished" >&2
fi
}
function vhostbackup {
if [ "$vhost_backup" = true ]; then
echo "Backing up vhosts..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy $vhost_dir to $tmpdir/vhosts" >&2
return
fi
mkdir -p "$tmpdir/vhosts"
cp -av "$vhost_dir/" "$tmpdir/vhosts/"
echo "Finished" >&2
fi
}
function logbackup {
if [ "$log_backup" = true ]; then
echo "Backing up system logs..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would collect selected logs into $tmpdir/syslogs" >&2
else
mkdir -p "$tmpdir/syslogs"
fi
case "$SYSTEM" in
"rhel")
if log_enabled "fail2ban"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/fail2ban.log" >&2
else
cp /var/log/fail2ban.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "apache"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/httpd to $tmpdir/apachelogs" >&2
else
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/httpd "$tmpdir/apachelogs" 2>/dev/null || true
fi
fi
if log_enabled "nginx"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/nginx to $tmpdir/nginxlogs" >&2
else
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs" 2>/dev/null || true
fi
fi
if log_enabled "pckg_mngr"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy yum/dnf logs to $tmpdir/syslogs/yum" >&2
else
mkdir -p "$tmpdir/syslogs/yum"
cp -r /var/log/yum/* "$tmpdir/syslogs/yum/" 2>/dev/null || true
cp -r /var/log/dnf* "$tmpdir/syslogs/yum/" 2>/dev/null || true
fi
fi
if log_enabled "letsencrypt"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/letsencrypt to $tmpdir/syslogs/letsencrypt" >&2
else
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/" 2>/dev/null || true
fi
fi
if log_enabled "php"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/php*.log to $tmpdir/syslogs" >&2
else
cp -r /var/log/php*.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "syslog"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/secure to $tmpdir/syslogs" >&2
else
cp -r /var/log/secure "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "purge"; then
echo "Purging logs..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would truncate and clear configured logs on RHEL system" >&2
else
truncate -s 0 /var/log/messages 2>/dev/null || true
truncate -s 0 /var/log/syslog 2>/dev/null || true
if log_enabled "apache"; then
truncate -s 0 /var/log/httpd/* 2>/dev/null || true
rm /var/log/httpd/*.gz 2>/dev/null || true
fi
if log_enabled "nginx"; then
truncate -s 0 /var/log/nginx/* 2>/dev/null || true
rm /var/log/nginx/*.gz 2>/dev/null || true
fi
if log_enabled "fail2ban"; then
truncate -s 0 /var/log/fail2ban.log 2>/dev/null || true
fi
if log_enabled "pckg_mngr"; then
truncate -s 0 /var/log/yum/* 2>/dev/null || true
truncate -s 0 /var/log/dnf* 2>/dev/null || true
fi
if log_enabled "letsencrypt"; then
truncate -s 0 /var/log/letsencrypt/* 2>/dev/null || true
fi
if log_enabled "php"; then
truncate -s 0 /var/log/php*.log 2>/dev/null || true
fi
if log_enabled "syslog"; then
truncate -s 0 /var/log/secure 2>/dev/null || true
fi
fi
fi
;;
"debian")
if log_enabled "fail2ban"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/fail2ban.log" >&2
else
cp /var/log/fail2ban.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "apache"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/apache2 to $tmpdir/apachelogs" >&2
else
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/apache2 "$tmpdir/apachelogs" 2>/dev/null || true
fi
fi
if log_enabled "nginx"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/nginx to $tmpdir/nginxlogs" >&2
else
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs" 2>/dev/null || true
fi
fi
if log_enabled "pckg_mngr"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy apt logs to $tmpdir/syslogs/apt" >&2
else
mkdir -p "$tmpdir/syslogs/apt"
cp -r /var/log/apt/* "$tmpdir/syslogs/apt/" 2>/dev/null || true
fi
fi
if log_enabled "auth"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/auth.log" >&2
else
cp -r /var/log/auth.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "dmesg"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/dmesg" >&2
else
cp -r /var/log/dmesg "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "dpkg"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/dpkg.log" >&2
else
cp -r /var/log/dpkg.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "letsencrypt"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/letsencrypt to $tmpdir/syslogs/letsencrypt" >&2
else
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/" 2>/dev/null || true
fi
fi
if log_enabled "php"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/php*.log" >&2
else
cp -r /var/log/php*.log "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "syslog"; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy /var/log/syslog" >&2
else
cp -r /var/log/syslog "$tmpdir/syslogs/" 2>/dev/null || true
fi
fi
if log_enabled "purge"; then
echo "Purging logs..." >&2
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would truncate and clear configured logs on Debian system" >&2
else
truncate -s 0 /var/log/syslog 2>/dev/null || true
truncate -s 0 /var/log/messages 2>/dev/null || true
if log_enabled "apache"; then
truncate -s 0 /var/log/apache2/* 2>/dev/null || true
rm /var/log/apache2/*.gz 2>/dev/null || true
fi
if log_enabled "nginx"; then
truncate -s 0 /var/log/nginx/* 2>/dev/null || true
rm /var/log/nginx/*.gz 2>/dev/null || true
fi
if log_enabled "fail2ban"; then
truncate -s 0 /var/log/fail2ban.log 2>/dev/null || true
fi
if log_enabled "pckg_mngr"; then
truncate -s 0 /var/log/apt/* 2>/dev/null || true
fi
if log_enabled "auth"; then
truncate -s 0 /var/log/auth.log 2>/dev/null || true
fi
if log_enabled "dmesg"; then
truncate -s 0 /var/log/dmesg 2>/dev/null || true
fi
if log_enabled "dpkg"; then
truncate -s 0 /var/log/dpkg.log 2>/dev/null || true
fi
if log_enabled "letsencrypt"; then
truncate -s 0 /var/log/letsencrypt/* 2>/dev/null || true
fi
if log_enabled "php"; then
truncate -s 0 /var/log/php*.log 2>/dev/null || true
fi
if log_enabled "syslog"; then
truncate -s 0 /var/log/syslog 2>/dev/null || true
fi
fi
fi
;;
esac
fi
}
function push {
if [ "$rsync_push" = true ]; then
local archive="$backup_path/backify-$timestamp.tar.gz"
if [ "$DRY_RUN" = true ]; then
if [ "${#targets[@]}" -gt 0 ]; then
echo "[DRY-RUN] Would rsync $archive to multiple remote targets:" >&2
local t
for t in "${targets[@]}"; do
echo " - $t" >&2
done
else
echo "[DRY-RUN] Would rsync $archive to $target_user@$target_host:$target_dir" >&2
fi
return
fi
local rsync_ssh="ssh"
if [ -n "${target_key:-}" ]; then
rsync_ssh="ssh -i $target_key"
fi
if [ "${#targets[@]}" -gt 0 ]; then
local remote
for remote in "${targets[@]}"; do
echo "Pushing the backup package to $remote..." >&2
rsync -avz -e "$rsync_ssh" "$archive" "$remote"
done
else
echo "Pushing the backup package to $target_host..." >&2
rsync -avz -e "$rsync_ssh" "$archive" "$target_user@$target_host:$target_dir"
fi
if [ "$push_clean" = true ]; then
echo "Removing archive..." >&2
rm -f "$archive"
fi
fi
}
function dockerbackup {
if [ "$docker_enabled" = true ]; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would back up Docker images/volumes/data according to configuration." >&2
return
fi
if [ "$docker_images" = true ]; then
echo "Backing up Docker images..." >&2
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d/); do
echo -n "$container_name - " >&2
container_image=$(docker inspect --format='{{.Config.Image}}' "$container_name")
mkdir -p "$tmpdir/containers/$container_name"
save_dir="$tmpdir/containers/$container_name/$container_name-image.tar"
docker save -o "$save_dir" "$container_image"
echo "Finished" >&2
done
fi
if [ "$docker_volumes" = true ]; then
echo "Backing up Docker volumes..." >&2
#Thanks piscue :)
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d/); do
mkdir -p "$tmpdir/containers/$container_name"
echo -n "$container_name - " >&2
docker run --rm --userns=host \
--volumes-from "$container_name" \
-v "$tmpdir/containers/$container_name:/backup" \
-e TAR_OPTS="$tar_opts" \
piscue/docker-backup \
backup "$container_name-volume.tar.xz"
echo "Finished" >&2
done
fi
if [ "$docker_data" = true ]; then
echo "Backing up container information..." >&2
for container_name in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d/); do
echo -n "$container_name - " >&2
container_data=$(docker inspect "$container_name")
mkdir -p "$tmpdir/containers/$container_name"
echo "$container_data" >"$tmpdir/containers/$container_name/$container_name-data.txt"
echo "Finished" >&2
done
fi
fi
}
function backup_db {
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would dump database(s) of type '$database_type' into $tmpdir/db" >&2
return
fi
mkdir -p "$tmpdir/db"
if [ "$db_all" = true ]; then
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P"$db_port" --all-databases >"$tmpdir/db/db_all.sql"
elif [ "$database_type" = "postgresql" ]; then
PGPASSWORD="$db_password" pg_dumpall -U "$db_username" -h "$db_host" -f "$tmpdir/db/db_all.sql"
fi
else
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P"$db_port" "$db_name" >"$tmpdir/db/$db_name.sql"
elif [ "$database_type" = "postgresql" ]; then
PGPASSWORD="$db_password" pg_dump -U "$db_username" -h "$db_host" "$db_name" -f "$tmpdir/db/$db_name.sql"
fi
fi
}
function custombackup {
if [ "$custom_backup" = true ]; then
if [ "$DRY_RUN" = true ]; then
echo "[DRY-RUN] Would copy custom directories into $tmpdir/custom:" >&2
local i
for i in "${custom_dirs[@]}"; do
echo " - $i" >&2
done
return
fi
mkdir -p "$tmpdir/custom"
local i
for i in "${custom_dirs[@]}"; do
cp -r "$i" "$tmpdir/custom/" 2>/dev/null || true
done
fi
}
function apply_retention {
if [ "${retention_days:-0}" -le 0 ] && [ "${retention_keep_min:-0}" -le 0 ]; then
return
fi
local dir="$backup_path"
local pattern="$dir/backify-"*.tar.gz
if ! compgen -G "$pattern" >/dev/null; then
return
fi
echo "Applying retention policy in $dir..." >&2
local archives=()
local file
while IFS= read -r file; do
archives+=("$file")
done < <(compgen -G "$pattern" | sort)
local total=${#archives[@]}
if [ "$total" -eq 0 ]; then
return
fi
local keep_min=${retention_keep_min:-0}
if [ "$keep_min" -lt 0 ]; then keep_min=0; fi
local cutoff_date=""
if [ "${retention_days:-0}" -gt 0 ]; then
cutoff_date=$(date -d "-${retention_days} days" +%Y%m%d 2>/dev/null || true)
fi
local i=0
for file in "${archives[@]}"; do
i=$((i + 1))
if [ "$keep_min" -gt 0 ] && [ $((total - i)) -lt "$keep_min" ]; then
continue
fi
if [ -z "$cutoff_date" ] && [ "$keep_min" -gt 0 ]; then
echo "Removing old backup (by count): $file" >&2
rm -f "$file"
continue
elif [ -z "$cutoff_date" ]; then
continue
fi
local base
base=$(basename "$file")
local date_part=${base#backify-}
date_part=${date_part%%_*}
if [ "$date_part" -lt "$cutoff_date" ]; then
echo "Removing old backup (older than ${retention_days} days): $file" >&2
rm -f "$file"
fi
done
}
function runbackup {
init
detect_system
preflight
if [ "$enabled" = true ]; then
if [ "$DRY_RUN" = true ]; then
echo "Running Backify in DRY-RUN mode. No files will be copied, compressed, pushed or deleted." >&2
fi
makedir
if [ "$DRY_RUN" = false ] && [ -n "${pre_backup_hook:-}" ] && [ -x "$pre_backup_hook" ]; then
echo "Running pre-backup hook: $pre_backup_hook" >&2
"$pre_backup_hook" "$tmpdir"
fi
wwwbackup
vhostbackup
logbackup
dockerbackup
if [ "$db_backup" = true ]; then
backup_db
fi
custombackup
if [ "$DRY_RUN" = false ]; then
echo "Creating backup archive..." >&2
tar -czvf "$backup_path/backify-$timestamp.tar.gz" -C "$backup_path" "backify-$timestamp" >> /var/log/backify-compress.log 2>&1
push
apply_retention
if [ -n "${post_backup_hook:-}" ] && [ -x "$post_backup_hook" ]; then
echo "Running post-backup hook: $post_backup_hook" >&2
local post_backup_archive
post_backup_archive="$backup_path/backify-$timestamp.tar.gz"
"$post_backup_hook" "$post_backup_archive"
fi
else
echo "[DRY-RUN] Skipping archive creation, remote push, retention and post-backup hooks." >&2
fi
echo "Voila, enjoy the rest of the day" >&2
else
echo "Backup is disabled in the configuration" >&2
fi
}
parse_args "$@"
runbackup

backup.cfg

@@ -2,28 +2,55 @@
# --------------------------------------------------------
# Please double check Your settings
# --------------------------------------------------------
enabled=false #enable main function
www_backup=false # backup wwwroot
www_dir='xyz' # wwwroot location
vhost_backup=false # backup vhost config
vhost_dir='/etc/httpd/sites-enabled' # vhost location
log_backup=false # backup logs
log_backup_web=false # backup webapp logs
apache=false # apache log backup
nginx=false # nginx log backup
fail2ban_log=false # fail2ban log backup
log_purge=false # purge logs after backup
rsync_push=false # enable push to remote server
push_clean=false # clean backup file after push
target_host="127.0.0.1" # rsync target host
target_user="backup" # rsync target user
target_key='/home/xyz/.ssh/rsync' # rsync key
docker_enabled=false # will you use docker backup
docker_images=false # backup docker images
docker_volumes=false #backup docker volumes
docker_data=false #backup container information
db_backup=false #backup databases
database_type=mysql #mysql or postgresql
db_username=user #database user
db_password=user #database password
db_name=user #name of the database
enabled=false # enable the script
backup_path='/opt/backify' # where to save backups; must NOT end with trailing slash
www_backup=false # backup wwwroot
www_dir='xyz' # location of wwwroot to backup
vhost_backup=false # backup vhost configurations
vhost_dir='/etc/httpd/sites-enabled' # location of active vhost files
log_backup=false # backup logs
log_to_backup=("apache" "nginx" "fail2ban" "pckg_mngr" "auth" "dmesg" "dpkg" "letsencrypt" "php" "syslog" "purge")
# logs to backup, options: apache, nginx, fail2ban, pckg_mngr, auth, dmesg, dpkg, letsencrypt, php, syslog, purge (truncate all)
rsync_push=false # enable push to remote server
push_clean=false # delete local archive after successful push
target_host="127.0.0.1" # rsync target host (single-target mode)
target_user="backup" # rsync target user (single-target mode)
target_key='/home/xyz/.ssh/rsync' # rsync SSH key
target_dir='/opt/backups/srvyxyz/' # rsync target host path
# Optional: multiple rsync targets. If set, these are used instead of target_host/target_user/target_dir.
# Each entry is a full rsync destination: user@host:/path
# Example:
# targets=("backup@host1:/backups/server1/" "backup@host2:/backups/server2/")
targets=()
docker_enabled=false # enable Docker backup
docker_images=false # backup Docker images
docker_volumes=false # backup Docker volumes
docker_data=false # backup container information
tar_opts='' # optional TAR_OPTS passed to docker volume backup container (e.g. "-J" for xz compression)
db_backup=false # backup databases
database_type=mysql # mysql or postgresql
db_host='localhost' # database host
db_port=3306 # port for DB access
db_username='user' # database user
db_password='user' # database password
db_all=false # dump all databases if true
db_name='user' # name of the database if db_all=false
custom_backup=false # backup custom files or directories
custom_dirs=("/opt/example" "/var/log/script.log") # array of custom files/directories to backup
# Optional: retention policy for local archives. 0 disables the check.
retention_days=0 # delete archives older than this many days (0 = disabled)
retention_keep_min=0 # always keep at least this many newest archives (0 = disabled)
# Optional: hooks (executed only in non-dry-run mode)
pre_backup_hook='' # executable run before backup; receives TMPDIR as $1
post_backup_hook='' # executable run after success; receives archive path as $1
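The hook options take paths to executables. A minimal post-backup hook sketch (hypothetical script, assuming the archive path arrives as the first argument, per the comment above):

```shell
# post_backup_hook sketch: verify the archive exists and report its size.
# Hypothetical example; save it as an executable script and point
# post_backup_hook at it in backup.cfg.
post_backup_hook() {
    archive="$1"
    if [ ! -f "$archive" ]; then
        echo "hook: archive not found: $archive" >&2
        return 1
    fi
    size=$(wc -c < "$archive")
    echo "backup ok: $archive ($size bytes)"
}
```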

main.sh

@@ -1,283 +0,0 @@
#!/bin/bash
function init {
echo "Backify is starting, looking for configuration file..." >&2
config='backup.cfg'
config_secured='sbackup.cfg'
if [ ! -f "$config" ]
then
echo "Error: Config file not found: $config" >&2
echo "Please create a config file or specify the location of an existing file." >&2
exit 1
fi
if grep -E -q -v '^#|^[^ ]*=[^;]*' "$config"; then
echo "Config file is unclean, cleaning it..." >&2
grep -E '^#|^[^ ]*=[^;&]*' "$config" > "$config_secured"
config="$config_secured"
fi
source "$config"
echo "Configuration file loaded" >&2
if [ "$EUID" -ne 0 ]
then echo "Please run as root"
exit
fi
}
function system {
if [ -f /etc/redhat-release ]
then
echo "Discovered Red Hat-based OS..." >&2
system='rhel'
elif [ -f /etc/lsb-release ]
then
echo "Discovered Ubuntu-based OS..." >&2
system='ubuntu'
else
echo "Error: Unable to detect OS type." >&2
exit 1
fi
}
function makedir {
timestamp=$(date +%Y%m%d_%H%M)
tmpdir="/tmp/backify-$timestamp"
mkdir -p "$tmpdir"
}
function wwwbackup {
if [ "$www_backup" = true ]
then
echo "Backing up wwwroot..." >&2
mkdir -p "$tmpdir/wwwdata"
cp -r "$www_dir"/ "$tmpdir/wwwdata/"
echo "Finished" >&2
fi
}
function vhostbackup {
if [ "$vhost_backup" = true ]
then
echo "Backing up vhosts..." >&2
mkdir -p "$tmpdir/vhosts"
cp -r "$vhost_dir"/ "$tmpdir/vhosts/"
echo "Finished" >&2
fi
}
function logbackuprhel {
if [ "$log_backup" = true ]
then
echo "Backing up system logs..." >&2
mkdir -p $tmpdir/syslogs
cp /var/log/messages $tmpdir/syslogs/
if [ "$fail2ban_log" = true ]
then
cp /var/log/fail2ban.log $tmpdir/syslogs/
fi
if [ "$log_backup_web" = true ]
then
if [ "$apache" = true ]
then
mkdir -p $tmpdir/apachelogs
cp -r /var/log/httpd $tmpdir/apachelogs
fi
if [ "$nginx" = true ]
then
mkdir -p $tmpdir/nginxlogs
cp -r /var/log/nginx $tmpdir/nginxlogs
fi
fi
if [ "$log_purge" = true ]
then
echo "Purging logs..." >&2
truncate -s 0 /var/log/messages
if [ "$apache" = true ]
then
truncate -s 0 /var/log/httpd/*
rm /var/log/httpd/*.gz
fi
if [ "$nginx" = true ]
then
truncate -s 0 /var/log/nginx/*
rm /var/log/nginx/*.gz
fi
if [ "$fail2ban_log" = true ]
then
truncate -s 0 /var/log/fail2ban.log
fi
fi
echo "Finished" >&2
fi
}
function logbackupubuntu {
if [ "$log_backup" = true ]
then
echo "Backing up system logs..." >&2
mkdir -p $tmpdir/syslogs
cp /var/log/syslog $tmpdir/syslogs/
if [ "$fail2ban_log" = true ]
then
cp /var/log/fail2ban.log $tmpdir/syslogs/
fi
if [ "$log_backup_web" = true ]
then
if [ "$apache" = true ]
then
mkdir -p $tmpdir/apachelogs
cp -r /var/log/apache2 $tmpdir/apachelogs
fi
if [ "$nginx" = true ]
then
mkdir -p $tmpdir/nginxlogs
cp -r /var/log/nginx $tmpdir/nginxlogs
fi
fi
if [ "$log_purge" = true ]
then
echo "Purging logs..." >&2
truncate -s 0 /var/log/syslog
if [ "$apache" = true ]
then
truncate -s 0 /var/log/apache2/*
rm /var/log/apache2/*.gz
fi
if [ "$nginx" = true ]
then
truncate -s 0 /var/log/nginx/*
rm /var/log/nginx/*.gz
fi
if [ "$fail2ban_log" = true ]
then
truncate -s 0 /var/log/fail2ban.log
fi
fi
echo "Finished" >&2
fi
}
function push {
if [ "$rsync_push" = true ]
then
# push step placeholder (rsync push not implemented in this version)
if [ "$push_clean" = true ]
then
rm /opt/backify-$timestamp.tar.gz
fi
fi
}
function dockerbackup {
if [ "$docker_enabled" = true ]
then
if [ "$docker_images" = true ]
then
echo "Backing up Docker images..." >&2
for i in `docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/`
do container_name=$i
echo -n "$container_name - "
container_image=`docker inspect --format='{{.Config.Image}}' $container_name`
mkdir -p $tmpdir/containers/$container_name
save_dir="$tmpdir/containers/$container_name/$container_name-image.tar"
docker save -o $save_dir $container_image
echo "Finished" >&2
done
fi
if [ "$docker_volumes" = true ]
then
echo "Backing up Docker volumes..." >&2
for i in `docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/`
do container_name=$i
mkdir -p $tmpdir/containers/$container_name
echo -n "$container_name - "
docker run --rm --userns=host \
--volumes-from $container_name \
-v "$tmpdir/containers/$container_name:/backup" \
-e TAR_OPTS="$tar_opts" \
piscue/docker-backup \
backup "$container_name-volume.tar.xz"
echo "Finished" >&2
done
fi
if [ "$docker_data" = true ]
then
echo "Backing up container information..." >&2
for i in `docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/`
do container_name=$i
echo -n "$container_name - "
container_data=`docker inspect $container_name`
mkdir -p $tmpdir/containers/$container_name
echo "$container_data" > $tmpdir/containers/$container_name/$container_name-data.txt
echo "Finished" >&2
done
fi
fi
}
function backup_db {
if [ "$db_backup" = true ]
then
echo "Backing up database..." >&2
mkdir -p $tmpdir/db
if [ "$database_type" = "mysql" ]
then
mysqldump -u "$db_username" -p"$db_password" "$db_name" > $tmpdir/db/db.sql
elif [ "$database_type" = "postgresql" ]
then
# PostgreSQL dump (requires pg_dump on PATH)
PGPASSWORD="$db_password" pg_dump -U "$db_username" "$db_name" > $tmpdir/db/db.sql
fi
fi
}
function runbackup {
# init, config check
init
# run system detection
system
if [ "$enabled" = true ]
then
# step 1 : create directory
makedir
# step 2 : www backup
wwwbackup
# step 3 : vhost backup
vhostbackup
# step 4: log backup
if [ $system = "rhel" ]
then
logbackuprhel
fi
if [ $system = "ubuntu" ]
then
logbackupubuntu
fi
# step 5: docker backup
dockerbackup
# step 6: database backup
backup_db
# archive data
echo "Creating backup archive..." >&2
tar -czf "/opt/backify-$timestamp.tar.gz" -C /tmp "backify-$timestamp"
rm -rf "$tmpdir"
# push data to server
push
echo "Voila, enjoy the rest of the day" >&2
else
echo "Backup is disabled in the configuration" >&2
fi
}
runbackup
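The retention_days/retention_keep_min options in the new backup.cfg have no counterpart in the old main.sh above. A minimal sketch of what an age-based cleanup could look like (assumes archives named backify-*.tar.gz under the backup path; not the repo's actual implementation):

```shell
# Retention sketch: delete archives older than $days,
# but always keep the $keep newest ones.
apply_retention() {
    dir="$1"; days="$2"; keep="$3"
    [ "$days" -gt 0 ] || return 0
    # list newest first, skip the first $keep entries, then age-filter the rest
    ls -1t "$dir"/backify-*.tar.gz 2>/dev/null | tail -n +$((keep + 1)) |
    while IFS= read -r f; do
        find "$f" -maxdepth 0 -mtime +"$days" -exec rm -f {} \;
    done
}
```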