Backify README
==============

What is Backify?
----------------

Backify is a shell script that helps you automate the backup process of all kinds of data from Linux systems. It differs from other backup scripts because it gives you the flexibility to choose what you want to save, ranging from system logs to containers. The script was tailored to meet personal needs as there was no complete solution for the specific use case.

Prerequisites
-------------

* The script must be executed as root.
* A configuration file (by default `backup.cfg`) must exist and be readable.
* The system must be a Red Hat–based (RHEL, CentOS, Rocky, Alma…) or Debian/Ubuntu–based distribution.
* Required tools:
  * `tar`
  * `rsync` and `ssh` if you push backups to a remote host
  * `docker` if you use Docker backup features
  * `mysqldump` (for MySQL/MariaDB) and/or `pg_dump` / `pg_dumpall` (for PostgreSQL) if you back up databases

Configuration
-------------

All configuration options can be found in the `backup.cfg` file.

By default Backify looks for `backup.cfg` in the same directory as the script, but you can override this with the `-c` / `--config` command-line option.

The script has an integrity check in place to ensure that no external commands can be embedded into it by malware (the config is "cleaned" before sourcing). The following sections provide an overview of the available configuration options.
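
As a rough illustration of that cleaning step (a hypothetical sketch, not Backify's actual code), only comments, blank lines, and plain `NAME=VALUE` assignments could be allowed through before the file is sourced:

```shell
# Hypothetical sketch of a config "cleaning" pass (not Backify's actual code):
# keep only comments, blank lines, and simple NAME=VALUE assignments so that
# stray commands in the config file are never executed when it is sourced.
clean_config() {
  grep -E '^[[:space:]]*#|^[[:space:]]*$|^[A-Za-z_][A-Za-z0-9_]*=' "$1"
}

# Usage sketch:
#   clean_config backup.cfg > /tmp/backup.cfg.clean && . /tmp/backup.cfg.clean
```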

Main options
------------

**Name** | **Value** | **Specifics**
--------------------|--------------------|-------------
`enabled` | true/false | Disable or enable the main function
`backup_path` | path | Where to save the backup, **must NOT end with a slash**
`www_backup` | true/false | Backup of the webroot directory
`www_dir` | path | Path to the webroot
`vhost_backup` | true/false | Backup of the vhost configuration
`vhost_dir` | path | Path to the vhost files
`log_backup` | true/false | Backup log files
`log_to_backup` | array | Array of logs to back up (see list below)
`rsync_push` | true/false | Push the backup archive to a remote server
`push_clean` | true/false | Delete the local backup archive after a successful push
`target_host` | host | Backup push target host (single-target mode)
`target_user` | user | Backup push target username (single-target mode)
`target_key` | path | SSH key for the remote backup user
`target_dir` | path | Remote directory to push backups to
`targets` | array | **Optional**: list of full rsync destinations (`user@host:/path`). If non-empty, overrides `target_host` / `target_user` / `target_dir`.
`docker_enabled` | true/false | Enable Docker backups
`docker_images` | true/false | Backup Docker images
`docker_volumes` | true/false | Backup Docker volumes (via helper container)
`docker_data` | true/false | Backup container metadata (inspect output)
`tar_opts` | string | Optional `TAR_OPTS` passed to the Docker volume backup helper (e.g. `-J` for xz)
`db_backup` | true/false | Enable database backup
`database_type` | mysql/postgresql | Database type
`db_host` | host | Database host
`db_port` | int | Port for DB access
`db_username` | string | Username for DB access
`db_password` | string | Password for DB access
`db_name` | string | Name of database to dump when `db_all=false`
`db_all` | true/false | Dump all databases instead of a specific one
`custom_backup` | true/false | Enable backup of custom files
`custom_dirs` | array | Array of files/directories to back up
`retention_days` | int | **Optional**: delete local archives older than this many days (0 = disabled)
`retention_keep_min`| int | **Optional**: always keep at least this many newest archives (0 = disabled)
`pre_backup_hook` | path | **Optional**: executable script run **before** the backup (receives `TMPDIR` as `$1`)
`post_backup_hook` | path | **Optional**: executable script run **after success** (receives archive path as `$1`)
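
Since the config is cleaned and then sourced as shell, a minimal `backup.cfg` could look like this (a hypothetical sketch; the option names come from the table above, all values are illustrative):

```shell
# Hypothetical minimal backup.cfg sketch (all values are illustrative).
enabled=true
backup_path=/var/backups/backify        # must NOT end with a slash

www_backup=true
www_dir=/var/www/html

log_backup=true
log_to_backup=(nginx auth syslog)

rsync_push=true
push_clean=false
targets=(backup@nas.example.com:/srv/backups)   # overrides target_* options

db_backup=true
database_type=mysql
db_host=localhost
db_port=3306
db_username=backify
db_password=changeme
db_all=true

retention_days=14
retention_keep_min=3
```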

Logs to backup array
--------------------

**Option** | **Specifics**
---------------|-------------
`apache` | Apache access and error logs
`nginx` | Nginx access and error logs
`fail2ban` | Fail2ban log
`pckg_mngr` | Package manager logs (`yum`/`dnf` on RHEL, `apt` on Debian/Ubuntu)
`auth` | Authentication logs
`dmesg` | Kernel ring buffer log
`dpkg` | Package changes log (Debian/Ubuntu)
`letsencrypt` | Let’s Encrypt logs
`php` | Logs from all installed PHP versions
`syslog` | General system event data
`purge` | Truncate/empty selected logs after backing them up
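
In `backup.cfg`, these options go into the `log_to_backup` array, for example (a hypothetical selection; values are illustrative):

```shell
# Hypothetical selection: back up nginx, auth, and syslog data,
# then truncate those logs once they are archived.
log_to_backup=(nginx auth syslog purge)
```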

Command-line options
--------------------

Backify supports the following CLI options:

- `-c`, `--config` *PATH*
  Path to configuration file (default: `./backup.cfg`).
- `-n`, `--dry-run`
  Show what would be done, but do not copy/compress/push/delete anything.
- `-h`, `--help`
  Show short usage help and exit.
- `-v`, `--version`
  Show Backify version and exit.

Examples
--------

Use the default `backup.cfg` (in the same directory as the script):

```bash
./backify.sh
```

Use a custom config file:

```bash
./backify.sh --config /etc/backify/web01.cfg
```

Safe test run: see what would happen, but do not touch any data:

```bash
./backify.sh --config /etc/backify/web01.cfg --dry-run
```

Script Execution
----------------

To execute the script with the default configuration file in the same directory:

    ./backify.sh

The script will:

* Parse CLI options (config path, dry-run, etc.).
* Initialize by checking for the existence of the configuration file, loading its parameters, and verifying that it is being executed as root.
* Detect whether the system is Red Hat–based or Debian/Ubuntu–based.
* Create a new timestamped directory inside `backup_path`, where the backup data will be stored.
* Run the configured backup steps:
  * Webroot
  * Vhosts
  * Logs
  * Docker images/volumes/data
  * Databases
  * Custom files/directories
* Create a compressed tar archive (`backify-YYYYMMDD_HHMM.tar.gz`) from the backup directory.
* Optionally push the archive to one or more remote rsync targets.
* Optionally apply a retention policy to local archives.
* Optionally run pre/post backup hooks.
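
As an illustration, a `pre_backup_hook` could look like the following (hypothetical; sketched as a function for brevity, in practice it would be a standalone executable script that Backify calls with the temporary backup directory as `$1`):

```shell
# Hypothetical pre_backup_hook sketch: Backify passes the temporary
# backup directory as $1, so the hook can add extra files before archiving.
pre_backup_hook() {
  tmpdir="$1"
  # Illustration only: record when the hook ran.
  date > "$tmpdir/hook-ran-at.txt"
}
```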

If you use `--dry-run`, steps that modify data (copying files, truncating logs, creating archives, pushing, deleting, hooks) are simulated and only logged, not executed.

Automation
----------

Cron
----

You can use cron to run Backify every day at 12:00.

1. Open the crontab editor:

   ```bash
   crontab -e
   ```

2. Add a line like this (adjust the path as needed):

   ```bash
   0 12 * * * /path/to/backify.sh --config /etc/backify/web01.cfg
   ```

3. Save and exit.

systemd (optional)
------------------

If you prefer systemd, you can wrap Backify in a simple `backify.service` and `backify.timer` unit pair. (Units are not shipped in this repo yet, but Backify is fully compatible with a systemd timer.)
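
A minimal unit pair could look like this (a hypothetical sketch, since the repo ships no units; the install path and `OnCalendar` schedule are illustrative):

```ini
# /etc/systemd/system/backify.service (hypothetical sketch)
[Unit]
Description=Backify backup run

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backify.sh --config /etc/backify/web01.cfg

# /etc/systemd/system/backify.timer (hypothetical sketch)
[Unit]
Description=Daily Backify run

[Timer]
OnCalendar=*-*-* 12:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now backify.timer`.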

Restore (high-level overview)
-----------------------------

Backify creates standard `tar.gz` archives, so restoration is straightforward but manual by design:

1. Copy the desired archive back to the server (or access it on the backup storage).
2. Extract it:

   ```bash
   tar -xzf backify-YYYYMMDD_HHMM.tar.gz -C /tmp/restore
   ```

   The content layout roughly mirrors:

   * `wwwdata/` – your webroot
   * `vhosts/` – webserver vhost configs
   * `syslogs/`, `apachelogs/`, `nginxlogs/` – logs
   * `containers/` – Docker images, volumes, and metadata (if enabled)
   * `db/` – database dumps (`.sql`)
   * `custom/` – custom files/directories you configured

3. Restore what you need:

   * Webroot / vhosts: copy files back into place and reload/restart services.
   * Databases:
     * MySQL/MariaDB:

       ```bash
       mysql -u USER -p DB_NAME < db/yourdb.sql
       ```

     * PostgreSQL:

       ```bash
       psql -U USER -h HOST DB_NAME < db/yourdb.sql
       ```

Make sure you understand what you are overwriting; ideally test restores on a non-production server first.

MySQL / PostgreSQL user
-----------------------

If you want to dump all databases, a dedicated read-only user is recommended.

For MySQL/MariaDB, you can create one with (note: MySQL 8+ no longer accepts `IDENTIFIED BY` in `GRANT`; there, create the user first with `CREATE USER`, then grant):

```sql
GRANT LOCK TABLES, SELECT ON DATABASE_NAME.* TO 'BACKUP_USER'@'%' IDENTIFIED BY 'PASSWORD';
```

For PostgreSQL, you can use a user with sufficient CONNECT and SELECT permissions on the databases you want to dump.
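
For example (a hypothetical sketch; the role and database names are illustrative):

```sql
-- Hypothetical read-only backup role for PostgreSQL (names are illustrative).
CREATE ROLE backify_backup LOGIN PASSWORD 'PASSWORD';
GRANT CONNECT ON DATABASE mydb TO backify_backup;
GRANT USAGE ON SCHEMA public TO backify_backup;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backify_backup;
```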

Buy me a beer
-------------

One pale ale won't hurt, will it?

0x4046979a1E1152ddbfa4a910b1a98F73625a77ae – ETH / BNB / Polygon chains