Compare commits


23 Commits

Author SHA1 Message Date
1bff525466 Minimal adjustments 2023-03-21 18:11:38 +01:00
35255156bd readme and slight adjustments 2023-02-27 09:54:23 +01:00
f53b1d381d DB backup implement host and port parameter / backup path fix 2023-02-22 20:05:34 +01:00
472e490ea1 Implement custom directory backup 2023-02-22 19:42:00 +01:00
fa3ca3cf63 Docker volume backup path bug fix 2023-02-22 18:20:25 +01:00
ed5f806720 Markdown troubles 2023-02-22 18:11:01 +01:00
6dcc7eb8e0 Documentation, alternatives.log kickout 2023-02-22 18:09:48 +01:00
65b5b20b18 Obvious fail of array matching fix 2023-02-22 18:01:41 +01:00
685b149045 Remove duplicate echo 2023-02-22 17:47:44 +01:00
e6adba12b2 Add forgoten purging option to array 2023-02-22 17:42:07 +01:00
aef0f7dc32 Upgrades on log backup logic 2023-02-22 17:35:59 +01:00
225a9d4db4 Widen the support for system logs on Debian systems 2023-02-22 17:30:12 +01:00
bc0480689d Array for logs implementation 2023-02-13 15:06:07 +01:00
629e3c1631 Formatting 2023-02-12 14:46:59 +01:00
89b861c7f7 damn markdown 2023-02-12 14:43:52 +01:00
2f66ba386f Docs 2023-02-12 14:42:52 +01:00
5203f65147 Doc spacing 2023-02-12 14:30:54 +01:00
a428d3bc9e Docs 2023-02-12 14:28:20 +01:00
8e2e28602f Rsync implementation, formatting 2023-02-12 13:26:12 +01:00
6c40c9898d Syntax fixes 2023-02-11 19:16:15 +01:00
0478ce746a Log backup refactoring 2023-02-11 16:21:46 +01:00
5183b9bc7d Dump all databases, refactoring, docs 2023-02-11 11:58:21 +01:00
59a370ba74 readme update 2023-02-11 11:50:38 +01:00
3 changed files with 391 additions and 243 deletions

README.MD

@@ -1,49 +1,118 @@
Backify 🗃️
===========
# Backify 🗃️
A powerful and automated bash script for backing up all kinds of Linux data, archiving it and pushing it to a remote host.
What is Backify? 👾
-------------------
## What is Backify? 👾
Backify is a shell script that helps you automate the backup process of all kinds of data from Linux systems. It differs from other backup scripts because it gives you the flexibility to choose what you want to save, ranging from system logs to containers. The script was tailored to meet personal needs as there was no complete solution for the specific use case.
Configuration 🧙‍♂️
-------------------
## Prerequisites 👷
- The script must be executed as root.
- A configuration file named `backup.cfg` must exist in the same directory as the script.
- The system must be either a Red Hat-based or an Ubuntu-based distribution.
- `mysqldump` / `pg_dump` must be available if dumping a database on a different host (see the example below).
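The exact client packages differ between distributions; on a typical Debian/Ubuntu or Red Hat-based system they can be installed roughly like this (package names are illustrative and may vary):
> apt install default-mysql-client postgresql-client # Debian/Ubuntu
> dnf install mysql postgresql # Red Hat-based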
## Configuration 🧙‍♂️
All configuration options can be found in the `backup.cfg` file. The script has an integrity check in place to ensure that no external commands can be embedded into it by malware. The following table provides an overview of the available configuration options; a minimal example configuration follows the table:
| Name | Value | Specifics |
| --- | --- | --- |
| Enabled | true/false | Disable the main function |
| enabled | true/false | Disable the main function |
| backup_path | ------> | Set where to save the backup; make sure it does NOT end with a trailing slash |
| www_backup | true/false | Backup of the webroot directory |
| www_dir | ------> | Path to the webroot |
| vhost_backup | true/false | Backup of the vhost configuration |
| vhost_dir | ------> | Path to the vhost files |
| log_backup | true/false | Backup log files |
| log_backup_web | true/false | Backup web app logs |
| apache | true/false | Enable Apache logs |
| nginx | true/false | Enable nginx logs |
| fail2ban_log | true/false | Enable fail2ban logs |
| log_purge | true/false | Truncate logs after backup |
| log_to_backup |array | Array of logs to backup, see below for options|
| rsync_push | true/false | Push the backup file to a remote server |
| push_clean | true/false | Delete the backup file after push |
| target_host | ------> | Backup push target host |
| target_user | ------> | Backup push target username |
| target_key | ------> | Backup target ssh key |
| target_dir | ------> | Backup target push to location |
| docker_enable | true/false | Enable Docker backups |
| docker_images | true/false | Backup Docker images |
| docker_volumes | true/false | Backup Docker volumes |
| docker_data | true/false | Backup container information |
| db_backup | true/false | Backup database |
| database_type | mysql/postgresql | Database type |
| db_host | ------> | Database host |
| db_port | ------> | Port for DB access |
| db_username | ------> | Username for DB access |
| db_password | ------> | Password for DB access |
| db_name | ------> | Name of database |
| db_all | ------> | Dump all databases instead of specific one |
| custom_backup | true/false | Enable backup of custom files |
| custom_dirs | ------> | Array of files/directories to backup |
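For orientation, a minimal `backup.cfg` that only archives the webroot and pushes the archive to a remote host could look like the sketch below; all values are placeholders, and every option not shown keeps the value from the shipped `backup.cfg`:

```bash
enabled=true                       # turn the main function on
backup_path='/opt/backify'         # no trailing slash
www_backup=true                    # archive the webroot
www_dir='/var/www/html'            # placeholder webroot location
rsync_push=true                    # push the finished archive
push_clean=true                    # delete the local archive after the push
target_host='203.0.113.10'         # placeholder remote host
target_user='backup'
target_key='/root/.ssh/backify'    # placeholder SSH key
target_dir='/opt/backups/example/'
```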
To-Do List
----------
- [ ] Rsync implementation via shell
- [ ] Rsync implementation via Docker
- [ ] Cron scheduler
- [ ] RHEL/Ubuntu parser
- [ ] Automatic adjustments per system
- [ ] MySQL backups
- [ ] PostgreSQL backups
- [ ] Cover more system logs
## Logs to backup array 📚
| Option | Specifics |
| --- | --- |
| apache | Apache access and error logs |
| nginx | Nginx access and error logs |
| fail2ban | Fail2ban log |
| alternatives | Alternatives log |
| pckg_mngr | Logs from Yum/Apt package manager |
| auth | Authentications log |
| dmesg | Kernel log |
| dpkg | Package changes log |
| letsencrypt | Let's Encrypt logs |
| php | Logs from all installed PHPs |
| syslog | System event data |
| purge | Empty all the logs after backing up |
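For example, to back up only the web server and authentication logs without purging anything, the array in `backup.cfg` could be set to:
> log_to_backup=("apache" "nginx" "auth" "syslog")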
## Script Execution 🪄
To execute the script, simply run the following command in the terminal:
> ./backify.sh
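If the script has just been downloaded, it may need to be marked executable first, and it has to be started as root:
> chmod +x backify.sh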
The script will first initialize by checking for the existence of the configuration file, loading its parameters, and verifying that the script is being executed as root.
Then, it will determine whether the system is a Red Hat-based or an Ubuntu-based distribution.
Finally, the script will create a new directory with a timestamped name in the backup_path directory, where the backups will be stored.
The components specified in the configuration file will then be backed up to the newly created directory.
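For example, with `backup_path='/opt/backify'`, a run started on 22 February 2023 at 18:00 would work in a directory named roughly `/opt/backify/backify-20230222_1800/` and leave behind the archive `/opt/backify/backify-20230222_1800.tar.gz` once it finishes (the working directory itself is removed at the end of the run).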
## Automation 🤖
Here's an example of how you can use cron on Linux to run the script every day at 12 PM:
1. Open the terminal and type `crontab -e` to open the cron table for editing.
2. Add the following line to the end of the file:
   > 0 12 * * * /path/to/your/script.sh
3. Save and exit the file.
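Since the script has to run as root, the entry should go into root's crontab (for example via `sudo crontab -e`). If you also want to keep the script's output for later inspection, a variant such as the following can be used; the log path is only an example:
> 0 12 * * * /path/to/your/script.sh >> /var/log/backify-cron.log 2>&1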
## MySQL user 🛢️
If you want to dump all MySQL databases, a read-only user is recommended for that action.
It can be created with the following MySQL command:
> GRANT LOCK TABLES, SELECT ON DATABASE_NAME.* TO 'BACKUP_USER'@'%' IDENTIFIED BY 'PASSWORD';
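Note that on MySQL 8 and newer, `IDENTIFIED BY` can no longer be combined with `GRANT`, so the user has to be created first, roughly like this:
> CREATE USER 'BACKUP_USER'@'%' IDENTIFIED BY 'PASSWORD';
> GRANT LOCK TABLES, SELECT ON DATABASE_NAME.* TO 'BACKUP_USER'@'%';
For dumping every database rather than a single one, the grant would need to cover `*.*` instead of `DATABASE_NAME.*`.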
## Buy me a beer 🍻
One pale ale won't hurt, will it?
0x4046979a1E1152ddbfa4a910b1a98F73625a77ae
ETH / BNB and Polygon chains

backup.cfg

@@ -2,28 +2,32 @@
# --------------------------------------------------------
# Please double check Your settings
# --------------------------------------------------------
enabled=false #enable main function
enabled=false #enable the script
backup_path='/opt/backify' # where backups are saved; make sure it doesn't end in a trailing slash
www_backup=false # backup wwwroot
www_dir='xyz' # wwwroot location
vhost_backup=false # backup vhost config
vhost_dir='/etc/httpd/sites-enabled' # vhost location
www_dir='xyz' # location of wwwroot to backup
vhost_backup=false # backup vhost configurations
vhost_dir='/etc/httpd/sites-enabled' # location of active vhost files
log_backup=false # backup logs
log_backup_web=false # backup webapp logs
apache=false # apache log backup
nginx=false # nginx log backup
fail2ban_log=false # fail2ban log backup
log_purge=false # purge logs after backup
log_to_backup=("apache" "nginx" "fail2ban" "pckg_mngr" "auth" "dmesg" "dpkg" "letsencrypt" "php" "syslog" "purge")
# logs to backup, options: apache, nginx, fail2ban, pckg_mngr, auth, dmesg, dpkg, letsencrypt, php, syslog, purge (truncate all))
rsync_push=false # enable push to remote server
push_clean=false # clean backup file after push
target_host="127.0.0.1" # rsync target host
target_user="backup" # rsync target user
target_key='/home/xyz/.ssh/rsync' # rsync key
target_dir='/opt/backups/srvyxyz/' # rsync target host path
docker_enabled=false # will you use docker backup
docker_images=false # backup docker images
docker_volumes=false #backup docker volumes
docker_data=false #backup container information
db_backup=false #backup databases
database_type=mysql #mysql or postgresql
db_host='localhost' #hostname of mysql server
db_port=3306 #port for db access
db_username=user #database user
db_password=user #database password
db_name=user #name of the database
db_all=false #dumps all databases if true
db_name=user #name of the database
custom_backup=false #backup custom files or directories
custom_dirs=("/opt/example" "/var/log/script.log") #array of custom files and/or directories to backup

main.sh

@@ -1,283 +1,358 @@
#! /bin/bash
function init {
echo "Backify is starting, looking for configuration file..." >&2
echo "Backify is starting, looking for configuration file..." >&2
config='backup.cfg'
config_secured='sbackup.cfg'
config='backup.cfg'
secured_config='sbackup.cfg'
if [ ! -f "$config" ]
then
if [ ! -f "$config" ]; then
echo "Error: Config file not found: $config" >&2
echo "Please create a config file or specify the location of an existing file." >&2
exit 1
fi
fi
if grep -E -q -v '^#|^[^ ]*=[^;]*' "$config"; then
echo "Config file is unclean, cleaning it..." >&2
grep -E '^#|^[^ ]*=[^;&]*' "$config" > "$config_secured"
config="$config_secured"
fi
if grep -E -q -v '^#|^[^ ]*=[^;]*' "$config"; then
echo "Config file is unclean, cleaning it..." >&2
grep -E '^#|^[^ ]*=[^;&]*' "$config" >"$secured_config"
config="$secured_config"
fi
source "$config"
source "$config"
echo "Configuration file loaded" >&2
echo "Configuration file loaded" >&2
if [ "$EUID" -ne 0 ]
then echo "Please run as root"
exit
fi
if [ "$EUID" -ne 0 ]; then
echo "Please run as root"
exit
fi
}
function system {
if [ -f /etc/redhat-release ]
then
if [ -f /etc/redhat-release ]; then
echo "Discovered Red Hat-based OS..."
system='rhel'
elif [ -f /etc/lsb-release ]
then
echo "Discovered Ubuntu-based OS..."
system='ubuntu'
elif [ -f /etc/lsb-release ]; then
echo "Discovered Debian-based OS..."
system='debian'
else
echo "Error: Unable to detect OS type."
exit 1
fi
echo "Discovered $system based OS..." >&2
}
function makedir {
timestamp=$(date +%Y%m%d_%H%M)
mkdir /tmp/backify-$timestamp
tmpdir="/tmp/backify-$timestamp"
timestamp=$(date +%Y%m%d_%H%M)
mkdir -p "$backup_path/backify-$timestamp"
tmpdir="$backup_path/backify-$timestamp"
}
function wwwbackup {
if [ "$www_backup" = true ]
then
if [ "$www_backup" = true ]; then
echo "Backing up wwwroot..." >&2
mkdir -p $tmpdir/wwwdata
cp -r $www_dir/ $tmpdir/wwwdata/
mkdir -p "$tmpdir/wwwdata"
cp -r "$www_dir/" "$tmpdir/wwwdata/"
echo "Finished" >&2
fi
fi
}
function vhostbackup {
if [ "$vhost_backup" = true ]
then
if [ "$vhost_backup" = true ]; then
echo "Backing up vhosts..." >&2
mkdir -p $tmpdir/vhosts
cp -r $vhost_dir/ $tmpdir/vhosts/
mkdir -p "$tmpdir/vhosts"
cp -avr "$vhost_dir/" "$tmpdir/vhosts/"
echo "Finished" >&2
fi
fi
}
function logbackupcentos {
if [ "$log_backup" = true ]
then
function logbackup {
if [ "$log_backup" = true ]; then
echo "Backing up system logs..." >&2
mkdir -p $tmpdir/syslogs
cp /var/log/syslog $tmpdir/syslogs/
cp /var/log/message $tmpdir/syslogs/
mkdir -p "$tmpdir/syslogs"
if [ "$fail2ban_log" = true ]
then
cp /var/log/fail2ban.log $tmpdir/syslogs/
fi
case $system in
"rhel")
if [ "$log_backup_web" = true]
then
if [ "$apache" = true ]
then
mkdir -p $tmpdir/apachelogs
cp -r /var/log/httpd $tmpdir/apachelogs
fi
if [ "$nginx" = true ]
then
mkdir -p $tmpdir/nginxlogs
cp -r /var/log/nginx $tmpdir/nginxlogs
fi
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[fail2ban]} " ]]; then
cp /var/log/fail2ban.log "$tmpdir/syslogs/"
fi
if [ "$log_purge" = true]
then
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[apache]} " ]]; then
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/httpd "$tmpdir/apachelogs"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[nginx]} " ]]; then
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[pckg_mngr]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
mkdir -p "$tmpdir/syslogs/yum"
cp -r /var/log/yum/* "$tmpdir/syslogs/yum/"
cp -r /var/log/dnf* "$tmpdir/syslogs/yum/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[letsencrypt]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[php]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/php*.log "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[syslog]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/secure "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[purge]} " ]]; then
echo "Purging logs..." >&2
truncate -s 0 /var/log/syslog
truncate -s 0 /var/log/syslog
truncate -s 0 /var/log/message
if [ "$apache" = true ]
then
truncate -s 0 /var/log/httpd/*
rm /var/log/httpd/*.gz
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[apache]} " ]]; then
truncate -s 0 /var/log/httpd/*
rm /var/log/httpd/*.gz
fi
if [ "$nginx" = true ]
then
truncate -s 0 /var/log/nginx/*
rm /var/log/nginx/*.gz
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[nginx]} " ]]; then
truncate -s 0 /var/log/nginx/*
rm /var/log/nginx/*.gz
fi
if [ "$fail2ban_log" = true ]
then
truncate -s 0 /var/log/fail2ban.log
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[fail2ban]} " ]]; then
truncate -s 0 /var/log/fail2ban.log
fi
fi
echo "Finished" >&2
fi
}
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[pckg_mngr]} " ]]; then
truncate -s 0 /var/log/yum/*
truncate -s 0 /var/log/dnf*
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[letsencrypt]} " ]]; then
truncate -s 0 /var/log/letsencrypt/*
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[php]} " ]]; then
truncate -s 0 /var/log/php*.log
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[syslog]} " ]]; then
truncate -s 0 /var/log/secure
fi
fi
;;
function logbackupubuntu {
if [ "$log_backup" = true ]
then
echo "Backing up system logs..." >&2
mkdir -p $tmpdir/syslogs
cp /var/log/syslog $tmpdir/syslogs/
cp /var/log/message $tmpdir/syslogs/
"debian")
if [ "$fail2ban_log" = true ]
then
cp /var/log/fail2ban.log $tmpdir/syslogs/
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[fail2ban]} " ]]; then
cp /var/log/fail2ban.log "$tmpdir/syslogs/"
fi
if [ "$log_backup_web" = true]
then
if [ "$apache" = true ]
then
mkdir -p $tmpdir/apachelogs
cp -r /var/log/apache2 $tmpdir/apachelogs
fi
if [ "$nginx" = true ]
then
mkdir -p $tmpdir/nginxlogs
cp -r /var/log/nginx $tmpdir/nginxlogs
fi
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[apache]} " ]]; then
mkdir -p "$tmpdir/apachelogs"
cp -r /var/log/apache2 "$tmpdir/apachelogs"
fi
if [ "$log_purge" = true]
then
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[nginx]} " ]]; then
mkdir -p "$tmpdir/nginxlogs"
cp -r /var/log/nginx "$tmpdir/nginxlogs"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[pckg_mngr]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
mkdir -p "$tmpdir/syslogs/apt"
cp -r /var/log/apt/* "$tmpdir/syslogs/apt/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[auth]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/auth.log "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[dmesg]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/dmesg "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[dpkg]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/dpkg.log "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[letsencrypt]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
mkdir -p "$tmpdir/syslogs/letsencrypt"
cp -r /var/log/letsencrypt/* "$tmpdir/syslogs/letsencrypt/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[php]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/php*.log "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[syslog]} " ]]; then
mkdir -p "$tmpdir/syslogs/"
cp -r /var/log/syslog "$tmpdir/syslogs/"
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[purge]} " ]]; then
echo "Purging logs..." >&2
truncate -s 0 /var/log/syslog
truncate -s 0 /var/log/syslog
truncate -s 0 /var/log/message
if [ "$apache" = true ]
then
truncate -s 0 /var/log/apache2/*
rm /var/log/apache2/*.gz
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[apache]} " ]]; then
truncate -s 0 /var/log/apache2/*
rm /var/log/apache2/*.gz
fi
if [ "$nginx" = true ]
then
truncate -s 0 /var/log/nginx/*
rm /var/log/nginx/*.gz
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[nginx]} " ]]; then
truncate -s 0 /var/log/nginx/*
rm /var/log/nginx/*.gz
fi
if [ "$fail2ban_log" = true ]
then
truncate -s 0 /var/log/fail2ban.log
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[fail2ban]} " ]]; then
truncate -s 0 /var/log/fail2ban.log
fi
fi
echo "Finished" >&2
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[pckg_mngr]} " ]]; then
truncate -s 0 /var/log/apt/*
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[auth]} " ]]; then
truncate -s 0 /var/log/auth.log
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[dmesg]} " ]]; then
truncate -s 0 /var/log/dmesg
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[dpkg]} " ]]; then
truncate -s 0 /var/log/dpkg.log
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[letsencrypt]} " ]]; then
truncate -s 0 /var/log/letsencrypt/*
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[php]} " ]]; then
truncate -s 0 /var/log/php*.log
fi
if [[ " ${log_to_backup[*]} " =~ " ${log_to_backup[syslog]} " ]]; then
truncate -s 0 /var/log/syslog
fi
fi
;;
esac
fi
}
function push {
if [ "$rsync_push" = true ]
then
#Push - Dockerized
if [ "push_clean" = true ]
then
rm /opt/backify-$timestamp.tar.gz
fi
if [ "$rsync_push" = true ]; then
echo "Pushing the backup package to $target_host..." >&2
rsync -avz -e "ssh -i $target_key" $backup_path/backify-$timestamp.tar.gz $target_user@$target_host:$target_dir
if [ "$push_clean" = true ]; then
echo "Removing archive..." >&2
rm "$backup_path/backify-$timestamp.tar.gz"
fi
fi
}
function dockerbackup {
if [ "$docker_enabled" = true]
then
if [ "$docker_images" = true]
then
echo "Backing up Docker images..." >&2
for i in `docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/`
do container_name=$i
echo -n "$container_name - "
container_image=`docker inspect --format='{{.Config.Image}}' $container_name`
mkdir -p $tmpdir/containers/$container_name
save_dir="$tmpdir/containers/$container_name/$container_name-image.tar"
docker save -o $save_dir $container_image
echo "Finished" >&2
done
fi
if [ "$docker_volumes" = true ]
then
echo "Backing up Docker volumes..." >&2
for i in `docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/`
do container_name=$i
mkdir -p $tmpdir/containers/$container_name
echo -n "$container_name - "
docker run --rm --userns=host \
--volumes-from $container_name \
-v $backup_path:/backup \
-e TAR_OPTS="$tar_opts" \
piscue/docker-backup \
backup "$tmpdir/containers/$container_name/$container_name-volume.tar.xz"
echo "Finished" >&2
done
fi
if [ "$docker_data" = true ]
then
echo "Backing up container information..." >&2
for i in `docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/`
do container_name=$i
echo -n "$container_name - "
container_data=`docker inspect $container_name`
mkdir -p $tmpdir/containers/$container_name
echo $container_data > $tmpdir/containers/$container_name/$container_name-data.txt
echo "Finished" >&2
done
fi
if [ "$docker_enabled" = true ]; then
if [ "$docker_images" = true ]; then
echo "Backing up Docker images..." >&2
for i in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/); do
container_name=$i
echo -n "$container_name - "
container_image=$(docker inspect --format='{{.Config.Image}}' $container_name)
mkdir -p $tmpdir/containers/$container_name
save_dir="$tmpdir/containers/$container_name/$container_name-image.tar"
docker save -o $save_dir $container_image
echo "Finished" >&2
done
fi
if [ "$docker_volumes" = true ]; then
echo "Backing up Docker volumes..." >&2
#Thanks piscue :)
for i in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/); do
container_name=$i
mkdir -p $tmpdir/containers/$container_name
echo -n "$container_name - "
docker run --rm --userns=host \
--volumes-from $container_name \
-v $tmpdir/containers/$container_name:/backup \
-e TAR_OPTS="$tar_opts" \
piscue/docker-backup \
backup "$container_name-volume.tar.xz"
echo "Finished" >&2
done
fi
if [ "$docker_data" = true ]; then
echo "Backing up container information..." >&2
for i in $(docker inspect --format='{{.Name}}' $(docker ps -q) | cut -f2 -d\/); do
container_name=$i
echo -n "$container_name - "
container_data=$(docker inspect $container_name)
mkdir -p $tmpdir/containers/$container_name
echo $container_data >$tmpdir/containers/$container_name/$container_name-data.txt
echo "Finished" >&2
done
fi
fi
}
function backup_db {
if [ "$db_backup" = true ]
then
echo "Backing up database..." >&2
mkdir -p $tmpdir/db
if [ "$database_type" = "mysql" ]
then
mysqldump -u "$db_username" -p"$db_password" "$db_name" > $tmpdir/db/db.sql
elif [ "$database_type" = "postgresql" ]
echo "soon"
mkdir -p $tmpdir/db
if [ "$db_all" = true ]; then
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P"$db_port" --all-databases >$tmpdir/db/db_all.sql
elif [ "$database_type" = "postgresql" ]; then
pg_dumpall -U "$db_username" -h "$db_host" -f $tmpdir/db/db_all.sql
fi
else
if [ "$database_type" = "mysql" ]; then
mysqldump -u "$db_username" -p"$db_password" -h "$db_host" -P"$db_port" "$db_name" >$tmpdir/db/$db_name.sql
elif [ "$database_type" = "postgresql" ]; then
pg_dump -U "$db_username" -h "$db_host" "$db_name" -f $tmpdir/db/$db_name.sql
fi
fi
}
function custombackup {
if [ "$custom_backup" = "true" ]; then
mkdir -p "$tmpdir/custom"
for i in "${custom_dirs[@]}"
do
cp -r $i $tmpdir/custom/
done
fi
}
function runbackup {
# init, config check
init
# run system detection
system
if [ "$enabled" = true ]
then
# step 1 : create directory
makedir
# step 2 : www backup
wwwbackup
# step 3 : vhost backup
vhostbackup
# step 4: log backup
if [ $system = "rhel" ]
then
logbackuprhel
fi
if [ $system = "ubuntu" ]
then
logbackupubuntu
fi
# step 5: docker backup
dockerbackup
# archive data
echo "Creating backup archive..." >&2
tar -czvf /opt/backify-$timestamp.tar.gz $tmpdir
# push data to server
push
echo "Voila, enjoy the rest of the day" >&2
else
echo "Backup is disabled in the configuration" >&2
# init, config check
init
# run system detection
system
if [ "$enabled" = true ]; then
# step 1 : create directory
makedir
# step 2 : www backup
wwwbackup
# step 3 : vhost backup
vhostbackup
# step 4: log backup
logbackup
# step 5: docker backup
dockerbackup
# step 6: db backup
if [ "$db_backup" = true ]; then
backup_db
fi
# step 7 : custom backup
custombackup
# archive data
echo "Creating backup archive..." >&2
tar -czvf $backup_path/backify-$timestamp.tar.gz $tmpdir >> /var/log/backify-compress.log
# push data to server
push
# remove temp files
rm -r $tmpdir
echo "Voila, enjoy the rest of the day" >&2
else
echo "Backup is disabled in the configuration" >&2
fi
}
runbackup
runbackup