One-click backup script backup.sh (now with COS / Aliyun Drive support)

Modified from 秋水逸冰 (Teddysun)'s backup.sh.
Added uploading to Tencent Cloud COS and Aliyun Drive.
Strengthened the encryption: PBKDF2 key derivation with 200,000 iterations, and the openssl message digest (-md) switched to SHA-256.

To summarize, backup.sh's features:

  1. Full or selective backup of MySQL/MariaDB/Percona databases;
  2. Backup of specified directories or files;
  3. Optional encryption of the backup file (requires the openssl command);
  4. Optional upload to Google Drive (requires rclone, installed and configured beforehand);
  5. Optional upload to Tencent Cloud COS (requires coscmd, installed and configured beforehand);
  6. Optional upload to Aliyun Drive (requires aliyunpan, installed and configured beforehand);
  7. Optional upload to an FTP server;
  8. Optionally, when local backups older than the specified number of days are deleted, the files with the same names on Google Drive / COS / Aliyun Drive are deleted as well.

Modifying and configuring the script

Notes on the variables (a filled-in example sketch follows this list):

  • ENCRYPTFLG (encryption flag; true to encrypt, false not to; default is to encrypt)
  • BACKUPPASS (encryption password; important, be sure to change it)
  • LOCALDIR (directory where backups are stored; set it as you like)
  • TEMPDIR (temporary directory used while creating the backup; set it as you like)
  • LOGFILE (path of the log file written by the script)
  • MYSQL_ROOT_PASSWORD (root password of MySQL/MariaDB/Percona)
  • MYSQL_DATABASE_NAME (names of the MySQL/MariaDB/Percona databases to back up; leave empty to back up all databases)

※ MYSQL_DATABASE_NAME is an array variable, so multiple databases can be specified. For example:

MYSQL_DATABASE_NAME[0]="phpmyadmin"
MYSQL_DATABASE_NAME[1]="test"
  • BACKUP (list of directories or files to back up; leave empty to back up no directories or files)

※ BACKUP is an array variable, so multiple entries can be specified. For example:

BACKUP[0]="/data/www/default/test.tgz"
BACKUP[1]="/data/www/default/test/"
BACKUP[2]="/data/www/default/test2/"
  • LOCALAGEDAILIES (number of days after which old local backup files are deleted; default is 7 days)

  • DELETE_REMOTE_FILE_FLG (flag for also deleting the backup files on Google Drive/COS/AliyunDrive/FTP; true to delete, false not to)

  • RCLONE_NAME (the remote name chosen when running rclone config; must be set)

  • RCLONE_FOLDER (remote folder name to back up into; it is created automatically if it does not exist on Google Drive. Default is empty, i.e. the root directory)

  • RCLONE_FLG (flag for uploading the local backup file to Google Drive; true to upload, false not to)

  • COS_FOLDER (remote folder/prefix to back up into on COS; it is created automatically if it does not exist, but it must not be left empty or the COS upload is skipped)

  • COS_FLG (flag for uploading the local backup file to COS; true to upload, false not to)

  • ALI_FLG (flag for uploading the local backup file to AliyunDrive; true to upload, false not to)

  • ALI_FOLDER (remote folder name to back up into on AliyunDrive; the upload fails if the folder does not exist, so create it manually beforehand, and do not leave it empty)

  • ALI_PY_FILE (path to aliyunpan's main.py / CLI executable)

  • ALI_REFRESH_TOKEN (the Aliyun Drive REFRESH_TOKEN)

  • FTP_FLG (flag for uploading the backup file to an FTP server; true to upload, false not to)

  • FTP_HOST (hostname or IP address of the FTP server)

  • FTP_USER (FTP username)

  • FTP_PASS (password of the FTP user)

  • FTP_DIR (remote directory on the FTP server, e.g. public_html)
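
A minimal sketch of how the config section of backup.sh might be filled in (all values below are placeholders; adjust paths, passwords and names to your own environment):

ENCRYPTFLG=true
BACKUPPASS="change-this-password"
LOCALDIR="/home/server/backup/"
TEMPDIR="/home/server/backup/temp/"
LOGFILE="/home/server/backup/backup.log"
MYSQL_ROOT_PASSWORD="mysql-root-password"
MYSQL_DATABASE_NAME[0]="phpmyadmin"
BACKUP[0]="/data/www/default/"
LOCALAGEDAILIES="7"
DELETE_REMOTE_FILE_FLG=true
RCLONE_FLG=true
RCLONE_NAME="gdrive"
RCLONE_FOLDER="backup"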

A few things to note:

  1. The script must be run as root;
  2. The script uses openssl for encryption; install it beforehand;
  3. By default the script backs up all databases (a full backup);
  4. The command to decrypt a backup file is:
openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in [ENCRYPTED BACKUP] -out decrypted_backup.tgz -pass pass:[BACKUPPASS] -d -md sha256
  5. After decrypting, extract the backup with:
tar -zxPf [DECRYPTION BACKUP FILE]

About the -P option:
By default tar stores paths as relative paths; adding -P lets tar archive files with their absolute paths, so you also need the -P option when extracting.
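
For example, decrypting and then extracting in one go could look like this (the encrypted file name below is hypothetical; the password is the one set in BACKUPPASS):

openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in myhost_20220330023000.tgz.enc -out decrypted_backup.tgz -pass pass:[BACKUPPASS] -d -md sha256
tar -zxPf decrypted_backup.tgz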

Configuring the rclone command (optional)

rclone is a command-line tool for uploading to and downloading from Google Drive (among other storage backends). Official website:
https://rclone.org/

You can install rclone with the following command (RedHat-family example; remember to install unzip first):

yum -y install unzip && wget -qO- https://rclone.org/install.sh | bash

Then run the following command to start the configuration:

rclone config

Following the referenced article, when you reach "Use auto config?", answer n (no auto config); then, as prompted, open the URL rclone prints in a browser, click Accept, paste the string shown in the browser back into the terminal to complete the authorization, and quit the config. The referenced article also covers mounting, but there is no need to mount Google Drive here.
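
Once the remote is set up, it is worth verifying it from the command line before relying on the script. A quick check, assuming the remote was named gdrive during rclone config:

rclone lsd gdrive:
rclone mkdir gdrive:backup

The first command lists the top-level folders of the remote; the second pre-creates the folder you plan to use as RCLONE_FOLDER (the script would also create it automatically).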

Configuring the coscmd command (optional)

Install via pip

Run pip to install:

pip install coscmd

Once the installation succeeds, you can check the current version with the -v or --version option.

Upgrade via pip

After installation, upgrade with:

pip install coscmd -U

Note: with pip 10.0.0 or later, upgrading or installing the dependencies may fail; pip 9.x is recommended (pip install pip==9.0.0). If you installed a recent Python release (e.g. 3.9.0), pip is already bundled and does not need to be installed separately.

Quick configuration

In most cases, if you only need basic operations, you can follow the example below for a quick configuration.

Note: before configuring, create a bucket in the COS console to hold the configuration parameters (e.g. configure-bucket-1250000000) and create the API key (SecretId/SecretKey).

coscmd config -a AChT4ThiXAbpBDEFGhT4ThiXAbp**** -s WE54wreefvds3462refgwewe**** -b configure-bucket-1250000000 -r ap-chengdu
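
After configuring, a quick sanity check is to upload and list a small test object (the local file and COS paths below are only examples):

coscmd upload /etc/hostname backup-test/hostname.txt
coscmd list backup-test/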

Configuring the aliyunpan command (optional)

Install aliyunpan

Original commands:

git clone https://github.com/wxy1343/aliyunpan.git
cd aliyunpan
pip install -r requirements.txt

New commands (updated 2022-03-30):

pip install aliyunpan
pip install aliyunpan --upgrade
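
You can confirm the package is installed (and see which version) with pip itself:

pip show aliyunpan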

Setting password / refresh_token

Getting the token from the web client is not recommended; it causes quite a few problems.

You can log in with a username and password:

echo "username: 'xxxxx'"  >  ~/.config/aliyunpan.yaml
echo "password: 'xxxxx'" >> ~/.config/aliyunpan.yaml

Alternatively, you can find the refresh_token in the log file of the mobile (Android) client:

/sdcard/Android/data/com.alicloud.databox/files/logs/trace/userId/yunpan/latest.log

echo "refresh_token: 'xxxxx'"  >  ~/.config/aliyunpan.yaml

Running the script to start a backup

./backup.sh

By default the script prints the backup progress and reports the total elapsed time at the end.
If you want to run it from cron, there is no need for progress output in the foreground; writing the log is enough.
In that case, change the script's log function slightly.

log() {
echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}

Change it to:

log() {
echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}
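
With this change the script runs silently; you can still follow progress by tailing the log file configured in LOGFILE (the path below is just an example):

tail -f /home/server/backup/backup.log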

Automating backups with cron

My example:

crontab -l # list the current cron jobs
crontab -e # edit the crontab to add the job
30 2 * * * /home/server/backup/backup.sh # run the backup every day at 2:30 AM
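
Since cron normally mails any output a job produces (when mail is configured), you may also want to redirect stdout and stderr in the crontab entry, for example:

30 2 * * * /home/server/backup/backup.sh >/dev/null 2>&1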

Appendix: the script

#!/usr/bin/env bash
# Copyright (C) 2013 - 2020 Teddysun <i@teddysun.com>
#
# This file is part of the LAMP script.
#
# LAMP is a powerful bash script for the installation of
# Apache + PHP + MySQL/MariaDB and so on.
# You can install Apache + PHP + MySQL/MariaDB in a very easy way.
# Just need to input numbers to choose what you want to install before installation.
# And all things will be done in a few minutes.
#
# Description: Auto backup shell script
# Description URL: https://teddysun.com/469.html
#
# Website: https://lamp.sh
# Github: https://github.com/teddysun/lamp
#
# You must modify the config before running it!!!
# Backup MySQL/MariaDB databases, files and directories
# Backup file is encrypted with AES-256-CBC using PBKDF2 (200,000 iterations) and a SHA-256 digest (optional)
# Auto transfer backup file to Google Drive (requires the rclone command) (optional)
# Auto transfer backup file to Tencent Cloud COS (requires the coscmd command) (optional)
# Auto transfer backup file to AliyunDrive (requires the aliyunpan CLI) (optional)
# Auto transfer backup file to FTP server (optional)
# Auto delete Google Drive's/COS's/AliyunDrive's or FTP server's remote file (optional)

[[ $EUID -ne 0 ]] && echo "Error: This script must be run as root!" && exit 1

########## START OF CONFIG ##########

# Encrypt flag (true: encrypt, false: not encrypt)
ENCRYPTFLG=true

# WARNING: KEEP THE PASSWORD SAFE!!!
# The password used to encrypt the backup
# To decrypt backups made by this script, run the following command:
# openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in [encrypted backup] -out decrypted_backup.tgz -pass pass:[backup password] -d -md sha256
BACKUPPASS=""

# Directory to store backups
LOCALDIR=""

# Temporary directory used during backup creation
TEMPDIR="/tmp/backups/temp/"

# File to log the outcome of backups
LOGFILE=""

# OPTIONAL:
# If you want to backup the MySQL database, enter the MySQL root password below, otherwise leave it blank
MYSQL_ROOT_PASSWORD=""

# Below is a list of MySQL database name that will be backed up
# If you want backup ALL databases, leave it blank.
MYSQL_DATABASE_NAME[0]=""

# Below is a list of files and directories that will be backed up in the tar backup
# For example:
# File: /data/www/default/test.tgz
# Directory: /data/www/default/test
BACKUP[0]=""

# Number of days to store daily local backups (default 7 days)
LOCALAGEDAILIES="7"

# Delete remote file from Google Drive/COS/AliyunDrive or FTP server flag (true: delete, false: not delete)
DELETE_REMOTE_FILE_FLG=true

# Rclone remote name
RCLONE_NAME=""

# Rclone remote folder name (default "")
RCLONE_FOLDER=""

# Cos remote folder name (default "")
COS_FOLDER=""

# AliyunDrive remote folder name (default "")
ALI_FOLDER=""

# Upload local file to FTP server flag (true: upload, false: not upload)
FTP_FLG=false

# Upload local file to Google Drive flag (true: upload, false: not upload)
RCLONE_FLG=false

# Upload local file to Cos flag (true: upload, false: not upload)
COS_FLG=false

# Upload local file to AliyunDrive flag (true: upload, false: not upload)
ALI_FLG=true
ALI_PY_FILE="/usr/local/bin/aliyunpan-cli"

# FTP server
# OPTIONAL: If you want to upload to FTP server, enter the Hostname or IP address below
FTP_HOST=""

# FTP username
# OPTIONAL: If you want to upload to FTP server, enter the FTP username below
FTP_USER=""

# FTP password
# OPTIONAL: If you want to upload to FTP server, enter the username's password below
FTP_PASS=""

# FTP server remote folder
# OPTIONAL: If you want to upload to FTP server, enter the FTP remote folder below
# For example: public_html
FTP_DIR=""

########## END OF CONFIG ##########

# Date & Time
DAY=$(date +%d)
MONTH=$(date +%m)
YEAR=$(date +%C%y)
BACKUPDATE=$(date +%Y%m%d%H%M%S)
# Backup file name
TARFILE="${LOCALDIR}""$(hostname)"_"${BACKUPDATE}".tgz
# Encrypted backup file name
ENC_TARFILE="${TARFILE}.enc"
# Backup MySQL dump file name
SQLFILE="${TEMPDIR}mysql_${BACKUPDATE}.sql"

log() {
echo "$(date "+%Y-%m-%d %H:%M:%S")" "$1"
echo -e "$(date "+%Y-%m-%d %H:%M:%S")" "$1" >> ${LOGFILE}
}

# Check for list of mandatory binaries
check_commands() {
# This section checks for all of the binaries used in the backup
# Do not check mysql command if you do not want to backup the MySQL database
if [ -z "${MYSQL_ROOT_PASSWORD}" ]; then
BINARIES=( cat cd du date dirname echo openssl pwd rm tar )
else
BINARIES=( cat cd du date dirname echo openssl mysql mysqldump pwd rm tar )
fi

# Iterate over the list of binaries, and if one isn't found, abort
for BINARY in "${BINARIES[@]}"; do
if [ ! "$(command -v "$BINARY")" ]; then
log "$BINARY is not installed. Install it and try again"
exit 1
fi
done

# check rclone command
RCLONE_COMMAND=false
if [ "$(command -v "rclone")" ]; then
RCLONE_COMMAND=true
fi

# check COS command
COS_COMMAND=false
if [ "$(command -v "coscmd")" ]; then
COS_COMMAND=true
fi

# check AliyunDrive command
ALI_COMMAND=false
if [ -f "${ALI_PY_FILE}" ]; then
ALI_COMMAND=true
fi

# check ftp command
if ${FTP_FLG}; then
if [ ! "$(command -v "ftp")" ]; then
log "ftp is not installed. Install it and try again"
exit 1
fi
fi
}

calculate_size() {
local file_name=$1
local file_size=$(du -h $file_name 2>/dev/null | awk '{print $1}')
if [ "x${file_size}" = "x" ]; then
echo "unknown"
else
echo "${file_size}"
fi
}

# Backup MySQL databases
mysql_backup() {
if [ -z "${MYSQL_ROOT_PASSWORD}" ]; then
log "MySQL root password not set, MySQL backup skipped"
else
log "MySQL dump start"
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" 2>/dev/null <<EOF
exit
EOF
if [ $? -ne 0 ]; then
log "MySQL root password is incorrect. Please check it and try again"
exit 1
fi
if [ "${MYSQL_DATABASE_NAME[@]}" == "" ]; then
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" --all-databases > "${SQLFILE}" 2>/dev/null
if [ $? -ne 0 ]; then
log "MySQL all databases backup failed"
exit 1
fi
log "MySQL all databases dump file name: ${SQLFILE}"
#Add MySQL backup dump file to BACKUP list
BACKUP=(${BACKUP[@]} ${SQLFILE})
else
for db in ${MYSQL_DATABASE_NAME[@]}; do
unset DBFILE
DBFILE="${TEMPDIR}${db}_${BACKUPDATE}.sql"
mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" ${db} > "${DBFILE}" 2>/dev/null
if [ $? -ne 0 ]; then
log "MySQL database name [${db}] backup failed, please check database name is correct and try again"
exit 1
fi
log "MySQL database name [${db}] dump file name: ${DBFILE}"
#Add MySQL backup dump file to BACKUP list
BACKUP=(${BACKUP[@]} ${DBFILE})
done
fi
log "MySQL dump completed"
fi
}

start_backup() {
[ "${#BACKUP[@]}" -eq 0 ] && echo "Error: You must to modify the [$(basename $0)] config before run it!" && exit 1

log "Tar backup file start"
tar -zcPf ${TARFILE} ${BACKUP[@]}
if [ $? -gt 1 ]; then
log "Tar backup file failed"
exit 1
fi
log "Tar backup file completed"

# Encrypt tar file
if ${ENCRYPTFLG}; then
log "Encrypt backup file start"
openssl enc -aes256 -salt -pbkdf2 -iter 200000 -in "${TARFILE}" -out "${ENC_TARFILE}" -pass pass:"${BACKUPPASS}" -md sha256
log "Encrypt backup file completed"

# Delete unencrypted tar
log "Delete unencrypted tar file: ${TARFILE}"
rm -f ${TARFILE}
fi

# Delete MySQL temporary dump file
for sql in $(ls ${TEMPDIR}*.sql); do
log "Delete MySQL temporary dump file: ${sql}"
rm -f ${sql}
done

if ${ENCRYPTFLG}; then
OUT_FILE="${ENC_TARFILE}"
else
OUT_FILE="${TARFILE}"
fi
log "File name: ${OUT_FILE}, File size: $(calculate_size ${OUT_FILE})"
}

# Transfer backup file to Google Drive
# If you want to install rclone command, please visit website:
# https://rclone.org/downloads/
rclone_upload() {
if ${RCLONE_FLG} && ${RCLONE_COMMAND}; then
[ -z "${RCLONE_NAME}" ] && log "Error: RCLONE_NAME can not be empty!" && return 1
if [ -n "${RCLONE_FOLDER}" ]; then
rclone ls ${RCLONE_NAME}:${RCLONE_FOLDER} 2>&1 > /dev/null
if [ $? -ne 0 ]; then
log "Create the path ${RCLONE_NAME}:${RCLONE_FOLDER}"
rclone mkdir ${RCLONE_NAME}:${RCLONE_FOLDER}
fi
fi
log "Tranferring backup file: ${OUT_FILE} to Google Drive"
rclone copy ${OUT_FILE} ${RCLONE_NAME}:${RCLONE_FOLDER} >> ${LOGFILE}
if [ $? -ne 0 ]; then
log "Error: Tranferring backup file: ${OUT_FILE} to Google Drive failed"
return 1
fi
log "Tranferring backup file: ${OUT_FILE} to Google Drive completed"
fi
}


# Transferring backup file to COS
cos_upload() {
if ${COS_FLG} && ${COS_COMMAND}; then
[ -z "${COS_FOLDER}" ] && log "Error: COS_FOLDER can not be empty!" && return 1
log "Transferring backup file: ${OUT_FILE} to COS"
coscmd upload ${OUT_FILE} ${COS_FOLDER}/ >> ${LOGFILE}
if [ $? -ne 0 ]; then
log "Error: Transferring backup file: ${OUT_FILE} to COS failed"
return 1
fi
log "Transferring backup file: ${OUT_FILE} to COS completed"
fi
}

# Transferring backup file to AliyunDrive
ali_upload() {
if ${ALI_FLG} && ${ALI_COMMAND}; then
[ -z "${ALI_FOLDER}" ] && log "Error: ALI_FOLDER can not be empty!" && return 1
log "Transferring backup file: ${OUT_FILE} to AliyunDrive"
${ALI_PY_FILE} upload ${OUT_FILE} ${ALI_FOLDER} # >> ${LOGFILE}
if [ $? -ne 0 ]; then
log "Error: Transferring backup file: ${OUT_FILE} to AliyunDrive failed"
return 1
fi
log "Transferring backup file: ${OUT_FILE} to AliyunDrive completed"
fi
}

# Transferring backup file to FTP server
ftp_upload() {
if ${FTP_FLG}; then
[ -z "${FTP_HOST}" ] && log "Error: FTP_HOST can not be empty!" && return 1
[ -z "${FTP_USER}" ] && log "Error: FTP_USER can not be empty!" && return 1
[ -z "${FTP_PASS}" ] && log "Error: FTP_PASS can not be empty!" && return 1
[ -z "${FTP_DIR}" ] && log "Error: FTP_DIR can not be empty!" && return 1
local FTP_OUT_FILE=$(basename ${OUT_FILE})
log "Tranferring backup file: ${FTP_OUT_FILE} to FTP server"
ftp -in ${FTP_HOST} 2>&1 >> ${LOGFILE} <<EOF
user $FTP_USER $FTP_PASS
binary
lcd $LOCALDIR
cd $FTP_DIR
put $FTP_OUT_FILE
quit
EOF
if [ $? -ne 0 ]; then
log "Error: Tranferring backup file: ${FTP_OUT_FILE} to FTP server failed"
return 1
fi
log "Tranferring backup file: ${FTP_OUT_FILE} to FTP server completed"
fi
}

# Get file date
get_file_date() {
#Approximate a 30-day month and 365-day year
DAYS=$(( $((10#${YEAR}*365)) + $((10#${MONTH}*30)) + $((10#${DAY})) ))
unset FILEYEAR FILEMONTH FILEDAY FILEDAYS FILEAGE
FILEYEAR=$(echo "$1" | cut -d_ -f2 | cut -c 1-4)
FILEMONTH=$(echo "$1" | cut -d_ -f2 | cut -c 5-6)
FILEDAY=$(echo "$1" | cut -d_ -f2 | cut -c 7-8)
if [[ "${FILEYEAR}" && "${FILEMONTH}" && "${FILEDAY}" ]]; then
#Approximate a 30-day month and 365-day year
FILEDAYS=$(( $((10#${FILEYEAR}*365)) + $((10#${FILEMONTH}*30)) + $((10#${FILEDAY})) ))
FILEAGE=$(( 10#${DAYS} - 10#${FILEDAYS} ))
return 0
fi
return 1
}

# Delete Google Drive's old backup file
delete_gdrive_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${RCLONE_COMMAND}; then
rclone ls ${RCLONE_NAME}:${RCLONE_FOLDER}/${FILENAME} 2>&1 > /dev/null
if [ $? -eq 0 ]; then
rclone delete ${RCLONE_NAME}:${RCLONE_FOLDER}/${FILENAME} >> ${LOGFILE}
if [ $? -eq 0 ]; then
log "Google Drive's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete Google Drive's old backup file: ${FILENAME}"
fi
else
log "Google Drive's old backup file: ${FILENAME} is not exist"
fi
fi
}

# Delete COS's old backup file
delete_cos_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${COS_COMMAND}; then
coscmd delete ${COS_FOLDER}/${FILENAME} >> ${LOGFILE}
if [ $? -eq 0 ]; then
log "COS's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete COS's old backup file: ${FILENAME}"
fi
fi
}

# Delete AliyunDrive's old backup file
delete_ali_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${ALI_COMMAND}; then
${ALI_PY_FILE} delete ${ALI_FOLDER}${FILENAME} # >> ${LOGFILE}
if [ $? -eq 0 ]; then
log "AliyunDrive's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete AliyunDrive's old backup file: ${FILENAME}"
fi
fi
}

# Delete FTP server's old backup file
delete_ftp_file() {
local FILENAME=$1
if ${DELETE_REMOTE_FILE_FLG} && ${FTP_FLG}; then
ftp -in ${FTP_HOST} 2>&1 >> ${LOGFILE} <<EOF
user $FTP_USER $FTP_PASS
cd $FTP_DIR
del $FILENAME
quit
EOF
if [ $? -eq 0 ]; then
log "FTP server's old backup file: ${FILENAME} has been deleted"
else
log "Failed to delete FTP server's old backup file: ${FILENAME}"
fi
fi
}

# Clean up old file
clean_up_files() {
cd ${LOCALDIR} || exit
if ${ENCRYPTFLG}; then
LS=($(ls *.enc))
else
LS=($(ls *.tgz))
fi
for f in ${LS[@]}; do
get_file_date ${f}
if [ $? -eq 0 ]; then
if [[ ${FILEAGE} -gt ${LOCALAGEDAILIES} ]]; then
rm -f ${f}
log "Old backup file name: ${f} has been deleted"
delete_gdrive_file ${f}
delete_ftp_file ${f}
delete_cos_file ${f}
delete_ali_file ${f}
fi
fi
done
}

# Main progress
STARTTIME=$(date +%s)

# Check if the backup folders exist and are writeable
[ ! -d "${LOCALDIR}" ] && mkdir -p ${LOCALDIR}
[ ! -d "${TEMPDIR}" ] && mkdir -p ${TEMPDIR}

log "Backup progress start"
check_commands
mysql_backup
start_backup
log "Backup progress complete"

log "Upload progress start"
rclone_upload
ftp_upload
cos_upload
ali_upload
log "Upload progress complete"

log "Cleaning up"
clean_up_files
ENDTIME=$(date +%s)
DURATION=$((ENDTIME - STARTTIME))
log "All done"
log "Backup and transfer completed in ${DURATION} seconds"


One-click backup script backup.sh (now with COS / Aliyun Drive support)
https://cuojue.org/read/backup-sh.html
Author: WeiCN
Published: March 30, 2021
Updated: March 31, 2022