initial commit

2025-11-25 12:27:53 +03:30
commit f9d16ab078
102 changed files with 11156 additions and 0 deletions

backup/README.md
# Backup helper services
This directory holds the lightweight services that create and restore the stack backups. They are kept separate from the main `docker-compose.yml` so you can start them on demand.
## Files
- `backup.yml` defines the scheduled backup sidecars:
  - `pgbackup` dumps the configured Postgres databases into `./backups/databases`.
  - `backup-manager` waits for the latest DB dump, archives mounted application volumes, and rotates dated backup folders in `./backups`.
- `restore.yml` provides a one-shot `restore` helper container that mounts the same named volumes and the `./backups` directory for restores.
- `../scripts/backup/*.sh` — the shell scripts that power the services:
  - `pg-dump.sh`, `manage-backups.sh`, and `restore.sh` are the generic helpers used by the containers.
  - `restore-odoo.sh`, `restore-gitea.sh`, and `restore-opencloud.sh` are wrappers that restore their respective volumes and (for Odoo/Gitea) the matching Postgres database dump in one shot.
## Configuring backups (`backup.yml`)
Environment variables (usually sourced from `.env`) let you control what gets dumped and how long backups are kept:
| Variable | Used by | Purpose |
| --- | --- | --- |
| `POSTGRES_ADMIN_USER`, `POSTGRES_ADMIN_PASSWORD` | `pgbackup` | Credentials for the Postgres superuser/operator running `pg_dump`. |
| `BKP_DB_LIST` | `pgbackup` | Space-separated list of database names to dump each run. |
| `BKP_RETENTION_DAYS` | `backup-manager` | Delete dated backup directories and SQL dumps older than this many days (default 7). |
| `BKP_RETENTION_COUNT` | `backup-manager` | Keep only the newest N dated directories when set greater than 0 (directories newer than the day threshold are kept regardless). |
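The two retention rules combine roughly as follows: a dated directory is removed only when it is past the age threshold and, when the count is set, not among the newest N. This is a hypothetical standalone sketch of that rule (`prune_backups` and its arguments are illustrative; the real `manage-backups.sh` may differ):

```bash
#!/bin/sh
# Illustrative sketch of the retention rule: a dated directory survives if it
# is newer than DAYS days, or (when COUNT > 0) among the newest COUNT entries.
prune_backups() {
  dir=$1 days=$2 count=$3
  kept=0
  # list newest first so the first COUNT entries are always kept
  for d in $(ls -1t "$dir"); do
    kept=$((kept + 1))
    if [ "$count" -gt 0 ] && [ "$kept" -le "$count" ]; then
      continue  # within the newest-N window: keep regardless of age
    fi
    # remove only entries past the age threshold (newer ones kept regardless)
    find "$dir/$d" -maxdepth 0 -type d -mtime +"$days" -exec rm -rf {} +
  done
}
```

With `count=0` the age rule alone decides, mirroring the table's description of `BKP_RETENTION_DAYS`.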
`backup-manager` mounts the live volumes (`odoo-filestore`, `odoo-config`, `gitea_data`, etc.) read-only so it can archive their contents into timestamped tarballs. It also moves the SQL dumps from `./backups/databases` into each dated directory after the dump job finishes. The script writes a marker file (`.last_backup_complete`) so the manager only starts once the database dump completes; adjust `DB_BACKUP_TIMEOUT` in the environment if you want to change the wait time (default 600s).
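The marker-file hand-off amounts to a simple polling loop; this is an illustrative sketch (the `wait_for_marker` function and its interface are hypothetical, not the script's actual API):

```bash
#!/bin/sh
# Illustrative sketch of the hand-off between pgbackup and backup-manager:
# poll once per second for the marker file until it appears or we time out.
wait_for_marker() {
  marker=$1
  timeout=${2:-600}  # 600s mirrors the DB_BACKUP_TIMEOUT default
  waited=0
  while [ ! -f "$marker" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      return 1  # gave up: the dump never signalled completion
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}
```

The manager would call something like `wait_for_marker /backups/databases/.last_backup_complete "$DB_BACKUP_TIMEOUT"` before archiving.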
To start the scheduled backup helpers alongside the stack:
```bash
docker compose -f docker-compose.yml -f backup/backup.yml up -d pgbackup backup-manager
```
## Restoring backups (`restore.yml`)
The restore helper uses the same Postgres image so it already ships with `psql`, `tar`, and `gzip`. Run the helper with your main compose file so the volumes resolve correctly:
```bash
# Open a shell inside the helper
docker compose -f docker-compose.yml -f backup/restore.yml run --rm restore sh
```
Or call the bundled restore script directly via the helper entrypoint:
```bash
# List dated backup directories
docker compose -f docker-compose.yml -f backup/restore.yml run --rm restore list
# Restore the Odoo filestore archive into the named volume (services should be stopped first)
docker compose -f docker-compose.yml -f backup/restore.yml run --rm restore \
restore-volume odoo-filestore /backups/2025-11-13/odoo_filestore_2025-11-13.tar.gz
# Restore a database dump (uses POSTGRES_HOST env, defaults to `postgres`)
docker compose -f docker-compose.yml -f backup/restore.yml run --rm restore \
restore-db /backups/2025-11-13/odoodb_2025-11-13.sql odoodb postgres "$POSTGRES_ADMIN_PASSWORD"
```
### Shortcut restore scripts
For the most common restore scenarios, use the convenience wrappers in `scripts/backup/`:
- `restore-odoo.sh BACKUP_ID [DB_NAME] [DB_USER]` stops the `odoo` service, restores the filestore/config/addons/web/logs/db-data archives and, when `ODOO_DB_PASSWORD` is present, replays the `backups/<BACKUP_ID>/<db>_<BACKUP_ID>.sql(.gz)` dump that was moved into the same dated folder. Defaults: DB name from `ODOO_DB`/`ODOO_DB_NAME`, user from `ODOO_DB_USER`, fallbacks `odoo`/`odoouser`.
- `restore-gitea.sh BACKUP_ID [DB_NAME] [DB_USER]` stops the `gitea` service, restores the data archive and, when `GITEA_DB_PASSWORD` is available, the `backups/<BACKUP_ID>/<db>_<BACKUP_ID>.sql(.gz)` dump residing alongside the volume archives. Defaults: DB name/user from `GITEA_DB`/`GITEA_DB_NAME` and `GITEA_DB_USER`, fallback `gitea`.
- `restore-opencloud.sh BACKUP_ID` stops the `opencloud` service and restores the OpenCloud data archive before starting the service again.
All three scripts automatically source the repository `.env` (if present) for credentials, run from the repository root, and restart the stopped service once extraction finishes. If a matching SQL dump is found but the corresponding password variable is not set, the script skips the database restore with a warning so you can retry after exporting the secret. The scripts expect `docker compose` to be on the `PATH` and the helper `restore` service to be available.
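The `.env` sourcing these wrappers perform typically boils down to the classic export-all pattern (a sketch assuming a plain `KEY=value` file; the wrappers' exact parsing may differ):

```bash
#!/bin/sh
# Sketch of sourcing a KEY=value .env file so its entries become exported
# environment variables for child processes (docker compose, psql, ...).
load_env() {
  env_file=${1:-.env}
  [ -f "$env_file" ] || return 0  # no .env present: nothing to do
  set -a          # auto-export every variable assigned while sourcing
  . "$env_file"
  set +a
}
```

Because of `set -a`, variables like `GITEA_DB_PASSWORD` are visible to the `docker compose run` children without an explicit `export` per variable.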
The restore helper now defaults to the service database user/password for connectivity (so values like `ODOO_DB_USER`, `ODOO_DB_PASSWORD`, `ODOO_DB_HOST`, etc. are enough for most restores). By default it drops the existing database before replaying the dump (`DROP_EXISTING_DB=1`, overridable per service via `ODOO_DROP_EXISTING_DB` / `GITEA_DROP_EXISTING_DB`). Set the flag to `0` if you prefer to manage cleanup manually. If the configured user lacks the privileges to drop/create databases you can export `POSTGRES_ADMIN_USER`, `POSTGRES_ADMIN_PASSWORD`, and optionally `POSTGRES_ADMIN_DB` to elevate the operation; otherwise prepare the database yourself before rerunning the script.
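The drop-and-replay sequence corresponds roughly to the following `psql` invocations, shown here as a dry-run command builder so the flow is visible (`build_restore_plan` is hypothetical; the helper's real implementation may differ, though the client commands are standard PostgreSQL ones):

```bash
#!/bin/sh
# Dry-run sketch of the restore-db flow: with DROP_EXISTING_DB=1 (the default)
# the target database is dropped and recreated before the dump is replayed.
# build_restore_plan DB USER HOST DUMP prints the commands it would run.
build_restore_plan() {
  db=$1 user=$2 host=$3 dump=$4
  if [ "${DROP_EXISTING_DB:-1}" = "1" ]; then
    # connect to the maintenance DB to drop/recreate the target
    echo "psql -h $host -U $user -d postgres -c 'DROP DATABASE IF EXISTS $db'"
    echo "psql -h $host -U $user -d postgres -c 'CREATE DATABASE $db OWNER $user'"
  fi
  case $dump in
    *.gz) echo "gunzip -c $dump | psql -h $host -U $user -d $db" ;;
    *)    echo "psql -h $host -U $user -d $db -f $dump" ;;
  esac
}
```

If the service user cannot drop/create databases, the same plan would be run with `POSTGRES_ADMIN_USER` instead, as described above.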
Example:
```bash
# Restore Odoo from a dated backup directory and replay its DB dump
./scripts/backup/restore-odoo.sh 2025-11-13_04-15
# Restore Gitea using custom database credentials
GITEA_DB_PASSWORD=supersecret ./scripts/backup/restore-gitea.sh 2025-11-13_04-15 giteadb gitea
# Restore OpenCloud files only
./scripts/backup/restore-opencloud.sh 2025-11-13_04-15
```
Under the hood the helper mounts:
- `./backups` (read-only) so you can access tarballs and SQL dumps.
- The named volumes (`odoo-filestore`, `odoo-config`, `odoo-addons`, `gitea_data`, `opencloud-data`, etc.) so extraction happens directly into the running stack's storage.
- `restore.sh`, which provides the `list`, `restore-volume`, and `restore-db` commands (run `restore help` for usage inside the container).
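Conceptually, `restore-volume` amounts to emptying the mounted volume and unpacking the tarball in its place. A minimal sketch using plain `tar` (the helper actually installs `bsdtar` for xattr-aware extraction, and `restore.sh` itself may differ):

```bash
#!/bin/sh
# Sketch of restoring a volume archive: TARGET is the directory where the
# named volume is mounted inside the helper container.
restore_volume() {
  target=$1 archive=$2
  [ -d "$target" ] || { echo "no such mount: $target" >&2; return 1; }
  # remove current contents so files deleted since the backup do not linger
  find "$target" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
  tar -xzf "$archive" -C "$target"
}
```

This is also why the best-practice note below insists on stopping services first: the wipe-then-extract step is not safe against concurrent writes.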
## Best practices
- Stop (or put into maintenance mode) any service that uses a target volume or database before restoring, so live writes don't fight the restore.
- After a restore, restart the services you paused and verify permissions inside the volume; you might need to run a `chown` if the target container expects a different UID/GID.
- Test your backup/restore workflow in a safe environment so you know the round-trip works before you need it in production.
- Keep an eye on `/var/log/backup.log` inside the `pgbackup` and `backup-manager` containers if you need to confirm when jobs run or troubleshoot failures.
## Troubleshooting
- **Backups didn't rotate:** confirm `BKP_RETENTION_DAYS`/`BKP_RETENTION_COUNT` are exported and that cron has run long enough for the age threshold to be exceeded.
- **Manager starts too early:** raise `DB_BACKUP_TIMEOUT` or make sure the database list in `BKP_DB_LIST` only contains existing databases.
- **Restore fails with permission errors:** ensure the target volume is mounted read-write for the restore step, or restore into a temporary directory and adjust ownership manually.
If you find yourself running the same restore commands frequently, consider adding shell aliases or Makefile targets (e.g., `make restore-odoo DATE=2025-11-13`).
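For instance, a small wrapper function in your shell profile could validate the backup id before dispatching, so typos fail fast instead of mid-restore (hypothetical; the `RESTORE_CMD` override exists only to make the sketch testable):

```bash
#!/bin/sh
# Hypothetical convenience wrapper: check that BACKUP_ID looks like a dated
# directory name (YYYY-MM-DD or YYYY-MM-DD_HH-MM) before calling the script.
restore_odoo() {
  id=$1
  cmd=${RESTORE_CMD:-./scripts/backup/restore-odoo.sh}  # overridable for dry runs
  case $id in
    [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]|[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]-[0-9][0-9])
      "$cmd" "$id" ;;
    *)
      echo "usage: restore_odoo YYYY-MM-DD[_HH-MM]" >&2
      return 2 ;;
  esac
}
```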

backup/backup.yml
---
services:
pgbackup:
image: postgres:18-alpine
container_name: pgbackup
restart: always
environment:
PGPASSWORD: ${POSTGRES_ADMIN_PASSWORD}
POSTGRES_HOST: postgres
POSTGRES_USER: ${POSTGRES_ADMIN_USER}
DB_LIST: ${BKP_DB_LIST}
TIMEZONE: ${TIMEZONE}
volumes:
- ./backups:/backups
- ./scripts/backup/pg-dump.sh:/pg-dump.sh
- ./scripts/backup/pg-dump.cron:/etc/crontabs/root
entrypoint: ["/bin/sh", "-c", "# map host /backups ownership to container backup user\nHOST_UID=$$(stat -c '%u' /backups 2>/dev/null || echo 1000) && HOST_GID=$$(stat -c '%g' /backups 2>/dev/null || echo 1000) && addgroup -g \"$$HOST_GID\" backup 2>/dev/null || true && adduser -D -u \"$$HOST_UID\" -G backup backup 2>/dev/null || true && mkdir -p /var/log /backups && chown -R \"$$HOST_UID\":\"$$HOST_GID\" /var/log /backups 2>/dev/null || true && chmod +x /pg-dump.sh && chown root:root /etc/crontabs/root 2>/dev/null || true && crond -f"]
backup-manager:
image: alpine:latest
container_name: backup-manager
restart: always
depends_on:
- pgbackup
environment:
BACKUP_RETENTION_COUNT: ${BKP_RETENTION_COUNT}
BACKUP_RETENTION_DAYS: ${BKP_RETENTION_DAYS}
TIMEZONE: ${TIMEZONE}
volumes:
- ./backups:/backups
- type: volume
source: odoo-db-data
target: /odoo_db_data
read_only: true
- type: volume
source: odoo-config
target: /odoo_config
read_only: true
- type: volume
source: gitea_data
target: /gitea_data
read_only: true
- type: volume
source: opencloud-data
target: /opencloud_data
read_only: true
- type: volume
source: opencloud-config
target: /opencloud_config
read_only: true
- ./scripts/backup/manage-backups.sh:/manage-backups.sh
- ./scripts/backup/backup-manager.cron:/etc/crontabs/root
entrypoint: ["/bin/sh", "-c", "# install bsdtar for xattr-aware archives\napk add --no-cache libarchive-tools attr >/dev/null \n# map host /backups ownership to container backup user\nHOST_UID=$$(stat -c '%u' /backups 2>/dev/null || echo 1000) && HOST_GID=$$(stat -c '%g' /backups 2>/dev/null || echo 1000) && addgroup -g \"$$HOST_GID\" backup 2>/dev/null || true && adduser -D -u \"$$HOST_UID\" -G backup backup 2>/dev/null || true && mkdir -p /var/log /backups && chown -R \"$$HOST_UID\":\"$$HOST_GID\" /var/log /backups 2>/dev/null || true && chmod +x /manage-backups.sh && chown root:root /etc/crontabs/root 2>/dev/null || true && crond -f"]
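Both entrypoints above start by reading the host's uid/gid from the bind-mounted `/backups` directory, falling back to `1000` when `stat` fails. Extracted as a standalone sketch (the real entrypoints then create a matching `backup` user and `chown` `/var/log` and `/backups`):

```bash
#!/bin/sh
# Sketch of the ownership-mapping step from the entrypoints: report the
# uid:gid owning DIR, defaulting to 1000:1000 if DIR cannot be stat'ed.
host_ids() {
  dir=$1
  uid=$(stat -c '%u' "$dir" 2>/dev/null || echo 1000)
  gid=$(stat -c '%g' "$dir" 2>/dev/null || echo 1000)
  echo "$uid:$gid"
}
```

This keeps tarballs and dumps written into `./backups` owned by the host user rather than root.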

backup/restore.yml
---
# One-shot helper service for restores. Run with your main compose file so
# volumes and networks resolve to the same project.
services:
restore:
image: postgres:18-alpine
container_name: restore
restart: "no"
environment:
IN_CONTAINER: "1"
POSTGRES_HOST: postgres
volumes:
- ./backups:/backups:ro
- type: volume
source: odoo-db-data
target: /odoo_db_data
- type: volume
source: odoo-config
target: /odoo_config
- type: volume
source: gitea_data
target: /gitea_data
- type: volume
source: opencloud-data
target: /opencloud_data
- type: volume
source: opencloud-config
target: /opencloud_config
- ./scripts/backup/restore.sh:/restore.sh:ro
entrypoint:
- /bin/sh
- -c
- |
apk add --no-cache libarchive-tools attr >/dev/null
exec /restore.sh "$@"
- --