feat: add comprehensive GitHub workflow and development tools

Stiftung Development
2025-09-06 18:31:54 +02:00
commit ab23d7187e
10224 changed files with 2075210 additions and 0 deletions

# Synology NAS Deployment Checklist (Detailed)
This document captures the exact steps and lessons learned to reliably deploy the Stiftung app on a Synology NAS.
## 1) Prerequisites ✅
- DSM 7.0+ with the Docker package installed (Package Center)
- SSH enabled (Control Panel → Terminal & SNMP → Enable SSH)
- At least 4GB RAM and 10GB free disk space
- NAS local IP (e.g., `192.168.1.50`) and open port 8081
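The prerequisites above can be checked over SSH before starting; a minimal sketch (the `/volume1` path and the 4GB/10GB thresholds are taken from the list above, and are assumptions about your setup):

```shell
# Pre-flight check; adjust VOLUME to your storage volume
VOLUME=${VOLUME:-/volume1}
FREE_KB=$(df -Pk "$VOLUME" 2>/dev/null | awk 'NR==2 {print $4}')
if [ "${FREE_KB:-0}" -ge $((10 * 1024 * 1024)) ]; then
    echo "Disk OK: >=10GB free on $VOLUME"
else
    echo "Warning: less than 10GB free on $VOLUME (or path not found)"
fi
TOTAL_MB=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
if [ "${TOTAL_MB:-0}" -ge 4096 ]; then
    echo "RAM OK: ${TOTAL_MB}MB"
else
    echo "Warning: less than 4GB RAM (${TOTAL_MB}MB)"
fi
```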
## 2) Directory Layout on NAS ✅
Create this structure on the NAS (e.g., under `/volume1/docker/stiftung`):
```
/volume1/docker/stiftung
├─ app/ # Django app code (the entire project folder "app")
├─ deploy-synology/ # Deployment files (compose, scripts, env template)
└─ data/ # Persistent volumes created by compose
├─ db/
├─ backups/
├─ redis/
├─ uploads/
└─ logs/
```
Important:
- The `deploy-synology/docker-compose.yml` uses `build.context: ../app`. Ensure `app/` sits next to `deploy-synology/` in the same parent directory.
- If you used File Station to upload ZIPs, make sure you actually extracted the real `app` contents (not an empty folder).
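The layout above can be created in one command (the base path is the example from this section):

```shell
# Create the full directory tree; compose reuses these for persistent volumes
BASE=${BASE:-/volume1/docker/stiftung}
mkdir -p "$BASE/app" "$BASE/deploy-synology" \
         "$BASE/data/db" "$BASE/data/backups" "$BASE/data/redis" \
         "$BASE/data/uploads" "$BASE/data/logs"
```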
## 3) Environment Configuration ✅
1) Go to the deployment folder:
```bash
cd /volume1/docker/stiftung/deploy-synology
```
2) Create `.env` from template and edit values:
```bash
cp env.template .env
vi .env # or use Text Editor in DSM
```
3) Required variables (example):
```bash
# Core
SECRET_KEY=change_me_to_a_long_random_string
DEBUG=False
ALLOWED_HOSTS=localhost,127.0.0.1,crnas,192.168.1.50
CSRF_TRUSTED_ORIGINS=http://crnas:8081,http://192.168.1.50:8081,http://localhost:8081
# Database
DB_PASSWORD=super_secure_db_password
# Paperless (optional)
PAPERLESS_URL=
PAPERLESS_TOKEN=
```
Notes:
- The database service uses: `POSTGRES_DB=stiftung`, `POSTGRES_USER=stiftung_user`, `POSTGRES_PASSWORD=${DB_PASSWORD}`.
- The app services also receive `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`. This alignment prevents auth failures.
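Before the first start, you can confirm the required keys are set without printing the secrets; a quick sketch (run from the deployment folder):

```shell
# List which required keys are present in .env, masking their values
ENV_FILE=${ENV_FILE:-.env}
grep -E '^(SECRET_KEY|DB_PASSWORD|ALLOWED_HOSTS|CSRF_TRUSTED_ORIGINS)=' "$ENV_FILE" \
    | sed 's/=.*/=<set>/'
# After the containers are up, confirm the values reached the web service:
#   sudo docker-compose exec -T web env | grep -E '^POSTGRES_(DB|USER)='
```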
## 4) First Deployment ✅
1) Make scripts executable (first time only):
```bash
cd /volume1/docker/stiftung/deploy-synology
chmod +x *.sh
```
2) Start services:
```bash
./deploy.sh
```
3) Create a Django superuser:
```bash
sudo docker-compose exec web python manage.py createsuperuser
```
4) Access the app:
```
http://<NAS_IP>:8081
```
5) Health endpoint (for quick checks):
```
http://<NAS_IP>:8081/health/
```
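For scripted checks, the health endpoint can be polled with curl; a sketch (the IP is a placeholder, and treating HTTP 200 as healthy is an assumption about the endpoint above):

```shell
# Smoke test: expect HTTP 200 from the health endpoint
NAS_IP=${NAS_IP:-192.168.1.50}
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "http://${NAS_IP}:8081/health/" || true)
if [ "$code" = "200" ]; then
    echo "Healthy"
else
    echo "Health check failed (HTTP $code)"
fi
```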
## 5) Clean Reset / Redeploy ✅
Use this when you change environment values or need a clean database:
```bash
cd /volume1/docker/stiftung/deploy-synology
sudo docker-compose down -v
rm -rf ./data/db
mkdir -p ./data/db
POSTGRES_UID=$(sudo docker run --rm postgres:15-alpine id -u postgres)
POSTGRES_GID=$(sudo docker run --rm postgres:15-alpine id -g postgres)
sudo chown -R $POSTGRES_UID:$POSTGRES_GID ./data/db
sudo chmod 700 ./data/db
sudo docker-compose up -d --build
sudo docker-compose exec web python manage.py migrate
```
## 6) Backup & Restore (via UI) ✅
- Backups are written to `./data/backups` (mounted as `/app/backups` inside containers).
- Backup creation and restore are available in the app under Administration → Backup & Restore.
- Restore requires uploading a backup created by this app (`.tar.gz`). The process runs in the background; check the job status in the UI.
Recommended restore flow on NAS:
```bash
cd /volume1/docker/stiftung/deploy-synology
sudo docker-compose stop worker beat # avoid writers during restore
# Start restore via UI → upload .tar.gz → Start
sudo docker-compose logs -f web | cat # watch progress
# After completion:
sudo docker-compose exec web python manage.py migrate
sudo docker-compose restart web
sudo docker-compose start worker beat
```
## 7) Logs & Monitoring ✅
```bash
cd /volume1/docker/stiftung/deploy-synology
sudo docker-compose ps
sudo docker-compose logs --tail=150 web | cat
sudo docker-compose logs -f web | cat
```
## 8) Known Pitfalls & Fixes ✅
- Build path error: `ERROR: build path .../app does not exist`
  - Ensure the directory layout matches section 2 (compose uses `../app`).
- Image pull denied for `gramps/gramps-web`
  - GrampsWeb was removed from compose; use the provided compose without that service.
- DB port already in use
  - The DB service exposes no host port. Access it internally via the service name `db`.
- Bind mount path does not exist (e.g., `./data/backups`)
  - Ensure `../data/backups` exists (compose will also create it). Use the provided layout.
- `FATAL: password authentication failed for user "stiftung_user"`
  - The DB volume may have been initialized with an old password. Align credentials:
    - Quick fix (change the DB user's password to match `.env`; note that `psql` run via `exec` uses the local socket inside the container, so it works even while password auth fails):

      ```bash
      sudo docker-compose exec db psql -U stiftung_user -d stiftung -c "ALTER USER stiftung_user WITH PASSWORD '...from .env DB_PASSWORD...';"
      sudo docker-compose restart web worker beat
      ```

    - Clean reinit (wipes data): use the Clean Reset steps above. Note the path is `./data/db` relative to compose.
- `SECRET_KEY` empty → 500 `ImproperlyConfigured`
  - Ensure `.env` has a non-empty `SECRET_KEY` (and that it propagates into the web container):

    ```bash
    sudo docker-compose exec web env | grep -E "DJANGO_SECRET_KEY|SECRET_KEY"
    ```

- `DisallowedHost` for `crnas`
  - Add the NAS hostname/IP to `.env`:

    ```
    ALLOWED_HOSTS=192.168.x.x,crnas,localhost,127.0.0.1
    CSRF_TRUSTED_ORIGINS=http://crnas:8081,http://192.168.x.x:8081,http://localhost:8081
    ```

- Celery error: `Unable to load celery application. Module 'core' has no attribute 'celery'`
  - Fixed by adding `core/celery.py` and `from .celery import app as celery_app` in `core/__init__.py` (already included in the app).
- Backup error: `pg_dump` not found
  - The app image installs `postgresql-client` (already included). Rebuild if you changed the Dockerfile.
- Restore 500 on upload
  - Fixed by ensuring the restore view imports `BackupJob` and uses the correct form field `backup_file`.
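When chasing the credential pitfalls above, it helps to test connectivity from inside the compose network exactly as the app does; a sketch (assumes the `postgresql-client` mentioned above is present in the web image):

```shell
# Connect to the db service using the web container's own environment
sudo docker-compose exec -T web sh -c \
    'PGPASSWORD="$POSTGRES_PASSWORD" psql -h db -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 1;"'
```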
## 9) Updating the App ✅
```bash
cd /volume1/docker/stiftung/deploy-synology
sudo docker-compose pull # if using remote images
sudo docker-compose up -d --build web worker beat
```
If you changed environment values in `.env`, restart:
```bash
sudo docker-compose up -d --build
```
---
Deployment Date: `________________`
Deployed By: `________________`
Notes: `________________`

deploy-synology/README.md
# Stiftung Application - Synology NAS Deployment
## Overview
This guide will help you deploy the Stiftung application on a Synology NAS using Docker.
## Prerequisites
- Synology NAS with DSM 7.0+ (recommended)
- Docker package installed from Synology Package Center
- At least 4GB RAM available
- At least 10GB free disk space
- SSH access enabled (optional but recommended)
## Quick Start
1. Copy the entire `deploy-synology` folder to your Synology NAS
2. SSH into your NAS or use File Station
3. Navigate to the deployment folder
4. Run: `docker-compose up -d`
5. Access the application at: `http://YOUR_NAS_IP:8081`
## Detailed Steps
### Step 1: Install Docker Package
1. Open **Package Center** on your Synology NAS
2. Search for "Docker"
3. Install the Docker package
4. Wait for installation to complete
### Step 2: Copy Deployment Files
1. Copy the `deploy-synology` folder to your NAS
- Use **File Station** to upload via web interface
- Or use **SMB/AFP** to mount and copy via network
- Or use **SSH/SCP** for command line transfer
### Step 3: Configure Environment
1. Edit `docker-compose.yml` if needed (ports, volumes)
2. Edit `.env` file with your specific settings
3. Ensure backup directories exist and have proper permissions
### Step 4: Deploy
1. SSH into your NAS: `ssh admin@YOUR_NAS_IP`
2. Navigate to the deployment folder: `cd /volume1/docker/stiftung/deploy-synology`
3. Start the application: `docker-compose up -d`
4. Check status: `docker-compose ps`
### Step 5: Verify Deployment
1. Check container logs: `docker-compose logs web`
2. Test web access: `http://YOUR_NAS_IP:8081`
3. Verify database connection
4. Test backup functionality
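The verification steps above can be collected into one snapshot; a minimal sketch (run from the deployment folder; the `/health/` path follows the deployment checklist):

```shell
# Quick status snapshot after deployment
docker-compose ps
docker-compose logs --tail=20 web
curl -fsS "http://localhost:8081/health/" > /dev/null && echo "Web: OK" || echo "Web: FAILED"
```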
## Configuration Files
### Environment Variables (.env)
- `DB_PASSWORD`: PostgreSQL password (shared by the `db` and app services)
- `SECRET_KEY`: Django secret key
- `DEBUG`: Set to False for production
- `ALLOWED_HOSTS` / `CSRF_TRUSTED_ORIGINS`: NAS hostname/IP entries
- `PAPERLESS_URL` / `PAPERLESS_TOKEN`: Paperless-ngx integration (optional)
### Ports
- **8081**: Main web application
- **5432**: PostgreSQL database (internal only; no host port is exposed)
- **6379**: Redis (internal)
### Volumes
- `./data/db`: PostgreSQL data
- `./data/redis`: Redis data
- `./data/uploads`: File uploads
- `./data/backups`: Backup files
## Maintenance
### Backup
- Application creates automatic backups
- Manual backup: `docker-compose exec web python manage.py backup`
- Restore: `docker-compose exec web python manage.py restore backup_file.tar.gz`
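For automated backups, the provided `backup.sh` can be scheduled via DSM (Control Panel → Task Scheduler → "User-defined script"). The equivalent cron line, assuming the layout from this guide, would be:

```
0 2 * * * cd /volume1/docker/stiftung/deploy-synology && ./backup.sh >> ./data/logs/backup_cron.log 2>&1
```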
### Updates
1. Stop containers: `docker-compose down`
2. Pull new images: `docker-compose pull`
3. Start containers: `docker-compose up -d`
### Logs
- View logs: `docker-compose logs -f [service_name]`
- Rotate logs: Configure log rotation in DSM
## Troubleshooting
### Common Issues
1. **Port conflicts**: Change ports in docker-compose.yml
2. **Permission errors**: Check folder permissions (755 for directories, 644 for files)
3. **Database connection**: Verify PostgreSQL container is running
4. **Memory issues**: Increase NAS memory or reduce container memory limits
### Support
- Check container logs: `docker-compose logs [service_name]`
- Verify container status: `docker-compose ps`
- Check resource usage in DSM Resource Monitor
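The permission guideline from Common Issues (755 for directories, 644 for files) can be applied in one pass; a sketch that deliberately skips `data/db`, which PostgreSQL initializes with its own restrictive ownership:

```shell
# Normalize permissions, leaving the PostgreSQL data directory untouched
cd /volume1/docker/stiftung
find data logs -path data/db -prune -o -type d -exec chmod 755 {} +
find data logs -path data/db -prune -o -type f -exec chmod 644 {} +
```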
## Security Notes
- Change default passwords
- Use HTTPS in production (reverse proxy with Let's Encrypt)
- Regular security updates
- Firewall configuration
- Backup encryption

deploy-synology/backup.sh
#!/bin/bash
# Stiftung Application - Backup Script for Synology NAS
# This script can be used for automated backups

set -e

# Configuration
BACKUP_DIR="./data/backups"
RETENTION_DAYS=30
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
BACKUP_NAME="stiftung_backup_${TIMESTAMP}.tar.gz"

echo "🔄 Starting backup process..."
echo "Timestamp: ${TIMESTAMP}"
echo "Backup name: ${BACKUP_NAME}"

# Create backup directory if it doesn't exist
mkdir -p "${BACKUP_DIR}"

# Check if containers are running
if ! docker-compose ps | grep -q "Up"; then
    echo "❌ Error: No containers are running. Please start the application first."
    exit 1
fi

# Create backup using the application's backup function
echo "📦 Creating backup using application backup function..."
if docker-compose exec -T web python manage.py backup --type=full --name="Automated backup ${TIMESTAMP}"; then
    echo "✅ Application backup completed successfully"
else
    echo "⚠️ Application backup failed, trying manual backup..."

    # Manual backup as fallback
    echo "📦 Creating manual backup..."

    # Create temporary backup directory
    TEMP_BACKUP_DIR="/tmp/stiftung_backup_${TIMESTAMP}"
    mkdir -p "${TEMP_BACKUP_DIR}"

    # Backup database
    echo "🗄️ Backing up database..."
    docker-compose exec -T db pg_dump -U stiftung_user stiftung > "${TEMP_BACKUP_DIR}/database.sql"

    # Backup uploads
    echo "📁 Backing up uploads..."
    if [ -d "./data/uploads" ]; then
        cp -r ./data/uploads "${TEMP_BACKUP_DIR}/"
    fi

    # Backup configuration
    echo "⚙️ Backing up configuration..."
    cp .env "${TEMP_BACKUP_DIR}/" 2>/dev/null || echo "No .env file to backup"
    cp docker-compose.yml "${TEMP_BACKUP_DIR}/"

    # Create tar.gz archive
    echo "📦 Creating archive..."
    tar -czf "${BACKUP_DIR}/${BACKUP_NAME}" -C "${TEMP_BACKUP_DIR}" .

    # Clean up temporary directory
    rm -rf "${TEMP_BACKUP_DIR}"

    echo "✅ Manual backup completed: ${BACKUP_NAME}"
fi

# Clean up old backups
echo "🧹 Cleaning up old backups (older than ${RETENTION_DAYS} days)..."
find "${BACKUP_DIR}" -name "stiftung_backup_*.tar.gz" -type f -mtime +${RETENTION_DAYS} -delete

# Show backup status
echo ""
echo "📊 Backup Summary:"
echo "=================="
echo "Latest backup: ${BACKUP_NAME}"
echo "Backup location: ${BACKUP_DIR}"
echo "Total backups: $(find "${BACKUP_DIR}" -name "stiftung_backup_*.tar.gz" | wc -l)"
echo "Available space: $(df -h "${BACKUP_DIR}" | tail -1 | awk '{print $4}')"
echo ""
echo "🎉 Backup process completed successfully!"

deploy-synology/deploy.sh
#!/bin/bash
# Stiftung Application - Synology NAS Deployment Script
# This script automates the deployment process

set -e  # Exit on any error

echo "🚀 Starting Stiftung Application Deployment on Synology NAS..."
echo "=================================================="

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Functions to print colored output
print_status() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Check if Docker is running
check_docker() {
    print_status "Checking Docker status..."
    if ! docker info > /dev/null 2>&1; then
        print_error "Docker is not running. Please start Docker from Package Center."
        exit 1
    fi
    print_status "Docker is running ✓"
}

# Check if docker-compose is available
check_docker_compose() {
    print_status "Checking docker-compose availability..."
    if ! command -v docker-compose &> /dev/null; then
        print_error "docker-compose not found. Installing..."
        # Try to install docker-compose if not available
        if command -v pip3 &> /dev/null; then
            pip3 install docker-compose
        else
            print_error "Please install docker-compose manually."
            exit 1
        fi
    fi
    print_status "docker-compose is available ✓"
}

# Create necessary directories
create_directories() {
    print_status "Creating necessary directories..."
    mkdir -p data/db
    mkdir -p data/redis
    mkdir -p data/uploads
    mkdir -p data/backups
    mkdir -p logs

    # Set proper permissions
    chmod 755 data/
    chmod 755 logs/
    chmod 755 data/db/
    chmod 755 data/redis/
    chmod 755 data/uploads/
    chmod 755 data/backups/
    print_status "Directories created ✓"
}

# Setup environment file
setup_environment() {
    print_status "Setting up environment configuration..."
    if [ ! -f .env ]; then
        if [ -f env.template ]; then
            cp env.template .env
            print_warning "Environment file created from template."
            print_warning "Please edit .env file with your actual values before continuing."
            print_warning "Press Enter when you're ready to continue..."
            read -r
        else
            print_error "env.template not found. Please create .env file manually."
            exit 1
        fi
    else
        print_status "Environment file already exists ✓"
    fi
}

# Generate secure secret key
generate_secret_key() {
    print_status "Generating secure Django secret key..."
    # token_urlsafe output contains no characters that are special in the sed
    # replacement below (a raw charset with '&' would be expanded by sed)
    SECRET_KEY=$(python3 -c "import secrets; print(secrets.token_urlsafe(50))")

    # Update .env file with generated secret key
    if [ -f .env ]; then
        sed -i "s/your_very_long_random_secret_key_here/$SECRET_KEY/" .env
        print_status "Secret key generated and updated in .env ✓"
    fi
}

# Build and start containers
deploy_containers() {
    print_status "Building and starting containers..."

    # Build the web application image
    print_status "Building web application image..."
    docker-compose build web

    # Start all services
    print_status "Starting all services..."
    docker-compose up -d
    print_status "Containers started ✓"
}

# Wait for services to be ready
wait_for_services() {
    print_status "Waiting for services to be ready..."

    # Wait for database
    print_status "Waiting for database..."
    timeout=60
    while [ $timeout -gt 0 ]; do
        if docker-compose exec -T db pg_isready -U stiftung_user -d stiftung > /dev/null 2>&1; then
            print_status "Database is ready ✓"
            break
        fi
        sleep 2
        timeout=$((timeout - 2))
    done
    if [ $timeout -le 0 ]; then
        print_error "Database failed to start within 60 seconds"
        exit 1
    fi

    # Wait for web service
    print_status "Waiting for web service..."
    timeout=60
    while [ $timeout -gt 0 ]; do
        if curl -f http://localhost:8081/health/ > /dev/null 2>&1; then
            print_status "Web service is ready ✓"
            break
        fi
        sleep 2
        timeout=$((timeout - 2))
    done
    if [ $timeout -le 0 ]; then
        print_warning "Web service may not be fully ready yet"
    fi
}

# Run database migrations
run_migrations() {
    print_status "Running database migrations..."
    if docker-compose exec -T web python manage.py migrate --noinput; then
        print_status "Database migrations completed ✓"
    else
        print_warning "Database migrations failed. You may need to run them manually."
    fi
}

# Create superuser if needed
create_superuser() {
    print_status "Checking if superuser exists..."
    if ! docker-compose exec -T web python manage.py shell -c "from django.contrib.auth.models import User; print('Superuser exists' if User.objects.filter(is_superuser=True).exists() else 'No superuser')" 2>/dev/null | grep -q "Superuser exists"; then
        print_warning "No superuser found. You can create one manually with:"
        print_warning "docker-compose exec web python manage.py createsuperuser"
    else
        print_status "Superuser exists ✓"
    fi
}

# Show deployment status
show_status() {
    print_status "Deployment completed! 🎉"
    echo ""
    echo "📋 Service Status:"
    docker-compose ps
    echo ""
    echo "🌐 Access your application at:"
    echo "   Main App: http://<your-synology-ip>:8081"
    echo ""
    echo "📁 Data directories:"
    echo "   Database: ./data/db"
    echo "   Uploads:  ./data/uploads"
    echo "   Backups:  ./data/backups"
    echo "   Logs:     ./logs"
    echo ""
    echo "🔧 Useful commands:"
    echo "   View logs: docker-compose logs -f [service_name]"
    echo "   Stop:      docker-compose down"
    echo "   Restart:   docker-compose restart"
    echo "   Update:    docker-compose pull && docker-compose up -d"
}

# Main deployment flow
main() {
    check_docker
    check_docker_compose
    create_directories
    setup_environment
    generate_secret_key
    deploy_containers
    wait_for_services
    run_migrations
    create_superuser
    show_status
}

# Run main function
main "$@"

deploy-synology/docker-compose.yml
version: '3.8'

services:
  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    container_name: stiftung-db
    restart: unless-stopped
    environment:
      POSTGRES_DB: stiftung
      POSTGRES_USER: stiftung_user
      POSTGRES_PASSWORD: ${DB_PASSWORD:-stiftung_password}
      POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
    volumes:
      - ./data/db:/var/lib/postgresql/data
      - ./data/backups:/backups
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U stiftung_user -d stiftung"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - stiftung-network

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: stiftung-redis
    restart: unless-stopped
    volumes:
      - ./data/redis:/data
    command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - stiftung-network

  # Main Web Application
  web:
    build:
      context: ../app
      dockerfile: Dockerfile
    container_name: stiftung-web
    restart: unless-stopped
    environment:
      - POSTGRES_DB=stiftung
      - POSTGRES_USER=stiftung_user
      - POSTGRES_PASSWORD=${DB_PASSWORD:-stiftung_password}
      - DB_HOST=db
      - DB_PORT=5432
      - REDIS_URL=redis://redis:6379/0
      - PAPERLESS_URL=${PAPERLESS_URL}
      - PAPERLESS_TOKEN=${PAPERLESS_TOKEN}
      - PAPERLESS_API_URL=${PAPERLESS_URL}
      - PAPERLESS_API_TOKEN=${PAPERLESS_TOKEN}
      - SECRET_KEY=${SECRET_KEY}
      - DEBUG=${DEBUG:-False}
      - ALLOWED_HOSTS=${ALLOWED_HOSTS:-localhost,127.0.0.1}
      - DJANGO_SECRET_KEY=${SECRET_KEY}
      - DJANGO_DEBUG=${DEBUG:-0}
      - DJANGO_ALLOWED_HOSTS=${ALLOWED_HOSTS:-localhost,127.0.0.1}
      - CSRF_TRUSTED_ORIGINS=${CSRF_TRUSTED_ORIGINS:-http://localhost:8081}
    volumes:
      - ./data/uploads:/app/uploads
      - ./data/backups:/app/backups
      - ./logs:/app/logs
    ports:
      - "8081:8000"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
      interval: 30s
      timeout: 10s
      retries: 3
    networks:
      - stiftung-network

  # Celery Worker for Background Tasks
  worker:
    build:
      context: ../app
      dockerfile: Dockerfile
    container_name: stiftung-worker
    restart: unless-stopped
    command: celery -A core worker -l info
    environment:
      - POSTGRES_DB=stiftung
      - POSTGRES_USER=stiftung_user
      - POSTGRES_PASSWORD=${DB_PASSWORD:-stiftung_password}
      - DB_HOST=db
      - DB_PORT=5432
      - REDIS_URL=redis://redis:6379/0
      - PAPERLESS_URL=${PAPERLESS_URL}
      - PAPERLESS_TOKEN=${PAPERLESS_TOKEN}
      - PAPERLESS_API_URL=${PAPERLESS_URL}
      - PAPERLESS_API_TOKEN=${PAPERLESS_TOKEN}
      - SECRET_KEY=${SECRET_KEY}
      - DEBUG=${DEBUG:-False}
      - DJANGO_SECRET_KEY=${SECRET_KEY}
      - DJANGO_DEBUG=${DEBUG:-0}
      - DJANGO_ALLOWED_HOSTS=${ALLOWED_HOSTS:-localhost,127.0.0.1}
    volumes:
      - ./data/uploads:/app/uploads
      - ./data/backups:/app/backups
      - ./logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - stiftung-network

  # Celery Beat for Scheduled Tasks
  beat:
    build:
      context: ../app
      dockerfile: Dockerfile
    container_name: stiftung-beat
    restart: unless-stopped
    command: celery -A core beat -l info
    environment:
      - POSTGRES_DB=stiftung
      - POSTGRES_USER=stiftung_user
      - POSTGRES_PASSWORD=${DB_PASSWORD:-stiftung_password}
      - DB_HOST=db
      - DB_PORT=5432
      - REDIS_URL=redis://redis:6379/0
      - PAPERLESS_URL=${PAPERLESS_URL}
      - PAPERLESS_TOKEN=${PAPERLESS_TOKEN}
      - PAPERLESS_API_URL=${PAPERLESS_URL}
      - PAPERLESS_API_TOKEN=${PAPERLESS_TOKEN}
      - SECRET_KEY=${SECRET_KEY}
      - DEBUG=${DEBUG:-False}
      - DJANGO_SECRET_KEY=${SECRET_KEY}
      - DJANGO_DEBUG=${DEBUG:-0}
      - DJANGO_ALLOWED_HOSTS=${ALLOWED_HOSTS:-localhost,127.0.0.1}
    volumes:
      - ./data/uploads:/app/uploads
      - ./data/backups:/app/backups
      - ./logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - stiftung-network

networks:
  stiftung-network:
    driver: bridge

volumes:
  db_data:
  redis_data:
  uploads:
  backups:
  logs:

deploy-synology/env.template
# Stiftung Application - Production Environment Configuration
# Copy this file to .env and fill in your actual values

# Database Configuration
DB_PASSWORD=your_secure_database_password_here

# Django Configuration
SECRET_KEY=your_very_long_random_secret_key_here
DEBUG=False
ALLOWED_HOSTS=your.nas.ip.address,localhost,127.0.0.1
CSRF_TRUSTED_ORIGINS=http://your.nas.ip.address:8081,https://your.nas.ip.address:8081,http://localhost:8081

# Paperless-ngx Integration (Optional)
PAPERLESS_URL=http://your-paperless-server:8000
PAPERLESS_TOKEN=your_paperless_api_token_here

# Gramps Web Integration (Removed)

# Redis Configuration
REDIS_URL=redis://redis:6379/0

# Backup Configuration
BACKUP_RETENTION_DAYS=30
BACKUP_STORAGE_PATH=/app/backups

# Logging Configuration
LOG_LEVEL=INFO
LOG_FILE_PATH=/app/logs

# Security Configuration
# Set both to True if serving over HTTPS (inline comments after values can
# leak into the value with some .env parsers, so keep comments on their own line)
SESSION_COOKIE_SECURE=False
CSRF_COOKIE_SECURE=False
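# Tip (assumption: any sufficiently long random string works as SECRET_KEY):
#   python3 -c "import secrets; print(secrets.token_urlsafe(50))"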