Clean up docker-compose configuration

- Archive deploy-production directory as deploy-production-archived (legacy)
- Add DOCKER_COMPOSE_README.md for documentation
- Main configuration now uses compose.yml with working Paperless integration
- Paperless API URL configured as https://vhtv-stiftung.de/paperless
This commit is contained in:
Stiftung Development
2025-09-17 13:19:35 +02:00
parent bbe3208fc0
commit a6559daeb1
9 changed files with 37 additions and 0 deletions

View File

@@ -0,0 +1,325 @@
# Ubuntu Root Server Migration Plan (217.154.84.225)
This document provides a comprehensive migration plan from your existing Synology deployment to a dedicated Ubuntu 22.04 server, incorporating all new features and lessons learned.
## 🎯 Migration Overview
**From**: Synology NAS (Docker containers)
**To**: Ubuntu 22.04 Root Server (217.154.84.225)
**Goal**: Production-ready deployment with improved performance and reliability
## 📋 Pre-Migration Checklist
### 1) Current System Assessment ✅
- [ ] Document current Synology configuration
- [ ] Create full backup of existing data
- [ ] Export user accounts and permissions
- [ ] Document custom configurations
### 2) New Features Since Synology Deployment ✅
- [ ] **HelpBox System**: Editable info boxes with Markdown support on all "New" creation pages
- [ ] **Improved Förderung Search**: Enhanced search functionality
- [ ] **Template Cleanup**: Orphaned recurring payment templates handling
- [ ] **PDF Export Enhancements**: Better report generation
- [ ] **Admin Interface Improvements**: Central help box management
### 3) Server Preparation ✅
- [ ] Ubuntu 22.04 server provisioned (217.154.84.225)
- [ ] Root access confirmed
- [ ] Network connectivity verified
- [ ] DNS configuration planned
## 🚀 Phase 1: Server Setup
### 1.1) Initial Server Configuration
```bash
# Connect to server
ssh root@217.154.84.225
# Run comprehensive setup
wget https://your-repo.com/server-setup.sh
chmod +x server-setup.sh
./server-setup.sh
```
### 1.2) Security Hardening
```bash
# Additional security measures
apt install -y fail2ban logwatch unattended-upgrades
# Configure fail2ban
systemctl enable fail2ban
systemctl start fail2ban
# Set up automatic security updates
dpkg-reconfigure unattended-upgrades
```
### 1.3) Monitoring Setup
```bash
# Install monitoring tools
apt install -y htop iotop nethogs ncdu
# Optional: Install monitoring stack
# docker run -d --name=netdata -p 19999:19999 netdata/netdata
```
## 🗄️ Phase 2: Data Migration
### 2.1) Backup Current System
On your Synology NAS:
```bash
cd /volume1/docker/stiftung/deploy-synology
# Create comprehensive backup
sudo docker-compose exec web python manage.py dumpdata \
--format=json --indent=2 > full_backup_$(date +%Y%m%d).json
# Create database backup
sudo docker-compose exec db pg_dump -U stiftung_user -d stiftung \
> db_backup_$(date +%Y%m%d).sql
# Backup media files
tar -czf media_backup_$(date +%Y%m%d).tar.gz ./data/uploads/
```
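Before transferring anything, it is worth confirming that all three archives exist and are non-empty — a truncated dump restores silently into an empty system. A minimal sketch (file name patterns as produced above):

```bash
# Verify each backup artifact exists and is non-empty before transfer
check_backups() {
    for f in "$@"; do
        if [ -s "$f" ]; then
            echo "ok: $f ($(du -h "$f" | cut -f1))"
        else
            echo "EMPTY OR MISSING: $f"
        fi
    done
}

check_backups full_backup_*.json db_backup_*.sql media_backup_*.tar.gz
```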
### 2.2) Transfer Data to New Server
```bash
# From your local machine or Synology
scp full_backup_*.json root@217.154.84.225:/tmp/
scp db_backup_*.sql root@217.154.84.225:/tmp/
scp media_backup_*.tar.gz root@217.154.84.225:/tmp/
```
## 🏗️ Phase 3: Application Deployment
### 3.1) Code Deployment
```bash
# On the new server as stiftung user
su - stiftung
cd /opt/stiftung
# Clone latest code
git clone https://github.com/yourusername/stiftung-starter.git .
# Copy production configuration
cp deploy-production/docker-compose.prod.yml docker-compose.yml
cp deploy-production/.env.production .env
# Configure environment variables
nano .env
```
### 3.2) Environment Configuration (Updated for Ubuntu)
```bash
# Production Environment (.env)
DEBUG=False
SECRET_KEY=your-new-production-secret-key
ALLOWED_HOSTS=217.154.84.225,your-domain.com,localhost
CSRF_TRUSTED_ORIGINS=https://your-domain.com,http://217.154.84.225
# Database (PostgreSQL 15)
POSTGRES_DB=stiftung_prod
POSTGRES_USER=stiftung_user
POSTGRES_PASSWORD=new-secure-production-password
# Redis Configuration
REDIS_URL=redis://redis:6379/0
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0
# Email Configuration (Production)
EMAIL_HOST=smtp.your-provider.com
EMAIL_PORT=587
EMAIL_HOST_USER=admin@your-domain.com
EMAIL_HOST_PASSWORD=your-email-password
EMAIL_USE_TLS=True
DEFAULT_FROM_EMAIL=admin@your-domain.com
# Backup Configuration
BACKUP_RETENTION_DAYS=30
BACKUP_STORAGE_PATH=/opt/stiftung/backups
# New Features Configuration
HELPBOX_ENABLED=True
MARKDOWN_EXTENSIONS=nl2br,fenced_code,tables
```
### 3.3) First Deployment
```bash
# Make deploy script executable
chmod +x deploy-production/deploy.sh
# Run deployment
./deploy-production/deploy.sh
```
## 📊 Phase 4: Data Restoration
### 4.1) Database Restoration
```bash
# Wait for containers to be ready
sleep 30
# Restore database structure
docker compose exec web python manage.py migrate
# Optional: Load data from JSON backup
docker compose exec web python manage.py loaddata /tmp/full_backup_*.json
# Or restore from SQL dump
docker compose exec -T db psql -U stiftung_user -d stiftung_prod < /tmp/db_backup_*.sql
```
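Whichever restore path is used, comparing per-model object counts between the two systems is a quick consistency check. A sketch, assuming you dump one `ModelName count` line per model on each side (the file names are illustrative; the migration script in this repository prints such counts at the end of its run):

```bash
# Compare per-model object counts exported from old and new deployments.
# Each input file holds lines like "Foerderung 128", one per model.
compare_counts() {
    a=$(sort "$1")
    b=$(sort "$2")
    if [ "$a" = "$b" ]; then
        echo "counts match"
    else
        echo "counts differ"
    fi
}

# Demo with two equivalent exports
demo=$(mktemp -d)
printf 'Foerderung 128\nDestinataer 42\n' > "$demo/synology.txt"
printf 'Destinataer 42\nFoerderung 128\n' > "$demo/ubuntu.txt"
compare_counts "$demo/synology.txt" "$demo/ubuntu.txt"
```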
### 4.2) Media Files Restoration
```bash
# Extract media files (the Synology archive was created from data/uploads/,
# so adjust the target path or use --strip-components to match app/media/)
cd /opt/stiftung
tar -xzf /tmp/media_backup_*.tar.gz -C app/media/
# Fix permissions
chown -R stiftung:stiftung app/media/
```
### 4.3) Create Superuser
```bash
# Create new admin user for production
docker compose exec web python manage.py createsuperuser
```
## 🌐 Phase 5: Web Server Configuration
### 5.1) Nginx Setup
```bash
# Copy nginx configuration
sudo cp deploy-production/nginx.conf /etc/nginx/sites-available/stiftung
sudo ln -s /etc/nginx/sites-available/stiftung /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
# Test configuration
sudo nginx -t
# Restart nginx
sudo systemctl restart nginx
```
### 5.2) SSL Certificate (Let's Encrypt)
```bash
# Install certbot
sudo apt install -y certbot python3-certbot-nginx
# Obtain SSL certificate
sudo certbot --nginx -d your-domain.com
# Test auto-renewal
sudo certbot renew --dry-run
```
## ✅ Phase 6: Feature Verification
### 6.1) New Features Testing
- [ ] **HelpBox System**: Test on all creation pages
- Destinatär creation page
- Unterstützung creation page
- Förderung creation page
- Pächter creation page
- Länderei creation page
- [ ] **HelpBox Admin**: Test central management at `/help-box/admin/`
- [ ] **Förderung Search**: Verify improved search functionality
- [ ] **PDF Exports**: Test all report generations
- [ ] **Backup System**: Test backup creation and restoration
### 6.2) Performance Testing
```bash
# Test application responsiveness
curl -I http://217.154.84.225
# Check container resource usage
docker stats
# Monitor logs
docker compose logs -f web
```
## 🔄 Phase 7: Cutover Plan
### 7.1) DNS Update
- [ ] Update DNS records to point to `217.154.84.225`
- [ ] Configure reverse DNS if available
- [ ] Test DNS propagation
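Propagation can be checked repeatably by comparing answers from several public resolvers against the new address; keeping the comparison separate from the lookup makes the helper reusable. The `dig` invocations are illustrative:

```bash
TARGET_IP="217.154.84.225"

# Report whether a resolved A record already points at the new server
check_answer() {
    if [ "$1" = "$TARGET_IP" ]; then
        echo "OK: $1"
    else
        echo "STALE: $1 (expected $TARGET_IP)"
    fi
}

# During cutover, feed it live answers, e.g.:
#   check_answer "$(dig +short your-domain.com @8.8.8.8 | tail -1)"
#   check_answer "$(dig +short your-domain.com @1.1.1.1 | tail -1)"
check_answer "217.154.84.225"
```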
### 7.2) Final Data Sync
```bash
# Create final backup on Synology
# Transfer and restore on Ubuntu server
# Verify data consistency
```
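One cheap way to verify the final transfer was lossless is to compare checksums of the dump on both machines. A sketch (file names hypothetical):

```bash
# True when two files have identical SHA-256 checksums
same_checksum() {
    [ "$(sha256sum "$1" | cut -d' ' -f1)" = "$(sha256sum "$2" | cut -d' ' -f1)" ]
}

# Typical use after the final transfer:
#   scp final_dump.sql root@217.154.84.225:/tmp/
#   then compare the local file against the transferred copy
demo=$(mktemp -d)
echo "dump" > "$demo/local.sql"
cp "$demo/local.sql" "$demo/transferred.sql"
same_checksum "$demo/local.sql" "$demo/transferred.sql" && echo "dumps match"
```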
### 7.3) Go-Live Checklist
- [ ] All services running (`docker compose ps`)
- [ ] Application accessible via domain
- [ ] SSL certificate valid
- [ ] Admin interface accessible
- [ ] All new features functional
- [ ] Backup system operational
- [ ] Monitoring active
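Parts of the checklist above can be scripted. The sketch below separates pass/fail bookkeeping from the HTTP call so it is easy to extend; endpoint names and status codes here are illustrative:

```bash
FAIL=0

# Record one check result given a name and an HTTP status code
check_endpoint() {
    case "$2" in
        2*|3*) echo "PASS $1 (HTTP $2)" ;;
        *)     echo "FAIL $1 (HTTP $2)"; FAIL=1 ;;
    esac
}

# Live usage feeds real status codes, e.g.:
#   check_endpoint "app"   "$(curl -s -o /dev/null -w '%{http_code}' https://your-domain.com/)"
#   check_endpoint "admin" "$(curl -s -o /dev/null -w '%{http_code}' https://your-domain.com/admin/)"
check_endpoint "health" 200
check_endpoint "admin" 302
[ "$FAIL" -eq 0 ] && echo "go-live checks passed"
```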
## 📈 Phase 8: Post-Migration
### 8.1) Monitoring Setup
```bash
# Set up log rotation
sudo nano /etc/logrotate.d/stiftung
# Configure system monitoring
# Set up alerting for critical issues
```
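A starting point for `/etc/logrotate.d/stiftung` could look like the following; the path matches the `logs/` directory the deploy script creates, and the retention values are assumptions to tune:

```
/opt/stiftung/logs/*.log {
    weekly
    rotate 8
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` avoids having to signal the containers after rotation, at the cost of possibly losing a few lines during the copy.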
### 8.2) Backup Strategy
```bash
# Configure automated backups
docker compose exec web python manage.py backup_database
# Set up off-site backup sync
# Test restore procedures
```
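The `BACKUP_RETENTION_DAYS` setting from the environment file can be enforced with a small prune step run from cron. A sketch, assuming archives land in the backups directory as `*.tar.gz` or `*.sql`:

```bash
# Delete backup archives older than the retention window (days)
prune_backups() {
    find "$1" -maxdepth 1 \( -name '*.tar.gz' -o -name '*.sql' \) \
        -mtime +"$2" -print -delete
}

# Demo against a throwaway directory
demo=$(mktemp -d)
touch -d '40 days ago' "$demo/db_backup_old.sql"
touch "$demo/db_backup_new.sql"
prune_backups "$demo" 30
```

In production this would typically run as a daily cron entry for the `stiftung` user, e.g. `prune_backups /opt/stiftung/backups "$BACKUP_RETENTION_DAYS"`.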
### 8.3) Performance Optimization
- [ ] Database query optimization
- [ ] Static file serving optimization
- [ ] Container resource tuning
- [ ] Cache configuration review
## 🚨 Rollback Plan
If issues arise during migration:
1. **Immediate Rollback**: Point DNS back to Synology NAS
2. **Data Recovery**: Restore from pre-migration backups
3. **Service Restoration**: Restart Synology services
4. **Issue Analysis**: Document problems for retry
## 📝 Migration Timeline
**Estimated Duration**: 5-6 hours
- **Phase 1-3**: 2 hours (Setup & Deployment)
- **Phase 4**: 1 hour (Data Migration)
- **Phase 5-6**: 1-2 hours (Web Server & Testing)
- **Phase 7-8**: 1 hour (Cutover & Verification)
## 🔍 Key Improvements vs. Synology
1. **Performance**: Dedicated resources, better I/O
2. **Reliability**: Professional hosting infrastructure
3. **Security**: Enhanced firewall and monitoring
4. **Scalability**: Easy resource upgrades
5. **Features**: All latest HelpBox and search improvements
6. **Maintenance**: Simplified backup and update procedures
---
**Migration Date**: `________________`
**Migrated By**: `________________`
**Verification**: `________________`
**Notes**: `________________`

View File

@@ -0,0 +1,193 @@
#!/bin/bash
# Production Deployment Script for Ubuntu Server
# Based on Synology deployment lessons learned
# Run this script from /opt/stiftung directory
set -e
echo "=== Stiftung Production Deployment (Ubuntu Server) ==="
# Check if running as stiftung user
if [ "$USER" != "stiftung" ]; then
    echo "Error: This script must be run as the 'stiftung' user"
    echo "Run: su - stiftung"
    exit 1
fi
# Check if we're in the right directory
if [ ! -f "docker-compose.prod.yml" ]; then
    echo "Error: docker-compose.prod.yml not found"
    echo "Make sure you're in the /opt/stiftung directory"
    exit 1
fi
# Check if .env exists
if [ ! -f ".env" ]; then
    echo "Error: .env file not found"
    echo "Copy .env.production to .env and configure it first"
    exit 1
fi
# Validate critical environment variables
echo "=== Validating Environment Configuration ==="
# Anchored checks require a non-empty value (and don't match DJANGO_SECRET_KEY)
if ! grep -q "^SECRET_KEY=." .env; then
    echo "Error: SECRET_KEY not set in .env file"
    exit 1
fi
if ! grep -q "^POSTGRES_PASSWORD=." .env; then
    echo "Error: POSTGRES_PASSWORD not set in .env file"
    exit 1
fi
echo "Environment validation passed ✓"
echo "=== Creating Required Directories ==="
mkdir -p backups
mkdir -p logs
mkdir -p app/media
mkdir -p app/static
echo "=== Pulling latest changes ==="
if [ -d ".git" ]; then
    git pull origin main
else
    echo "Not a git repository, skipping git pull"
fi
echo "=== Building Docker images ==="
docker compose -f docker-compose.prod.yml build --no-cache
echo "=== Stopping existing containers ==="
docker compose -f docker-compose.prod.yml down
echo "=== Starting database first ==="
docker compose -f docker-compose.prod.yml up -d db redis
echo "=== Waiting for database to be ready ==="
echo "This may take up to 60 seconds..."
# Load .env so $POSTGRES_USER and $POSTGRES_DB are defined for pg_isready
set -a; . ./.env; set +a
for i in {1..60}; do
    if docker compose -f docker-compose.prod.yml exec db pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DB" >/dev/null 2>&1; then
        echo "Database is ready! ✓"
        break
    fi
    if [ $i -eq 60 ]; then
        echo "Database failed to start within 60 seconds"
        docker compose -f docker-compose.prod.yml logs db
        exit 1
    fi
    echo "Waiting for database... ($i/60)"
    sleep 1
done
echo "=== Starting all services ==="
docker compose -f docker-compose.prod.yml up -d
echo "=== Waiting for web service to be ready ==="
sleep 15
echo "=== Running migrations ==="
docker compose -f docker-compose.prod.yml exec web python manage.py migrate
echo "=== Collecting static files ==="
docker compose -f docker-compose.prod.yml exec web python manage.py collectstatic --noinput
echo "=== Setting up HelpBox system ==="
echo "Creating initial help boxes for new features..."
docker compose -f docker-compose.prod.yml exec web python manage.py shell << 'EOF'
from stiftung.models import HelpBox
from django.contrib.auth import get_user_model

User = get_user_model()
admin_user = User.objects.filter(is_superuser=True).first()
if admin_user:
    # Create default help boxes for all creation pages
    help_boxes = [
        {
            'page_key': 'destinataer_new',
            'title': 'Neuer Destinatär',
            'content': '## Destinatär erstellen\n\nHier können Sie einen neuen Destinatär anlegen.\n\n**Wichtige Felder:**\n- Name (Pflichtfeld)\n- Kontaktdaten\n- Notizen\n\n*Tipp: Nutzen Sie die Notizen für wichtige Zusatzinformationen.*'
        },
        {
            'page_key': 'foerderung_new',
            'title': 'Neue Förderung',
            'content': '## Förderung erstellen\n\nErstellen Sie hier eine neue Förderung.\n\n**Beachten Sie:**\n- Destinatär auswählen\n- Betrag und Datum eingeben\n- Verwendungszweck dokumentieren\n\n*Die Förderung wird automatisch in die Berichte aufgenommen.*'
        },
        {
            'page_key': 'unterstuetzung_new',
            'title': 'Neue Unterstützung',
            'content': '## Unterstützung erstellen\n\nLegen Sie eine neue Unterstützung an.\n\n**Wichtige Informationen:**\n- Art der Unterstützung\n- Zeitraum\n- Begünstigter\n\n*Dokumentieren Sie alle relevanten Details für spätere Nachverfolgung.*'
        }
    ]
    for hb_data in help_boxes:
        help_box, created = HelpBox.objects.get_or_create(
            page_key=hb_data['page_key'],
            defaults={
                'title': hb_data['title'],
                'content': hb_data['content'],
                'is_active': True,
                'created_by': admin_user,
                'updated_by': admin_user
            }
        )
        if created:
            print(f"Created help box for {hb_data['page_key']}")
        else:
            print(f"Help box for {hb_data['page_key']} already exists")
else:
    print("No superuser found. Create one first with: docker compose exec web python manage.py createsuperuser")
EOF
echo "=== Verifying new features ==="
echo "Testing HelpBox system..."
docker compose -f docker-compose.prod.yml exec web python manage.py shell << 'EOF'
from stiftung.models import HelpBox

count = HelpBox.objects.count()
print(f"Total HelpBoxes: {count}")
for hb in HelpBox.objects.all():
    print(f"- {hb.page_key}: {hb.title} ({'Active' if hb.is_active else 'Inactive'})")
EOF
echo "=== Creating superuser (if needed) ==="
echo "Run this manually if you need to create an admin user:"
echo "docker compose -f docker-compose.prod.yml exec web python manage.py createsuperuser"
echo ""
echo "=== Checking service status ==="
docker compose -f docker-compose.prod.yml ps
echo "=== Testing application health ==="
sleep 5
# The web container publishes host port 8081 (see docker-compose.prod.yml)
if curl -f -s http://localhost:8081/health/ > /dev/null; then
    echo "✓ Application health check passed"
else
    echo "✗ Application health check failed"
    echo "Check logs: docker compose -f docker-compose.prod.yml logs web"
fi
echo ""
echo "=== Deployment Complete! ==="
echo "Application should be running at:"
echo " - Main app: http://217.154.84.225"
echo " - Admin: http://217.154.84.225/admin/"
echo " - HelpBox Admin: http://217.154.84.225/help-box/admin/"
echo ""
echo "New Features Available:"
echo " ✓ HelpBox system on all creation pages"
echo " ✓ Central HelpBox management"
echo " ✓ Enhanced Förderung search"
echo " ✓ Improved PDF exports"
echo " ✓ Template cleanup system"
echo ""
echo "Next steps:"
echo "1. Configure domain name and SSL certificate"
echo "2. Set up monitoring and backups"
echo "3. Create initial data and users"
echo "4. Configure email settings for notifications"
echo ""
echo "For troubleshooting:"
echo " - View logs: docker compose -f docker-compose.prod.yml logs -f web"
echo " - Check containers: docker compose -f docker-compose.prod.yml ps"
echo " - Access shell: docker compose -f docker-compose.prod.yml exec web bash"

View File

@@ -0,0 +1,140 @@
# Production Docker Compose Configuration
# This file is used for production deployment via GitHub Actions
# For local development, use: docker-compose -f compose.dev.yml up
#
# IMPORTANT: This configuration requires ALL environment variables to be
# provided via the production server's .env file. No fallback values are
# included for security reasons.
services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - dbdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped

  web:
    image: ghcr.io/remmerinio/stiftung-management-system:latest
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - DB_HOST=${DB_HOST}
      - DB_PORT=${DB_PORT}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_DEBUG=${DJANGO_DEBUG}
      - DJANGO_ALLOWED_HOSTS=${DJANGO_ALLOWED_HOSTS}
      - LANGUAGE_CODE=${LANGUAGE_CODE}
      - TIME_ZONE=${TIME_ZONE}
      - REDIS_URL=${REDIS_URL}
      - PAPERLESS_API_URL=${PAPERLESS_API_URL}
      - PAPERLESS_API_TOKEN=${PAPERLESS_API_TOKEN}
    ports:
      - "8081:8000"
    volumes:
      - media_files:/app/media
    command: ["gunicorn", "core.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "3"]

  worker:
    image: ghcr.io/remmerinio/stiftung-management-system:latest
    restart: unless-stopped
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - DB_HOST=${DB_HOST}
      - DB_PORT=${DB_PORT}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_DEBUG=${DJANGO_DEBUG}
      - REDIS_URL=${REDIS_URL}
    depends_on:
      - redis
      - db
    command: ["celery", "-A", "core", "worker", "-l", "info"]

  beat:
    image: ghcr.io/remmerinio/stiftung-management-system:latest
    restart: unless-stopped
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - DB_HOST=${DB_HOST}
      - DB_PORT=${DB_PORT}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_DEBUG=${DJANGO_DEBUG}
      - REDIS_URL=${REDIS_URL}
    depends_on:
      - redis
      - db
    command: ["celery", "-A", "core", "beat", "-l", "info"]

  grampsweb:
    image: ghcr.io/gramps-project/grampsweb:latest
    ports:
      - "8090:80"
    environment:
      - GRAMPSWEB_SECRET_KEY=${GRAMPSWEB_SECRET_KEY}
      - GRAMPSWEB_ADMIN_EMAIL=${GRAMPSWEB_ADMIN_EMAIL}
      - GRAMPSWEB_ADMIN_PASSWORD=${GRAMPSWEB_ADMIN_PASSWORD}
    volumes:
      - gramps_data:/app/data

  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    ports:
      - "8080:8000"
    environment:
      - PAPERLESS_REDIS=redis://redis:6379
      - PAPERLESS_DBHOST=db
      - PAPERLESS_DBPORT=5432
      - PAPERLESS_DBNAME=${POSTGRES_DB}
      - PAPERLESS_DBUSER=${POSTGRES_USER}
      - PAPERLESS_DBPASS=${POSTGRES_PASSWORD}
      - PAPERLESS_SECRET_KEY=${PAPERLESS_SECRET_KEY}
      - PAPERLESS_URL=https://vhtv-stiftung.de
      - PAPERLESS_ALLOWED_HOSTS=vhtv-stiftung.de,localhost
      - PAPERLESS_CORS_ALLOWED_HOSTS=https://vhtv-stiftung.de
      - PAPERLESS_FORCE_SCRIPT_NAME=/paperless
      - PAPERLESS_STATIC_URL=/paperless/static/
      - PAPERLESS_LOGIN_REDIRECT_URL=/paperless/
      - PAPERLESS_LOGOUT_REDIRECT_URL=/paperless/
      - PAPERLESS_ADMIN_USER=${PAPERLESS_ADMIN_USER}
      - PAPERLESS_ADMIN_PASSWORD=${PAPERLESS_ADMIN_PASSWORD}
      - PAPERLESS_ADMIN_MAIL=${PAPERLESS_ADMIN_MAIL}
    volumes:
      - paperless_data:/usr/src/paperless/data
      - paperless_media:/usr/src/paperless/media
      - paperless_export:/usr/src/paperless/export
      - paperless_consume:/usr/src/paperless/consume
    depends_on:
      - db
      - redis

volumes:
  dbdata:
  gramps_data:
  paperless_data:
  paperless_media:
  paperless_export:
  paperless_consume:
  media_files:

View File

@@ -0,0 +1,12 @@
#!/bin/bash
# Quick fix for remaining volume paths in docker-compose.prod.yml
cd /opt/stiftung
git pull
# Fix remaining volume paths in docker-compose.prod.yml
sed -i 's|./app/media|../app/media|g' deploy-production/docker-compose.prod.yml
sed -i 's|./backups|../app/backups|g' deploy-production/docker-compose.prod.yml
echo "✅ Volume paths fixed!"
echo "Now you can run: cd deploy-production && docker-compose -f docker-compose.prod.yml up -d"

View File

@@ -0,0 +1,234 @@
#!/bin/bash
# Data Migration Script from Synology to Ubuntu Server
# Run this script after the initial deployment is complete
set -e
echo "=== Stiftung Data Migration Script ==="
echo "This script migrates data from Synology NAS to Ubuntu Server"
# Check if running as stiftung user
if [ "$USER" != "stiftung" ]; then
    echo "Error: This script must be run as the 'stiftung' user"
    echo "Run: su - stiftung"
    exit 1
fi
# Check if we're in the right directory
if [ ! -f "docker-compose.prod.yml" ]; then
    echo "Error: docker-compose.prod.yml not found"
    echo "Make sure you're in the /opt/stiftung directory"
    exit 1
fi
# Check if containers are running (status reads "Up" or "running" depending on Compose version)
if ! docker compose -f docker-compose.prod.yml ps | grep -qiE "up|running"; then
    echo "Error: Containers are not running"
    echo "Run: ./deploy.sh first"
    exit 1
fi
echo "=== Step 1: Preparing for Migration ==="
# Create backup directory
mkdir -p migration-data
cd migration-data
echo "Please upload your backup files to /opt/stiftung/migration-data/"
echo "Expected files:"
echo " - full_backup_YYYYMMDD.json (Django data export)"
echo " - db_backup_YYYYMMDD.sql (PostgreSQL dump)"
echo " - media_backup_YYYYMMDD.tar.gz (Media files)"
echo ""
echo "Press Enter when files are uploaded..."
read
# Verify backup files exist
if ! ls full_backup_*.json >/dev/null 2>&1; then
    echo "Error: No Django backup file found (full_backup_*.json)"
    exit 1
fi
if ! ls db_backup_*.sql >/dev/null 2>&1; then
    echo "Error: No database backup file found (db_backup_*.sql)"
    exit 1
fi
if ! ls media_backup_*.tar.gz >/dev/null 2>&1; then
    echo "Error: No media backup file found (media_backup_*.tar.gz)"
    exit 1
fi
echo "✓ All backup files found"
echo "=== Step 2: Database Migration ==="
echo "Choose migration method:"
echo "1) Load Django JSON backup (recommended for clean migration)"
echo "2) Restore PostgreSQL dump (faster, but may have compatibility issues)"
echo "3) Skip database migration"
read -p "Enter choice (1-3): " migration_choice
case $migration_choice in
    1)
        echo "Loading Django JSON backup..."
        # Copy JSON file to container
        JSON_FILE=$(ls full_backup_*.json | head -1)
        docker cp "$JSON_FILE" stiftung-web:/tmp/backup.json
        # Load data
        echo "This may take several minutes..."
        docker compose -f ../docker-compose.prod.yml exec web python manage.py loaddata /tmp/backup.json
        echo "✓ Django data loaded successfully"
        ;;
    2)
        echo "Restoring PostgreSQL dump..."
        SQL_FILE=$(ls db_backup_*.sql | head -1)
        # Get database credentials from .env
        source ../.env
        # Drop and recreate the database for a clean state; the postgres
        # image's superuser role is $POSTGRES_USER, so connect as that user
        docker compose -f ../docker-compose.prod.yml exec db psql -U $POSTGRES_USER -d postgres -c "DROP DATABASE IF EXISTS $POSTGRES_DB WITH (FORCE);"
        docker compose -f ../docker-compose.prod.yml exec db psql -U $POSTGRES_USER -d postgres -c "CREATE DATABASE $POSTGRES_DB OWNER $POSTGRES_USER;"
        # Restore data
        docker cp "$SQL_FILE" stiftung-db:/tmp/backup.sql
        docker compose -f ../docker-compose.prod.yml exec db psql -U $POSTGRES_USER -d $POSTGRES_DB -f /tmp/backup.sql
        # Run migrations to ensure schema is up to date
        docker compose -f ../docker-compose.prod.yml exec web python manage.py migrate
        echo "✓ PostgreSQL data restored successfully"
        ;;
    3)
        echo "Skipping database migration"
        ;;
    *)
        echo "Invalid choice, skipping database migration"
        ;;
esac
echo "=== Step 3: Media Files Migration ==="
read -p "Restore media files? (y/n): " restore_media
if [[ $restore_media =~ ^[Yy]$ ]]; then
    echo "Restoring media files..."
    MEDIA_FILE=$(ls media_backup_*.tar.gz | head -1)
    # Extract media files (the archive was created from data/uploads/ on the
    # Synology; adjust paths so the contents land under ../app/media/)
    tar -xzf "$MEDIA_FILE" -C ../app/
    # Fix permissions
    chown -R stiftung:stiftung ../app/media/
    echo "✓ Media files restored successfully"
fi
echo "=== Step 4: User Account Migration ==="
echo "Creating superuser account for production..."
echo "Note: You'll need to create a new superuser account"
echo "The old passwords won't work due to different SECRET_KEY"
read -p "Create new superuser now? (y/n): " create_user
if [[ $create_user =~ ^[Yy]$ ]]; then
    docker compose -f ../docker-compose.prod.yml exec web python manage.py createsuperuser
fi
echo "=== Step 5: Post-Migration Tasks ==="
# Run any additional migrations for new features
echo "Running migrations for new features..."
docker compose -f ../docker-compose.prod.yml exec web python manage.py migrate
# Rebuild search indexes if needed
echo "Rebuilding search indexes..."
docker compose -f ../docker-compose.prod.yml exec web python manage.py shell << 'EOF'
try:
    from django.core.management import call_command
    call_command('rebuild_index', interactive=False)
    print("Search indexes rebuilt successfully")
except Exception:
    print("No search indexes to rebuild")
EOF
# Verify HelpBox system
echo "Verifying HelpBox system..."
docker compose -f ../docker-compose.prod.yml exec web python manage.py shell << 'EOF'
from stiftung.models import HelpBox

count = HelpBox.objects.count()
print(f"HelpBoxes found: {count}")
if count == 0:
    print("Creating default help boxes...")
    from django.contrib.auth import get_user_model
    User = get_user_model()
    admin_user = User.objects.filter(is_superuser=True).first()
    if admin_user:
        help_boxes = [
            ('destinataer_new', 'Neuer Destinatär', '## Destinatär erstellen\n\nHier können Sie einen neuen Destinatär anlegen.'),
            ('foerderung_new', 'Neue Förderung', '## Förderung erstellen\n\nErstellen Sie hier eine neue Förderung.'),
            ('unterstuetzung_new', 'Neue Unterstützung', '## Unterstützung erstellen\n\nLegen Sie eine neue Unterstützung an.'),
        ]
        for page_key, title, content in help_boxes:
            HelpBox.objects.get_or_create(
                page_key=page_key,
                defaults={
                    'title': title,
                    'content': content,
                    'is_active': True,
                    'created_by': admin_user,
                    'updated_by': admin_user
                }
            )
        print("Default help boxes created")
EOF
echo "=== Step 6: Verification ==="
echo "Verifying migration results..."
# Test database connectivity
if docker compose -f ../docker-compose.prod.yml exec web python manage.py check; then
    echo "✓ Django system check passed"
else
    echo "✗ Django system check failed"
fi
# Test web interface (the web container publishes host port 8081)
if curl -f -s http://localhost:8081/health/ > /dev/null; then
    echo "✓ Web interface is responding"
else
    echo "✗ Web interface is not responding"
fi
# Count migrated objects
echo "Migration summary:"
docker compose -f ../docker-compose.prod.yml exec web python manage.py shell << 'EOF'
from django.apps import apps

for model in apps.get_models():
    if hasattr(model, '_meta') and model._meta.app_label == 'stiftung':
        count = model.objects.count()
        print(f" {model.__name__}: {count} objects")
EOF
echo ""
echo "=== Migration Complete! ==="
echo ""
echo "Your application has been migrated from Synology to Ubuntu server."
echo ""
echo "What to test:"
echo "1. Login with your new superuser account"
echo "2. Verify all data is present and correct"
echo "3. Test HelpBox system on creation pages"
echo "4. Test PDF exports and reports"
echo "5. Verify Förderung search functionality"
echo ""
echo "Access your application at:"
echo " - Main app: http://217.154.84.225"
echo " - Admin: http://217.154.84.225/admin/"
echo " - HelpBox Admin: http://217.154.84.225/help-box/admin/"
echo ""
echo "If everything works correctly, you can:"
echo "1. Update your DNS to point to the new server"
echo "2. Set up SSL certificates"
echo "3. Decommission the old Synology deployment"
cd ..
echo "Migration data is preserved in: /opt/stiftung/migration-data/"

View File

@@ -0,0 +1,116 @@
# HTTP server block - redirect to HTTPS
server {
    listen 80;
    server_name vhtv-stiftung.de www.vhtv-stiftung.de;

    # Redirect all HTTP traffic to HTTPS
    return 301 https://$server_name$request_uri;
}

# HTTPS server block
server {
    listen 443 ssl http2;
    server_name vhtv-stiftung.de www.vhtv-stiftung.de;

    # SSL Certificate Configuration
    ssl_certificate /etc/letsencrypt/live/vhtv-stiftung.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vhtv-stiftung.de/privkey.pem;

    # SSL Security Settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # HSTS (HTTP Strict Transport Security)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Enhanced Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' https: data: blob: 'unsafe-inline'" always;

    # Static files
    location /static/ {
        alias /opt/stiftung/app/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location /media/ {
        alias /opt/stiftung/app/media/;
        expires 1y;
        add_header Cache-Control "public";
    }

    # Django application
    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    # Paperless-ngx document management
    location /paperless/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Script-Name /paperless;

        # Large file uploads for documents
        client_max_body_size 100M;
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
    }

    # Gramps Web (optional; published on host port 8090 in docker-compose)
    location /gramps/ {
        proxy_pass http://127.0.0.1:8090/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Health check endpoint
    location /health/ {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }

    # Block access to sensitive files
    location ~ /\. {
        deny all;
    }
    location ~ ^/(\.env|docker-compose|Dockerfile) {
        deny all;
    }
}

View File

@@ -0,0 +1,49 @@
#!/bin/bash
# Server Setup Script for Ubuntu 22.04
# Run as root: bash server-setup.sh
echo "=== Installing Docker and Docker Compose ==="
apt update
apt install -y ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Set up Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Start and enable Docker
systemctl start docker
systemctl enable docker
echo "=== Installing Additional Tools ==="
apt install -y git nginx certbot python3-certbot-nginx ufw htop
echo "=== Setting up Firewall ==="
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 80
ufw allow 443
ufw --force enable
echo "=== Creating Application User ==="
useradd -m -s /bin/bash stiftung
usermod -aG docker stiftung
echo "=== Creating Application Directory ==="
mkdir -p /opt/stiftung
chown stiftung:stiftung /opt/stiftung
echo "=== Setup Complete! ==="
echo "Next steps:"
echo "1. Switch to stiftung user: su - stiftung"
echo "2. Clone your repository to /opt/stiftung"
echo "3. Configure environment variables"
echo "4. Start the application"

View File

@@ -0,0 +1,75 @@
#!/bin/bash
# Paperless-ngx Production Setup Script
# Run this script after deploying the updated Docker Compose configuration
set -e
echo "🔧 Setting up Paperless-ngx in production..."
# Check if we're in the right directory
if [ ! -f "docker-compose.yml" ]; then
    echo "❌ Error: docker-compose.yml not found. Please run this script from /opt/stiftung"
    exit 1
fi
# Generate a random secret key for Paperless
echo "🔑 Generating Paperless secret key..."
PAPERLESS_SECRET=$(openssl rand -base64 50 | tr -d "=+/" | cut -c1-50)
echo "📝 Please add this to your app/.env file:"
echo "PAPERLESS_SECRET_KEY=$PAPERLESS_SECRET"
echo ""
echo "Press Enter to continue after you've added the secret key to your .env file..."
read -r
# Restart containers to pick up new environment variables
echo "🔄 Restarting containers..."
docker-compose down
docker-compose up -d
# Wait for database to be ready
echo "⏳ Waiting for database to be ready..."
sleep 60
# Wait for Paperless to be ready
echo "⏳ Waiting for Paperless to initialize..."
sleep 30
# Check if Paperless is running
echo "🔍 Checking Paperless container status..."
docker-compose ps paperless
# Run Paperless migrations
echo "📊 Running Paperless migrations..."
docker-compose exec -T paperless python3 manage.py migrate --no-input
# Create Paperless superuser non-interactively
echo "👤 Creating Paperless superuser..."
docker-compose exec -T paperless python3 manage.py shell -c "
from django.contrib.auth import get_user_model
User = get_user_model()
if not User.objects.filter(username='admin').exists():
    User.objects.create_superuser('admin', 'admin@vhtv-stiftung.de', 'admin123')
    print('Admin user created successfully')
else:
    print('Admin user already exists')
"
echo ""
echo "✅ Paperless-ngx setup complete!"
echo ""
echo "📚 Login details:"
echo "URL: https://vhtv-stiftung.de/paperless/"
echo "Username: admin"
echo "Password: admin123"
echo ""
echo "⚠️ IMPORTANT: Change the admin password immediately after first login!"
echo ""
echo "📋 Next steps:"
echo "1. Visit https://vhtv-stiftung.de/paperless/ and change the admin password"
echo "2. Go to https://vhtv-stiftung.de/paperless/admin/"
echo "3. Navigate to Authentication and Authorization > Tokens"
echo "4. Create a new token for the admin user"
echo "5. Add the token to your app/.env file as PAPERLESS_API_TOKEN"
echo "6. Restart containers: docker-compose restart"