Livestream Archiver

A Rust application that uses an event-driven architecture to monitor a directory for livestream recordings, process them with hardware-accelerated transcoding, and sync them to a remote server.

Features

  • Event-Driven Architecture: Uses filesystem events for instant file detection
  • Automatic Detection: Monitors a configured directory for new MP4 files
  • Smart Processing: Waits for files to be completely written before processing
  • Remote Sync: Uses rsync to sync files to a remote server immediately after detection
  • Hardware Acceleration: Converts files to AV1 using Intel QSV hardware acceleration
  • Organized Storage: Creates date-based directory structure (Jellyfin-compatible)
  • Metadata Generation: Creates NFO files for media servers
  • Retry Logic: Caches and retries failed sync operations
  • Service Integration: Runs as a systemd service with automatic restart
  • Environment Configuration: Uses .env file for easy configuration management

Prerequisites

  • Rust toolchain (1.70 or later)
  • ffmpeg with Intel QSV support
  • rsync for file synchronization
  • SSH key authentication for passwordless rsync (if using remote sync)

Installation

Building from Source

# Clone the repository
git clone <repository-url>
cd livestream-archiver

# Build release binary
cargo build --release

# Binary will be at target/release/livestream_archiver

Quick Deploy

  1. Copy and customize the example files:
cp .env.example .env
cp livestream-archiver.service.example livestream-archiver.service
cp deploy.sh.example deploy.sh
# Edit all files with your specific configuration
  2. Run the deployment:
./deploy.sh

Configuration

Environment Variables (.env file)

Create a .env file in the project root with the following variables:

  • INPUT_DIR: Directory to monitor for new MP4 files
  • OUTPUT_DIR: Local output directory for processed files
  • PC_SYNC_TARGET: Remote sync destination (e.g., user@server:/path/to/destination/)

Example .env file:

INPUT_DIR=/home/user/livestreams
OUTPUT_DIR=/media/archive/livestreams  
PC_SYNC_TARGET=user@server:/remote/path/to/livestreams/
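
How the binary consumes these variables is not shown in this README; the following is a minimal sketch of loading them, assuming the dotenvy crate (the actual implementation may use a different approach):

use std::env;

struct Config {
    input_dir: String,
    output_dir: String,
    pc_sync_target: String,
}

fn load_config() -> Result<Config, env::VarError> {
    // Load variables from a .env file in the working directory, if one exists
    // (dotenvy is an assumption; any missing variable below is an error).
    let _ = dotenvy::dotenv();
    Ok(Config {
        input_dir: env::var("INPUT_DIR")?,
        output_dir: env::var("OUTPUT_DIR")?,
        pc_sync_target: env::var("PC_SYNC_TARGET")?,
    })
}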

Service Configuration

  1. Copy the example service file:
cp livestream-archiver.service.example livestream-archiver.service
  2. Edit the service file with your configuration:
  • Update User and Group
  • Set correct paths for WorkingDirectory and ExecStart
  • Ensure the service can access your .env file
  3. Install and start the service:
sudo cp livestream-archiver.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable livestream-archiver
sudo systemctl start livestream-archiver

Usage

Running Manually

# Run directly (uses .env file automatically)
cargo run

# Or run the compiled binary
./target/release/livestream_archiver

Running as a Service

# Check service status
sudo systemctl status livestream-archiver

# View logs
sudo journalctl -u livestream-archiver -f

# Restart service
sudo systemctl restart livestream-archiver

How It Works

The application uses an event-driven architecture for efficient file monitoring (a sketch of the first two steps follows this list):

  1. Event Monitoring: Uses filesystem events to detect new files instantly (no polling)
  2. File Validation: Waits for files to be completely written before processing
  3. Immediate Sync: Syncs new files to remote server via rsync as soon as they're detected
  4. Hardware Processing: Converts to AV1 format using Intel QSV hardware acceleration
  5. Smart Organization: Moves processed files to date-based directories
  6. Metadata Generation: Creates NFO files for media server compatibility
  7. Cleanup: Optionally removes original files after successful processing
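
The archiver's source is not reproduced here, but steps 1-2 can be illustrated with a short, hypothetical Rust sketch using the notify crate for filesystem events and a simple size-stability check to decide when a recording has finished writing (both are assumptions about the implementation):

use std::{fs, path::Path, sync::mpsc, thread, time::Duration};
use notify::{recommended_watcher, Event, EventKind, RecursiveMode, Watcher};

// Treat a file as "completely written" once its size stops changing between polls.
fn wait_until_stable(path: &Path) -> std::io::Result<()> {
    let mut last = fs::metadata(path)?.len();
    loop {
        thread::sleep(Duration::from_secs(5));
        let now = fs::metadata(path)?.len();
        if now == last {
            return Ok(());
        }
        last = now;
    }
}

fn main() -> notify::Result<()> {
    let input_dir = std::env::var("INPUT_DIR").expect("INPUT_DIR must be set");
    let (tx, rx) = mpsc::channel::<notify::Result<Event>>();
    let mut watcher = recommended_watcher(tx)?;
    watcher.watch(Path::new(&input_dir), RecursiveMode::NonRecursive)?;

    for event in rx {
        let event = event?;
        if matches!(event.kind, EventKind::Create(_)) {
            for path in event.paths {
                if path.extension().and_then(|e| e.to_str()) == Some("mp4") {
                    wait_until_stable(&path).ok();
                    // Hand off to sync and transcoding (steps 3-4) here.
                }
            }
        }
    }
    Ok(())
}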

Directory Structure

Output files are organized by date for easy browsing:

<output-directory>/
├── 2024/
│   ├── 01-January/
│   │   ├── Recording - January 01 2024.mp4
│   │   ├── Recording - January 01 2024.nfo
│   │   └── ...
│   └── 02-February/
│       └── ...
└── 2025/
    └── ...
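
As an illustration of this layout (not the tool's actual code), here is a sketch that derives the dated destination path with the chrono crate and writes a minimal Kodi/Jellyfin-style NFO next to the video; the exact fields the archiver writes into its NFO files are an assumption:

use std::{fs, io, path::{Path, PathBuf}};
use chrono::{DateTime, Local};

// e.g. <output>/2024/01-January/Recording - January 01 2024.mp4
fn dated_destination(output_dir: &Path, recorded_at: DateTime<Local>) -> PathBuf {
    output_dir
        .join(recorded_at.format("%Y").to_string())    // 2024
        .join(recorded_at.format("%m-%B").to_string())  // 01-January
        .join(format!("Recording - {}.mp4", recorded_at.format("%B %d %Y")))
}

// Minimal sidecar NFO; the real files likely carry more metadata.
fn write_nfo(video_path: &Path, title: &str) -> io::Result<()> {
    let nfo_path = video_path.with_extension("nfo");
    fs::write(&nfo_path, format!("<movie>\n  <title>{title}</title>\n</movie>\n"))
}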

Troubleshooting

Service not starting

  • Check logs: journalctl -u livestream-archiver -f
  • Verify .env file exists and has correct permissions
  • Ensure all paths exist and have proper permissions
  • Verify ffmpeg has QSV support: ffmpeg -hwaccels | grep qsv

Sync failures

  • Verify SSH key authentication to remote server
  • Check network connectivity
  • Test rsync manually: rsync -av /local/path/ user@server:/remote/path/
  • Review cached sync attempts in ~/.cache/livestream-archiver/
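
For context, a hypothetical sketch of how a sync step might shell out to rsync from Rust, mirroring the manual test command above (the archiver's actual rsync flags and retry handling are not documented here):

use std::{io, path::Path, process::Command};

// Equivalent to: rsync -av <local>/ user@server:/remote/path/
fn sync_to_remote(local: &Path, remote_target: &str) -> io::Result<bool> {
    let status = Command::new("rsync")
        .arg("-av")
        .arg(local)
        .arg(remote_target)
        .status()?;
    // On failure, the archiver caches the attempt and retries it later.
    Ok(status.success())
}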

Processing issues

  • Ensure Intel QSV drivers are installed
  • Check available disk space in both input and output directories
  • Verify file permissions in input/output directories
  • Test hardware acceleration: ffmpeg -hwaccel qsv -i input.mp4 -c:v av1_qsv output.mp4
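
The same conversion can be driven programmatically; a sketch that wraps the test command above with std::process::Command (the archiver's real ffmpeg invocation may add options such as bitrate, preset, or audio settings):

use std::{io, path::Path, process::Command};

// Equivalent to: ffmpeg -hwaccel qsv -i <input> -c:v av1_qsv <output>
fn transcode_to_av1_qsv(input: &Path, output: &Path) -> io::Result<bool> {
    let status = Command::new("ffmpeg")
        .args(["-hwaccel", "qsv"])
        .arg("-i")
        .arg(input)
        .args(["-c:v", "av1_qsv"])
        .arg(output)
        .status()?;
    Ok(status.success())
}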

Files not detected

  • Ensure the input directory exists and is accessible
  • Check that the application has read permissions on the input directory
  • Verify filesystem events are working (may not work on some network filesystems)

Development

Running Tests

cargo test

Building for Debug

cargo build

Development Environment

# Copy example env for development
cp .env.example .env
# Edit .env with your development paths

# Run with debug output
RUST_LOG=debug cargo run

Performance Notes

  • Uses filesystem events instead of polling for better performance
  • Hardware acceleration significantly reduces CPU usage during transcoding
  • Processes files as they arrive rather than batch processing
  • Minimal memory footprint due to streaming processing approach

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.