<!-- Green-Grimoire/Library/Stash-Management.md -->
---
title: Stashapp Workflow
description:
published: true
date: 2026-02-20T04:25:56.467Z
tags:
editor: markdown
dateCreated: 2026-02-18T13:08:53.604Z
---
# StashApp: Automated Library Management with Community Scrapers

> **Goal:** Automatically identify, tag, rename, and organize your media library with minimal manual intervention using StashDB, ThePornDB, and the CommunityScrapers repository.

---
## Table of Contents

1. [Prerequisites](#1-prerequisites)
2. [Installing CommunityScrapers](#2-installing-communityscrapers)
3. [Configuring Metadata Providers](#3-configuring-metadata-providers)
   - [StashDB](#31-stashdb)
   - [ThePornDB (TPDB)](#32-theporndb-tpdb)
4. [Configuring Your Library](#4-configuring-your-library)
5. [Automated File Naming & Moving](#5-automated-file-naming--moving)
6. [The Core Workflow](#6-the-core-workflow)
7. [Handling ABMEA & Amateur Content](#7-handling-abmea--amateur-content)
8. [Automation with Scheduled Tasks](#8-automation-with-scheduled-tasks)
9. [Tips & Troubleshooting](#9-tips--troubleshooting)

---
## 1. Prerequisites

Before starting, make sure you have:

- **StashApp installed and running** — see the [official install docs](https://github.com/stashapp/stash/wiki/Installation)
- **Git installed** on your system (needed to clone the scrapers repo)
- **A ThePornDB account** — free tier available at [metadataapi.net](https://metadataapi.net)
- **A StashDB account** — requires a community invite; request one on [the Discord](https://discord.gg/2TsNFKt)
- Your Stash config directory noted — default locations:

| OS | Default Path |
|----|-------------|
| Windows | `%APPDATA%\stash` |
| macOS | `~/.stash` |
| Linux | `~/.stash` |
| Docker | `/root/.stash` |

---
## 2. Installing CommunityScrapers

The [CommunityScrapers](https://github.com/stashapp/CommunityScrapers) repository contains scrapers for hundreds of sites maintained by the Stash community. This is the primary source for site-specific scrapers including ABMEA.

### Step 1 — Navigate to your Stash config directory

```bash
cd ~/.stash
```

### Step 2 — Create a scrapers directory if it doesn't exist

```bash
mkdir -p scrapers
cd scrapers
```

### Step 3 — Clone the CommunityScrapers repository

```bash
git clone https://github.com/stashapp/CommunityScrapers.git
```

This creates `~/.stash/scrapers/CommunityScrapers/` containing all available scrapers.

### Step 4 — Verify Stash detects the scrapers

1. Open Stash in your browser (default: `http://localhost:9999`)
2. Go to **Settings → Metadata Providers → Scrapers**
3. Click **Reload Scrapers**
4. You should now see a long list of scrapers including entries for ABMEA, ManyVids, Clips4Sale, etc.
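If the list stays empty, a quick command-line sanity check tells you whether the clone itself is the problem. A minimal sketch — the path follows the steps above; adjust it if your config directory differs:

```shell
# Count scraper definitions on disk; prints 0 if the clone is missing or empty
SCRAPER_DIR="${HOME}/.stash/scrapers/CommunityScrapers"
find "$SCRAPER_DIR" -name '*.yml' 2>/dev/null | wc -l
```

A healthy clone reports several hundred `.yml` files; `0` means the clone (not Stash) needs fixing.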
### Step 5 — Keep scrapers updated

Since community scrapers are actively maintained, set up a periodic update:

```bash
cd ~/.stash/scrapers/CommunityScrapers
git pull
```

> 💡 **Tip:** You can automate this with a cron job or scheduled task. See [Section 8](#8-automation-with-scheduled-tasks).

### Installing Python Dependencies (if prompted)

Some scrapers require Python packages. If you see scraper errors mentioning missing modules:

```bash
pip install requests cloudscraper lxml beautifulsoup4
```
---

## 3. Configuring Metadata Providers

Stash uses **metadata providers** to automatically match scenes by fingerprint (phash/oshash). This is what enables true automation — no filename matching required.

### 3.1 StashDB

StashDB is the official community-run fingerprint and metadata database. It is the most reliable source for mainstream and studio content.

1. Go to **Settings → Metadata Providers**
2. Under **Stash-Box Endpoints**, click **Add**
3. Fill in:
   - **Name:** `StashDB`
   - **Endpoint:** `https://stashdb.org/graphql`
   - **API Key:** *(generate this from your StashDB account → API Keys)*
4. Click **Confirm**
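Before running a full Identify pass, you can confirm the key is valid from a terminal. A sketch only — the `ApiKey` header and the `me` query are assumptions based on the stash-box GraphQL convention; adjust if the schema differs:

```shell
# Replace YOUR_KEY with the key from your StashDB account page
curl -s https://stashdb.org/graphql \
  -H "ApiKey: YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query":"query { me { name } }"}'
```

A working key returns your account name; a bad key returns an authentication error.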
### 3.2 ThePornDB (TPDB)

TPDB aggregates metadata from a large number of sites and is especially useful for amateur, clip site, and ABMEA content that may not be on StashDB.

1. Log in at [metadataapi.net](https://metadataapi.net) and go to your **API Settings** to get your key
2. In Stash, go to **Settings → Metadata Providers**
3. Under **Stash-Box Endpoints**, click **Add**
4. Fill in:
   - **Name:** `ThePornDB`
   - **Endpoint:** `https://theporndb.net/graphql`
   - **API Key:** *(your TPDB API key)*
5. Click **Confirm**

### Provider Priority Order

Set your identify task to query providers in this order for best results:

1. **StashDB** — highest quality, community-verified
2. **ThePornDB** — broad coverage including amateur/clip sites
3. **CommunityScrapers** (site-specific) — for anything not matched above

---
## 4. Configuring Your Library

### Adding Library Paths

1. Go to **Settings → Library**
2. Under **Directories**, click **Add** and point to your media folders
3. You can add multiple directories (e.g., separate drives or folders)

> ⚠️ **Do not** set your organized output folder as a source directory. Keep source and destination separate until you are confident in your setup.

### Recommended Directory Structure

```
/media/
├── stash-incoming/     ← Source: where new files land
└── stash-library/      ← Destination: where Stash moves organized files
    ├── Studios/
    │   └── ABMEA/
    └── Amateur/
```

---
## 5. Automated File Naming & Moving

This is the section that does the heavy lifting. Stash will rename and move files **only when a scene is marked as Organized**, which gives you a review gate before anything is touched.

### Enable File Moving

1. Go to **Settings → Library**
2. Enable **"Move files to organized folder on organize"**
3. Set your **Organized folder path** (e.g., `/media/stash-library`)

### Configure the File Naming Template

Still in **Settings → Library**, set your **Filename template**. Templates use Stash variables, either in the shorthand `{variable}` form or in full Go template syntax (shown in the fallback example below).

**Recommended template for mixed studio/amateur libraries:**

```
{studio}/{date} {title}
```

**For performer-centric amateur libraries:**

```
{performers}/{studio}/{date} {title}
```

**Full example with fallbacks:**

```
{{if .Studio}}{{.Studio.Name}}{{else}}Unknown{{end}}/{{if .Date}}{{.Date}}{{else}}0000-00-00{{end}} {{.Title}}
```
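To sanity-check a template before pointing it at real files, expand it by hand for one sample scene. A minimal shell sketch — the metadata values are made up for illustration:

```shell
# Hypothetical scene metadata
studio="ABMEA"; date="2024-03-15"; title="Example Scene"
# Manual expansion of the {studio}/{date} {title} template
printf '%s/%s %s.mp4\n' "$studio" "$date" "$title"
# → ABMEA/2024-03-15 Example Scene.mp4
```

If the printed path is not where you want the file to land, fix the template before organizing anything.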
### Available Template Variables

| Variable | Example Output |
|----------|---------------|
| `{title}` | `Scene Title Here` |
| `{date}` | `2024-03-15` |
| `{studio}` | `ABMEA` |
| `{performers}` | `Jane Doe` |
| `{resolution}` | `1080p` |
| `{duration}` | `00-32-15` |
| `{rating}` | `5` |

> 💡 If a field is empty (e.g., no studio), Stash skips that path segment. Test with a few scenes before running on your whole library.

---
## 6. The Core Workflow

Follow these steps **in order** every time you add new content. This is the automated pipeline.

```
New Files → Scan → Generate Fingerprints → Identify → Review → Organize (Move + Rename)
```

### Step 1 — Scan

**Tasks → Scan**

- Discovers new files and adds them to the database
- Does not move or rename anything yet
- Options to enable: **Generate covers on scan**

### Step 2 — Generate Fingerprints

**Tasks → Generate**

Select these options:

| Option | Purpose |
|--------|---------|
| ✅ **Phashes** | Used for fingerprint matching against StashDB/TPDB |
| ✅ **Checksums (MD5/SHA256)** | Used for duplicate detection |
| ✅ **Previews** | Thumbnail previews in the UI |
| ✅ **Sprites** | Timeline scrubber images |

> ⏳ This step is CPU/GPU intensive. Let it complete before proceeding. On a large library, this may take hours.

### Step 3 — Identify (Auto-Scrape by Fingerprint)

**Tasks → Identify**

This is the magic step. Stash sends your file fingerprints to StashDB and TPDB and pulls back metadata automatically.

Configure the task:

1. Click **Add Source** and add **StashDB** first
2. Click **Add Source** again and add **ThePornDB**
3. Under **Options**, enable:
   - ✅ Set cover image
   - ✅ Set performers
   - ✅ Set studio
   - ✅ Set tags
   - ✅ Set date
4. Click **Identify**

Stash will now automatically match and populate metadata for any scene it recognizes by fingerprint.

### Step 4 — Auto Tag (Filename-Based Fallback)

For scenes that didn't match by fingerprint (common with amateur content), use Auto Tag to extract metadata from filenames.

**Tasks → Auto Tag**

- Matches **Performers**, **Studios**, and **Tags** from filenames against your existing database entries
- Works best when filenames contain names (e.g., `JaneDoe_SceneTitle_1080p.mp4`)
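Consistent delimiters are what make that matching reliable. As a standalone illustration (not part of Stash itself), an underscore-delimited filename splits cleanly with plain shell parameter expansion:

```shell
fname="JaneDoe_SceneTitle_1080p.mp4"
performer="${fname%%_*}"   # first token: JaneDoe
rest="${fname#*_}"         # SceneTitle_1080p.mp4
title="${rest%%_*}"        # second token: SceneTitle
echo "$performer - $title"
# → JaneDoe - SceneTitle
```

Files named with spaces, mixed separators, or release-group junk will tokenize far less predictably, so normalize names before ingest where you can.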
### Step 5 — Review Unmatched Scenes

Filter to find scenes that still need attention:

1. Go to **Scenes**
2. Filter by: **Organized = false** and **Studio = none** (or **Performers = none**)
3. Use the **Tagger view** (icon in top right of Scenes) for rapid URL-based scraping

In Tagger view:
- Paste the original source URL into the scrape field
- Click **Scrape** — Stash fills in all metadata from that URL
- Review and click **Save**

### Step 6 — Organize (Move & Rename)

Once you're satisfied with a scene's metadata:

1. Open the scene
2. Click the **Organize** button (checkmark icon), OR
3. Use **bulk organize**: select multiple scenes → Edit → Mark as Organized

When a scene is marked Organized, Stash will:
- ✅ Rename the file according to your template
- ✅ Move it to your organized folder
- ✅ Update the database path

> ⚠️ **This action cannot be easily undone at scale.** Always verify metadata on a small batch first.
---

## 7. Handling ABMEA & Amateur Content

ABMEA and amateur clips often lack fingerprint matches. Use these additional strategies:

### ABMEA-Specific Scraper

The CommunityScrapers repo includes an ABMEA scraper. To use it manually:

1. Open a scene in Stash
2. Click **Edit → Scrape with → ABMEA**
3. If the scene URL is known, enter it; otherwise the scraper will search by title

### Batch URL Scraping Workflow for ABMEA

If you have many files sourced from ABMEA:

1. Before ingesting files, **rename them to include the ABMEA scene ID** in the filename if possible (e.g., `ABMEA-0123_title.mp4`)
2. After scanning, go to **Tagger View**
3. Filter to unmatched scenes and paste ABMEA URLs one by one
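The ID-in-filename convention from step 1 is just string assembly, so it scripts easily. A hypothetical sketch — the `ABMEA-<id>_` pattern mirrors the example above and is a local convention, not anything Stash requires:

```shell
# Build the ID-prefixed name for one incoming file
id="0123"
file="hot_scene.mp4"
new="ABMEA-${id}_${file}"
echo "$new"
# → ABMEA-0123_hot_scene.mp4
```

Wrap this in a loop over a scene-ID-to-filename list if you have many files to prepare at once.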
### Amateur Content Without a Source Site

For truly anonymous amateur clips:

1. Create a **Studio** entry called `Amateur` (or more specific names like `Amateur - Reddit`)
2. Create **Performer** entries for recurring people you can identify
3. Use **Auto Tag** to match these once entries exist
4. Use tags liberally to compensate for missing structured metadata: `amateur`, `homemade`, `POV`, etc.

### Tag Hierarchy Recommendation

Assign parent tags (edit a tag and set its parent) to create a browsable hierarchy:

```
Content Type
├── Amateur
├── Professional
└── Compilation

Source
├── ABMEA
├── Clip Site
└── Unknown

Quality
├── 4K
├── 1080p
└── SD
```
---

## 8. Automation with Scheduled Tasks

Minimize manual steps by scheduling recurring tasks.

### Setting Up Scheduled Tasks in Stash

Go to **Settings → Tasks → Scheduled Tasks** and create:

| Task | Schedule | Purpose |
|------|----------|---------|
| Scan | Every 6 hours | Pick up new files automatically |
| Generate (Phashes only) | Every 6 hours | Fingerprint new files |
| Identify | Daily at 2am | Match new fingerprinted files |
| Auto Tag | Daily at 3am | Filename-based fallback tagging |
| Clean | Weekly | Remove missing files from database |

### Auto-Update CommunityScrapers (Linux/macOS)

Add to your crontab (`crontab -e`):

```bash
# Update CommunityScrapers every Sunday at midnight
0 0 * * 0 cd ~/.stash/scrapers/CommunityScrapers && git pull
```
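Pulling the repo only helps once Stash reloads its scraper list. A sketch of a combined update script you could call from cron instead — the `reloadScrapers` GraphQL mutation and the local endpoint are assumptions; verify them against your Stash version's schema before relying on this:

```shell
#!/usr/bin/env bash
# Update the scraper repo, then ask the local Stash to reload scrapers
set -euo pipefail
cd "$HOME/.stash/scrapers/CommunityScrapers"
git pull --ff-only
# Assumed mutation; check your Stash GraphQL playground at /playground
curl -s http://localhost:9999/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"mutation { reloadScrapers }"}'
```

Without the reload step, new or changed scrapers only appear after you click **Reload Scrapers** in the UI or restart Stash.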
### Auto-Update CommunityScrapers (Windows)

Create a scheduled task in Task Scheduler running:

```powershell
cd C:\Users\YourUser\.stash\scrapers\CommunityScrapers; git pull
```

---
## 9. Tips & Troubleshooting

### Scraper not appearing in Stash

- Go to **Settings → Metadata Providers → Scrapers** and click **Reload Scrapers**
- Check that the `.yml` scraper file is in a subdirectory of your scrapers folder
- Check Stash logs (**Settings → Logs**) for scraper loading errors

### Identify finds no matches

- Confirm phashes were generated (check scene details — phash should be populated)
- Confirm your StashDB/TPDB API keys are correctly entered and not expired
- The file may simply not be in either database — proceed to manual URL scraping

### Files not moving after marking as Organized

- Confirm **"Move files to organized folder"** is enabled in Settings → Library
- Confirm the organized folder path is set and the folder exists
- Check that Stash has write permissions to both source and destination
### Duplicate files

Run the duplicate checker (**Settings → Tools → Scene Duplicate Checker** in recent Stash versions) before organizing, to avoid moving duplicates into your library. Stash uses phash to find visual duplicates even if filenames differ.
### Metadata keeps getting overwritten

In **Settings → Scraping**, set the **Scrape behavior** to `If not set` instead of `Always` to prevent already-populated fields from being overwritten during re-scrapes.

### Useful Stash Plugins

Install via **Settings → Plugins → Browse Available Plugins**:

| Plugin | Purpose |
|--------|---------|
| **Performer Image Cleanup** | Remove duplicate performer images |
| **Tag Graph** | Visualize tag relationships |
| **Duplicate Finder** | Advanced duplicate management |
| **Stats** | Library analytics dashboard |

---

## Quick Reference Checklist

Use this checklist every time you add new content:

```
[ ] Drop files into stash-incoming directory
[ ] Tasks → Scan
[ ] Tasks → Generate → Phashes + Checksums
[ ] Tasks → Identify (StashDB → TPDB)
[ ] Tasks → Auto Tag
[ ] Review unmatched scenes in Tagger View
[ ] Manually scrape remaining unmatched scenes by URL
[ ] Spot-check metadata on a sample of scenes
[ ] Bulk select reviewed scenes → Mark as Organized
[ ] Verify a few files moved and renamed correctly
[ ] Done ✓
```

---

*Last updated: February 2026 | Stash version compatibility: 0.25+*
*Community resources: [Stash Discord](https://discord.gg/2TsNFKt) | [GitHub](https://github.com/stashapp/stash) | [Wiki](https://github.com/stashapp/stash/wiki)*
<!-- Green-Grimoire/Overview.md -->
---
title: Green Grimoire
description: Adult media stack — the satyr's private library
published: true
date: 2026-04-12T00:00:00.000Z
tags: green, adult, stash
editor: markdown
dateCreated: 2026-04-12T00:00:00.000Z
---
# Green Grimoire

![green_grimoire.png](/images/green_grimoire.png)

The Green Grimoire is the self-hosted adult media stack, on a separate host and domain from Netgrimoire. All services sit behind `*.wasted-bandwidth.net` and Authelia. Homepage tab: **Nucking-Futz**.

Data lives at `/data/nfs/Baxter/Green/` with two libraries: Clips and Movies.

---
## Services

| Service | URL | Port | Purpose | Host |
|---------|-----|------|---------|------|
| Stash (main) | `stash.wasted-bandwidth.net` | 9999 | Primary adult content library | znas / Compose |
| GreenFin (Jellyfinx) | Internal | 7096 | Green Door media server | docker5 / Compose |
| Namer | `namer.wasted-bandwidth.net` | 6980 | Scene file namer | znas / Compose |
| Whisparr | — | — | Adult content acquisition | znas / Swarm |
| NZBGet | — | — | Downloader | znas / Swarm |
| PocketStash | Internal | 9998 | Stash instance for Pocket Grimoire sync | znas / Compose |

---
## Data Structure

```
/data/nfs/Baxter/Green/
├── Clips/     ← Clips library
├── Movies/    ← Movies library
└── Pocket/    ← Synced to Pocket Grimoire pre-travel
```

---
## Pocket Integration

PocketStash (port 9998) is a separate Stash instance that maintains a curated subset for travel. Before a trip, `syncoid` pushes `vault/Green/Pocket` to the Pocket Grimoire laptop. The Pocket instance runs in read-only travel mode — no writes while traveling.
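The pre-travel push might look like the following — a sketch only: the destination hostname and target dataset path are assumptions, and the flag choice depends on how snapshots are managed on this pool:

```shell
# Push the Pocket dataset to the travel laptop before a trip
# (--no-sync-snap reuses existing snapshots instead of creating one)
syncoid --no-sync-snap vault/Green/Pocket root@pocket-laptop:vault/Green/Pocket
```

Run it while PocketStash is idle so the replicated dataset is consistent.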
See [Stash Integration](/Pocket-Grimoire/Software/Stash-Integration) in Pocket Grimoire docs.

---

## Sections

| | |
|---|---|
| [Stash Management](/Green-Grimoire/Library/Stash-Management) | Library config, scrapers, metadata workflow |
| [VHS Restoration](/Green-Grimoire/Scripts/VHS-Restoration) | Encoding, deinterlace, restoration scripts |
<!-- Green-Grimoire/Scripts/VHS-Restoration.md -->
---
title: Video Restoration Script
description: Restore VHS Video Captures
published: true
date: 2026-03-06T03:48:12.713Z
tags:
editor: markdown
dateCreated: 2026-03-06T03:48:05.841Z
---
# VHS Video Restoration — User Guide

A pipeline script for cleaning up and upscaling old VHS captures on Ubuntu 24.04.
Runs in two modes: a fast FFmpeg-only cleanup pass, and a full AI upscale using Real-ESRGAN.

---
## Requirements

- **Ubuntu 24.04**
- **FFmpeg** — `sudo apt install ffmpeg`
- **bc** — `sudo apt install bc`
- **Real-ESRGAN** (optional, for AI upscaling — see setup below)

---

## File Setup

Place everything in a working folder with this structure:

```
~/your-folder/
├── vhs_restore.sh
├── realesrgan-ncnn-vulkan   ← AI upscaler binary (optional)
├── models/                  ← Real-ESRGAN model files
├── input/                   ← Put your source videos here
├── output/                  ← Restored videos appear here
└── work/                    ← Temporary scratch files (auto-created)
```

Supported input formats: `.mpg`, `.mpeg`, `.mp4`, `.avi`, `.mov`, `.mkv`, `.wmv`, `.m4v`, `.ts`

---
## First-Time Setup

```bash
# Make the script executable
chmod +x vhs_restore.sh

# Create the input folder and add your videos
mkdir input
cp /path/to/your/videos/*.mpg input/
```

### Installing Real-ESRGAN (one-time, for AI upscaling)

1. Download the latest Ubuntu release from
   https://github.com/xinntao/Real-ESRGAN/releases
   → look for `realesrgan-ncnn-vulkan-*-ubuntu.zip`
2. Unzip into your working folder
3. `chmod +x realesrgan-ncnn-vulkan`

---
## Running the Script

### Quick cleanup only (recommended first pass)

Fast — processes in a few minutes per file. No AI upscaling.

```bash
./vhs_restore.sh --no-ai
```

### Full pipeline with AI upscaling

Slow on CPU (plan for several hours per hour of footage). Produces the best results.

```bash
./vhs_restore.sh
```

### All options

| Flag | Description | Default |
|------|-------------|---------|
| `-i DIR` | Input directory | `./input` |
| `-o DIR` | Output directory | `./output` |
| `-w DIR` | Scratch/work directory | `./work` |
| `-b PATH` | Path to Real-ESRGAN binary | `./realesrgan-ncnn-vulkan` |
| `-s 2` or `-s 4` | Upscale factor | `2` |
| `-q 16` | Output quality (0–51, lower = better) | `16` |
| `--no-ai` | Skip AI upscaling, FFmpeg only | off |
| `--keep` | Keep extracted PNG frames after processing | off |
| `-h` | Show help | |

**Examples:**

```bash
# Process files from a custom folder
./vhs_restore.sh -i ~/Videos/VHS -o ~/Videos/Restored

# 4x upscale with slightly smaller output file
./vhs_restore.sh -s 4 -q 18

# FFmpeg cleanup only, custom folders
./vhs_restore.sh -i ~/Videos/VHS -o ~/Videos/Restored --no-ai
```

---
---

## What the Script Does

**Stage 1 — FFmpeg cleanup** (always runs):
- Deinterlaces the video (`yadif`) — removes the horizontal combing artifacts common in VHS captures
- Denoises (`hqdn3d=2:1:2:2`) — gentle noise reduction that avoids motion blocking
- Sharpens edges (`unsharp`) — recovers detail softened by the denoise step
- Colour corrects — boosts washed-out VHS colour, adjusts contrast and gamma, corrects the green/yellow cast common in aged tape
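Roughly, Stage 1 amounts to one FFmpeg filter chain like the following — a sketch reconstructed from the filter names above and the `saturation=1.8` value mentioned in Troubleshooting; the script's exact `unsharp`/`eq` parameters may differ:

```shell
# Approximate Stage 1 invocation: deinterlace, denoise, sharpen, colour-correct
ffmpeg -i input.mpg \
  -vf "yadif=mode=1,hqdn3d=2:1:2:2,unsharp=5:5:0.8,eq=saturation=1.8:contrast=1.1:gamma=1.05" \
  -c:v libx264 -crf 16 -preset slow -c:a aac cleaned.mp4
```

Filter order matters: deinterlacing before denoising avoids smearing the interlaced fields together, and sharpening runs last so it works on the cleaned image.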
**Stage 2 — Frame extraction** (AI mode only):
- Extracts every frame as a PNG into a temporary folder

**Stage 3 — Real-ESRGAN upscaling** (AI mode only):
- Runs the `realesr-animevideov3` model on each frame
- Default: 2× upscale (e.g. 640×480 → 1280×960)

**Reassembly:**
- Rebuilds the video from upscaled frames with the original audio

---
## Live Progress

The script shows live FFmpeg output. Watch for:

- `speed=3.5x` — processing at 3.5× realtime (good)
- `speed=0.5x` — slow, likely a very heavy filter load
- `corrupt decoded frame` — normal for damaged VHS files, FFmpeg will push through

---
## Troubleshooting

**Script hangs with no output**
Run with `--no-ai` first to confirm FFmpeg is working, then check that your Real-ESRGAN binary is executable (`chmod +x realesrgan-ncnn-vulkan`).

**Output looks blocky during motion**
The denoise values may still be too high for your footage. Edit the script and reduce `hqdn3d=2:1:2:2` to `hqdn3d=1:1:1:1`, or remove `hqdn3d` entirely — Real-ESRGAN handles noise well on its own.

**Colour looks over-saturated**
Reduce `saturation=1.8` in the filter chain to `saturation=1.4` or `1.2`.

**Real-ESRGAN not found**
Ensure the binary is in the same folder as the script and is executable, or pass the path explicitly: `./vhs_restore.sh -b /path/to/realesrgan-ncnn-vulkan`

**Error logs**
All FFmpeg and Real-ESRGAN logs are saved to `/tmp/` for diagnosis:
- `/tmp/ffmpeg_stage1.log`
- `/tmp/ffmpeg_extract.log`
- `/tmp/realesrgan.log`
- `/tmp/ffmpeg_reassemble.log`

---
## Workflow Recommendation

1. Run `--no-ai` first on one file to check the cleanup result
2. If it looks good, run the full pipeline on all files overnight
3. For heavily damaged footage, consider also running **CodeFormer** (face restoration) on top of the output — particularly effective if the video contains people

---

## Output

Restored files are saved to `./output/` as `<original_name>_restored.mp4` encoded as H.264 with AAC audio.
## vhs_restore.sh Script

```bash
#!/usr/bin/env bash
# =============================================================================
# vhs_restore.sh — Automated VHS Video Restoration Pipeline
# Stages: Deinterlace → Denoise → Colour correct → AI Upscale → Reassemble
#
# Changes from v1:
#   - Gentle hqdn3d (2:1:2:2) to prevent motion blocking/pixelation
#   - Aggressive colour correction for washed-out VHS footage
#   - Live FFmpeg progress shown in terminal (no silent hanging)
#   - Logs still saved to /tmp/ for error diagnosis
# =============================================================================
set -euo pipefail

# ── Colour output helpers ────────────────────────────────────────────────────
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
CYAN='\033[0;36m'; BOLD='\033[1m'; NC='\033[0m'
info()    { echo -e "${CYAN}[INFO]${NC} $*"; }
success() { echo -e "${GREEN}[OK]${NC} $*"; }
warn()    { echo -e "${YELLOW}[WARN]${NC} $*"; }
error()   { echo -e "${RED}[ERROR]${NC} $*" >&2; }
header()  { echo -e "\n${BOLD}${CYAN}══ $* ══${NC}"; }
# ── Default configuration ────────────────────────────────────────────────────
INPUT_DIR="./input"                        # Folder containing your source VHS videos
OUTPUT_DIR="./output"                      # Final restored videos land here
WORK_DIR="./work"                          # Scratch space (frames, temp files)
REALESRGAN_BIN="./realesrgan-ncnn-vulkan"  # Path to Real-ESRGAN binary
REALESRGAN_MODEL="realesr-animevideov3"    # Best model for home video
UPSCALE_FACTOR=2                           # 2x or 4x (4x is very slow on CPU)
OUTPUT_WIDTH=1920                          # Target width used in --no-ai mode
OUTPUT_HEIGHT=1080                         # Target height used in --no-ai mode
CRF=16                                     # Output quality 0-51, lower = better
PRESET="slow"                              # FFmpeg encode preset
SKIP_UPSCALE=false                         # --no-ai flag sets this true
KEEP_FRAMES=false                          # --keep flag sets this true

# ── Parse CLI flags ──────────────────────────────────────────────────────────
usage() {
  cat <<EOF
Usage: $(basename "$0") [options]

Options:
  -i DIR      Input directory (default: ./input)
  -o DIR      Output directory (default: ./output)
  -w DIR      Work/scratch dir (default: ./work)
  -b PATH     Path to realesrgan-ncnn-vulkan binary
  -s FACTOR   Upscale factor: 2 or 4 (default: 2)
  -q CRF      Output quality 0-51, lower=better (default: 16)
  --no-ai     Skip Real-ESRGAN; FFmpeg cleanup only (fast)
  --keep      Keep extracted frames after processing
  -h          Show this help

Examples:
  $(basename "$0") -i ~/Videos/VHS -o ~/Videos/Restored
  $(basename "$0") -i ~/Videos/VHS --no-ai      # Quick cleanup only
  $(basename "$0") -i ~/Videos/VHS -s 4 -q 18   # 4x upscale
EOF
  exit 0
}
while [[ $# -gt 0 ]]; do
  case "$1" in
    -i) INPUT_DIR="$2"; shift 2 ;;
    -o) OUTPUT_DIR="$2"; shift 2 ;;
    -w) WORK_DIR="$2"; shift 2 ;;
    -b) REALESRGAN_BIN="$2"; shift 2 ;;
    -s) UPSCALE_FACTOR="$2"; shift 2 ;;
    -q) CRF="$2"; shift 2 ;;
    --no-ai) SKIP_UPSCALE=true; shift ;;
    --keep) KEEP_FRAMES=true; shift ;;
    -h|--help) usage ;;
    *) error "Unknown option: $1"; usage ;;
  esac
done

# ── Dependency checks ────────────────────────────────────────────────────────
header "Checking dependencies"

check_cmd() {
  if command -v "$1" &>/dev/null; then
    success "$1 found"
  else
    error "$1 not found. Install with: $2"
    exit 1
  fi
}
check_cmd ffmpeg "sudo apt install ffmpeg"
check_cmd ffprobe "sudo apt install ffmpeg"
check_cmd bc "sudo apt install bc"

if [[ "$SKIP_UPSCALE" == false ]]; then
  if [[ ! -x "$REALESRGAN_BIN" ]]; then
    warn "Real-ESRGAN binary not found at: $REALESRGAN_BIN"
    echo
    echo -e "${YELLOW}To install Real-ESRGAN:${NC}"
    echo "  1. Download: https://github.com/xinntao/Real-ESRGAN/releases"
    echo "     -> realesrgan-ncnn-vulkan-*-ubuntu.zip"
    echo "  2. Unzip into this directory"
    echo "  3. chmod +x realesrgan-ncnn-vulkan"
    echo "  4. Re-run this script"
    echo
    echo "Or run with --no-ai for FFmpeg-only cleanup (no upscaling)."
    exit 1
  fi
  success "Real-ESRGAN found"
fi

# ── Locate input files ───────────────────────────────────────────────────────
header "Scanning input directory: $INPUT_DIR"

if [[ ! -d "$INPUT_DIR" ]]; then
  error "Input directory not found: $INPUT_DIR"
  exit 1
fi

mapfile -t VIDEO_FILES < <(find "$INPUT_DIR" -maxdepth 1 \
  -type f \( -iname "*.mp4" -o -iname "*.avi" -o -iname "*.mov" \
  -o -iname "*.mkv" -o -iname "*.mpg" -o -iname "*.mpeg" \
  -o -iname "*.wmv" -o -iname "*.m4v" -o -iname "*.ts" \) \
  | sort)

if [[ ${#VIDEO_FILES[@]} -eq 0 ]]; then
  error "No video files found in $INPUT_DIR"
  exit 1
fi

info "Found ${#VIDEO_FILES[@]} video file(s):"
for f in "${VIDEO_FILES[@]}"; do echo "  * $(basename "$f")"; done
# ── Helpers ──────────────────────────────────────────────────────────────────
|
||||
probe() {
|
||||
ffprobe -v error -select_streams v:0 \
|
||||
-show_entries "stream=$2" -of csv=p=0 "$1" 2>/dev/null | head -1
|
||||
}
|
||||
|
||||
human_time() {
|
||||
local s="${1%.*}"
|
||||
printf '%dh %dm %ds' $((s/3600)) $(( (s%3600)/60 )) $((s%60))
|
||||
}
|
||||
|
||||
# ── Create directories ───────────────────────────────────────────────────────
mkdir -p "$OUTPUT_DIR" "$WORK_DIR"

# ── Overall stats ────────────────────────────────────────────────────────────
TOTAL_FILES=${#VIDEO_FILES[@]}
PROCESSED=0
FAILED=0
PIPELINE_START=$(date +%s)

# ════════════════════════════════════════════════════════════════════════════
# MAIN LOOP
# ════════════════════════════════════════════════════════════════════════════
for INPUT_FILE in "${VIDEO_FILES[@]}"; do

    BASENAME=$(basename "$INPUT_FILE")
    STEM="${BASENAME%.*}"
    CLEANED="$WORK_DIR/${STEM}_cleaned.mp4"
    FRAMES_IN="$WORK_DIR/${STEM}_frames_in"
    FRAMES_OUT="$WORK_DIR/${STEM}_frames_out"
    FINAL_OUTPUT="$OUTPUT_DIR/${STEM}_restored.mp4"

    header "Processing: $BASENAME ($((PROCESSED+1))/$TOTAL_FILES)"
    FILE_START=$(date +%s)

    # ── Probe source ──────────────────────────────────────────────────────────
    FPS=$(probe "$INPUT_FILE" "r_frame_rate")
    FPS_DEC=$(echo "scale=3; $FPS" | bc 2>/dev/null || echo "25")
    WIDTH=$(probe "$INPUT_FILE" "width")
    HEIGHT=$(probe "$INPUT_FILE" "height")
    FIELD_ORDER=$(probe "$INPUT_FILE" "field_order")
    DURATION=$(ffprobe -v error -show_entries format=duration \
        -of csv=p=0 "$INPUT_FILE" 2>/dev/null | head -1)

    info "Source: ${WIDTH}x${HEIGHT} ${FPS_DEC}fps $(human_time "${DURATION%.*}") field_order=${FIELD_ORDER:-unknown}"

    # Always deinterlace for VHS -- safe even if not flagged as interlaced
    DEINTERLACE_FILTER="yadif=mode=1,"
    if [[ "$FIELD_ORDER" =~ ^(tt|tb|bt|bb)$ ]]; then
        info "Interlacing detected — applying yadif deinterlacer"
    else
        warn "Interlacing not confirmed by probe — applying yadif anyway (safe for VHS)"
    fi

    # ── Stage 1: FFmpeg cleanup ───────────────────────────────────────────────
    header "Stage 1/3 — FFmpeg cleanup & colour correction"
    info "Watch fps= and speed= for live progress."
    info "Corrupt frame warnings are normal for old VHS captures."
    echo

    if [[ "$SKIP_UPSCALE" == true ]]; then
        SCALE_FILTER="scale=${OUTPUT_WIDTH}:${OUTPUT_HEIGHT}:flags=lanczos,"
    else
        SCALE_FILTER=""
    fi

    # Filter chain notes:
    #   hqdn3d=2:1:2:2 -- gentle denoise; low temporal values (3rd/4th)
    #                     prevent the motion blocking seen with higher values
    #   unsharp        -- moderate sharpening to recover edge detail
    #   eq             -- aggressive colour boost for washed-out VHS
    #   colorbalance   -- corrects the green/yellow cast common in aged VHS
    VFILTER="${DEINTERLACE_FILTER}\
hqdn3d=2:1:2:2,\
unsharp=3:3:0.5:3:3:0.3,\
eq=contrast=1.2:brightness=0.05:saturation=1.8:gamma=1.1,\
colorbalance=rs=0.1:gs=0.0:bs=-0.1,\
${SCALE_FILTER}\
format=yuv420p"

    # NOTE: a pipeline's exit status is the last command's (grep), so this
    # check assumes `set -o pipefail` is enabled at the top of the script;
    # without it an FFmpeg failure could slip through unnoticed.
    if ! ffmpeg -y -i "$INPUT_FILE" \
        -vf "$VFILTER" \
        -c:v libx264 -crf 18 -preset medium \
        -c:a aac -b:a 192k -ac 2 \
        -stats \
        "$CLEANED" 2>&1 | tee /tmp/ffmpeg_stage1.log | \
        grep --line-buffered -E "(frame=|speed=|error|Error|Invalid)"; then
        error "FFmpeg stage 1 failed. Full log: /tmp/ffmpeg_stage1.log"
        FAILED=$((FAILED+1))
        continue
    fi

    echo
    success "Stage 1 complete -> $(du -sh "$CLEANED" | cut -f1)"

    if [[ "$SKIP_UPSCALE" == true ]]; then
        cp "$CLEANED" "$FINAL_OUTPUT"
        success "Output (no AI): $FINAL_OUTPUT"
        PROCESSED=$((PROCESSED+1))
        [[ "$KEEP_FRAMES" == false ]] && rm -f "$CLEANED"
        continue
    fi

    # ── Stage 2: Extract frames ───────────────────────────────────────────────
    header "Stage 2/3 — Extracting frames for AI upscaling"
    mkdir -p "$FRAMES_IN" "$FRAMES_OUT"

    FRAME_COUNT=$(ffprobe -v error -count_packets \
        -select_streams v:0 -show_entries stream=nb_read_packets \
        -of csv=p=0 "$CLEANED" 2>/dev/null | head -1)
    FRAME_COUNT=${FRAME_COUNT:-0}
    info "Extracting ~${FRAME_COUNT} frames..."

    if ! ffmpeg -y -i "$CLEANED" \
        -vsync 0 -stats \
        "$FRAMES_IN/frame%08d.png" 2>&1 | tee /tmp/ffmpeg_extract.log | \
        grep --line-buffered -E "(frame=|speed=|error|Error)"; then
        error "Frame extraction failed. Full log: /tmp/ffmpeg_extract.log"
        FAILED=$((FAILED+1))
        continue
    fi

    ACTUAL_FRAMES=$(find "$FRAMES_IN" -name "*.png" | wc -l)
    echo
    success "Extracted $ACTUAL_FRAMES frames"

    # ── Stage 3: Real-ESRGAN ──────────────────────────────────────────────────
    header "Stage 3/3 — Real-ESRGAN AI upscaling (${UPSCALE_FACTOR}x)"
    warn "Slow on CPU — est. $(echo "scale=0; $ACTUAL_FRAMES * 10 / 60" | bc)-$(echo "scale=0; $ACTUAL_FRAMES * 30 / 60" | bc) minutes"
    info "Upscaled frames will appear in: $FRAMES_OUT"
    echo

    UPSCALE_START=$(date +%s)
    # `tee` always exits 0, so catching a Real-ESRGAN failure here also
    # assumes `set -o pipefail` is enabled earlier in the script.
    if ! "$REALESRGAN_BIN" \
        -i "$FRAMES_IN" \
        -o "$FRAMES_OUT" \
        -n "$REALESRGAN_MODEL" \
        -s "$UPSCALE_FACTOR" \
        -f png 2>&1 | tee /tmp/realesrgan.log; then
        error "Real-ESRGAN failed. Full log: /tmp/realesrgan.log"
        FAILED=$((FAILED+1))
        continue
    fi

    UPSCALE_END=$(date +%s)
    UPSCALE_ELAPSED=$((UPSCALE_END - UPSCALE_START))
    success "AI upscaling complete in $(human_time $UPSCALE_ELAPSED)"

    # ── Reassemble ────────────────────────────────────────────────────────────
    REASSEMBLE_FPS=$(ffprobe -v error -select_streams v:0 \
        -show_entries stream=r_frame_rate \
        -of csv=p=0 "$CLEANED" 2>/dev/null | head -1)

    info "Reassembling video from upscaled frames..."
    echo

    # `-map 1:a?` marks the audio map optional, so a source with no audio
    # track doesn't abort the reassembly
    if ! ffmpeg -y \
        -framerate "$REASSEMBLE_FPS" \
        -i "$FRAMES_OUT/frame%08d.png" \
        -i "$CLEANED" \
        -map 0:v -map 1:a? \
        -c:v libx264 -crf "$CRF" -preset "$PRESET" \
        -c:a copy \
        -movflags +faststart \
        -stats \
        "$FINAL_OUTPUT" 2>&1 | tee /tmp/ffmpeg_reassemble.log | \
        grep --line-buffered -E "(frame=|speed=|error|Error)"; then
        error "Reassembly failed. Full log: /tmp/ffmpeg_reassemble.log"
        FAILED=$((FAILED+1))
        continue
    fi

    # ── Cleanup ───────────────────────────────────────────────────────────────
    if [[ "$KEEP_FRAMES" == false ]]; then
        rm -rf "$FRAMES_IN" "$FRAMES_OUT" "$CLEANED"
        info "Scratch files cleaned up"
    else
        info "Frames kept in: $FRAMES_IN / $FRAMES_OUT"
    fi

    FILE_END=$(date +%s)
    FILE_ELAPSED=$((FILE_END - FILE_START))
    PROCESSED=$((PROCESSED+1))

    OUT_SIZE=$(du -sh "$FINAL_OUTPUT" | cut -f1)
    echo
    success "Done: $FINAL_OUTPUT"
    info " File size : $OUT_SIZE"
    info " Time taken: $(human_time $FILE_ELAPSED)"

done

# ════════════════════════════════════════════════════════════════════════════
# Final summary
# ════════════════════════════════════════════════════════════════════════════
PIPELINE_END=$(date +%s)
PIPELINE_ELAPSED=$((PIPELINE_END - PIPELINE_START))

header "Pipeline Complete"
echo -e " ${GREEN}Processed : $PROCESSED / $TOTAL_FILES${NC}"
[[ $FAILED -gt 0 ]] && echo -e " ${RED}Failed    : $FAILED${NC}"
echo -e " Total time: $(human_time $PIPELINE_ELAPSED)"
echo -e " Output dir: $OUTPUT_DIR"
echo

if [[ $PROCESSED -gt 0 ]]; then
    echo "Restored files:"
    find "$OUTPUT_DIR" -name "*_restored.mp4" | while read -r f; do
        SIZE=$(du -sh "$f" | cut -f1)
        echo " * $(basename "$f") ($SIZE)"
    done
fi
```