Welcome to Doom9's Forum, THE place to be for everyone interested in DVD conversion.

Old 11th August 2017, 03:40   #1  |  Link
lansing
Registered User
 
Join Date: Sep 2006
Posts: 806
Building a montage clip for color correcting Dragon Ball Z

I'm currently working on my own project of color correcting Dragon Ball Z. There are a few parts to it, but the most tedious work is building a montage for each scene for better color-matching accuracy, and that's why I wrote this script.

This script is basically a copy of what this guy did here. It takes in a video, generates a montage for every scene, and appends them all together into one montage clip.

Download


usage
Code:
import vapoursynth as vs
import dbz_colormatch as dbzcm

core = vs.get_core()

clip = core.ffms2.Source("kai_001.mp4")

scenechange_file = r"kai_001.txt"
scenechange_skip_file = r"kai_001_skip.txt"

clip = dbzcm.BuildMontage(clip, scenechange_file, scenechange_skip_file, frame_per_scene=9)
scene change text format:
Code:
100, 200, 300, 400
Each number is the frame number of the starting frame of a scenechange.

The scenechange skip file defaults to None. It is mainly used for Dragon Ball Z, to avoid processing the title scene/midshow intro. If the skip list contains frame 200, frames 200-299 will be skipped.


frame_per_scene: number of reference frames per montage, with a fixed display pattern. Default=9, max=16.
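For reference, here is a minimal sketch of a parser for these text files. The function name is my own, and the tolerance for both comma-separated and one-number-per-line input is an assumption, not something the script promises:

```python
def read_scenechanges(text):
    """Parse a scene-change frame list, tolerating commas and/or newlines as separators."""
    return [int(tok) for tok in text.replace(",", " ").split()]

# e.g. with open("kai_001.txt") as f:   # hypothetical file from the usage above
#          scenes = read_scenechanges(f.read())
```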


2017-08-12: updated script
2017-09-06: updated script
2017-12-05: updated script, separated the montage builder function from the main dbz processing script.
2017-12-12: renamed module to all lowercase; added helper functions reorder_frames and replace_frames
2017-12-15: removed mode 1 because it gave poor results most of the time; simplified everything down to a single frame_per_scene number.

Last edited by lansing; 15th December 2017 at 08:52. Reason: update script
Old 12th August 2017, 19:28   #2  |  Link
lansing
I updated the script in the first post. I removed some redundant code and switched most of the approach to numeric calculation rather than touching the clip, but the script is still slow and I don't know why. I tested it on a 7-minute 720x416 video and it took 30 seconds to load the preview in the editor.
Old 14th August 2017, 21:07   #3  |  Link
cwk
Registered User
 
Join Date: Jan 2004
Location: earth, barely
Posts: 87
Is your updated script slow, or just slow to load?

I believe the ffms2 plugin creates an index for a media file when it sees it for the first time. Perhaps the slow preview load time is in index generation.
Old 14th August 2017, 21:41   #4  |  Link
lansing
Quote:
Originally Posted by cwk View Post
Is your updated script slow, or just slow to load?

I believe the ffms2 plugin creates an index for a media file when it sees it for the first time. Perhaps the slow preview load time is in index generation.
I learned that the slowdown was caused by the many iterations I had, which is completely the wrong way to do this. I shouldn't be using a for loop; I need to rewrite the whole approach in a functional style.

I'll update the script in a few days, once I've learned it.
Old 15th August 2017, 09:00   #5  |  Link
splinter98
Registered User
 
Join Date: Oct 2010
Posts: 36
You're close. The biggest slowdown is actually this line here:

Code:
for n in range(c.num_frames):
    frame = c.get_frame(n)
This forces VapourSynth to render each frame during the loading stage in order to produce the data. It also serialises the generation of
Code:
core.wwxd.WWXD(clip)
and makes it effectively single threaded.

You could try using get_frame_async and generate all the frames upfront, but using this naively requires much more memory. (I haven't quite worked out a good recipe for generating and consuming this just yet)
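The bounded-window idea can be sketched in plain Python. Note that this uses a stand-in `render` function rather than the real `get_frame_async` call, so it only illustrates the scheduling pattern, not the actual VapourSynth API:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def render(n):
    """Stand-in for an expensive per-frame computation (e.g. get_frame)."""
    return n * n

def frames_in_order(num_frames, window=4):
    """Yield results in frame order while keeping at most `window` renders in flight."""
    with ThreadPoolExecutor(max_workers=window) as pool:
        # prime the window, then top it up as results are consumed
        pending = deque(pool.submit(render, n) for n in range(min(window, num_frames)))
        next_n = len(pending)
        while pending:
            yield pending.popleft().result()
            if next_n < num_frames:
                pending.append(pool.submit(render, next_n))
                next_n += 1
```

Only `window` renders are ever in flight, so memory stays bounded while the workers run in parallel.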

The rest of the for loops should actually be reasonably fast (but there are plenty of optimisations you can do there too).
Old 17th August 2017, 02:37   #6  |  Link
lansing
Quote:
Originally Posted by splinter98 View Post
You're close. The biggest slowdown is actually this line here: [...]
Ok, so after a couple of days of learning and relooking at my script, I can't think of a way to run it without getting the list of key frames first; that list has to be created up front. Because my script takes an input and changes it to a different length and dimensions, all the VapourSynth functions that speed things up, like FrameEval, aren't the right tools for me.

And so I did some speed tests with the frame list alone, and compared that to the list generated by looping over the whole clip.

frame list only: 12 seconds for 10000 frames
getting the frame list via the loop: 30 seconds for 10000 frames


So with a 24-minute episode of Dragon Ball Z averaging 350 key frames, if I can use just the keyframe (scene change) list, my script would open in less than 1 second.

So the solution to my problem is to have WWXD output the scene change list; there's really no other optimization that matters.
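A sketch of what that output step might look like, assuming WWXD's per-frame scene-change flags have already been collected into a plain list; the function name and the one-number-per-line format are my own assumptions:

```python
def scenechange_text(flags):
    """Format flagged frame numbers in a one-number-per-line text format."""
    return "\n".join(str(i) for i, f in enumerate(flags) if f)

# e.g. with open("kai_001.txt", "w") as f:
#          f.write(scenechange_text(flags))
```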

Last edited by lansing; 17th August 2017 at 04:28. Reason: added descriptions
Old 6th September 2017, 08:23   #7  |  Link
lansing
Updated the script in the first post. I took out the use of scene change detection filters completely because they're very slow and the quality is poor. The only good one I found is ffmsindex, but it doesn't work for ts files.

The script now relies solely on a user-supplied text file for scene changes; the input format is:
Code:
0
164
600
In addition, I added a delete-frame-range function and a curve reader to the script, both necessary for a better project workflow. The delete-frame-range function takes a cut list from a text file and deletes ranges of frames accordingly:
Code:
6 600
1000 1200
4000 5000
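The delete-frame-range idea can be sketched in pure Python on a list of frame numbers. The real function presumably splices the clip itself; `apply_cuts` and the end-exclusive range convention are my assumptions:

```python
def apply_cuts(num_frames, cuts):
    """Keep only the frames outside every [start, end) cut range."""
    return [n for n in range(num_frames)
            if not any(start <= n < end for start, end in cuts)]

# A cut line like "1000 1200" would parse to the tuple (1000, 1200).
```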
Old 5th December 2017, 11:28   #8  |  Link
lansing
Updated first post: the montage building function is now separated into its own script. Read the update in the first post. It should be generic enough to work on clips other than Dragon Ball Z.
Old 6th December 2017, 00:14   #9  |  Link
lansing
This is the whole script for building a montage clip for the Dragon Ball Z color matching. I added a few switches in the top controller to make it more user friendly.

Also, I updated the DBZColorMatch_func file so that it can now work without a scene change file.

Code:
import vapoursynth as vs
from functools import reduce, partial
from itertools import zip_longest, starmap
import DBZColorMatch_func as dbzf

############################################################################################################################
## 			Montage building script for Dragon Ball Z Level/Season Blu-ray Color Matching
##
##  Build a montage clip for both color reference video (Level/Season) and target video (Kai), switch at the
##  is_clip_color_ref parameter
##  
##  The script will first take a list of clip(s), apply cropping and user-defined denoise and curve adjustments to each, then 
##  append them together. The clip will then be spliced according to all the splicing text files.
## 
##  The resulting clip will be passed to the montage builder function to build a montage clip.
##
##  Setting output_img to True will output the montage clip to image sequence.
##
##  Basic Setting:
##  clip_paths, profile_paths, preset_paths, curve_paths:
##				Take lists as input; positions in each list correspond to one another. Leave a list empty to disable it, or use 
## 				curve_paths = [None, r"001.acv"] to apply a curve only to the 2nd clip.
##
##	is_clip_color_ref:
##				If set to false, all the splicing process will be skipped. (for DBZ Kai)
##
##  do_denoise: True to turn on, False to turn off
##				The DBZ Level/Season clips need denoising for better color accuracy. Denoisers used are NeatVideo and 
##				KNLMeansCL. Set up profile_paths to use NeatVideo; leave it empty to use KNLMeansCL. 
##  			NeatVideo is the only denoiser that can remove the grain in the Level sets. Use KNLMeansCL for the Season BDs.
##
##	use_neatvideo:
##				Trim out the first extra frame created by the filter		
############################################################################################################################

### Controller ###
clip_paths = [r"001.ts", r"002.ts"]

# neatvideo config files
profile_paths = [r"dbz01.dnp", r"dbz02.dnp"]
preset_paths = [r"dbz01.nfp", r"dbz02.nfp"]

# curve files, acv extension
curve_paths = [None, r'curve_level2kai_002.acv']

# True for the color reference clip (Level/Season), False for the target clip (Kai/Dragon Box). 
# Setting this to False disables all cut/scenechange files
is_clip_color_ref = False 

# color reference clip splicing config
pad_op = 4563 # frames to pad op so the clips match
reorder_file = r'reorder_level2kai_001.txt'
cut_file = r'cut_level2kai_001.txt'
dupe_file = r'dupe_level2kai_001.txt'

# compose clip config
cropside = False # crop 240 from left and right of frame
do_denoise = False
use_neatvideo = False # trigger the preroll frame adjust

# For montage builder
scenechange_file = r"scenechange_kai_001.txt"
scenechange_skip_file = r'scenechange_kai_001_skip.txt'

# output image sequence
output_img = False
img_dir = r"F:\temp_img_w\test_%06d.png"

###################
core = vs.get_core(accept_lowercase=True)

# load avs plugin
core.avs.LoadPlugin(r"C:\Program Files (x86)\AviSynth+\plugins64+\VDubFilter.dll")
core.avs.LoadVirtualdubPlugin(r'NeatVideo.vdf', 'NeatVideo', 1)

# compose clip with filters
def compose_clip(clip_path, profile, preset, curve, cropside=False, denoise=False):

	if clip_path:
		clip = core.ffms2.Source(clip_path)
	
		if cropside:
			clip = core.std.Crop(clip, 240, 240, 0, 0)
		
		if denoise:
			if profile: 
				# if profile exist, use neatvideo
				clip = core.resize.Bicubic(clip, matrix_in_s="709", format=vs.COMPATBGR32)
				clip = core.avs.NeatVideo_2(clip, profile, preset, 1, 1, 0, 0)
			else:
				# use knlmeanscl
				clip = core.knlm.KNLMeansCL(clip, d=1, a=1, s=1, h=1)
		
		if curve:
			clip = core.resize.Bicubic(clip, matrix_in_s="709", format=vs.RGB24)		
			clip = core.grad.Curve(clip, curve, ftype=2, pmode=1)
			clip = core.resize.Bicubic(clip, matrix_s="709", format=vs.YUV420P8)
		return clip

def add_clip(a, b):
	return a + b if b else a # skip clip b if it doesn't exist

# partial function of "composer_clip" function that takes user input switch
compose_clip_ctrler = partial(compose_clip, cropside=cropside, denoise=do_denoise)

# load and append clips
clip = reduce(add_clip, starmap(compose_clip_ctrler, zip_longest(clip_paths, profile_paths, preset_paths, curve_paths)))

	
# apply padding for level/season
if is_clip_color_ref:
	padding_clip = core.std.BlankClip(clip, length=pad_op)
	clip =  padding_clip + clip

	# rearrange order, first episode only?
	if reorder_file:
		reorder_list = []
	
		with open(reorder_file) as f:
		    for line in f:
		        start, end = (int(x.strip()) for x in line.split())
		        reorder_list.append(clip[start:end])

		clip = core.std.Splice(reorder_list)

	clip = clip[1:-1] if use_neatvideo else clip # adjust frames for preroll

	# apply cut list
	clip = dbzf.apply_cut(clip, cut_file)

	# apply dupe list
	if dupe_file:
		dupe_list = dbzf.read_dupe_file(dupe_file)
		clip = core.std.DuplicateFrames(clip, list(dupe_list))


# build montage sequence from scene change list
final_clip = dbzf.BuildMontage(clip, scenechange_file, scenechange_skip_file, per_row=4, mode=2, per_scene=9)

### output montage clip as image sequence ###
if output_img:
	final_clip = core.resize.Bicubic(final_clip, matrix_in_s="709", format=vs.RGB24)	
	imw = core.imwrif.Write(final_clip, imgformat="PNG", filename=img_dir ,firstnum=1)

	imw.set_output()
else:
	final_clip.set_output()
The script works in my tests. The only issue is with the NeatVideo function naming that I reported last week: I couldn't put it inside a loop (I've disabled it for now) because that would mean calling the function more than once, which crashes the program.
Old 13th December 2017, 05:09   #10  |  Link
lansing
Updated the script in the top post.

And I ran into a performance issue with the build-montage function, mainly from the many calls to StackHorizontal and StackVertical.

Code:
clip = core.ffms2.Source("hi.mp4")
frame_list = range(300)

frame_counter = 0
change_row = False

for n in frame_list:
    frame_counter += 1
    
    if frame_counter == 1:
        row = clip[n]
    elif frame_counter == per_row:
        row = core.std.StackHorizontal([row, clip[n]])
        change_row = True

    ...
I create the montage by looping through a list of frame numbers and stacking them one at a time with StackHorizontal and StackVertical. But this slows the script's instantiation time to 6 seconds, and each seek in the preview cost 80MB of RAM...

I have had a similar performance issue with looping and concatenating clips before:
Code:
clip = core.ffms2.Source("hi.mp4")
 
frame_list = range(300)
clip_list = []

for n in frame_list:
    clip_list += clip[n:600]
The solution to that was to use append instead of "+=":
Code:
for n in frame_list:
    clip_list.append(clip[n:600])
But I don't know how to do that for StackHorizontal/StackVertical. VapourSynth will flush memory when the program hits 4.5GB, so there's no worry that it will run out of memory, but I just want to know: is there a more efficient way to stack the frames together?
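The append-vs-"+=" difference is easy to see in plain Python: "+=" on a list extends it element by element, so `clip_list += clip[n:600]` presumably iterates the clip (likely forcing frames to be evaluated), while append just stores the slice object whole. A minimal illustration:

```python
a = []
a += [1, 2, 3]        # += extends: each element is added one by one
b = []
b.append([1, 2, 3])   # append stores the whole object as a single element

print(a)  # [1, 2, 3]
print(b)  # [[1, 2, 3]]
```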
Old 13th December 2017, 15:17   #11  |  Link
Myrsloik
Professional Code Monkey
 
 
Join Date: Jun 2003
Location: Ikea Chair
Posts: 1,635
Quote:
Originally Posted by lansing View Post
And I ran into a performance issue with the build montage function, mainly with the many calls to stackhorizontal and stackvertical. [...] is there a more efficient way to stack the frames together?
Incomplete script, so I can't even figure out exactly what you're doing. You are passing a list of all the clips in a row at once, right? That'll reduce the number of intermediate frames. 80MB sounds quite reasonable for a 300-frame montage...
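A sketch of the suggestion above: group the frame list into rows first, then make one StackHorizontal call per row and one StackVertical call for the whole grid, instead of stacking pairwise (the helper name is mine, not from the script):

```python
def chunk_rows(frame_list, per_row):
    """Split a flat list of frame numbers into rows of at most per_row entries."""
    return [frame_list[i:i + per_row] for i in range(0, len(frame_list), per_row)]

# Hypothetical use with VapourSynth (not runnable here):
#   rows = chunk_rows(frames, per_row)
#   row_clips = [core.std.StackHorizontal([clip[n] for n in row]) for row in rows]
#   montage = core.std.StackVertical(row_clips)
```

Pairwise stacking creates an intermediate clip per frame; passing each full row at once keeps the filter graph shallow.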
__________________
VapourSynth - proving that scripting languages and video processing isn't dead yet
Old 13th December 2017, 15:53   #12  |  Link
lansing
Quote:
Originally Posted by Myrsloik View Post
Incomplete script so can't even figure out exactly what you're doing. You are passing a list of all the clips in a row at once, right? That'll reduce the number of intermediate frames. 80MB sounds quite reasonable for a 300 frame montage...
The script is in my first post; the stacking part is at the end of the BuildMontage function. The number of frames to stack per montage is 9, with a max of 16. Sorry for the confusion.
Old 15th December 2017, 08:30   #13  |  Link
lansing
Updated first post; it should be final unless there's a bug.
Tags
color correction, dragon ball z, montage
