Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 26th November 2017, 14:32   #1  |  Link
pcroland
Registered User
 
Join Date: Mar 2014
Location: Hungary
Posts: 59
Repairing weird (NTSC -> PAL ?) blend

Hi!

I'm trying to restore the "The Raccoons" DVDs. Here's a 1-minute clip:
https://mega.nz/#!OFZ0lBIb!k1bLutjNX...9_bm21iReNBLyY

I managed to remove the noise, remove the strong halo, and fix the borders, but I haven't been able to do anything about the blending so far.

QTGMC(Preset="Medium").SelectOdd() gives me 25fps, but roughly every second frame is a blended frame, and the pattern shifts constantly.

I tried AnimeIVTC(); it gives me 20fps, and there are still a lot of blended frames.

Would it be possible to somehow detect the blended frames and replace each one with whichever of the previous or next frames differs from it less? I tried SRestore(omode=3), but it didn't work. I don't mind having some duplicate frames here and there, but I want to get rid of the blended ones.
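To make the idea concrete, here is a toy pure-Python sketch (not a working AviSynth/VapourSynth filter) of that replacement scheme: flag a frame as blended when it looks like an average of its neighbours, then substitute whichever neighbour differs from it least. Frames are modelled as flat lists of luma values; the blend test and the `tol` threshold are hypothetical stand-ins for a real metric.

```python
def frame_diff(a, b):
    """Sum of absolute luma differences between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b))

def looks_blended(prev, cur, nxt, tol=8):
    """Heuristic: a blend of prev and nxt should sit close to their average."""
    avg = [(p + n) / 2 for p, n in zip(prev, nxt)]
    return frame_diff(cur, avg) <= tol * len(cur)

def replace_blends(frames, tol=8):
    out = list(frames)
    for i in range(1, len(frames) - 1):
        prev, cur, nxt = frames[i - 1], frames[i], frames[i + 1]
        if looks_blended(prev, cur, nxt, tol):
            # Substitute the closer neighbour, accepting a duplicate frame.
            out[i] = prev if frame_diff(cur, prev) <= frame_diff(cur, nxt) else nxt
    return out
```

In a real script the same selection would be driven by per-frame metrics from the deinterlacer's output rather than raw pixel lists.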
Old 29th November 2017, 15:20   #2  |  Link
kriNon
Registered User
 
Join Date: Jul 2016
Posts: 38
Hmm, sounds like you might be dealing with an issue similar to the one I'm currently trying to fix. Although your source is probably different from mine, instead of using SelectOdd to get 25fps, I've been looking for a better metric for measuring how much blending a frame has, and then selecting odd or even frames based on which has less blending.

What I've been doing is comparing the edge masks of both frames and choosing whichever frame has the lower average luma; however, this method only has a ~70% success rate on my source. The logic is that the blended frame should contain more edges, since it carries the lines from the next/previous frame too. It tends to have issues when the differences between frames are small, and it only really works for frames with huge amounts of blending. What I did notice, though, is that srestore(omode=3) works far better once the clip has been decimated properly.
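The edge-mask comparison above can be illustrated with a minimal sketch. This is not the actual script: frames are modelled as rows of luma values, and a simple horizontal gradient sum stands in for a real edge mask; the idea is only that a blended frame, containing edges from two source frames, yields a "brighter" mask.

```python
def edge_energy(frame):
    """Sum of absolute horizontal gradients over all rows (toy edge mask)."""
    return sum(abs(row[i + 1] - row[i])
               for row in frame for i in range(len(row) - 1))

def pick_cleaner(frame_a, frame_b):
    """Return the candidate frame whose edge mask has the lower total luma."""
    return frame_a if edge_energy(frame_a) <= edge_energy(frame_b) else frame_b
```

With a clean frame containing one sharp edge and a blended frame containing ghosted edges from two frames, `pick_cleaner` selects the clean one.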

I'm struggling to come up with logic that reliably removes the blended frame, because it's hard to find meaningful metrics that don't create too many false positives. I don't really want to compare frames against the previous/next two fields, because it's likely that at least one of those is itself blended; that means I need to measure blending without knowing which frame the blend came from, which makes things a little more difficult. I've found that decimating with SRestore gives better results on frames with very little blending, while my method does better on frames with a lot of blending. Another thing I've noticed is that I get a mixture of good and bad frames in a pattern that's typically "GBGBBGBGBG"; however, it resets whenever the scene changes, and there is some variance, so the pattern doesn't always hold true.

AnimeIVTC is giving you 20fps because it decimates the clip, which your source probably doesn't need, unless of course it was produced at 20fps, which seems unlikely. Keep in mind that I haven't actually had a chance to look at your clip, so my advice is general; hopefully I've been able to help a bit though!
Old 1st December 2017, 22:38   #3  |  Link
pcroland
Registered User
 
Join Date: Mar 2014
Location: Hungary
Posts: 59
Could you show your script? What you mentioned would also work great in my case: deciding, for each pair of frames, which one has more blending, except I would replace that frame with the other one instead of removing it, because the video has parts where the "camera" zooms in and it's actually 25fps. So it would also need a way to tell the edge-mask comparison not to replace a frame when the difference between the two is too big.
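A hypothetical sketch of that pairwise replacement with a safety threshold (plain Python lists standing in for frames; `blend_score` is any per-frame blending metric, such as an edge-mask average): within each pair, overwrite the more blended frame with the other, but skip the pair entirely when the frames differ too much, since that indicates genuine 25fps motion like a zoom.

```python
def frame_diff(a, b):
    """Sum of absolute luma differences between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b))

def dedup_pairs(frames, blend_score, max_diff):
    """Replace the more blended frame of each pair, unless the pair
    differs by more than max_diff (i.e. real motion, not a blend)."""
    out = list(frames)
    for i in range(0, len(frames) - 1, 2):
        a, b = frames[i], frames[i + 1]
        if frame_diff(a, b) > max_diff:
            continue  # genuine 25fps content (e.g. a zoom): keep both frames
        if blend_score(a) > blend_score(b):
            out[i] = b
        else:
            out[i + 1] = a
    return out
```

The `max_diff` value would have to be tuned against the source; too low and blends slip through, too high and real motion gets flattened into duplicates.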
Old 2nd December 2017, 02:02   #4  |  Link
SaurusX
Registered User
 
Join Date: Feb 2017
Posts: 65
I believe you are talking about srestore(omode=3).
Old 3rd December 2017, 01:35   #5  |  Link
manono
Moderator
 
Join Date: Oct 2001
Location: Hawaii
Posts: 7,249
It's been field-blended from an NTSC source, but there's way too much field blending to get a clean result. Anyway, try this:

Yadif(Mode=1) # or the better QTGMC
SRestore()


For unblenders to work, they depend on one field of each frame being 'clean'. You just don't have that here.
Old 5th December 2017, 01:13   #6  |  Link
kriNon
Registered User
 
Join Date: Jul 2016
Posts: 38
Hey, sorry to get back to you late.
Yeah, I'm happy to share my script, but it's actually written in VapourSynth, not Avisynth. It's also still very unfinished; I'm working on improving it, but it has pretty bad performance and it still isn't perfect. Nonetheless, if you want to try porting it to Avisynth, feel free; I'm sure the same general idea could be achieved. I really want to stress that it's super unfinished, just the result of me messing around, so not only is the code a bit of a mess, but there are sections that either don't do anything anymore, don't work as intended, etc.

Here's my current script: https://pastebin.com/XAaAjain
Currently I'm trying to implement pattern detection: I have a few preset patterns it can detect, and it chooses the best pattern for each scene. Ideally, though, the pattern detection would be dynamic, with the pattern adapted to fit the surrounding 20 or so frames; I have no idea how I would go about that. Currently it can detect all 5 TTTBB and all 5 BBXTT combinations, as I found they were the most common in my source. Here T means top field, B means bottom field, and X means neither field is good, so the previous good field is repeated (and hopefully removed later in decimation).
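A rough Python sketch of that per-scene preset matching, under assumptions not from the actual script: each frame has a precomputed blending score for its top and bottom field (lower is better), an X position costs a fixed penalty since the frame gets repeated and later decimated, and the best preset is simply the one with the lowest total cost over the scene.

```python
from itertools import cycle

# The 5 rotations of TTTBB plus the 5 rotations of BBXTT, as described above.
PRESETS = ["TTTBB", "TTBBT", "TBBTT", "BBTTT", "BTTTB",
           "BBXTT", "BXTTB", "XTTBB", "TTBBX", "TBBXT"]

X_PENALTY = 100.0  # assumed cost of repeating the previous good field

def pattern_cost(pattern, scene_scores):
    """Total blending cost of decoding a scene with a given field pattern.

    scene_scores is a list of dicts like {'T': 0.3, 'B': 7.1}, one per frame,
    where lower means less blending in that field.
    """
    total = 0.0
    for choice, scores in zip(cycle(pattern), scene_scores):
        total += X_PENALTY if choice == "X" else scores[choice]
    return total

def best_pattern(scene_scores):
    """Pick the preset pattern with the lowest total cost for this scene."""
    return min(PRESETS, key=lambda p: pattern_cost(p, scene_scores))
```

Dynamic detection would presumably replace the fixed `PRESETS` list with a sliding-window search, re-scoring patterns over the surrounding ~20 frames.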

If you have any questions, feel free to ask! I'll try to answer them to the best of my ability. The script was working at around 80% accuracy without pattern detection, so implementing that should still be better than nothing.
Old 6th December 2017, 08:52   #7  |  Link
kriNon
Registered User
 
Join Date: Jul 2016
Posts: 38
I did some more testing, and I've found that one of the best ways to do the decimation is actually to create an MVTools motion mask of the clip and then return whichever field's mask has the lower average brightness. I found that using a pel of 4, a block size of 4, and making sure the mask uses kind=1 works very well. This should be good enough for your usage.
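The selection rule itself can be sketched in plain Python (this is only an illustration of the logic; the masks here are flat lists standing in for the MVTools motion-mask output that a real script would generate with the parameters above): for each output position, keep whichever candidate field has the lower average mask value, since a blended frame produces more apparent motion and hence a brighter mask.

```python
def mask_average(mask):
    """Mean brightness of a motion mask (modelled as a flat list)."""
    return sum(mask) / len(mask)

def select_fields(odd_frames, even_frames, odd_masks, even_masks):
    """Per position, keep the candidate whose motion mask is darker."""
    out = []
    for of, ef, om, em in zip(odd_frames, even_frames, odd_masks, even_masks):
        # Blending shows up as extra apparent motion, i.e. a brighter mask.
        out.append(of if mask_average(om) <= mask_average(em) else ef)
    return out
```
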