Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion. Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules. |
18th March 2016, 23:52 | #1 | Link |
Registered User
Join Date: Feb 2015
Posts: 34
|
Converting 25i back to original 29.97i
I have a PAL DVD at 25i of a show that was shot on 29.97i video and then converted. Is there a way to convert it back to 29.97i? Most things I could find were about anime DVDs that had this kind of conversion, but mine is live-action content.
Clip https://www.sendspace.com/file/sqfvsp Last edited by gunfelix; 20th March 2016 at 23:10. |
19th March 2016, 21:27 | #3 | Link |
Registered User
Join Date: Feb 2015
Posts: 34
|
The video jumps up and down every other frame, even when deinterlacing.
Also, if I were to deinterlace it to 29.97p instead, should I remove the last two steps and change changefps to 30000,1001, or add QTGMC().SelectEven() at the end of that script? |
20th March 2016, 04:44 | #4 | Link | |||
Registered User
Join Date: Jul 2011
Location: Tennessee, USA
Posts: 266
|
Quote:
Converted to PAL using what software? What method? Isn't it already screwed up enough? This PAL conversion was made by someone who was either just your average idiot or extremely clever and malicious.

You'd have to break down all the former telecine, phony interlacing, blend effects, and duplicate frames; reduce the video to its original 23.976 progressive layout; resize to 720x480; then add pulldown during encoding for 29.97 playback. Chances are it will never be entirely smooth, and the blending effects aren't going away. And don't forget to look everywhere for missing frames.

I assume you don't have the original DVD. That would be a shame. Three lossy encodes are not better than one or two.

Quote:
Quote:
That's enough for me, folks. Someone with more patience has to move in from here. Last edited by LemMotlow; 20th March 2016 at 13:35. |
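For what it's worth, the pulldown arithmetic behind that "reduce to 23.976, then add pulldown for 29.97 playback" workflow can be sanity-checked outside any video tool. This is only an illustrative Python sketch of the rate math, not part of any restoration script:

```python
from fractions import Fraction

film = Fraction(24000, 1001)     # 23.976 progressive film rate
# 2:3 pulldown: every 4 film frames become 10 fields, i.e. 5 interlaced frames
ntsc = film * Fraction(5, 4)
print(ntsc)         # 30000/1001
print(float(ntsc))  # ~29.97
```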
|||
20th March 2016, 12:10 | #5 | Link | |
Registered User
Join Date: May 2006
Posts: 3,997
|
Quote:
Code:
assumeTFF()
bob()  # or another bobber, e.g. QTGMC()
tdecimate(cycle=5,cycleR=2)
changefps(30000,1001)

or this one:

Code:
assumeTFF()
bob()
tdecimate(cycle=5,cycleR=2)
changefps(60000,1001)
separatefields()
selectevery(4,0,3).weave()

Well, as has been mentioned: garbage in - garbage out ;-)

Last edited by Sharc; 20th March 2016 at 13:35. |
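The frame-rate arithmetic behind the first script can be sanity-checked outside Avisynth. A rough Python sketch, purely for illustration:

```python
from fractions import Fraction

# bob() doubles 25i to 50 frames/s (one frame per field);
# tdecimate(cycle=5, cycleR=2) keeps 3 of every 5 frames;
# changefps(30000,1001) then conforms the result to NTSC 29.97.
bob_rate = Fraction(25, 1) * 2           # 50
decimated = bob_rate * Fraction(3, 5)    # 30
ntsc = Fraction(30000, 1001)
print(decimated)      # 30
print(float(ntsc))    # ~29.97
```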
|
20th March 2016, 18:03 | #6 | Link |
Registered User
Join Date: Feb 2015
Posts: 34
|
It's a PAL Disney DVD from a show shot on videotape in the US.
Another clip: https://www.sendspace.com/file/tiglp5 |
20th March 2016, 20:53 | #9 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
I think it may be possible to get this looking really good, but it will involve using motion estimation to synthesize the missing fields.
If you use this diagnostic script: Code:
assumeTFF()
odd=separatefields().selectodd()
even=separatefields().selecteven()
stackvertical(odd,even)

To achieve the decimation needed for 25 interlaced ("50i"), one out of every six fields (fields, not frames) was dropped. This too is obvious when walking frame-by-frame through the second clip using this script, because you can easily see the big jumps in motion every sixth field.

However -- and this is what makes it a little bit of a challenge to restore -- when a field was dropped during the original 60i --> 50i conversion, the next set of five fields had to maintain the same spatial order. Put another way, you still want to see each field in the correct spatial order, so that you see the "up/down" motion when you bob the clip. The people who did this excellent NTSC to PAL conversion did that.

To see how they did it, you can compare the moving foreground motion against the static background. What you will find, if you compare the two fields using my diagnostic script, is that the fields match in time for two frames in a row, followed by three frames where they are from different points in time: the foreground motion overlaps the background at exactly the same point for two frames, followed by three frames where the two fields overlap the background at different places.

You can also see this if you simply look at the original interlaced video: during periods of high horizontal motion you will see three frames in a row that show standard interlaced combing (when viewed frame-by-frame on a computer display without deinterlacing), followed by two frames which show no combing. What fools people when dealing with this sort of thing is that, if you only look at the video using a full-frame approach, it looks as if those two frames which show no combing must be progressive. That is the wrong conclusion, as my diagnostic script clearly shows.

Using that script, you will see motion between each and every field, which is the definition of interlaced. You must only look at interlaced video field-by-field, not frame-by-frame, or you will make lots of mistakes.

I think I can figure this out, if I find a little more time, but perhaps someone else can give me a hint on how best to undo the spatial shift done on every other set of five fields. So, the challenge is to undo the field-order switch while simultaneously using motion estimation to insert the missing field every sixth field. The motion estimation part of the challenge is easy. The part I have not quite figured out is how to undo the five spatially shifted fields. I think I need to crop and shift up one pixel for those frames, insert the estimated field, and then re-weave. If I can figure out how to do this, I think I can make the result look almost identical to the original. |
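The drop-one-field-in-six mechanism and the resulting spatial shift can be sketched with a toy Python model. This is purely illustrative: field times and parities are simulated labels, not data read from the clip.

```python
# 59.94i fields alternate top/bottom spatially. Drop every 6th field, and the
# survivors must be re-slotted into a strict T,B,T,B... output field order,
# so a run of five fields lands on the "wrong" parity until the next drop.
src = [(t, 'T' if t % 2 == 0 else 'B') for t in range(12)]   # (time, parity)
kept = [f for i, f in enumerate(src) if i % 6 != 5]          # drop 2 of 12
slots = ['T' if i % 2 == 0 else 'B' for i in range(len(kept))]
shifted = [parity != slot for (_, parity), slot in zip(kept, slots)]
print(shifted)  # [False]*5 + [True]*5: five fields in place, five shifted
```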
20th March 2016, 21:00 | #10 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
P.S. Sharc's scripts give you a progressive result. If you don't mind losing over 50% of all the temporal information, and don't mind the residual jumps, they will do. However, I think it is possible to avoid that rather significant degradation and achieve a true interlaced result.
|
20th March 2016, 23:32 | #13 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
I'm pretty sure that the solution will involve interpolating to 150 fps (the least common multiple of 25 and 30), decimating down to 30 fps, and then using some sort of selectevery()/weave()/interleave() combination to combine these three things:
1. Four unaltered fields from the original video. 2. A new MVTools2-synthesized field 3. Four Bob-shifted fields from the original video 4. A new MVTools2-synthesized field Repeat. I'm just being slow today and can't quite come up with the code. It should be a pretty simple script. Last edited by johnmeyer; 20th March 2016 at 23:32. Reason: typo |
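As a quick check of the rate arithmetic in that plan (Python, illustrative only): 150 divides evenly by both 25 and 30, so every source frame and every target frame lands exactly on a 150 Hz grid.

```python
import math

grid = math.lcm(25, 30)   # least common multiple (math.lcm needs Python 3.9+)
src_step = grid // 25     # each 25 fps source frame spans 6 grid slots
dst_step = grid // 30     # each 30 fps target frame spans 5 grid slots
print(grid, src_step, dst_step)  # 150 6 5
```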
21st March 2016, 10:40 | #14 | Link | |
Registered User
Join Date: May 2006
Posts: 3,997
|
Quote:
Thanks for your profound analysis and explanations. Perhaps a minor note: Using your diagnostic script which displays the fields as pairs, I found it sometimes difficult to discover the temporal movement between the 2 fields (top and bottom) in low motion scenes or when there is no static background. Stepping sequentially through the fields can make the temporal change more obvious: Code:
assumeTFF()
separatefields()

- If the picture changes with every step => interlaced (a b c d e ....)
- For clip 2 the field pattern is: a a b c c d d e f f g g h i i ....., indicating the dropped field from the original 29.97 (59.94i) to 25 (50i) conversion.

The sequential display also helps to discover the correct field order easily: if the pictures jump back and forth when stepping through, the field order assumption was wrong.

Now looking forward to your final script.

P.S. How should one correctly specify interlaced material: as "frame rate i" or "field rate i"? Is there a standard rule for this?

Last edited by Sharc; 21st March 2016 at 13:05. |
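That "a a b c c d d e f f ..." pattern is exactly what falls out of one plausible hypothesis: blend-deinterlaced 29.97p frames duplicated up to 59.94 fields/s, then every sixth field dropped to reach 50. A small Python sketch of that hypothesis, using letters in place of real fields:

```python
frames = "abcdefghi"                                    # 29.97p source frames
duped = [f for f in frames for _ in range(2)]           # dupe: a a b b c c ...
kept = [f for i, f in enumerate(duped) if i % 6 != 3]   # drop 1 per 6 -> 50/s
print(" ".join(kept))  # a a b c c d d e f f g g h i i
```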
|
21st March 2016, 17:22 | #15 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
I agree with using the simpler separatefields() as the starting point. That is the first thing I do when looking at any of the clips posted here. However, for this clip it is incredibly misleading, because it makes you think that you are dealing with progressive frames and therefore should do some sort of IVTC. That is totally the wrong conclusion and, as I mentioned earlier, is the result of the mistake so many people make of looking at static "frames" when trying to figure out what to do with telecined or standards-converted video.
If I get a chance today I'll try to put together a brute-force way of getting an interlaced result. If I succeed, almost half the video fields will be passed through untouched; one field in every five will be motion estimated; and the other group of fields will be bob-shifted. My idea is to create three video streams (the original, the bobbed, and the motion-estimated) and then use selectevery() and weave()/interleave() to combine the three streams together. |
21st March 2016, 19:39 | #16 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,695
|
Well, I couldn't quite complete this script because I need a function that can take five fields from one source, followed by five fields from another source, followed by one field from a third source, etc. Interleave only lets me take one field from each source at a time.
However, I think the basic idea of this is correct. The only problem I see is that it is "hard-wired" to the cadence of the video clip sample, and if the pattern changes -- which is likely to happen in a longer video -- there is no way to re-sync it. I think there needs to be some sort of conditional based on hints from something like TFM. Another possibility would be to look for big gaps in motion, which would identify the point at which a field was dropped. However, the first order of business, in order to prove out the concept, is to come up with a replacement for interleave, because it clearly is not the correct function. Anyway, here's my half-done script that is designed to produce a true 29.97 interlaced output from the original 25 fps interlaced source, and which uses as much of the original video, without alteration, as possible. The idea of the script is to start by taking the first five even and odd fields and passing them through to the output. Then, insert a motion estimated field. Following that, append five even and odd fields that have been spatially shifted by one scan line, using a simple bob function (a better bobber would be a nice touch, once the script is working). Code:
source = AVISource("e:\fs.avi").assumeTFF()
orig_fields = separatefields(source)
even = selecteven(orig_fields)
odd  = selectodd(orig_fields)

#May need to trim(1,0) on bobbed source
bobbed = bob(source)
even_shifted = selecteven(bobbed).separatefields().selecteven()
odd_shifted  = selectodd(bobbed).separatefields().selecteven()

#"Estimated" function designed to produce a field halfway (in time) between two adjacent fields
even_new = estimated(even)
odd_new  = estimated(odd)

#Next set of statements gets groups of fields
#Use the original five fields from every group of ten
orig_even = selectevery(even,10,0,1,2,3,4)
orig_odd  = selectevery(odd, 10,0,1,2,3,4)

#Use shifted fields from the second half of every group of ten
shifted_even = selectevery(even_shifted,10,5,6,7,8,9)
shifted_odd  = selectevery(odd_shifted, 10,5,6,7,8,9)

#Insert one motion estimated field in between each group of shifted
#and un-shifted fields
new_even = selectevery(even_new,10,4)
new_odd  = selectevery(odd_new, 10,9)

#"Interleave" is the wrong function. I need a function that will take
#five fields from orig_even, followed by five fields from orig_odd, etc.
interleave(orig_even,orig_odd,new_even,shifted_odd,shifted_even,new_odd)

function estimated (clip c) {
    super = MSuper(c,pel=2)
    vfe = manalyse(super,truemotion=true,isb=false,delta=1)
    vbe = manalyse(super,truemotion=true,isb=true,delta=1)
    fixed = mflowinter(c,super,vbe,vfe,time=50)
    return fixed
} |
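The missing "interleave" could be prototyped as plain list manipulation before worrying about Avisynth syntax. This Python sketch is hypothetical (the stream names only mirror the script above); it shows the intended 10-in/12-out cycle: five original fields, one synthesized, five shifted, one synthesized.

```python
def weave_5151(orig, shifted, new_a, new_b, cycles):
    """Per cycle of 10 input fields, emit 12 output fields (50i -> 60i)."""
    out = []
    for c in range(cycles):
        out += orig[10 * c : 10 * c + 5]          # five untouched fields
        out.append(new_a[c])                      # one motion-estimated field
        out += shifted[10 * c + 5 : 10 * c + 10]  # five bob-shifted fields
        out.append(new_b[c])                      # one motion-estimated field
    return out

orig = list(range(20))
shifted = [f"s{i}" for i in range(20)]
out = weave_5151(orig, shifted, ["n0", "n1"], ["m0", "m1"], cycles=2)
print(len(out))  # 24 fields out for 20 in, i.e. the 6:5 rate ratio
print(out[:6])   # [0, 1, 2, 3, 4, 'n0']
```

In Avisynth terms this might map onto an interleave() of all the streams followed by selectevery() with explicit offsets, but I have not verified that mapping.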
23rd March 2016, 08:59 | #17 | Link |
Registered User
Join Date: May 2006
Posts: 3,997
|
Interesting script. The newly created fields look good.
Could "loop(....)" perhaps be a solution for the interleaving? Something like creating 5 intermediate clips, offset (advanced) by 1 frame (field) each, and then interleave the 5 clips plus the new frame like interleave(clip1,clip2,clip3,clip4,clip5,new_frame) ........ uggghhh. Last edited by Sharc; 23rd March 2016 at 10:14. |
23rd March 2016, 11:50 | #18 | Link |
Registered User
Join Date: Sep 2005
Location: Vancouver
Posts: 600
|
IMO the second clip contains zero intact fields from the source.
I believe it's: 59.94i -> 29.97p (blend deinterlace) -> 59.94p (simple dupe) -> 50i (patterned field drop) I've simulated this using another Disney show. The 3 interlaced, 2 progressive pattern matches the posted clip above. Google Drive: TSR.rar (2.49MB) Code:
AVISource("TSR blend-deinterlaced by VirtualDub.avi").ConvertToYV12()
# Does Avisynth have a full blend deinterlacer?
ChangeFPS("ntsc_double")
ChangeFPS(50)
BicubicResize(width,576)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()

Last edited by ChiDragon; 23rd March 2016 at 11:53. |
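The 3-interlaced/2-progressive frame pattern that pipeline produces can be reproduced with a toy Python model. This is a simplification (frame numbers instead of pixels; the ChangeFPS 59.94-to-50 step is approximated as dropping every sixth field):

```python
frames = list(range(30))                                  # 29.97p frames
fields = [f for f in frames for _ in range(2)]            # dupe to 59.94 fields/s
fields = [f for i, f in enumerate(fields) if i % 6 != 3]  # ~ChangeFPS(50)
pairs = [(fields[i], fields[i + 1]) for i in range(0, len(fields) - 1, 2)]
combed = [a != b for a, b in pairs]  # woven fields from different frames?
print(combed[:5])  # [False, True, True, True, False]: 3 combed, 2 clean
```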
24th March 2016, 15:21 | #19 | Link | |
Registered User
Join Date: Jan 2011
Location: Donetsk
Posts: 58
|
Quote:
Code:
AssumeTFF()
i = last
QTGMC(Preset="Fast", Sharpness=0.4)
srestore(omode=4, mode=4, cache=10, dclip=i.bob(-0.2,0.6).reduceflicker(strength=1)).TDecimate(mode=2)
# srestore(omode=4, mode=4, cache=10, dclip=i.bob(-0.2,0.6).reduceflicker(strength=1)).srestore(frate=23.976)

Last edited by Tempter57; 24th March 2016 at 15:25. |
|