Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Old 3rd February 2018, 16:28   #1021  |  Link
Yanak
Registered User
 
Join Date: Oct 2011
Posts: 267
Quote:
Originally Posted by johnmeyer View Post
... and, of course, you can always assemble the image sequence into an AVI prior to feeding it to the script. Pretty much any NLE can do that, as can Virtualdub.
Using the built-in frame-server option of VirtualDub to feed AviSynth directly and skip writing an .avi is a good option too, yes ^^
Old 3rd February 2018, 21:42   #1022  |  Link
bassquake
Registered User
 
Join Date: Jan 2007
Posts: 39
Quote:
Originally Posted by johnmeyer View Post
... and, of course, you can always assemble the image sequence into an AVI prior to feeding it to the script. Pretty much any NLE can do that, as can Virtualdub.
That's how I was doing it beforehand but it takes too long to render!
Old 5th February 2018, 23:02   #1023  |  Link
bassquake
Registered User
 
Join Date: Jan 2007
Posts: 39
Quote:
Originally Posted by manono View Post
Yes, you'd use ImageSource. I haven't read the script, but you might also have to change the colorspace. Others will know better than I.
Coolio, I got it working. Here's how to use an image sequence instead of an AVI:

Replace:

Code:
film= "C:\Users\You\Documents\Yourfile.avi"  # source clip, you must specify the full path here
with:

Code:
film= "C:\Users\You\Documents\Images\SAM_%04d.jpg"  #source clip, you must specify the full path here
start= 1 #Start frame number
end= 3574 #End (last) frame number
The %04d in the filename is the zero-padded numbering used in the images. Mine is SAM_0001.jpg and never goes higher than 4 digits.

You need to tell it which frames it starts and ends with. E.g. if the sequence runs from SAM_0003.jpg to SAM_0499.jpg, then start is 3 and end is 499.

Also, replace the line:

Code:
source1= AviSource(film).assumefps(play_speed).trim(trim_begin,0).converttoYV12()
with:

Code:
source1= ImageSource(film, start, end).assumefps(play_speed).trim(trim_begin,0).converttoYV12().flipvertical()
That's it. You can now load image sequences directly without having to render a lossless AVI or similar first.

Note: I added FlipVertical() to the source1 line as my images are all upside down.
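Putting the pieces together, a minimal standalone sketch of the image-sequence version (the path and frame numbers are just the examples above; leave out FlipVertical() if your images are the right way up):

```avisynth
# Load a zero-padded JPEG sequence directly (placeholder path)
film = "C:\Users\You\Documents\Images\SAM_%04d.jpg"
ImageSource(film, start=1, end=3574, fps=18)
ConvertToYV12()
FlipVertical()  # only needed if the scanner saves the frames upside down
```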

Hope that helps someone.

Last edited by bassquake; 22nd February 2019 at 15:25.
Old 10th February 2018, 18:34   #1024  |  Link
camelopardis
Registered User
 
Join Date: Feb 2018
Posts: 5
Hello all.

I'm very new to AviSynth, having just started trying to digitize my family's old 8mm/Super8 films dating back to the 1950s.

I've bought a Reflecta Film Scanner super8/Normal8 (similar to the Somikon and Wolverine) and currently just experimenting, trying to learn the best way to scan and the best way to post-process.

Even though I realise the fantastic Videofred / John Meyer scripts on here are for the output of far superior scanning methods, I thought I'd give them a try.

I have a little test clip showing a side-by-side result.

Input= Super 8 Reflecta scan at -1.0 exposure (scans as 1440x1080 30fps H.264 MP4)

converted to 18fps AVI using AVC2AVI

Output= latest 2012 VideoFred Script (default parameters)

I'm completely new to this but can see the output doesn't look right. I suspect the original scan is too heavily compressed?

Does anyone have any tips or suggestions? Should I use different parameters in the script - or a different method altogether?

Thank you - any and all advice most welcome
Old 10th February 2018, 22:44   #1025  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,198
It was tough for me to tell much from the Vimeo post. It actually looked pretty good.

I am not familiar with the Reflecta units, but a quick trip to their site makes it look like they must be frame-accurate, meaning you get precisely one frame of film on each frame of video. That is a requirement for using VideoFred's script.

I see you are storing the original scans using H.264. That is obviously a compressed format. You didn't provide any information about what settings you used. This is important because any video format that uses compression can use only a little compression (big files) or a LOT of compression (small files). The more the compression, the worse the artifacts you will get. So, if artifacts are the problem, then look for a setting in your capture software that lets you use less compression. You can always compress the files when you are finished, so that your final result doesn't require a huge array of disks to hold it. So, do the compression as the final step, if you need it, and not when you do the original capture.

You mention you are using a script dated 2012, but I thought VideoFred had posted something more recently than that. He has made some amazing advances over the past few years, some of which I still need to steal, er, incorporate into my own work, especially StainlessS' GamMac filter, which I still can't get to work reliably, but which shows great promise.

If you can upload a 10-second clip of one of your original transfers, without re-encoding, that will make it a lot easier to see what is going on.
Old 11th February 2018, 17:28   #1026  |  Link
camelopardis
Registered User
 
Join Date: Feb 2018
Posts: 5
Quote:
Originally Posted by johnmeyer View Post
It was tough for me to tell much from the Vimeo post. It actually looked pretty good.

I am not familiar with the Reflecta units, but a quick trip to their site makes it look like they must be frame-accurate, meaning you get precisely one frame of film on each frame of video. That is a requirement for using VideoFred's script.

I see you are storing the original scans using H.264. That is obviously a compressed format. You didn't provide any information about what settings you used. This is important because any video format that uses compression can use only a little compression (big files) or a LOT of compression (small files). The more the compression, the worse the artifacts you will get. So, if artifacts are the problem, then look for a setting in your capture software that lets you use less compression. You can always compress the files when you are finished, so that your final result doesn't require a huge array of disks to hold it. So, do the compression as the final step, if you need it, and not when you do the original capture.

You mention you are using a script dated 2012, but I thought VideoFred had posted something more recently than that. He has made some amazing advances over the past few years, some of which I still need to steal, er, incorporate into my own work, especially StainlessS' GamMac filter, which I still can't get to work reliably, but which shows great promise.

If you can upload a 10-second clip of one of your original transfers, without re-encoding, that will make it a lot easier to see what is going on.
Hi John - thank you very much for your reply!

Yes - the Reflecta does scan frame-by-frame, but unfortunately it can only save the result as a highly compressed MP4 file (this is the main complaint of most users).

Here is a longer clip of the raw 1440x1080 output from the machine (you can't change the compression ratio, you can only adjust exposure, sharpness, and frame adjustment).
In order to use it with AviSynth, the above clip was changed from MP4 to AVI by first extracting the raw H.264 using "My MP4Box" and then converting to AVI using AVC2AVI. If any of this is silly or wrong please let me know. I'm learning as I go along and am more than happy to be corrected!



Here is the same clip showing the output from Videofred's script. I used the latest 01_A script (posted on the first page of this thread, dated 20/06/2012). Is there a later one I should be using?



Here is the actual script used:
Code:
# 8mm film restoration script by videoFred.
# www.super-8.be
# info@super-8.be

# version 01.A with frame interpolation
# release date: june 20, 2012
#============================================================================================

# august 2010: added removerdirtMC() as suggested by John Meyer
# october 2010: auto sharpening parameters

# march 2011: new autolevels.dll by Jim Battle
# www.thebattles.net/video/autolevels.html

# june 2012: improved stabilisation


#=============================================================================================

# cleaning, degraining, resizing, stabilizing, sharpening, auto-levels and auto-white balance.
#=============================================================================================


film= "C:\Users\You\Documents\Yourfile.avi"  # source clip, you must specify the full path here






#PARAMETERS
#----------------------------------------------------------------------------------------------------------------------------
result="resultS1" # specify the wanted output here 

trim_begin=2  trim_end=10  play_speed=18   #trim frames and play speed (PAL: 16.6666 or 18.75)

numerator= 25  #numerator for the interpolator (final frame rate)
denumerator= 1 #denominator  example: 60000/1001= 59.94fps


#COLOR AND LEVELS PARAMETERS
#----------------------------------------------------------------------------------------------------------------------------
saturation=1.2   #for all outputs

gamma= 1.2 # for all outputs 

blue= 0  red= 0  #manual color adjustment, when returning result3 or result4. Values can be positive or negative


black_level=0  white_level=255 output_black=0  output_white=255 # manual levels, when returning result4


#AUTO LEVELS PARAMETERS
#--------------------------------------------------------------------------------------------------------------------------------

autolev_low= 6     # limit of autolevels low output
autolev_high= 235  # limit of autolevels high output

 

#SIZE, CROP AND BORDERS PARAMETERS
#----------------------------------------------------------------------------------------------------------------------------
CLeft=32  CTop=32  CRight=32  CBottom=32  #crop values after Depan and before final resizing 

W=720  H=576  #final size after cropping 

bord_left=0  bord_top=0  bord_right=0  bord_bot=0  #720p= borders 150


#STABILISING PARAMETERS, YOU REALLY MUST USE RESULTS7 TO CHECK STABILISATION!
#----------------------------------------------------------------------------------------------------------------------------
maxstabH=20 
maxstabV=20 #maximum values for the stabiliser (in pixels) 20 is a good start value

est_left=20   est_top=60  est_right=60  est_bottom=60  #crop values for special Estimate clip

trust_value= 1.0     # scene change detection, higher= more sensitive
cutoff_value= 0.5   # no need to change this, but you can play with it and see what you get





#CLEANING PARAMETERS
#--------------------------------------------------------------------------------------------------------------

dirt_strenght=30 # set this lower for clean films.


#DENOISING PARAMETERS
#----------------------------------------------------------------------------------------------------------------------------


denoising_strenght= 300  #denoising level of second denoiser: MVDegrainMulti() 
denoising_frames= 3  #number of frames for averaging (forwards and backwards) 3 is a good start value
block_size= 16  #block size of MVDegrainMulti()
block_size_v= 16
block_over= 8  #block overlapping of MVDegrainMulti()




# SHARPENING PARAMETERS
#--------------------------------------------------------------------------------------------------------------------------------

USM_sharp_ness= 40   USM_radi_us=3  #this is the start value for the unsharpmask sharpening
                                    #do not set radius less than 3 
                                    #the script will automatically add two other steps with lower radius 



last_sharp= 0.1 #final sharpening step after interpolation

last_blur= 0.2 #this smooths out the heavy sharpening effects





# END VARIABLES, BEGIN SCRIPT
#=================================================================================================================================


SetMemoryMax(800)  #set this to 1/3 of the available memory




LoadPlugin("plugins/Deflicker.dll")
Loadplugin("plugins/Depan.dll")
LoadPlugin("plugins/DepanEstimate.dll")
Loadplugin("plugins/removegrain.dll")
LoadPlugin("plugins/removedirt.dll")
LoadPlugin("plugins/MVTools.dll")
LoadPlugin("plugins/MVTools2.dll")
Loadplugin("plugins/warpsharp.dll")
LoadPlugin("plugins/autolevels_06.dll")
Import("plugins/03_RemoveDirtMC.avs")






source= AviSource(film).assumefps(play_speed).trim(trim_begin,0).converttoYV12()
trimming= framecount(source)-trim_end
source1= trim(source,0,trimming)






#STABILIZING/CROPPING
#...........................................................................................................................................

stab_reference= source1.crop(20,20,-20,-20).colorYUV(autogain=true).crop(est_left,est_top,-est_right,-est_bottom)

mdata=DePanEstimate(stab_reference,trust=trust_value,dxmax=maxstabH,dymax=maxstabV)
stab=DePanStabilize(source1,data=mdata,cutoff=cutoff_value,dxmax=maxstabH,dymax=maxstabV,method=0,mirror=15).deflicker()
stab2= stab.crop(CLeft,CTop,-CRight,-CBottom)
stab3=DePanStabilize(source1,data=mdata,cutoff=cutoff_value,dxmax=maxstabH,dymax=maxstabV,method=0,info=true)


WS= width(stab)
HS= height(stab)
stab4= stab3.addborders(10,10,10,10,$B1B1B1).Lanczos4Resize(WS,HS)
stab5= Lanczos4Resize(stab2,W,H).sharpen(0.5)


#UNSHARPMASK AUTO_PARAMETERS
#-------------------------------------------------------------------------------------------------------------------------------------------

USM_sharp_ness1 = USM_sharp_ness
USM_sharp_ness2 = USM_sharp_ness+(USM_sharp_ness/2)
USM_sharp_ness3 = USM_sharp_ness*2

USM_radi_us1 = USM_radi_us
USM_radi_us2 = USM_radi_us-1
USM_radi_us3 = USM_radi_us2-1


#CLEANING/PRESHARPENING/RESIZING
#..........................................................................................................................................


noise_baseclip= stab2.levels(0,gamma,255,0,255).tweak(sat=saturation)




cleaned= RemoveDirtMC(noise_baseclip,dirt_strenght).unsharpmask(USM_sharp_ness1,USM_radi_us1,0)\
.unsharpmask(USM_sharp_ness2,USM_radi_us2,0).Lanczos4Resize(W,H)



#DEGRAINING/SHARPENING
#...................................................................................................................................................................


vectors= cleaned.MVAnalyseMulti(refframes=denoising_frames, pel=2, blksize=block_size, blksizev= block_size_v, overlap=block_over, idx=1)
denoised= cleaned.MVDegrainMulti(vectors, thSAD=denoising_strenght, SadMode=1, idx=2).unsharpmask(USM_sharp_ness3,USM_radi_us3,0)






#CHANGING FRAME RATE WITH INTERPOLATION/FINALSHARPENING
#............................................................................................................................................................

super= denoised.MSuper()
backward_vec= MAnalyse(super, blksize=block_size, blksizev= block_size_v, overlap=block_over, isb=true)
forward_vec= MAnalyse(super,blksize=block_size, blksizev= block_size_v, overlap=block_over, isb= false)

interpolated= denoised.MFlowFps(super, backward_vec, forward_vec, num=numerator, den= denumerator, ml=100)\
.sharpen(last_sharp,mmx=false).sharpen(last_sharp,mmx=false).blur(last_blur,mmx=false)


#RESULT1: AUTOLEVELS,AUTOWHITE
#......................................................................................................................................................................
result1= interpolated.converttoRGB24().autolevels(output_low= autolev_low, output_high= autolev_high)\
.converttoYV12().coloryuv(autowhite=true).addborders(bord_left, bord_top, bord_right, bord_bot)

#RESULT2: MANUAL LEVELS, AUTOWHITE
#......................................................................................................................................................................
result2= interpolated.levels(black_level,1.0,white_level,0,255).coloryuv(autowhite=true)\
.addborders(bord_left, bord_top, bord_right, bord_bot)

#RESULT3: AUTOLEVELS, MANUAL COLOR CORRECTIONS
#.....................................................................................................................................................................
result3= interpolated.coloryuv(off_U=blue,off_V=red).converttoRGB24().autolevels(output_low= autolev_low, output_high= autolev_high)\
.converttoYV12().addborders(bord_left, bord_top, bord_right, bord_bot)

#RESULT4: MANUAL LEVELS, MANUAL COLOR CORRECTIONS
#.....................................................................................................................................................................
result4= interpolated.coloryuv(off_U=blue,off_V=red).levels(black_level,1.0,white_level,0,255)\
.addborders(bord_left, bord_top, bord_right, bord_bot)

#RESULT5: SPECIAL SERVICE CLIP FOR RESULT S5
#.....................................................................................................................................................................
result5= overlay(source1,greyscale(stab_reference),x=est_left,y=est_top).addborders(2,2,2,2,$FFFFFF).Lanczos4Resize(WS,HS)




#PARAMETERS FOR THE COMPARISONS
#.....................................................................................................................................................................
W2= W+bord_left+bord_right
H2= H+bord_top+bord_bot




final_framerate= numerator/denumerator
source4=Lanczos4Resize(source1,W2,H2).changeFPS(final_framerate)



#COMPARISONS: ORIGINAL VS RESULTS
#......................................................................................................................................................................
resultS1= stackhorizontal(subtitle(source4,"original",size=28,align=2),subtitle(result1,"result1: autolevels, autowhite",size=28,align=2))
resultS2= stackhorizontal(subtitle(source4,"original",size=28,align=2),subtitle(result2,"result2: autowhite, manual levels correction",size=28,align=2))
resultS3= stackhorizontal(subtitle(source4,"original",size=28,align=2),subtitle(result3,"result3: autolevels, manual color correction",size=28,align=2))
resultS4= stackhorizontal(subtitle(source4,"original",size=28,align=2),subtitle(result4,"result4: manual colors and levels correction",size=28,align=2))
resultS5= stackhorizontal(subtitle(result3,"result3: auto levels, manual color correction",size=28,align=2),subtitle(result4,"result4: manual colors and levels correction",size=28,align=2))
resultS6= stackhorizontal(subtitle(result1,"result1: autolevels, autowhite",size=28,align=2),subtitle(result2,"result2: manual levels, autowhite",size=28,align=2))

#SPECIAL COMPARISON CLIP FOR TESTING THE STABILIZER
#.........................................................................................................................................................................
resultS7= stackhorizontal(subtitle(result5,"baseclip for stabiliser -only the B/W clip is used",size=32,align=2),\
subtitle(stab4,"test stabiliser: dx=horizontal, dy=vertical",size=32,align=5)).converttoYUY2()




Eval(result)#.converttoRGB24()
The only thing I changed in the above script (apart from the source file) was W=720 H=576, which I changed to W=720 H=540. This was because the aspect ratio looked wrong (people looked fatter) with 576, so I changed it to be exactly half of the original 1440x1080. I'll admit I am quite confused here as to what I should be doing as regards output size!

I notice the output files from VirtualDub are quite huge (uncompressed RGB) - should I be changing to x264vfw-h.264? Also, if I do change output compression to x264vfw, am I right in thinking that I should also then change from "full processing mode" to "fast recompress" in VirtualDub?

I also still need to crop the borders properly!
Old 11th February 2018, 18:50   #1027  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,198
It sounds like you are doing everything right. I don't have any help to offer, other than to say that you need to find some way to store the original capture in something other than h.264, or at least find a way to change the compression setting used. It would be pretty strange if they completely hard-wired this setting. There must be a hack, .ini file, registry setting, or some other way to make these changes.

There are cropping options in VideoFred's script, and you can change those. I haven't looked at his script much since I branched off in my own direction, but I'm pretty sure his script crops and then re-sizes to whatever final size you have specified. Thus, you should be able to increase the crop at the top (that's where I saw the overhang from the previous frame). BTW, a better thing to do is adjust the framing control on the capture device so that the top and bottom frame border is roughly equal. Hopefully your transfer machine has such an adjustment.
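For reference, those crop options are the CLeft/CTop/CRight/CBottom parameters in the script posted above, applied after stabilisation and before the final resize. A sketch of increasing the top crop (64 is just a guess; tune it by eye against the overhang):

```avisynth
# Larger CTop trims more off the top of the frame before the final resize
CLeft=32  CTop=64  CRight=32  CBottom=32  #crop values after Depan and before final resizing
```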

Last edited by johnmeyer; 11th February 2018 at 18:51. Reason: typo
Old 12th February 2018, 22:58   #1028  |  Link
camelopardis
Registered User
 
Join Date: Feb 2018
Posts: 5
Thanks very much, John. It is a shame about the compression, but at least I'm getting them done after so many years. I think it's better that the film subjects who are still alive get to see them, even if sub-par, rather than not at all. I'll be keeping all the film so I can always get them re-done with better equipment at a later date.
Old 12th February 2018, 23:06   #1029  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 3,982
Quote:
Originally Posted by camelopardis View Post
In order to use it with AviSynth, the above clip was changed from MP4 to AVI by first extracting the raw H.264 using "My MP4Box" and then converting to AVI using AVC2AVI. If any of this is silly or wrong please let me know. I'm learning as I go along and am more than happy to be corrected!
You can use LSMASH (L-SMASH Works) or FFMS2 to open the MP4 directly. Both are external plugins, separate from the main AviSynth install.
e.g.
Code:
LSmashVideoSource("video.mp4")
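The FFMS2 route looks much the same; a sketch, assuming the FFMS2 plugin (with its FFMS2.avsi wrapper script) is installed, and "video.mp4" is a placeholder path:

```avisynth
# FFMS2 builds an index the first time it opens the file, then decodes directly
FFmpegSource2("video.mp4")
```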
Old 13th February 2018, 01:57   #1030  |  Link
JoeSuper8
Registered User
 
Join Date: Aug 2017
Location: New York
Posts: 7
Quote:
Originally Posted by videoFred View Post
Hi Joe,

Welcome here

First of all, please try to capture as good as possible. Later, we can talk about Gammac etc....

About the Jpeg image sequence: you have not captured the full 0-255 range. It's more like 0-125. You have set red auto and blue auto off. Why? One of the great benefits of C2D is the excellent auto exposure feature. If you set the limit to 245 you will almost never get blown-out whites.

Sharpness can be better also. Have you tried to find the 'sweet spot' from your lens? Every lens has a sweet spot. Easy to check: in C2D, use internal flashing and play with the lens aperture. At a certain point the image will look sharpest. This is your sweet spot. Leave it there forever. Then you can fine focus with your camera slide.

Also, there is no need to save as an image sequence. I always save straight to AVI, at 15fps! I'm using the Canopus HQX codec for this but Lagarith etc. will work fine too. Uncompressed is also an option, but it creates huge files, and why would we do it? You will see no visual difference.

About the speed from the script: downscaling the source will speed up the script a lot. You can upscale again at the very end of the script.
https://forum.doom9.org/showthread.p...55#post1810955

many greetings,
Fred.
Thanks Fred and johnmeyer for your ideas - sorry I haven't been able to get to testing sooner. I noticed no significant change in Cine2Digits when I turned the Camera Settings - Analog - White Balance Blue and Red to Continuous instead of manual and left the other settings as described in my 8/22/17 post (the Cine2Digits manual suggested manual values for the camera white balance because the C2D program itself auto-adjusts the colors through its LED control). In both my attempt over the summer and my try this week, the C2D program LED control is set to auto.

I tried turning down the RGB dials from 1.47 to 1 but that did not affect overexposure. I also tried changing the pixel format (BayerRG8,16, YCBCR, RGB8) but that did not change the highlights.

I tried adjusting RGB gamma below 1 in C2D but the highlights did not seem to recover. Could the highlights be overexposed on the original film? Or another setting that's not right?

Fred, as you suggested, I adjusted my white level targets for Red, Green, and Blue to 245 in the "Exposure Options" C2D menu. I set auto exposure sensitivity "Over" to 350 and "Under" to 650 (I did not notice a difference changing these values much lower and much higher).

I am still not sure how the range is closer to 0-125 in my capture.

johnmeyer and Fred I will look into the script issues and downsizing after fixing the highlights issue to achieve something closer to 0-255. Any settings I could be incorrectly setting somewhere?

Here is a video showing the C2D settings and gamma adjustments - please let me know if I might want to change anything in addition to AVI capture instead of TIFF:
https://player.vimeo.com/video/255482660

Last edited by JoeSuper8; 13th February 2018 at 02:03.
Old 13th February 2018, 11:22   #1031  |  Link
videoFred
Registered User
 
videoFred's Avatar
 
Join Date: Dec 2004
Location: Gent, Flanders, Belgium, Europe, Earth, Milky Way,Universe
Posts: 663
Hi Joe,

It's a matter of settings, both camera settings and C2D settings.
Also, be sure that the ROI for the picture analysis is inside the black borders. Otherwise you will get false results.

Fred.
__________________
About 8mm film:
http://www.super-8.be
Film Transfer Tutorial and example clips:
https://www.youtube.com/watch?v=W4QBsWXKuV8
More Example clips:
http://www.vimeo.com/user678523/videos/sort:newest
Old 13th February 2018, 11:36   #1032  |  Link
videoFred
Registered User
 
videoFred's Avatar
 
Join Date: Dec 2004
Location: Gent, Flanders, Belgium, Europe, Earth, Milky Way,Universe
Posts: 663
Quote:
Originally Posted by camelopardis View Post
Here is a longer clip of the raw 1440x1080 output from the machine (you can't change the compression ratio, you can only adjust exposure, sharpness, and frame adjustment).
Hello Camelopardis,

We cannot download this Vimeo clip. We need a "downloadable" clip.

Also, do not use any camera sharpness (or other) settings. It is much better to do sharpening in post.

Fred.
Old 16th February 2018, 01:04   #1033  |  Link
camelopardis
Registered User
 
Join Date: Feb 2018
Posts: 5
Quote:
Originally Posted by videoFred View Post
Hello Camelopardis,

We cannot download this Vimeo clip. We need a "downloadable" clip.

Also, do not use any camera sharpness (or other) settings. It is much better to do sharpening in post.

Fred.
Hi Fred,

I've uploaded a raw output clip to my onedrive:
https://1drv.ms/v/s!Arqwxcq4sXstsA7drZ7CuhCNAQYD

The machine has 3 sharpness settings: low, medium, and high. You can't turn it completely off. I've set it to low.
Old 16th February 2018, 01:27   #1034  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 3,982
@camelopardis - why is it 23.976fps AVI with blends? Is it a 1:1 scanner? I thought it recorded MP4? You mentioned 18fps and 30fps, but this clip is 23.976. Or does it scan at a fixed 30fps rate without adjustments or variable speed? I'm wondering at what stage the blends are being introduced.

Are there options to bump up the bitrate on the machine or in software? If that's 1st generation, there is significant quality loss from the compression. 7 Mb/s for 1440x1080 isn't very much, even for final delivery formats, let alone a 1st-gen transfer.
Old 16th February 2018, 02:09   #1035  |  Link
johnmeyer
Registered User
 
Join Date: Feb 2002
Location: California
Posts: 2,198
Thanks for the upload.

Is this the unmodified, original video as it came directly from the Reflecta unit, or have you already done something to it, either with an AVISynth script, or something else? I ask this because what you posted is an awful mess and cannot possibly be used inside of VideoFred's script (or any other script designed to improve the transfer).

The problem is that you have blended fields. This is something you always see with a cheap transfer, where the person just points the video camera at the movie screen. However, I wouldn't think you would get it with a transfer unit that is frame accurate, which I thought was the case with the Reflecta.

So, unless you can upload something better, the short answer is that you cannot use VideoFred's script, or any other AVISynth script, because your movie frames are made up of blended fields.
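If you want to confirm the blending yourself, one simple check is to load the clip and step through it a frame at a time; a sketch, assuming one of the MP4 source filters mentioned earlier in the thread is installed ("video.mp4" is a placeholder):

```avisynth
# Step through this in VirtualDub or AvsPmod; blended frames show two
# exposures ghosted over each other on a regular cadence
LSmashVideoSource("video.mp4")
ShowFrameNumber()  # overlays frame numbers so the blend pattern is easy to note down
```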
Old 16th February 2018, 06:43   #1036  |  Link
JoeSuper8
Registered User
 
Join Date: Aug 2017
Location: New York
Posts: 7
Quote:
Originally Posted by videoFred View Post
Hi Joe,

It's a matter of settings, both camera settings and C2D settings.
Also, be sure that the ROI for the picture analysis is inside the black borders. Otherwise you will get false results.

Fred.
Yep Fred I have set the ROI inside the Super 8 frame with room to spare, like in part 2 of your tutorial video.

As you saw in the link from my previous post, my histogram is very weak during the test capture (not much red, green, and blue area) whereas your tutorial shows significantly more R, G, and B in the C2D live histogram.

I have attempted to adjust the C2D and camera settings but I am still getting the same results.

Would you mind taking a look at the screen capture video link from my last post to see where I might want to change any settings to improve the histogram and range of capture?
Old 16th February 2018, 19:08   #1037  |  Link
videoFred
Registered User
 
videoFred's Avatar
 
Join Date: Dec 2004
Location: Gent, Flanders, Belgium, Europe, Earth, Milky Way,Universe
Posts: 663
Quote:
Originally Posted by JoeSuper8 View Post
Would you mind taking a look at the screen capture video link from my last post to see where I might want to change any settings to improve the histogram and range of capture?
Every machine vision camera type has a different settings menu. In your case: I would try "black level". But even better: try them all and see what you get.

In C2D your gamma settings are too low.

And please try it on different films and/or other scenes too.

Fred.
Old 17th February 2018, 13:19   #1038  |  Link
camelopardis
Registered User
 
Join Date: Feb 2018
Posts: 5
Hi all.

My apologies! I realise now that wasn't the unmodified output from the Reflecta unit, because I cut it into a short clip for upload in Sony Vegas (where it was then rendered as MPEG-4 output).

This time I uploaded a short clip DIRECT from the machine. This is the MP4 file it writes directly to the SD card. I've done nothing to it and left it as the original 1440x1080 30fps MP4 file that the machine writes (you can't change this output). I haven't corrected the frame rate.

https://1drv.ms/v/s!Arqwxcq4sXstsA9RBsfWbucoH0By
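Incidentally, the frame rate can be corrected later in AviSynth without dropping or duplicating frames, assuming the machine stores one film frame per video frame. A sketch (the file name is a placeholder, FFVideoSource needs the FFMS2 plugin, and 18 fps is the usual silent Super 8 speed):

Code:

clip = FFVideoSource("scan.mp4")  # placeholder name; requires the FFMS2 plugin
AssumeFPS(clip, 18)               # re-time the 30 fps container to film speed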
camelopardis is offline   Reply With Quote
Old 3rd April 2018, 00:30   #1039  |  Link
clivesay
Registered User
 
Join Date: Mar 2018
Posts: 1
Hey everyone. I joined this forum just for this thread!

Long story short, at the first of the year I learned I have a sister that no one (including my father) knew anything about. My dad has been gone for over 25 years. Some slides I had never seen before, tucked away in a closet, helped to unlock some of the mystery.

This now has me on a mission. I bought a good scanner to get good copies of my dad's slides and was then informed my grandparents had some 8mm reels that needed to be digitized. So I jumped in, bought a Wolverine MovieMaker Pro, and dove into 8mm film restoration. I have the scripts from here up and running and have been tinkering with them.

First question: some of you here have an incredible amount of experience in film restoration. How does someone like me gain enough knowledge to do some basic but effective editing? The scripts make a difference on some of my test clips, but I have no idea which parameters to tweak to refine the results. Is there a good tutorial somewhere? Should I post clips here so some of you can give me pointers?

I am all about learning and not afraid to get my hands dirty.

Would appreciate any guidance.

Oh, if anyone is interested in my experience with my sister you can google "Crawfordsville DNA" and see our story that was broadcast on the news.

Thank you.
clivesay is offline   Reply With Quote
Old 3rd April 2018, 03:52   #1040  |  Link
`Orum
Registered User
 
Join Date: Sep 2005
Posts: 165
Quote:
Originally Posted by clivesay View Post
First question: Some of you people here have an incredible amount of experience in film restoration. How does someone like myself gain enough knowledge to do some basic but effective editing?
I have never had the (dis)pleasure of having to restore analog footage, but I can tell you that there's no "one-size-fits-all" solution. That said, Fred's scripts are probably a good place to start.
Quote:
Originally Posted by clivesay View Post
The scripts make a difference on some of my test clips but I have no idea what parameters to tweak to refine some of the results. Is there a good tutorial somewhere? Do I post clips here and some of you can give me some pointers?
Posting clips never hurts, and while we can give recommendations, things are always best hand-tuned by the end user: you know exactly what you want, whereas we can only tune to what we think looks best.
Quote:
Originally Posted by clivesay View Post
I am all about learning and not afraid to get my hands dirty.
Good! Sometimes the best way to learn which parameters to tune is by playing with them yourself. That said, when there are a lot of parameters, practiced hands can help, as parameters sometimes interact in important ways. There's also a lot to know about video (and a lot you probably don't need to know unless you're writing your own filters), but a good place to start is the wiki. There's a section right on the home page for getting started, though it sounds like you've already got the initial setup done.
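As a concrete example of parameters that interact, here is a typical MVTools degraining chain (a generic sketch, not taken from any script in this thread; blksize and thSAD are knobs that are usually tuned together):

Code:

super = MSuper(pel=2)                          # subpel accuracy; interacts with search quality
bvec  = MAnalyse(super, isb=true,  blksize=16) # backward motion vectors
fvec  = MAnalyse(super, isb=false, blksize=16) # forward motion vectors
MDegrain1(super, bvec, fvec, thSAD=400)        # higher thSAD = stronger denoising but more smearing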

Software like AvsPmod can make things friendlier for those new to AviSynth and faster for those who are used to it, though it can be frustrating as it's far from "stable" and may crash on you (save your script frequently).
__________________
My filters: DupStep | PointSize
`Orum is offline   Reply With Quote