18th October 2013, 02:19 | #1 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Simple 30->24 fps Srestore decimate not working
Here's a sample clip from the football game I'm restoring:
test clip.mpg

It is NTSC video (29.97) from film. The pulldown is a simple frame blend (not field blend) every fifth frame. All that needs to be done is to detect this blended frame and decimate it. I did this with several other films and everything worked great. However, I've tried every Srestore parameter and cannot get it to work. I've tried first correcting the contrast and then using that corrected clip, but I still get incorrect decimation. The problem is that the script is decimating more than just every fifth frame. Here's the script I'm using (using SRestore v2.7e): Code:
loadPlugin("C:\Program Files\AviSynth 2.5\plugins\Srestore\mt_masktools-25.dll")
loadPlugin("c:\Program Files\AviSynth 2.5\plugins\MPEGDecoder.dll")
Import("C:\Program Files\AviSynth 2.5\plugins\Srestore\srestore.avs")
source = mpeg2source("E:\Richards\1970 Stanford at Washington State\test clip.d2v")
output = srestore(source, frate=23.976, omode=1)
#stackhorizontal(source.SeparateFields(), output.SeparateFields())
return output

BTW, this script is obviously NOT doing any decimation (omode=1); instead it lets me observe (using stackhorizontal) which frames will be decimated. If I can get it working, I'll change to omode=6. I'm using AviSynth 2.60, build Mar 9, 2013, and Masktools 2.0.36.0. I've also tried first converting the video to AVI in order to avoid any issues with caching when using DGIndex/MPEG2Source to read the video file (I've sometimes had caching issues). Thanks in advance for any hints or help. |
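For reference, the arithmetic behind the requested conversion can be sketched outside AviSynth. This is an illustrative Python sketch, not part of the actual workflow; `decimate_fifth` and the blend position are hypothetical stand-ins for whatever the detection pass reports:

```python
from fractions import Fraction

# NTSC rates as exact fractions
ntsc_video = Fraction(30000, 1001)   # 29.97 fps
ntsc_film  = Fraction(24000, 1001)   # 23.976 fps

# Dropping exactly one frame out of every five converts 29.97 -> 23.976
assert ntsc_video * Fraction(4, 5) == ntsc_film

def decimate_fifth(frames, blend_pos):
    """Drop the frame at index `blend_pos` within each group of five."""
    return [f for i, f in enumerate(frames) if i % 5 != blend_pos]

# A 10-frame run with blends at in-cycle position 4 (every fifth frame)
print(decimate_fifth(list(range(10)), 4))  # -> [0, 1, 2, 3, 5, 6, 7, 8]
```

The hard part, of course, is not this arithmetic but reliably finding which of the five frames is the blend, which is exactly where SRestore is failing here.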
19th October 2013, 01:06 | #2 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Well, I spent another two hours on this, but can't get it to work. I am really puzzled because I've used SRestore many times before. I had one problem when I stupidly used SetMTMode(), but I'm not doing that here. I've tried dozens of different settings, but the blend detection seems to be random.
I'll keep trying ... hopefully someone will be able to offer some advice. |
19th October 2013, 14:28 | #3 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
I guess you're not just asking for SelectEvery(5,1,2,3)? I cannot help you with Srestore, but I can offer a different solution and maybe initiate a discussion about timeline manipulation from AviSynth's runtime environment.
First, I looked into Srestore.avsi, but I'm too lazy and stupid to understand it at a quick glance. Framerate reduction seems to be done in the second-to-last line, 'oclp.trim(opos,0)', where opos includes a difference between current_frame and the sum of removed frames at the current_frame position. It is embedded in a ScriptClip() call. Second, it is not difficult to determine a blended frame inside a ScriptClip(). You need RT_Stats for the example script. Code:
DirectShowSource("test clip.mpg")
ScriptClip(""" # "
c = last
var1 = RT_LumaCorrelation(c, c, delta=-3, delta2=-2)+RT_LumaCorrelation(c, c, delta=-2, delta2=-1)
var2 = RT_LumaCorrelation(c, c, delta=-2, delta2=-1)+RT_LumaCorrelation(c, c, delta=-1, delta2=+0)
var3 = RT_LumaCorrelation(c, c, delta=-1, delta2=+0)+RT_LumaCorrelation(c, c, delta=+0, delta2=+1)
var4 = RT_LumaCorrelation(c, c, delta=+0, delta2=+1)+RT_LumaCorrelation(c, c, delta=+1, delta2=+2)
var5 = RT_LumaCorrelation(c, c, delta=+1, delta2=+2)+RT_LumaCorrelation(c, c, delta=+2, delta2=+3)
RT_Debug(RT_String("%d %f %f %f %f %f", current_frame, var1, var2, var3, var4, var5))
var3 > max(var1,var2,var4,var5) ? c.subtitle("____to be removed") : c
""") #"

Third, I know of no way to permanently change the clip's timeline from within a ScriptClip()! Manipulating current_frame is no problem, but a timeline change needs to persist outside the runtime environment: clip properties like framecount are determined at script start and are not modifiable while the script runs. The question is: how can the clip, say at position 123, know how many frames were dropped in the range 0...122? I can imagine two solutions: recursion, and restriction to linear editing. Recursion means that at position 9999 the script needs 9998 calls to the RTE, i.e. for each frame the whole chain down to frame #0 is re-evaluated. Because that might be an issue with PC resources, restriction to linear editing might be favorable: the clip *must* be navigated from frame #0 towards the end. A global "frame offset" variable is used to shift the clip, and this variable is incremented each time the ScriptClip() encounters a frame to drop. Based on that, the following script should work. Code:
global DroppedFrames = 0
DirectShowSource("test clip.mpg")
ShowFrameNumber()
c = last
ScriptClip(""" # "
var1 = RT_LumaCorrelation(c, c, delta=-3, delta2=-2, w=-62)+RT_LumaCorrelation(c, c, delta=-2, delta2=-1, w=-62)
var2 = RT_LumaCorrelation(c, c, delta=-2, delta2=-1, w=-62)+RT_LumaCorrelation(c, c, delta=-1, delta2=+0, w=-62)
var3 = RT_LumaCorrelation(c, c, delta=-1, delta2=+0, w=-62)+RT_LumaCorrelation(c, c, delta=+0, delta2=+1, w=-62)
var4 = RT_LumaCorrelation(c, c, delta=+0, delta2=+1, w=-62)+RT_LumaCorrelation(c, c, delta=+1, delta2=+2, w=-62)
var5 = RT_LumaCorrelation(c, c, delta=+1, delta2=+2, w=-62)+RT_LumaCorrelation(c, c, delta=+2, delta2=+3, w=-62)
RT_Debug(RT_String("%d %f %f %f %f %f", current_frame, var1, var2, var3, var4, var5))
#~ (var3 > max(var1,var2,var4,var5)) ? c.Subtitle("____to be removed") : c
global DroppedFrames = (var3 > max(var1,var2,var4,var5)) ? DroppedFrames+1 : DroppedFrames
Trim(DroppedFrames,0)
Subtitle(string(DroppedFrames))
""", after_frame=true) #"

EDIT: This is not an AviSynth issue, but my fault for using DirectShowSource together with right-arrow keyboard autorepeat; DirectShowSource is unreliable regarding the timeline. I did, however, successfully test a ScriptClip variant that writes to a file while you move once through the clip. The file then contains the necessary Trim() commands to do the actual frame dropping. Code:
global DroppedFrames = 0
RT_TxtWriteFile("""DirectShowSource("test clip.mpg")""", "TrimList.avs")
RT_TxtWriteFile("ShowFrameNumber()", "TrimList.avs", append=true)
DirectShowSource("test clip.mpg")
ShowFrameNumber()
ScriptClip(""" # "
c = last
var1 = RT_LumaCorrelation(c, c, delta=-3, delta2=-2, w=-62)+RT_LumaCorrelation(c, c, delta=-2, delta2=-1, w=-62)
var2 = RT_LumaCorrelation(c, c, delta=-2, delta2=-1, w=-62)+RT_LumaCorrelation(c, c, delta=-1, delta2=+0, w=-62)
var3 = RT_LumaCorrelation(c, c, delta=-1, delta2=+0, w=-62)+RT_LumaCorrelation(c, c, delta=+0, delta2=+1, w=-62)
var4 = RT_LumaCorrelation(c, c, delta=+0, delta2=+1, w=-62)+RT_LumaCorrelation(c, c, delta=+1, delta2=+2, w=-62)
var5 = RT_LumaCorrelation(c, c, delta=+1, delta2=+2, w=-62)+RT_LumaCorrelation(c, c, delta=+2, delta2=+3, w=-62)
RT_Debug(RT_String("%d %f %f %f %f %f", current_frame, var1, var2, var3, var4, var5))
action = var3 > max(var1,var2,var4,var5) ? true : false
action ? RT_TxtWriteFile("Trim(0,"+string(current_frame-1-DroppedFrames)+")++Trim("+string(current_frame+1-DroppedFrames)+",0)", "TrimList.avs", append=true) : nop()
global DroppedFrames = action ? DroppedFrames+1 : DroppedFrames
last
""", after_frame=true) #"

Last edited by martin53; 19th October 2013 at 21:19. Reason: spelling |
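The running-offset bookkeeping in the linear-editing approach above can be modeled in plain Python (a hypothetical sketch, not RT_Stats or AviSynth code): each kept output frame comes from position current + dropped, which is the effect of Trim(DroppedFrames, 0), and the `is_blend` callback stands in for the correlation test:

```python
def linear_decimate(source, is_blend):
    """One forward pass, like the ScriptClip: keep a running count of
    dropped frames; each kept output frame is fetched from source
    position pos + dropped (the effect of Trim(DroppedFrames, 0))."""
    out, dropped, pos = [], 0, 0
    while pos + dropped < len(source):
        src = pos + dropped
        if is_blend(source[src]):
            dropped += 1          # global DroppedFrames += 1
            continue              # re-evaluate the same output position
        out.append(source[src])
        pos += 1
    return out

frames = ["A", "B", "C", "D", "blend", "E", "F", "G", "H", "blend"]
print(linear_decimate(frames, lambda f: f == "blend"))
# -> ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']
```

As the post notes, this only works if the clip is navigated strictly forward from frame #0; random access breaks the running count, which is why the file-writing variant (pass one builds a trim list, pass two applies it) is the safer design.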
19th October 2013, 15:55 | #4 | Link | |
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431
|
Quote:
c = Trim(DroppedFrames, 0) (BTW, DroppedFrames does not need to be global.) For a similar problem, see here. For another approach, doing deletion at compile-time, see this thread. |
|
19th October 2013, 16:02 | #5 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
Might be of interest: http://forum.doom9.org/showthread.php?t=167971
Code:
FrameSel() is a simple plugin to select individual frames from a clip. Can select frame numbers by direct arguments to the filter, or in a string, or in a command file.
Video: Planar, YUY2, RGB32, RGB24.
Audio: Returns NO AUDIO (does not really make sense for individual frames).

FrameSel(Clip, int F1, ..., int Fn, string 'scmd', string 'cmd', bool 'show', bool 'ver', bool "reject", bool "ordered", bool "debug")

Reject: bool, default false. If true, selects the frames NOT selected, rather than the selected frames. E.g. if you have a 5-frame clip (0->4) and have commands to select 4, 2 and 0, then reject=true would actually select frames 1 and 3 instead. If reject=true, then ordering occurs no matter whether Ordered is set true or false, and the frames are returned in sequential order. Reject might be of use to view all frames NOT in a frames command set. Code:
Mpeg2Source("test clip.d2v")
Crop(100,24,-120,-24)
DROPLIST="Droplist.txt"
RT_FileDelete(DROPLIST)
PrevFrm = -2 # Init
ScriptClip(""" # "
c = last
corr1 = (PrevFrm==current_frame-1) ? corr2 : RT_LumaCorrelation(c, c, delta=-3, delta2=-2)
corr2 = (PrevFrm==current_frame-1) ? corr3 : RT_LumaCorrelation(c, c, delta=-2, delta2=-1)
corr3 = (PrevFrm==current_frame-1) ? corr4 : RT_LumaCorrelation(c, c, delta=-1, delta2= 0)
corr4 = (PrevFrm==current_frame-1) ? corr5 : RT_LumaCorrelation(c, c, delta= 0, delta2= 1)
corr5 = (PrevFrm==current_frame-1) ? corr6 : RT_LumaCorrelation(c, c, delta= 1, delta2= 2)
corr6 = RT_LumaCorrelation(c, c, delta= 2, delta2= 3)
v1 = (corr1 + corr2) / 2.0
v2 = (corr2 + corr3) / 2.0
v3 = (corr3 + corr4) / 2.0
v4 = (corr4 + corr5) / 2.0
v5 = (corr5 + corr6) / 2.0
# RT_Debug(RT_String("%d %6.4f %6.4f %6.4f %6.4f %6.4f", current_frame, v1, v2, v3, v4, v5))
action = v3 > max(v1,v2,v4,v5) ? true : false
action ? RT_TxtWriteFile(string(current_frame),DROPLIST, append=true) : nop()
action ? SubTitle("DROP",align=5,size=32) : NOP
RT_SubTitle("%6.4f %6.4f %6.4f %6.4f %6.4f %6.4f",corr1,corr2,corr3,corr4,corr5,corr6)
RT_SubTitle(" %6.4f %6.4f %6.4f %6.4f %6.4f",v1,v2,v3,v4,v5,y=20)
PrevFrm = current_frame
last
""", after_frame=true) #"
return last

Code:
Mpeg2Source("test clip.d2v")
DROPLIST="Droplist.txt"
FrameSel(cmd=DROPLIST,show=true,reject=true)
return last

EDIT: There are a number of consecutive blends in the source; perhaps that is throwing srestore off.
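The PrevFrm trick in the first script above (recomputing only one correlation per frame while playback is strictly sequential, and reusing the other five) can be sketched in language-neutral terms. Hypothetical Python; `metric` stands in for RT_LumaCorrelation:

```python
class RollingCorr:
    """Cache six pairwise frame metrics. On strictly sequential access,
    five of the six values shift over from the previous call and only
    one new comparison is computed (mirrors the PrevFrm test)."""
    def __init__(self, metric):
        self.metric = metric   # metric(i, j) compares frames i and j
        self.prev = -2         # like 'PrevFrm = -2  # Init'
        self.corr = [0.0] * 6

    def update(self, n):
        if self.prev == n - 1:
            # sequential: shift, compute only the newest pair
            self.corr = self.corr[1:] + [self.metric(n + 2, n + 3)]
        else:
            # random access: recompute all six pairs around frame n
            self.corr = [self.metric(n + d, n + d + 1) for d in range(-3, 3)]
        self.prev = n
        return self.corr
```

This matters because each RT_LumaCorrelation call reads two whole frames, so caching cuts the per-frame cost by roughly a factor of six during a linear pass.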
__________________
I sometimes post sober. StainlessS@MediaFire ::: AND/OR ::: StainlessS@SendSpace "Some infinities are bigger than other infinities", but how many of them are infinitely bigger ??? Last edited by StainlessS; 20th October 2013 at 17:12. |
19th October 2013, 16:40 | #6 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Thanks for all the input, but as I said in my initial post, my problem is not the decimation, but the detection. Srestore is sometimes replacing two or even three frames in a group of five, and is sometimes missing really obvious (visually) blended frames. Also, note that in the script I posted in #1 above, "omode=1". This does not perform any frame deletion. Decimation can always be done in a second pass, using a list created in pass one.
So my problem is understanding why this clip causes problems for SRestore, while similar clips (progressive, with one blended frame every fifth frame) from other sources work just fine. I suppose I could find the several dozen places where the pulldown cadence changes and then use a bunch of SelectEvery() statements, but that would take a long time.

I won't be at the main computer until Monday, but at that time I'll try some of the ideas posted and see if I can finally solve this problem. During the course of trying to make this work I did look at srestore.avsi, but decided not to modify it. Based on the posts here, I'm now going to look at doing just that, but the focus will not be on the decimation logic but on the detection code. That is where the problem lies.

For those who didn't follow the evolution of srestore: it initially didn't do any decimation at all, but instead was designed to simply replace the blended field/frame with a neighboring frame, with the understanding that you would use TDecimate (or your favorite decimator) to eliminate the resulting dups.

BTW, I have experimented with different versions of Masktools and found that I get different results with different versions, so that variable is in the mix.

[edit] One thing I forgot to mention is that I have tried creating a DClip that has been stabilized with Depan and then cropped. I did this because I realized that there is a lot of jumpiness from film gate weave, and that this motion may be causing the detection problems. So far this hasn't improved the detection, but I'm still working on it.

Last edited by johnmeyer; 19th October 2013 at 16:44. Reason: Added last paragraph about Depan |
19th October 2013, 18:43 | #7 | Link | |
Registered User
Join Date: Mar 2007
Posts: 407
|
Quote:
I pondered on that and tried it, but it worked no better. I will read the links thoroughly. I found out that DirectShowSource, when serving MPEG2 to an autorepeat fast-forward in AvsPmod, causes the jumps. Single frames are good, so AviSynth is not guilty; DirectShowSource is. The above example gives good output with a good source filter. Indeed! I had always programmed with the wrong concept in mind, that RTE variables live only during one call, but this example proves you are right! Code:
ColorBars()
ScriptClip(""" #"
try { subtitle(s) } catch(err) {}
s = "Here I am"
last
""") #"
ScriptClip(""" #"
try { subtitle(s + " too", align=9) } catch(err) {}
last
""") #"

Last edited by martin53; 19th October 2013 at 21:26. |
|
19th October 2013, 19:39 | #9 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Even though some of the discussion has moved away from detecting blends, it has actually been quite useful because it has forced me to consider developing my own logic that might be better than SRestore, at least for this video.
Here is my idea. The blended frame that happens every fifth frame is obviously nothing more than a blend of the two adjacent frames. I should therefore be able to construct this blend almost perfectly by simply blending the adjacent frames myself. Because the video underwent compression when it was encoded, my blend will not be a pixel-for-pixel exact duplicate, but it should be close.

Next, building on some ideas I learned from Didée in this thread: Inverse of Decimate(), I should be able to create a second clip where each frame is a blend of its two neighbors. I will then interleave this with the original clip. If I then compare adjacent frames in this new stream, most frames should be quite different, because in each comparison I'll be comparing a blend with an original frame. However, when the blend I create is compared to the blend created by the telecine, I should get an almost exact duplicate match, which should be (relatively) easy to detect. I'll then use the tricks I learned from Didée in the thread linked above to decimate all the frames I added, plus the frames marked as duplicates.

I'll report back. |
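The core assumption here (that a telecine blend can be reconstructed almost exactly by averaging its neighbors) can be illustrated with a toy example. Hypothetical Python on 4-pixel "frames"; in this noiseless toy the match is exact, whereas on real compressed footage it would only be close:

```python
def blend(a, b):
    """Average two 'frames' pixel-wise, like the telecine blend."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def sad(a, b):
    """Sum of absolute differences: small means the frames match."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Toy frames: D is a telecine blend of its neighbors C and E
C = [10, 40, 80, 120]
E = [20, 60, 90, 100]
D = blend(C, E)

# Reconstructing the blend from the neighbors matches D far better
# than either original frame does, so the comparison flags D
print(sad(blend(C, E), D), sad(C, D), sad(E, D))
```

This is also why StainlessS's point about consecutive blends matters: when a blend's neighbor is itself a blend, there is no clean pair of frames to reconstruct from, and the comparison degrades.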
19th October 2013, 20:24 | #10 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
What are you planning to do at eg frames 59/60 and 64/65 where you have double blends ?
EDIT: Both above mentioned blend pair frames are blends partially made from a frame for which there is no clean frame.
Last edited by StainlessS; 19th October 2013 at 20:51. |
19th October 2013, 20:28 | #11 | Link | |
Registered User
Join Date: Mar 2007
Posts: 407
|
Quote:
Reviewing Srestore, I am sorry to say that although you value it, which I respect, I see it using risky methods, and it might be more unreliable than you guess.

(1) The UPN expression in line 59 is buggy; it evaluates only to "x 128 - 2 ^ x 128 - 2 ^ y 128 - 2 ^ + / 255 *", while "x y - == 128 ..." has no effect on the result. This might make bmask, and hence pmask, pp2 and pp3, invalid, if the original intention of the expression is no longer fulfilled.

(2) Lines 155...173 and 207...210 use Select() statements inside the RTE. I found this to be unreliable in a non-reproducible manner (one day it worked, the next day it wouldn't) and do not use it any more. Wrong selections might go unnoticed.

(3) Line 232 makes a timeline manipulation from the RTE. You did not use decimation with the example, I understand, but you still used replacement, i.e. either a Select() or the Trim() is definitely effective, and the above examples show that the results of timeline manipulation from the RTE can get seemingly unpredictable.

EDIT: Since I identified DirectShowSource as the culprit (I keep forgetting how carefully it must be used), I'm wrong about (3). Maybe an AviSynth version update you made has a side effect on (2) or (3); you might re-check with already successfully processed clips. When I had your clip open in AvsPmod today with the above timeline example, the result was completely unreasonable. And I experienced the same thing with (2) last autumn: suddenly there was a moment when these Select() lines started to make problems, and I never identified the cause.

Finally, you use MaskTools 2.0.36.0. You might consider updating to 2.0 alpha 48, although the changelog does not explicitly state a fix related to this thread.

Last edited by martin53; 19th October 2013 at 21:23. |
|
19th October 2013, 20:49 | #12 | Link | |
Registered User
Join Date: Mar 2007
Posts: 407
|
Quote:
Therefore, in the above example I used the next deduction from your idea: a blended frame should differ less from its adjacent frames than a non-blended one. The RT_LumaCorrelation() function directly gives a high result if two frames are similar. That's why I added RT_LumaCorrelation() between the current and previous frame, and between the current and next frame, to create the "blended" measures var1...var5. Typically, I assume there's no need for subtracting frames etc., since RT_LumaCorrelation() is available. Before, a typical solution would have been 'AverageLuma(Overlay(mode="difference"))'. Probably, the above example is almost all you need (except for your mpeg2source ...) |
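RT_LumaCorrelation's exact metric is internal to RT_Stats, but the principle can be illustrated with ordinary Pearson correlation on toy "frames" (a hedged sketch, not the plugin's actual computation): a blend correlates more strongly with each of its source frames than those frames correlate with each other, so the var3-style measure peaks at the blend.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Toy 5-pixel frames A and B, and their 50/50 blend
A = [10, 40, 80, 120, 30]
B = [90, 20, 60, 10, 70]
blend_AB = [(x + y) / 2 for x, y in zip(A, B)]

# The blend sits "between" its sources, so it correlates better with
# each of them than the two dissimilar sources do with each other
print(pearson(A, blend_AB), pearson(B, blend_AB), pearson(A, B))
```

On real footage the margins are far smaller than in this toy, which is presumably why gate weave and contrast problems can push the detection over the wrong threshold.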
|
19th October 2013, 20:55 | #13 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Quote:
Seriously, part of the plan I'm evolving, which is based on that thread I linked to, is to decimate the worst (or best) "n out of m" frames, similar to Cycle/CycleR in TDecimate. That would remove double blends. However, I'd then be left with a jump that I'd have to fill in with FillDrops() or something similar. Another possibility is to just leave one of the blends. The main part of my restoration is to adjust the bad levels, but I was hoping to also use my version of VideoFred's scripts to further improve the film. That of course requires having a clean progressive source, but the occasional blended frame may not screw things up too badly. |
|
19th October 2013, 22:09 | #14 | Link | ||
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Quote:
Quote:
I do have one other way to proceed. Years ago I developed a process for decimating the pulldown frame that results from capturing film from a shutterless movie projector, using a 1/1000-second shutter speed on a video camera. I did this by capturing all the MIC parameters from TFM in pass 1; putting these into a spreadsheet, where I developed logic that analyzes these parameters, looking both backwards and forwards for several fields in each direction; and then writing Excel macros that export parameters that are used by MultiDecimate and fed into TDecimate during pass 2. I've used this for years and it works flawlessly. I may be able to adapt that process to this situation. |
||
19th October 2013, 23:43 | #15 | Link | |
Avisynth language lover
Join Date: Dec 2007
Location: Spain
Posts: 3,431
|
Quote:
Select() is such a simple function it's hard to imagine it going wrong, either inside or outside the RTE. Regarding strange results with SRestore(), note that it cannot be called more than once in a script, and I figured it can go unpredictably wrong if you use certain variable names in your script. See this post. |
|
20th October 2013, 15:01 | #16 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
@Gavino,
I will be pleased to report, but it's improbable. I was working on that frame-substitution script when it happened. You can find the essential info in the changelog and the fSelectFromStr() function. But it's usually not reasonable to post such a script the day you observe the problem; the 'bug environment' is just too confusing and complex.

@johnmeyer, this time it seems it's not Select(). I can confirm that Srestore() replaces many frames with your settings, making the movie jumpy instead of deblending properly. I made an output file, then changed Srestore to avoid the Select() function (see attachment) and made another output file. Both are 100% identical. Changing the thresholds also didn't help, btw. |
20th October 2013, 16:32 | #17 | Link | |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Quote:
I'll try a few more things tomorrow, but it looks like this particular decimation problem just can't be solved. |
|
20th October 2013, 20:16 | #18 | Link |
Registered User
Join Date: Mar 2007
Posts: 407
|
Investigation results:
When an output frame is replaced, it is always replaced with the previous frame. That comes from line #232, 'oclp.loop(1-opos,0,0)'. opos is -1 because of line #231, 'opos=opos+dup...'; dup is -1 because of line #224, 'dup= ... bbool==false ? 0 : ... om==1?-1:1'; and bbool is calculated in line #177, 'bbool = .8*bc*cb>bb*cc && .8*bc*cn>bn*cc && bc*bc>cc'. E.g. bc means blended clip, current frame. So a comparison is made whether 0.8 times the product of the current blended and previous clear clip frames is bigger than the product of the previous blended and current clear clip frames, etc.

Frame #6 is the first replaced frame. It has a 'blended' value of 5, while the surrounding frames have 0.125, and a 'clear' value of 12, the previous frames having 62 and the following also 12. The next replaced frame, #8, again has a maximum of its surrounding 'blended' values, 4 between 0.125 and 3, and its 'clear' value is a minimum of 4, following a 12, with the next being 25. The 'blended' values are 128 minus the minimum luma of a metrics clip 'bclp'; the 'clear' values are the maximum luma minus 128 (lines #135ff). So, for the replaced frames, this clip has a high minimum luma and a low maximum luma, i.e. little contrast.

'bclp' is defined in line #47, 'bclp = ... mt_lutxy(diff,diff.trim(1,0),expr=code0 ...)' - meaning an expression 'code0' is calculated over the current and next frame of clip 'diff'. 'diff' is a mt_makediff() of the input clip's current and next frame. It is reasonable that current and next frames are evaluated twice, because the results are not fed directly into the abovementioned 'bc' etc. variables, but into the variables for later frames, and then shifted through while current_frame advances.

So, 'code0' deserves some attention. It is a UPN string (line #44). "RT_Debug(mt_infix(code0))" helps to translate it:

"((x-128)*(y-128)>0 ? (abs(x-128)<abs(y-128) ? (x-128)*(128-x) : (y-128)*(128-y)) : (x+y-256)*(x+y-256))*0.25 + 128"

Interpretation: if diff and diff.Trim(1,0) are both brighter or both darker than the average luma value of 128, then the negative square of the less contrasty pixel is used (the distance from 128 is squared). Otherwise, i.e. if diff and diff.Trim(1,0) have opposite distances from 128, the square of the sum of their distances is calculated. Because later the maximum over the frame is used to determine the 'clarity', and the minimum to determine the 'blendedness', the blendedness comes from the pixel where both frames are bright or dark and one is close to 128, while the clarity comes from the pixel where they differ in sign and amplitude.

If diff were the clip itself, this would look similar to frame-difference metrics I have created over the months. But the input clip for the expression is not the clip itself, but already a mt_makediff(). This makes for a second derivative over time, which I can't explain. To have a look at bclp, the return line from post #1 can be replaced with 'StackHorizontal(source,bclp.BicubicResize(source.width,source.height))'. I don't feel qualified to interpret it.

As a conclusion, I see no general defect in the code of the function, but some assumptions are made which may or may not apply to certain footage.

P.S. For the investigations, I inserted 'ScriptClip("""last.subtitle(String(current_frame), size=9)""")' before the Srestore call, to see the original frame number while minimizing confusion of the script by frame-number text. |
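The translated infix expression can be transcribed pixel-wise to check the interpretation above. This is a straightforward Python transcription of the mt_infix() output, assuming that translation is accurate; it is not srestore's actual code path:

```python
def code0(x, y):
    """Per-pixel transcription of srestore's 'code0' expression, as
    rendered in infix form: x and y are the same pixel in diff and
    diff.Trim(1,0), both centered on mid-grey 128."""
    dx, dy = x - 128, y - 128
    if dx * dy > 0:
        # same side of 128: negative square of the less contrasty pixel
        inner = -dx * dx if abs(dx) < abs(dy) else -dy * dy
    else:
        # opposite sides of 128: square of the summed distances
        inner = (dx + dy) ** 2
    return inner * 0.25 + 128

# Same-side pixels with little contrast land just below 128 (these feed
# the frame minimum, i.e. the 'blendedness' measure); opposite-side
# pixels land above 128 (feeding the maximum, i.e. 'clarity')
print(code0(130, 140))  # -> 127.0
print(code0(100, 160))  # -> 132.0
```

Note that since an 8-bit mt_lutxy result is clipped to 0...255, large same-side or opposite-side differences saturate the measure, which may be part of why the metric misbehaves on low-contrast footage.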
20th October 2013, 20:34 | #19 | Link |
Registered User
Join Date: Feb 2002
Location: California
Posts: 2,691
|
Your description is very useful to me, because there are a few code constructs in the SRestore.avs script with which I am not familiar, and therefore I cannot quite follow what the script is doing.
If I can understand it, perhaps I can patch it to work with this source. |
21st October 2013, 03:25 | #20 | Link |
HeartlessS Usurer
Join Date: Dec 2009
Location: Over the rainbow
Posts: 10,980
|
M53, great respect for deciding to dissect srestore. I had a look and found it less than obvious to understand, and there is an awful lot of it.
It would have been nice if the original authors had tried less hard to keep its operation a secret. I did not totally understand your description either, but that is understandable; it is a somewhat complex script. When you sort it out, could you please give a rendition with fuller variable names? Any kind of additional comments on the source would also be greatly appreciated. Even if not, what you have done already is a great help to, I'm guessing, a lot of people. Thanks very much.