10th April 2004, 02:46   #1
DDogg (Retired, but still around)

Taking the dumbness out of CCE multipass - Revised 6

Summary - (see notes 1 & 2 below)

The material below covers two areas.

1> How the complexity, and thus the compressibility, of a video source can be quantified via the numerical bitrate derived from the physical filesize of a 1% sample, where the sample has been created at a known Q value with CCE OnePassVbr (note 3).

Q values in the range of 10-40 would normally be considered the working range, providing excellent to acceptable quality. The goal is for the user to develop a personal quality constant that is always appropriate for the user's display methods, and to be able to know, in advance of encoding, whether that level of quality can be achieved. My personal 'quality constant' is Q28.


2> How to provide simple logic so that a program or user can know, in advance, the approximate quality that will be delivered by a CCE multipass encode, before actually doing the encode.

These goals, and the suggested method to achieve them, are based upon certain educated guesses Bach made, since he did not have access to the CCE source code (note 2 below). From a practical standpoint, and at the very least, the method should provide the user with a very good quality indication before the full encoding process.

Explanation -

This average bitrate number derived from the filesize of the 1% OPV sample is the minimum needed to contain the complexity of the sampled video source and closely deliver the same quality as the OPV sample. This may seem obvious, yet the important fact is that the complexity, and thus the compressibility, of the sampled video source can be, and has been, quantified in a number: the actual bit size of the sample file. This size in bits can then be translated to an average bitrate in kbps, which is compatible with CCE VBR and perhaps other encoders. This derived number will be referred to as DABR in the rest of the post.

This number, the derived average bitrate (DABR), can be used as a bridge between the two different encoding methods (OPV and VBR), each of which has distinct strengths and weaknesses when standing alone. By bridging these methods we can combine many of the strengths of both.

As a proof exercise, a CCE multipass VBR encode using this DABR kbps number and the same MIN/MAX very closely mirrors the encode produced by OPV using the corresponding Q number. This is shown in the attached spreadsheet, linked here and attached at the bottom of this post. The accuracy of the translation between Q and DABR expressed as kbps seems clear.

Now, with a working method of translation, we can combine the strengths of both processes for certain situations.

One situation is where multipass is specified as the encoder tool because of its ease of use, and/or the need for its main strength of creating a final file size exactly on target. While multipass VBR is brilliant at achieving a size target by the nature of its size-based process, it has an inherent weakness: it possesses no internal process that allows it to recognize when the ABR specified by the user will not be high enough to yield an acceptable encode quality. Thus, the final quality of the encode is unknown until after the encode is finished (many hours later).

OPV has the opposite characteristics, in that quality, via a controlled quantizer approach, is achieved at the cost of nonexistent size control. OPV is one of the kings of quality and also the fastest software MPEG-2 encoding method available, but without accurate filesize prediction OPV was only useful for encodes to be kept on a hard drive, since source complexity solely dictates the bitrate. The final encode size can fluctuate radically depending on the unique compression characteristics of the individual source, even between two sources with the same running time and the same total number of frames. Thus, the final size of the encode is unknown until after the encode is finished.

Predictive sampling, sometimes called 1-PassBach, or its iteration 2-PassBach, best exemplified by the D2SRoBa plugin used in combination with Dvd2Svcd/Dvd, was introduced to deal with this weakness in OPV. It uses multiple small samples and solves for the Q which will achieve a target size. Because it solves for Q, which indirectly manipulates bitrate, it effectively provides size control as well as the ability to predict the final quality before the actual encoding process. It does this brilliantly and yields an HQ encode in the shortest time possible. However, conditional 2-Pass Bach as implemented by D2SRoBa took a long time to develop and quietly performs many complex actions not readily apparent to the user. The series of prediction samples, and the Newton-Raphson method calculus (adapted by R6D2) used by D2SRoBa, are complicated to implement. This can be beyond the scope of many casual hobby programmers, or of manual encoding techniques.
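
To illustrate the idea only: D2SRoBa's actual model is not public, so everything below is a hypothetical sketch, including the toy size model. The Newton-Raphson loop solves size(Q) = target by re-sampling at adjusted Q values, using a numerical slope in place of a known derivative.

Code:
# Hedged sketch of the Newton-Raphson idea behind predictive sampling.
# encode_sample(q) is a hypothetical stand-in for "encode a 1% sample at
# quantizer q and scale its filesize up to a full-encode estimate".
def solve_q(encode_sample, target_bytes, q=28.0, tol=0.01, max_iter=5):
    for _ in range(max_iter):
        size = encode_sample(q)
        if abs(size - target_bytes) / target_bytes < tol:
            break  # projected size is within tolerance of the target
        dq = 1.0
        slope = (encode_sample(q + dq) - size) / dq  # numerical d(size)/dQ
        q -= (size - target_bytes) / slope           # Newton-Raphson step
    return q

# toy stand-in model for demonstration only (size falls as Q rises)
def demo(q):
    return 6_000_000_000 * (20.0 / q)

print(round(solve_q(demo, 4_500_000_000), 2))   # converges near Q 26.6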

Just to clarify, this post is not about using OPV, or any 1-pass method from any encoder, to do the final encode. The above paragraph is provided for background. This post is about using only one small and quickly created 1-pass sample to help us lift the source's bitrate 'fingerprint' (DABR). This will show us how to identify, reasonably accurately, the true bitrate required to hold a certain perceived quality, in advance of actually doing the full encode. Further, it will help us in any situation where we wish multiple encodes to be placed on one media disk and wish the viewer to enjoy approximately the same quality perception from all encodes on that one disk.

In this situation, the derived average bitrate of a known Q sample, as described above, can be used to translate between the two processes. Because the OPV sampling process has actually encoded 1% of the video source as small, equally spaced sections, the derived bitrate generated from the sample size inherently contains an awareness of the compressibility of the source. Thus, the DABR of the known Q sample is the minimum bitrate that will be needed to achieve a similar quality in the bitrate-based multipass VBR encode. In effect, the process matches the distinct compression characteristics, which are unique to the source complexity, to the appropriate average bitrate needed to hold and contain our desired quality. In addition, the identified DABR (as kbps) will also provide the framework for creating branching program logic.

The derived Kbps value is used to establish a quality 'GO/NO-GO' conditional.

To do this, an extra step is added when using conventional multipass.

Normally an average bitrate would be calculated to fill the space available and the encode started, whether or not that bitrate was adequate to achieve an acceptable quality level for the particular compression needs of the source. There is not much difference in this method, except that before starting the encode, a comparison is made between the kbps calculated to fill the available space and the bitrate derived from the known-quality sample.

GO - When the size-calculated bitrate is equal to or larger than the derived bitrate from the sample -

The encode is started as normal. This will yield a minimum guaranteed level of known quality within a known filesize. Note: if more space is available for the encode than the minimum requirement predicted by the sample DABR, the size-calculated ABR will of course be larger than the sample's DABR. The quality will therefore also increase over that of the known-quality sample.

NO-GO - When the size-calculated bitrate is less than the derived bitrate from the sample -

This allows the programmer or manual user to detect, in advance of the encode, that a known quality cannot be achieved within a known filesize. Additional logic can then be triggered to deal with this situation, such as resolution adjustment, compression filtering, downsizing of audio, etc., to make more space available for the encode.
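
A minimal sketch of that branch, assuming the DABR is already in hand; the function names, the 192 kbps audio figure and the example numbers are mine, purely illustrative:

Code:
def size_calculated_abr(space_bytes, seconds, audio_kbps=0):
    # average video bitrate (kbps) that exactly fills the available space
    return (space_bytes * 8 / 1000) / seconds - audio_kbps

def go_no_go(space_bytes, seconds, dabr_kbps, audio_kbps=192):
    abr = size_calculated_abr(space_bytes, seconds, audio_kbps)
    if abr >= dabr_kbps:
        return "GO", abr    # encode as normal; quality >= the known Q sample
    return "NO-GO", abr     # trigger extra logic: crop, filter, smaller audio...

# e.g. ~4.35 GB free for a 2-hour movie whose Q28 sample gave a DABR of 3884
print(go_no_go(4350 * 1024 * 1024, 7200, 3884))   # -> ('GO', ~4876)
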
---------------------------------------------------

Note 1: This is a work in progress, a self-learning exercise, and is based upon the original propositions posted by Bach and many subsequent revisions by the members of this community, especially those that relate to what we now call 1-Pass and 2-Pass Bach as used in D2SRoBa. This post targets how to apply some of these OPV techniques to CCE multipass VBR. I highly doubt any of the information here is original, but I do hope this compilation may provide a useful method for those who wish to use CCE multipass and would like to know general quality information in advance of the actual encoding process. Also, I may not have been precise in the use of some of the technical terms, but they are accurate enough for now.

Note 2: This line from the original post is an important part of the assumptions here. However, it was never independently verified as correct, hence the word assumption [all emphasis is mine]. Originally posted by Bach - "Attention that the Q.factor is an exclusive concept of CCE, directly related with the quantisation, but it does not mean QM nor S, and it is not very clear the way that it is calculated. Based in the results of my tests, I believe that Q.factor is in some way proportional to the average value of the product QM(i, j)*S. Anyway, the Q.factor is the direct measure of the quality to be gotten so that, if two different films are recompressed using the same settings and the same Q.factor, can be guaranteed that they have the same quality."

Note 3: The sampling method is only available when using AviSynth to frameserve video to CCE OnePassVbr (OPV). It assumes the reader understands the use of AviSynth and is capable of editing an AVS script, loading the script into CCE, and choosing OPV mode and a Q value. To produce an AVS for creating the sample, simply add this line at the bottom of your working script: SelectRangeEvery(1200,12). Obviously, this line must be removed after the sample has been created.

You will need the derived bitrate spreadsheet calculator, linked here, or attached to the NEXT post below, OR use this formula: (((Sample_Size_In_Bytes*(100/Sample_Size_Percentage))*8)/1000)/(Total_Frames/Frame_Rate)
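
The same formula as a few lines of Python, in case a spreadsheet is not handy (a literal translation; the example numbers are illustrative only):

Code:
def derived_abr_kbps(sample_size_bytes, sample_pct, total_frames, frame_rate):
    # scale the sample up to a full-size estimate, convert bytes to kilobits,
    # then divide by the running time in seconds
    full_size_kbits = sample_size_bytes * (100 / sample_pct) * 8 / 1000
    return full_size_kbits / (total_frames / frame_rate)

# e.g. a 20 MB sample covering 1% of a 23.976 fps, 170,000-frame movie
print(round(derived_abr_kbps(20_000_000, 1, 170_000, 23.976)))   # ~2257 kbps
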
Attached Files
File Type: zip opvq2.zip (6.0 KB, 2541 views)

Last edited by DDogg; 8th October 2006 at 13:20.
14th April 2004, 20:27   #2
DDogg (Retired, but still around)

Derived bitrate calculator sheet attached.
Attached Files
File Type: zip kbps_frm_filesize.zip (2.7 KB, 2954 views)
19th April 2004, 21:09   #3
Crackhead (silent, but around)

It seems that I'm missing something here!
But isn't the method you are explaining exactly the same as what D2SRoBa is doing in its auto-mode? (with conditional sizing pass on and no limits set)
Please don't take offence at that, but I think someone who is going for multipass is primarily interested in size! (And of course in getting the best quality the given size allows by doing multiple passes.) For sure you can say "then they don't know the (numerical) 'quality' of the encode", which is absolutely right. However, a "multipasser" will more or less know how many CDs he must use to get a satisfying result!

Nevertheless, I have to say this method allows a lot of flexibility in the end, especially when you go for low-bitrate encodes or 3-4 movies per DVD! On the other hand, D2SRoBa/DVD2SVCD is such a powerful combination, which can do all this automatically, that I think I will stick to that! (when using CCE, since I am going to have a look at mencoder/ffvfw)

Greetz, Crackhead
19th April 2004, 22:01   #4
DDogg (Retired, but still around)

Quote:
Originally posted by Crackhead - But isn't the method you are explaining exactly the same as what D2SRoBa is doing in its auto-mode? (with conditional sizing pass on and no limits set)
No, although they are both from the same root "Bach" process. A single 1% sample is used in the above alternative process, rather than the multiple samples used in D2SRoBa for size prediction, and most importantly CCE multipass is used as the encoding method, so it should be clear that they are quite different. [edited]

This was primarily a project targeted at some other programs that needed to use multipass to fit multiple encodes on one disk, but also needed to monitor quality before the encode was started. A simple way to create a "GO - NO/GO" was needed without the complexity of the size-convergence sampling of D2SRoBa. Remember, tylo had to spend a very long time, and I am sure quite a few gray hairs, on D2SRoBa to make it what it is today.

Also, this may be a way to provide a "quality monitor" as a possible future enhancement for D2S that would not require a large effort to implement.

As well, non-DVD2SVCD/DVD users, who do not have the great luxury of D2SRoBa, might also find it helpful.

Being located in the DVD2SVCD advanced forum might make it confusing. As a mod for this forum, I could more easily add the attachments and work on it at my leisure.


Last edited by DDogg; 5th May 2004 at 23:00.
24th April 2004, 12:29   #5
Kedirekin

I haven't had a chance to test this yet; I don't have time in the evenings, and my encoding/burning PC is busy on other things for the next few hours. I know the test itself won't take long, but pre-work (ripping a PGC from a DVD, creating a D2V file) stops me till the PC is more idle. Hopefully I'll get a chance to try it out later today (have to mow the lawn, change my oil and do the dishes and laundry first though).

That being said, I don't think trying it out really contributes much to this conversation. I'd have to try it on dozens (maybe hundreds) of disks, along with encoding the disks and viewing the encodes, to make any generalized empirical statements. For now theoretical discussion will have to suffice.

I've had time to think about it, and I don't see any reason why it won't work. It may not be universally applicable, but I think it will produce the intended results (a prediction of the size needed to achieve a desired quality) in the vast majority of cases - enough that you probably don't need to worry about exceptions.

It does have limited applicability though (as you've stated). I think it's important that people realize that. It all depends on your goal in backing up your disks.

If your goal is to see if you can fit 2 or 3 movie-only encodes on one DVDR, then this analysis technique has obvious appeal.

If your goal is one-to-one single DVDR backup (movie only or full disk) with maximum quality (accepting that maximum quality may not be that good, but single disk is mandatory), then analysis won't make any difference. You're best served to do a VBR regardless.

If your goal is like mine - a full disk backup where the decision is to transcode, encode, or split the disk - then there is some appeal, but it is limited. The analysis really only applies in a medium-to-high compression scenario, when trying to decide if you should attempt an encode or just split the disk. Considering my laziness (or, as I prefer to put it, how highly I value my free time), and the fact that I prefer full disk backups (which would require analysis of several, perhaps many, PGCs), I'd probably be better served to just re-encode the disk and see how it turns out. I mean no offense by this; that's just how I see it (of course, if the analysis were automated, that would be different).
24th April 2004, 22:21   #6
DDogg (Retired, but still around)

Kedirekin, as always I appreciate your well thought out comments and welcome the chance to discuss them with you. One of the important ones is: what IS the appropriate self-flagellation for forgetting to mow my already long-overdue lawn yesterday before the 4-inch rain came? Arghh! Ah, not important, my wife will offer a suggestion for sure
Quote:
If your goal is one-to-one single DVDR backup (movie only or full disk) with maximum quality (accepting that maximum quality may not be that good, but single disk is mandatory), then analysis won't make any difference. You're best served to do a VBR regardless.
Erm, I guess that depends upon whether you are a purist or just like good quality encodes. Knowing when to crop overscan, change resolution to 704, or lightly filter can make the difference in whether the encode is worth saving. Since you mentioned using time wisely, I think spending under 10 minutes to do this is time well spent to save hours of wasted effort.
Quote:
of course, if the analysis were automated, that would be different
Yup, that is the ultimate goal, and a good part of what this working post was about. Somewhere down the road I think all this stuff will be completely automated. Think of all the extra time we will have for mowing and car washing...

Last edited by DDogg; 24th April 2004 at 22:24.
25th April 2004, 02:36   #7
Kedirekin

Jeez. Where did the day go? I didn't get the lawn mowed (it started raining here too) and I only got the oil drained (but that will at least force me to finish the job tomorrow).

Well, at least I managed to try your analysis trick, despite some completely baffling issues opening avs files in CCE.

I encoded Pokemon 5 (it was handy) OPV with a Q of 28, plugged the numbers into the KBPS_frm_FileSize spreadsheet (thank god for OpenOffice), and it told me I'd need a bitrate of 3884. That's quite a bit higher than I expected for that Q level - almost twice what I expected, in fact.

I think I'll try encoding it OPV with 28 and see how it turns out.

Some other things that have occurred to me:
  • you'd probably want to match your SelectRangeEvery to your GOP size (1200,12 for 3/4; 1500,15 for 3/5). Doing otherwise would probably induce CCE to introduce too many scene-change detections, distorting the analysis
  • have you thought about the effect of scene changes that happen to fall in these 12-frame samplets? Do you think they would match the effect of scene changes in the full encode? I find this a bit of a mind twister - do 10 short GOPs in a 1% sample have the same effect as 1000 short GOPs in a full encode?
  • Heck, what effect will 12-frame samplets have? I believe each samplet will result in a closed GOP, which will again distort the analysis (not sure how much, though)
  • The samplets are ½ second, with 50 seconds between them. Not sure where I'm going with this, but I have a nagging feeling in the back of my mind that there is some fairly common scenario where the samplets will miss something and produce misleading results. (The sampling arithmetic itself is easy to check; see the sketch below.)
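
For reference, a throwaway sketch (mine, not from any tool) that characterizes a SelectRangeEvery(every, length) sample; it shows why 1200,12 and 1500,15 both give a 1% sample of half-second samplets tracking a GOP of N=12 or N=15:

Code:
def sample_params(every, length, fps=23.976):
    # characterize a SelectRangeEvery(every, length) sample
    return {
        "sample_pct": 100 * length / every,     # 1200,12 and 1500,15 are both 1.0
        "samplet_seconds": length / fps,        # ~0.5 s per samplet
        "gap_seconds": (every - length) / fps,  # ~49.5 s between samplets
    }

print(sample_params(1200, 12))   # GOP N=12 (M=3, N/M=4)
print(sample_params(1500, 15))   # GOP N=15 (M=3, N/M=5)
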
BTW: we have different definitions of wasted effort. I'll gladly let my PC encode overnight if it will save me 10 minutes of work time. The cost of my personal free time figures heavily in my gestalt.
25th April 2004, 05:54   #8
DDogg (Retired, but still around)

Just try it some; I assure you it does the job well. That sample line comes directly from D2SRoBa, which has been consistently accurate to within 1 to 2%. It is extremely well tested and I never argue with that kind of success; it's way beyond my paygrade

Good point about the GOP 3/4. I should have mentioned that all the test results in the proof spreadsheet were based upon M=3 and N/M=4, which is all I ever use.

Last edited by DDogg; 26th April 2004 at 01:50.
25th April 2004, 13:39   #9
Kedirekin

I looked at the OPV Q28 encode this morning. I must say, on casual perusal it looks pretty good (of course, at 3884 I would expect it to).

Say, am I in for a rude awakening? Are DVDs wildly erratic in the bitrate they require?

My past experience with action flicks and anime is that DVD resolution typically needs a 3500 ABR to get good results, but your mention in the other thread that Spirited Away needs only 1873 really threw me. Guess I'll have to grab my copy of Spirited Away and see for myself, if I can just find the time (jeez, and I want to try DVDReBuilder too - there's never enough time).
25th April 2004, 19:46   #10
DDogg (Retired, but still around)

Quote:
I looked at the OPV Q28 encode this morning. I must say, on casual perusal it looks pretty good (of course, at 3884 I would expect it to).
Could you run that Q28 encode through Bitrate Viewer and verify the average quantization? It should be under 5, probably more like 4.5 to 4.8. I still need to get my arms around the direct relationship between OPV's use of the "Q" term and the resulting actual AQuant.
Quote:
Say, am I in for a rude awakening? Are DVDs wildly erratic in the bitrate they require?
Don't know if it will be rude :-) Maybe more like a friendly eye opener, like a slug of 12 year old Jameson in your coffee. I seem to always just 'see' better after one of those :-)

Speaking of eye openers, here are a few more:

ItalianJob_16X9_23.976_Q28_FR_159006_Tm_1.50.25_DABR_2691.mpv
ISpy_____16X9_23.976_Q28_FR_138965_Tm_1.36.30_DABR_3610.mpv

Note the first requires much less ABR to achieve Q28 and is actually longer. A few more:

MatrixR_16x9_23.976_Q28_FR_185867_Tm_2.09.04_DABR_2652.mpv
XMen__16x9_23.976_Q28_FR_150067_Tm_1.44.12_DABR_1933.mpv
SmthnGottaGive_16x9_23.976_Q28_FR_184357_Tm_2.08.01_DABR_3147.mpv
Quote:
..., but your mention in the other thread that Spirited Away needs only 1873 really threw me. Guess I'll have to grab my copy of Spirited Away and see for myself ...
That would be highly appreciated if you can find the time. You reminded me that the 1873 ABR figure was based on a script I used that was 704x480, with overscan reduction. I re-sampled using full res and it shows a DABR of 2078. You should get this same figure.

If you have the playtime, do a quick sample of the same after adding a compression filter like Undot().Deen(). I think you should see a reduction in the DABR of around 15 to 25% [maybe not on anime]. Not to worry, I know it is a religious purist issue. Just mentioning it because it illustrates the point so clearly and it only takes a couple of minutes to do.

Last edited by DDogg; 26th April 2004 at 17:43.
1st May 2004, 13:21   #11
Kedirekin

I finished encoding Spirited Away Thursday morning. Sorry I didn't have time for more (and unfortunately I'm looking at another busy weekend - no time to try DVDReBuilder [sigh]).

I did a variety of test analyses with different settings. As you mentioned (and as expected), reducing the resolution and using a noise filter (I just enabled CCE's filter as a quick test) reduces the required bitrate. I got the derived bitrate all the way down to 1900 - that's pretty close to the 1873 you reported. One analysis I didn't try, and want to, is seeing what difference bilinear resizing makes as compared to bicubic(0.0,0.6).

I did an encode at 704x480, Q=28, with no filtering. The predicted bitrate (based on the 1% sample) was 2042 kbps. The full encode's bitrate was 2106 kbps (2,156,833 bps - I calculated this manually from the file size: 2015560632 bytes / 7476 sec).

Bitrate Viewer reported the peak quant at 6 with an average of 1. It reported an average bitrate of 2546, though (is there something wrong with my calculation?). [I've never really trusted Bitrate Viewer.]
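
One way to reconcile the two hand-derived figures (an assumption about the units, not something verified in the thread): 2,156,833 bps is about 2157 kbps with a decimal kilo, but about 2106 "kbps" with a 1024 divisor, so the 2106 and 2,156,833 numbers agree with each other. Bitrate Viewer's 2546 is a separate question.

Code:
size_bytes, seconds = 2015560632, 7476
bps = size_bytes * 8 / seconds   # ~2,156,833 bits/sec, as reported
print(bps / 1000)                # ~2156.8 kbps (decimal kilo)
print(bps / 1024)                # ~2106.3 "kbps" (binary kilo) - the 2106 figure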

I previewed this in a software player and it looked very good. I couldn't see any macroblock noise, and there was just a hint of ringing - not enough that I'd ever be able to see it on my TV (36" Sony XBR Trinitron). The results were much better than I would have expected at 2100 kbps, even for a movie that's relatively short on high-motion scenes. As a point of interest, I checked the original to see if ringing was present (to see if the ringing was the result of re-encoding). There was some ringing in the original, but it was more pronounced in the re-encode. Again, the ringing was not very significant - just mentioning it for completeness.

I think you've made a convert of me. Once you get good size predictability, OPV is just a better way to encode. It's faster, gives acceptable quality, and final file size is no longer an issue.

However, I'll probably keep VBR on hand when I'm pushing the envelope (trying to get a file size more than 5-15% smaller than that predicted for Q=28). Under those circumstances, I think VBR will produce results as good as the optimum Q OPV would, and is less time consuming (in terms of working time) than testing different Q levels to find the optimum.

Now I just need to start using your analysis approach on work-a-day backups. I think that's the only way I'll have time to really "test" it. And I suspect before I can do that, I need to start using DVDReBuilder - if I don't have time to even try ReBuilder, I'm sure you can guess how much time I have to use the big 3.
1st May 2004, 16:11   #12
DDogg (Retired, but still around)

Quote:
One analysis I didn't try, and want to, is seeing what difference bilinear resizing makes as compared to bicubic(0.0,0.6).
You will be able to quantify resizing filters very easily, as well as many other very subtle changes. Even something like DC precision shows up readily in the results. Matrices are also very easy to quantify in regard to their effect on bitrate. That, coupled with a good subjective eyeball, brings different matrices into a much clearer perspective. One I use quite often is one I call Bach1. I'll attach it in two formats if you wish to try it. I would be most interested in what your eye reports, if you have the time sometime in the future.
Quote:
However, I'll probably keep VBR on hand when I'm pushing the envelope ...
I guess we kinda diverged from the original topic, as we have been discussing OPV encodes, so just to clarify for anybody reading: the method mentioned at the top is about using multipass for the encode but using one OPV sample to give additional information about the compressibility of the source to avoid wasted time - thus the title of this thread.

To get back to your comment, one of the areas badly needing quantification is whether a 'hybrid' 2-pass using OPV as the first pass (which also creates a VAF), followed by a bitrate-based VBR pass (using the VAF from the OPV), is superior, inferior, or equal to a 2- or 3-pass conventional multipass in 'pushed' bitrate situations like you mention. I'm thinking this is a very important area to identify. It is difficult for me because I can't find decent tools to help me understand what I am seeing. As you said, BRViewer is a little suspect sometimes.

I suppose the best way is to use the inbuilt ability of CCE to analyze the VAF via the "Bitrate Allocation" button. To do this properly requires you to derive the bitrate of the OPV-created mpv, load the OPV project in CCE, shift to multipass in the dropdown, and enter the DABR into multipass as the average bitrate, with the same Min/Max used in OPV, before using the bitrate allocation button.

Gee, it's raining in Texas ... I guess the lawn has to wait again

/Add: A couple of things to add. One is that tylo has 'bottled' D2SRoBa in a small app that may be called "FastCCE" when he releases it. It takes a standard multipass ECL as its argument. That would allow the huge amount of work and research in D2SRoBa to be transparently added to any program that uses standard multipass as part of its routine. In fact, it is easily used in a desktop batch file where you can just drag your standard multipass ECL onto it, and away it goes, doing the 1PassBach process with the conditional 2nd VBR pass for exact sizing. I've asked tylo to consider adding 'droplet' functionality and hope he will agree, as it makes a complex process a no-brainer for anybody to use.

One other thing for the record: you mentioned something to the effect that this was my process. All the stuff above is just a compilation of others' work. There may be some uniqueness in using it with multipass, but it is a trivial connection on my part at best.
Attached Files
File Type: zip bach_matrix.zip (390 Bytes, 977 views)

Last edited by DDogg; 1st May 2004 at 16:34.
1st May 2004, 16:40   #13
marnum (Danke Joerg!)

I don't know if you know this already, but it's also about filesize prediction and thus may prove useful for you:

[*** link removed by mod ***]

The KVCD formula is supposed to work with a GOP length of 15, but that's not a problem, I guess. Alas, I can't find a link for the "Sampler" AVS plugin, but I have it on my HDD (please PM me if you need it). I used this method myself several times, till I got tired of re-encoding the sample AVS until the filesize was right... nonetheless, the quality IS great

Last edited by DDogg; 1st May 2004 at 17:02.
1st May 2004, 17:06   #14
DDogg (Retired, but still around)

That has nothing to do with CCE, so I have removed the link to avert confusion in this thread.

Last edited by DDogg; 1st May 2004 at 17:09.
2nd May 2004, 13:43   #15
marnum (Danke Joerg!)

I don't understand... you're doing filesize prediction with SelectRangeEvery(), right? The method I mentioned does exactly that, but with an error margin below 1 percent.
2nd May 2004, 18:45   #16
DDogg (Retired, but still around)

Sorry, marnum, I hope I did not upset you. I should have been clearer. I am doing my very best to keep this a simple, internal-to-AviSynth sample method without external DLLs. The main topic of this post is creating a derived bitrate from a CCE OPV sample and using that bitrate in a standard CCE multipass encode. The 1 or 2% accuracy of the SelectRangeEvery(1200,12) statement is more than sufficient for this task, and I did not wish this thread to be confused with other methods, especially ones dealing with TMPGEnc CQ mode. Please start a separate thread dealing with whatever you would like to discuss on the topic. I'll be happy to contribute, if I can, to the topic of your post here in this forum.
2nd May 2004, 18:55   #17
marnum (Danke Joerg!)

No, I'm not that easy to upset
I just thought you were searching for a more precise way of doing filesize prediction. However, I agree that the easy way is the better way - thumbs up for this project
24th May 2004, 02:23   #18
RickA

marnum, sent you a PM.
26th May 2004, 15:50   #19
Boulder (Pig on the wing)

Here's a little spreadsheet that I created, heavily inspired by good ol' DDogg:

http://www.saunalahti.fi/sam08/kbps_calc.rar

It simply calculates the average bitrate for each movie (I assume you put 2-3 video streams per disc) based on compressibility, using CCE's OPV. Here's how it goes:

1) The number of frames per movie is known
2) The framerate is known
3) A 1-3% sample is encoded as DDogg has explained in this thread
4) The avg bitrate for the sample is known (use DDogg's spreadsheet for that or Bitrate Viewer)
5) The avg bitrate is used to calculate how big the file would be with a full encode
6) Total filesize = movie1 + movie2 + movie3
7) The sheet calculates what percentage of the total filesize each movie would be
8) The percentage is used to calculate the amount of megabytes each movie will get
9) The video bitrate is calculated from the amount; audio bitrate is deducted
10) For QCCE users (like me) the sheet also calculates the 'target size', that is, the amount of megabytes the video should take

I used 4350 MB as the free space on the DVD for video, audio and subs, so there's no need to calculate any muxing overhead etc. I've noticed that this leaves only a little bit of space unused. (The whole calculation is sketched in code below.)
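
A rough Python rendering of the steps above (my own sketch against Boulder's description; the spreadsheet remains the reference, and the example movies are invented):

Code:
def allocate(movies, disc_mb=4350):
    # movies: list of (dabr_kbps, total_frames, fps, audio_kbps)
    # step 5: full-encode size (MB) each movie would need at its derived bitrate
    need_mb = [dabr * (frames / fps) / 8 / 1000 for dabr, frames, fps, _ in movies]
    total_mb = sum(need_mb)                     # step 6
    bitrates = []
    for (dabr, frames, fps, audio), mb in zip(movies, need_mb):
        share_mb = disc_mb * mb / total_mb      # steps 7-8: proportional share
        kbps = share_mb * 8 * 1000 / (frames / fps)   # step 9: back to avg bitrate
        bitrates.append(kbps - audio)           # deduct the audio bitrate
    return bitrates

# two invented movies sharing one disc
print(allocate([(3884, 138965, 23.976, 192), (2078, 175000, 23.976, 192)]))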

I'm not 100% sure this works as it should, so feel free to let me know of any problems/errors. I'm only a Pig on the wing, you see