29th June 2009, 08:30   #13
CAFxX
Stray Developer
Quote:
Originally Posted by Dark Shikari
The term you're looking for is "minigop," one set of frames starting at a P-frame, containing X B-frames, and ending before the next P-frame.
Yep, sorry for the misunderstanding.

Quote:
So what you're really trying to say is that your patch allows the number of B-frames to increase by up to 4 between minigops, specifically that if the previous minigop had 4 B-frames, the search for the next one will look for up to 8. If the next one has 2 B-frames, we will decrease the threshold by one to 7 for the next minigop after that.

In other words, you predict that the current minigop will probably never need more than 4 B-frames more than the previous minigop.
Yes, that's the idea. Moreover, the 4-B-frame overhead is somewhat arbitrary; ideally it should be exposed as a parameter.
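
To make that concrete, here's a rough sketch of the update rule as I understand it (the helper and its names are mine, not what's in the actual patch; B_OVERHEAD stands for the arbitrary 4 above and would ideally become the exposed parameter):

Code:
#define B_OVERHEAD 4  /* hypothetical name for the arbitrary 4-frame slack */

/* Cap on the B-frames searched in the next minigop: allow up to B_OVERHEAD
 * more than the previous minigop actually used, but never lower the cap by
 * more than one step per minigop. */
static int update_bframe_cap( int prev_cap, int bframes_used, int global_max )
{
    int cap = bframes_used + B_OVERHEAD;
    if( cap < prev_cap - 1 )
        cap = prev_cap - 1;   /* slow decay, as in the 8 -> 7 example */
    if( cap > global_max )
        cap = global_max;     /* never exceed --bframes */
    return cap;
}

With the previous minigop using 4 B-frames the cap becomes 8; if the next one only uses 2, the cap decays to 7, matching the example in the quote.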

Quote:
This isn't a bad statement at all; however I see two problems with it.

1. B-adapt 2 assumes that the max B-frames throughout its path-searching is uniform. "Properly" implementing this heuristic would probably involve modifying the B-adapt 2 search function to take your heuristic into account when selecting paths.
I hadn't noticed this. I'll look into it (pointers, anyone?). In the meantime I could port the same algorithm to the other two --b-adapt levels (which, IIUC, shouldn't have the same problem, right?).
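
Just to illustrate what porting it would look like: in the simpler modes the cap would basically only clamp the upper bound of the existing search, something along these lines (not actual x264 code, just a sketch that reuses update_bframe_cap() from above; the cost evaluation itself is elided):

Code:
/* Pick the B-frame run length for one minigop, searching only up to the
 * adaptive cap instead of the full --bframes limit. */
static int pick_minigop_bframes( int adaptive_cap, int global_max )
{
    int max_search = adaptive_cap < global_max ? adaptive_cap : global_max;
    int best_b = 0;
    for( int n_b = 0; n_b <= max_search; n_b++ )
    {
        /* evaluate the cost of coding n_b B-frames before the next P-frame,
         * exactly as the existing heuristic does; only the bound changes,
         * and best_b would be set by that (elided) cost comparison */
    }
    return best_b; /* the caller then feeds this back into update_bframe_cap() */
}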

Quote:
2. The most common case of a sudden jump from few to an enormous number of B-frames is that of a fade; this patch might worsen coding in fades. This probably won't be too big a deal with real content, which rarely has perfectly linear fades, and will be partly mitigated by the upcoming weighted P-frame prediction patch, but is still potentially an issue.
AFAICT there are a few scenarios where this method would simply improve speed with no drawbacks, and a few scenarios where the speed improvement would waste a couple of P-frames (such as in my example a couple of posts ago).
Moreover, IIRC, the 16-B-frame limit is arbitrary, right? With this patch you could just as well raise it and still finish a two-hour encode within a human lifespan.
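
For instance, with the patch applied, something like "x264 --bframes 16 --b-adapt 2 -o out.264 input.y4m" would only pay for deep B-frame searches in the stretches where the previous minigops actually used many B-frames (the overhead itself isn't exposed as a command-line option, as said above).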

As a side note, the same method (or slight variations thereof) could also be applied to a ton of other parameters (e.g. me_range).
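
For example (purely illustrative, names are mine), the same "last value plus some slack, decaying slowly" rule could be factored into a generic helper:

Code:
/* Generic form of the adaptive cap: let a parameter grow by 'slack' over
 * what was actually needed last time, and decay it by one step otherwise. */
static int adapt_cap( int prev_cap, int value_used, int slack, int hard_max )
{
    int cap = value_used + slack;
    if( cap < prev_cap - 1 )
        cap = prev_cap - 1;
    return cap < hard_max ? cap : hard_max;
}

/* e.g. adapt_cap( prev_cap, longest_mv_found, 8, me_range ) to shrink the
 * motion search range when recent motion has been small (the 8 is as
 * arbitrary as the 4 above). */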