Old 25th November 2013, 05:53   #161  |  Link
Keiyakusha
契約者
 
 
Join Date: Jun 2008
Posts: 1,576
Quote:
Originally Posted by Lenchik View Post
For finding the best x264 settings while not losing time repeating previous work.
This is not what I'm talking about. If you're doing some 16bit processing, your video will no longer be simply upsampled.
Anyway, that was a rhetorical question.
Old 25th November 2013, 12:59   #162  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Keiyakusha,

What I simply meant is, I thought those tapes, or rather the content on the tapes, was 10bit.
For example, let's say Lord of the Rings was recorded in 10bit and that's its original source (just an example).
Then when they go to broadcast it on TV etc., they get a copy of that source and put it on the broadcast tape, so it isn't upsampled; it stays the same.

Same goes for everything else, though of course if there is 8bit stuff it gets upsampled.

But if nothing is 10bit, which seems to be the case, then that's my disappointment.

Of course I know that 10bit itself helps a lot if post-processing has been done at something above 8bit, as less information will be removed. The leap from 8bit to 10bit is quite high, and from 10bit to 16bit enormous, but luckily the improvement in actual quality is much smaller from my understanding, which makes it "a waste".
Old 1st December 2013, 08:34   #163  |  Link
foxyshadis
Angel of Night
 
 
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,559
Quote:
Originally Posted by zerowalker View Post
Keiyakusha,

What I simply meant is, I thought those tapes, or rather the content on the tapes, was 10bit.
For example, let's say Lord of the Rings was recorded in 10bit and that's its original source (just an example).
Then when they go to broadcast it on TV etc., they get a copy of that source and put it on the broadcast tape, so it isn't upsampled; it stays the same.

Same goes for everything else, though of course if there is 8bit stuff it gets upsampled.

But if nothing is 10bit, which seems to be the case, then that's my disappointment.
Nothing consumer-side is more than 8 bits is what he meant, I'm sure. Studio-side, it's tremendously different: material is generated, and normally archived, in various camera raw formats during production at a minimum of 10bit (HDCAM SR) up to 16bit (RED). Only the older original HDCAM and DVCPRO used 8bit. Transmission pipelines vary widely, from fully analog to 8bit digital and up. The difficulty is in obtaining samples of those masters, not whether they exist. They are usually compressed, but at such wildly high bitrates that it's not compression as we usually think of it (like DV vs VCD).

Consumers can generate 10bit video with prosumer cameras, like Canon XL H1 or Sony HVR-V1U, but it might be better to ask on a videographer enthusiast forum than here.
Old 1st December 2013, 08:52   #164  |  Link
Keiyakusha
契約者
 
 
Join Date: Jun 2008
Posts: 1,576
Well, I tried to explain what I mean, but it seems I failed at it, so I'd withdrawn from this discussion. But let's try once more ^^
For some reason you got the impression that I'm saying there is no >8bit material or something... that's not it. Of course there are >8bit things produced by some hardware and not generated on a PC. For movies like LotR it is of course more likely that they could be >8bit starting from the camera shooting stage. But if we are talking on a global scale, which includes TV shows and various other material, then most content was never 10bit to begin with. It was simply upsampled, because the standard requires it, someone requested it that way, or it was modified/edited at some point.
That said, I believe the codec should be optimized for the more common usage scenarios, especially since the big studios who have things in >8bit from the beginning have zero reason to use Lagarith. I bet many of them have never even heard of it, and there is no reason for them to do such research; they already have all the tools they need. Not to mention that many of them simply refuse to work with anything free/open-source. So why optimize the codec for them and not for the world of "mere mortals"? (Rhetorical question; I don't need an answer.)

Edit: anyway, sorry for going off-topic. Personally I'm not even interested in Lagarith, it is too slow, so feel free to optimize for whatever you want.

Last edited by Keiyakusha; 1st December 2013 at 09:25.
Old 1st December 2013, 14:04   #165  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Ah well, then I had understood it correctly, at least halfway:
that tapes (though not broadcast tapes) can be done in higher bit depths.

But as you say, it's all but impossible to obtain them; the only way to get such a "master tape" would be to have the resources and contacts for it, or for the company owning it to have gone bankrupt and had it sold off or something.

Keiyakusha,

I fully understand what you mean, and as you say, the reason to even have 10bit lossless barely exists at the moment; the only uses for it I can think of are experimentation and certain individuals.
But I by no means want it NOT to be there. Support and optimization are always good in any case; it doesn't matter whether the thing is useless right now. If it can be added and someone wants to, then by all means do so!

And I can also agree with it being slow. But for 2D (pixel art and simple stuff) it's totally unbeatable; you can't even compare it with Ut Video Codec, which I think is its rival.
On "normal" videos, however, Lagarith isn't that great in terms of speed, but I think it can be optimized, as it has been lying around for quite some time compared to Ut, which has been updated all the time.


But anyway, if I can help with anything for updating Lagarith, I'm in. Sadly, programming-wise I'm at a total newbie level, but making test samples or something is something I should be able to do.
Old 1st December 2013, 14:18   #166  |  Link
raffriff42
Retried Guesser
 
 
Join Date: Jun 2012
Posts: 1,373
Quote:
Originally Posted by Dark Shikari View Post
Lagarith relies on floating point math in the arithmetic coder, which nearly guarantees that errors like this will occur eventually, depending on the phase of the moon and so forth. The format should generally be avoided if possible.
*This* is my issue with Lagarith. I get too many glitches.
Old 1st December 2013, 14:20   #167  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Wait, what??
Errors as in what?
Old 1st December 2013, 14:54   #168  |  Link
LoRd_MuldeR
Software Developer
 
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by zerowalker View Post
Wait, what??
Errors as in what?
He's referring to the "random" glitches (artifacts) that some people seem to be getting with Lagarith.

AFAIK, Lagarith uses arithmetic coding, which - in its most simple form - uses infinite-precision real numbers. Computers don't have those. So, in practice, we usually use a "range coder" instead of a plain arithmetic coder. It is essentially the same thing, but the range coder works entirely on integer math (rather than on real numbers). This makes it more suitable for implementation on a real-world computer, and it also ensures that the results are always deterministic.

Lagarith, on the other hand, seems to implement arithmetic coding based on limited-precision floating-point math. This is rather delicate with respect to rounding errors, and it is what may cause nondeterministic behavior.
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 1st December 2013 at 15:59.
Old 1st December 2013, 15:15   #169  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
But it does not affect encoding, does it?
Meaning all encoded movies will always be 100% alright.

But decoding them CAN fail, and redoing it CAN succeed, meaning it's not a permanent error?


Weird that it uses such a system if it isn't perfect, as this stuff is supposed to Always work, in All scenarios.
Old 1st December 2013, 15:41   #170  |  Link
LoRd_MuldeR
Software Developer
 
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by zerowalker View Post
But it does not affect encoding, does it?
Meaning all encoded movies will always be 100% alright.

But decoding them CAN fail, and redoing it CAN succeed, meaning it's not a permanent error?
Well, we have a problem as soon as the encoder and the decoder "desynchronize" in some way, i.e. do not agree on the exact input/output values any longer. Whether the encoder or the decoder (or both) made the "mistake" that caused the desynchronization doesn't matter much. What does matter is that, after a desynchronization, we don't get back the same values from the decoder that we originally fed into the encoder. So we might get some ugly artifacts!

(Or in other words: regardless of whether the video was already encoded "wrongly", or whether it was encoded "correctly" but now can't be decoded "correctly", the end result is the same: you can't get the correct output anymore!)

Quote:
Originally Posted by zerowalker View Post
Weird that it uses such a system if it isn't perfect, as this stuff is supposed to Always work, in All scenarios.
Personally, I don't use Lagarith extensively, so I'm just trying to sum up the issues that some people have reported. There also seem to be many users who use Lagarith with no problems at all.

Also keep in mind that "perfect" (error-free) software doesn't exist in reality. We usually accept that "high quality" software has about one bug per 1000 lines of code.

Last edited by LoRd_MuldeR; 1st December 2013 at 15:57.
Old 1st December 2013, 15:45   #171  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
But if the error (artifact) occurs, is it always there, or does it appear from time to time, depending on whether the decoder fails to reach the correct number?

And well, yeah, "error free" is more of a goal to strive for. But something that's fundamentally wrong like this isn't what I would expect from a codec.
Old 1st December 2013, 15:51   #172  |  Link
LoRd_MuldeR
Software Developer
 
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by zerowalker View Post
But if the error (artifact) occurs, is it always there, or does it appear from time to time, depending on whether the decoder fails to reach the correct number?
That's the "nice" thing about undefined behavior: everything is possible. You just don't know what is going to happen, and the result can (but doesn't have to) be different each time.

Last edited by LoRd_MuldeR; 1st December 2013 at 15:54.
Old 1st December 2013, 15:52   #173  |  Link
zerowalker
Registered User
 
Join Date: Jul 2011
Posts: 1,121
Well, if the possibility exists, then you can at least save the content by converting it; if the error always occurred, that would be impossible.
But I find it weird that I myself have never been afflicted by this; I tend to use Lagarith quite a lot.
Old 1st December 2013, 16:17   #174  |  Link
qyot27
...?
 
 
Join Date: Nov 2005
Location: Florida
Posts: 1,420
Quote:
Originally Posted by zerowalker View Post
you can't even compare it with Ut Video Codec, which I think is its rival.
No, the closest 'rival' to Lagarith is actually FFV1. Ut Video mostly lies in the chasm between HuffYUV and Lagarith.

FFV1, for the record, also uses arithmetic coding (it's integer-based, though, not floating point). It loses to Lagarith in a couple of areas though - mainly solid color frames* and things like line drawings (and speed; Lagarith is faster than FFV1). FFV1 also supports all those high bit depth pixel formats.

*unless you use the lower efficiency VLC coder for FFV1 - then it actually gets close to Lagarith for solid color frames (because it switches to inter-frame? that's...unexpected, and feels wrong - it's what VirtualDub seems to report, though). Lagarith stores solid color frames as flags and presents them to the decoder as real intra frames.

Case in point, a simple BlankClip script (RGBA, 10 seconds, 640x480 @ 24 fps):
Lagarith: 15 KB
FFV1 (-coder 1 -context 1): 1.85 MB
FFV1 (-coder 1 -context 0): 1.85 MB
FFV1 (-coder 0 -context 1): 70.1 KB
FFV1 (-coder 0 -context 0): 70.1 KB

-coder 0 = VLC
-coder 1 = AC
Old 1st December 2013, 16:26   #175  |  Link
foxyshadis
Angel of Night
 
 
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,559
Quote:
Originally Posted by zerowalker View Post
Well, if the possibility exists, then you can at least save the content by converting it; if the error always occurred, that would be impossible.
But I find it weird that I myself have never been afflicted by this; I tend to use Lagarith quite a lot.
It's more that floating point isn't entirely perfect, so things might be +1 or -1 from where they should be when decoded; those errors can add up, but they're pretty rare and usually hard to notice. Lagarith has C, MMX, SSE, and SSE2 versions of a lot of functions, all of which may have different rounding characteristics; it's impossible to tell, since there's no testbed. Then again, there might just be plain old bugs, too.
Old 1st December 2013, 16:49   #176  |  Link
LoRd_MuldeR
Software Developer
 
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by foxyshadis View Post
It's more that floating point isn't entirely perfect, so things might be +1 or -1 from where they should be when decoded; those errors can add up, but they're pretty rare and hard to notice usually.
But for an entropy coder it can mean everything. Even the slightest difference in a value may cause a number to fall into a different range/bucket, and then the decoded symbol might be a completely different one.

And because the probability model is usually updated after each symbol, one wrong symbol can cause all future symbols to be wrong too.

BTW: This is also the reason why, when dealing with floating point numbers, we never check for equality directly, but instead assume the numbers are "equal" if the absolute difference is below a certain threshold.

Last edited by LoRd_MuldeR; 1st December 2013 at 16:53.
Old 1st December 2013, 18:48   #177  |  Link
foxyshadis
Angel of Night
 
 
Join Date: Nov 2004
Location: Tangled in the silks
Posts: 9,559
Quote:
Originally Posted by LoRd_MuldeR View Post
But for an entropy coder it can mean everything. Even the slightest difference in the value may cause a number to fall into a different range/bucket, and then the decoded symbol might be a completely different one.

And because the probability model is usually updated after each symbol, one wrong symbol can cause all future symbols to be wrong too.

BTW: This is also the reason why, when dealing with floating point numbers, we never check for equality directly, but instead assume the numbers are "equal" if the absolute difference is below a certain threshold.
The entropy coder portion is all integer and only done in C. That at least doesn't seem to be the problem.
Old 1st December 2013, 18:58   #178  |  Link
LoRd_MuldeR
Software Developer
 
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by Dark Shikari
Lagarith relies on floating point math in the arithmetic coder, which nearly guarantees that errors like this will occur eventually, depending on the phase of the moon and so forth. The format should generally be avoided if possible.
Quote:
Originally Posted by foxyshadis View Post
The entropy coder portion is all integer and only done in C. That at least doesn't seem to be the problem.
Old 1st December 2013, 21:31   #179  |  Link
SirLagsalot
Registered User
 
Join Date: Mar 2011
Posts: 10
Quote:
Originally Posted by Keiyakusha
So why optimize codec for them and not for the world of "mere mortals"?
I expect that high bit depth will become more common over time in consumer-level gear, so I would like to aim Lagarith at where the technology will be.

Quote:
Originally Posted by Dark Shikari
Lagarith relies on floating point math in the arithmetic coder, which nearly guarantees that errors like this will occur eventually, depending on the phase of the moon and so forth.
This is not correct. Lagarith uses an integer-only range coder to handle the compression; floating point is only used to scale the symbol probability tables to a power of 2. Floating-point math on x86/x64 is deterministic for a given input and order of operations, so it does not randomly introduce error. To my knowledge, there has only been one error related to the use of floating-point math, and that was caused by the calling program changing the floating-point rounding mode or precision. Lagarith now checks the floating-point state before each frame and adjusts it if need be to prevent this. I do plan to remove floating-point math in the rewrite, though, in order to simplify porting and hopefully kill off that myth.
Old 2nd December 2013, 01:59   #180  |  Link
LoRd_MuldeR
Software Developer
 
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by SirLagsalot View Post
This is not correct. Lagarith uses an integer-only range coder to handle the compression, floating-point is only used to scale the symbol probability tables to a power of 2.
Well, as long as the encoder and the decoder need to do this step in exactly the same way, things could still go wrong and break decoding, right?

Quote:
Originally Posted by SirLagsalot View Post
Floating point math on the x86/x64 is deterministic for a given input and order of operations, so it does not randomly introduce error. To my knowledge, there has only been one error related to the use of floating-point math, and that was caused when the calling program changed the floating-point rounding mode or precision. Lagarith now checks the floating-point state before each frame, and adjusts it if need be to prevent this. I do plan to remove floating-point math in the rewrite though, in order to simplify porting and hopefully kill off that myth.
Hmm, to my knowledge there's at least the classic "x87" FPU with 80-bit internal precision and the newer SIMD (MMX, SSE, etc.) instructions with "only" 64-bit precision. Even if all variables are 64-bit (double) precision in memory, the compiler may still keep intermediate results in registers, which would then be either 80-bit or 64-bit. Furthermore, even if you don't use SIMD assembly/intrinsics explicitly, compilers may still generate such instructions when targeting CPUs with MMX/SSE support. Finally, there are various compiler settings that affect floating-point math (the "fast math" option, etc.). So there's at least the danger of "portability" issues when the encoder and decoder are compiled with different compilers (or compiler configurations).