How does the decoder know it's fake interlaced, though, as opposed to the real thing? And how would the player/TV know not to de-interlace it but to treat it as progressive?
The same question applies to chroma subsampling. As I understand it, the chroma for each field is subsampled individually in interlaced video, since each field is a different moment in time. But how would the decoder know that "fake interlaced" video has progressive rather than interlaced chroma subsampling? I'm thinking it should matter, but maybe not?
I assume that for PsF this is part of the specification (although I can't find definitive information on how PsF's chroma subsampling works), but "fake interlaced" seems like it'd be a different story.