Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Doom9's Forum > Video Encoding > MPEG-4 ASP
Old 8th October 2002, 22:57   #201  |  Link
-h
Kilted Yaksman
 
-h's Avatar
 
Join Date: Oct 2001
Location: South Carolina
Posts: 1,303
Quote:
If you were referring to the stuff I posted: I was not talking about Nic's work. Can XviD (MPEG-4) code be reused to decode MPEG-2? I'm sorry if I got it all wrong.

Yeah my bad, but I thought both decoding plugins were based off the same source code, and the source Nic's working with lacks dequantization assembly (which can be nabbed from XviD). Then I started writing and got a bit ahead of myself.

But anyway, a while ago Isibaar replaced libmpeg2's assembly with XviD's and got a 50% speedup. I don't think he hung on to the work, but at least it shows that several parts of libmpeg2 could be significantly faster.

-h
-h is offline   Reply With Quote
Old 8th October 2002, 23:01   #202  |  Link
Emp3r0r
Registered User
 
Emp3r0r's Avatar
 
Join Date: Oct 2001
Location: Alabama, USA
Posts: 769
I did three encodes last night with Nic's new QPel build and I must say, I can tell a big increase in quality and decrease in filesize. I'd also like to mention that I used lumi masking and (maybe it is these HO fluorescent lights) didn't see any blocking in the dark areas. I'll look again when I can turn the lights off.

sample script
Code:
source = MPEG2Source("C:\VOBs\poa\poa.d2v").Crop(3,60,-2,-60) # load D2V, crop black borders
source = source.LanczosResize(640,272)                        # sharp resize
source = source.Convolution3D(0,3,3,3,3,2.2,0)                # light spatio-temporal denoising
return source.FreezeFrame(0,0,1697)                           # replace frame 0 with frame 1697
# size 753301
# credits 162383 172579
__________________
ChapterGrabber - add names to your chapters | AtomSite - open source AtomPub server
Emp3r0r is offline   Reply With Quote
Old 9th October 2002, 11:09   #203  |  Link
JimiK
just me
 
Join Date: Sep 2002
Posts: 158
@Iago
Yes, I did use lumoff=-2. But as cjv in the Avisynth forum mentioned, lumoff seems to have no effect when using fast=true, so I would suggest keeping hands off this option until it's fixed.
@Emp3r0r
I'm not sure if your good results are due to Nic's new build. After doing some test encodes I'm not sure that QPel is really activated. With constant quant, I saw no decrease in filesize, nor did the encoding speed drop. I encoded a part with a dark scene, where I noticed as many blocks as with other builds. Setting lumoff=-2 helped this issue. You did not use that switch, so I wonder why your encodes don't seem to have that problem.

Sincerely,
JimiK
JimiK is offline   Reply With Quote
Old 9th October 2002, 12:23   #204  |  Link
Gaia
Registered User
 
Join Date: Feb 2002
Posts: 267
QPel is NOT activated in latest NIC build!
Gaia is offline   Reply With Quote
Old 9th October 2002, 12:48   #205  |  Link
Koepi
Moderator
 
Koepi's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 4,454
QPEL is a little buggy in its current state; Isibaar is trying to add a fix today. I could then upload a devel (unstable) binary with QPel activated, in addition to the stable build, on my site. I'll keep you posted.

regards,
Koepi
Koepi is offline   Reply With Quote
Old 9th October 2002, 15:26   #206  |  Link
iago
retired
 
iago's Avatar
 
Join Date: Jun 2002
Location: hollywood
Posts: 1,013
QPEL is a little buggy in its current state; Isibaar is trying to add a fix today. I could upload a devel (unstable) binary with QPel activated, in addition to the stable build, on my site. (Koepi)

@Koepi

That would be really good and I'd be very glad to try it as usual. I wonder if it is also possible to activate B-frames in addition to QPel (in your new devel binary to be released) and use the two together for testing purposes?

Thanks again for all your efforts,
iago
iago is offline   Reply With Quote
Old 9th October 2002, 16:18   #207  |  Link
Koepi
Moderator
 
Koepi's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 4,454
@iago:

Unfortunately, no QPel code has been added for B-frames yet. So you have the choice between QPel and B-frames - they don't work together yet; when B-frames are activated by setting them >-1, QPel isn't used.
(I for one test bframes now as qpel crashes for me on monsters inc. )

Regards,
Koepi
Koepi is offline   Reply With Quote
Old 9th October 2002, 16:23   #208  |  Link
trbarry
Registered User
 
trbarry's Avatar
 
Join Date: Oct 2001
Location: Gainesville FL USA
Posts: 2,092
Quote:
Yeah my bad, but I thought both decoding plugins were based off the same source code, and the source Nic's working with lacks dequantization assembly (which can be nabbed from XviD). Then I started writing and got a bit ahead of myself.

But anyway, a while ago Isibaar replaced libmpeg2's assembly with XviD's and got a 50% speedup. I don't think he hung on to the work, but at least it shows that several parts of libmpeg2 could be significantly faster.
-h -

How similar is dequant between mpeg2 & mpeg4/xvid? And which Xvid functions are you talking about here? I've never looked at that part, in either codec. But you've mentioned it a few times.

- Tom (who mostly doesn't know how dequant works)
trbarry is offline   Reply With Quote
Old 9th October 2002, 17:07   #209  |  Link
-h
Kilted Yaksman
 
-h's Avatar
 
Join Date: Oct 2001
Location: South Carolina
Posts: 1,303
How similar is dequant between mpeg2 & mpeg4/xvid? And which Xvid functions are you talking about here? I've never looked at that part, in either codec. But you've mentioned it a few times.

MPEG dequantization is the same between MPEG-2 and MPEG-4 (but not MPEG-1... I'm just going by code in libavcodec since MPEG-2 specs are hard to come by), as is interpolation, transfer and iDCT. I've no idea where the code is in libmpeg2 (ew), but XviD reads 64 quantized coefficients from the bitstream in mbcoding.c -> get_intra_block()/get_inter_block(), then dequantizes them in quant_mpeg4.c / quantize4_mmx.asm. Also, I doubt libmpeg2 is using a lookup table to decode bitpacked coefficients; that could also speed decoding.

Just checked some spec-ish material for MPEG-2, and dequantization is different but could be simulated with XviD's code by increasing the final shift right by 1, or doubling the stored quantizer. Not sure why they changed that.
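[Editor's note: the difference -h describes can be sketched roughly as below. This is an illustration of the MPEG-style inter formulas only, not XviD's actual code, and it ignores MPEG-2's mismatch control and coefficient saturation; function names are made up.]

```python
def dequant_inter_mpeg4(level, quant, w=16):
    """MPEG-style inter dequantization of one coefficient as in MPEG-4
    (sketch): (2*|level| + 1) * matrix_entry * quant, then >> 4."""
    if level == 0:
        return 0
    sign = -1 if level < 0 else 1
    return sign * (((2 * abs(level) + 1) * w * quant) >> 4)

def dequant_inter_mpeg2(level, quant, w=16):
    """MPEG-2 non-intra dequantization (sketch): same form, but the
    final right-shift is one larger (>> 5, i.e. divide by 32) --
    which is why increasing XviD's shift by 1, or doubling the stored
    quantizer, would simulate it."""
    if level == 0:
        return 0
    sign = -1 if level < 0 else 1
    return sign * (((2 * abs(level) + 1) * w * quant) >> 5)
```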

Also given that libavcodec already has optimizations for edging, dequant, iDCT, transfers, bitstreams and interpolation that are at least as fast as XviD's, I figured it'd be much faster than libmpeg2 already. I think once this avisynth filter is finished (or I get bored with it - much more likely), I'll make a "lavcSource()" plugin that'll read pretty much anything under the sun - MPEG1/2/4, AVI, ASF, WMV, MOV, DV, etc., most of them with audio as well.

-h
-h is offline   Reply With Quote
Old 9th October 2002, 18:07   #210  |  Link
Marc FD
XviD fan
 
Marc FD's Avatar
 
Join Date: Jun 2002
Location: France
Posts: 907
>I'll make a "lavcSource()" plugin that'll read pretty much anything under the sun - MPEG1/2/4, AVI, ASF, WMV, MOV, DV, etc., most of them with audio as well.

Hehe, I'm a very lazy guy. If you're really going to do this, tell me if I can help ^^.
Because MPEG2Dec in its current state needs a complete rewrite to get more than a 10% speed increase.
Marc FD is offline   Reply With Quote
Old 9th October 2002, 18:16   #211  |  Link
Emp3r0r
Registered User
 
Emp3r0r's Avatar
 
Join Date: Oct 2001
Location: Alabama, USA
Posts: 769
I turned the lights out and then I saw the blocks. Oh well, I'm going to keep trying to get quant 2 encodes to look better. I wish it was easier.
__________________
ChapterGrabber - add names to your chapters | AtomSite - open source AtomPub server
Emp3r0r is offline   Reply With Quote
Old 9th October 2002, 18:43   #212  |  Link
iago
retired
 
iago's Avatar
 
Join Date: Jun 2002
Location: hollywood
Posts: 1,013
Quote:
Originally posted by Emp3r0r
I turned the lights out and then I saw the blocks. Oh well, I'm going to keep trying to get quant 2 encodes to look better. I wish it was easier.
@Emp3r0r,

Or you can still try lumoff=-2 (that will imho be enough in most cases without extra sharpening) with Marc's MPEG2Dec3.dll . (Personally, I prefer MPEG -or sometimes ModHQ- quantization with LanczosResize to compensate for the lack of an extra sharpening filter.)


@Koepi,

Thanks for the explanation, I guess -h had also mentioned it somewhere before. I'm looking forward to your new binary with QPel activated.

regards,
iago
iago is offline   Reply With Quote
Old 9th October 2002, 22:08   #213  |  Link
JimiK
just me
 
Join Date: Sep 2002
Posts: 158
Emp3r0r,
try lumoff=-2. I encoded some dark scenes without it, using constant quant 2, and the blocks were still there (though I think they were smaller).
@-h: Such an Avisynth filter does not sound boring to me. But since I'm only frameserving DVDs, I'm happy with the available filters. It's good enough for me once XviD becomes perfect.

Best regards,
JimiK
JimiK is offline   Reply With Quote
Old 20th October 2002, 20:06   #214  |  Link
SansGrip
Noise is your friend
 
Join Date: Sep 2002
Location: Ontario, Canada
Posts: 554
Bet you thought you'd seen the last of this thread

While my interests lie mainly in DVD/capture -> VCD, this thread sparked my curiosity in finding out exactly what's happening to the signal in terms of range as it passes through my processing chain.

It seems that DVD2AVI (1.76) and Dividee's mpeg2dec, regardless of the "scale" setting -- which I believe applies only to YUV/RGB conversion anyway -- produce luma and chroma with the full range of 0-255.

I don't know whether this is the result of mpeg2dec's conversion to YUY2 or whether this is the actual range used on the DVD. Since DVDs are designed for playback on televisions, it would seem strange if they contained levels which fall outside the legal range for TVs (apparently 16-235 for luma, 16-240 for chroma). One possibility is that DVD players do this range compression automatically on playback. Does anyone know more about this?

The reason I've never noticed this in my encodes, it seems, is because TMPGEnc automatically converts from 0-255 to 8-235 (why 8? I don't know, but that's what the tooltip says) if you leave the "Output YUV data as Basic YCbCr not CCIR601" option unchecked. This would (partially, since the blackest pixels should be 16, not 8) correct the range in the resulting MPEG.

With this in mind I tried adding

Levels(0, 1, 255, 16, 235)

to my Avisynth script and my plugin verifies that no illegal values are being produced. In order to get correct output one must then check the "Output YUV data as Basic YCbCr not CCIR601" option in TMPGEnc, otherwise it will compress the range a second time.
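[Editor's note: the per-pixel arithmetic behind Levels(0, 1, 255, 16, 235) can be sketched as follows. This is only an approximation of the idea; Avisynth's exact integer rounding may differ.]

```python
def levels(y, in_lo=0, gamma=1.0, in_hi=255, out_lo=16, out_hi=235):
    """Sketch of Avisynth's Levels(): linearly rescale the input range
    [in_lo, in_hi] into [out_lo, out_hi], with an optional gamma curve."""
    t = (y - in_lo) / (in_hi - in_lo)        # normalize to 0..1
    t = max(0.0, min(1.0, t)) ** (1.0 / gamma)
    return int(round(out_lo + t * (out_hi - out_lo)))
```

With the defaults above, black (0) maps to 16 and white (255) maps to 235, i.e. full-range luma is compressed into the CCIR-601 legal range.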

This seems to produce a brightness level that looks right on my monitor (using my ViewSonic's UltraBrite feature, which approximately emulates the brightness level of a TV set). I've yet to try the output on my standalone DVD player.

I know this contributes nothing to the dark blocks problem, but it's something I've been mildly curious about for a while and finally got round to testing

(Incidentally, would Levels(2, 1, 255, 16, 235) produce similar output to lumoff=-2, or am I misunderstanding what lumoff does?)
SansGrip is offline   Reply With Quote
Old 20th October 2002, 20:15   #215  |  Link
iago
retired
 
iago's Avatar
 
Join Date: Jun 2002
Location: hollywood
Posts: 1,013
@SansGrip

Welcome, man, to one of the most enthralling discussions of the encoding scene. I'm really glad one more tester has arrived to work on this ugly problem!

regards,
iago
iago is offline   Reply With Quote
Old 20th October 2002, 20:20   #216  |  Link
Marc FD
XviD fan
 
Marc FD's Avatar
 
Join Date: Jun 2002
Location: France
Posts: 907
lumoff/lumgain is not the same as Levels. It processes in 16-bit MMX, and the result is clipped at the end.

lumoff=-2 will reduce all luma values by 2. It's the theoretical equivalent of Levels(2,1,258,0,255), but 258 is impossible.
So Levels can't do it, because it's a scaler, not a clipper.
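[Editor's note: Marc's distinction can be sketched like this. Illustrative Python, not the actual MMX code; helper names are made up.]

```python
def lumoff(y, offset=-2):
    """mpeg2dec-style luma offset (sketch): add a constant to every
    pixel, then clip to 0..255. Every value shifts by the same amount."""
    return max(0, min(255, y + offset))

def levels_scale(y, in_lo, in_hi, out_lo, out_hi):
    """Levels-style scaler (sketch): linearly remap [in_lo, in_hi]
    to [out_lo, out_hi]. Values move proportionally, not uniformly."""
    t = max(0.0, min(1.0, (y - in_lo) / (in_hi - in_lo)))
    return int(round(out_lo + t * (out_hi - out_lo)))
```

The difference shows at the top of the range: lumoff(-2) takes 255 down to 253 like every other value, while a scaler such as Levels(2,1,255,0,255) stretches the remaining range back out, leaving 255 at 255.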

Regards,
MarcFD
Marc FD is offline   Reply With Quote
Old 20th October 2002, 21:18   #217  |  Link
SansGrip
Noise is your friend
 
Join Date: Sep 2002
Location: Ontario, Canada
Posts: 554
iago: Welcome man, to one of the most enthralling discussions of the encoding scene

I agree. The non-geek side of me is currently pointing at my geek side and laughing uncontrollably, but I don't care

iago: I'm really glad one more tester arrived to work on this ugly problem!

As the unofficial CIA motto goes: in God we trust, all others we polygraph. Theory's great in theory, but I like to try it before I make up my mind.

A month or so ago I actually wrote a filter to add noise back in to low-luma areas. It seemed to remove the blocks but was too noticeable to be useful. It was a very simple rand()-based noise, though (adding between -5 and 5 to each pixel in the luma channel, if I remember correctly), so it's probable one would achieve better results using other algorithms. Something like Photoshop's Gaussian noise might be a candidate.
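[Editor's note: the idea SansGrip describes might look roughly like this. This is an illustrative sketch, not his actual filter; the threshold and strength values are guesses.]

```python
import random

def add_dark_noise(luma_plane, threshold=48, strength=5):
    """Add uniform random noise (+/- strength) to pixels whose luma is
    below threshold, as a crude anti-blocking dither for dark areas.
    Pixels at or above the threshold pass through unchanged."""
    out = []
    for y in luma_plane:
        if y < threshold:
            y = max(0, min(255, y + random.randint(-strength, strength)))
        out.append(y)
    return out
```

A Gaussian distribution (e.g. random.gauss) would concentrate most of the noise near zero and might be less visible than this uniform version, which matches SansGrip's suspicion about Photoshop-style Gaussian noise.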

A better solution might be to leave the original noise in the dark areas, if there is any. Another filter I wrote might be useful for this, which mixes two clips based on luma levels. I need to fix it up before releasing it though, and might even generalize it to x number of clips for x different luma ranges.
SansGrip is offline   Reply With Quote
Old 20th October 2002, 21:22   #218  |  Link
SansGrip
Noise is your friend
 
Join Date: Sep 2002
Location: Ontario, Canada
Posts: 554
Marc FD: lumoff=-2 will reduce all luma values by 2. It's the theoretical equivalent of Levels(2,1,258,0,255), but 258 is impossible.
So Levels can't do it, because it's a scaler, not a clipper.


Thanks for the explanation. Incidentally, what is luma gain?

Would tweak() be able to do this? If not, I think I'm going to throw together a filter for those circumstances when I'm not using your mpeg2dec.
SansGrip is offline   Reply With Quote
Old 20th October 2002, 22:02   #219  |  Link
iago
retired
 
iago's Avatar
 
Join Date: Jun 2002
Location: hollywood
Posts: 1,013
@SansGrip

Well, actually such a filter (an "add-noise-to-low-luma-regions" filter) is something I've been wanting to try for a long, long time. I hope you continue working on it and perhaps even make a beta release. Btw, something else I wonder is: what was its effect on compressibility? Did it cause a big drop, or was the effect somewhat negligible?

iago
iago is offline   Reply With Quote
Old 20th October 2002, 22:46   #220  |  Link
SansGrip
Noise is your friend
 
Join Date: Sep 2002
Location: Ontario, Canada
Posts: 554
iago: Well, actually such a filter (an "add-noise-to-low-luma-regions" filter) is something I've been wanting to try for a long, long time. Hope you continue working on it and perhaps even make a beta release.

I'll lay it on the table and make some incisions and see what happens

Actually what just occurred to me is that it might be better to write a general noise generator, then use it with the luma threshold mixing filter. (I like to take a UNIX approach to stuff I write.)

iago: Btw, something else I wonder is: What was its effect on compressibility? Did it cause a big drop or was the effect somewhat negligible?

I didn't even look. At the time I was making CBR VCDs, so short of looking at the average Q level there's no real way of telling. But now I'm using TMPGEnc's CQ mode so when I've got the filter at least semi-functional I'll run some tests. I'm guessing it won't be pretty, but if you're more concerned about quality than size it might be worth it.
SansGrip is offline   Reply With Quote