Old 12th January 2011, 04:46   #201  |  Link
Mixer73
Registered User
 
Join Date: Nov 2007
Posts: 240
Quote:
Originally Posted by Didée View Post
Having seen PLENTY of such results, it is hard to imagine that CUDA could ever output something acceptable. (at reasonable bitrates.)
Yes, but I think deadrats' idea of reasonable bitrates is a bit off the planet.
Mixer73 is offline   Reply With Quote
Old 12th January 2011, 04:58   #202  |  Link
popper
Registered User
 
Join Date: Mar 2006
Posts: 272
deadrats: "i'll let you pick the source"
the 1080p birds sample might be fun; i forget the direct URL to the original, though.

Last edited by popper; 12th January 2011 at 05:01.
popper is offline   Reply With Quote
Old 12th January 2011, 04:59   #203  |  Link
CruNcher
Registered User
 
CruNcher's Avatar
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by Didée View Post
But then, those midrange SBs have HD2000 graphics with only 6 stream processors, as opposed to the 12 stream processors in HD3000, and HD3000 is only present in the more expensive "K" models (2500K, 2600K).

If the touting is "Quick Sync is twice as fast as anything else", that of course refers to HD3000. If 12 shaders are "twice as fast", what speed do you expect from only 6 shaders?
They all run at the same speed: the difference in EU count is in the GPU part, not the MFX ("QuickSync") part, so every one of the new Sandy Bridges should have virtually the same QuickSync encoding performance. More EUs matter for faster pre-processing (deinterlacing, denoising, FRC, SR) and for games, not for the encoding part per se (the few current comparisons also show that).


Quote:
Originally Posted by Blue_MiSfit View Post
Please do! Until then I for one would be utterly shocked if we're on the same page as you with regard to "no perceptible differences between the cuda and the x264 encodes"!

I've evaluated the professional CUDA encoders and they were all rather awful compared to x264 or Mainconcept. I find it EXTREMELY difficult to believe that a consumer CUDA encoder could even hold a candle...

Some sample would sure be nice! Mind doing a few quick test encodes for us?

Derek
Nvidia's encoder is better than Mainconcept's encoder in its current state (that will most probably change soon, though).
The comparison Anandtech did is flawed: he didn't actually test the quality of Nvidia's GPU encoder but that of Arcsoft's x264 GPU encoder, which is awfully buggy. Arcsoft also has another encoder that isn't based on x264 at all, a very fast CUDA encoder, though it isn't up to Nvidia's quality and isn't used in any of their products yet.
Arcsoft is practically the only one that does not use Nvidia's own encoder (the nvcuvenc API), using their own x264 GPU mod inside MediaConverter 7 instead.
We don't know yet how QuickSync was tuned in the profiles used for encoding (i.e. what quality-level settings they pass to the Media SDK API); Cyberlink could have optimized MediaEspresso for speed rather than quality.
There are indications (from a recent compare) that QuickSync, as used in Cyberlink's encoder, tends to destroy the grain layer more uniformly than Nvidia's CUDA encoder, keeping temporal grain differences low over time by "denoising" into a blurrier result.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004
CruNcher is offline   Reply With Quote
Old 12th January 2011, 04:59   #204  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
Parkjoy (maybe slowed down to 24p) is always a tough one.

Other than that, I don't know what you have access to. Any DVD or BluRay source is fine.

CruNcher, you're unintelligible as always, which is too bad, because I get the feeling you have a unique perspective in all this talk of GPU encoding... or at least access to lots of these encoders!
__________________
These are all my personal statements, not those of my employer :)

Last edited by Blue_MiSfit; 12th January 2011 at 05:04.
Blue_MiSfit is offline   Reply With Quote
Old 12th January 2011, 05:20   #205  |  Link
mariush
Registered User
 
Join Date: Dec 2008
Posts: 589
Deadrats... encode this at 1080p 6-8 Mbps and 720p 2.5 Mbps: http://www.google.com/search?q=SAmsu...o+Oceanic+Life It should be a 500 MB H.264 file at 40 Mbps.

Then look at the differences. But do post the encoding settings, otherwise any comparison is useless.
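
Something like this 2-pass x264 command line should hit the requested average; the file names, the preset and the 6000 figure are placeholders for whichever rate you test, and it assumes an lavf-enabled x264 build that can read the mkv directly (for the 720p run, resize first, e.g. in an avs script, and change --bitrate to 2500):

Code:
x264 --pass 1 --bitrate 6000 --preset slow --stats ocean.stats -o NUL ocean_1080p.mkv
x264 --pass 2 --bitrate 6000 --preset slow --stats ocean.stats -o ocean_test.264 ocean_1080p.mkv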
mariush is offline   Reply With Quote
Old 12th January 2011, 05:56   #206  |  Link
CruNcher
Registered User
 
CruNcher's Avatar
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Support for Sandy Bridge's encoder will also be coming from a company that I guess many wouldn't have expected it from:

Quote:
Badaboom 2.0 is coming soon!

And with it is coming hardware-accelerated support for both NVIDIA Fermi GPUs and Intel Sandy Bridge hardware! The blazingly fast media converter that formats video files for a variety of devices, such as the iPad or Sony PSP, will also add new features, including the capability to run on any Windows PC! Stay tuned for more information.


We are announcing today that the next version of Badaboom, called version 2.0, is on its way and is bringing with it additional hardware support! As promised, this version will include support for NVIDIA Fermi GPUs. All of the latest NVIDIA graphics cards will be supported. A piece of news that may be a bit more unexpected, however, is that Badaboom 2.0 will also support the new Intel Sandy Bridge chips and their hardware-accelerated video processing capabilities! We are impressed with the performance of Sandy Bridge, and adding support for it means a couple things:

1. Users that have a system with the new Sandy Bridge chip will now be able to utilize Badaboom's transcoding power for the same decode and encode formats as the NVIDIA version.

2. The implementation of the Intel Media SDK in Badaboom means that when no NVIDIA GPU or Sandy Bridge chip is detected, Badaboom will still be able to run in software mode. This means you will be able to run Badaboom on any Windows PC!

On the feature side of things, Badaboom 2.0 will add a queuing mechanism (the most requested feature) so that you no longer need to begin each individual transcode manually. More details around this to come. Other added features include audio gain and large .m2ts file support. Combine that with an updated user interface and added output device profiles, and Badaboom 2.0 will be ready to rock.

As we said in the post last week, Badaboom 2.0 will be available for purchase in February. We are excited to be partnering with both NVIDIA and Intel to provide the best transcoding experience possible.
The marked part also confirms what DS already said: it's Intel's H.264 software encoder (see the MSU compare) using the Sandy Bridge hardware to accelerate itself when available.
Here is a video demoing Elemental's Media SDK support: http://www.youtube.com/watch?v=BNPf-lMMYLY. Unfortunately they always take the iTunes H.264 encoder as the reference point for the consumer base (and we all know how slow that is).
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 12th January 2011 at 06:25.
CruNcher is offline   Reply With Quote
Old 12th January 2011, 08:21   #207  |  Link
popper
Registered User
 
Join Date: Mar 2006
Posts: 272
it's not really a surprise as such, CruNcher; it's another tick in the 'PR Innovators' book.

it's not even a surprise to me that the AMD OpenCL SDK, with its so-called Open Decode library that uses the DRM/UVD, didn't get a mention either.

as everyone who's tried to work with AMD knows, you can't even get them to provide the Linux Open Decode library to go with the SDK, or get simple, already-reported AMD driver bugs fixed in a timely manner.

so there's no reliable HW decode across the 3 versions of the broken DRM/UVD ASIC, even from the one 3rd-party Linux dev who did sign the NDA; he's all but given up on them now, and who could blame him. AMD just doesn't want to play ball, never mind be in the game, it seems.

Last edited by popper; 12th January 2011 at 08:45.
popper is offline   Reply With Quote
Old 12th January 2011, 14:10   #208  |  Link
CruNcher
Registered User
 
CruNcher's Avatar
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by popper View Post
it's not really a surprise as such, CruNcher; it's another tick in the 'PR Innovators' book.

it's not even a surprise to me that the AMD OpenCL SDK, with its so-called Open Decode library that uses the DRM/UVD, didn't get a mention either.

as everyone who's tried to work with AMD knows, you can't even get them to provide the Linux Open Decode library to go with the SDK, or get simple, already-reported AMD driver bugs fixed in a timely manner.

so there's no reliable HW decode across the 3 versions of the broken DRM/UVD ASIC, even from the one 3rd-party Linux dev who did sign the NDA; he's all but given up on them now, and who could blame him. AMD just doesn't want to play ball, never mind be in the game, it seems.
It's sad that AMD missed that opportunity for wide adoption early on (imho it was a bad mistake). I remember when I made Donald aware that he could use Nvidia's DSP under Win32/64 via their SDK; at that time Nvidia hadn't released the Linux part of it either, that came later as the whole VDPAU concept slowly emerged. They still haven't opened the encoder part (the nvcuvenc API) for Linux, though, which they have done for Win32/64 for a few SDK releases now.
The overall response to opening the encoder (opening in the sense of freeing the API from NDA barriers; there were apparently internal discussions about an entirely open-source GPU encoder based on their research, and maybe that idea hasn't fully died yet, though you can't forget the ISVs doing business with their closed encoders now, so such a project could hurt in some ways and help in others depending on your customer base) was the same as with the decoder back then: within weeks a lot of applications, even no-name ones (not always good ones, but do they really need to care about that?), adopted it in no time, also thanks to Nvidia's excellent support and documentation, and not only the big ISVs had access after a short ISV-exclusive NDA period. That's the power of their ecosystem (Donald and the Doom9 community helped a lot debugging VPx in the same way, which improved the quality for every application and every user involved in the ecosystem). AMD, it seems, cared more about exclusivity, kept access down to a minimum of ISVs at first (for a very long time), and for Linux it seems even more heavily restricted.
I can mostly only speak about Nvidia, and strategy-wise they did everything right from day one without hurting the business side too much (on the desktop). They used the power of the community as well as possible, breaking down the ISV NDA barriers really fast without hurting anyone else too much, and seeing more of the positives than the negatives that would arise from that "openness" (I can only congratulate the brains behind those decisions at Nvidia). In the end that made everyone happy and part of that software/hardware ecosystem (ISVs, consumers, non-ISV developers, geeks).
AMD reacted a little slowly, which is absolutely sad given that they have the more powerful hardware at hand, and given that many developers made them aware early on, as soon as Nvidia came up with it, that such support would be good and that users wanted it. (On the other side, most resources at that time were going into Fusion and how to integrate ATI as efficiently as possible into their future concept.) It's almost predictable that Fusion would have had a much better adoption rate with such an "open" ecosystem available to almost everyone, and we can see AMD finally trying to speed things up again, at least on the Win32/64 side. It's sad for the Linux part, though, and Nvidia is still much stronger there in terms of overall developer support, as you explained.


Btw, to all the P67 board buyers: forget what the reviews are telling you, that Intel's QuickSync will never work if a discrete GPU is being used. That is not true; it is either a BIOS restriction or a software lock at the OS driver level, not a chipset hardware limitation.
Intel has been doing a lot of research into software-restricting their hardware for several years now. They started by making several things remotely unlockable last year via a service call on their motherboards and even on the lower-priced Pentium chips (upgrade to a higher series via a call, no problem). They did the same this time on the P67 series of boards; see this talk by an Intel guy as proof: http://www.youtube.com/watch?v=gui_6sNc7Eo
Intel is fully committed to DRMing their hardware for business purposes now. So yes, who knows how it works on the chip side, and whether the same series is restricted the same way by a clever combination of BIOS, key, motherboard and operating system. It's the same way they are bringing in TPM now, to finally let Hollywood provide a full 1080p experience on demand with no delay after the cinema release, because they feel safe now with Microsoft's and Intel's TPM ecosystem (we will see how long that holds up, though).
Please also keep in mind that I'm not saying it's possible without Lucid Virtu to actually use the discrete GPU and the CPU graphics together at the same time; that obviously isn't physically possible in the hardware itself. (Framebuffer copying under Vista/Win7 is very efficiently doable since everything is on the GPU already, though it most probably adds a few ms of latency again, especially in windowed mode as shown in that video, so gamers and people thinking about multi-GPU display setups should be careful; still, it is cool, and I guess a dream for everyone at the consumer level to see it.) But the fact that their software can see QuickSync at all invalidates the claim that the P67 turns everything off in hardware; Intel's software layer (BIOS/OS driver) is actually doing that, and Lucid got permission to disable it in order to do what you see in that video. The other logical conclusion would be that the Intel guy is mixing things up here (confusing H67 with P67, several times over) and that all the reviewers like Anandtech are correct when they say the P67 doesn't support Intel's QuickSync GPU feature at all (information they get from Intel and all believe to be true; the motto being: better not to think, just believe everything my partner tells me and publish it). You decide what the truth is.

Now, to make it even more confusing:

Anandtech published this article: http://www.anandtech.com/show/4113/l...n-sandy-bridge
Quote:
To demonstrate the technology Intel ran an H67 motherboard with a GeForce GTX 480. Lucid’s software was installed which allowed for the GTX 480 to run and its frame buffer output to be copied to main memory and sent out via Intel’s Flexible Display Interface through the DVI port on the back of the motherboard.
So the newest Intel innovation: an auto-switching chipset board inside this machine? http://www.youtube.com/watch?v=t81xbq53WIA
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 12th January 2011 at 17:45.
CruNcher is offline   Reply With Quote
Old 13th January 2011, 00:05   #209  |  Link
deadrats
Banned
 
Join Date: Oct 2010
Posts: 119
media sdk vs cuda vs x264

alright everyone, as promised i conducted a test encode and i'm uploading the results for everyone to see. for source i went here:

http://www.demo-world.eu/trailers/hi...n-trailers.php

and downloaded "The Gorgeous Ocean World" demo under the "HDclub" heading. i think this is a good test for any encoder: it's a nice 1920x1080p29.97 blu-ray source with a nominal bit rate of only 8.34 mb/s, lots of good motion, and lots of vibrant colors. it also uses 256 kb/s ac3 audio.

for a target, i chose 1280x720p29.97 mkv, with 128 kb/s audio.

for bit rate i did the following: i calculated a per-pixel bit rate for the 1080p source, which works out to .134 bits/pixel, and derived from it a nominal bit rate of 3.7 mb/s for the video portion of the target encode. all encodes were done using cbr.
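
(for anyone who wants to check the arithmetic, it's simply: 8,340,000 b/s / (1920 x 1080 x 29.97 pixels/s) = ~0.134 bits/pixel, and 0.134 bits/pixel x (1280 x 720 x 29.97 pixels/s) = ~3,700,000 b/s, i.e. ~3.7 mb/s for the 720p target.)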

i did test encodes with the default settings for x264 and cuda (remember, i don't speak japanese), and i threw in a special treat: tmpg express 5 allows you to use the media sdk encoder in software mode if you don't have a QS-enabled processor installed. i did a test encode with this as well; it was slow as hell, but it should be indicative of the quality of the encodes QS will offer.

note: it appears that quick sync is little more than a hardware-accelerated version of the IPP encoder. the only commercial app that uses IPP, as far as i know, is gom encoder (in addition to its default x264 encoder), and in numerous tests i found no quality difference between the IPP encoder and x264; they pretty much traded punches. under some tests x264 showed slightly better quality, under others IPP did.

the big benefit of IPP was that it ran like stink on a monkey with an intel cpu: on a quad-core phenom 2, x264 was easily twice as fast as IPP, while on a dual-core penryn (E7400), IPP smoked x264. this really shouldn't be surprising, considering IPP stands for "intel performance primitives".

the source runs 4:05; x264 took 13:35, cuda took 8:17, and the intel media sdk software encoder brought up the rear with a super-slow 44:39.

you can download the finished encodes here:

cuda
http://www.mediafire.com/?0dy2c1n8mjy6rj1

media sdk
http://www.mediafire.com/?6xp933gxj2xtt9e

x264
http://www.mediafire.com/?rcc9yb3elv4ahag

from a quality standpoint, i would rate the x264 encode and the cuda encode as roughly equal: in certain parts x264 seemed to hold a minor edge in visual quality, in other parts the cuda encoder appeared to produce the better results. overall i would call it a tie.

the sdk encoder was an interesting case: the overall impression i got was that it produced slightly lower quality than either x264 or cuda, but more vibrant colors.

my guess is that if someone wanted to nitpick, they could find frames from each encode to support the claim that any one of the encoders is better than the other two.

the reality is that 3.7 mb/s for 720p is ridiculously low; people usually capture hdtv at 12-18 mb/s, and let's be honest with ourselves: 8.3 mb/s is atrocious for 1080p. if we had the original source, straight from the camera (not a previously compressed source), we could get much higher quality encodes with all 3 encoders, even at 3.7 mb/s for 720p.

quite frankly, most people do not have access to uncompressed or losslessly compressed film transfers. what most people have are blu-rays that they "backup" (i.e. steal, pirate, "fair use" copy), hdtv captures, or footage shot on consumer-grade camcorders.

these people won't be transcoding down to 3.7 mb/s for 720p (unless they are complete imbeciles with a "bit rate starving" fetish); they'll be using saner bit rates. at 5 mb/s for 720p any differences between the 3 encoders start to vanish, and at what i consider the minimum for 720p, 8 mb/s, there is no difference at all.

as a test, i tried the cuda encoder at 720p 8 mb/s, just to see what kind of slow down could be expected, and the test encode finished in an almost identical 8 and a half minutes, while x264 slowed down to 14 and a half minutes.

this is definitely not a conclusive test, nor do i expect anyone to accept these results; i'm sure the criticism will fly from the x264 faithful. but it does seem like software-based encoding is destined to go the way of the dodo.
deadrats is offline   Reply With Quote
Old 13th January 2011, 00:14   #210  |  Link
nurbs
Registered User
 
Join Date: Dec 2005
Posts: 1,460
Why would you use CBR encoding?
And why do you think 3.7 Mbps is ridiculously low for 720p? I do encodes at CRF 21 (veryslow, film, lvl 3.1, 3 b-frames) at that resolution and most of them come out below 3 Mbps. They look fine on normal viewing.
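
For reference, those settings map to roughly this command line on a vanilla x264 build (the input and output names are just placeholders):

Code:
x264 --crf 21 --preset veryslow --tune film --level 3.1 --bframes 3 -o encode.264 source.avs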

Edit:
By the way, you certainly didn't encode with x264 using its defaults. The defaults aren't main profile with both trellis and b-pyramid disabled. They would be slower, though.

Last edited by nurbs; 13th January 2011 at 00:33.
nurbs is offline   Reply With Quote
Old 13th January 2011, 00:24   #211  |  Link
Mixer73
Registered User
 
Join Date: Nov 2007
Posts: 240
Quote:
Originally Posted by CruNcher View Post
So the newest Intel innovation: an auto-switching chipset board inside this machine?
It's not auto-switching; it's copying the framebuffer back from the Nvidia card to the Intel GPU for display, and the limitations make it totally worthless.
Mixer73 is offline   Reply With Quote
Old 13th January 2011, 00:29   #212  |  Link
Mug Funky
interlace this!
 
Mug Funky's Avatar
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
i'll chime in and ask why use bits/pixel? there is nothing that bits/pixel can meaningfully say about the actual source. and sources vary enormously.

my tests from HDCAM-SR sources (as near to uncompressed as anyone's likely to get without working straight from the original camera neg or the output of a grading workstation), just using x264 at crf 22 show a massive range of bitrates for different sources.

some (usually shot on RED in daylight, or at least adequate light) features will come in at about 3.5-4mbps for a 1080p24, and some (usually 16mm or a very fast/grainy 35mm) will come in at 20mbps+ at the same res. bits/pixel will not help there. even some HDCAM shot in studio conditions will have huge bitrate requirements due to oversharpening and the everything-in-focus effect of 1/3 inch sensors.

i'll also chime in and say that with these sources, casual viewing reveals little to no difference between x264 encodes at crf22 (which is on the "lower-quality" end) and sonic cinevision at blu-ray bitrates. critical viewing will reveal some grain oddities in x264, but bear in mind we're talking about a 3:1 size difference over the cinevision encodes.
__________________
sucking the life out of your videos since 2004
Mug Funky is offline   Reply With Quote
Old 13th January 2011, 00:41   #213  |  Link
deadrats
Banned
 
Join Date: Oct 2010
Posts: 119
Quote:
Originally Posted by nurbs View Post
Why would you use CBR encoding?
And why do you think 3.7 Mbps is ridiculously low for 720p? I do encodes at CRF 21 (veryslow, film, lvl 3.1, 3 b-frames) at that resolution and most of them come out below 3 Mbps. They look fine on normal viewing.

Edit:
By the way, you certainly didn't encode with x264 using its defaults. The defaults aren't main profile with both trellis and b-pyramid disabled. They would be slower, though.
1) i chose cbr because i wanted the fastest encode possible, i wanted to make sure the b-frames would have adequate bit rate, and, since i don't speak japanese, it was the easiest mode to figure out.

2) 3.7 mb/s, as i pointed out, works out to just .134 bits per pixel; considering that an uncompressed pixel is an 8-bit "entity", using about 1/78th as many bits to represent the same data is stupid beyond belief.

when i said i used the default settings, i should have said the defaults tmpg express 5 chooses for the encoders, with 2 exceptions: it chooses "high" profile and level 4.0; i changed that to "main" profile and level 4.1. i don't really know why, it just seemed like a good idea at the time.

as soon as i get an english version of the app i will be able to play around with the settings and really see what each encoder can do.
deadrats is offline   Reply With Quote
Old 13th January 2011, 00:49   #214  |  Link
deadrats
Banned
 
Join Date: Oct 2010
Posts: 119
Quote:
Originally Posted by Mug Funky View Post
i'll chime in and ask why use bits/pixel? there is nothing that bits/pixel can meaningfully say about the actual source.
it seems like the only logical method for choosing a target bit rate when down-rezzing. if your source is 1080p30 and the calculated per-pixel bit rate is .5 bits per pixel, logic would dictate that if you wish to go to 720p30 and keep comparable quality, the target should use the same bits-per-pixel ratio. if you go higher, you are using too much bit rate, as you can't add detail that isn't already there; if you use less, you are bit-rate starving the encode and killing the per-pixel quality.
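
(to put numbers on it: .5 bits/pixel at 1920 x 1080 x 30 works out to about 31.1 mb/s, and the same .5 bits/pixel at 1280 x 720 x 30 works out to about 13.8 mb/s, so the 720p target scales down in proportion to the pixel count.)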

it's bad enough that you are losing about 1.1 million pixels per frame when you go from 1080p to 720p; why compound the quality loss by simultaneously reducing the per-pixel bit rate?

the people who take a commercially encoded blu-ray, 1080p at 25-35 mb/s, transcode it to 720p at < 4 mb/s, and actually believe they know what they are doing or that their encodes are high-quality productions are retards of the highest caliber; they should be barred from using a computer until they squeeze their heads out of their cans.
deadrats is offline   Reply With Quote
Old 13th January 2011, 00:54   #215  |  Link
poisondeathray
Registered User
 
Join Date: Sep 2007
Posts: 5,346
Quote:
2) 3.7 mb/s, as i pointed out, works out to just .134 bits per pixel; considering that an uncompressed pixel is an 8-bit "entity", using about 1/78th as many bits to represent the same data is stupid beyond belief.
You're disregarding content complexity and inter-frame compression. Counting bits per pixel in video compression is next to useless. For example, a black frame will take a lot less to compress than a noisy, detailed frame, and a series of black frames will take far less than a detailed scene.


You're also using a low-quality source (8 Mb/s for a 1080p blu-ray? Come on, WTF?). I suspect a higher-quality source would show bigger differences, and it looks like you're deliberately using poor settings (or maybe it's just the Japanese GUI)?


Does anyone else see the hypocrisy here? deadrats is calling Dark Shikari out for not being objective, and calling software-based encoding dead, despite basing his "conclusions" on zero objective evidence.


Come on guys, let's see some properly done objective testing.

Last edited by poisondeathray; 13th January 2011 at 00:57.
poisondeathray is offline   Reply With Quote
Old 13th January 2011, 01:00   #216  |  Link
CruNcher
Registered User
 
CruNcher's Avatar
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by Mixer73 View Post
It's not auto-switching; it's copying the framebuffer back from the Nvidia card to the Intel GPU for display, and the limitations make it totally worthless.
Ehh, I didn't mean it that way; it was sarcasm, based on the conflicting stories: the Intel guy says there's a P67 chipset inside, Anandtech says an H67 chipset. Both at the same time would be the Z68 chipset.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 13th January 2011 at 01:04.
CruNcher is offline   Reply With Quote
Old 13th January 2011, 01:19   #217  |  Link
Mug Funky
interlace this!
 
Mug Funky's Avatar
 
Join Date: Jun 2003
Location: i'm in ur transfers, addin noise
Posts: 4,555
Quote:
Originally Posted by deadrats View Post
it's bad enough that you are losing about 1.1 million pixels per frame when you go from 1080p to 720p; why compound the quality loss by simultaneously reducing the per-pixel bit rate?

the people who take a commercially encoded blu-ray, 1080p at 25-35 mb/s, transcode it to 720p at < 4 mb/s, and actually believe they know what they are doing or that their encodes are high-quality productions are retards of the highest caliber; they should be barred from using a computer until they squeeze their heads out of their cans.
i wasn't aware we were resizing in these tests.

as far as scaling quality with pixel dimensions goes, a simple linear relation may help, but it will not tell the whole story (i.e. you are using fixed block sizes, which means you'll be proportionally increasing the high-frequency content as well, making compression and lossless coding of the compressed data less efficient in a nonlinear way).

my approach (which i'll admit is slightly naive, but is more robust than bit/pel) is to simply find a crf i like the look of, and encode everything at that. x264 will decide the bitrate, which scales somewhat linearly with pixel dimension, but not quite (smaller frame sizes need a fair bit more per pixel). i think that TMPGenc has a constant quality setting, though being in japanese could be an issue of course (there wasn't an english translation yet? wow).

one very obvious alternative though: use the freaking vanilla x264 that everyone else uses! why complicate things with TMPGenc at this point? (a) you don't know how to use it, by your own admission, and (b) nobody else has it yet, so verification is difficult.

use a build from here:

http://x264.nl/
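
for what it's worth, something along these lines would roughly reproduce the conditions deadrats described on a vanilla build (main profile, level 4.1, ~3.7 mbps with a CBR-ish VBV); the file names are placeholders, and the avs script is assumed to handle the resize to 1280x720:

Code:
x264 --bitrate 3700 --vbv-maxrate 3700 --vbv-bufsize 3700 --profile main --level 4.1 -o test_720p.264 resize_720p.avs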

also, an adult BD as a test? those are almost exclusively shot on HDV, with fast motion, and (according to various sources) they often use lenses specifically designed to soften away the ugly truth of what sex looks like close-up. in this situation of loads of blocks, double-compressed source (starting in-camera as mpeg-2), and soft source it's no wonder CUDA and x264 look the same.
__________________
sucking the life out of your videos since 2004
Mug Funky is offline   Reply With Quote
Old 13th January 2011, 01:23   #218  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by Mug Funky View Post
i'll also chime in and say that with these sources, casual viewing reveals little to no difference between x264 encodes at crf22 (which is on the "lower-quality" end) and sonic cinevision at blu-ray bitrates. critical viewing will reveal some grain oddities in x264, but bear in mind we're talking about a 3:1 size difference over the cinevision encodes.
Cinevision is not the greatest encoder, but it's hard to believe that x264 can do the same at 3x smaller file sizes.

Are both encodes BD compliant?

Andrew
kolak is offline   Reply With Quote
Old 13th January 2011, 01:26   #219  |  Link
kolak
Registered User
 
Join Date: Nov 2004
Location: Poland
Posts: 2,843
Quote:
Originally Posted by Mug Funky View Post
also, an adult BD as a test? those are almost exclusively shot on HDV, with fast motion, and (according to various sources) they often use lenses specifically designed to soften away the ugly truth of what sex looks like close-up. in this situation of loads of blocks, double-compressed source (starting in-camera as mpeg-2), and soft source it's no wonder CUDA and x264 look the same.
I was not interested in CUDA encoders at all, because x264 outperforms them massively. Has this changed?

Andrew
kolak is offline   Reply With Quote
Old 13th January 2011, 01:26   #220  |  Link
deadrats
Banned
 
Join Date: Oct 2010
Posts: 119
Quote:
Originally Posted by poisondeathray View Post
You're also using a low-quality source (8 Mb/s for a 1080p blu-ray? Come on, WTF?). I suspect a higher-quality source would show bigger differences, and it looks like you're deliberately using poor settings (or maybe it's just the Japanese GUI)?
i'm with you; i'm the one that's been calling for using enough bit rate for an encode. but i wanted to use a source that could easily be shared with you guys (so you can see the "control" sample), and if that 8 mb/s 1080p is a first-generation encode, then it's really not too bad.

if you have a higher quality source that can easily be shared with everyone here, then post a link.

as for deliberately using poor settings: as i said, i don't speak japanese and i can't read japanese; i was only able to figure out how to use the default settings because i recognized the basic layout from previous versions of the software.

Quote:
Does anyone else see the hypocrisy here? deadrats is calling Dark Shikari out for not being objective, and calling software-based encoding dead, despite basing his "conclusions" on zero objective evidence.
DS is hardly objective, nor are his "followers". in any encoding test where x264 hasn't been shown to be the clear, conclusive winner, DS has spent pages attacking the tester, his methodology, and the chosen source; a look through this forum and his "diary of an x264 developer" will find one diatribe after another complaining about testers who failed to find his "baby" the undisputed champion.

if you really want to see hypocrisy, read through his rants and then read his suggested methodologies for conducting an encoder test, and you will see him recommend that end users do the exact same things he rips other testers for doing.

Quote:
Come on guys, let's see some properly done objective testing.
i wasn't objective?!?

i uploaded the test encodes and linked to the source so that you guys can see for yourselves; how much more objective could i be?

feel free to conduct your own, better, test and feel free to post the results.
deadrats is offline   Reply With Quote