Old 6th July 2011, 14:07   #401  |  Link
kieranrk
Registered User
 
Join Date: Jun 2009
Location: London, United Kingdom
Posts: 707
Quote:
Originally Posted by Dark Shikari View Post
AMD has never contacted anyone on the x264 team for any reason ever, nor have I heard of plans to add encoding hardware to any AMD CPU.
I contacted the Head of Developer Relations at AMD through a contact last year, and our request was rebuffed in spite of describing our contact with Intel...
kieranrk is offline   Reply With Quote
Old 6th July 2011, 17:47   #402  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,229
I think part of the reason is that if they helped out one developer (x264 here), other developers would cry foul that they didn't get the same chance. The end result is a lot of consultation hours, a risk of corporate espionage (yes, that's a real thing, and far too common to ignore), and the risk of companies (not in x264's case) doing a poor implementation and then claiming it was done with the vendor's assistance - which doesn't help Intel's or AMD's case, it hinders it!

So at best you can really only work with what they provide to everyone. Of course, then there's the assistance to game developers from (mostly) Nvidia, which gets them a nice 'The Way It's Meant To Be Played' video at the start of the game, and although games aren't specifically designed to run poorly on the competitor's cards (AMD GPUs), they certainly aren't tweaked in their favour either. It's pretty much the reason why, even after DirectX 10.1 was released, games only supported DirectX 10, even though ATI (now part of AMD) cards supported DirectX 10.1 - Nvidia didn't support DirectX 10.1 for a long time, even on 'newer generation' cards, as it would have required some redesigning. Some games like Bioshock were originally DirectX 10.1 but were stripped down to DirectX 10. The rumour is that this happened because DirectX 10.1 fixed one of the major flaws of DirectX 10, slow antialiasing (I think it was that, from memory!). The original release heavily favoured ATI cards because of this, since Nvidia cards could only run in DirectX 10, and as a result the 10.1 support was pulled; the reason given officially was shown to be complete bull! Despite all that, at the end of the day almost all games these days are console ports and either have poor engine support or very low quality ported textures that rely on GPU power to make them look better.

Anyway, the reason this applies to the argument here is that providing support can be risky, even when it seemingly works in your favour. Something that favours Intel or AMD on the CPU side would be scrutinised far more than something that favours Nvidia or AMD on the graphics side, but in both cases it can be risky, especially if, say, the quality of the output derived from Intel's help doesn't meet expectations.
burfadel is offline   Reply With Quote
Old 6th July 2011, 17:50   #403  |  Link
Dark Shikari
x264 developer
 
Join Date: Sep 2005
Posts: 8,666
Quote:
Originally Posted by burfadel View Post
I think part of the reason is that if they helped out one developer (x264 here), other developers would cry foul that they didn't get the same chance.
Except that they do work with other developers. All the time. Especially developers of programs used to benchmark their CPUs.
Dark Shikari is offline   Reply With Quote
Old 7th July 2011, 00:15   #404  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Quote:
Originally Posted by Dark Shikari View Post
Especially developers of programs used to benchmark their CPUs.
Then why not create an 'official' x264 benchmark and field it to the tech sites?

Effort
+
GUI
+
benchmarks for easily identifiable operations (SAD, motion search, etc) in x264
+
checkasm --bench for advanced results
+
x264 1080p encode test with --preset superfast, --preset medium, and --preset veryslow
+
Pretty graphs comparing results to reference systems from each CPU platform x264 supports
+
Weight the results and generate an arbitrary score of 1000-10000 (bigger numbers are better) and a ranking system similar to other benchmarks
+
(Optional) Video ABX test to assess video quality in motion between presets and non-x264 encodes
+
(Optional) Image ABX test to assess per-frame quality PNG captures between presets and non-x264 encodes
+
(Optional) Source decoding speed (bottleneck) benchmark



I'm half-joking, but if you truly think it could cause CPU vendors to care more about x264, how hard would it be for someone on the dev team to create and maintain such a thing? Give CPU vendors something to point to and brag about if/when they introduce architectural changes that benefit x264 and give them a performance advantage over competitors. It could also serve the additional purpose of dispelling the myth that x264 is unable to match GPU/Quick Sync encoders for speed at similar or better quality. I'm sure you're unsurprised that the average user on an internet forum uses a GPU/Quick Sync encoder because they honestly believe the marketing hype that it has a speed advantage over CPU encoding, not because it uses fewer CPU resources.
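To give an idea of the encode-timing part, here's a rough Python sketch (assuming an x264 binary on PATH, a local 1080p Y4M clip, and made-up per-preset reference numbers for the scoring - an illustration of the idea, not the proposed tool):

Code:
# Rough sketch of the encode-timing and scoring part of the proposed benchmark.
# Assumptions (hypothetical, not any real tool): an x264 binary on PATH, a local
# 1080p Y4M clip called test_1080p.y4m, and placeholder per-preset reference fps.
import subprocess
import time

PRESETS = ["superfast", "medium", "veryslow"]
CLIP = "test_1080p.y4m"                       # assumed 1080p test source
REF_FPS = {"superfast": 200.0, "medium": 60.0, "veryslow": 10.0}  # placeholder baselines

def count_frames(path):
    """Rough frame count for a Y4M file: count FRAME headers, reading in chunks."""
    count = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            count += chunk.count(b"FRAME")
    return count

def run_preset(preset):
    """Encode the clip once with the given preset and return frames per second."""
    start = time.time()
    subprocess.run(
        ["x264", "--preset", preset, "--crf", "23",
         "-o", f"bench_{preset}.264", CLIP],
        check=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return count_frames(CLIP) / (time.time() - start)

def score(results):
    """Weight each preset's fps against the reference system and scale the result."""
    ratio = sum(results[p] / REF_FPS[p] for p in PRESETS) / len(PRESETS)
    return int(round(ratio * 1000))           # arbitrary scale, bigger is better

if __name__ == "__main__":
    results = {p: run_preset(p) for p in PRESETS}
    for p in PRESETS:
        print(f"--preset {p}: {results[p]:.1f} fps")
    print("score:", score(results))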

Last edited by cyberbeing; 7th July 2011 at 04:07.
cyberbeing is offline   Reply With Quote
Old 7th July 2011, 00:21   #405  |  Link
Dark Shikari
x264 developer
 
Join Date: Sep 2005
Posts: 8,666
Quote:
Originally Posted by cyberbeing View Post
Then why not create an 'official' x264 benchmark and field it to the tech sites?
There already is one.
Dark Shikari is offline   Reply With Quote
Old 7th July 2011, 00:44   #406  |  Link
cyberbeing
Broadband Junkie
 
Join Date: Oct 2005
Posts: 1,859
Well, you could always replace the quasi-official Tech ARP x264 benchmark with something that better suits your needs for x264 vs GPU/Quick Sync comparisons and exposes checkasm results to the world. The current x264 bench is obviously not doing enough to make CPU vendors care about x264. CPU/GPU vendors love synthetic benchmarks, and their marketing departments care more about non-subjective speed claims than about quality. Show them you can beat them at the speed game with stock, quality-focused x264, identify for the user the slow operations taking the majority of x264's CPU time, and have big-name tech websites inform users that x264 is faster and higher quality than XXX GPU/Quick Sync encoder, and maybe they'll start to care.
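For the checkasm side, a trivial wrapper could just archive the raw --bench output next to some machine info - a sketch, assuming a locally built checkasm binary from the x264 tree and treating its output as opaque text:

Code:
# Sketch of archiving checkasm --bench output for reporting.
# Assumes a locally built ./checkasm from the x264 source tree; its output
# format is treated as opaque text and simply saved alongside machine info.
import platform
import subprocess
from datetime import datetime, timezone

def run_checkasm(binary="./checkasm"):
    """Run checkasm --bench and return whatever it prints."""
    proc = subprocess.run([binary, "--bench"], capture_output=True, text=True)
    return proc.stdout + proc.stderr

if __name__ == "__main__":
    header = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "machine": platform.machine(),
        "cpu": platform.processor(),
    }
    with open("checkasm_bench.txt", "w") as out:
        for key, value in header.items():
            out.write(f"# {key}: {value}\n")
        out.write(run_checkasm())
    print("saved checkasm --bench results to checkasm_bench.txt")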

Last edited by cyberbeing; 7th July 2011 at 01:39.
cyberbeing is offline   Reply With Quote
Old 7th July 2011, 08:08   #407  |  Link
Ghitulescu
Registered User
 
Join Date: Mar 2009
Location: Germany
Posts: 5,769
I think Dark Shikari is right. No developer wants to trade speed for uncertainty. Once you rely on third-party implementations, their bugs are silently incorporated into your product. Nobody cares whether the fault lies with the CPU or the software; the bottom line is that the encode is faulty. It may even be more work to patch around CPU errors. Plus there's the DRM issue that might be implemented deep in the core.
__________________
Born in the USB (not USA)
Ghitulescu is offline   Reply With Quote
Old 9th July 2011, 23:04   #408  |  Link
burfadel
Registered User
 
Join Date: Aug 2006
Posts: 2,229
Quote:
Originally Posted by Dark Shikari View Post
Except that they do work with other developers. All the time. Especially developers of programs used to benchmark their CPUs.
Quite true! I wasn't even thinking about benchmarks when I said that.

I probably should have said developers of everyday, actually useful programs - benchmark apps aren't really 'everyday' apps. I guess the difference is that they see most apps as offering too insignificant a benefit to be worth considering!

Maybe if more hardware review sites considered using the x264 benchmark, things would change!
burfadel is offline   Reply With Quote
Old 17th September 2011, 08:25   #409  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 1,120
New Quick Sync info for Intel's Ivy Bridge platform: http://www.anandtech.com/show/4830/i...ture-exposed/5

"The increase in EUs and improvements to their throughput both contribute to increases in Quick Sync transcoding performance. Presumably Intel has also done some work on the decode side as well, which is actually one of the reasons Sandy Bridge was so fast at transcoding video. The combination of all of this results in up to 2x the video transcoding performance of Sandy Bridge. There's also the option of seeing less of a performance increase but delivering better image quality."

Let's cross our fingers that there is x264 support, though I can't see that happening.

Last edited by hajj_3; 17th September 2011 at 14:53.
hajj_3 is offline   Reply With Quote
Old 17th September 2011, 13:18   #410  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
IMHO Quick Sync already does a nice job, and putting Ivy Bridge's performance improvement into actual quality, so that it keeps climbing towards software implementations like x264, is the right way to go rather than making it even faster - it is already very fast and a very nice user experience.
You still have to find the right balance, though: quality, power consumption, encoding performance and easily achievable, fast workflows will always remain the key factors in user experience. Microsoft in particular is becoming very aggressive about this "fast and fluid" user experience versus Google's Android ecosystem, and there are a whole lot of changes in those areas, including removing DirectShow completely from the top layer of the UI experience. Intel's Quick Sync transcoding engine plays a major role here and will by default be one of the encoders millions of users end up using in the future (for 8-bit personal content encoding).
Users don't care what runs behind the curtain; they care about the end result and having a fluid experience while transcoding, editing, playing or streaming. They care more about how long their system's battery will last, and they will accept a loss in quality that in most cases isn't really visible at all. There may be a chance to bring the x264 experience over to that world, but I'm not sure it can survive in such an environment anymore: http://channel9.msdn.com/Events/BUIL...2011/PLAT-783T
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 17th September 2011 at 13:57.
CruNcher is offline   Reply With Quote
Old 17th September 2011, 14:52   #411  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 1,120
You can do either: keep the current video quality and get up to 2x the speed, or choose the better quality setting and still be slightly faster than Quick Sync currently is on Sandy Bridge. It's a very nice improvement. Intel is already way faster than CUDA/Stream, so the speed improvement will hopefully mean Microsoft builds H.264 encoding support into Windows 8 or something, as it would be extremely fast.
hajj_3 is offline   Reply With Quote
Old 17th September 2011, 15:38   #412  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
Originally Posted by hajj_3 View Post
You can do either: keep the current video quality and get up to 2x the speed, or choose the better quality setting and still be slightly faster than Quick Sync currently is on Sandy Bridge. It's a very nice improvement. Intel is already way faster than CUDA/Stream, so the speed improvement will hopefully mean Microsoft builds H.264 encoding support into Windows 8 or something, as it would be extremely fast.
That's the case by default: if the hardware supports transcoding in any way it will be used, and if not, the CPU will most probably be used instead - though I'm not sure Microsoft will allow that. I haven't seen the certification requirements yet; they might indeed say "you need to provide a DSP for encoding", which would fit their "fast and fluid" agenda.
Windows 8 is combining something like Android and Chrome OS, or Mac OS and iOS, under one hood. The new environment runs alongside the old one (though it is an integral part in terms of performance), and for some tasks, with the right combination of input devices or form factor, it's more efficient than the old Windows shell.
That's also why you can decide what you want to use and when, which gives great flexibility on the desktop too (you're not forced to change, which wouldn't be possible anyway in the early days of this).
The changes can be compared to when Microsoft integrated IE into Windows as a core component back then (which caused the antitrust trial). Now they are doing it on a much broader scale, deep down to the core functions (abstracted by the WinRT API for multiple programming languages) - exactly what HTML5 promised: direct hardware use of all devices, melting the OS and Internet services together, on the desktop too. If you found .NET scary, you will find this ultimately scary.

PS: AMD released their first Bulldozer benchmarks (not at IDF itself, but a few blocks away at a press conference called Fusion Zone: http://nl.hardware.info/nieuws/24619...d-fx-processor), and surprise surprise, they used x264 in the form of Handbrake as a benchmark, winning it by 19% with 4 modules vs 4 cores. And surprise surprise, Anandtech did not report on this but stayed focused on IDF and Ivy Bridge.
Though you can most probably guess that this win was bought at nearly the full 125 W (nothing was really revealed about the benchmark itself, just the result), and everyone knows it's not hard to win against Intel's more efficient 95 W parts when you drive your maximum power to the edge.
At least it seems they're almost reaching Intel's efficiency now and aren't hanging two generations behind anymore, which is already a good sign that they're back in a fully recovered state (after all the Fusion investment, including ATI).
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 17th September 2011 at 21:42.
CruNcher is offline   Reply With Quote
Old 30th September 2011, 15:33   #413  |  Link
JoeH
Registered User
 
Join Date: Jan 2009
Posts: 251
This study comparing x264, Quick Sync, CUDA and AMD's solution was recently published. Incredible article. The final conclusion is that x264 is still the way to go, and that even when Quick Sync approaches x264 in SSIM and PSNR scores, x264 is noticeably better visually.

http://www.behardware.com/articles/8...-and-x264.html

Last edited by JoeH; 30th September 2011 at 15:36.
JoeH is offline   Reply With Quote
Old 30th September 2011, 16:21   #414  |  Link
smok3
brontosaurusrex
 
Join Date: Oct 2001
Posts: 2,392
JoeH: I can't even imagine the purpose of this test? Is it about power consumption, is it about plastic-looking GUI designs, or something else entirely? Will I become a one-day hero if I actually encode something with mikisoft GPU-accelerated media converter 7?
__________________
certain other member
smok3 is offline   Reply With Quote
Old 30th September 2011, 20:04   #415  |  Link
nm
Registered User
 
Join Date: Mar 2005
Location: Finland
Posts: 2,641
Quote:
Originally Posted by smok3 View Post
JoeH: I can't even imagine the purpose of this test?
Huh? It's an encoder comparison, and at a quick glance, much more thorough and correct than they usually are. There are speed, quality and power consumption figures among other things.
nm is offline   Reply With Quote
Old 30th September 2011, 20:12   #416  |  Link
smok3
brontosaurusrex
 
Join Date: Oct 2001
Posts: 2,392
27 pages with illustrations to tell us, quote: "By offering rapid encoding solutions, but with quality that leaves too much to be desired, H.264 encoding via GPGPU solutions remains, as yet, a poor solution to what is a real problem."? (But never mind, I guess I just don't get it.)
__________________
certain other member
smok3 is offline   Reply With Quote
Old 1st October 2011, 10:30   #417  |  Link
JoeH
Registered User
 
Join Date: Jan 2009
Posts: 251
Well, it's actually the first serious comparison I've seen of x264 and Quick Sync quality/speed using objective metrics. If you have seen better studies, please share! It is even more thorough than MSU's last annual study.

Before this there were lots of scattered comments from people saying x264's quality was superior, but few if any objective metrics showing exactly how much better or worse it was. This study finally offers the objective metrics. But it also offers the subjective side, for anyone to see: you can click on the images and switch between which encoder's output you're looking at. Seems pretty valuable to me... just my opinion.
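For anyone wanting to reproduce the objective side at home, the usual DIY approach is something like this sketch (assuming an ffmpeg build with the ssim and psnr filters, and a hypothetical encode/source pair; this is not the article's actual methodology):

Code:
# Sketch of measuring SSIM and PSNR of an encode against its source with
# ffmpeg's ssim/psnr filters. File names are hypothetical; this is the common
# do-it-yourself approach, not what the article used.
import re
import subprocess

def measure(encoded, reference, metric):
    """Run ffmpeg's ssim or psnr filter and return its summary line."""
    proc = subprocess.run(
        ["ffmpeg", "-i", encoded, "-i", reference,
         "-lavfi", f"[0:v][1:v]{metric}", "-f", "null", "-"],
        capture_output=True,
        text=True,
    )
    # ffmpeg prints the metric summary to stderr, e.g. "SSIM ... All:0.98 ..."
    match = re.search(rf"{metric.upper()}.+", proc.stderr)
    return match.group(0) if match else f"{metric}: no result found"

if __name__ == "__main__":
    for metric in ("ssim", "psnr"):
        print(measure("encode.mp4", "reference.y4m", metric))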

As for the software included in the study, those applications are of interest precisely because they use Quick Sync. And of course, whether you're using Quick Sync from a cheap consumer application or from something more serious like Adobe Premiere, since it's always Quick Sync you should - at least in theory - get the same quality.
JoeH is offline   Reply With Quote
Old 1st October 2011, 10:36   #418  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,926
Quote:
As for the software included in the study, those applications are of interest precisely because they use Quick Sync. And of course, whether you're using Quick Sync from a cheap consumer application or from something more serious like Adobe Premiere, since it's always Quick Sync you should - at least in theory - get the same quality.
It's unfortunately not that easy, as the frameworks themselves and the implementations can also differ; you need to build your own framework to compare efficiently and without hidden quirks. Especially with ArcSoft's Media Converter, you need to know what's going on in the background.
I have my framework almost finished for three general platforms (Intel, Nvidia and x264), and by now I'm used to all three encoders.
I'm also currently working alongside that on my desktop realtime recording framework for these three, and I'm making great progress there too; that is something which will be compared in the end as well.

A glimpse (Quick Sync): [screenshot]

Also, 27 pages for something you could do in far fewer seems bloated; you get totally lost and after a while you don't know what is actually being compared :P (and to me it seems they actually compare encoder software frameworks rather than individual encoder cores).

But yeah, even with those quirks it is currently the nicest comparison in that regard, something I wish MSU had done early on (obviously not in that form, though). I already see a lot of things that are wrong in the comparison and that make some encoders look worse than they are because of the test framework; that's not fair.

Quote:
The Arcsoft CUDA encoder is quite simply not up to the task. Our other encoders show differences in sharpness.
They actually have three different CUDA encoders.

Also, the comparison system they use (this funny half-tiled viewer with a thousand options) - geez, why make such a simple thing with a common goal extra complex? It looks nice and maybe professional to some, but it's bogus and just melts your brain.
The graph comparison system, though, is nicely done and efficient: http://www.behardware.com/marc/h264/..._avatar1080ps1

Quote:
GeForce 8800s offered decoding of accelerated H.264 streams, though this was partial. There was no GPU support for CABAC decoding at the time (this was added to the following generations).
Also incorrect: G92's VP2 fully supports CABAC, so they can't write that no 8800-series card supports full VLD decoding.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is to Late !

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 1st October 2011 at 11:52.
CruNcher is offline   Reply With Quote
Old 18th August 2012, 16:27   #419  |  Link
JoeTF
Registered User
 
Join Date: Dec 2006
Posts: 5
Quote:
Originally Posted by deadrats View Post
this is an absurd question, if you could achieve the same quality at 10 mb/s using x264 as you can with something like cce-hd or blu-code at 30+ mb/s; do you really think that the professional movie studios would spend 40-70+ grand on a per seat license to acquire those encoders? don't you think they would try and save a butt load of dough and use the legally free open source alternative?

furthermore, a blu-ray disk holds up to 25-50 gigs of data and as i said a 1.5tb hdd costs under $100 (i currently have 3 of them), you feel like saving the space for a rainy day?

(...)
Considering how much HDDs cost nowadays due to, well, a few too many rainy days in Malaysia, I would say that was prophetic.

PS. Don't ban me for this little necro.

Last edited by JoeTF; 18th August 2012 at 16:31.
JoeTF is offline   Reply With Quote