6th July 2011, 17:47 | #402 | Link |
Registered User
Join Date: Aug 2006
Posts: 2,229
|
I think part of the reason it hasn't happened is that if they help out one developer (for x264), other developers would cry foul that they didn't get the same chance. The end result is a lot of consultation hours, a risk of corporate espionage (yes, that is a real thing, and far too common to dismiss), and the risk of companies (not in x264's case) doing a poor implementation and then claiming it was done with assistance - which doesn't help Intel's or AMD's case, it hinders it!
So at best you can really only deal with what they provide to everyone. Of course then there's the assistance to game developers from (mostly) Nvidia, which gets them a nice 'The Way It's Meant to Be Played' video at the start of the game, and while games aren't specifically designed to run poorly on the competitor's cards (AMD GPUs), they certainly aren't tweaked in AMD's favour either.

It's pretty much the reason why, even after DirectX 10.1 was released, games only supported DirectX 10, even though ATI (now part of AMD) cards supported DirectX 10.1 - Nvidia did not support DirectX 10.1 for a long time, even on 'newer generation' cards, as it would have required some redesigning. Some games like Bioshock were originally DirectX 10.1 but were stripped down to DirectX 10. The rumour is that this happened because DirectX 10.1 fixed one of the major flaws of DirectX 10 - slow antialiasing (I think that was it, from memory!). The original Bioshock heavily favoured ATI cards because of this, as Nvidia cards could only run it in DirectX 10, and as a result the 10.1 path was pulled. The reason given was shown to be complete bull! Despite all this, at the end of the day almost all games these days are console ports and either have poor engine support or very low-quality ported textures that rely on GPU power to look better.

Anyway, the reason this applies to the argument is that providing support can be risky even when it is seemingly in your favour. Something that favours Intel or AMD would be scrutinised significantly more than something that favours Nvidia or AMD in graphics, but in both cases it can be risky, especially if, say, the quality of the output derived from Intel's help doesn't meet expectations. |
7th July 2011, 00:15 | #404 | Link | |
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
|
Quote:
Effort:
+ GUI
+ benchmarks for easily identifiable operations (SAD, motion search, etc.) in x264
+ checkasm --bench for advanced results
+ x264 1080p encode test with --preset superfast , --preset medium , and --preset veryslow
+ pretty graphs comparing results to reference systems from each CPU platform x264 supports
+ weighted results generating an arbitrary score from 1000-10000 (bigger numbers are better) and a ranking system similar to other benchmarks
+ (optional) video ABX test to assess video quality in motion between presets and non-x264 encodes
+ (optional) image ABX test to assess per-frame quality of PNG captures between presets and non-x264 encodes
+ (optional) source decoding speed (bottleneck) benchmark

I'm half-joking, but if you truly think it could make CPU vendors care more about x264, how hard would it be for someone on the dev team to create and maintain such a thing? Give CPU vendors something to point to and brag about if/when they introduce architectural changes that benefit x264 and give them a performance advantage over competitors. It could also serve the additional purpose of dispelling the myth that x264 is unable to match GPU/Quicksync encoders for speed at similar or better quality. I'm sure you are unsurprised that the common user on an internet forum uses a GPU/Quicksync encoder because they honestly believe the marketing hype that it has a speed advantage over CPU encoding, not because it uses fewer CPU resources. Last edited by cyberbeing; 7th July 2011 at 04:07. |
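The "weight results and generate a score" step in the list above could be sketched roughly like this. Note the reference fps figures and the preset weights here are entirely made up for illustration; real numbers would come from actual x264 runs on a chosen reference machine:

```python
# Hypothetical scoring scheme for the proposed x264 benchmark:
# normalize the measured fps against a fixed reference system for
# each preset, weight the presets, and scale into a 1000-10000 range.

# Reference fps values and preset weights (both invented for illustration).
REFERENCE_FPS = {"superfast": 200.0, "medium": 60.0, "veryslow": 8.0}
WEIGHTS = {"superfast": 0.2, "medium": 0.5, "veryslow": 0.3}

def benchmark_score(measured_fps):
    """Map per-preset fps results to a single 1000-10000 score.

    A system exactly matching the reference scores 1000; the score
    grows linearly with speed and is capped at 10000.
    """
    ratio = sum(WEIGHTS[p] * (measured_fps[p] / REFERENCE_FPS[p])
                for p in WEIGHTS)
    return min(10000, round(1000 * ratio))

# A machine twice as fast as the reference on every preset scores 2000:
print(benchmark_score({"superfast": 400.0, "medium": 120.0, "veryslow": 16.0}))
```

The weighting toward --preset medium and --preset veryslow reflects the thread's point that quality-focused settings are what x264 should be judged on, but any real benchmark would need to pick its own weights.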
|
7th July 2011, 00:44 | #406 | Link |
Broadband Junkie
Join Date: Oct 2005
Posts: 1,859
|
Well, you could always replace the quasi-official Tech ARP x264 benchmark with something that suits your needs better for x264 vs GPU/Quicksync comparisons and exposes checkasm results to the world. The current x264 bench is obviously not doing enough to make CPU vendors care about x264. CPU/GPU vendors love synthetic benchmarks, and they have marketing departments which care more about non-subjective speed claims than about quality. Show them you can beat them at the speed game with stock quality-focused x264, identify for the user the slow operations taking the majority of x264's CPU time, and have big-name tech websites inform users that x264 is faster and higher quality than XXX GPU/Quicksync encoder, and maybe they'll start to care.
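For whoever builds such a replacement benchmark, the timing data can be scraped from the summary line x264 prints at the end of a run. The exact line format below is quoted from memory and may differ between builds, so verify it against your own x264 output:

```python
import re

# x264 ends an encode with a summary line along the lines of:
#   encoded 500 frames, 45.32 fps, 3201.45 kb/s
# (format from memory -- check it against your build before relying on it)
SUMMARY_RE = re.compile(
    r"encoded\s+(?P<frames>\d+)\s+frames,\s+(?P<fps>[\d.]+)\s+fps")

def parse_x264_summary(log_text):
    """Return (frames, fps) from an x264 log, or None if no summary found."""
    m = SUMMARY_RE.search(log_text)
    if m is None:
        return None
    return int(m.group("frames")), float(m.group("fps"))

print(parse_x264_summary("encoded 500 frames, 45.32 fps, 3201.45 kb/s"))
# → (500, 45.32)
```

A harness would run x264 once per preset, feed each log through this, and hand the fps figures to whatever scoring scheme the benchmark settles on.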
Last edited by cyberbeing; 7th July 2011 at 01:39. |
7th July 2011, 08:08 | #407 | Link |
Registered User
Join Date: Mar 2009
Location: Germany
Posts: 5,769
|
I think Dark Shikari is right. No developer wants to trade speed for uncertainty. Once you rely on third-party implementations, their bugs get silently incorporated into your product. Nobody would care whether the fault lies with the CPU or the software; bottom line, the encoding is faulty. It may even be more work to patch around CPU errors. Plus there's the DRM issue that might be implemented deep in the core.
__________________
Born in the USB (not USA) |
9th July 2011, 23:04 | #408 | Link | |
Registered User
Join Date: Aug 2006
Posts: 2,229
|
Quote:
I probably should have said developers of everyday, actually useful programs - benchmark apps aren't really 'everyday' apps. I guess the difference is they see most apps as too insignificant, benefit-wise, to be worth considering! Maybe get more hardware review sites to use the x264 benchmark and things may change! |
|
17th September 2011, 08:25 | #409 | Link |
Registered User
Join Date: Mar 2004
Posts: 1,120
|
New Quicksync info for Intel's Ivy Bridge platform: http://www.anandtech.com/show/4830/i...ture-exposed/5
"The increase in EUs and improvements to their throughput both contribute to increases in Quick Sync transcoding performance. Presumably Intel has also done some work on the decode side as well, which is actually one of the reasons Sandy Bridge was so fast at transcoding video. The combination of all of this results in up to 2x the video transcoding performance of Sandy Bridge. There's also the option of seeing less of a performance increase but delivering better image quality." Let's cross our fingers that there is x264 support; I can't see that happening, though. Last edited by hajj_3; 17th September 2011 at 14:53. |
17th September 2011, 13:18 | #410 | Link |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
|
Imho Quicksync already does a nice job, and putting this Ivy Bridge performance improvement into actual quality - so that it keeps climbing in comparison to software implementations like x264 - is the right way to go, rather than making it even faster. It is already very fast and a very nice user experience.
Though you still have to strike the right balance between quality, power consumption, encoding performance and easily achievable fast workflows - these will always remain the key factors in user experience. Microsoft especially is becoming very aggressive about this 'fast and fluid' user experience versus Google's Android ecosystem, and there are a whole lot of changes in those areas, including the complete removal of DirectShow from the top part of the UI experience. Intel's Quicksync transcoding engine plays a major role here and will by default be one of the encoders millions of users use in the future (for 8-bit personal content encoding). Users don't care what is running behind the curtain; they care about the end result and a fluid experience while transcoding, editing, playing or streaming. They care more about how long their system's battery will hold out, and will rather accept a loss in quality that in most use cases isn't really visible at all. There is a chance of bringing the x264 experience over to this, but I'm not sure it can survive in such an environment anymore: http://channel9.msdn.com/Events/BUIL...2011/PLAT-783T
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 Last edited by CruNcher; 17th September 2011 at 13:57. |
17th September 2011, 14:52 | #411 | Link |
Registered User
Join Date: Mar 2004
Posts: 1,120
|
You can do either: you can choose the current video quality and get it up to 2x faster, or choose the better quality setting and it will still be slightly faster than Quicksync currently is on Sandy Bridge. It's a very nice improvement. Intel is already way faster than CUDA/Stream, so the speed improvement may hopefully mean Microsoft builds H.264 encoding support into Windows 8 or something, as it would be extremely fast.
|
17th September 2011, 15:38 | #412 | Link | |
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
|
Quote:
Windows 8 combines the equivalents of Android and Chrome OS, or Mac OS and iOS, under one hood. The new shell runs alongside the old Windows shell (though it is an integral part of it in terms of performance), and for some tasks it's more efficient with the right combination of input devices or form factor. That's also why you can decide what you want to use and when, which gives great flexibility on the desktop too (you're not forced to change - that wouldn't be possible anyway in the early days of this). The change is comparable to when Microsoft integrated IE as a core component of Windows (which caused the Microsoft antitrust trial), except now they're doing it on a much broader scale, deep down to the core functions (abstracted by the WinRT API to multiple programming languages) - exactly what HTML5 promised: direct hardware usage of all devices, melting the OS and internet services together, on the desktop too. If you found .NET scary, you will find this ultimately scary.

PS: AMD released their first Bulldozer benchmarks (aside from IDF, at a press conference a few blocks away called Fusion Zone: http://nl.hardware.info/nieuws/24619...d-fx-processor), and surprise surprise, they used x264 in the form of Handbrake as the benchmark, winning it by 19% with 4 modules vs 4 cores. And surprise surprise, Anandtech did not report on this but stayed focused on IDF and Ivy Bridge. You can most probably guess that this win was bought at almost the full 125W (though nothing was really revealed about the benchmark itself, just the result), which everyone knows is not hard to achieve against Intel's more efficient 95W parts if you drive your max power to the edge. At least it seems they almost reach Intel's efficiency now and no longer lag two generations behind, which is already a good sign that they are back in a fully recovered state (after all the Fusion investment, including ATI)
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 Last edited by CruNcher; 17th September 2011 at 21:42. |
|
30th September 2011, 15:33 | #413 | Link |
Registered User
Join Date: Jan 2009
Posts: 251
|
This study comparing x264, QuickSync, CUDA and AMD's solution was recently published. Incredible article. The final conclusion is that x264 is still the way to go, and that even when QuickSync approaches x264 in SSIM and PSNR scores, x264 is visually noticeably better.
http://www.behardware.com/articles/8...-and-x264.html Last edited by JoeH; 30th September 2011 at 15:36. |
30th September 2011, 16:21 | #414 | Link |
brontosaurusrex
Join Date: Oct 2001
Posts: 2,392
|
JoeH: I can't even imagine the purpose of this test. Is it about power consumption, is it about plastic-looking GUI designs, or something else entirely? Will I become a one-day hero if I actually encode something with mikisoft GPU accelerated media converter 7?
__________________
certain other member |
30th September 2011, 20:12 | #416 | Link |
brontosaurusrex
Join Date: Oct 2001
Posts: 2,392
|
27 pages with illustrations to tell us, quote: "By offering rapid encoding solutions, but with quality that leaves too much to be desired, H.264 encoding via GPGPU solutions remains, as yet, a poor solution to what is a real problem."? (But never mind, I guess I just don't get it.)
__________________
certain other member |
1st October 2011, 10:30 | #417 | Link |
Registered User
Join Date: Jan 2009
Posts: 251
|
Well, it's actually the first serious comparison I've seen of x264 and QuickSync quality/speed using objective metrics. If you have seen better studies, please share! It is far more thorough even than MSU's last annual study.
Before this there were lots of random comments from people saying that x264's quality was superior, but few if any objective metrics showing exactly how much better or worse. This study finally offers the objective metrics. It also offers the subjective side for anyone to see: you can click on the images and actually switch which encoder's output is shown. Seems pretty valuable to me... just my opinion. As for the software included in the study, those applications are of interest precisely because they use QuickSync. And whether you're using QuickSync with a cheap consumer application or with something more serious like Adobe Premiere, since it's always QuickSync, in theory at least you should get the same quality. |
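For reference, the objective metrics the study leans on are easy to compute yourself. A minimal PSNR over raw 8-bit luma samples might look like the sketch below (a real tool like the study's would work per frame over full planes, and SSIM would add windowed local statistics on top):

```python
import math

def psnr(reference, distorted, max_value=255):
    """PSNR in dB between two equal-length sequences of 8-bit samples.

    PSNR = 10 * log10(MAX^2 / MSE); identical inputs give infinity.
    """
    if len(reference) != len(distorted):
        raise ValueError("inputs must be the same length")
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_value ** 2 / mse)

# Tiny illustrative sample: a row of luma values and a lightly distorted copy.
ref = [52, 55, 61, 66, 70, 61, 64, 73]
enc = [53, 55, 60, 66, 71, 60, 64, 72]
print(round(psnr(ref, enc), 2))  # → 50.17
```

This also illustrates why the article's click-to-compare images matter: PSNR only measures numerical error, so two encodes with similar PSNR can still look quite different, which is exactly the gap between QuickSync and x264 the conclusion points at.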
1st October 2011, 10:36 | #418 | Link | |||
Registered User
Join Date: Apr 2002
Location: Germany
Posts: 4,926
|
Quote:
I have my framework almost finished for the three general platforms - Intel, Nvidia and x264 - and by now I'm used to all three encoders. I'm currently also working on my desktop realtime recording framework for these three, and I'm making great progress there as well; that too is something which will be compared in the end: glimps (Quicksync).

Also, 27 pages for something you could do in much less seems overbloated; you get totally lost and after a while don't know what is actually being compared :P (and it seems to me they actually compare encoder software frameworks rather than the individual cores). But even with those quirks it is currently the nicest comparison of this kind, something I wished MSU had done early on (obviously not in that form, though). I already see a lot of things which are wrong in that comparison and which make some encoders look worse than they are because of the test framework; it's not fair. Quote:
Also, the comparison system they use (this funny half-tiled compare with thousands of options) - geez, why make such simple things with a common goal extra complex? It looks nice and maybe professional to some, but it's bogus and just melts your brain. The graph comparison system, though, is nicely done and also efficient: http://www.behardware.com/marc/h264/..._avatar1080ps1 Quote:
__________________
all my compares are riddles so please try to decipher them yourselves :) It is about Time Join the Revolution NOW before it is to Late ! http://forum.doom9.org/showthread.php?t=168004 Last edited by CruNcher; 1st October 2011 at 11:52. |
|||
18th August 2012, 16:27 | #419 | Link | |
Registered User
Join Date: Dec 2006
Posts: 5
|
Quote:
PS. Don't ban me for this little necro. Last edited by JoeTF; 18th August 2012 at 16:31. |
|
Tags |
media engine, x.264 |