Go Back   Doom9's Forum > Hardware & Software > PC Hard & Software

Old 3rd September 2015, 13:42   #1  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
Nvidia scandal with Async Compute support (DX12)

Nvidia refuses to reply officially, but the truth is out there and should be fully uncovered.

Nvidia's DX12 cards, even the latest Maxwell 2.0, do not support Async shaders in HW.

Fermi, Kepler and Maxwell 1.0 don't support Async shaders at all, and Maxwell 2.0 supports the feature through software scheduling, which in practice means it is not supported.

There aren't enough DX12 games yet to see how much gain this feature can give to AMD GCN 1.x cards, but judging from the well-known game benchmark, the gain is huge.

Maybe Pascal will change that, but as things stand right now, AMD GCN card owners should be more than happy.

http://wccftech.com/nvidia-amd-direc...s-explained/4/
__________________
Win 10 x64 (17134.81) - Core i3-4170/ iGPU HD 4400 (v.4835)
HEVC decoding benchmarks
H.264 DXVA Benchmarks for all

Last edited by NikosD; 6th September 2015 at 12:09.
Old 3rd September 2015, 19:35   #2  |  Link
baii
Registered User
 
Join Date: Dec 2011
Posts: 180
Real-life results are what matter, though.
Old 4th September 2015, 13:31   #3  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
Quote:
Originally Posted by baii View Post
Real-life results are what matter, though.
The game benchmark is pure real life.

The actual game shares its engine with the benchmark.
It's just one game so far.

Don't expect the situation to change much with other games, though.
Old 6th September 2015, 07:00   #4  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
At last!

The scandal of Nvidia's tactics regarding Async Compute, and the pressure put on developers to cover it up, has now been exposed far more widely than the original Async Compute scandal.

"Given its growing market-share, NVIDIA could use similar tactics to keep game developers away from industry-standard API features that it doesn't support, and which rival AMD does. NVIDIA drivers tell Windows that its GPUs support DirectX 12 feature-level 12_1. We wonder how much of that support is faked at the driver-level, like async compute. The company is already drawing flack for using borderline anti-competitive practices with GameWorks, which effectively creates a walled garden of visual effects that only users of NVIDIA hardware can experience for the same $59 everyone spends on a particular game."

http://www.techpowerup.com/215663/la...irectx-12.html
Old 6th September 2015, 07:53   #5  |  Link
Sulik
Registered User
 
Join Date: Jan 2002
Location: San Jose, CA
Posts: 215
That "scandal" is a whole bunch of noise over not much, IMO - unrelated to HEVC decoding in any case - and you come across as having a bit of an anti-NVidia agenda, FWIW (you work for Intel or something?).
Old 6th September 2015, 09:52   #6  |  Link
P.J
🎸
 
Join Date: Jun 2008
Posts: 497
It's too soon to discuss DX12. We should blame Microsoft first x)
And as far as I know, it has nothing to do with DXVA.
Old 6th September 2015, 10:12   #7  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
You clearly have misunderstood something.

DX12 is a brand-new low-level, truly multi-threaded API, like Mantle, Khronos' Vulkan or Apple's Metal.

It is more than clear that Nvidia didn't expect, or didn't manage to stop, this evolution in PC graphics, and its GPUs are still optimized for DX11, which is neither low-level nor truly multi-threaded.

So, for the first time in recent GPU history, AMD GCN cards (from late 2011 onwards) can really expose the power of their HW, which can be leveraged more efficiently than ever by ALL the low-level APIs like DX12 and Vulkan.
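As a toy picture of what "truly multi-threaded" buys on the submission side (plain Python standing in for command-list recording; the 50 ms cost per list and the four-thread count are invented for illustration, nothing like real driver code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

RECORD_COST = 0.05  # pretend each command list costs 50 ms of CPU work to record

def record_command_list(i):
    # Stand-in for the CPU-side work of building one command list.
    time.sleep(RECORD_COST)
    return f"cmdlist-{i}"

# DX11 style: one thread records everything, one list after another.
start = time.perf_counter()
serial = [record_command_list(i) for i in range(4)]
serial_time = time.perf_counter() - start

# DX12 style: several threads record command lists at the same time.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(record_command_list, range(4)))
parallel_time = time.perf_counter() - start

print(f"serial:   {serial_time:.2f}s")
print(f"parallel: {parallel_time:.2f}s")
```

The recorded lists are identical either way; only the wall-clock time to produce them changes, which is the whole point of a truly multi-threaded submission model.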

So, for the first time in recent history, AMD cards are more future-proof and much better value for money than Nvidia cards.

In just one day, July 29th 2015, with the official release of Win10 and DX12, the tables turned and AMD is on top again.

We are here to watch Nvidia struggling with marketing (money), blackmail (money) and media (money) to change the picture, like Intel tried to after the release of the AMD Athlon back in 1999.

That's why I like AMD.

A small company fighting with the giants.
Old 6th September 2015, 10:16   #8  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,274
The techpowerup article is just a rehash of the Oxide Async Compute thing; I don't see how it's anything new or "uncovered broadly to a wider extent than the original Async Compute scandal".
Funny enough, every time it gets re-hashed it presents the situation slightly differently. Need to draw readers, I guess?

I don't even understand how people make everything a scandal these days.
So NVIDIA emulates one DX12 feature in software - that's not forbidden, and unless they explicitly stated the opposite, also not worth a scandal.

Performance numbers of actual games in the future will determine the merit of which card to buy, not theoretical feature limitations and so-called "scandals".
If this feature is so important, then performance will show it in DX12 titles (once we have more than one single benchmark), and that alone should be the driving factor.
__________________
LAV Filters - open source ffmpeg based media splitter and decoders

Last edited by nevcairiel; 6th September 2015 at 10:20.
Old 6th September 2015, 10:26   #9  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
You need to re-read the paragraph I quoted.

What has been uncovered is Nvidia pressuring developers to cover up the "cheat" of the driver exposing Async Compute, which is not actually implemented yet.

The reason it isn't implemented is that it relies on SW scheduling and other slow SW processes for what AMD supports in HW.

Nvidia tried to attack by claiming that activating Async Compute was a biased move, with code specifically written for AMD (!)

What a lie (!).
It's exactly the opposite.

Nvidia demanded that the Async Compute code be removed when running on its HW because, although it was exposed by the driver, it wasn't implemented.

Now read that last paragraph again and remember the 4GB RAM-gate of the 970.

Nvidia is FULL OF LIES.
Old 6th September 2015, 10:39   #10  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
Quote:
Originally Posted by nevcairiel View Post
The techpowerup article is just a rehash of the Oxide Async Compute thing; I don't see how it's anything new or "uncovered broadly to a wider extent than the original Async Compute scandal".
Funny enough, every time it gets re-hashed it presents the situation slightly differently. Need to draw readers, I guess?

I don't even understand how people make everything a scandal these days.
So NVIDIA emulates one DX12 feature in software - that's not forbidden, and unless they explicitly stated the opposite, also not worth a scandal.

Performance numbers of actual games in the future will determine the merit of which card to buy, not theoretical feature limitations and so-called "scandals".
If this feature is so important, then performance will show it in DX12 titles (once we have more than one single benchmark), and that alone should be the driving factor.
I don't want to change anything in my previous post: it was written before you completed yours, and it had already explained why it was A REAL SCANDAL before you asked!

I have to become an Oracle after all.
Old 6th September 2015, 10:45   #11  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,274
Everything on the internet is a scandal these days.
Next year, when DX12 becomes actually relevant, people will buy whatever GPU is the fastest, and if by chance that's going to be NVIDIA, no one is going to care anymore.
Old 6th September 2015, 10:54   #12  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
No, not everything on the Internet is a scandal these days.

REAL SCANDALS can spread faster thanks to the internet once they are uncovered.

As for what happens next year, let next year worry about it.

We are talking about today: Nvidia lied yet again about its HW capabilities, which are lower than expected and missing from its drivers even though the drivers expose them (!), and AMD cards are better suited to low-level APIs like DX12 and Vulkan.

All GCN graphics card owners should be more than happy with their purchase when it comes to DX12 games, in clear contrast to Nvidia card owners on both value for money and DX12 games.
Old 6th September 2015, 18:51   #13  |  Link
Sulik
Registered User
 
Join Date: Jan 2002
Location: San Jose, CA
Posts: 215
Real-life performance in the applications you're using is what matters - obviously HW manufacturers make different design tradeoffs, and it's way too early to speculate about which will work out best. Focusing on one specific feature may expose these differences, but is meaningless.
AMD & NVidia are in the same boat, the real competition is Intel, slowly eating away at their GPU pie (and from the looks of it, AMD may very well go bankrupt within a year or two - I'd worry about that more than truly-async vs pseudo-async compute ).
Old 6th September 2015, 18:58   #14  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
The problem for Nvidia is that Async Compute is not just a "feature".

Nvidia has to redesign its whole GPU architecture in order to support it.
It has to fundamentally change the way its GPUs work.

There is a chance that not even Pascal, their next-generation GPU, will support that "feature" to the degree that AMD does.

That's a good thing for AMD: it sells chips to the consoles (that's why it moved to the GCN architecture and low-level API support), but it holds less than 20% of the discrete PC graphics card market.

Time to raise that percentage.
Old 6th September 2015, 19:05   #15  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
But in order not to go off-topic: Nvidia first has to change its attitude towards its rival AMD, developers, reviewers, and eventually customers and potential customers.

Nvidia must stop telling SO MANY LIES and treating everyone like its own pets.

It must stop cheating in benchmarks while accusing others of doing exactly that, and stop forcing developers to support only its products, as if it were the only graphics IHV.

DX12 proved that the other company - AMD - has faster products and better value for money than the greedy Nvidia.
Old 6th September 2015, 19:40   #16  |  Link
nevcairiel
Registered Developer
 
Join Date: Mar 2010
Location: Hamburg/Germany
Posts: 9,274
One of the Oxide developers, Kollock, has posted some more information. He now says that NVIDIA is not done with Async Compute support in its drivers yet, and that they (Oxide) are working with NVIDIA to get the driver part finished before the feature is properly enabled in the driver.
Until a driver actually enables that feature properly, everything else is just speculation. Today it cannot use the Async Warp Schedulers on the Maxwell GPU, but once the driver support is finished the picture may be quite different.

Software scheduling may be less efficient than full hardware support, but it's not "no support", and until we see how the end result looks I'll keep an open mind, in any case.
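A back-of-the-envelope sketch of why that difference matters (toy Python, all numbers invented, nothing measured on real hardware):

```python
# One imaginary frame: 4 ms of graphics work and 2 ms of compute work.
graphics_ms = 4.0
compute_ms = 2.0

# No async compute at all: the GPU runs the two workloads back to back.
serialized = graphics_ms + compute_ms

# Ideal hardware async compute: the workloads overlap, so the frame
# costs roughly the longer of the two.
hw_async = max(graphics_ms, compute_ms)

# Software-scheduled "async": still overlapped, but the driver/CPU pays
# a per-frame scheduling cost that eats into the gain.
sw_overhead_ms = 1.5  # invented overhead figure
sw_async = max(graphics_ms, compute_ms) + sw_overhead_ms

for name, t in [("serialized", serialized),
                ("hw async", hw_async),
                ("sw async", sw_async)]:
    print(f"{name:10s} {t:.1f} ms ({serialized / t:.2f}x vs serialized)")
```

Whether the software path ends up closer to the 4 ms case or the 6 ms case is exactly what the finished driver will have to demonstrate.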

Summary here:
http://www.guru3d.com/news-story/nvi...r-support.html

Last edited by nevcairiel; 6th September 2015 at 19:42.
Old 6th September 2015, 19:59   #17  |  Link
NikosD
Registered User
 
Join Date: Aug 2010
Location: Athens, Greece
Posts: 2,348
Yes, after the "outing" on a world scale, Nvidia has started to act more "normally", like it understands the situation and accepts the facts.

But in the beginning, Nvidia accused the developers of making specific optimizations for AMD and not for Nvidia.

The truth is this:

The developer tried to enable a feature exposed by Nvidia's driver - Async Compute - and, as he describes the situation, he saw Dante's Inferno or the signs of the Apocalypse.

He told Nvidia what happened and they replied "don't use that feature", although it was clearly exposed by the driver.

He did that for Nvidia, but of course he couldn't/shouldn't do the same for AMD, because AMD supports it properly as a basic performance feature of DX12.

And suddenly, after Nvidia's low results and AMD's high results, Nvidia accused the developer of biased code, because he used a DX12 feature that Nvidia CAN'T use due to its architecture and AMD CAN use due to its own.

Nvidia lied:

1) In its drivers, its product marketing and its specifications, telling potential customers that its cards can do Async Compute right here, right now.

They probably thought that nobody would use it right here, right now, so nobody would discover the fraud.

2) To the developer, saying it was OK not to use the feature, but then accusing him of biased code because he didn't use that nonexistent feature.

and

3) It is totally responsible if, after the 4GB memory-gate and this fake Async Compute feature enabled in the drivers, people start to believe that its products are different, with lower performance than advertised.

As I read on various forums and sites, there is a wave of Nvidia card buyers asking for their money back because of the missing feature, which gives a HUGE performance boost to AMD GCN cards and which they thought they would get too.
Old 6th September 2015, 20:00   #18  |  Link
jones1913
random user
 
Join Date: May 2014
Location: #Neuland
Posts: 94
Quote:
Until a driver actually enables that feature properly, everything else is just speculation.
That was the problem: the NVIDIA driver reported the async compute capability, but when Oxide enabled the feature, performance got worse.


Apart from that, Nvidia has a long history of lying to its customers, but its sales show that customers are very forgetful or don't care.

2015: incorrect information about the DirectX 12 feature levels of Maxwell
2015: GTX 970 ... you know it
2012: incorrect information about the DirectX 11.1 capability of Kepler chips
2011: incorrect information about Tegra 3 performance
2010: the Fermi wooden dummy card
2008: lots of bricked laptops with G84 and G86 chips
2001: incorrect information in a GeForce 2/3 comparison with the Kyro II

- sabotages the performance of competitors with GameWorks software (e.g. disabling hardware physics support if a secondary AMD card is detected)
- sells expensive G-Sync modules to display manufacturers which mostly act as a DRM black box
- was the first to introduce excessive rebranding (8800GT/2007 -> 9600GSO/2008 -> GTS150/2009 -> GTS240/2009 -> GT330/2010)
- driver cheating in some 3D benchmarks

But I am not forgetful in this respect, and I try to avoid buying products from such companies.

*Don't ask me for sources for these allegations; Google shows all of this within a few clicks.
__________________
BeHappy Audio Transcoder > <Doom9 forum> <Gleitz forum> <GitHub>
MP4.tool GUI for MP4Box & L-SMASH muxer > https://www.mediafire.com/folder/3i6y6cbkyhblm/MP4.tool
Old 7th September 2015, 08:40   #19  |  Link
foxyshadis
ангел смерти
 
Join Date: Nov 2004
Location: Lost
Posts: 9,314
Quote:
Originally Posted by NikosD View Post
But in order not to go off-topic:
*snort*

Quote:
Originally Posted by jones1913 View Post
Apart from that, Nvidia has a long history of lying to its customers, but its sales show that customers are very forgetful or don't care.
Although they've pulled some shenanigans, like everyone else, I still think most of it is just their developers (hardware and software) being wildly over-optimistic about what they can do, and not testing nearly enough. In fact, "We don't test" should be the motto of all the big GPU makers, right after "We don't listen to you." But AAA games still sell, and performance and capabilities do improve, so I guess that's something.
__________________
There are four boxes to be used in defense of liberty: soap, ballot, jury, and ammo. Please use in that order. ~ Ed Howdershelt
Old 9th September 2015, 09:52   #20  |  Link
pandy
Registered User
 
Join Date: Mar 2006
Posts: 1,038
Anyway, I'm forced to buy NVidia, as AMD is completely ignoring video: NVENC can't be compared to AMD's solution...