Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Doom9's Forum > Video Encoding > MPEG-4 AVC / H.264

Old 25th March 2012, 14:35   #21  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 979
I wonder what difference the speed of the Nvidia GPU makes to encoding speed. If Intel's GPU were as fast as Nvidia's, could it possibly be faster than the 680 at encoding?
Old 26th March 2012, 20:50   #22  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,949
Intel's GPU is nice; if they packed in some more cores or made a discrete card (we need more than Ivy Bridge), it would be interesting.
The Intel encoder still utilizes the EUs, at least on Sandy Bridge. I'm not sure how it will be for Ivy Bridge, but I don't think much will change here; the speedups come from other things (more EUs, higher clocks) for the motion estimation, so you have a slowing-down effect there.
I'm currently trying to get this as a Quick Sync result: http://blip.tv/file/get/Cr4bl3r-inte...64tuned349.mp4 . That is the x264 result; not bad for an Intel IGP (GT1 at stock). The recording overhead is very minimal, so only 1 or 2 fps are lost overall.
__________________
all my compares are riddles so please try to decipher them yourselves :)

It is about Time

Join the Revolution NOW before it is too late!

http://forum.doom9.org/showthread.php?t=168004

Last edited by CruNcher; 26th March 2012 at 21:03.
Old 4th April 2012, 01:49   #23  |  Link
Ranguvar
Registered User
 
Join Date: Feb 2007
Location: ::1
Posts: 1,236
Quote:
Originally Posted by CruNcher View Post
Also, this is most interesting: in this MPEG-2 test the old GTX 580's CUDA cores could beat the GTX 680 encoder. Very strange. This leads me to believe the MPEG-2 CUDA fix was kicking in here, which was introduced to work around a design bug in the VP4 decoder, so MPEG-2 decoding gets relayed to the CUDA cores instead of the DSP. Tom's Hardware doesn't know that, so they must have forgotten to disable it for the VP5 decoder in the GTX 680.
It's possible, but remember that in general compute, the GTX 680 is worse than the GTX 580.

Quote:
Originally Posted by CruNcher View Post
Though most impressive for me personally is this: http://www.guru3d.com/article/geforce-gtx-680-review/4 . I'll wait for a version with one power connector to replace my two-connector GTX 460.
Just ordered a GTX 680 myself, also upgrading from a 460. This will be fun.
Old 4th April 2012, 15:32   #24  |  Link
mandarinka
Registered User
 
Join Date: Jan 2007
Posts: 739
I think it is only worse at compute if you use double-precision floats. Single-precision performance should be higher.
Old 4th April 2012, 15:40   #25  |  Link
CruNcher
Registered User
 
Join Date: Apr 2002
Location: Germany
Posts: 4,949
That's an Nvidia thing; since the introduction of the Tesla series they have kept it that way. It could also be that the performance isn't actually lower, but that a restriction in the firmware or at the driver level makes it artificially slower. It is really common these days to build such blocks either into the firmware or into the software above it (the driver); it is much easier to handle and lets you create different product lines without needing to produce different hardware.
In earlier days you could lift these restrictions easily, but be sure Nvidia learned from those mistakes, and the identification checks keep getting better. This applies to all such artificial restrictions; they evolve with every break. For the easy shader-activation hacks (just flash a different BIOS), they went back to the hardware side and made them impossible to circumvent by laser-cutting the unused (or damaged) shaders. That is actually easier to do even across multiple product lines: drive the chip to another line and cut it down to what you want the end result to be.

Another good example is Intel's moves here: restricting overclocking and introducing the ability to upgrade CPU features with just a code (still more a research thing than widely used; good or bad, you decide, but it has many cons).
Though I find it funny that people still haven't understood that overclocking as we knew it is dead and is now part of every product, and that the younger generation in particular lets itself be fooled by this. It's not what it used to be in my day.

Since software introduced the modular multi-version system (develop the application with full features first, then strip features away, decide what to call the cut-down product, and set a different price for it compared to the full-featured version), hardware manufacturers saw how to use the same approach for their own purposes.

By the way, this system even found its way into games in the end (DLC). Not every DLC is developed this way, but a lot are; instead of paying a lower price, you pay an upgrade price.
Mass Effect 3 really showed how to use this system efficiently for maximum profit.

As Bill Gates envisioned it, everything becomes software-driven and follows the same principles. Though it's easy to envision such things if you just walk into Microsoft Research and see what people there are creating; "I have to tell you something, I saw the future" was always easy for him on stage.

Last edited by CruNcher; 4th April 2012 at 16:38.
Old 7th April 2012, 02:37   #26  |  Link
Ranguvar
Registered User
 
Join Date: Feb 2007
Location: ::1
Posts: 1,236
Quote:
Originally Posted by mandarinka View Post
I think it is only worse at compute if you use double-precision floats. Single-precision performance should be higher.
Single precision is fine, yes.
http://www.brightsideofnews.com/news....aspx?pageid=4

My GTX 680 just arrived... fun stuff. Tweaking overclocks now. Just need to find a way to test NVENC and then I'll compare it to x264.

EDIT: Was just supplied with an NVENC-enabled build of MediaEspresso, I'll do a 'formal' test if I can get time.
EDIT 2: Not sure this is worth doing, at least at this stage -- MediaEspresso is restricted to Baseline@4.0, 25fps (with 50fps source), and CBR.
No way of seeing if these are limitations of the software, or of NVENC.
parkjoy is encoded in about 5 seconds (from a 50Mbps H.264 encode of the original so ME could read it), so twice realtime for that, which x264 can easily match.
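For anyone checking the arithmetic behind "twice realtime", here is a quick sketch. The 500-frame, 50 fps clip length is my assumption about the parkjoy source; the 5-second figure is from the post above.

```python
# Back-of-envelope check on the quoted encode speed.
# Assumes the usual parkjoy test clip: 500 frames at 50 fps (10 s of video).
frames = 500
source_fps = 50.0
encode_seconds = 5.0  # "about 5 seconds" from the post

encode_fps = frames / encode_seconds       # frames encoded per second
realtime_factor = encode_fps / source_fps  # speed relative to playback

print(f"{encode_fps:.0f} fps -> {realtime_factor:.1f}x realtime")
```

Which works out to 100 fps, i.e. 2x realtime, matching the claim that x264 can keep pace.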

Last edited by Ranguvar; 8th April 2012 at 13:07.
Old 8th April 2012, 11:35   #27  |  Link
Mixer73
Registered User
 
Join Date: Nov 2007
Posts: 240
Quote:
Originally Posted by Ranguvar View Post
EDIT 2: Not sure this is worth doing, at least at this stage -- MediaEspresso is restricted to Baseline@4.0, 25fps, and CBR.
Not at all surprising for a hardware encoder to be limited to CBR.
Old 8th April 2012, 12:23   #28  |  Link
Atak_Snajpera
RipBot264 author
 
Join Date: May 2006
Location: Poland
Posts: 7,457
lol, and no CABAC, obviously (Baseline profile doesn't allow it). Typical.
Old 8th April 2012, 13:02   #29  |  Link
Ranguvar
Registered User
 
Join Date: Feb 2007
Location: ::1
Posts: 1,236
Quote:
Originally Posted by Mixer73 View Post
Not at all surprising for a hardware encoder to be limited to CBR.
Ah, I'm not very familiar with hardware encoders. Doesn't that pretty much destroy any chance of it even competing with x264, though?
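To make the CBR objection concrete, here is a toy model (entirely my own sketch, not how any real encoder allocates bits): with a flat bits-per-scene budget, easy scenes get more bits than they need while hard scenes get starved, whereas a VBR-style allocation spends bits in proportion to scene complexity.

```python
# Toy illustration of why CBR hurts: bits are spent uniformly over time,
# regardless of how hard each scene is to encode. All numbers are made up.
scene_complexity = [1.0, 1.0, 8.0, 1.0, 1.0]  # relative difficulty per scene
total_bits = 120.0                             # overall budget, arbitrary units

# CBR: every scene gets the same share of the budget.
cbr = [total_bits / len(scene_complexity)] * len(scene_complexity)

# VBR-ish: allocate bits proportionally to complexity, same total budget.
total_c = sum(scene_complexity)
vbr = [total_bits * c / total_c for c in scene_complexity]

# Crude "distortion" proxy: complexity divided by bits spent (lower = better).
worst_cbr = max(c / b for c, b in zip(scene_complexity, cbr))
worst_vbr = max(c / b for c, b in zip(scene_complexity, vbr))
print(f"worst-case distortion, CBR: {worst_cbr:.3f}  VBR: {worst_vbr:.3f}")
```

Under this (very crude) proxy the hardest scene fares several times worse with the flat allocation, which is the usual argument for 2-pass or constant-quality modes over CBR at the same average bitrate.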
Old 9th April 2012, 08:02   #30  |  Link
Dark Eiri
Registered User
 
Join Date: May 2006
Posts: 328
I remember when I used to get super excited when those GPU H264 encoders were announced or released.
Now I just sigh.
Old 9th April 2012, 10:40   #31  |  Link
hajj_3
Registered User
 
Join Date: Mar 2004
Posts: 979
Quote:
Originally Posted by Dark Eiri View Post
I remember when I used to get super excited when those GPU H264 encoders were announced or released.
Now I just sigh.
Why sigh? They're still great! It would be nice if x264 added support for GPU decoding; then it would become amazing.
Old 9th April 2012, 13:08   #32  |  Link
Atak_Snajpera
RipBot264 author
 
Join Date: May 2006
Location: Poland
Posts: 7,457
Quote:
Why sigh? They're still great! It would be nice if x264 added support for GPU decoding; then it would become amazing.
GPU encoders still suck. Baseline profile? Is this a joke? With x264 we even have access to 10-bit 4:4:4. GPU encoding in x264 will probably never be implemented; too much work. VP9 is supposed to be more OpenCL-friendly.
Old 10th April 2012, 00:21   #33  |  Link
Audionut
Registered User
 
Join Date: Nov 2003
Posts: 1,272
Quote:
Originally Posted by Atak_Snajpera View Post
GPU encoders still suck. Baseline profile? Is this a joke? With x264 we even have access to 10-bit 4:4:4. GPU encoding in x264 will probably never be implemented; too much work. VP9 is supposed to be more OpenCL-friendly.
That's nice. But it's got nothing to do with GPU decoding support.
__________________
http://www.7-zip.org/
Old 10th April 2012, 01:48   #34  |  Link
Ranguvar
Registered User
 
Join Date: Feb 2007
Location: ::1
Posts: 1,236
x264 does have GPU decoding support: it's called AviSynth input, where you can use whatever decoder tickles your fancy.
Native support would be nice, but I assume it's not a high priority because of that working solution.
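The same idea works without AviSynth as a plain pipe: let whatever decoder you prefer feed raw video into x264's stdin. A sketch of that, with ffmpeg standing in for the decoder; the exact command lines and file names here are illustrative, not a recommendation.

```python
import shutil
import subprocess

# Decoder emits Y4M on stdout; x264 reads Y4M on stdin.
decode_cmd = ["ffmpeg", "-i", "input.mkv", "-f", "yuv4mpegpipe", "-"]
encode_cmd = ["x264", "--demuxer", "y4m", "--preset", "slow",
              "--crf", "18", "-o", "output.264", "-"]

def run_pipeline():
    # Only attempt the pipe if both tools are actually installed.
    if not (shutil.which("ffmpeg") and shutil.which("x264")):
        print("ffmpeg/x264 not found; commands would be:")
        print(" ".join(decode_cmd), "|", " ".join(encode_cmd))
        return
    dec = subprocess.Popen(decode_cmd, stdout=subprocess.PIPE)
    subprocess.run(encode_cmd, stdin=dec.stdout, check=True)
    dec.stdout.close()
    dec.wait()

if __name__ == "__main__":
    run_pipeline()
```

Any decoder that can write Y4M (hardware-accelerated or not) can slot into the first half of the pipe.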
Old 10th April 2012, 11:52   #35  |  Link
Atak_Snajpera
RipBot264 author
 
Join Date: May 2006
Location: Poland
Posts: 7,457
Indeed, for decoding we have Nvidia cards and Quick Sync from Intel.
Old 25th May 2012, 10:24   #36  |  Link
itou
Registered User
 
Join Date: May 2009
Posts: 1
Hello all,
I use Vegas Pro 11 and would like some GPU acceleration. I have to choose between the GTX 580 and the GTX 670 at nearly the same price (+5% for the 670). If Vegas Pro implements the new NVENC API, I think that could be a big advantage over the GTX 580 in the encoding process.
But for the editing workflow and preview, can anyone tell me if the 670 is better than the 580?
Thanks.
(A trial version of Vegas Pro 11 is available on the Sony website.)
Old 25th May 2012, 14:49   #37  |  Link
SubOne
Registered User
 
Join Date: May 2009
Posts: 32
There's also VCE for AMD, but for whatever reason they haven't released working drivers yet, even though the 7000 series has been on the market for 5 months!
Old 26th May 2012, 15:50   #38  |  Link
littleD
Registered User
 
Join Date: Aug 2008
Posts: 337
The release notes of the AMD APP SDK 2.7 say:
Additional features supported in SDK 2.7 and the Catalyst 12.4 drivers include:
• Video encode using VCE Encode (Win7)
• Open Encode update (12.4)


Looks like it is supported. Could it be the new AMD MFT encoder? I wonder what Open Encode is; something similar to OpenCLDecode?
Old 4th June 2012, 04:53   #39  |  Link
zn
Registered User
 
Join Date: Jan 2009
Posts: 85
New article with NVENC testing:

http://www.hardware.fr/focus/imprimer/67/

Funny thing: at first they published the article with results from MediaEspresso in CUDA mode; now it is updated with NVENC-mode results.
Old 13th January 2013, 04:39   #40  |  Link
MOS-Marauder
Registered User
 
Join Date: Apr 2006
Posts: 134
Some more information...

https://developer.nvidia.com/sites/d...09-001_v02.pdf

Mara