Welcome to Doom9's Forum, THE in-place to be for everyone interested in DVD conversion.

Before you start posting please read the forum rules. By posting to this forum you agree to abide by the rules.

 

Go Back   Doom9's Forum > General > DVD2AVI / DGIndex

Old 12th July 2010, 17:52   #2041  |  Link
RedDwarf1
RedDwarf Fan
 
Join Date: Jun 2005
Location: United Kingdom
Posts: 188
Quote:
Originally Posted by tormento View Post
Today I'll feed some memory intensive jobs to your newer version.

Yep, just waiting for a less hungry Fermi.
Why not just get a GT240 GDDR5 1GB? I have a Zotac GT240 GDDR5 1GB card with 4GHz Samsung memory chips which overclock quite well: 3400MHz by default, and they reach 4GHz easily using Zotac Firestorm. But memory bandwidth has never been an issue, as utilisation has never gone over 20% so far.

The Zotac card is a single slot card so leaves more expansion slots free than dual slot designs.

Fermi and the GT 240 are meant to share the same VP engine; it's only gaming or GPU encoding where Fermi would be a benefit, and GPU encoding has been a waste of time so far, plus the improved audio output if that is needed on a PC. So why spend large sums on cards that are largely unnecessary?

I think a comparison of cards' decoding capabilities and speeds would be beneficial, if suitable clips could be found for comparing cards.

I wonder whether there has been any speed improvement in the VP engine in recent nVidia cards, as has been suggested.

How about it? Are you up for it?

Last edited by RedDwarf1; 12th July 2010 at 18:48.
RedDwarf1 is offline   Reply With Quote
Old 12th July 2010, 19:36   #2042  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Quote:
Originally Posted by RedDwarf1 View Post
How about it? Are you up for it?
Who, me? I can't afford to buy all those different cards.

VP engine clock rate and memory bandwidth are the specs to compare. We could all post our card and what the Nvidia control panel shows for those specs.

My card at work (my home card is better, a 9600GT):

NVIDIA System Information report created on: 07/12/2010 13:42:58
System name: SCHDTLABPC1

[Display]
Processor: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2004 MHz)
Operating System: Microsoft Windows XP, 32-bit (Service Pack 3)
DirectX version: 9.0c
GPU processor: GeForce 8500 GT
Driver version: 190.38
Stream processors: 16
Core clock: 500 MHz
Shader clock: 1020 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit
Memory: 1024 MB
Video BIOS version: 60.86.41.00.23
IRQ: 16
Bus: PCI Express x16
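Reports in this format are regular enough to tabulate automatically. As a rough sketch (the `parse_sysinfo` helper is hypothetical, not part of DGIndexNV or any tool in this thread), one could pull the key specs out of a pasted report like this:

```python
import re

def parse_sysinfo(report: str) -> dict:
    """Split an NVIDIA System Information report into a key/value dict."""
    specs = {}
    for line in report.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            specs[key.strip()] = value.strip()
    return specs

# Excerpt of the report pasted above
report = """\
GPU processor: GeForce 8500 GT
Core clock: 500 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit
"""

specs = parse_sysinfo(report)
core_mhz = int(re.match(r"\d+", specs["Core clock"]).group())
bus_bits = int(re.match(r"\d+", specs["Memory interface"]).group())
print(specs["GPU processor"], core_mhz, bus_bits)  # GeForce 8500 GT 500 128
```

With everyone's reports collected, the extracted numbers could then be compared side by side.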

Last edited by Guest; 12th July 2010 at 19:41.
Guest is offline   Reply With Quote
Old 12th July 2010, 20:11   #2043  |  Link
GZZ
Registered User
 
Join Date: Jan 2002
Posts: 558
I have a 9500 GT; it gives the same speed as my GTX 260, which I have somewhere and only put in when I want to play... not very often.

Here are the specs of my 9500 GT card (passively cooled, priced at around 350 DKK, roughly $70); max power use is ~50 W.

[Display]
Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz (3200 MHz)
Operating System: Windows 7 Ultimate, 64-bit
DirectX version: 11.0
GPU processor: GeForce 9500 GT
Driver version: 197.45
CUDA Cores: 32
Core clock: 550 MHz
Shader clock: 1400 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit
Total available graphics memory: 2815 MB
Dedicated video memory: 1024 MB
System video memory: 0 MB
Shared system memory: 1791 MB
Video BIOS version: 62.94.3C.00.00
IRQ: 16
Bus: PCI Express x16 Gen2
GZZ is offline   Reply With Quote
Old 12th July 2010, 20:19   #2044  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
You have 2GB? I hate you.

OK, I wanna see the most kick-ass specs. Bring it on!

P.S. I believe the VP is clocked off the core clock.
Guest is offline   Reply With Quote
Old 12th July 2010, 20:54   #2045  |  Link
Sharktooth
Mr. Sandman
 
Sharktooth's Avatar
 
Join Date: Sep 2003
Location: Haddonfield, IL
Posts: 11,768
I can post my ATI card specs... those rox... but I think they won't be useful LOL
Sharktooth is offline   Reply With Quote
Old 12th July 2010, 21:38   #2046  |  Link
cweb
Registered User
 
cweb's Avatar
 
Join Date: Oct 2002
Location: The Pandorica
Posts: 527
This is my main system, with a GTX 285 card.

[Display]
Processor: AMD Athlon(tm) 64 FX-62 Dual Core Processor (2800 MHz)
Operating System: Windows 7 Professional, 32-bit
DirectX version: 11.0
GPU processor: GeForce GTX 285
Driver version: 196.21
CUDA Cores: 240
Memory interface: 512-bit
Total available graphics memory: 2303 MB
Dedicated video memory: 1024 MB
System video memory: 0 MB
Shared system memory: 1279 MB
Video BIOS version: 62.00.45.00.00
IRQ: 16
Bus: PCI Express x16
__________________
PC specs for bug reports: Athlon 64-bit Phenom II X6 2.8Ghz Win7/Linux PCI express NVIDIA GTX 285 graphics card
http://twitter.com/cwebdesign
cweb is offline   Reply With Quote
Old 12th July 2010, 21:50   #2047  |  Link
RedDwarf1
RedDwarf Fan
 
Join Date: Jun 2005
Location: United Kingdom
Posts: 188
Quote:
Originally Posted by neuron2 View Post
Who, me? I can't afford to buy all those different cards.
No, I was referring to tormento. You have already said you don't have the time to do it.

Different system components, such as the CPU, Windows version and GPU, might influence decoding speed, but how big a difference the GPU makes is a question that has never been answered. Do we really need the latest, fastest, most power-hungry card for fast video decoding, or will a very basic card be sufficient? For H.264 transcoding the card probably doesn't make much difference, as encoding is the slow part and decoding speed is unlikely to be the bottleneck. It's only when encoding from HD to SD that decoding speed becomes an issue, as encoding is very fast and often faster than decoding. That's a big issue for anyone doing such conversions.

If I were to encode an XviD from HD video, I can decode at about 100 fps for 1440x1088i@25fps, but I can encode at around 140+ fps from an SD MPEG-2 source. So the bottleneck would be decoding the HD video on the GPU.
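The bottleneck reasoning above can be sketched in one line: if decode and encode overlap as a pipeline, overall throughput is set by the slower stage. A minimal illustration, assuming full overlap between the stages (real jobs may serialise differently):

```python
def pipeline_fps(decode_fps: float, encode_fps: float) -> float:
    """Throughput of a decode -> encode pipeline is limited by the slower stage."""
    return min(decode_fps, encode_fps)

# RedDwarf1's figures: HD decode ~100 fps, XviD encode ~140+ fps
print(pipeline_fps(100, 140))  # 100 -> decoding is the bottleneck
```

So for HD sources the GPU decode rate caps the whole job, whereas for SD sources the encoder is usually the limit.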

Quote:
VP engine clock rate and memory bandwidth are the specs to compare. We could all post our card and what the Nvidia control panel shows for those specs.
Your nVidia contact said that the VP engine is usually clocked the same right across the nVidia chipset range. It usually clocks at 250MHz but in some cases can be clocked lower for lower-end cards. That is what was shown in your development notes.

Quote:
My card at work (my home card is better, a 9600GT):

NVIDIA System Information report created on: 07/12/2010 13:42:58
System name: SCHDTLABPC1

[Display]
Processor: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2004 MHz)
Operating System: Microsoft Windows XP, 32-bit (Service Pack 3)
DirectX version: 9.0c
GPU processor: GeForce 8500 GT
Driver version: 190.38
Stream processors: 16
Core clock: 500 MHz
Shader clock: 1020 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit
Memory: 1024 MB
Video BIOS version: 60.86.41.00.23
IRQ: 16
Bus: PCI Express x16
I don't think the core clock has anything to do with the VP engine clock; I think they are totally separate and don't influence one another. It's the VP engine that decodes the video, as you have mentioned before, and that is what GPU-Z shows. The GPU load is usually fairly low when decoding video, but the video engine hits 99% on mine.

What might be helpful would be some test videos which people could preview in DGIndexNV, then post the average decode speed as a screenshot of the window, along with a GPU-Z image of the graphics card details and maybe some shots from CPU-Z. Then we could see whether the graphics card makes any difference.

The only HD video I have is HD broadcast video from BBC HD & ITV HD TV channels which is 1440x1088i@25fps.

Movie trailers might be useful if those are permitted by forum rules.

@neuron2

They are all 1GB cards; note the "Dedicated video memory: 1024 MB" line. I don't think they have 2GB of physical memory. His system looks like it might have onboard AMD video, which might be why it shows over 2GB; the total reported is the sum of shared and dedicated video memory.
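The arithmetic behind that explanation checks out against GZZ's report: the "Total available graphics memory" line is simply dedicated plus shared system memory. A trivial check using his numbers:

```python
dedicated_mb = 1024  # "Dedicated video memory" from GZZ's report
shared_mb = 1791     # "Shared system memory" Windows allows the GPU
total_mb = dedicated_mb + shared_mb
print(total_mb)  # 2815 MB, matching "Total available graphics memory"
```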
  1. Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz (2666 MHz)
  2. Operating System: Microsoft Windows XP, 32-bit (Service Pack 3)
  3. DirectX version: 9.0
  4. GPU processor: GeForce GT 240
  5. Driver version: 197.45
  6. CUDA Cores: 96
  7. Core clock: 550 MHz
  8. Shader clock: 1340 MHz
  9. Memory clock: 1700 MHz (3400 MHz data rate)
  10. Memory interface: 128-bit
  11. Memory: 1024 MB
  12. Video BIOS version: 70.15.38.00.01
  13. IRQ: 16
  14. Bus: PCI Express x16 Gen2

or
  1. Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz (2666 MHz)
  2. Operating System: Microsoft Windows XP, 32-bit (Service Pack 3)
  3. DirectX version: 9.0
  4. GPU processor: GeForce GT 240
  5. Driver version: 197.45
  6. CUDA Cores: 96
  7. Core clock: 550 MHz
  8. Shader clock: 1340 MHz
  9. Memory clock: 2000 MHz (4000 MHz data rate) A bit more memory bandwidth if needed
  10. Memory interface: 128-bit
  11. Memory: 1024 MB
  12. Video BIOS version: 70.15.38.00.01
  13. IRQ: 16
  14. Bus: PCI Express x16 Gen2

Last edited by RedDwarf1; 12th July 2010 at 22:28.
RedDwarf1 is offline   Reply With Quote
Old 12th July 2010, 21:56   #2048  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
650, not 250.

I know the card makes a difference. On one poor system with an 8500 at work I get 15 fps decoding on a stream for which I get 68 fps at home.
Guest is offline   Reply With Quote
Old 12th July 2010, 22:09   #2049  |  Link
RedDwarf1
RedDwarf Fan
 
Join Date: Jun 2005
Location: United Kingdom
Posts: 188
Quote:
Originally Posted by neuron2 View Post
650 not 250.
I think we both got the wrong numbers.

The notes say 450MHz, with some cards running at 400MHz; search the notes for "MHz".

Quote:
Originally Posted by neuron2 View Post
I know the card makes a difference. On one poor system with 8500 at work I get 15 fps decoding on a stream for which I get 68 fps at home.
Different video (bitrate, resolution, framerate) can decode at different speeds. Those are all factors, and the VP engine/GPU might also play a part.

As I said, I get around 100fps for 1440x1088i@25fps low-bitrate HDTV. I don't have any Blu-ray or high-bitrate sources to check, so I don't know what mine is capable of.

It would be interesting to know whether a badass high end card provides any benefit for such uses or whether a more power efficient low end card is just as good.

Does the updated VP engine make any difference to decoding speed? Is VP4 beneficial? In what way: speed, quality, the formats it can decode? Some answers are known from various sources, but others are not, and these things can only really be established through testing.

Last edited by RedDwarf1; 12th July 2010 at 22:18.
RedDwarf1 is offline   Reply With Quote
Old 12th July 2010, 22:16   #2050  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Quote:
Originally Posted by RedDwarf1 View Post
It would be interesting to know whether a badass high end card provides any benefit for such uses or whether a more power efficient low end card is just as good.
I just got done saying I know it makes a difference. Are you doubting that or just missed my statement?
Guest is offline   Reply With Quote
Old 12th July 2010, 22:59   #2051  |  Link
Blue_MiSfit
Derek Prestegard IRL
 
Blue_MiSfit's Avatar
 
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,711
Awesome work, neuron2! This is a very solid version of DGNV!

Here's a question for you: say I need to run a lot of 1080p transcodes at once, enough to overrun even a 2GB card.

If I install multiple cards in the system, will DGNV spawn instances on all the cards, or is it restricted to one?

Derek
Blue_MiSfit is offline   Reply With Quote
Old 12th July 2010, 23:46   #2052  |  Link
RedDwarf1
RedDwarf Fan
 
Join Date: Jun 2005
Location: United Kingdom
Posts: 188
Quote:
Originally Posted by neuron2 View Post
I just got done saying I know it makes a difference. Are you doubting that or just missed my statement?
I don't doubt that; I just wondered how much of a difference it makes.

From what I have seen, memory bandwidth doesn't seem to be a big issue on my card so far, as usage has never gone over 20% of the default 54.4GB/s bandwidth on a 128-bit bus. If the VP cores run at similar speeds, where is the difference coming from? Are newer VP cores more efficient at decoding and hence faster?
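That 54.4GB/s figure follows directly from the effective data rate and bus width. A quick check, assuming the GT 240's 3400MHz effective GDDR5 rate on its 128-bit bus:

```python
def mem_bandwidth_gb_s(data_rate_mhz: float, bus_bits: int) -> float:
    """Peak memory bandwidth: effective data rate times bus width in bytes."""
    return data_rate_mhz * 1e6 * (bus_bits / 8) / 1e9

print(mem_bandwidth_gb_s(3400, 128))  # 54.4 GB/s for the stock GT 240 GDDR5
```

At the 4GHz overclock mentioned earlier the same formula gives 64 GB/s.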

Data would reveal a lot and remove a lot of guesswork on which card to choose.
RedDwarf1 is offline   Reply With Quote
Old 13th July 2010, 00:30   #2053  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Quote:
Originally Posted by Blue_MiSfit View Post
Here's a question for you: say I need to run a lot of 1080p transcodes at once, enough to overrun even a 2GB card.

If I install multiple cards in the system, will DGNV spawn instances on all the cards, or is it restricted to one?
Multiple instances are supported, of course, but currently a new instance looks only at the first CUDA-enabled card. tormento has already asked for a way to specify which of multiple cards should be used. Now you bring up another mode that should be supported: dynamically find a card that has enough memory for another instance. Adding it to the to-do list.
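That "find a card with enough memory" mode is essentially a first-fit search over the installed devices. A sketch of just the selection logic (the free-memory figures below are hypothetical; a real implementation would query each card through the CUDA API, e.g. cuMemGetInfo, per device):

```python
def pick_device(free_mem_mb: list, needed_mb: int):
    """Return the index of the first device with enough free memory, else None."""
    for idx, free in enumerate(free_mem_mb):
        if free >= needed_mb:
            return idx
    return None

# Hypothetical free-memory readings for three installed cards, in MB
print(pick_device([120, 900, 2048], 512))  # 1 -> the second card has room
```

A best-fit variant (picking the card with the most free memory) would spread instances more evenly, at the cost of an extra pass over the device list.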
Guest is offline   Reply With Quote
Old 13th July 2010, 01:35   #2054  |  Link
Sharktooth
Mr. Sandman
 
Sharktooth's Avatar
 
Join Date: Sep 2003
Location: Haddonfield, IL
Posts: 11,768
Something I might be interested in.
The 9500GT comes with stock frequencies of 550/1400/400 (GPU/shaders/memory); while this is pretty "low", I can safely overclock it (without a volt mod) to 700/1820/510.
Does overclocking also influence video decoding speed, or is there a dedicated video decoding unit that's completely independent of the core/shader/memory clocks?
Sharktooth is offline   Reply With Quote
Old 13th July 2010, 02:03   #2055  |  Link
adiabatic
Registered User
 
adiabatic's Avatar
 
Join Date: Oct 2008
Posts: 17
Main PC:

[Display]
Processor: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz (2673 MHz)
Operating System: Windows 7 Professional, 64-bit
DirectX version: 11.0
GPU processor: GeForce GTX 275
Driver version: 257.21
CUDA Cores: 240
Core clock: 648 MHz
Shader clock: 1458 MHz
Memory clock: 1188 MHz (2376 MHz data rate)
Memory interface: 448-bit
Total available graphics memory: 3707 MB
Dedicated video memory: 896 MB GDDR3
System video memory: 0 MB
Shared system memory: 2811 MB
Video BIOS version: 62.00.60.00.70
IRQ: 24
Bus: PCI Express x16 Gen2

Secondary PC (my old workhorse)
[Display]
Processor: Intel(R) Core(TM)2 CPU 4400 @ 2.00GHz (1995 MHz)
Operating System: Windows Vista (TM) Home Premium, 32-bit (Service Pack 2)
DirectX version: 10.1
GPU processor: GeForce 8600 GT
Driver version: 257.21
CUDA Cores: 32
Core clock: 540 MHz
Shader clock: 1188 MHz
Memory clock: 700 MHz (1400 MHz data rate)
Memory interface: 128-bit
Total available graphics memory: 1021 MB
Dedicated video memory: 256 MB GDDR3
System video memory: 0 MB
Shared system memory: 765 MB
Video BIOS version: 60.84.51.00.00
IRQ: 16
Bus: PCI Express x16

Last edited by adiabatic; 13th July 2010 at 02:06.
adiabatic is offline   Reply With Quote
Old 13th July 2010, 02:16   #2056  |  Link
MrVideo
Registered User
 
MrVideo's Avatar
 
Join Date: May 2007
Location: Wisconsin
Posts: 1,694
Here is mine, for the new quadcore. The same graphics card is in the dual core system.

Code:
[Display]
Processor:              AMD Phenom(tm) II X4 965 Processor (3411 MHz)
Operating System:       Microsoft Windows XP, 32-bit (Service Pack 2)
DirectX version:        9.0 
GPU processor:          GeForce GT 240
Driver version:         257.21
CUDA Cores:             96 
Core clock:             550 MHz 
Shader clock:           1340 MHz
Memory clock:           1700 MHz (3400 MHz data rate) 
Memory interface:       128-bit 
Memory:                 512 MB
Memory type:            GDDR5
Video BIOS version:     70.15.2C.00.51
IRQ:                    24
Bus:                    PCI Express x16 Gen2
MrVideo is offline   Reply With Quote
Old 13th July 2010, 03:19   #2057  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Quote:
Originally Posted by adiabatic View Post
Memory interface: 448-bit
Hit man is on the way.

I think we should hold off on further reports and do what RedDwarf1 suggested. I will get a sample clip prepared with instructions for playing it in DGIndexNV, and then we will report the data above together with our FPS. Stand by...
Guest is offline   Reply With Quote
Old 13th July 2010, 03:21   #2058  |  Link
Guest
Guest
 
Join Date: Jan 2002
Posts: 21,922
Quote:
Originally Posted by Sharktooth View Post
Does the overclocking influence also video decoding speed or has it a dedicated unit for video decoding that's completely independent from core/shaders/memory clocks?
Good question. I'll see what Nvidia has to say about it. I'll also ask if there is a way to determine the VP engine clock rate.
Guest is offline   Reply With Quote
Old 13th July 2010, 03:53   #2059  |  Link
Audionut
Registered User
 
Join Date: Nov 2003
Posts: 1,271
I asked the RivaTuner author about these things not long after DGIndexNV was released and got this response.

Quote:
Hmm, interesting. I did some researches tonight and it looks like there is indeed separate 4xx MHz clocked domain in the chip which can be used to feed VP independently of SPs. However, forget about controlling it: drivers doesn't provide overclocking support for it and implementing low-level way doesn't worth the efforts from my POV.
http://forums.guru3d.com/showthread.php?t=274311

Perhaps a few of you guys should register there and bug the dev about adding overclock support for the chip.
__________________
http://www.7-zip.org/

Last edited by Audionut; 13th July 2010 at 03:56.
Audionut is offline   Reply With Quote
Old 13th July 2010, 04:17   #2060  |  Link
adiabatic
Registered User
 
adiabatic's Avatar
 
Join Date: Oct 2008
Posts: 17
Quote:
Originally Posted by neuron2 View Post
Hit man is on the way.
It is a nice card. It rips through SETI work units like crazy too!
adiabatic is offline   Reply With Quote