12th July 2010, 17:52 | #2041
RedDwarf Fan
Join Date: Jun 2005
Location: United Kingdom
Posts: 198
The Zotac card is a single-slot design, so it leaves more expansion slots free than dual-slot cards. Fermi and the GT 240 are meant to share the same VP engine; it's only gaming or GPU encoding that Fermi would benefit, and GPU encoding has been a waste of time so far (plus the improved audio output, if that's needed on a PC). So why spend large sums on cards that are largely unnecessary? I think a comparison of the cards' decoding capabilities and speeds would be beneficial, if suitable clips could be found for comparing cards. I wonder whether there has been any speed improvement in the VP engine in recent Nvidia cards, as has been suggested. How about it? Are you up for it?

Last edited by RedDwarf1; 12th July 2010 at 18:48.
12th July 2010, 19:36 | #2042
Guest
Join Date: Jan 2002
Posts: 21,901
Who, me? I can't afford to buy all those different cards.
VP engine clock rate and memory bandwidth are the specs to compare. We could all post our card and what the Nvidia control panel shows for those specs. My card at work (my home card is better, a 9600GT):

Code:
NVIDIA System Information report created on: 07/12/2010 13:42:58
System name: SCHDTLABPC1

[Display]
Processor: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ (2004 MHz)
Operating System: Microsoft Windows XP, 32-bit (Service Pack 3)
DirectX version: 9.0c
GPU processor: GeForce 8500 GT
Driver version: 190.38
Stream processors: 16
Core clock: 500 MHz
Shader clock: 1020 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit
Memory: 1024 MB
Video BIOS version: 60.86.41.00.23
IRQ: 16
Bus: PCI Express x16

Last edited by Guest; 12th July 2010 at 19:41.
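Since everyone is pasting the same control-panel report format, a small script could pull the comparison fields out automatically. A minimal sketch in Python; the field names come from the report layout above, and the script itself is an illustration, not part of any tool in this thread:

```python
# Sketch: extract "Field: value" pairs from an NVIDIA System Information
# report pasted as plain text, so the comparison specs (clocks, bus width)
# can be collected from many posts. Splits on the first colon per line.
def parse_report(text):
    specs = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            specs[key.strip()] = value.strip()
    return specs

# A fragment of the report above, used as sample input.
report = """GPU processor: GeForce 8500 GT
Core clock: 500 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit"""

specs = parse_report(report)
print(specs["Core clock"])        # -> 500 MHz
print(specs["Memory interface"])  # -> 128-bit
```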
12th July 2010, 20:11 | #2043
Registered User
Join Date: Jan 2002
Posts: 581
|
I have a 9500 GT; it gives the same speed as my GTX 260, which I have somewhere and only put in when I want to play... not very often.

Here are the specs of my 9500 GT card (passively cooled, priced around 350 DKK ~ $70), max power draw ~50 W:

Code:
[Display]
Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz (3200 MHz)
Operating System: Windows 7 Ultimate, 64-bit
DirectX version: 11.0
GPU processor: GeForce 9500 GT
Driver version: 197.45
CUDA Cores: 32
Core clock: 550 MHz
Shader clock: 1400 MHz
Memory clock: 400 MHz (800 MHz data rate)
Memory interface: 128-bit
Total available graphics memory: 2815 MB
Dedicated video memory: 1024 MB
System video memory: 0 MB
Shared system memory: 1791 MB
Video BIOS version: 62.94.3C.00.00
IRQ: 16
Bus: PCI Express x16 Gen2
12th July 2010, 20:54 | #2045
Mr. Sandman
Join Date: Sep 2003
Location: Haddonfield, IL
Posts: 11,768
|
I can post my ATI card specs... those rock... but I don't think they'd be useful. LOL
__________________
MPEG-4 ASP Custom Matrices: EQM V1(old), EQM AutoGK Sharpmatrix (aka EQM V2), EQM V3HR (updated 01/10/2004), EQM V3LR, EQM V3ULR (updated 04/02/2005), EQM V3UHR (updated 17/12/2004) and EQM V3EHR (updated 05/10/2004) Info about my ASP matrices. MPEG-4 AVC Custom Matrices: EQM AVC-HR Info about my AVC matrices My x264 builds. Mooo!!! |
12th July 2010, 21:38 | #2046
Registered User
Join Date: Oct 2002
Location: The Pandorica
Posts: 527
|
This is my main system, with a GTX 285 card.
Code:
[Display]
Processor: AMD Athlon(tm) 64 FX-62 Dual Core Processor (2800 MHz)
Operating System: Windows 7 Professional, 32-bit
DirectX version: 11.0
GPU processor: GeForce GTX 285
Driver version: 196.21
CUDA Cores: 240
Memory interface: 512-bit
Total available graphics memory: 2303 MB
Dedicated video memory: 1024 MB
System video memory: 0 MB
Shared system memory: 1279 MB
Video BIOS version: 62.00.45.00.00
IRQ: 16
Bus: PCI Express x16
__________________
PC specs for bug reports: Intel Core i7-4790K @4GHz, Win10 (Linux VM), PCI Express NVIDIA RTX 2060 SUPER graphics card
http://twitter.com/cwebdesign
12th July 2010, 21:50 | #2047
RedDwarf Fan
Join Date: Jun 2005
Location: United Kingdom
Posts: 198
|
No, I was referring to tormento. You have already said you don't have the time to do it.
Different systems (CPU, Windows version, GPU) might influence decoding speed, but how big a difference the GPU makes is a question that has never been settled. Do we really need the latest, fastest, most power-hungry card for fast video decoding, or will a very basic card be sufficient? For H.264 transcoding the card probably doesn't make much difference, since encoding is the slow part and decoding speed is unlikely to be the bottleneck. It's only when encoding from HD to SD that decoding speed becomes an issue, as encoding is then very fast, often faster than the decoding. That's a big issue for anyone doing such things. If I were to encode an XviD from HD video, I can decode at about 100 fps for 1440x1088i@25fps, but I can encode at around 140+ fps from an SD MPEG-2 source, so the bottleneck would be decoding the HD video using the GPU.
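The bottleneck claim can be sanity-checked with a little arithmetic. A hypothetical sketch, assuming decode and encode run concurrently as a pipeline so that overall throughput is capped by the slower stage (the fps figures are the ones quoted above):

```python
# Sketch: when decoding and encoding run concurrently as a pipeline,
# the sustained frame rate is limited by the slower of the two stages.
def pipeline_fps(decode_fps, encode_fps):
    return min(decode_fps, encode_fps)

# Figures from the post: ~100 fps HD decode, ~140 fps SD XviD encode.
print(pipeline_fps(100, 140))  # -> 100 (HD decoding is the bottleneck)
```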
What might be helpful would be to have some test videos which people could preview in DGIndexNV, then post the average decode speed as a screenshot of the window, along with a GPU-Z image of the graphics-card details and maybe some shots from CPU-Z. Then we could see whether the graphics card makes any difference. The only HD video I have is broadcast HD from the BBC HD and ITV HD TV channels, which is 1440x1088i@25fps. Movie trailers might be useful if those are permitted by forum rules.

@neuron2: They are all 1GB cards; note the "Dedicated video memory: 1024 MB". I don't think they have 2GB of memory. His looks like it might have onboard AMD video, so that might be why his shows as over 2GB; the total graphics memory is the sum of shared and dedicated video memory.
Last edited by RedDwarf1; 12th July 2010 at 22:28. |
12th July 2010, 22:09 | #2049
RedDwarf Fan
Join Date: Jun 2005
Location: United Kingdom
Posts: 198
|
I think we both got the wrong numbers.
The notes say 450 MHz, with some running at 400 MHz. Search the notes for "MHz".
As I said, I get around 100 fps for 1440x1088i@25fps low-bitrate HDTV. I don't have any sources to check Blu-ray or high-bitrate video, so I don't know what mine is capable of. It would be interesting to know whether a badass high-end card provides any benefit for such uses, or whether a more power-efficient low-end card is just as good. Does the updated VP engine make any difference to decoding speed? Is VP4 beneficial? In what way: speed, quality, the video it can decode? Some of this is known from information in various sources, but other things are not, and they can only really be settled through testing.

Last edited by RedDwarf1; 12th July 2010 at 22:18.
12th July 2010, 22:59 | #2051
Derek Prestegard IRL
Join Date: Nov 2003
Location: Los Angeles
Posts: 5,988
|
Awesome work, neuron2! This is a very solid version of DGNV!
Here's a question for you: say I need to run a lot of 1080p transcodes at once, enough to overrun even a 2GB card. If I install multiple cards in the system, will DGNV spawn instances on all the cards, or is it restricted to one?

Derek
__________________
These are all my personal statements, not those of my employer :) |
12th July 2010, 23:46 | #2052
RedDwarf Fan
Join Date: Jun 2005
Location: United Kingdom
Posts: 198
|
From what I have seen, memory bandwidth doesn't seem to be a big issue on my card so far: usage has never gone over 20% of the default 54.4 GB/s bandwidth on a 128-bit bus. If the VP cores run at similar speeds, where is the difference coming from? Are newer VP cores more efficient at decoding, and hence faster? Data would reveal a lot and remove much of the guesswork in choosing a card.
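The 54.4 GB/s figure follows directly from the card's reported memory specs. A quick sanity check, assuming the standard rule of thumb that peak bandwidth is the data rate times the bus width in bytes (the GT 240 numbers are from the report posted earlier in the thread):

```python
# Sketch: peak memory bandwidth = data rate (transfers/s) * bus width (bytes).
def bandwidth_gb_s(data_rate_mhz, bus_bits):
    return data_rate_mhz * 1e6 * (bus_bits / 8) / 1e9

# GT 240 with GDDR5: 3400 MHz data rate on a 128-bit bus.
print(bandwidth_gb_s(3400, 128))  # -> 54.4
```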
13th July 2010, 00:30 | #2053
Guest
Join Date: Jan 2002
Posts: 21,901
|
Multiple instances are supported of course. But currently a new instance looks at only the first CUDA-enabled card. tormento has already asked for a way to specify which of multiple cards should be used. Now you bring up another mode that should be supported: dynamically find a card that has enough memory for another instance. Adding to the to-do list.
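The "find a card with enough memory" mode could be as simple as a first-fit scan over the installed cards. A hypothetical sketch; the card names and free-memory figures are invented for illustration, and a real implementation would query the CUDA driver for each device's free memory rather than use a hard-coded list:

```python
# Sketch: first-fit selection of a GPU with enough free memory for a new
# decoder instance. The (name, free_mb) pairs stand in for a real query
# of each CUDA device's free memory.
def pick_device(cards, needed_mb):
    for index, (name, free_mb) in enumerate(cards):
        if free_mb >= needed_mb:
            return index, name
    return None  # no card has room; the caller must wait or fail

cards = [("GeForce GTX 275", 300), ("GeForce GT 240", 900)]
print(pick_device(cards, 512))  # -> (1, 'GeForce GT 240')
```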
13th July 2010, 01:35 | #2054
Mr. Sandman
Join Date: Sep 2003
Location: Haddonfield, IL
Posts: 11,768
|
Something I might be interested in.

The 9500 GT comes with stock frequencies of 550/1400/400 (GPU/shaders/memory). While this is pretty "low", I can safely overclock it (without a volt mod) to 700/1820/510. Does overclocking also influence video decoding speed, or is there a dedicated video-decoding unit that is completely independent of the core/shader/memory clocks?
13th July 2010, 02:03 | #2055
Registered User
Join Date: Oct 2008
Posts: 17
|
Main PC:
Code:
[Display]
Processor: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz (2673 MHz)
Operating System: Windows 7 Professional, 64-bit
DirectX version: 11.0
GPU processor: GeForce GTX 275
Driver version: 257.21
CUDA Cores: 240
Core clock: 648 MHz
Shader clock: 1458 MHz
Memory clock: 1188 MHz (2376 MHz data rate)
Memory interface: 448-bit
Total available graphics memory: 3707 MB
Dedicated video memory: 896 MB GDDR3
System video memory: 0 MB
Shared system memory: 2811 MB
Video BIOS version: 62.00.60.00.70
IRQ: 24
Bus: PCI Express x16 Gen2

Secondary PC (my old workhorse):

Code:
[Display]
Processor: Intel(R) Core(TM)2 CPU 4400 @ 2.00GHz (1995 MHz)
Operating System: Windows Vista (TM) Home Premium, 32-bit (Service Pack 2)
DirectX version: 10.1
GPU processor: GeForce 8600 GT
Driver version: 257.21
CUDA Cores: 32
Core clock: 540 MHz
Shader clock: 1188 MHz
Memory clock: 700 MHz (1400 MHz data rate)
Memory interface: 128-bit
Total available graphics memory: 1021 MB
Dedicated video memory: 256 MB GDDR3
System video memory: 0 MB
Shared system memory: 765 MB
Video BIOS version: 60.84.51.00.00
IRQ: 16
Bus: PCI Express x16

Last edited by adiabatic; 13th July 2010 at 02:06.
13th July 2010, 02:16 | #2056
Registered User
Join Date: May 2007
Location: Wisconsin
Posts: 2,129
|
Here is mine, for the new quad-core. The same graphics card is in the dual-core system.
Code:
[Display]
Processor: AMD Phenom(tm) II X4 965 Processor (3411 MHz)
Operating System: Microsoft Windows XP, 32-bit (Service Pack 2)
DirectX version: 9.0
GPU processor: GeForce GT 240
Driver version: 257.21
CUDA Cores: 96
Core clock: 550 MHz
Shader clock: 1340 MHz
Memory clock: 1700 MHz (3400 MHz data rate)
Memory interface: 128-bit
Memory: 512 MB
Memory type: GDDR5
Video BIOS version: 70.15.2C.00.51
IRQ: 24
Bus: PCI Express x16 Gen2
13th July 2010, 03:19 | #2057
Guest
Join Date: Jan 2002
Posts: 21,901
|
Hit man is on the way.
I think we should hold off on further reports and do what RedDwarf1 suggested. I will get a sample clip prepared, with instructions for playing it in DGIndexNV, and then we will report the data above together with our FPS. Stand by...
13th July 2010, 03:53 | #2059
Registered User
Join Date: Nov 2003
Posts: 1,281
|
I asked the RivaTuner author about these things not long after DGIndexNV was released and got this response.
Quote:
Perhaps a few of you guys should register there and bug the dev about adding overclock support for the chip.
__________________
http://www.7-zip.org/

Last edited by Audionut; 13th July 2010 at 03:56.