Old 11th July 2012, 21:56   #841  |  Link
VideoFanatic
Registered User
 
Join Date: Sep 2011
Posts: 241
Encoding Speeds with Several Cores & Cuda?

I currently have a Dual Core PC with 32-bit Windows Vista. I use Avisynth (with Set's MT mode) with Simple x264 Launcher to encode a 1 hour 30 minute standard definition video to h264:

Here is my script:

Code:
setmtmode(5,2)
Mpeg2Source("H:\New\Raw 1996 March 11 new.d2v", CPU=6)
setmtmode(2,0)
Load_Stdcall_plugin("C:\Program Files\AviSynth 2.5\plugins\yadif.dll")
QTGMC(Preset="Ultra Fast") 
Vinverse()
TTempSmoothF(maxr=3, lthresh=8, cthresh=5, strength=4, interlaced=true)
It takes 8 hours to encode at around 10 FPS. How much faster would a quad core be with Vista 64-bit?

How much faster would an 8 core be?

Does having a better graphics card speed up encoding? Should I get a graphics card with CUDA if that will speed up encoding?
VideoFanatic is offline   Reply With Quote
Old 11th July 2012, 22:10   #842  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Switching from a dual-core to an octa-core processor of the same generation (same microarchitecture) will roughly multiply your throughput by four.

That's because x264's encoding speed scales almost linearly with the number of CPU cores, at least up to something like ~16 cores.

Of course the above only applies if x264 itself is your performance bottleneck! If your Avisynth script is the bottleneck, then it highly depends on what filters/plugins you are using.

Some Avisynth plug-ins are nicely multi-threaded, while others are not multi-threaded at all. Also note that the Avisynth core itself is not multi-threaded at all.

(There is an "MT" branch of Avisynth available, but it is known to be an unstable mess. Feel free to experiment with Avisynth-MT tough!)


As for CUDA: x264 currently does not use/need/support CUDA or any other form of GPGPU (such as OpenCL or DirectCompute) at all. Thus a faster graphics card won't speed up x264 encoding at all!

If you have followed the discussion on this forum, then you should know that the hype around "GPU accelerated" video encoders is nothing but a marketing hoax.

Company xyz will tell you "our GPU-based encoder is 10x faster than x264", but they won't tell you that they compared the "crap quality" setting of their encoder to x264 running with "very slow" settings.

If GPUs really were as suitable for video encoding as the GPU vendors try to make you believe, then somebody would already have developed a GPU-based encoder that can keep up with x264 in a fair comparison.

We are still waiting to see this happen! This all doesn't mean that you can't use a GPU-based decoder (e.g. DGDecodeNV) to feed x264. But you really don't need a "high end" graphics card for that...
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 11th July 2012 at 22:57.
LoRd_MuldeR is offline   Reply With Quote
Old 12th July 2012, 00:52   #843  |  Link
mini-moose
Registered User
 
Join Date: Oct 2007
Posts: 385
Quote:
Originally Posted by LoRd_MuldeR View Post
There is no need to use "--level", because x264 will automatically set the proper H.264 Level, based on the properties of your video and based on your encoding settings.
Well, if you encode 1920x1080 with --profile high --preset slower, ref will be set to 8, which exceeds the maximum ref count the H.264 spec allows for that resolution. If you use --level 4.1 it will be set to 4, as it needs to be. That's mainly what I'm talking about. Try encoding a short clip with and without --level 4.1, check it with MediaInfo and see.
Blu-rays of course use those H.264 specs - High@L4.1 and ref 4.

Here are the results from a quick test I just did (both on the same source of course):

1920x1080 using --preset slow --profile high --level 4.1 :
mediainfo : High@L4.1, ref=4

1920x1080 using --preset slow --profile high :
mediainfo : High@L5.0, ref=5

Quote:
Originally Posted by LoRd_MuldeR View Post
The identical parameters should be used for both passes of a 2-Pass encode!
unless "--slow-firstpass" is added explicitly.
The first pass doesn't use the same params as the 2nd. As you said, it reduces certain params to speed things up, by using profile main.
Personally, I sometimes change certain settings to be lower on the first pass (and of course much higher on the 2nd pass) in ways that may differ from --slow-firstpass, which is why I would have liked to be able to make my own modifications for the first pass as well.

Quote:
Originally Posted by LoRd_MuldeR View Post
Also you don't need to set "-o" at all. The Simple x264 Launcher will set that option for you and thus it won't allow you to enter it for a second time.
That's not what I meant. The way it is now, the first pass uses -o as well, which means it writes a video file to the HDD on the first pass and then overwrites it on the 2nd. IMO there is no need for that on the 1st pass.
So the first pass could be set to use -o NUL.

From the log file (I modified the paths):

"x264_8bit_x64.exe" --bitrate 5212 --pass 1 --stats test.stats --preset slow --profile high --output test.mkv --frames 401 --demuxer y4m --stdin y4m -

Quote:
Originally Posted by LoRd_MuldeR View Post
The difference, of course, is not only the revision number
Fair enough, I wanted to be clear on that.
I don't think, however, that the various tweaks/changes/fixes made in each revision really affect the main settings available in the launcher. The presets/profiles/psy tunes are still the same as far as the cmd goes (unless there is some major change, which happens once in a long time). They may have internal changes, but that shouldn't affect the cmds themselves.
If I'm wrong with my assumptions I apologize in advance. I'm perfectly fine with getting new versions and just wanted to clarify that subject.

Anyway, I'm happy I found out about your launcher; it seems to be very useful.

Last edited by mini-moose; 12th July 2012 at 01:50.
mini-moose is offline   Reply With Quote
Old 12th July 2012, 01:37   #844  |  Link
VideoFanatic
Registered User
 
Join Date: Sep 2011
Posts: 241
Quote:
Originally Posted by LoRd_MuldeR View Post
Switching from a dual-core to an octa-core processor of the same generation (same microarchitecture) will roughly multiply your throughput by four.

As for CUDA: x264 currently does not use/need/support CUDA or any other form of GPGPU (such as OpenCL or DirectCompute) at all. Thus a faster graphics card won't speed up x264 encoding at all!

This all doesn't mean that you can't use a GPU-based decoder (e.g. DGDecodeNV) to feed x264. But you really don't need a "high end" graphics card for that...

http://www.videohelp.com/tools/DGAVCDec. That page says: "The DGDecNV version (costs money) is similar to the DGAVCDec program but it uses your Nvidia card to decode video much faster using CUDA video decoding."

So are you still saying that there's no need to buy a CUDA supported card because it won't speed up x264 encoding? If CUDA is useless then do I only need a basic £30 graphics card?

Any idea why the "DGVC1DecNV" link doesn't work? Where do I go to get that program?
VideoFanatic is offline   Reply With Quote
Old 12th July 2012, 08:42   #845  |  Link
mini-moose
Registered User
 
Join Date: Oct 2007
Posts: 385
Quote:
Originally Posted by holygamer View Post
So are you still saying that there's no need to buy a CUDA supported card because it won't speed up x264 encoding? If CUDA is useless then do I only need a basic £30 graphics card?

Any idea why the "DGVC1DecNV" link doesn't work? Where do I go to get that program?
I'm not an expert, but from what I've experienced and read, using DGAVCNV will make decoding your video faster (serve the frames faster to your encoder), depending on which PureVideo engine your CUDA card has. For it to drastically affect encoding speeds you'll need a kick-ass CPU too, as in the end the bottleneck is how fast your CPU can handle encoding with demanding x264 settings. You do not need a high-end card to get faster decoding; what you need is one with the latest VP engine. The VP engine doesn't change between models: if you buy a $50 card with VP5 or a $500 card with VP5, it will do the same job as it's the same VP on both.
http://neuron2.net/dgdecnv/dgdecnv.html is the software's webpage.
mini-moose is offline   Reply With Quote
Old 12th July 2012, 12:46   #846  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by holygamer View Post
http://www.videohelp.com/tools/DGAVCDec. That page says: "The DGDecNV version (costs money) is similar to the DGAVCDec program but it uses your Nvidia card to decode video much faster using CUDA video decoding."
DGAVCDec is a "dead" project. It has a lot of problems that are never going to be fixed, as the developer dropped that project. Also it only handled AVC/H.264.

DGDecNV supports H.264/AVC, VC-1 and MPEG-2 via "hardware" decoding. And it's under active development. It does not necessarily decode "faster" (a decent CPU can actually decode faster than the graphics card's built-in video decoder unit), but it does keep the CPU free for other work...

Quote:
Originally Posted by holygamer View Post
So are you still saying that there's no need to buy a CUDA supported card because it won't speed up x264 encoding? If CUDA is useless then do I only need a basic £30 graphics card?
For DGDecNV you need a supported NVidia card. There is a list of supported cards on the website. You do not need a "high end" model though. For video decoding the actual GPU is not used! Instead a dedicated decoder unit (NVidia calls it "PureVideo") integrated on the graphics card is used. The video decoder unit is pretty much identical between "high end" and "low end" cards. A cheap one will do the job just as well...

And, as explained before, x264 itself doesn't use the graphics card. It doesn't even notice what graphics card you have. You could be running x264 on a server machine without any graphics card ^^
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 12th July 2012 at 12:52.
LoRd_MuldeR is offline   Reply With Quote
Old 12th July 2012, 14:58   #847  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by mini-moose View Post
Well, if you encode 1920x1080 with --profile high --preset slower, ref will be set to 8, which exceeds the maximum ref count the H.264 spec allows for that resolution. If you use --level 4.1 it will be set to 4, as it needs to be. That's mainly what I'm talking about. Try encoding a short clip with and without --level 4.1, check it with MediaInfo and see.
Each H.264 Level defines a "Decoded Picture Buffer" (DPB) size. It is the DPB size together with the resolution of the video that defines the maximum number of references. Now if you feed x264 with 1080p video and set 8 references (either explicitly or implicitly via a preset), then x264 will of course pick the Level that can support 8 references at 1080p. The result will perfectly comply with the H.264 specification! It may not come out as Level 4.1 though, because x264 cannot magically know that you are expecting Level 4.1. It selects the Level that is suitable for your input video and for your settings! If you need Level 4.1 for whatever reason, then you may enforce this by setting "--level 4.1". But generally this won't be needed...
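
To put rough numbers on that (a worked example using the DPB limits from the H.264 spec's level table; 1080p is coded as 1920x1088, i.e. 8160 macroblocks):

Code:
1080p frame size      = (1920/16) * (1088/16) = 120 * 68 = 8160 macroblocks
Level 4.1: MaxDpbMbs  =  32768  ->  floor( 32768 / 8160) =  4 reference frames max
Level 5.0: MaxDpbMbs  = 110400  ->  floor(110400 / 8160) = 13 reference frames max
So anything above 4 references at 1080p pushes x264 up to Level 5.0, which is exactly what the MediaInfo results posted earlier in the thread show.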

Quote:
Originally Posted by mini-moose View Post
The first pass doesn't use the same params as the 2nd. As you said, it reduces certain params to speed things up, by using profile main.
Personally, I sometimes change certain settings to be lower on the first pass (and of course much higher on the 2nd pass) in ways that may differ from --slow-firstpass, which is why I would have liked to be able to make my own modifications for the first pass as well.
What I mean is that x264 is designed to use the exact same command-line parameters for both passes. It will automatically lower certain settings in the first pass (those that are "safe" to be lowered in a first pass), even when you use identical parameters (except for "--pass" of course) in both passes. Thus there is no need to enter different parameters for pass 1/pass 2. Using different parameters can even be dangerous, because there are many options that are NOT supposed to be changed between the passes...
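
For illustration, using the launcher log quoted earlier (bitrate, frame count and file names are just that example's values), a 2-pass run where nothing but "--pass" differs:

Code:
"x264_8bit_x64.exe" --bitrate 5212 --pass 1 --stats test.stats --preset slow --profile high --output test.mkv --frames 401 --demuxer y4m --stdin y4m -
"x264_8bit_x64.exe" --bitrate 5212 --pass 2 --stats test.stats --preset slow --profile high --output test.mkv --frames 401 --demuxer y4m --stdin y4m -
In the first pass x264 quietly lowers the expensive analysis settings on its own (unless "--slow-firstpass" is given); the second pass then reads the .stats file and encodes with the full settings.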

Quote:
Originally Posted by mini-moose View Post
That's not what I meant. The way it is now, the first pass uses -o as well, which means it writes a video file to the HDD on the first pass and then overwrites it on the 2nd. IMO there is no need for that on the 1st pass.
So the first pass could be set to use -o NUL.
It could be done, yes. But it doesn't have to be. The way it currently is, we can inspect the temporary file while the first pass is still running...

Quote:
Originally Posted by mini-moose View Post
Fair enough, I wanted to be clear on that.
I don't think, however, that the various tweaks/changes/fixes made in each revision really affect the main settings available in the launcher. The presets/profiles/psy tunes are still the same as far as the cmd goes (unless there is some major change, which happens once in a long time). They may have internal changes, but that shouldn't affect the cmds themselves.
If I'm wrong with my assumptions I apologize in advance. I'm perfectly fine with getting new versions and just wanted to clarify that subject.
Most of the time the command-line interface of x264 doesn't change, so we can update the x264 binary without modifying the GUI. Nevertheless, changes to the command-line interface may happen and may break compatibility. Also keep in mind that the GUI needs to read x264's console output. If the console output changes, it may (and probably will) also break compatibility.
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 12th July 2012 at 15:01.
LoRd_MuldeR is offline   Reply With Quote
Old 12th July 2012, 15:17   #848  |  Link
mini-moose
Registered User
 
Join Date: Oct 2007
Posts: 385
Quote:
Originally Posted by LoRd_MuldeR View Post
It may not come out as Level 4.1 though, because x264 cannot magically know that you are expecting Level 4.1. It selects the Level that is suitable for your input video and for your settings!
I'm well aware that the level is auto-chosen in relation to the presets etc. The thing I'm saying is that using anything over Level 4.1 might cause compatibility issues with certain hardware. Sure, you can set it to whatever you want, but if you spent several hours making your video and your hardware chokes on it, you end up wasting your time.
http://mewiki.project357.com/wiki/X264_Settings#level :
"Level 4.1 is often considered the highest level you can rely on desktop consumer hardware to support. Blu-ray Discs only support level 4.1, and many non-mobile devices like the Xbox 360 specify level 4.1 as the highest they officially support"
mini-moose is offline   Reply With Quote
Old 12th July 2012, 15:38   #849  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Well, not everybody is encoding for consumer hardware decoders. A lot of people encode to watch the result on a PC, and there Levels are pretty much irrelevant (for software decoders). If you need to hit a specific Level, well, go ahead and add "--level x.y" to the custom parameters. It will even be included in your template! But I don't think we should enforce a specific Level by default. Especially because enforcing, e.g., Level 4.1 can be far too high - depending on the source. I can't remember what x264 does when the requested Level is higher than needed. But we either get a video with a "wrong" (too high) Level flag or we get a video whose Level will be different from what the user has selected. And you know, users will be confused if the selected Level is 4.1, but the video comes out at Level 3.0, for example...
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 12th July 2012 at 16:05.
LoRd_MuldeR is offline   Reply With Quote
Old 12th July 2012, 15:54   #850  |  Link
mini-moose
Registered User
 
Join Date: Oct 2007
Posts: 385
Quote:
Originally Posted by LoRd_MuldeR View Post
Well, not everybody is encoding for consumer hardware decoders.
Fair enough. I didn't suggest forcing a certain level, just adding a level selector next to tune/preset/profile.
Every one of the main settings can cause some confusion.
All such confusions can be resolved after some trial and error and spending some time reading the manuals (or doing a bit of research).
mini-moose is offline   Reply With Quote
Old 12th July 2012, 18:25   #851  |  Link
WasF
Registered User
 
Join Date: Apr 2009
Posts: 57
Why does the GUI enforce the use of a preset? Where is the "none" entry in the drop-down menu?
WasF is offline   Reply With Quote
Old 12th July 2012, 18:35   #852  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
The "Medium" preset is identical to x264's default.

Actually x264 first sets its defaults, then applies the selected preset and finally applies all other user options, if any.

(The "Medium" preset is implemented as changing nothing)
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊
LoRd_MuldeR is offline   Reply With Quote
Old 12th July 2012, 18:54   #853  |  Link
WasF
Registered User
 
Join Date: Apr 2009
Posts: 57
That was fast! Thanks.
It would help (psychologically, that is) if you added a "none" entry that secretly applies "Medium".

Now this launcher is looking close to perfection!
One concern though: can it really pause the encode? I remember this being a hot topic a few years ago, and artifacts were feared and all...
I guess you just suspend the process. To your knowledge, does x264 tolerate this with no effect on the output?
(really, this feature should be implemented in x264 itself - like through a signal)
WasF is offline   Reply With Quote
Old 12th July 2012, 19:07   #854  |  Link
VideoFanatic
Registered User
 
Join Date: Sep 2011
Posts: 241
Quote:
Originally Posted by mini-moose View Post
I'm not an expert, but from what I've experienced and read, using DGAVCNV will make decoding your video faster (serve the frames faster to your encoder), depending on which PureVideo engine your CUDA card has. You do not need a high-end card to get faster decoding; what you need is one with the latest VP engine. The VP engine doesn't change between models: if you buy a $50 card with VP5 or a $500 card with VP5, it will do the same job as it's the same VP on both.
http://neuron2.net/dgdecnv/dgdecnv.html is the software's webpage.
Thanks. I know about that webpage already, but I was just wondering why the link on the following page to DGVC1DecNV doesn't work. Does that program no longer exist?

http://www.videohelp.com/tools/DGAVCDec

Also, on the http://neuron2.net/dgdecnv/dgdecnv.html page I don't see anything that mentions the VP engine. Could you please show me a page from that software's documentation that mentions VP engines?

Any idea what the cheapest VP5 supported graphics card is?
VideoFanatic is offline   Reply With Quote
Old 12th July 2012, 19:17   #855  |  Link
mini-moose
Registered User
 
Join Date: Oct 2007
Posts: 385
Quote:
Originally Posted by holygamer View Post
I was just wondering why the link on the following page to DGVC1DecNV doesn't work. Does that program no longer exist?
The DGVC1DecNV link, I guess, doesn't work because that was an old version of what is now DGDecNV, for VC-1 streams only. It was merged into DGDecNV, which also supports H.264 and MPEG-2 in addition to VC-1 (all in one).
It costs a little, but you get tons of licenses. I believe DGVC1DecNV was payware as well.
Quote:
Originally Posted by holygamer View Post
I don't see anything that mentions VP engine. Could you please show me a page from that software documentation that mentions VP engines?
Any idea what the cheapest VP5 supported graphics card is?
I will need to search for it, but you can have a look through the DGAVCNV support forum. It should be discussed there. For example:
http://neuron2.net/board/viewtopic.php?f=8&t=199
http://neuron2.net/board/viewtopic.php?f=8&t=167

When you download the software package it comes with some HTML documentation. There might be more info there.

See also what the OP wrote earlier:
http://forum.doom9.org/showpost.php?...&postcount=846

I think the GT 520 is one of the cheaper ones.
You can look here at a list of GPUs, find the ones that say VP5 and compare prices:
http://en.wikipedia.org/wiki/Nvidia_..._.28HD.29_GPUs

edit:

Look at the DGDecodeNVManual.html and DGIndexNVManual.html that come with the application package:
"The "NV" in the name "DGDecodeNV" indicates that this version of the program is designed for use with the VP2 (or greater) GPU decoder on some Nvidia video cards...."

Last edited by mini-moose; 12th July 2012 at 19:47.
mini-moose is offline   Reply With Quote
Old 12th July 2012, 19:56   #856  |  Link
VideoFanatic
Registered User
 
Join Date: Sep 2011
Posts: 241
OK now I'm confused. I saw a post that said "DGDecNV has allowed us to decode video using CUDA, as well as the option we have always had to decode using VP". How do I know which to use? CUDA or VP?
VideoFanatic is offline   Reply With Quote
Old 12th July 2012, 20:56   #857  |  Link
mini-moose
Registered User
 
Join Date: Oct 2007
Posts: 385
Quote:
Originally Posted by holygamer View Post
OK now I'm confused.
You seem to be overthinking it. I don't know a whole lot about how something works, just what it does for my needs.

DGAVCNV decodes streams using the PureVideo (VP) engine on NVidia CUDA cards. You load your video into this tool and save it to a project file, reference the project file in your .avs script, and encode. Simple.
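
For example (a minimal sketch - the plugin path and index file name are placeholders, and it assumes the DGSource() function as described in the DGDecodeNV manual):

Code:
# index the source with DGIndexNV first; it writes a project/index file, e.g. "movie.dgi"
LoadPlugin("C:\DGDecNV\DGDecodeNV.dll")
DGSource("H:\New\movie.dgi")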

Last edited by mini-moose; 12th July 2012 at 21:08.
mini-moose is offline   Reply With Quote
Old 12th July 2012, 21:36   #858  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by holygamer View Post
OK now I'm confused. I saw a post that said "DGDecNV has allowed us to decode video using CUDA, as well as the option we have always had to decode using VP". How do I know which to use? CUDA or VP?
CUDA is an interface to access the graphics processor (GPU) for running "arbitrary" calculations. OpenCL and DirectCompute basically do the same. However, nobody would implement a video decoder on the GPU using CUDA (or OpenCL or DirectCompute). That's because all graphics cards from the last ~8 years have dedicated video decoding units! NVidia calls it "PureVideo HD" while AMD/ATI calls it "Unified Video Decoder" (UVD). Once again, several interfaces to access the graphics card's hardware video decoder unit exist, such as DirectX Video Acceleration (DXVA). Another one, created by NVidia and supported exclusively by NVidia cards, is CUVID (the "CUDA Video API"). Don't confuse CUVID with CUDA itself! Applications using CUDA are using the actual GPU and they have to upload their own program code (called a "kernel") to the GPU. At the same time, applications using CUVID are simply using the hardwired video decoder unit, which is not programmable at all.

As for DGDecNV: It is based on CUVID and thus the minimum hardware requirement is an NVidia card with VP2 (i.e. the second generation of "PureVideo HD") or newer. Look at the table to see which cards are suitable...
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 12th July 2012 at 21:51.
LoRd_MuldeR is offline   Reply With Quote
Old 12th July 2012, 22:15   #859  |  Link
LoRd_MuldeR
Software Developer
 
LoRd_MuldeR's Avatar
 
Join Date: Jun 2005
Location: Last House on Slunk Street
Posts: 13,248
Quote:
Originally Posted by WasF View Post
That was fast! Thanks.
It would help (psychologically, that is) if you added a "none" entry that secretly applies "Medium".
I prefer to stick with the same names that x264 itself uses.

Quote:
Originally Posted by WasF View Post
One concern though: can it really pause the encode ? I remember this as a hot topic few years ago, and artifacts were feared and all..
I guess you just suspend the process. To your knowledge, does x264 tolerate this with no effect on the output ?
(really, this feature should be implemented in x264 itself - like through a signal)
Interestingly, the Win32 API doesn't provide a function to suspend a complete process. Instead we have to enumerate all threads of the process and suspend them one-by-one. Later we enumerate the process' threads again and resume them one-by-one. In case of x264, however, it is sufficient to just suspend/resume the "main" thread.
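
For reference, a minimal sketch of that enumerate-and-suspend approach via the Toolhelp snapshot API (error handling omitted; this illustrates the technique, it is not the launcher's actual code):

Code:
#include <windows.h>
#include <tlhelp32.h>

// Suspend or resume every thread that belongs to the given process (e.g. the x264 child process)
void SetProcessSuspended(const DWORD pid, const bool suspend)
{
    const HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return;

    THREADENTRY32 entry;
    entry.dwSize = sizeof(THREADENTRY32);
    if (Thread32First(snap, &entry))
    {
        do
        {
            if (entry.th32OwnerProcessID != pid) continue; // skip threads of other processes
            const HANDLE thread = OpenThread(THREAD_SUSPEND_RESUME, FALSE, entry.th32ThreadID);
            if (thread)
            {
                suspend ? SuspendThread(thread) : ResumeThread(thread);
                CloseHandle(thread);
            }
        }
        while (Thread32Next(snap, &entry));
    }
    CloseHandle(snap);
}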

And yes, this should be perfectly safe. Actually, the operating system's scheduler suspends and resumes the processes/threads running on your computer all the time. This is called "preemptive multi-tasking" and it is how multi-tasking is implemented on all modern operating systems - including Microsoft Windows. Please see here for details!

In other words: Operating systems, except for special "real time" operating systems, never give any timing guarantees. Thus a program requiring that it never gets suspended (for longer than x seconds) is bound to fail in the first place...

(BTW: The only "side effect" of suspending the x264 process I ever noticed is that the "fps" and "eta" progress indicators will get confused)
__________________
Go to https://standforukraine.com/ to find legitimate Ukrainian Charities 🇺🇦✊

Last edited by LoRd_MuldeR; 12th July 2012 at 23:18.
LoRd_MuldeR is offline   Reply With Quote
Old 13th July 2012, 04:17   #860  |  Link
WasF
Registered User
 
Join Date: Apr 2009
Posts: 57
Quote:
Originally Posted by LoRd_MuldeR View Post
I prefer to stick with the same names that x264 itself uses.
That was... not so fast this time. But hey: worth the wait, though!
Don't worry, my suggestion was of a "psychological" nature, to remove the false impression that the GUI was imposing something it shouldn't impose...
Funny to see how "open sourcers" keep having allergies to this kind of stuff! (dirty codename: usability).

Quote:
Originally Posted by LoRd_MuldeR View Post
The only "side effect".. is that the "fps" and "eta" progress indicators will get confused
That rings a bell! I seem to remember this being called "slow restart", when it's actually the indicators that get crooked. I guess the fps and eta values are computed using absolute time, not the "actual running time", so it's normal. This is why this should be supported by x264 itself: the process can't know it got suspended, but it could receive this info through a signal and subtract the "pause time" from the absolute time.
(I remember now: the "hot topic" I mentioned was actually about complete stop/resume, not just pause, which, I guess, is not even on the x264 people's radar, given that they don't even support pause/resume. It would be a huge convenience though, but, hey, it's open source, so, remember, dirty codename again!)

Alright, Simple x264 Launcher::Pesky_users++ (how is this good news for you?!)
WasF is offline   Reply With Quote