29th May 2016, 06:02 | #3801
littlepox
Registered User
Join Date: Nov 2012
Posts: 218
Quote:
Our test cases are carefully taken from commercial Blu-ray Disc sources, so none of them are heavily compressed. I'm asking my teammate to prepare a 720p one (properly downscaled) that is short enough that posting it shouldn't violate the forum policies. Meanwhile, an x264 benchmark will also be available. We shall get back to you in a few days.

The three massive tuning tests we've done before are very expensive to carry out (>500 encodes each), so we only do that for stable builds; the latest one was with v1.9, and the next one shall be with v2.0. Indeed, we don't have a good picture of the most recent version, but we shall be the first to cheer for you guys if a major breakthrough shows up next time.

Last but not least, don't expect many users to tolerate --placebo; it's not pragmatic for daily use. The slowest preset we are able to accept would be veryslow, and when speed is a concern we'd rather fall back to x264.

Last edited by littlepox; 29th May 2016 at 06:04.
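As a minimal illustration of the preset trade-off described above (the clip name and CRF value are placeholders, not littlepox's actual test settings):

Code:
# veryslow: about the slowest preset most users will accept
x265 --input clip.y4m --crf 18 --preset veryslow -o out_veryslow.hevc
# placebo: marginally better compression at a much higher time cost
x265 --input clip.y4m --crf 18 --preset placebo -o out_placebo.hevc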
29th May 2016, 10:07 | #3802
LigH
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
Just as a suggestion for a commonly available, uncompressed, Creative Commons licensed source: "Tears of Steel" should provide a good variety of scenes, from stills to heavy action. The cartoonish credits may pose a challenge of their own for an encoder possibly optimized for real-world footage.
29th May 2016, 17:55 | #3803
Guest
Posts: n/a
No apologies necessary - I'm not offended. We want your feedback, good or bad. We want to address any concerns head-on. x265 has all of the coding tools of x264, and many, many more. In other words, there is never any reason why x265 should be inferior to x264. In the worst case, x265 could encode the video with the exact same frame types, block structures, motion vectors and modes. But thanks to the HEVC standard, x265 has many other options to choose from, including larger block sizes. If there are any areas where x265 is not equal to or better than x264, we need to understand and fix them. If this is a myth, we want to bust it.
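To make the larger-block-size point concrete: x265 exposes the CTU size on the command line, while x264's macroblocks are fixed at 16x16. A minimal sketch (the input file name is a placeholder):

Code:
# HEVC coding tree units may be up to 64x64 (the x265 default)
x265 --input source.y4m --ctu 64 --preset slow -o ctu64.hevc
# capping the CTU size at 16x16 roughly mimics x264's block-size ceiling
x265 --input source.y4m --ctu 16 --preset slow -o ctu16.hevc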
29th May 2016, 17:56 | #3804
Registered User
Join Date: Jul 2015
Posts: 708
Quote:
Mainconcept: 6000 kbps, 1920x1080, bframes=3, best, pass=1
x265: 6000 kbps, 1920x1080, bframes=5, veryslow, pass=2

What can I expect?
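A sketch of the x265 half of that comparison as a two-pass command line, assuming a Y4M source (file and log names are placeholders):

Code:
# pass 1: collect statistics; the bitstream output can be discarded
x265 --input source.y4m --bitrate 6000 --bframes 5 --preset veryslow --pass 1 --stats 2pass.log -o /dev/null
# pass 2: final encode using the collected statistics
x265 --input source.y4m --bitrate 6000 --bframes 5 --preset veryslow --pass 2 --stats 2pass.log -o out.hevc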
29th May 2016, 17:58 | #3805
pingfr
Registered User
Join Date: May 2015
Posts: 185
Quote:
By then you should be able to compare the original vs. the 1.9+1 encode vs. the 2.0 encodes; I believe this is where we'll see most of the improvements between 1.9 and 2.0. From there you should also be able to compare your best tuning results (from 2.0, I believe) against any x264 encodes of the same segment. My two cents.

Last edited by pingfr; 29th May 2016 at 18:01.
29th May 2016, 18:03 | #3807
Guest
Posts: n/a
Quote:
29th May 2016, 19:35 | #3808
1.16 MileHi
Join Date: Feb 2008
Location: Denver, CO
Posts: 26
Quote:
I also tested Nvidia's hardware HEVC encoder (NVENC) and it produces even worse color ghosting. I also tested similar footage shot in 1080p rather than 4K, and there appears to be no issue; it seems to be 4K-related. Cheers!
29th May 2016, 20:54 | #3809
LigH
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
Quote:
29th May 2016, 21:12 | #3810
Registered User
Join Date: Oct 2001
Location: Germany
Posts: 7,277
@x265_Project: are there any plans to further improve x265 performance on multi-socket systems when encoding SD content? (For SD, CPU usage and speed became better with https://patches.videolan.org/patch/13436/, but they still don't seem good...)
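For what it's worth, thread-pool placement can already be steered per NUMA node with --pools; a sketch only, with illustrative values (the right string depends on the actual topology):

Code:
# SD example: confine x265 to the first socket, leave the second idle
x265 --input sd_clip.y4m --pools "+,-" --preset slow -o out.hevc
# or assign an explicit number of threads to each socket
x265 --input sd_clip.y4m --pools "8,8" --preset slow -o out.hevc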
29th May 2016, 23:48 | #3811
Registered User
Join Date: Jan 2007
Posts: 729
Quote:
30th May 2016, 01:38 | #3812
Guest
Posts: n/a
Quote:
30th May 2016, 03:06 | #3813
pingfr
Registered User
Join Date: May 2015
Posts: 185
Quote:
Are any public pricing models available? Cheers.
30th May 2016, 07:33 | #3814
LigH
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
Quote:
I already suspected that the more complex an algorithm gets, the more restricted its parallelizability gets as well (because many intermediate results have to be collected into a final result). You confirmed this assumption here.

P.S.: Matheusz just mentioned on the mailing list that there are some quirks regarding DLLs, compilers, and speed: not all compilers can handle the optimization of multilib builds correctly, so in general a build with separate encoder library DLLs per bit depth should be the fastest solution. On top of that, GCC 6.1 seems to be faster for Win32 8-bit code, but GCC 5.3 for 10-bit and 12-bit code. I won't be able to juggle several compiler versions easily, so I will keep building with GCC 5.3. To be able to place both 32-bit and 64-bit builds in the same directory, the scripts will soon produce a new naming pattern, libx265-32_main[10|12].dll, for the separate Win32 DLLs.

Last edited by LigH; 30th May 2016 at 08:09.
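For context, with such a multilib build the bit depth is selected at run time; a minimal sketch, assuming the main DLL finds libx265_main10 next to it (file names are placeholders):

Code:
# the 8-bit main library forwards to the 10-bit DLL when 10-bit output is requested
x265 --input source.y4m --output-depth 10 --preset slow -o out10.hevc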
30th May 2016, 16:51 | #3815
Registered User
Join Date: Jan 2010
Posts: 709
Quote:
I.e., in a program with functions a() and b(), where a() takes 80% of the time and b() 20%, optimizing b() to run twice as fast makes the program run in 80% + (20%/2) = 90% of the original time, not 50%. In this case x265's unparallelizable parts dominate so heavily that any further optimization of the parallelized parts, or any additional parallelization, would give a negligible speedup with SD content.
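That is Amdahl's law. As a worked equation, with the optimizable fraction p = 0.2 sped up by a factor s = 2:

\[
S = \frac{1}{(1 - p) + p/s} = \frac{1}{0.8 + 0.2/2} = \frac{1}{0.9} \approx 1.11,
\]

i.e. the program still needs about 90% of the original time even though b() doubled in speed.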
__________________
powered by Google Translator
31st May 2016, 06:30 | #3816
Guest
Posts: n/a
Quote:
There are many routines involved in HEVC encoding that are serial in nature. For example, to find the true "cost" (number of bits) of a candidate encoding mode for a block, we have to encode and decode the predicted block, calculate the residual error (the difference between the source block and the predicted block), calculate the discrete cosine transform of the residual error, quantize the transformed residual error, and compress the encoded result with CABAC entropy coding. All in series. This can't be further parallelized. CABAC encoding itself is an inherently serial process.

Thanks to Wavefront Parallel Processing, we can encode multiple rows of blocks in parallel, and thanks to x265's frame parallelism, we encode multiple frames in parallel. But the number of rows per frame is limited by the frame size: x265 can operate on more rows per frame with larger frames.
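To put rough numbers on the row count (a back-of-the-envelope sketch assuming x265's default 64x64 CTU size, not figures from the x265 team):

\[
\text{rows} = \left\lceil \frac{H}{64} \right\rceil : \quad 480\text{p} \to 8, \qquad 1080\text{p} \to 17, \qquad 2160\text{p} \to 34,
\]

so an SD frame offers WPP fewer than a quarter of the rows of a UHD frame, before even accounting for the dependency lag between neighbouring rows.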
2nd June 2016, 12:50 | #3817
LigH
German doom9/Gleitz SuMo
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 6,784
x265 1.9+200-6098ba3e0cf16b11 (oh, revision hashes are longer now): some thread pool changes, git version ID and other fixes
4th June 2016, 11:21 | #3818
Registered User
Join Date: Feb 2015
Posts: 326
x264 has a 7-character revision hash; x265 used to have a 12-character one, and now it is 16 characters long. On the page https://bitbucket.org/multicoreware/x265/commits/all the revision hashes are 7 characters long (and that is enough).
I think we should switch to 7-character revision hashes in x265.
4th June 2016, 13:46 | #3819
Registered User
Join Date: Jan 2004
Posts: 69
I don't know why I thought {node|short} gave a 16-character hash, but yeah, it should be 12 characters instead.
It shouldn't be 7 characters, because that could be mistaken for the Git short form, and git and hg commits have no relation whatsoever.
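For reference, how the two tools abbreviate hashes (run inside any clone):

Code:
# Mercurial: the short filter keeps the first 12 hex digits of the 40-digit node
hg log -r tip --template "{node|short}\n"
# Git: rev-parse --short abbreviates to 7 digits by default, extending only if ambiguous
git rev-parse --short HEAD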