Old 13th February 2019, 18:30   #6721  |  Link
Boulder
Pig on the wing
 
Boulder's Avatar
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 4,513
You can safely use --rskip; it will speed things up a lot and you won't notice any difference. I'd also use something like --subme 3.
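For example, a rough sketch of how those two switches could be appended to a typical VapourSynth-to-x265 pipeline (script name, CRF and output file are placeholders, not taken from this thread):
Code:
vspipe --y4m script.vpy - | x265 --y4m - --preset slow --crf 18 --rskip --subme 3 --output out.hevc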
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is offline   Reply With Quote
Old 13th February 2019, 22:13   #6722  |  Link
redbtn
Registered User
 
redbtn's Avatar
 
Join Date: Jan 2019
Posts: 10
Boulder, thank you!

Quote:
Originally Posted by Boulder View Post
This is probably a silly question, but here goes anyway: if I use --hdr-opt, do I need to feed the encoder with 10-bit data or is 16-bit data as good if the source is a standard UHD with HDR? I always process things in 16-bit domain and let the encoder dither down to 10 bits.
Quote:
Originally Posted by FranceBB View Post
I would let x265.exe do the dithering, 'cause other dithering options like Floyd-Steinberg error diffusion may have a nicer look, but they could increase the bitrate required by x265. The built-in dithering filter in x265 is supposed to dither everything down to the target bit depth without introducing banding. Blocks and macroblocks dithered by x265 are more likely to be recognised during motion compensation than ones dithered by a third-party method, so compression should be better.

In a nutshell, let x265 do the dithering and always pipe to it the highest bit depth you have, unless you like a specific dithering method and you have enough bitrate.
I'm encoding 10-bit HDR, so if I want to do the same and pipe 16-bit to x265 with the --dither flag, do I need LWLibavSource format="YUV420P10" or format="YUV420P16"?
And should I remove this line? clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, range_s="limited")

Thank you!

My VS script:
Quote:
import vapoursynth as vs  # imports required by the script
core = vs.core
clip = core.lsmas.LWLibavSource(source="video.mkv", format="YUV420P10", cache=1)
clip = core.resize.Point(clip, matrix_in_s="2020ncl", range_s="limited")
clip = core.std.AssumeFPS(clip, fpsnum=24000, fpsden=1001)
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
clip = core.std.CropRel(clip=clip, left=0, right=0, top=276, bottom=276)
clip = core.fmtc.resample(clip=clip, kernel="spline64", w=1920, h=804, interlaced=False, interlacedd=False)
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, range_s="limited")
clip.set_output()
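For reference, a rough, untested sketch of the 16-bit-in, x265-dithers-to-10-bit pipeline FranceBB describes (script and file names are placeholders; the only change to the script above would be outputting 16-bit in the final resize):
Code:
# with the script's last resize changed to format=vs.YUV420P16
vspipe --y4m script.vpy - | x265 --y4m - --output-depth 10 --dither --crf 18 --output out.hevc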

Last edited by redbtn; 13th February 2019 at 22:17.
redbtn is offline   Reply With Quote
Old 14th February 2019, 01:33   #6723  |  Link
hevc_enocder
Registered User
 
Join Date: Jan 2019
Posts: 1
advice

Code:
--crf 18 --profile main10 --level-idc 5.1 --output-depth 10 --rd 4 --ctu 32 --amp --aq-mode 2 --vbv-bufsize 160000 --vbv-maxrate 160000 --ipratio 1.3
--pbratio 1.2 --no-cutree --subme 7 --me star --merange 24 --max-merge 3 --bframes 12 --rc-lookahead 60 --lookahead-slices 4 --ref 6 --min-keyint 24 --keyint 240 --deblock -3:-3
--no-sao --no-strong-intra-smoothing --high-tier
Hi guys, I would like to know how to improve my settings. I am quite satisfied, but there is a problem with red colours, and I'd also like to know if there is something to change for encoding normal new movies.

Someone told me that there is no reason to use subme 7. I am used to using subme 10 in x264, so I thought it could be useful here as well.

Thank you for your advice and explanation.

Last edited by hevc_enocder; 14th February 2019 at 02:24. Reason: spelling
hevc_enocder is offline   Reply With Quote
Old 14th February 2019, 06:20   #6724  |  Link
Selur
Registered User
 
Selur's Avatar
 
Join Date: Oct 2001
Location: Germany
Posts: 5,800
Quote:
there is problem with red color
You probably already used the search in this thread and tried negative chroma QP offsets (--cbqpoffs and --crqpoffs), right?
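For example (a commonly tried starting point, not a value taken from this thread):
Code:
--cbqpoffs -2 --crqpoffs -2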
__________________
Hybrid here in the forum, homepage
Selur is offline   Reply With Quote
Old 14th February 2019, 09:09   #6725  |  Link
RainyDog
Registered User
 
Join Date: May 2009
Posts: 166
Quote:
Originally Posted by hevc_enocder View Post
Code:
--crf 18 --profile main10 --level-idc 5.1 --output-depth 10 --rd 4 --ctu 32 --amp --aq-mode 2 --vbv-bufsize 160000 --vbv-maxrate 160000 --ipratio 1.3
--pbratio 1.2 --no-cutree --subme 7 --me star --merange 24 --max-merge 3 --bframes 12 --rc-lookahead 60 --lookahead-slices 4 --ref 6 --min-keyint 24 --keyint 240 --deblock -3:-3
--no-sao --no-strong-intra-smoothing --high-tier
Hi guys, I would like to know how to improve my settings. I am quite satisfied, but there is a problem with red colours, and I'd also like to know if there is something to change for encoding normal new movies.

Someone told me that there is no reason to use subme 7. I am used to using subme 10 in x264, so I thought it could be useful here as well.

Thank you for your advice and explanation.
In x264, RDO is tied to subme. So higher levels of subme also incorporate higher levels of RDO.

Subme and RD(O) level are separate settings in x265 and I personally don't think it's worth using anything higher than subme 5.

Whenever I've tested, it's never actually been worth it to use anything higher than subme 3, which is the lowest subme level to include chroma residual cost.

I'd definitely choose --rd 5 (or 6, since they're the same) plus --rd-refine over subme 7 every time. Though that will roughly halve your encoding speed, so I rarely bother with that either.
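To illustrate the trade-off being described, only these switches would differ between the two approaches (a sketch; the rest of the command stays the same):
Code:
# higher subme on its own:
--subme 7
# versus spending the cycles on RDO instead:
--rd 5 --rd-refine --subme 3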
RainyDog is offline   Reply With Quote
Old 14th February 2019, 10:35   #6726  |  Link
excellentswordfight
Lost my old account :(
 
Join Date: Jul 2017
Posts: 59
Quote:
Originally Posted by redbtn View Post
I tried these settings, only without (--rd 6 --rd-refine --opt-cu-delta-qp) and with (--rd 3 --no-rskip). For example, for a file with a 57 Mb/s bitrate and a size of 45 GB, the output bitrate is 26 Mb/s and the size 21 GB, at 1.4 fps encoding speed. I think with the new settings it will be around 0.9-1 fps on my 6-core i5 8400.
I view on a 4K UHD TV, but it is connected to the computer as a duplicate of a 1080p monitor, because Windows 10 sadly cannot scale many applications properly in 4K. Therefore, I decided that there is no point in downscaling the image via MadVR in real time, although I have a GTX 1080 which does it without problems. It also saves about 50 percent of the HDD space.
As I said, I'm sure that you have your reasons, but I don't find it that appealing to spend 48h per title to get a file size that would probably look indistinguishable at half the bitrate.

I just did a quick test on Tears of Steel with those settings and compared it to an encode at native res with the slow preset (+ --deblock -1:-1 --no-sao --no-strong-intra-smoothing)... the 1080p one ended up at 40 Mbps (downscaled with spline), and the 2160p one at 25; the one at native res was sharper and was about 3x faster to encode...

Of course, if you find the time/compression/quality to be a good trade-off for you, go ahead with that. But when you even have a 4K TV, I would just go with a 4K workflow, because you can get away with 4K re-encodes at that bitrate target. Not gonna tell you what to do, but I would at least play with different preset levels and CRF values and look at the actual video to see if you can find a better "sweet spot". Because it sounds awful to me to spend 2 days on an encode, just to get a bloated file that still has a loss of detail/sharpness (because of the downscaling).
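A quick sweep like that could be scripted roughly as follows, here varying CRF at a fixed preset (script name, values and output names are placeholders):
Code:
for crf in 18 20 22; do
  vspipe --y4m script.vpy - | x265 --y4m - --preset slow --crf $crf --output test_crf$crf.hevc
done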

Last edited by excellentswordfight; 14th February 2019 at 11:18.
excellentswordfight is offline   Reply With Quote
Old 14th February 2019, 14:19   #6727  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 5,791
x265 3.0 Au+4-dcbec33bfb0f (MSYS2, MinGW32 + GCC 7.4.0 / MinGW64 + GCC 8.2.1)

Gold!
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 18th February 2019, 18:14   #6728  |  Link
poller
Registered User
 
Join Date: Sep 2018
Posts: 10
Is there a way to compile the x64 version without AVX-512?

It only increases the size of the library and I will not use AVX-512 anyway. For now I'm sticking to v2.7.
Thanks

Last edited by poller; 18th February 2019 at 18:19.
poller is offline   Reply With Quote
Old 18th February 2019, 18:42   #6729  |  Link
MeteorRain
紺野木綿季
 
Join Date: Dec 2003
Location: NJ; OR; Shanghai
Posts: 491
Quote:
Originally Posted by poller View Post
Is there a way to compile the x64 version without AVX-512?
Not sure how the size of the library matters, but anyway.

Open \source\common\x86\asm-primitives.cpp, find X265_CPU_AVX512 and note all the AVX-512 asm routines.

Remove them from the *.asm files, remove the X265_CPU_AVX512 code block, and then try to compile.
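A rough sketch of how one might locate those spots before editing (assuming a checkout of the x265 source tree):
Code:
cd x265/source
grep -n "X265_CPU_AVX512" common/x86/asm-primitives.cpp   # the code block(s) to remove
grep -il "avx512" common/x86/*.asm                        # asm files that reference AVX-512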
MeteorRain is offline   Reply With Quote
Old 18th February 2019, 21:25   #6730  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 2,794
Quote:
Originally Posted by excellentswordfight View Post
I just did a quick test on Tears of Steel with those settings and compared it to an encode at native res with the slow preset (+ --deblock -1:-1 --no-sao --no-strong-intra-smoothing)... the 1080p one ended up at 40 Mbps (downscaled with spline), and the 2160p one at 25; the one at native res was sharper and was about 3x faster to encode...

Of course, if you find the time/compression/quality to be a good trade-off for you, go ahead with that. But when you even have a 4K TV, I would just go with a 4K workflow, because you can get away with 4K re-encodes at that bitrate target. Not gonna tell you what to do, but I would at least play with different preset levels and CRF values and look at the actual video to see if you can find a better "sweet spot". Because it sounds awful to me to spend 2 days on an encode, just to get a bloated file that still has a loss of detail/sharpness (because of the downscaling).
Yeah, an important thing about 4K encoding is that the pixels are getting near the minimum visible size in typical viewing environments, especially with non-HDR content. So small artifacts that could be visible in 1080p content become MUCH less so at 4K. I've heard a lot of "but 4K will take 4x the bitrate of 1080p", but the frequency distribution gets softer and the latitude for non-perceptible distortion gets greater. Doubling the bitrate from 1080p to 2160p is typically sufficient to get value out of the extra pixels without any regressions.

In fact, a satisfactory H.264 8-bit 1080p bitrate will generally be a satisfactory HEVC 10-bit 2160p bitrate. The "4K will kill the internets!" panic was badly overblown.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book

Last edited by benwaggoner; 18th February 2019 at 21:26. Reason: specified bit depths
benwaggoner is offline   Reply With Quote
Old 18th February 2019, 22:31   #6731  |  Link
poller
Registered User
 
Join Date: Sep 2018
Posts: 10
Quote:
Originally Posted by MeteorRain View Post
Not sure how the size of the library matters, but anyway.

Open \source\common\x86\asm-primitives.cpp, find X265_CPU_AVX512 and note all the avx512 asms.

Remove them from *.asm files, remove the X265_CPU_AVX512 code block, and then try to compile them.
That's how I started, but I thought there might be a simpler way.

But it does work; the size is only slightly bigger than v2.7 now.

Thanks.
poller is offline   Reply With Quote
Old 18th February 2019, 23:00   #6732  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 2,794
Quote:
Originally Posted by poller View Post
But it does work; the size is only slightly bigger than v2.7 now.
I am curious as to why that is worth this effort.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
benwaggoner is offline   Reply With Quote
Old 19th February 2019, 18:32   #6733  |  Link
poller
Registered User
 
Join Date: Sep 2018
Posts: 10
Quote:
Originally Posted by benwaggoner View Post
I am curious as to why that is worth this effort.
Well, it didn't really take long, maybe 30 minutes.
I need ffmpeg for recording from an emulator, and I don't want to see the ffmpeg.dll being 5 times bigger than the actual emu.
So I'm trying to keep it small.


And now... more questions.

First, v3.0 is slower and produces bigger output than previous versions. I did read the changelog, but fail to see what could be the reason (yes, noob here).
/edit: OK, it seems to be aq-mode=2


Second, x86 builds by LigH are about 10% (!) faster than mine. Is there some magical compiler flag? I tried -O2 and -Ofast... no luck. -Ofast was actually the slowest of them.
x64 builds are at the same speed.


Last edited by poller; 19th February 2019 at 18:41.
poller is offline   Reply With Quote
Old 19th February 2019, 18:52   #6734  |  Link
LigH
German doom9/Gleitz SuMo
 
LigH's Avatar
 
Join Date: Oct 2001
Location: Germany, rural Altmark
Posts: 5,791
No magical flags. Just straight out of the media-autobuild suite (more or less; negligible differences in the handling). Do we use the same compiler?
__________________

New German Gleitz board
MediaFire: x264 | x265 | VPx | AOM | Xvid
LigH is offline   Reply With Quote
Old 19th February 2019, 18:56   #6735  |  Link
Boulder
Pig on the wing
 
Boulder's Avatar
 
Join Date: Mar 2002
Location: Hollola, Finland
Posts: 4,513
v3.0 changed some presets, so maybe the change in speed compared to v2.7 comes from there. I don't know if the x265 docs are helpful for checking out what changed (I doubt it).
__________________
And if the band you're in starts playing different tunes
I'll see you on the dark side of the Moon...
Boulder is offline   Reply With Quote
Old 19th February 2019, 20:35   #6736  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 2,794
Quote:
Originally Posted by Boulder View Post
v3.0 changed some presets, so maybe the change in speed compared to v2.7 comes from there. I don't know if the x265 docs are helpful for checking out what changed (I doubt it).
Specifically, slower got a lot slower, and veryslow is even more very slow. Slower was a nice quality/performance sweet spot; I think it would have made more sense to add a superslow preset or something.

Try using slow and see if that helps.
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
benwaggoner is offline   Reply With Quote
Old 19th February 2019, 20:37   #6737  |  Link
poller
Registered User
 
Join Date: Sep 2018
Posts: 10
Quote:
Originally Posted by LigH View Post
No magical flags. Just straight out of the media-autobuild suite (more or less; negligible differences in the handling). Do we use the same compiler?
Strange. I now tested with GCC 7.3 from MSYS2 (it seems this is the one you are using); the executable is a lot bigger and even slightly slower than my other builds (GCC 4.9.3).
I am using the default values of x265.exe for testing.
This is beyond me.


Quote:
Originally Posted by Boulder View Post
v3.0 changed some presets, so maybe the change in speed compared to v2.7 comes from there. I don't know if the x265 docs are helpful for checking out what changed (I doubt it).
As mentioned, if I set --aq-mode=1, the speed of v3.0 is pretty much the same as before, and so is the size of the output.
No idea about the quality; there might be some other changes, as you say.

Last edited by poller; 19th February 2019 at 20:40.
poller is offline   Reply With Quote
Old 19th February 2019, 21:45   #6738  |  Link
benwaggoner
Moderator
 
Join Date: Jan 2006
Location: Portland, OR
Posts: 2,794
Quote:
Originally Posted by poller View Post
Strange. I now tested with GCC 7.3 from MSYS2 (it seems this is the one you are using); the executable is a lot bigger and even slightly slower than my other builds (GCC 4.9.3).
I am using the default values of x265.exe for testing.
This is beyond me.
By "default values" you would be implicitly using --preset medium. No idea if that is what you want to be using for your scenario.

Quote:
As mentioned, if I set --aq-mode=1, the speed of v3.0 is pretty much the same as before, and so is the size of the output.
No idea about the quality; there might be some other changes, as you say.
--aq-mode 2 is the default in 3.0, but was 1 in previous versions. I wouldn't think it would change performance THAT much.
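One way to rule the default change out would be to pin both settings explicitly and run the same clip through both versions (a sketch; binary names, CRF and file names are placeholders):
Code:
x265-2.7 --preset medium --aq-mode 1 --crf 23 --input clip.y4m --output out27.hevc
x265-3.0 --preset medium --aq-mode 1 --crf 23 --input clip.y4m --output out30.hevc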
__________________
Ben Waggoner
Principal Video Specialist, Amazon Prime Video

My Compression Book
benwaggoner is offline   Reply With Quote
Old 19th February 2019, 22:55   #6739  |  Link
poller
Registered User
 
Join Date: Sep 2018
Posts: 10
It seems I started some confusion here.

I tested another video, with about the same results.

Quote:
--aq-mode=1
own builds: 40.5 sec
LigH builds: 35.5 sec
So my builds are even more than 10% slower, no matter what preset I use. So annoying.
GCC 9.0.1, GCC 8.2... always the same bad speed.

Quote:
--aq-mode=2
own builds: 47.5 sec
LigH builds: 40.5 sec
So yes, for MY short low-res test videos, mode 2 is slower.
That might be different for other input files.
poller is offline   Reply With Quote
Old 20th February 2019, 09:45   #6740  |  Link
MeteorRain
紺野木綿季
 
Join Date: Dec 2003
Location: NJ; OR; Shanghai
Posts: 491
Quote:
Originally Posted by poller View Post
Second, x86 builds by LigH are about 10% (!) faster than mine. Is there some magical compiler flag? I tried -O2 and -Ofast... no luck. -Ofast was actually the slowest of them.
x64 builds are at the same speed.

Could be some profiling magic. I'm not sure whether LigH uses any profiling when compiling the binary, though.

I used it once and it made things slower, so I haven't used it since, but things might have changed.
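For reference, profile-guided optimisation with GCC roughly follows this pattern; the sketch below assumes an out-of-tree CMake build of x265, and the exact flags and file names are illustrative, not LigH's actual recipe:
Code:
# 1. build an instrumented binary
cmake ../source -DCMAKE_CXX_FLAGS="-fprofile-generate" -DCMAKE_EXE_LINKER_FLAGS="-fprofile-generate"
make
# 2. run a representative encode to collect profile data
./x265 --preset medium --crf 23 --input sample.y4m --output throwaway.hevc
# 3. rebuild using the collected profiles
cmake ../source -DCMAKE_CXX_FLAGS="-fprofile-use" -DCMAKE_EXE_LINKER_FLAGS="-fprofile-use"
make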
MeteorRain is offline   Reply With Quote