Old 19th September 2019, 15:40   #1  |  Link
asxzwang
Registered User
 
Join Date: Sep 2013
Posts: 3
Spp2Pgs (ASS to BD SUP pgs)

Spp2Pgs 0.9.3.7
Blu-ray subtitle (SUP) files can be exported without installing AviSynth.
Input: ASS files, AVS scripts rendered in BGRA, other playable BGRA source files, or a raw BGRA stream from stdin.
Output: a SUP file, or stdout.
Details of the parameters are shown in the program's help output.
At runtime, the partition holding the %temp% path needs more than 4 GB of free space; you can change %temp% before running to move where temporary files are stored while it works.
The devel directory contains the .NET library and other development-related content; the legacy directory contains xy-VSSppf, based on version 3.0.0.306.

Published under the GPLv3 license; the license text is included in the source code, see the next post for details.
It can handle typical effects; full-screen dynamic effects can usually still be generated, but frames are dropped for compatibility.
The generated SUP files strictly comply with the Blu-ray standard, so large-area + dynamic subtitles will drop frames; how severe this is depends mainly on the picture area, so test against your own material.
By default, a blank "leading black" composition is added at the start, so there is no need to manually specify a start offset when the subtitles are muxed (e.g. in Scenarist BD).
In addition, multiple ASS files can be combined into a single continuous SUP file through the development library; see the sample code.

Usage: hold Shift and right-click in the program's directory, select the PowerShell option, and enter a command such as:

.\spp2pgs -i "1.ass" -s -1080 -r 23 "output.sup"

-i <filename>
Input subtitle file name. Use '-' to read raw input from stdin.
-s <format>
Frame format:
480i = 1/240/-480
576i = 2/288/-576
480p = 3/480
1080i = 4/540/-1080
720p = 5/720
1080p = 6/1080
576p = 7/576
-r <rate>
Frame rate:
23.976 = 1/23
24.0 = 2/24
25.0 = 3/25
29.97 = 4/29
30.0 = 5/30
50.0 = 6/50
59.94 = 7/59
60.0 = 8/60
-b <frameid>
Beginning index of valid frames.
-e <count>
-n <count>
Count of frames.
-z[0|1]
Whether to set up an extra epoch at the very beginning.
0 = No;
1 = Yes;
(blank)= Yes. (default)
-x[0|1]
Whether to use extremely strict mode.
0 = No;
1 = Yes;
(blank)= Yes. (default)
-v[level]
Output level.
(blank)= All, verbose;
63 = Errors only.
127 = Errors and warnings.
144 = Normal outputs. (default)
<filename>
Output file name. Use '-' for a stdout output.
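For example, a second invocation (my own illustration, not from the original post, using the table values above for a 720p source at 29.97 fps):

.\spp2pgs -i "2.ass" -s 720 -r 29 "output2.sup"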
Method 2:
@Echo OFF & CD/D "%~dp0"
:Enc1
IF "%~1"=="" GOTO :EOF


echo.
echo.Start generating sup!
"Spp2Pgs.exe" -i "%~1" -s 1080 -r 23 "%~1.sup"


echo.
echo.Start the PES conversion!
"Pgs2Pes.exe" -i "%~1.sup" "%~1.pes"


SHIFT /1
GOTO :Enc1


Save this as a .bat file and drag ASS files onto it to convert them. Or drag a SUP file directly onto Pgs2Pes.exe to generate the .PES file.

At present it works very well, better than most ASS-to-PGS conversion software, but it is no longer being updated. It still needs a GUI and 4K support. Could some kind-hearted person publish one?

Download address: https://pan.baidu.com/s/12OLnfdyL-OSkeGJPk65UxQ (extraction code: 3BEN)
https://22.gigafile.nu/0928-cbcef616...00f93ad071d5da

GitHub zip mirror: https://pan.baidu.com/s/1lh12UdbFCdW5CnXOdlgNUw (extraction code: 1a7y)

Last edited by asxzwang; 21st September 2019 at 11:50.
Old 19th September 2019, 15:42   #2  |  Link
asxzwang
Registered User
 
Join Date: Sep 2013
Posts: 3
[Project Profile]



Current project maintenance address (git):

https://github.com/subelf/Spp2Pgs



I have been intermittently working through the documents related to the Blu-ray standard since 2013.

Information and hands-on experience are scarce, and there are few open-source Blu-ray-related tools available.

For the time being... at least it is open source, so even if I drop out there is still some hope of someone carrying it on...



At present the idea is that the small files and the directory structure are handled in C#, while compiling the stream data of the large files is written in C++, wrapped, and then invoked from the CLR. On the one hand this solves the efficiency problem; on the other hand it avoids some licensing issues.
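As an illustration of that wrapping idea, here is a minimal C++/CLI sketch (names are my own, not the actual Spp2Pgs API): the heavy per-frame work lives in native C++, and a thin ref class exposes it to C# through the CLR.

// Native side (standard C++): does the heavy per-frame stream work.
class NativeStreamEncoder {
public:
    void EncodeFrame(const unsigned char* bgra, int width, int height) {
        (void)bgra; (void)width; (void)height;  // real encoding work goes here
    }
};

// Managed side (C++/CLI, compiled with /clr): a thin wrapper callable from C#.
using namespace System;

public ref class StreamEncoder {
public:
    StreamEncoder() : native(new NativeStreamEncoder()) {}
    ~StreamEncoder() { this->!StreamEncoder(); }            // IDisposable::Dispose
    !StreamEncoder() { delete native; native = nullptr; }   // finalizer
    void EncodeFrame(IntPtr bgra, int width, int height) {
        native->EncodeFrame(
            static_cast<const unsigned char*>(bgra.ToPointer()), width, height);
    }
private:
    NativeStreamEncoder* native;
};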

At present the small-file side can handle complete MPLS files and about half of the CLPI files. The directory structure part has been roughly written; disc rewriting for mgvc and 3D discs remains to be covered.

It is parked here for now (hg):

https://bitbucket.org/subelf/bluraysharp



Now the work on processing the large files begins.




Overview of Creating the External (Plug-in) Subtitle Structure



This project is mainly about creating the external (plug-in) subtitle structure. The ideal situation, of course, is that an ASS file turns directly into a plug-in subtitle track.

Here is a brief description of the traditional process for building the plug-in structure:

1. Prepare the ASS: adjust its timing, then render it into an ARGB video clip with AVS.

2. Use the tool avs2bdnxml to analyse the clip frame by frame, determine the duration of each frame/picture, crop away the blank areas and, after suitable processing, output XML + PNG; it can also output a SUP file directly, but that has many problems.

3. The generated XML+PNG, as BDN XML, is imported into Scenarist BD and encoded into PES+MUI files. The PES stores the encoded presentation graphics (PG); the exact meaning of the MUI is unclear, but for the PES it stores the timestamp information of each data unit. Afterwards the PES+MUI can be muxed by Scenarist BD into an M2TS file; the mux timestamps need to reference the corresponding video clip.

3 (alternative). A SUP file can also be muxed directly with the audio/video by tsMuxeR into an M2TS file, but this method may have compatibility problems.

4. After that, the MPLS file is modified so that the subtitle M2TS is associated with the video M2TS, forming the plug-in structure.



In this whole process, the most desirable thing is full automation of steps 2-4.

At present tsMuxeR is usable and modifying MPLS is not a problem, while avs2bdnxml is more troublesome; automating Scenarist BD is basically impossible.

Therefore, the principles for automating plug-in subtitles are as follows:

1. The final goal is to have a graphical interface.

2. For the core code, efficiency matters, but readability is the most important, especially in reflecting the file structures. Open source.

3. First make it exist at all, then work on stability and compatibility, and finally replace closed-source tools as far as possible.



So the chosen first step is to generate usable PGS (PES) files from ASS.

After that comes the problem of generating M2TS from the PGS/SUP.
[Contents]



Basic Top-level Concept of PG

PG-related coding concepts

Definition: File structure

Definition: timestamp formula

Selling Shana Chrysanthemum

Other details




[Currently completed]



1. Parsing and encoding from the raw Epoch through to the PES file, including the special work needed along the way: filling blank segments, timestamp generation, etc.

2. The core logic of avs2bdnxml has been rewritten as C++ code; the original SSE2 assembly is retained with slight modifications.

3. First generation: keyframe archiving; frame cropping + window-area coverage statistics; palette generation and mapping algorithm, with modifications; Demo v0.1, completing video stream parsing, frame capture, and simulated PGS encoding.

4. Second generation: encoding of PGS files; encoding of graphics objects; timestamp statistics and calculation; identification and cropping of window areas; PGS output; pipeline input.

5. Third generation: area-optimisation algorithm for graphics objects; C++/CLI wrapper version; removal of the AVS dependency.




[To be completed]

6. Fourth generation: avoid generating temporary files; optimise the use of object buffers; optionally generate a pes+mui version.

7. Fifth generation: a future so far off that I cannot see it with my current ability.

8. Sixth generation: complete the documentation, promote it in this thread, add pictures, add effects; verify against the standard's definition of 3D discs.



[Coding]



A CompositionObject is compressed using RLE. A brief description of the algorithm (a code sketch follows the list):

1. Each line of the image is encoded separately, and each encoded line ends with two zero bytes. The encoded results of the rows are stored one after another.

2. Each line is encoded byte by byte. For each byte value x, count the number of consecutive occurrences (the run length) Nx.

3. The RLE encoding is written using the following strategies:

a. If x is 0, first write one byte 0, then write the encoded Nx, using encoding rule f(Nx, 0).

b. If x is non-zero and Nx is not greater than 2, copy the 1 or 2 bytes of x directly.

c. If x is non-zero and Nx is greater than 2, first write one byte 0, then write the encoded Nx using encoding rule f(Nx, 1), and then write x.

4. Encoding rule f(Nx, S):

a. If Nx is small and fits within 6 bits, output one byte: the low 6 bits are Nx, bit 6 is 0, and bit 7 (the highest bit) is S.

b. If Nx exceeds 6 bits, output two bytes: the low 14 bits are Nx, bit 14 is 1, and bit 15 (the highest bit) is S. The byte order is big-endian.
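Here is a minimal C++ sketch of the line encoder described above (my own illustration of the rules, not code from the Spp2Pgs sources; runs longer than 16383 pixels would need splitting, which is not handled here):

#include <cstddef>
#include <cstdint>
#include <vector>

// f(Nx, S): one byte if Nx fits in 6 bits, otherwise two bytes (big-endian)
// with bit 14 set as the "long run" flag and S in the highest bit.
static void writeRunLength(std::vector<uint8_t>& out, uint16_t nx, bool s)
{
    if (nx <= 0x3F) {
        out.push_back(static_cast<uint8_t>((s ? 0x80 : 0x00) | nx));
    } else {
        out.push_back(static_cast<uint8_t>((s ? 0x80 : 0x00) | 0x40 | (nx >> 8)));
        out.push_back(static_cast<uint8_t>(nx & 0xFF));
    }
}

// Encodes one row of 8-bit palette indices and terminates it with 00 00.
std::vector<uint8_t> encodeRleLine(const uint8_t* line, std::size_t width)
{
    std::vector<uint8_t> out;
    for (std::size_t i = 0; i < width; ) {
        const uint8_t x = line[i];
        std::size_t nx = 1;
        while (i + nx < width && line[i + nx] == x) ++nx;

        if (x == 0) {                       // rule 3a: run of colour 0
            out.push_back(0x00);
            writeRunLength(out, static_cast<uint16_t>(nx), false);
        } else if (nx <= 2) {               // rule 3b: copy 1 or 2 literal bytes
            for (std::size_t k = 0; k < nx; ++k) out.push_back(x);
        } else {                            // rule 3c: run of a non-zero colour
            out.push_back(0x00);
            writeRunLength(out, static_cast<uint16_t>(nx), true);
            out.push_back(x);
        }
        i += nx;
    }
    out.push_back(0x00);                    // end-of-line marker: 00 00
    out.push_back(0x00);
    return out;
}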
The method of calculating the DTS and PTS values in PES and PGS (pseudocode): a backup is at https://bitbucket.org/snippets/subel...-GxSegTime.cpp

[About Time Intervals]



1. Version reuse within an Epoch



At any given moment, there can be at most eight DisplaySets whose DTS has passed but whose PTS has not yet arrived. In other words, the playback device only needs to store up to eight palettes.

So PDS ID numbers from 0 to 7 are enough.

Similarly, ODS limits the maximum ID to 63.

The version number, on the other hand, can be updated continuously, up to a maximum of 255.



Therefore, note also that PDS or ODS instances sharing the same ID but with different versions must not overlap in time. Otherwise the data of the earlier DisplaySet, decoded but not yet displayed, would be overwritten by the content of the later DisplaySet.



2. Minimum interval between DisplaySets within an Epoch



They must be spaced at least one frame apart. Unless a DisplaySet only updates the palette, the minimum interval is "the decoding time of that Epoch's windows"; see the window decoding-time formula in post #9 of this thread.



3. Minimum time interval between Epochs



The DTS of the later Epoch must come after the PTS of the earlier one. In other words, the later Epoch can only start loading its resources after the earlier one has been displayed and has released all of its resource space.
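To make constraints 2 and 3 concrete, here is a small C++ sketch (my own reading of the rules with illustrative names, assuming 90 kHz PTS/DTS ticks; rule 2 is interpreted here as: palette-only updates need one frame of spacing, everything else needs at least the Epoch's window decoding time):

#include <algorithm>
#include <cstdint>

struct DsTime { int64_t dts; int64_t pts; };   // timestamps in 90 kHz ticks

// Rule 2: minimum spacing between consecutive DisplaySets within an Epoch.
bool displaySetGapOk(const DsTime& prev, const DsTime& next,
                     int64_t ticksPerFrame, int64_t windowDecodeTicks,
                     bool paletteOnlyUpdate)
{
    const int64_t minGap = paletteOnlyUpdate
                         ? ticksPerFrame
                         : std::max(ticksPerFrame, windowDecodeTicks);
    return next.pts - prev.pts >= minGap;
}

// Rule 3: a new Epoch may only start decoding after the previous one has
// finished presenting and released its resources.
bool epochGapOk(const DsTime& lastOfPrevEpoch, const DsTime& firstOfNextEpoch)
{
    return firstOfNextEpoch.dts > lastOfPrevEpoch.pts;
}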




[On Instance Numbering within Segments]



1. Composition (PCS). The composition number is global: starting from 0, it increases by 1 for each composition, including blank (erasing) compositions.

2. Window (WDS). Within the same Epoch, the WDS of every DisplaySet should be identical, including that of a blank composition. In principle, the size and position of the windows determine the area that a DisplaySet must erase on the buffer plane before it is displayed. Since each Epoch has at most two windows, the window number can only be 0 or 1.

3. Palette (PDS). Each DisplaySet can use only one palette, and at most eight DisplaySets may be decoded into the cache at once, so the total space left for palettes is only eight, numbered 0-7. Further palettes are accommodated by updating the version number and content of an existing ID.

Each colour in a palette also has a number, 0-255. Normally 0 is fixed as transparent, which also gives the smallest RLE encoding. As for assigning 0 to some other colour, such as the most frequently used one, its compatibility has not been verified.

4. Object (ODS). Object numbers are likewise scoped to the Epoch, with a maximum of 64 numbers from 0 to 63. As with palettes, additional objects are accommodated by updating the version number and content. However, the size of an Object cannot grow when its version is updated. As for substituting a version with a picture of different dimensions but the same or smaller area, its compatibility has not been verified either.
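For reference, the numbering limits described above collected as constants (illustrative names; the values are the ones stated in this post):

enum PgLimits {
    kMaxWindowsPerEpoch = 2,    // window IDs 0..1
    kMaxPaletteBuffers  = 8,    // PDS IDs 0..7
    kMaxObjectsPerEpoch = 64,   // ODS IDs 0..63
    kMaxVersionNumber   = 255,  // versions of a given ID
    kPaletteEntryCount  = 256   // colour indices 0..255; 0 normally transparent
};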

[The original design idea of HDMV-PGS]



There are two core ideas behind the official HDMV-PGS standard:

1. To present subtitle content, two rectangular pictures with transparency information on the screen are enough.

2. Complex memory management is computed and planned by a powerful encoder at encoding time, while the decoder in the player only needs to follow the instructions to finish drawing the picture.



Together, these two points are what make PGS such a pain to work with.



Then, starting with the structure of PGS, we will briefly describe the decoding and rendering process to help understand its encoding process.



First is the structure:

A PGS stream actually consists of a series of segments. Each segment carries two timestamps, a decoding time (DTS) and a presentation time (PTS); when the corresponding time arrives, the controller processes it accordingly.

PGS defines five kinds of segment:

A presentation composition segment (PCS)

A window definition segment (WDS)

A palette definition segment (PDS)

An object definition segment (ODS)

And an END segment
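For orientation, this is how each segment is framed in a .sup stream, as commonly documented for SUP/PGS (a sketch for reference, not taken from the Spp2Pgs sources; all multi-byte fields are big-endian, so real code reads them byte by byte rather than memory-mapping this struct):

#include <cstdint>

struct SupSegmentHeader {
    uint16_t magic;   // 0x5047, the ASCII letters "PG"
    uint32_t pts;     // presentation timestamp, 90 kHz units
    uint32_t dts;     // decoding timestamp, 90 kHz units
    uint8_t  type;    // 0x16 PCS, 0x17 WDS, 0x14 PDS, 0x15 ODS, 0x80 END
    uint16_t size;    // number of payload bytes that follow
};
// Inside an M2TS the same segments are carried in PES packets instead of
// this bare framing.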



The purpose of these segments is to carry out one drawing or erasing command on the PG buffer plane. Such a command is called a composition, and each composition generally consists of one set of segments. The PG buffer plane is an image buffer with transparency information: it holds the current subtitle image, and its size is the same as the video picture. Similarly, the menu plane is the IG buffer plane. Finally, the PG and IG planes are superimposed in turn onto the video for display to the user.

In this whole process, reading, decompressing and transferring the segment data is the decoding stage; drawing the transferred data from the buffers onto the PG plane is the presentation stage; the overlaying of the IG, PG and video planes need not concern us.



Now for the decoding process of each kind of segment:

First comes the PCS, which is the outline of a composition: it indicates that data needs to be decoded and presented, and the PG decoder enters its working state. The PCS tells the decoder the total number of objects, the numbers of the buffers the objects live in, which region of each object to draw, the number of the target window and the position to draw at, and the number of the palette to use. In addition, if the PCS carries the EpochStart flag, the player must perform the corresponding initialisation: memory allocation, erasing the image buffer plane, initialising the object and palette buffers, and so on. The PCS also contains the size and frame rate of the corresponding video picture, which is provided to the decoder; generally speaking, this information should be exactly the same in every PCS of a PG stream.
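The commonly documented layout of the PCS payload matches that description; a hedged sketch for reference (big-endian and unaligned in the stream; not quoted from the Spp2Pgs sources):

#include <cstdint>

struct PcsPayload {
    uint16_t videoWidth;        // size of the associated video picture
    uint16_t videoHeight;
    uint8_t  frameRateCode;
    uint16_t compositionNumber; // global counter, +1 per composition
    uint8_t  compositionState;  // 0x00 Normal, 0x40 AcquisitionPoint, 0x80 EpochStart
    uint8_t  paletteUpdateFlag; // set when only the palette changes
    uint8_t  paletteId;         // which palette buffer (0..7) to use
    uint8_t  objectCount;       // number of composition objects that follow
};

struct PcsCompositionObject {
    uint16_t objectId;          // which object buffer (0..63) to draw from
    uint8_t  windowId;          // target window (0 or 1)
    uint8_t  croppedFlag;       // non-zero when crop coordinates follow
    uint16_t x, y;              // where to place the object on the PG plane
    // optional crop x, y, width, height (2 bytes each) when croppedFlag is set
};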

Then the WDS describes how many windows are used, and the position and size of each window.

Then the PDS is used to update the palette cache. It records the palette's number and version number together with some palette data; if the version number is newer than the one currently cached under that number, the decoder sends the palette data straight to the palette cache to update it.

Then the ODS is used to update the object cache. It records the object, i.e. the picture content, which likewise has a number and a version number, plus the RLE-encoded data. Similarly, after comparing versions the decoder decodes it into an indexed-colour picture and sends it to the corresponding object buffer. There may be two ODS.
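The PDS and ODS payloads, again as commonly documented (a hedged sketch, not from the Spp2Pgs sources; the same big-endian caveat applies):

#include <cstdint>

struct PdsPayload {
    uint8_t paletteId;       // 0..7
    uint8_t paletteVersion;  // bumped when the same ID is reused
    // followed by up to 256 entries: index, Y, Cr, Cb, alpha (1 byte each)
};

struct OdsPayload {
    uint16_t objectId;       // 0..63
    uint8_t  objectVersion;
    uint8_t  sequenceFlag;   // marks the first/last fragment of a split object
    uint8_t  dataLength[3];  // 24-bit length of the object data
    uint16_t width, height;  // picture size; must not grow in later versions
    // followed by the RLE-compressed index data (see the Coding section above)
};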

Finally, the END segment indicates that the decoding work for this composition is finished, and the PG decoder leaves its working state.

Then comes the next PCS. If it does not carry the EpochStart flag, all cached resources continue to be used; otherwise all resources are re-initialised and the previous numbers and version numbers become invalid.

Back and forth like this, the drawing and erasing commands are prepared one by one. The above is the simple decoding flow of the segments within one composition.



Of these, there are at most 64 object buffers; the size of each buffer is fixed by the area of the first version of the picture stored in it, and later versions must have the same size; the total size of all object buffers must not exceed 4 MB. There are at most 8 palette buffers, whose size is fixed and need not be considered. Both kinds of version number go up to at most 255.

There are at most two windows and at most two objects per composition; the windows must not overlap, but two pictures may be drawn into the same window area as long as they do not overlap each other.




Next, we will talk about the process of presentation.

The so-called presentation means drawing the decoded resources onto the PG buffer plane; the whole process runs in step with decoding.

The PTS timestamp of the PCS determines when the composition command is shown on screen, i.e. the actual display time. When the PTS arrives, the PG buffer plane is drawn, and the system automatically superimposes that plane onto the video.



First, when a PCS is marked EpochStart, resources must be reallocated and the PG plane must be completely erased.

Then, if the objects in a PCS are drawn into only one window and another window is left empty, the unused window area must first be erased on the PG plane. Likewise, if the objects to be drawn in a window are old pictures that do not need updating, that window area is drawn first.

Next, every object in a window that does need updating has its corresponding ODS decoded and is drawn into the target window.

In this way the PG buffer plane is drawn. Once these draw and erase operations have been applied to the PG plane, the picture stays unchanged until the next command.

Erasing the image is very simple: for a PCS whose PTS is the target time, provide only a WDS describing the target window area, no graphic objects to draw, and finish with an END segment.

Last edited by asxzwang; 19th September 2019 at 15:44.